INTELLIGENT CONTROL AND AUTOMATION
Edited by: Jovan Pehcevski
Arcler Press
www.arclerpress.com
Intelligent Control and Automation Jovan Pehcevski
Arcler Press
224 Shoreacres Road
Burlington, ON L7L 2H2
Canada
www.arclerpress.com
Email: [email protected]

e-book Edition 2023
ISBN: 978-1-77469-611-8 (e-book)

This book contains information obtained from highly regarded resources. Reprinted material sources are indicated. Copyright for individual articles remains with the authors as indicated and published under the Creative Commons License. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data; the views articulated in the chapters are those of the individual contributors and not necessarily those of the editors or publishers. The editors and publishers are not responsible for the accuracy of the information in the published chapters or for the consequences of their use. The publisher assumes no responsibility for any damage or grievance to persons or property arising out of the use of any materials, instructions, methods, or ideas in the book. The editors and the publisher have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission has not been obtained. If any copyright holder has not been acknowledged, please write to us so we may rectify the omission.

Notice: Registered trademarks of products or corporate names are used only for explanation and identification, without intent to infringe.

© 2023 Arcler Press
ISBN: 978-1-77469-525-8 (Hardcover)

Arcler Press publishes a wide variety of books and eBooks. For more information about Arcler Press and its products, visit our website at www.arclerpress.com.
DECLARATION
Some content or chapters in this book are open access, copyright-free published research works, published under the Creative Commons License and indicated with their citations. We are thankful to the publishers and authors of this content and these chapters, as without them this book would not have been possible.
ABOUT THE EDITOR
Jovan currently works as a presales Technology Consultant at Dell Technologies. He is a results-oriented technology leader with demonstrated subject matter expertise in planning, architecting, and managing ICT solutions that reflect business objectives and achieve operational excellence. Jovan has broad and deep technical knowledge in the fields of data center and big data technologies, combined with a consultative selling approach and exceptional client-facing presentation skills. Before joining Dell Technologies in 2017, Jovan spent nearly a decade as a researcher, university professor, and IT business consultant. In these capacities, he served as a trusted advisor to a multitude of customers in the financial services, health care, retail, and academic sectors. He holds a PhD in Computer Science from RMIT University in Australia and worked as a postdoctoral visiting scientist at the renowned INRIA research institute in France. He is a proud father of two, an aspiring tennis player, and an avid Science Fiction/Fantasy book reader.
TABLE OF CONTENTS
List of Contributors........................................................................................xv
List of Abbreviations..................................................................................... xxi
Preface.................................................................................................... xxv

Section 1: Intelligent Control Methods

Chapter 1
Automatic Intelligent Control System Based on Intelligent Control Algorithm...................................................................................... 3 Abstract...................................................................................................... 3 Introduction................................................................................................ 4 Related Work.............................................................................................. 5 Vector Control Based on Intelligent Control Algorithm................................ 7 Automatic Intelligent Control System Based on Intelligent Control Algorithm........................................................................... 19 Conclusion............................................................................................... 23 References................................................................................................ 24
Chapter 2
Intelligent Multi-Agent Based Information Management Methods to Direct Complex Industrial Systems........................................ 27 Abstract.................................................................................................... 27 Introduction.............................................................................................. 28 Current Industrial Systems Architecture..................................................... 29 IAs and MASs in Industrial Systems........................................................... 33 MASs: Approaches and Algorithms............................................................ 38 Hybrid Systems: A Case Study................................................................... 45 Conclusion............................................................................................... 46 References................................................................................................ 47
Chapter 3
Design Method of Intelligent Ropeway Type Line Changing Robot Based on Lifting Force Control and Synovial Film Controller........ 51 Abstract.................................................................................................... 51 Introduction.............................................................................................. 52 Related Work............................................................................................ 52 The Proposed Method............................................................................... 54 Experiment and Analysis........................................................................... 67 Conclusion............................................................................................... 70 Acknowledgments.................................................................................... 70 References................................................................................................ 71
Chapter 4
A Summary of PID Control Algorithms Based on AI-Enabled Embedded Systems................................................................................... 75 Abstract.................................................................................................... 75 Introduction.............................................................................................. 76 Basic Principles of PID.............................................................................. 77 Classification of PID Control..................................................................... 78 Comparisons of Key Algorithms for PID Control........................................ 87 Conclusion............................................................................................... 89 References................................................................................................ 90

Section 2: Fuzzy Control Techniques
Chapter 5
An Adaptive Fuzzy Sliding Mode Control Scheme for Robotic Systems... 95 Abstract.................................................................................................... 95 Introduction.............................................................................................. 96 Sliding Mode Control (SMC) Design......................................................... 98 Decoupled Robot Tracking Control Design............................................. 101 Simulation Results.................................................................................. 107 Conclusions............................................................................................ 116 Appendix A............................................................................................ 117 References.............................................................................................. 118
Chapter 6
Adaptive Backstepping Fuzzy Control Based on Type-2 Fuzzy System... 121 Abstract.................................................................................................. 121 Introduction............................................................................................ 122 Problem Formulation.............................................................................. 124
Interval Type-2 Fuzzy Logic Systems....................................................... 125 Adaptive Backstepping Fuzzy Controller Design Using IT2FLS................ 129 Simulation.............................................................................................. 136 Conclusion............................................................................................. 148 Acknowledgment.................................................................................... 148 References.............................................................................................. 149 Chapter 7
Fuzzy PID Control for Respiratory Systems............................................ 155 Abstract.................................................................................................. 155 Introduction............................................................................................ 156 Mathematical Model of Respiratory Systems........................................... 158 Controller Design................................................................................... 161 System’s Simulations and Results............................................................ 164 Conclusion............................................................................................. 166 Acknowledgments.................................................................................. 167 References.............................................................................................. 168
Chapter 8
A Parameter Varying PD Control for Fuzzy Servo Mechanism............... 171 Abstract.................................................................................................. 171 Introduction............................................................................................ 172 Motivation for Fuzzy Control.................................................................. 174 Problem Description and Methodology.................................................. 177 Modelling and Implementation............................................................... 183 Simulation Results.................................................................................. 185 Conclusion............................................................................................. 189 References.............................................................................................. 190

Section 3: Neural Networks-based Control
Chapter 9
Neural Network Supervision Control Strategy for Inverted Pendulum Tracking Control................................................................... 195 Abstract.................................................................................................. 195 Introduction............................................................................................ 196 Control Objective................................................................................... 199 Neural Network Supervision Control Design.......................................... 201 Simulation Studies.................................................................................. 208
Conclusions............................................................................................ 218 Acknowledgments.................................................................................. 219 References.............................................................................................. 220 Chapter 10 Neural PID Control Strategy for Networked Process Control................ 225 Abstract.................................................................................................. 225 Introduction............................................................................................ 226 Stochastic Characteristics of NCS in Operation Processes....................... 229 Design of an NCS Controller................................................................... 231 Case Studies: Air-Pressure Tank............................................................... 240 Conclusions............................................................................................ 247 Acknowledgments.................................................................................. 248 References.............................................................................................. 249 Chapter 11 Control Loop Sensor Calibration Using Neural Networks for Robotic Control...................................................................................... 253 Abstract.................................................................................................. 253 Introduction............................................................................................ 254 Recalibration Approach.......................................................................... 256 Control Example I................................................................................... 260 Control Example II.................................................................................. 264 Conclusion............................................................................................. 269 References.............................................................................................. 270 Chapter 12 Feedforward Nonlinear Control Using Neural Gas Network................. 273 Abstract.................................................................................................. 273 Introduction............................................................................................ 274 Neural Gas Approach............................................................................. 276 Plant Model............................................................................................ 279 Local Linear Control By State Feedback.................................................. 280 Experimental Testing............................................................................... 283 Conclusions............................................................................................ 293 References.............................................................................................. 294
Section 4: Intelligent Control Applications

Chapter 13 Ship Steering Control Based on Quantum Neural Network................... 299 Abstract.................................................................................................. 299 Introduction............................................................................................ 300 IASV Mathematical Model...................................................................... 302 QNN Steering Controller Design............................................................ 303 Simulations and Analysis........................................................................ 309 Conclusions............................................................................................ 315 Acknowledgments.................................................................................. 315 References.............................................................................................. 316 Chapter 14 Human-Simulating Intelligent PID Control............................................ 321 Abstract.................................................................................................. 321 Introduction............................................................................................ 322 Human-Simulating PID Control Law....................................................... 324 Tuning Controller.................................................................................... 329 Example and Simulation......................................................................... 331 Conclusions............................................................................................ 333 References.............................................................................................. 334 Chapter 15 Intelligent Situational Control of Small Turbojet Engines....................... 335 Abstract.................................................................................................. 335 Introduction............................................................................................ 336 Situational Control Methodology Framework Design.............................. 339 A Small Turbojet Engine: An Experimental Object................................... 344 Situational Control System for A Small Turbojet Engine........................... 347 Experimental Evaluation of the Designed Control System........................ 355 Conclusions............................................................................................ 360 Nomenclature......................................................................................... 361 Acknowledgments.................................................................................. 362 References.............................................................................................. 363 Chapter 16 An Antilock-Braking Systems (ABS) Control: A Technical Review.......... 369 Abstract.................................................................................................. 369 Introduction............................................................................................ 370 Principles of Antilock-Brake System........................................................ 371
ABS Control............................................................................................ 374 Conclusions............................................................................................ 382 Acknowledgement.................................................................................. 382 References.............................................................................................. 383 Index...................................................................................................... 391
LIST OF CONTRIBUTORS

Zishan Huang, School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, China
Danilo Avola, Department of Life, Health and Environmental Sciences, University of L’Aquila, L’Aquila, Italy
Luigi Cinque, Department of Computer Science, Sapienza University of Rome, Rome, Italy
Giuseppe Placidi, Department of Life, Health and Environmental Sciences, University of L’Aquila, L’Aquila, Italy
Jiazhen Duan, State Grid Changzhou Power Supply Company, Transmission and Distribution Engineering Company, Changzhou, Jiangsu 213000, China
Ruxin Shi, State Grid Changzhou Power Supply Company, Office, Changzhou, Jiangsu 213000, China
Hongtao Liu, State Grid Changzhou Power Supply Company, Transmission and Distribution Engineering Company, Changzhou, Jiangsu 213000, China
Hailong Rong, School of Mechanical Engineering and Rail Transit, Changzhou University, Changzhou, Jiangsu 213164, China
Yi Zhou, Aircraft Design and Engineering, Northwestern Polytechnical University, Xian, 710000, China
Abdel Badie Sharkawy, Mechanical Engineering Department, Faculty of Engineering, Assiut University, Assiut, Egypt
Shaaban Ali Salman, Mechanical Engineering Department, Faculty of Engineering, Assiut University, Assiut, Egypt
Li Yi-Min, Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu 212013, China
Yue Yang, Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu 212013, China
Li Li, School of Computer Science Telecommunication Engineering, Zhenjiang, Jiangsu 212013, China
Ibrahim M. Mehedi, Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia; Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
Heidir S. M. Shah, Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
Ubaid M. Al-Saggaf, Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia; Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
Rachid Mansouri, Laboratoire de Conception et Conduite des Systemes de Production (L2CSP), Tizi Ouzou, Algeria
Maamar Bettayeb, Electrical Engineering Department, University of Sharjah, Sharjah, UAE
Nader Jamali Soufi, Sama Technical and Vocational Training College, Islamic Azad University, Islamshahr Branch, Tehran, Iran
Mohsen Kabiri Moghaddam, Department of Electrical and Electronic Engineering, K. N. Toosi University of Technology, Tehran, Iran
Saeed Sfandiarpour Boroujeni, Department of Electrical and Electronic Engineering, University of Shiraz, Shiraz, Iran
Alireza Vahidifar, Sama Technical and Vocational Training College, Islamic Azad University, Islamshahr Branch, Tehran, Iran
Hongliang Gao, School of Electrical Engineering and Automation, Hubei Normal University, Huangshi 435002, China
Xiaoling Li, School of Electrical Engineering and Automation, Hubei Normal University, Huangshi 435002, China
Chao Gao, The China Ship Development and Design Center, Wuhan 430064, China
Jie Wu, School of Electrical Engineering and Automation, Hubei Normal University, Huangshi 435002, China
Jianhua Zhang, State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources, North China Electric Power University, Beijing 102206, China
Junghui Chen, Department of Chemical Engineering, Chung-Yuan Christian University, Chung-Li 320, Taiwan
Kathleen A. Kramer, Department of Engineering, University of San Diego, 5998 Alcalá Park, San Diego, CA 92110, USA
Stephen C. Stubberud, Advanced Programs, Oakridge Technology, Del Mar, CA 92014, USA
Iván Machón-González, Departamento de Ingeniería Eléctrica, Electrónica de Computadores y Sistemas, Universidad de Oviedo, Edificio Departamental 2, Zona Oeste, Campus de Viesques s/n, 33204 Gijón/Xixón, Spain
Hilario López-García, Departamento de Ingeniería Eléctrica, Electrónica de Computadores y Sistemas, Universidad de Oviedo, Edificio Departamental 2, Zona Oeste, Campus de Viesques s/n, 33204 Gijón/Xixón, Spain
Wei Guan, Navigation College, Dalian Maritime University, Dalian 116026, China
Haotian Zhou, Navigation College, Dalian Maritime University, Dalian 116026, China
Zuojing Su, Navigation College, Dalian Maritime University, Dalian 116026, China
Xianku Zhang, Navigation College, Dalian Maritime University, Dalian 116026, China
Chao Zhao, Navigation College, Dalian Maritime University, Dalian 116026, China
Zhuning Liu, Qingdao No.2 Middle School of Shandong Province, Qingdao, China
Rudolf Andoga, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Ladislav Főző, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Jozef Judičák, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Róbert Bréda, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Stanislav Szabo, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Róbert Rozenberg, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Milan Džunda, Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
Ayman A. Aly, Department of Mechanical Engineering, Faculty of Engineering, Taif University, AlHaweiah, Saudi Arabia; Department of Mechanical Engineering, Faculty of Engineering, Assiut University, Assiut, Egypt
El-Shafei Zeidan, Department of Mechanical Engineering, Faculty of Engineering, Taif University, AlHaweiah, Saudi Arabia; Department of Mechanical Power Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt
Ahmed Hamed, Department of Mechanical Engineering, Faculty of Engineering, Taif University, AlHaweiah, Saudi Arabia; Department of Mechanical Power Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt
Farhan Salem, Department of Mechanical Engineering, Faculty of Engineering, Taif University, AlHaweiah, Saudi Arabia
LIST OF ABBREVIATIONS

AACP   Advanced Analysis of Critical Processes
ABABC   Approximator-Based Adaptive Backstepping Control
ABFC   Adaptive Backstepping Fuzzy Control
ABNNC   Adaptive Backstepping Neural Network Control
ABS   Antilock-Braking System
ABWC   Adaptive Backstepping Wavelet Control
ADS   Altium Designer Summer
AFC   Adaptive Fuzzy Control
AFSMC   Adaptive Fuzzy Sliding Mode Control
AI   Artificial Intelligence
AOPL   Agent Oriented Programming Language
AOSE   Agent Oriented Software Engineering
ARDS   Acute Respiratory Distress Syndrome
ARF   Acute Respiratory Failure
BDI   Belief-Desire-Intention
BG   Bond Graphs
BLDC   Brushless DC Motor
BMU   Best Matching Unit
BOA   Bisector of Area
CAS   Collision Avoidance System
CEF   Controller Efficiency Function
CI   Computational Intelligence
COA   Center of Average
COG   Center of Gravity
COGS   Center of Gravity for Singletons
DAI   Distributed Artificial Intelligence
DCMP   Driving and Control of Manufacturing Processes
DE   Derivative of Error
dMARS   Distributed Multi-Agent Reasoning System
DPS   Distributed Problem Solving
DSR   Deanship of Scientific Research
EBCM   Electronic Brake Control Module
ECU   Electronic Control Unit
EGT   Exhaust Gas Temperature
EKF   Extended Kalman Filter
EPR   Engine Pressure Ratio
FADEC   Full Authority Digital Engine Control
FBF   Fuzzy Basis Function
FIS   Fuzzy Inference System
FLC   Fuzzy Logic Control
FMRLC   Fuzzy Model Reference Learning Control
FNN   Fuzzy Neural Network
FOU   Footprint of Uncertainty
FPID   Fuzzy Proportional Integral Derivative
FSMC   Fuzzy Sliding-Mode Control
GPS   Global Positioning System
GTMA   Gateway Task Master Agent
HSI-PID   Human-Simulating Intelligent PID
IAs   Intelligent Agents
IASV   Intelligent Autonomous Surface Vessel
ICU   Intensive Care Unit
i-FADEC   Intelligent Full Authority Digital Engine Control
IMAN   Immune Multiagent Network
INSs   Inertial Navigation Systems
IP   Information Potential
ISV   Integral of the Square Value
IT2FLC   Interval Type-2 Fuzzy Logic Control
ITAE   Integral Time Absolute Error
MASs   Multi-Agent Systems
MMAC   Multiple Model Adaptive Control
MOM   Mean of Maximum
NARMA   Nonlinear Autoregressive-Moving Average
NCSs   Networked Control Systems
NEKF   Neural Extended Kalman Filter
NG   Neural Gas
NN   Neural Networks
OP   Operating Point
PCB   Printed Circuit Board
PDF   Probability Density Function
PECM   Pressure Evaluate Correction Module
PI   Proportional-Integral
PID   Proportional-Integral-Derivative
PMA   Process Master Agent
PSMP   Planning and Strategies of Management Processes
QNN   Quantum Neural Network
RBF   Radial Basis Function
RLS   Recursive Least Squares
SCM   Supply Chain Management
SLFSMC   Self-Learning Fuzzy Sliding-Mode Control
SMC   Sliding-Mode Control
SVPWM   Space Vector Pulse Width Modulation
SWIPR   Single-Wheeled Inverted Pendulum Robot
T1ABFC   Type-1 Adaptive Backstepping Fuzzy Control
T1FLS   Type-1 Fuzzy Logic System
T1FSs   Type-1 Fuzzy Sets
T2ABFC   Type-2 Adaptive Backstepping Fuzzy Control
T2FLS   Type-2 Fuzzy Logic System
T2FSs   Type-2 Fuzzy Sets
TCS   Traction Control System
TMA   Task Master Agent
UAV   Unmanned Aerial Vehicle
UF1   Unknown Function 1
VDSC   Vehicle Dynamic Stability Control
WDPR   Wire-Driven Parallel Robot
WHO   World Health Organization
ZN   Ziegler-Nichols
PREFACE
The main tasks of intelligent control include the application of intelligent (AI/machine learning) algorithms for the monitoring and control of industrial processes, autonomous vehicles (smart cars, unmanned drones, satellites), smart homes, and similar systems. Intelligent control executes in real time using intelligent process control (IPC) software, which allows connection to multiple machine tool controllers. This allows the offset values to be updated and enables real-time, automatic process control. The main benefits of intelligent control methods include:
• reduction of live-operator involvement;
• reduced machine downtime;
• no delays due to proximity to the machine tool;
• errors are recorded when measuring several parts simultaneously, which allows for more accurate updates; and
• one-to-many upgradeability, providing gauges for machine tools.
Intelligent control systems apply intelligent methods and tools, as well as information and communication, as the main entities in complex systems. They are used in various types of systems (such as manufacturing, vehicles, and aircraft) as a response to real-time demands, where it has been proven that intelligent methods can contribute more easily, securely, and efficiently to the control of the system and increase its efficiency. Intelligent control includes the information-communication upgrade of a classic control system with significant use of sensors, actuators, and software as components. Adaptive control is a method of controlling a system based on independent adaptation to external changes in the system. Building a modern adaptive control system involves algorithms for intelligent control, such as fuzzy-logic control, machine learning control, neural networks, and expert systems.
This edition covers different topics from intelligent control and automation, including: intelligent control methods, fuzzy control techniques, neural networks-based control, and intelligent control applications.
Section 1 focuses on intelligent control methods, describing an automatic intelligent control system based on an intelligent control algorithm, intelligent multi-agent based information management methods to direct complex industrial systems, a design method for an intelligent ropeway type line changing robot based on lifting force control and a synovial film controller, and a summary of PID control algorithms based on AI-enabled embedded systems.
Section 2 focuses on fuzzy control techniques, describing an adaptive fuzzy sliding mode control scheme for robotic systems, adaptive backstepping fuzzy control based on type-2 fuzzy system, fuzzy PID control for respiratory systems, a parameter varying PD control for fuzzy servo mechanism, and robust fuzzy tracking control scheme for robotic manipulators with experimental verification. Section 3 focuses on neural networks-based control, describing neural network supervision control strategy for inverted pendulum tracking control, neural PID control strategy for networked process control, control loop sensor calibration using neural networks for robotic control, feedforward nonlinear control using neural gas network, and stable adaptive neural control of a robot arm. Section 4 focuses on intelligent control applications, describing ship steering control based on quantum neural network, human-simulating intelligent PID control, intelligent situational control of small turbojet engines, and antilock-braking systems (ABS) control: a technical review.
SECTION 1: INTELLIGENT CONTROL METHODS
Chapter 1
Automatic Intelligent Control System Based on Intelligent Control Algorithm
Zishan Huang School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, China
ABSTRACT
In order to improve the effect of automatic intelligent control, this paper improves the intelligent vector control algorithm, analyzes the performance of the algorithm using common power pulses as an example, and obtains an improved intelligent vector control algorithm suitable for modern intelligent systems. By combining two mutually adjacent basic vectors for unequal durations within a switching period, resultant vectors of both zero and nonzero magnitude are obtained, and the resultant vector can be rotated into any required sector. In addition, with the support of the improved algorithm, this paper develops an automated intelligent control system based on intelligent control algorithms. Finally, this paper verifies the effect of the intelligent control system through experimental research. The analysis of the test results shows that the automated intelligent control system proposed in this paper performs well in intelligent control.
Citation: Zishan Huang, “Automatic Intelligent Control System Based on Intelligent Control Algorithm”, Journal of Electrical and Computer Engineering, vol. 2022, Article ID 3594256, 10 pages, 2022. https://doi.org/10.1155/2022/3594256. Copyright: © 2022 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
Traditional control theory is the general term for classical control theory and modern control theory. Their main feature is model-based control. Traditional control uses inexact models together with fixed control algorithms and places the entire control system within the model framework, which lacks flexibility and adaptability, so it is difficult for it to handle the control of complex systems. The core of intelligent control is control decision-making, which uses flexible decision-making methods to force the control system to approach the desired goal [1]. Traditional control is suitable for solving relatively simple control problems, such as linear and time-invariant ones. These problems can also be solved with intelligent methods. Intelligent control is the development of traditional control theory, and traditional control and intelligent control can be unified under the framework of intelligent control [2].
The emergence of intelligent control comes from the high complexity and uncertainty of the controlled object and the increasingly demanding control performance that people require. It is a type of automatic control that can independently drive intelligent machines to achieve their goals without human intervention. In addition, its creation and development require a high degree of integration and utilization of multiple contemporary frontier disciplines, advanced technologies, and scientific methods [3].
The intelligent behavior in an intelligent control system is essentially a mapping relationship from input to output. When the input of the system is not an example that has been learned, the system can still give a suitable output because of its adaptive function. Even when some parts of the system fail, the system can work normally. If the system has a higher degree of intelligence, it can also automatically find faults and even has a self-repair function, thus reflecting stronger adaptability. It has the functions of self-organization and coordination for complex tasks and scattered sensor information. The intelligent controller can make decisions on its own and take actions proactively within the scope of the task requirements, and when there is a multitarget conflict, the controller has the right to make judgments on its own under certain restrictions. The intelligent control system can learn the inherent information of the unknown characteristics of a process or its environment and use the
obtained experience for further estimation, classification, decision-making, or control, so as to improve the performance of the system. That is, it has the ability to learn and acquire new knowledge about the object and environment during operation and to use this new knowledge to improve its control behavior.
The third feature of intelligent control is that it can comprehensively combine various technologies, making the design of intelligent control systems and intelligent controllers increasingly diversified and the application scope of intelligent control technology increasingly wide. It is difficult for a single technique to achieve intelligent simulation, and the combination of multiple technologies is conducive to it; a simple control method leaves the system lacking intelligence, while control that integrates adaptive, self-organizing, (self-)learning, and other technologies can improve the intelligence of the system control. The fourth characteristic of intelligent control is that, like traditional control theory, it can describe the stability of the system, the controllability and observability of the system, and the optimal control of the system (that is, the description of the entropy function and the energy function), as well as analyze the dynamic characteristics of the system, the system complexity, and so on. The fifth characteristic of intelligent control is that it is robust and real-time: it is insensitive to environmental interference and uncertain factors, can respond to highly abstract discrete symbolic commands, and has online real-time response capabilities.
This paper combines the intelligent control algorithm to study the automated intelligent control system, builds the intelligent control system model according to a current, more advanced algorithm model, and verifies the system functions through experimental research.
RELATED WORK
With the rapid development of artificial intelligence and big data technology, related multiagent technology has also been applied in many fields in recent years. Because multiagent technology has the characteristics of reasoning, cooperation, and self-healing, it creates the possibility of solving complex and dynamic scheduling problems. Literature [4] conducted research on agent-based distributed dynamic job shop scheduling and proposed a multiagent distributed dynamic job shop scheduling scheme based on a contract network protocol bidding mechanism; literature [5] analyzed an agent-based distributed job shop scheduling structure, proposed the concept of autonomous scheduling execution, and provided an
autonomous scheduling execution algorithm based on disturbance analysis, so that the scheduling execution agent can autonomously make rapid and optimal decisions on system disturbances. Literature [6] proposes using multiagent negotiation to solve the problem of intelligent transfer conflicts and finally designs an information integrated system that can solve the problem of virtual enterprise transfer conflicts through rational and efficient use of the agents' scheduling, collaboration, and intelligence characteristics. Literature [7] studied agile production line scheduling based on multiagent technology, combined the latest theoretical results of many techniques in the fields of computer science and artificial intelligence, applied multiagent technology to the solution of agile production line scheduling, and solved an actual production scheduling problem, providing a useful exploration. Literature [8] studied a dynamic scheduling method for an automatic surface treatment production line based on multiagent technology, established a multiagent-based architecture model, abstracted the complex surface treatment production line workshop scheduling problem into management, processing, workpiece, and transport agents, focused on the analysis of the communication and negotiation mechanism between these four agents, studied a tender-bid mechanism based on the contract network, and optimized scheduling through comprehensive indicators. Literature [9] uses a combination of a multiagent model and a cellular automaton model to construct a city dynamics model using geographic information system technology and takes an actual city as an empirical object, which proves that the model has a high degree of credibility. Literature [10] uses bionics to summarize the structural and functional similarities between a ship formation and the biological immune system, combines multiagent theory with the biological immune system, proposes a new immune multiagent network (IMAN) model, and applies it to the cooperative air defense system of the fleet. Literature [11] established a distributed and coordinated multiagent system and applied it to the coordination and optimization of the power system. The system includes multiple power supply agents and management coordination agents. Finally, the feasibility of the system is verified by simulation experiments. Literature [12] uses multiagent theory to establish a production scheduling system, simulates the production scheduling system according to actual production conditions, and obtains the optimal production scheduling plan, which reduces production costs and improves production efficiency; this provides a certain reference value for the formulation of production scheduling plans. Literature [13] studied the adaptive conditions of distributed structure,
centralized structure, and decentralized structure under given interactive equipment and data resources and further explored distributed collaborative problem solving and centralized collaborative problem solving. Literature [14] explored the control problem of an agent network with a fixed topology and finally proved through simulation experiments that a method based on the consensus protocol can be successfully applied to multiagent formation control with good results. Literature [15] studied a dynamic scheduling strategy for a multiagent production line based on MDP: aiming at the dynamic task allocation problem of the workshop production line, and based on Markov decision process theory, a dynamic task scheduling mathematical model of the multiagent production line was established. Literature [16] carried out modeling, optimization, and simulation research on a multiagent-based production line, built the model with the Simulink/Stateflow simulation tool, conducted experimental research, and proposed related control strategies according to the actual production situation. The communication program is written in the VC language, and the distributed simulation of the multiagent system is completed. Finally, through statistics and analysis of the simulation data, the production line optimization strategy is verified.
VECTOR CONTROL BASED ON INTELLIGENT CONTROL ALGORITHM
When researching the multimodal control algorithm, it is necessary to use the MATLAB/Simulink platform to build the simulation model of this variable-structure AC servo pressurization system. If we want to build a physical model of an AC motor, we should understand its characteristics: it is a nonlinear, strongly coupled, high-order, multivariable system. These characteristics indicate that it must be expressed by nonlinear formulas, which is very inconvenient when controlling such a system. The main reason for the complexity of the AC motor model is the coupling between the torque and the magnetic flux. If the motor model is decomposed into a model in which torque and flux linkage are controlled separately, then the AC motor can be controlled as if it were a DC motor control model. Vector control can achieve this control effect. The principle of vector control is to measure the stator current vector of the motor and to decompose the stator current according to the basic principles of the magnetic field, so as to realize the control of the motor. The specific method is to decompose the current vector into the torque current
and the excitation current and then control them separately. The control here refers to controlling the amplitude and phase of the excitation (field) current component and of the torque current component. This control method is the vector control method. Therefore, the vector control strategy is introduced into the simulation of the motor for the optimization algorithm part of this work.
Coordinate transformation is a very critical link in vector control technology. Only by transforming its coordinates can the motor torque and flux linkage be decoupled. Coordinate transformation is the core step of AC motor vector control and an important mathematical method to simplify the complex physical model of the motor. The purpose of coordinate transformation is to transform the physical model of an AC motor into a model similar to that of a DC motor. The AC motor model after coordinate transformation is more convenient for analysis and control. The essence of coordinate transformation is to replace the original variables in the formulas with a set of new variables obtained after transformation. It is based on the principle of generating the same rotating magnetomotive force. The variables in the three-phase stator coordinate system are equivalent, through the Clarke transformation, to the variables αβ0 in the two-phase stationary coordinate system, that is, to the alternating currents iα and iβ. Through the Park transformation, the variables αβ0 in the two-phase stationary coordinate system can be made equivalent to the variables dq0 in the two-phase rotating coordinate system, that is, to the DC currents id and iq. The variables ABC in the three-phase stator coordinate system, the variables αβ0 in the two-phase stationary coordinate system, and the variables dq0 in the two-phase rotating coordinate system are the three commonly used coordinate systems in the vector control of AC motors. According to the above equivalence relationships, the structure diagram shown in Figure 1 can be obtained. It can be seen from Figure 1 that the vector control input is the three-phase power A, B, and C. To obtain a DC-motor-like model, with inputs id and iq and output ω, the Park and Clarke transformations must be performed according to the oriented rotor flux [17].
Figure 1. Coordinate transformation structure diagram of motor.
Clarke Transformation (3S/2S Transformation)
There are two types of Clarke transformation: the equal-amplitude transformation and the equal-power transformation. If the equal-amplitude transformation is used, compensation needs to be added in the subsequent Park transformation. Therefore, the Clarke transformation used here is the equal-power transformation method. The following two conditions must be satisfied in the constant-power coordinate transformation:
• The effect of the magnetic field generated by the current before and after the coordinate transformation remains unchanged.
• The power of the motor before and after the coordinate transformation remains at its original value.
The voltage u and current i in the original coordinates are transformed into the voltage u′ and current i′ in the new coordinates. Since we want them to have the same transformation matrix C, then [18]
(1)
In order to realize the inverse transformation, the transformation matrix C must have an inverse matrix C−1, so the transformation matrix C must be a square matrix, and the value of its determinant must not be equal to zero. Since u = zi, where z is the impedance matrix,
(2)
In the above formula, the transformed impedance matrix is represented by z′, which can be expressed as
(3)
In order to follow the principle of constant power consumption, the electrical power iTu = u1i1 + u2i2 + ··· + unin in one coordinate system must equal the electrical power i′Tu′ = u1′i1′ + u2′i2′ + ··· + un′in′ in the other coordinate system, namely [19]:
(4)
(5)
In order for formula (4) to be the same as formula (5), the transformation matrix C must be an orthogonal matrix. In the above formulas, C−1 is the inverse matrix of C; iT is the transpose of i; i′T is the transpose of i′; CT is the transpose of C; I is the identity matrix; z and z′ are impedance matrices; and u, u′, i, and i′ are the voltage and current column vectors. At the same time, according to matrix algebra, C−1C = I, (Ci′)T = i′TCT, (kC)T = kCT, u = Cu′, and u′ = C−1u.
Figure 2 shows the relative positional relationship of the magnetic potential vectors: the magnetic potential vectors of the stator three-phase motor windings A, B, and C and the magnetic potential vectors of the two-phase motor windings α and β, respectively. The A axis coincides with the α axis. It can be known from the vector coordinate transformation principle that the two produce completely equal magnetic fields; that is, the resultant magnetic potential vector is the same in the two coordinate systems.
Figure 2. Vector coordinate system.
Therefore, we have
(6)
that is,
(7)
(8)
In these formulas, N2 and N3 represent the effective number of turns per phase of the stator of the three-phase motor and of the two-phase motor, respectively. Formulas (7) and (8) are expressed in matrix form, namely:
(9)
This transformation matrix cannot be inverted, because it is not a square matrix. To this end, it is necessary to introduce the zero-axis current, that is, a new variable i0 that is independent of iα and iβ. In the resulting α-β-0 axis coordinate system, the zero axis is perpendicular to both the α and β axes. We define
(10)
(11)
In the formula, k is an undetermined coefficient. Therefore, formula (10) is rewritten as
(12)
In the formula, the matrix C is defined as
(13)
The transpose matrix CT of C is
(14)
The inverse matrix C−1 of C is
(15)
In order to comply with the principle of constant-power conversion, we know that CT = C−1; that is, formula (14) above is equal to formula (15). Then:
(16)
We can obtain separately
(17) By substituting formula (17) into formula (13) and formula (15), then
(18)
Therefore, the Clarke transformation (or 2/3 transformation) formula is
(19)
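As a numerical illustration of the constant-power Clarke transformation derived above, the following sketch (written here in Python with NumPy purely for illustration; the chapter's own simulations use MATLAB/Simulink) builds the power-invariant matrix with the standard constant-power coefficients, an overall factor of sqrt(2/3) and a zero-axis row of 1/sqrt(2), checks the condition CT = C−1, and applies the transform to a balanced three-phase current. The matrix here maps (A, B, C) to (α, β, 0); the chapter's matrix C may be defined in the opposite direction.

```python
import numpy as np

# Power-invariant (equal-power) Clarke matrix, assuming the standard
# constant-power form: overall factor sqrt(2/3), zero-axis row 1/sqrt(2).
C_CLARKE = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0],
    [1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)],
])

def clarke(i_abc: np.ndarray) -> np.ndarray:
    """Transform three-phase quantities (A, B, C) to (alpha, beta, 0)."""
    return C_CLARKE @ i_abc

def inverse_clarke(i_ab0: np.ndarray) -> np.ndarray:
    """Transform (alpha, beta, 0) back to (A, B, C); C is orthogonal, so C^-1 = C^T."""
    return C_CLARKE.T @ i_ab0

if __name__ == "__main__":
    # Orthogonality check: the constant-power condition C^T = C^-1 holds.
    assert np.allclose(C_CLARKE @ C_CLARKE.T, np.eye(3))

    # A balanced three-phase current sample at electrical angle theta.
    theta, amplitude = 0.7, 10.0
    i_abc = amplitude * np.cos(theta - np.array([0.0, 2.0, 4.0]) * np.pi / 3.0)

    i_ab0 = clarke(i_abc)
    print("alpha/beta/0:", i_ab0)                       # zero component is ~0 for a balanced set
    print("round trip ok:", np.allclose(inverse_clarke(i_ab0), i_abc))
```

Because the matrix is orthogonal, the inner product iTu is preserved, which is exactly the constant-power requirement stated above.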
Park Transformation (2s/2r Transformation)
Generally speaking, the 2s/2r conversion refers to the conversion between the two-phase stationary coordinate system and the two-phase rotating coordinate system. To obtain a rotating magnetomotive force, a two-phase stationary winding is required, and balanced two-phase alternating currents are applied to it. If the two-phase windings are instead rotated at an angular velocity equal to that of the synthesized magnetomotive force, then the same space-rotating magnetomotive force is generated in the two-phase windings by direct currents. Figure 3 shows the Park transformation relationship diagram, and the Park transformation is derived from the Clarke transformation as
(20)
(21)
Figure 3. Park transformation relationship diagram.
From formula (20), we can get
(22)
Adding the two equations, we obtain
(23)
(24)
and hence
(25)
The inverse Park transformation is given by the inverse of the Park transformation matrix:
(26)
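The rotating-frame step can be sketched in the same way. The small example below applies the Park transformation and its inverse for a given rotor-flux angle θ, using a common sign convention for the rotation; the signs may differ from the convention used in equations (20), (21), and (26), so this is an illustrative assumption rather than a reproduction of the chapter's matrices.

```python
import numpy as np

def park(i_alpha: float, i_beta: float, theta: float) -> tuple[float, float]:
    """Rotate stationary alpha-beta quantities into the rotating d-q frame
    aligned with the rotor flux angle theta (the 2s/2r transformation)."""
    i_d = np.cos(theta) * i_alpha + np.sin(theta) * i_beta
    i_q = -np.sin(theta) * i_alpha + np.cos(theta) * i_beta
    return i_d, i_q

def inverse_park(i_d: float, i_q: float, theta: float) -> tuple[float, float]:
    """Inverse Park transformation: rotate d-q quantities back to alpha-beta."""
    i_alpha = np.cos(theta) * i_d - np.sin(theta) * i_q
    i_beta = np.sin(theta) * i_d + np.cos(theta) * i_q
    return i_alpha, i_beta

if __name__ == "__main__":
    # For an alpha-beta pair rotating at the same angle theta, the d-q
    # components come out constant (DC), which is the point of the transform.
    for theta in np.linspace(0.0, 2.0 * np.pi, 5):
        i_alpha, i_beta = 10.0 * np.cos(theta), 10.0 * np.sin(theta)
        print(f"theta={theta:4.2f}  d,q = {park(i_alpha, i_beta, theta)}")
```

Running the loop prints a constant d component and a zero q component, illustrating how the Park transformation turns alternating currents into DC-like quantities that can be regulated as in a DC motor.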
The function of space vector pulse width modulation (SVPWM) is to make the output waveform closer to a sinusoidal waveform. The specific method is as follows: a three-phase power inverter is composed of six power switching elements, and their switching states are changed appropriately to generate a pulse-width-modulated wave. Compared with traditional pulse width modulation technology, SVPWM has the advantages of reducing the motor torque ripple to a certain extent, and the rotation trajectory is closer to a circular flux trajectory. Space vector pulse width modulation uses the principle of average equivalence as the basis of its theoretical analysis; that is, by combining the basic voltage vectors within a single switching cycle, the
average vector is made equal to the given vector. At a given moment, by combining two mutually adjacent basic vectors for unequal durations, resultant vectors of both zero and nonzero magnitude are obtained, and the resultant vector can be rotated into a given sector. In order to obtain the PWM waveform, the action times of the two vectors are applied repeatedly, with each application occurring within one complete sampling period. In this way, the action time of each vector can be controlled, and a space vector rotating along a circular trajectory can be obtained. An actual magnetic flux close to the ideal flux circle is generated by changing the switching states of the inverter, and the switching state of the inverter is then determined by comparing the two. Figure 4 shows the inverter circuit.
Figure 4. Inverter circuit.
Udc is the DC bus voltage, and the three-phase voltages output by the inverter are UA, UB, and UC, respectively, which are arranged 120° apart in space; the coordinate system is a three-phase planar stationary coordinate system. Three phase-voltage space vectors UA(t), UB(t), and UC(t) can be defined. Their directions always lie on the reference axes of the respective phases, their magnitudes change with time according to a sine law, and their time phases differ by 120°. If it is assumed that the effective value of the phase voltage is Um and the power supply frequency is f, then the following formula can be obtained:
(27)
In the formula, θ = 2πft; the resultant space vector U(t) obtained by adding the three phase-voltage space vectors can then be expressed as
(28)
According to the above, U(t) is a rotating space vector whose amplitude is 1.5 times the peak value Um of the phase voltage, and the space vector rotates counterclockwise at a uniform angular frequency ω = 2πf. The projections of the space vector U(t) on the three-phase coordinate axes (a, b, c) are the symmetrical three-phase sinusoids. Since the inverter's three-phase bridge consists of six switching devices, in order to describe the space vector output by the inverter when the upper and lower arms of each phase are switched in different combinations, the switching function Sχ (χ = a, b, c) is defined as
Sχ = 1 when the upper arm of phase χ is on and the lower arm is off; Sχ = 0 when the upper arm is off and the lower arm is on. (29)
There are eight possible combinations of (Sa, Sb, Sc), including 6 nonzero vectors U1(001), U2(010), U3(011), U4(100), U5(101), U6(110) and two zero vectors U0(000) and U7(111).
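The eight switching states can be made concrete with a short sketch that evaluates the space vector for every combination of (Sa, Sb, Sc), using the common definition U = (2/3)·Udc·(Sa + Sb·e^(j2π/3) + Sc·e^(j4π/3)); this positional labeling is an assumption here, and the chapter's numbering U1 to U7 may follow a different convention. Its output confirms the magnitudes discussed next: six active vectors of magnitude 2Udc/3 spaced 60° apart and two zero vectors.

```python
import numpy as np

def basic_vector(sa: int, sb: int, sc: int, u_dc: float) -> complex:
    """Space vector produced by one inverter switching state (Sa, Sb, Sc),
    assuming the common definition U = (2/3)*Udc*(Sa + Sb*a + Sc*a^2), a = e^(j*2*pi/3)."""
    a = np.exp(2j * np.pi / 3.0)
    return (2.0 / 3.0) * u_dc * (sa + sb * a + sc * a * a)

if __name__ == "__main__":
    u_dc = 1.0
    for state in range(8):
        sa, sb, sc = (state >> 2) & 1, (state >> 1) & 1, state & 1
        u = basic_vector(sa, sb, sc, u_dc)
        print(f"({sa}{sb}{sc}) -> |U| = {abs(u):.3f}, angle = {np.degrees(np.angle(u)):7.1f} deg")
```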
The magnitude of each nonzero vector is 2Udc/3, the angle between two adjacent vectors is 60°, and the magnitude of the two zero vectors at the center of the sectors is zero. Any vector within a sector is composed of the two adjacent nonzero vectors of that sector and the zero vector. The resultant vector is
Uref T = Ux Tx + Uy Ty + U0 T0 (30)
Among them, Uref is the expected vector; T is the sampling time; Tx, Ty, and T0, respectively, correspond to the action time of the two nonzero
vectors Ux and Uy and of the zero vector U0 within one cycle. Here U0 stands for the two zero vectors U0(000) and U7(111). The meaning of formula (30) is that the integral effect produced by the vector Uref over the time T is the same as the sum of the integral effects produced by Ux, Uy, and U0 over the times Tx, Ty, and T0, respectively. The equivalent rotating vector is synthesized from the three-phase sine-wave voltages in the voltage space-vector plane; the input supply angular frequency is its rotation speed, and its trajectory is shown in Figure 5.
Figure 5. Vector pie chart.
Using the abovementioned vector synthesis technique, a three-phase sine-wave voltage can be obtained. The method is to start the set vector from position U4(100) in the voltage space-vector plane and to use the two adjacent nonzero vectors and the zero vector of each sector to synthesize the vector in small increments. The tentative vector obtained at each small increment is equivalent to a space voltage vector, so the vector can be rotated smoothly in the space-vector plane, achieving the purpose of pulse width modulation.
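As a worked illustration of the volt-second balance in formula (30), the sketch below computes, for a given reference vector, the sector and the action times Tx, Ty, and T0 of the two adjacent nonzero vectors and the zero vectors. The sector indexing and the normalization by 2Udc/3 follow the standard textbook formulation and are assumptions here rather than the chapter's own implementation.

```python
import numpy as np

def svpwm_dwell_times(u_ref: complex, u_dc: float, t_s: float):
    """Dwell times from the volt-second balance Uref*Ts = Ux*Tx + Uy*Ty + U0*T0,
    using the two active vectors adjacent to Uref and putting the remainder on
    the zero vectors U0(000) and U7(111)."""
    angle = np.angle(u_ref) % (2.0 * np.pi)
    sector = int(angle // (np.pi / 3.0))           # sectors 0..5, each 60 degrees wide
    theta = angle - sector * (np.pi / 3.0)         # angle measured inside the sector

    m = abs(u_ref) / (2.0 * u_dc / 3.0)            # magnitude relative to an active vector
    t_x = m * t_s * np.sin(np.pi / 3.0 - theta) / np.sin(np.pi / 3.0)
    t_y = m * t_s * np.sin(theta) / np.sin(np.pi / 3.0)
    t_0 = t_s - t_x - t_y                          # shared by the two zero vectors
    return sector, t_x, t_y, t_0

if __name__ == "__main__":
    u_dc, t_s = 310.0, 1.0 / 10_000.0              # example DC bus voltage and switching period
    u_ref = 150.0 * np.exp(1j * np.deg2rad(50.0))  # desired reference voltage vector
    print(svpwm_dwell_times(u_ref, u_dc, t_s))
```

For any reference vector inside the hexagon of active vectors, t_0 comes out non-negative, so the averaged output over one switching period equals the reference, which is exactly the equivalence principle described above.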
AUTOMATIC INTELLIGENT CONTROL SYSTEM BASED ON INTELLIGENT CONTROL ALGORITHM The overall architecture design is the first step to complete the design and realization of the intelligent control system. It points out the working principle and control flow of the entire system. The software and hardware platforms implemented by the system determine the functional characteristics of the system, and the design of the communication protocol specifies all the data formats in the system. The system is mainly composed of an embedded intelligent terminal and a wireless sensor network. The controlled object is placed within the coverage of the wireless sensor network, and the embedded intelligent terminal communicates with the wireless sensor network through a serial port. The overall system architecture is shown in Figure 6.
Figure 6. The overall architecture of the intelligent control system.
The overall control flowchart of the intelligent control system in this paper is shown in Figure 7. The specific control process is as follows: (1) The embedded intelligent terminal performs initialization operations and starts core services. (2) The embedded intelligent terminal is connected to the convergence point through the serial port, so that the coordinator can access the network. After the coordinator accesses the network, it accesses the control node, so that the control node accesses the network. (3) The sensor collects the data information of the controlled object, and after relevant processing such as A/D conversion of the control node, it actively periodically reports
it to the embedded intelligent terminal according to the ZigBee private communication protocol format. (4) After the embedded intelligent terminal receives the data from the ZigBee network, the data are handed over to the parsing module for processing, and the network access data are distinguished from the actively reported data according to the function code. From the network access data, the terminal extracts the ZigBee network equipment information and stores it in the network equipment table of the database. The sensor information and the collected data are extracted from the actively reported data and stored in the sensor table and the collection data table of the database, respectively. (5) The terminal extracts the specific data of the controlled object from the collected data and obtains the control quantity corresponding to the current data through the offline look-up table method of the fuzzy intelligent control algorithm. (6) It encapsulates the obtained control quantity into a control command in the ZigBee private communication protocol format and sends it to the corresponding device in the underlying sensor network. (7) The corresponding device at the bottom layer receives the control instruction and, after D/A conversion and other processing, outputs the corresponding analog quantity from the AO output interface of the control node to the control unit, realizing real-time control of the controlled object.
Figure 7. The control flow of the intelligent control system.
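Step (5) above obtains the control quantity through an offline fuzzy look-up table. The sketch below illustrates only the look-up idea: the fuzzy control surface is assumed to have been computed in advance, so at run time the terminal merely quantizes the error and the error change and indexes the table. The quantization scales and the table contents are invented for demonstration and are not those of the system described here.

```python
# Hypothetical offline look-up-table step: the fuzzy control surface is assumed to have been
# computed beforehand and stored as a table indexed by (error level, error-change level).
FUZZY_TABLE = [
    # dE: NB   NS   ZO   PS   PB        E:
    [  -6,  -5,  -4,  -3,  -2],       # NB
    [  -4,  -3,  -2,  -1,   0],       # NS
    [  -2,  -1,   0,   1,   2],       # ZO
    [   0,   1,   2,   3,   4],       # PS
    [   2,   3,   4,   5,   6],       # PB
]

def quantize(value, full_scale, levels=5):
    """Map a continuous value in [-full_scale, full_scale] to one of `levels` indices."""
    value = max(-full_scale, min(full_scale, value))
    return round((value + full_scale) / (2 * full_scale) * (levels - 1))

def lookup_control(error, d_error, e_scale=10.0, de_scale=5.0):
    """Return the (scaled) control quantity for the current error and error change."""
    return FUZZY_TABLE[quantize(error, e_scale)][quantize(d_error, de_scale)]

print(lookup_control(error=3.2, d_error=-1.1))   # example call with made-up measurements
```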
The embedded intelligent terminal plays a central role in the entire intelligent control system; the details can be seen in Figure 8. Its interface to the coordinator is a serial port, and it realizes the intelligent control function of the system by collecting information from the control nodes, intelligently analyzing that information, and issuing control commands accordingly.
Figure 8. The flowchart of terminal processing of network access data.
If the node exists and matches, the actively reported data are handed over to the two database-processing subclasses. Figure 9 shows how the intelligent terminal processes the actively reported data.
Figure 9. The flowchart of the terminal processing the actively reported data.
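Because the chapter does not publish the ZigBee private protocol format, the following fragment is purely hypothetical: it assumes a minimal frame layout (function code, node address, payload) in order to illustrate how the parsing module of Figures 8 and 9 could distinguish network-access data from actively reported data by function code and store them in the corresponding tables.

```python
# Hypothetical frame layout (NOT the private protocol of the paper):
#   byte 0      : function code (0x01 = network-access report, 0x02 = active data report)
#   byte 1      : node address
#   bytes 2..n  : payload
NETWORK_ACCESS, ACTIVE_REPORT = 0x01, 0x02

net_equipment_table = {}   # stands in for the database "network equipment" table
collection_table = []      # stands in for the "collection data" table

def handle_frame(frame: bytes) -> None:
    """Distinguish network-access data from actively reported data by function code."""
    code, node = frame[0], frame[1]
    payload = frame[2:]
    if code == NETWORK_ACCESS:
        net_equipment_table[node] = payload.hex()          # store device information
    elif code == ACTIVE_REPORT:
        if node in net_equipment_table:                    # node exists and matches (Figure 9)
            value = int.from_bytes(payload[:2], "big")     # e.g. a 16-bit sensor reading
            collection_table.append((node, value))
    # unknown function codes are silently ignored in this sketch

handle_frame(bytes([0x01, 0x10, 0xAA, 0xBB]))
handle_frame(bytes([0x02, 0x10, 0x01, 0xF4]))   # reports the value 500 from node 0x10
print(net_equipment_table, collection_table)
```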
Through the above research, the performance verification of the intelligent control system in this paper is carried out. First, the algorithm effect evaluation of the intelligent vector control algorithm proposed in this paper is carried out. The statistical test results are shown in Figure 10.
Figure 10. Evaluation of the intelligent effect of intelligent vector control algorithm.
Through the above research, it can be seen that the intelligent vector control algorithm proposed in this paper has a good intelligent effect, and then the evaluation of intelligent control effect of the automatic intelligent control system is carried out, and the result shown in Figure 11 is obtained.
Figure 11. Evaluation of intelligent control effect of automated intelligent control system.
Through the above experimental analysis, it can be seen that the automated intelligent control system proposed in this paper has a better effect in intelligent control, and the embedded intelligent system of this paper can be applied to multiple systems with intelligent control requirements.
CONCLUSION One of the characteristics of intelligent control is that it is suitable for the control of uncertain or difficult-to-define processes, complex nonlinear controlled objects, and time-varying processes. For such complex models, the description can be quantitative (numbers or parameters) or qualitative (causal relationships). The second characteristic of intelligent control is that it uses self-adaptation, self-organization, self-learning, and other methods to improve the automation and intelligence of the system. This enables it to process various kinds of uncertain and qualitative information and data structures, as well as unstructured information and data, to process and utilize knowledge of different natures, to identify changes in the structure or composition of the main control system, and to modify or reconstruct its own parameters or structure according to changes in the main control system or environment. In addition, it is fault-tolerant, capable of self-diagnosing, shielding, and recovering from various faults. This paper combines the intelligent control algorithm with the study of the automated intelligent control system, builds the intelligent control system model according to a currently advanced algorithm model, and verifies the system function through experimental research.
Chapter 2
Intelligent Multi-Agent Based Information Management Methods to Direct Complex Industrial Systems
Danilo Avola1, Luigi Cinque2, and Giuseppe Placidi1
1 Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, Italy
2 Department of Computer Science, Sapienza University of Rome, Rome, Italy
ABSTRACT In recent years, the increasing complexity of the logistic and technical aspects of novel manufacturing environments, as well as the need to improve the performance and safety characteristics of the related cooperation, coordination and control mechanisms, is encouraging the development of new information management strategies to direct and manage the automated systems involved in the manufacturing processes. Computational Intelligence (CI) approaches seem to provide effective support for the challenges posed by the next generation of industrial systems. In particular, the Intelligent Agents (IAs) and the Multi-Agent Systems (MASs) paradigms seem to provide the most suitable solutions. Autonomy, flexibility and adaptability of agent-based technology are the key points in managing both the automated and the information processes of any industrial system. The paper describes the main features of IAs and MASs and how their technology can be adapted to support the current and next generation of advanced industrial systems. Moreover, a study of how a MAS is utilized within a productive process is presented.
Keywords: Industrial Systems; Information Management; Intelligent Agents; Multi-Agent Systems
Citation: D. Avola, L. Cinque and G. Placidi, "Intelligent Multi-Agent Based Information Management Methods to Direct Complex Industrial Systems," Intelligent Information Management, Vol. 4, No. 6, 2012, pp. 338-347. doi: 10.4236/iim.2012.46038.
Copyright: © 2012 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0
INTRODUCTION Current industrial systems are destined to become ever more technologically advanced in every aspect. This is due to several reasons: the complexity of the new manufacturing processes, the need to optimize costs, the increasing complexity of resource management, the need to improve performance, the aspects tied to information exchange, the manufacture of complex products, the need to ensure advanced safety standards, and so on. All these aspects imply a continuous and growing renovation of current industrial activities. Artificial Intelligence (AI) techniques [1,2], used to overcome the new qualitative and quantitative challenges brought by the next industrial age, seem to provide the most suitable support, since they naturally offer the technical features (e.g., adaptability, reasoning, learning) required by the new manufacturing processes [3]. In particular, the use of AI-based strategies allows industrial environments to adopt autonomous, dynamic and intelligent procedures to face expected and unexpected issues. IAs and MASs are technologies that can play an important role in every aspect of the development of the next generation of advanced industrial systems. In fact, these systems have to be designed to include intelligent, autonomous and evolvable entities, in turn composed of different sub-entities having the same features. The number of levels (i.e., entities, sub-entities) and the complexity of each entity and sub-entity depend on the specific kind of industrial system. The mentioned features are strongly tied to the IAs and MASs paradigms [4,5]. Moreover, several IAs and MASs have already been adopted in a wide range of applications connected to advanced
industrial systems, such as: Robotic, artificial vision, production planning, systems control, engineering, information exchanging, and so on. In what follows, we describe the main features of the IAs and MASs and how their technology can be adapted to support the current and next generation advanced industrial systems. The paper is organized as follows. Section 2 introduces the general aspects of the logical architecture of the current industrial systems. Section 3 describes the main features of the IAs and MASs to be adapted to support industrial systems. Section 4 provides a study of how a MAS is utilized within a productive process. In particular, the section discusses the different MASs algorithms highlighting their use to support the real-time production process. Section 5 shows the principles of the hybrid systems and presents the main characteristics of a case study. Section 6 concludes the paper.
CURRENT INDUSTRIAL SYSTEMS ARCHITECTURE The logical architectures of the current industrial systems can be seen as a correlated set of connected crucial processes that have to achieve a prefixed objective taking into account the current elaboration state of every other process. All the processes are aimed to reach a common target: The Artifact. In our context, the last mentioned term has to be considered both a physical object (e.g., technical or mechanical device, industrial product) and an abstract object (e.g., general service, immaterial product). The concepts regarding the creation or delivery of an artifact can be generalized to provide a common description of current industrial system architectures. Each process deals with a specific activity (or part of it) of the whole industrial process. In fact, a generic Industrial system includes heterogeneous activities, such as: economical planning, quality and test control, real-time production control, internal activities, processes monitoring, robot control, sales planning, and so on. For this reason, the specification of a process can be completely different from another. As shown in Figure 1, a process is an autonomous entity characterized by internal rules and protocols. Every process works in coordination with others by taking into account their internal state.
Figure 1. A general scheme of a process.
The internal rules and protocols are highly dependent on the specific process. In general, the output of a process (i.e., artifact and information) concerns a collection of homogeneous activities, while the input (i.e., process interaction) concerns the connection of several heterogeneous activities, such as technical and scheduling information, control and manufacturing activities, data, and the artifact (or part of it). Every single process is composed of several different tasks (or sub-tasks) concerning a specific part of the whole industrial process. As shown in Figure 2, a task (or a sub-task) is an activity that can be solved, by decisional processes, using autonomous and intelligent methods without specific knowledge of the surrounding processes. The depth of the tree depends on both the kind of process and its complexity. Usually, processes concerning long-term economic planning and technical activities are more complex than others.
Figure 2. Process, tasks and sub-tasks hierarchy.
The process at the root level (i.e., level 0) provides each of its tasks with the input needed to accomplish the related assignments. Likewise, each task (i.e., level 1) provides each of its sub-tasks with the suitable part of the original input to accomplish the related sub-assignments. This splitting of input and assignments ends at the last level of the tree. The solution of each task and sub-task, by a decisional process, can require information or activities coming from different tasks and/or sub-tasks connected to any tree level. For this reason, the interaction mechanisms are a critical aspect of current industrial systems. A failure in a task can produce a critical crash of the related process, or of the connected processes. Furthermore, this last aspect is fundamental for establishing the intelligent mechanisms needed to handle the heterogeneous activities belonging to complex industrial systems (e.g., real-time monitoring, dynamic business planning, chain control, simulation environments) and to overcome expected and unexpected issues (e.g., breakdowns, environment changes, damages). The decisional process that solves a task at level 1 of the hierarchy can interact with any other decisional process that solves a task at the same hierarchical level. The decisional processes that solve sub-tasks on the other levels of the hierarchy (e.g., level 2) may directly interact only with the decisional processes that solve a sub-task connected to the same task or sub-task at the previous hierarchical level. In other words, on each tree level, only the "brothers" can interact directly. If decisional processes belonging to different "fathers" have to interact, then those "fathers" carry out the interaction. As shown in Figure 3, the artifact is the result of the joined processes. Like the previously described cooperation strategy, each decisional process that solves a process at level 0 can interact with any other decisional process that solves any other process at the same level 0. Also in this case, if two decisional processes driving two different tasks (e.g., at level 1) belonging to two different processes have to interact, the two processes themselves carry out the interaction. This kind of hierarchical interaction serves to avoid disorder (i.e., anarchy) among tasks, sub-tasks and processes. Moreover, it allows each decisional process to have total autonomy with respect to its fixed assignment. Each decisional process involved in the solution of a process, task or sub-task has to have well-defined features. These features specify the way (e.g., protocols of communication and execution, priority of the decisional tree, priority of the queues, relationships between tasks or sub-tasks) and
the resources (e.g., technical, economical, time planning, supply chain management) to carry on the activities. A generic decisional process can be considered as a set of activities that can be carried on by an intelligent, autonomous and flexible entity. Technically, these entities can be seen as a set of interactive procedures able to learn and evolve according to their life cycle. For these reasons, the use of IA and MAS technologies are considered the best solutions to these kinds of issues. As shown in Figure 4, each decisional process should be driven by a single IA. Each IA knows only the rules, the procedures and the protocols related to its specific task. It has a limited point of view regarding the whole industrial process and it is able to adopt solutions only on its specific contextual environment. Moreover, the networked set of the IAs (i.e., MASs) should drive each process into the industrial system. This association between tasks, assignments and intelligent entities allows to implement independent customized solutions to solve global and local specific issues.
Figure 3. Level 0: The processes.
Figure 4. Decisional process and intelligent agent.
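The hierarchy of processes, tasks, and sub-tasks in Figures 2-4, together with the rule that only "brothers" interact directly while different "fathers" mediate cross-branch requests, can be illustrated with a small data-structure sketch. The class and the names below are invented for demonstration and are not taken from the chapter.

```python
class Node:
    """A process, task or sub-task of the hierarchy in Figure 2; each node is driven by one IA."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def send(self, other, message):
        """Siblings talk directly; otherwise the message is relayed upward by the parents."""
        if self.parent is other.parent:                 # "brothers" on the same level
            print(f"{self.name} -> {other.name}: {message}")
        else:                                           # different "fathers": the parents mediate
            print(f"{self.name} -> {self.parent.name} (relay)")
            self.parent.send(other.parent if other.parent else other, message)

root = Node("Process")                                        # level 0
t1, t2 = Node("Task-1", root), Node("Task-2", root)           # level 1
s11, s21 = Node("SubTask-1.1", t1), Node("SubTask-2.1", t2)   # level 2

s11.send(s21, "need intermediate artifact data")              # relayed via Task-1 and Task-2
```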
IAS AND MASS IN INDUSTRIAL SYSTEMS The agent and multi-agent technologies have been widely adopted in robotic and automatic control system fields. The growing complexity of the current industrial environments and their affinity to the mentioned fields has encouraged the design of IAs and MASs aimed to direct each aspect of the industrial chain. The CI based techniques promise to be the best way to perform these historical changes. There are several similar definitions of intelligent agent [6,7]. Considering the set of these definitions, in our context an intelligent agent can be described as: A social, autonomous and intelligent computational entity capable to accomplish a definite objective through an adaptive reasoning. Regardless of how an IA is defined, it is characterized by several general features [8] that can be adapted to our context. These features, joined with the given definition, complete the IA paradigm.
IA Properties Adapted to Industrial Systems There are five main properties belonging to the IA that have to be adapted to build a basic industrial oriented IA.
Reactivity An IA is a reactive entity that continuously perceives and acts on its surrounding environment. In an industrial system this feature depends on the process to which the agent is applied. For example, in economic planning processes reactivity can be considered a passive feature, i.e., the agent starts in a receptive state and tends to switch its condition only if it recognizes particular changes in the environment. Instead, in technical processes reactivity can be considered active, i.e., each change in the environment is considered an important event that has to be analyzed in real time.
Pro-Activity An IA has to operate in an expected way, but it can also take initiatives to overcome unexpected events. Usually, this feature does not depend on the process, it is tied to the environment in which the agent has to operate. The environments into the industrial processes can be classified in deterministic and non-deterministic. At the first class often belong processes derived from economical tasks that are tied to several unexpected changes (e.g.,
economical plans, supply chain management). At the last class almost always belong processes derived from technical tasks that are usually scheduled and coordinated (e.g., test and control, robot interaction, real-time monitoring).
Autonomy An IA has to be capable of autonomous actions to accomplish its objectives. There are several situations in which an agent has to decide autonomously what needs to be done and how it has to be done. The main issue of this feature is to decide which, how many and when specific actions/activities can be adopted by the IA. As a general rule, an IA is authorized to take autonomous decisions only if there is a real possibility that the whole process will crash. These situations occur quite often in advanced industrial systems where several technological actors are involved. In fact, the introduction of complex systems into industrial environments (e.g., real-time systems, online robot vision monitoring, automatic control management) has led to high levels of entropy. Usually, a reasoning decision tree is used to decide the actions that have to be performed [9,10]. Thanks to the autonomy property, each agent can interact with the environment or with other agents without an external presence to ensure the achievement of the prefixed objectives.
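As noted above, a reasoning decision tree is typically used to decide which actions the agent may take on its own [9,10]. The toy fragment below sketches that idea; the thresholds and the action labels are invented purely for illustration.

```python
def autonomous_action(crash_risk, deviation):
    """Toy reasoning decision tree: the agent acts on its own only when the whole
    process is at real risk of crashing (values and actions are illustrative)."""
    if crash_risk > 0.8:                    # real possibility of a process crash
        if deviation > 0.5:
            return "switch to safe state and notify the supervisor agent"
        return "apply a local corrective action"
    if crash_risk > 0.3:
        return "request confirmation from the task master agent"
    return "continue scheduled behaviour"

print(autonomous_action(crash_risk=0.9, deviation=0.7))
```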
Flexibility An IA has to interact with the surrounding environment in different ways. It must be able to adapt itself quickly to drastic changes in relationships, environments or events. Sometimes conventional communication or interaction methods may not be sufficient to reach a prefixed objective. For example, an agent may want data from another agent in a way that differs from the standard protocols. For this reason, an agent has to manage its own characteristics (e.g., output, external interface, internal protocols). This property is particularly important in industrial systems because they are prone to dynamic changes at both the management and technical levels.
Social Ability An IA has to interact with other agents (also humans). This feature is the core of the intelligent agent theory; through the interaction an agent can understand the events and adapt its characteristics to the dynamic situations surrounding it. An IA needs to communicate with the internal and external environments to ensure several main activities (e.g., find out the state of
other agents, sub-tasks synchronize, tasks scheduling, acknowledgement). Furthermore, it has to be considered that some agents (e.g., mediator agents, contractor agents, negotiator agents) have particular assignments that, more than others, need to interact with the surrounding entities. In industrial systems this feature takes a key value in business plans and technical activities. The last property introduces a main matter in the industrial systems, i.e., the languages through which the agents exploit their own social ability. In general, the most important challenges in Agent Oriented Software Engineering (AOSE) is to use or create a suitable Agent Oriented Programming Language (AOPL) to allow the agents an effective and efficient interaction. There are several advanced tools that aid the developers in designing complex agent based software systems. Moreover, there are several powerful programming languages designed to facilitate the programming of agent entities. The chosen of the correct tool and language to achieve a prefixed assignment is a critical matter. Independently from the agent implementation there are some issues (e.g., debugging, validation, verification) that depend on the specific context in which the agents have to operate. In AOPLs the focus is on how to describe the behavior of an agent in terms of constructs [11] (e.g., plans, communication strategies, interaction patterns, goals, messages). This agent description is aimed to specify the “reasoning state” of the agent during its activities. A typical example is AGENT-0 [12] a simple and powerful generic agent interpreter. This AOPL tends to program agents in terms of basic behavior, in this way the properties of the agents can be developed by definite notions, such as: Reactions, communication ways, interactions, rules, and so on. Another interesting AOPL is dMARS (Distributed Multi-Agent Reasoning System) [13], which provides both a sophisticated monitor and manageable macros to develop interaction activity between agents. Moreover, this AOPL provides several functionalities to manage critical implementation steps (e.g., configurations of internal states, perception of events). The dMARS can be considered a reference technology to direct complex industrial systems since it has a wide range of high solutions that support the whole industrial processes. A last interesting AOPL considered to develop agents in industrial systems is JACK [14]. It is a versatile framework designed to create, by models, the concepts that drive the intelligent agents. This language has a wide use in industrial contexts since it provides powerful tools for the run-time support. Moreover,
JACK has predefined structured patterns to manage critical situations (e.g., concurrency, reaction to events, failure). In the last years, there have been many efforts to support the IA communication in different fields, including the Industrial systems, the result has been a growing development of multipurpose languages (e.g., 2APL, 3APL) [15]. An agent has several others features that can be considered implicitly included in the previous features (e.g., learn capability, reasoning, auto-improvement). In complex industrial systems there are several kinds of agents according to different processes or tasks. For example, the communication-agents are specialized in communication-activities between sets of agents. They deal with strategies and protocols for the information or data exchange. Another example is the learning-agents that learn about the functionality and potentiality of the surrounding environment and transmit this knowledge to the other agents. Several others dedicated agents are developed according to specific requirements. Beyond the potentiality of a single IA the algorithms that drive the industrial processes are multi-agent based. MASs can be considered as composed by heterogeneous agents, which are aimed to achieve common objectives. Likewise to the intelligent agent definition, also in this case there are several similar definitions of MAS [16]. In our context a MAS can be described as a suitable and reasoning assemble of agents that, according to their features, achieve common objectives through connections and teamwork intelligent mechanisms. Also the MASs have the same features observed for the intelligent agents: the main difference is the coordination and cooperation processes that drive the agents. Moreover, a MAS has interaction properties that define the collective reasoning of the whole system. Usually, each MAS has a set of rules and protocols that are inherited from each single agent. These last define the kind of collective intelligence characterizing each single MAS. These considerations conclude the paradigm definition of MASs. A complex MAS can be composed by different sets of MASs obtaining a system with high levels of communication processes. In any case, these systems are characterized from more complex coordination, cooperation and teamwork mechanisms. Usually, no centralized control methods are implemented, but specific strict rules and hierarchic behavior strategies are performed to ensure the correct working of the system. The difficulty in the MASs management depends on the number of agents and their complexity. Figure 5 shows a typical MAS scheme where each process is driven by a set of agents that have different dedicated assignments.
Figure 5. Simple MAS scheme.
Each process (X, Y, Z and others) is composed of tasks that are accomplished by several agents. As presented in Figure 5, process X is composed of four industrial activities (T1, T2, T3 and T4, where T = Task) and two service activities (E and I, where E = Executor, I = Interface). All the processes involved in the industrial environment, independently of their role, aim to reach the common target: the objective (artifact/activity). The agents tied to the industrial activities accomplish specific tasks of the industrial process, while the other two activities provide coordination and cooperation mechanisms. I-IA and E-IA work on the whole process: the first provides the result of the industrial process (e.g., the artifact or part of it, data, information) to the environment; the second uses a communication channel to interact with other processes (i.e., with the related I-IAs). Note that only the I-IA and E-IA can act outside their own process. The other four agents (T1-IA, T2-IA, T3-IA and T4-IA) can accomplish only internal assignments and can interact only with agents belonging to the same process.
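The constraint that only the I-IA and the E-IA may act outside their own process can be expressed directly in code. The sketch below mirrors the scheme of Figure 5 with invented names and behaviour; it is not an implementation from the chapter.

```python
class Agent:
    def __init__(self, name, external=False):
        self.name, self.external = name, external

class Process:
    """A process of Figure 5: task agents are internal; only I-IA and E-IA act outside."""
    def __init__(self, name, n_tasks=4):
        self.name = name
        self.tasks = [Agent(f"T{i}-IA") for i in range(1, n_tasks + 1)]
        self.interface = Agent("I-IA", external=True)   # delivers the result to the environment
        self.executor = Agent("E-IA", external=True)    # talks to other processes' I-IAs

    def emit(self, agent, target_process):
        if not agent.external:
            raise PermissionError(f"{agent.name} may only interact inside {self.name}")
        print(f"{self.name}/{agent.name} -> {target_process.name}/I-IA")

x, y = Process("X"), Process("Y")
x.emit(x.executor, y)          # allowed: E-IA uses the communication channel
try:
    x.emit(x.tasks[0], y)      # forbidden: T1-IA only accomplishes internal assignments
except PermissionError as err:
    print(err)
```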
MASS: APPROACHES AND ALGORITHMS The new generation of industrial systems can be conceived of as Distributed Artificial Intelligence (DAI) systems, described as cooperative systems where a set of heterogeneous agents act jointly to solve a given problem [17]. A DAI system belongs to the Distributed Problem Solving (DPS) field, where a problem is solved by using several modules [18] which cooperate in dividing and sharing knowledge about the common issue. All these factors influence the choice of the multi-agent architecture that drives the algorithms used in each agent and the related communication strategies. There are several multi-agent architectures used to perform the activities of a large set of heterogeneous environments; the most effective are inspired by the distributed version of BDI (Belief-Desire-Intention) [19]. This architecture is based on the correspondence of beliefs, desires and intentions with reciprocally identifiable data structures. BDI-based systems have several features (e.g., agent scalability, real-time communication, acknowledgement mechanisms) and can be adapted to any environment. In Figure 6 a general overview of the basic patterns used to implement industrial architectures is given. They are based on the classical configurations used in other fields; the differences come out during the implementation, by considering the features of each agent in accordance with the specific industrial system.
Figure 6. Sample of architectural industrial system.
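Since the most effective architectures mentioned above are inspired by the distributed BDI model [19], a minimal, generic BDI deliberation cycle is sketched below, with beliefs, desires, and intentions held in plain data structures. It is a textbook-style illustration, not the architecture of any specific industrial system.

```python
def bdi_step(beliefs, desires, intentions, percept):
    """One cycle of a toy BDI loop: revise beliefs, generate options, commit, act."""
    beliefs.update(percept)                                        # belief revision
    options = [d for d in desires if d["precondition"](beliefs)]   # generate options
    if options and not intentions:
        intentions.append(max(options, key=lambda d: d["priority"]))  # commit to one desire
    if intentions:
        intention = intentions[0]
        intention["plan"](beliefs)                                 # execute one plan step
        if intention["achieved"](beliefs):
            intentions.pop(0)

beliefs, intentions = {"buffer_level": 10}, []
desires = [{
    "priority": 1,
    "precondition": lambda b: b["buffer_level"] < 5,
    "plan": lambda b: print("order new material"),
    "achieved": lambda b: b["buffer_level"] >= 5,
}]
bdi_step(beliefs, desires, intentions, {"buffer_level": 3})   # triggers the replenishment desire
```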
As a general rule, each task is assigned to a specific Task Master Agent (TMA) which, according to the specific architecture, can handle different assignments (e.g., safety and security, execution, coordination, cooperation). The TMA can work alone or be supported by several sub-agents, depending on the nature and complexity of the task. These sub-agents deal only with a particular, homogeneous step of the task. Moreover, they are usually execution-oriented (i.e., executor agents) or information-oriented (i.e., acknowledger agents). Commonly, the sub-agents can have relationships only with sub-agents of the same task or with the related TMA. A particular kind of TMA, the Gateway Task Master Agent (GTMA), has to interact with all the TMAs belonging to the same process. Finally, the Process Master Agent (PMA) is used to allow the interaction of different agents. Each GTMA can interact only with the related PMA, and each PMA can interact with any other PMA. The communication activities are usually performed by well-known protocols and algorithms from the networking field. Indeed, communication in MASs tends to be implemented according to the FIPA specifications [20]; FIPA is the IEEE Computer Society organization that promotes agent-based technology and the interoperability of its standards with other technologies. MAS communication activity can be analyzed by graph theory [21], where each node is an intelligent agent and the edges between nodes are the communication channels. This makes it possible to adopt well-known algorithms to manage environments composed of multi-agent systems. Current MAS communication processes are inspired by the basic algorithms on network architectures [22]: routing, broadcasting, and semi-group computation. The routing algorithms concern the processes used to visit a graph to reach a particular objective. In the literature, different algorithms exist to accomplish this task [23]. For example, the Dijkstra and Bellman-Ford algorithms search for a shortest path from a single source node to any other node in the graph. A variant of these algorithms is given by Floyd-Warshall and Johnson, which provide strategies to search for shortest paths from multiple source nodes. Also the Prim and Kruskal algorithms (minimum spanning tree) and the Ford-Fulkerson and Karp algorithms (maximum flow) are commonly used for routing issues. All these algorithms give the agents different communication strategies which are tied to several factors, such as the type and aim of the interaction, the kind of activity, the kind of agents involved, and so on. The broadcasting algorithms provide a direct way to communicate with a node in a graph. These algorithms are based on a one-to-all philosophy, where
a single node (i.e., agent) has to send (i.e., interact) a message to all the nodes of the graph. The weighs in the graph can drive the communication process according to the specific objective. They are usually represented by vectors that include a wide set of information (e.g., priority access value, identification value (ID), agent state value). In industrial contexts these algorithms are used to support control activities and to provide commands and scheduling steps to technical processes. The semi-group computation algorithms are based on the binary communication of the intelligent agents composing a definite and closed set of agents. In general, the binary communication is used to reach each node in the graph through a dynamic bridge that allows two agents to communicate according to some convenience rule (e.g., distance between the agents, cost of the connection, priority level). These algorithms are usually applied, in industrial environments, where a strict intelligent agent structure is required. The particular structure of a semi-group of agent is suitable to implement systems able to react to non-deterministic events (e.g. external operations, unexpected broken events, sabotages). In addition to the three mentioned basic algorithms (and their evolutions) on network architectures, there are several experience-based approaches for everyday use which provide ad-hoc solutions allowing a qualitative improvement of the management standards. In this way, the human experience can be utilized to optimize the agent implementation improving the industrial process. Table 1 presents a brief description of the communication algorithms according to the different main industrial environments. The physical connection of a MAS is a crucial point of the current and next generation Industrial environments, in fact the way in which sets of agents are connected each other can drive the chooses about the adopted interaction communication technologies. For example, in large-scale management processes centralized and hierarchical architectures are preferred. On the contrary, in advanced manufacturing processes are used flexible, scalable and modular architectures. Figure 7 shows that the architectures for industrial systems are implemented with four approaches: Star, ring, chain and network.
Table 1. Communication algorithms.
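As a concrete instance of the routing algorithms summarized in Table 1, the sketch below runs Dijkstra's shortest-path algorithm over a small agent-communication graph; the agent names and link costs are assumed values used only for demonstration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest communication cost from `source` agent to every other agent."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist[node]:
            continue                      # stale queue entry
        for neighbour, weight in graph[node].items():
            nd = d + weight
            if nd < dist[neighbour]:
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# assumed communication graph: nodes are agents, weights are link costs
agents = {
    "PMA":    {"GTMA-1": 1.0, "GTMA-2": 4.0},
    "GTMA-1": {"PMA": 1.0, "TMA-A": 2.0},
    "GTMA-2": {"PMA": 4.0, "TMA-B": 1.0},
    "TMA-A":  {"GTMA-1": 2.0},
    "TMA-B":  {"GTMA-2": 1.0},
}
print(dijkstra(agents, "PMA"))
```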
Figure 7. (a) Star; (b) ring; (c) chain; (d) network structures.
The star-structure is used in processes, tasks and subtasks where a centralized mechanism is needed. The activities are managed by the central agent that can directly interact with each node in the graph. A typical
example of this architecture regards the interaction between the interface intelligent agent (i.e., I-IA, the central node) and the other agents belonging to the same task. A different approach is provided by the ring-structure where a decentralized interaction mechanism is given. The responsibilities and the activities are subdivided between the agents involved in the graph. A typical example is given in the processes managed from a large amount of heterogeneous agents. The chain-structure provides a hierarchical interaction structure. This approach is commonly used to allow the communication between different levels. This interaction is the basic communication approach used in the industrial systems where every level has to solve different assignments. Each node can interact only with its own father or sons. The network-structure provides an interaction mechanism where each node can directly interact with any other. In this architecture the nodes have similar assignments. It is usually used to allow the agent of a same level to interact each other. In our context, an industrial process can be classified within one of the following classes: Planning and Strategies of Management Processes (PSMP), Driving and Control of Manufacturing Processes (DCMP), Advanced Analysis of Critical Processes (AACP). The PSMP term highlights the high level activities that drive the whole business processes. These activities cover each level of the industrial architecture and include a wide range of assignments (e.g., economical strategies and planning, coordination of internal and external activities, definition of policies and rules). A classical application of the MASs regards the Supply Chain Management (SCM) [24]. A supply chain is a network that deals with several activities (e.g., materials, services, logistic planning) to reach the distribution of a final artifact. The PSMP are usually directed by statistical and computational algorithms. Each agent in the system controls a specific algorithm. The aim of the agent is to elaborate the information about the environments to obtain numerical values and, then, computes the values through statistical models to obtain a numerical transposition of the real world involved in the coordination activities. An interesting kind of algorithms regards the deduction/cognition process. Usually, these algorithms are performed by a mathematical-agent that knows linear programming and polynomial approximation techniques, useful to derive deductions. Other classes of algorithms are available according to the specific process.
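The "mathematical agent" mentioned above derives deductions with techniques such as linear programming and polynomial approximation. As a hedged illustration of the polynomial-approximation side, the sketch below fits a first-order polynomial to assumed production figures and extrapolates a trend; the data and the interpretation are invented for demonstration.

```python
import numpy as np

# assumed weekly throughput figures collected by the coordination agents
weeks = np.arange(1, 9)
throughput = np.array([102, 108, 115, 118, 126, 131, 140, 144], dtype=float)

coeffs = np.polyfit(weeks, throughput, deg=1)     # least-squares polynomial approximation
trend = np.poly1d(coeffs)

print(f"fitted trend: {coeffs[0]:.2f} units/week, offset {coeffs[1]:.1f}")
print(f"deduced throughput for week 10: {trend(10):.1f}")
```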
Other important algorithms included in this processes class regard those to solve conflicts. The agents that accomplish this kind of algorithms exploit the social ability feature. In particular, fuzzy logic approaches [25] are implemented into the agent to face different issues (e.g., reliability of events, flexibility, mediator activity). Table 2 shows a brief description of the common algorithms used in PSMP class. Table 2. PSMP algorithms.
The DCMP class covers all the activities concerning the manufacturing processes. This class deals with assignments such as collective robot control, industrial tool management, technical safety policies, and so on. A classical application of this class is manufacturing robot control [26,27]. These algorithms concern real-time reactivity in runtime environments; in fact, they must guarantee an immediate and effective interaction with the surrounding context. Within this class it is possible to distinguish two sub-classes of algorithms. The first concerns the algorithms inside the industrial tools (e.g., at the firmware or machine level); the second concerns the algorithms inside the technological structures of the manufacturing environments (e.g., workstations, data processing centres). The algorithms belonging to the first sub-class are designed to be independent and lightweight. They do not have a high computational level, but they include parallel mechanisms to avoid and overcome bad events (e.g., failures, accidents). The most important agent is the interface agent that controls and manages the interaction between the tool and the external world. In the second case the implemented algorithms are statistics-based. In particular, a large number of control algorithms, and the related agents, have knowledge about discrete and continuous random variables. In this way, for example, predictions can be made about many targets (e.g., the life of a component, the state of an entity, the safety state of a tool). An interesting aspect related
to the second sub-class of algorithms concerns the simulation environment for manufacturing activities. These algorithms are used in modeling and simulation environments for improving or testing the different levels involved in the manufacturing activities. The algorithms used in this context belong to the model understanding area (e.g., probabilistic models, prediction models). They are used to check the planning and scheduling strategies for a specific operative task. Moreover, they allow the user to run simulations and predictions and to obtain various information about the events that occur during the planned strategy [28]. Also in this case, Table 3 gives a brief description of the main DCMP algorithms. Table 3. DCMP algorithms.
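For the statistics-based DCMP sub-class, a typical prediction target is the residual life of a component. The sketch below assumes an exponential failure model purely for illustration; the distribution choice and the numbers are not taken from the chapter.

```python
import math

def survival_probability(mean_life_hours, horizon_hours):
    """P(component still working after `horizon_hours`), assuming an exponential
    failure model with the given mean life (a common but purely illustrative choice)."""
    rate = 1.0 / mean_life_hours
    return math.exp(-rate * horizon_hours)

# example: a tool whose mean life is 2000 h, evaluated over the next maintenance windows
for horizon in (100, 500, 1000):
    print(f"P(no failure within {horizon} h) = {survival_probability(2000, horizon):.3f}")
```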
The last class of processes, AACP, refers to critical tasks, sub-tasks and processes. This class considers every level of the industrial system, from technical up to economic issues. An interesting example is provided by the monitoring and diagnosis systems applied to sophisticated manufacturing environments (e.g., production of chips, production of complex artifacts, production of components, model testing). In this case the system provides several run-time-dependent functionalities that perform activities such as monitoring and diagnosis. These functionalities are accomplished by advanced algorithms (e.g., sensor understanding, pattern recognition, machine learning). If an agent has doubts about the correct execution of a task, it may ask to interrupt every activity of the process. For example, particular agents can be created to recognize specific features in images representing welds on an artifact under quality control. The agents work on the images and give information about the quality of the welding to the industrial manufacturing system. Table 4 presents the main algorithms belonging to the AACP class.
Table 4. AACP algorithms.
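An AACP-style monitoring agent can be approximated by a simple control-limit check on a quality score (for instance, a score produced by a weld-image classifier). The sketch below stands in for the far richer pattern-recognition algorithms of Table 4; the baseline scores and the 3-sigma rule are assumptions made for illustration.

```python
from statistics import mean, stdev

class MonitoringAgent:
    """Flags a task for interruption when the quality score drifts outside 3-sigma limits."""
    def __init__(self, baseline_scores, sigmas=3.0):
        self.center = mean(baseline_scores)
        self.limit = sigmas * stdev(baseline_scores)

    def check(self, score):
        if abs(score - self.center) > self.limit:
            return "request interruption of the process for diagnosis"
        return "quality within limits"

agent = MonitoringAgent(baseline_scores=[0.91, 0.93, 0.92, 0.94, 0.92])   # assumed weld scores
print(agent.check(0.93))
print(agent.check(0.62))   # clearly anomalous weld-image score
```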
HYBRID SYSTEMS: A CASE STUDY An interesting opportunity regards the use of more than one computational intelligent (CI) approach to solve issues in industrial systems. For example, in [29] is described a DNA-MAS genetic programming system, which is designed for application-generic multi-agent simulation and generation using an advanced symbolic language built specifically for the use in a random mutation and crossover environment. Indeed, in industrial systems, the other techniques of artificial intelligence (e.g., genetic algorithms, neural networks) are used to improve specific features of an agent. For example, in a manufacturing environment, where the visual images of the artifact are important for the quality control, an agent could need to exploit the potentiality of a genetic algorithm (which approximates a set of solution to the optimal solution) to try a better identification of the meaningful features of the same images. There are several examples, not only in industrial systems, where hybrid architectures can result suitable to face critical problems. The purpose of this brief section is to highlight that the artificial intelligence techniques have a common root (i.e., the intelligent dynamic reasoning) that can be applied using different techniques to adopt a different representation of the human reasoning for solving activity. An interesting case study, shown in [30], regards a hierarchically organized multi-agent system for production control of semiconductor wafer manufacturing facilities (wafer fabrication). The production control of wafer fabrication is challenging from a complexity and coordination point of view. The goal of semiconductor manufacturing is the production of integrated circuits. This environment is characterized by several industrial tools that are managed, in different way, from a large amount of users. Moreover, the semiconductor manufacturing domain is characterized by stochastic events
like machine breakdowns and changes of the customer-related due dates of the lots. A flexible representation of the process conditions of the semiconductor manufacturing domain was defined in order to develop the MAS architecture. Modeling capabilities for agent hierarchies were then provided; moreover, the system was given the capability to emulate a wafer fabrication facility, represented by a discrete-event simulation model, for performance assessment of the agent-based production control system. Finally, integration capabilities for legacy software were provided so that more advanced heuristics could be used for staff agents. Through this implemented strategy an efficient complex industrial system has been developed.
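Returning to the hybrid idea introduced at the beginning of this section, an agent may embed a genetic algorithm, for example to tune the threshold used to extract meaningful image features. The deliberately tiny GA below (random mutation plus selection over an assumed fitness function) only shows the mechanism and is not the DNA-MAS system of [29] or the FABMAS controller of [30].

```python
import random

def fitness(threshold):
    """Assumed, stand-in quality measure of a feature-extraction threshold (peak at 0.37)."""
    return -(threshold - 0.37) ** 2

def tiny_ga(pop_size=20, generations=40, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                       # selection of the fittest half
        children = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0.0, mutation)))
                    for _ in range(pop_size - len(parents))]        # random mutation of parents
        population = parents + children
    return max(population, key=fitness)

random.seed(0)
print(f"best threshold found: {tiny_ga():.3f}")
```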
CONCLUSION The evolution of current industrial systems and the challenges of the next-generation ones are encouraging the development of new information management strategies to direct and manage the automated systems involved in the manufacturing processes. The technologies tied to the CI approaches, such as IAs and MASs, seem to provide a valid solution to these kinds of issues. This paper has presented an overview of how efficient, autonomous and intelligent entities can support a new point of view on every aspect of a complex industrial environment.
REFERENCES
1. R. Lee, "Computer and Information Science, Studies in Computational Intelligence," Springer-Verlag, Berlin and Heidelberg, 2012. doi:10.1007/978-3-642-30454-5
2. L. Benyoucef and B. Grabot, "Artificial Intelligence Techniques for Networked Manufacturing Enterprises Management," Springer-Verlag, London, 2010. doi:10.1007/978-1-84996-119-6
3. D. Laha and P. Mandal, "Handbook of Computational Intelligence in Manufacturing and Production Management," IGI Publishing, Hershey, 2007. doi:10.4018/978-1-59904-582-5
4. K. Hermann, "Distributed Manufacturing: Paradigm, Concepts, Solutions and Examples," Springer-Verlag, London, 2010. doi:10.1007/978-1-84882-707-3
5. W. Xiang and H. P. Lee, "Ant Colony Intelligence in Multi-Agent Dynamic Manufacturing Scheduling," Journal of Engineering Applications of Artificial Intelligence, Vol. 21, No. 1, 2008, pp. 73-85. doi:10.1016/j.engappai.2007.03.008
6. D. S. Kim, C. S. Kim and K. W. Rim, "Modeling and Design of Intelligent Agent System," International Journal of Control, Automation, and Systems, Vol. 1, No. 2, 2003, pp. 257-261.
7. V. Graudina and J. Grundspenkis, "Technologies and Multi-Agent System Architectures for Transportation and Logistics Support: An Overview," Proceedings of the International Conference on Computer Systems and Technologies (CompSysTech'05), Varna, 16-17 June 2005, pp. IIIA.6-1-IIIA.6-6.
8. S. J. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach," 3rd Edition, Prentice-Hall Inc., New Jersey, 2010.
9. C. Z. Janikow, "Fuzzy Decision Trees: Issues and Methods," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 28, No. 14, 1998, pp. 1-14. doi:10.1109/3477.658573
10. P. Stone and M. Veloso, "Using Decision Tree Confidence Factors for Multi-Agent Control," Proceedings of the Second International Conference on Autonomous Agents (AGENTS'98), Minneapolis, 10-13 May 1998, pp. 86-91. doi:10.1145/280765.280780
11. D. Mitrović, M. Ivanović and M. Vidaković, "Introducing ALAS: A Novel Agent-Oriented Programming Language," Proceedings of the International Conference on Numerical Analysis and Applied Mathematics, 2011, pp. 861-864. doi:10.1063/1.3636869
12. Y. Shoham, "Agent-Oriented Programming," Journal of Artificial Intelligence, Vol. 60, No. 1, 1993, pp. 51-92. doi:10.1016/0004-3702(93)90034-9
13. M. D'Inverno, M. Luck, M. Georgeff, D. Kinny and M. Wooldridge, "The dMARS Architecture: A Specification of the Distributed Multi-Agent Reasoning System," Journal of Autonomous Agents and Multi-Agent Systems, Vol. 9, No. 1-2, 2004, pp. 5-53. doi:10.1023/B:AGNT.0000019688.11109.19
14. M. Winikoff, "JACK Intelligent Agents: An Industrial Strength Platform, Multi-Agent Programming: Languages, Platforms and Applications," Proceedings of the International Conference in Multiagent Systems, Artificial Societies, and Simulated Organizations, Vol. 15, No. 2, 2005, pp. 175-193. doi:10.1007/0-387-26350-0_7
15. K. V. Hindriks, F. S. De Boer, W. Van Der Hoek and J.-J. C. Meyer, "Agent Programming in 3APL," Proceedings of Autonomous Agents and Multi-Agent Systems, Vol. 2, No. 4, 1999, pp. 357-401. doi:10.1023/A:1010084620690
16. H. Sevay and C. Tsatsoulis, "Multiagent Reactive Plan Application Learning in Dynamic Environments," Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2 (AAMAS'02), Bologna, 15-19 July 2002, pp. 839-840. doi:10.1145/544862.544937
17. N. M. Avouris and L. Gasser, "Distributed Artificial Intelligence: Theory and Praxis," Kluwer Academic Publishers, Norwell, 1991.
18. S. Kovalchuk, A. Larchenko and A. Boukhanovsky, "Knowledge-Based Resource Management for Distributed Problem Solving," Proceedings of the International Conference on Knowledge Engineering and Management, Vol. 123, 2012, pp. 121-128. doi:10.1007/978-3-642-25661-5_16
19. N. Ronald, "Modelling Pedestrian Behaviour Using the BDI Architecture," Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Compiègne, 19-22 September 2005, pp. 161-164. doi:10.1109/IAT.2005.104
20. M.-P. Huget, "The Foundation for Intelligent Physical Agents," 2012. http://www.fipa.org
21. R. J. Trudeau, "Introduction to Graph Theory," 2nd Edition, Dover Publications, Mineola, New York, 1993.
22. B. Parhami, "Introduction to Parallel Processing: Algorithms and Architecture," Plenum Press, New York, 1999.
23. T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, "Introduction to Algorithms," McGraw-Hill, New York, 2002.
24. T. Mentzer, "Supply Chain Management," Sage Publications Ltd., Thousand Oaks, 2000.
25. M. Mohammadian, "Designing Unsupervised Hierarchical Fuzzy Logic Systems," Machine Learning: Concepts, Methodologies, Tools and Applications, IGI Global, Hershey, 2011, pp. 253-261.
26. H. Colestock, "Industrial Robotics," McGraw-Hill, New York, 2005.
27. D. Patrick and S. Fardo, "Industrial Process Control Systems," 2nd Edition, The Fairmont Press, Lilburn, 2011.
28. S. Kraus, "Strategic Negotiation in Multiagent Environments," Intelligent Robotics and Autonomous Agents, The MIT Press, Cambridge, 2001.
29. S. J. Kollmansberger and S. L. Mabry, "Intelligent Agent Generation with the DNA-MAS Genetic Programming System," Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Banff, 17-19 July 2002, pp. 101-116.
30. L. Monch, S. Marcel and J. Zimmermann, "FABMAS: An Agent-Based System for Production Control of Semiconductor Manufacturing Processes," Proceedings of the 1st International Conference on Industrial Applications of Holonic and Multi-agent Systems (HoloMAS'03), Prague, 1-3 September 2003, pp. 258-267. doi:10.1007/978-3-540-45185-3_24
Chapter 3
Design Method of Intelligent Ropeway Type Line Changing Robot Based on Lifting Force Control and Synovial Film Controller

Jiazhen Duan1, Ruxin Shi2, Hongtao Liu1, and Hailong Rong3

1 State Grid Changzhou Power Supply Company, Transmission and Distribution Engineering Company, Changzhou, Jiangsu 213000, China
2 State Grid Changzhou Power Supply Company, Office, Changzhou, Jiangsu 213000, China
3 School of Mechanical Engineering and Rail Transit, Changzhou University, Changzhou, Jiangsu 213164, China
ABSTRACT

Aiming at the problems of low efficiency, poor reliability, and poor safety of manual construction for the demolition of old lines, a design method of an intelligent ropeway type line changing robot based on lifting force control and a synovial film (sliding mode) controller is proposed. First, the mechanical model of robot load and line sag is established, and the sag of the overhead line where the robot is located is used to calculate the jacking force that the jacking device needs
Citation: Jiazhen Duan, Ruxin Shi, Hongtao Liu, Hailong Rong, “Design Method of Intelligent Ropeway Type Line Changing Robot Based on Lifting Force Control and Synovial Film Controller”, Journal of Robotics, vol. 2022, Article ID 3640851, 11 pages, 2022. https://doi.org/10.1155/2022/3640851 Copyright: © 2022 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
to provide to the robot. Then, by introducing the radial basis function (RBF) neural network adaptive algorithm into the synovial controller, an adaptive sliding mode position control algorithm based on the RBF neural network is designed to achieve high-precision motion control of the robot in complex operating environments. Finally, based on the compactness, weight, and reliability of the robot, the optimal design is carried out from the four aspects of topology, size, shape, and morphology, the design scheme of the robot for wire removal is proposed, and the robot is produced. The developed robot and three other robots are compared and analyzed under the same conditions through simulation experiments. The results show that the maximum operating time, maximum climbing angle, and maximum traveling speed of the robot developed in this study are all optimal, at 45 min, 10°, and 1 m/s respectively, and its performance is better than that of the three comparison robots.
INTRODUCTION

With the continuous growth of the national electricity demand, more and more transmission lines are put into use. At the same time, there are a large number of old lines. Old lines may affect the safe operation of the power system, personal safety, and property safety [1–3]. At present, the demolition of the old lines of the power grid is basically completed by manual work, with many problems such as long construction time, high construction cost, and wide influence range [4, 5]. In view of the above problems, using robots to replace humans in construction work such as overhead line demolition and replacement has broad application scenarios and development space and is currently a research hotspot in the industry [6–8].
RELATED WORK

Using robots instead of manual labor to erect scaffolding and to carry out the wire-loosening and guide-rope withdrawal procedures after ascending the tower can improve the efficiency of demolition construction and reduce its difficulty and cost. It plays a significant role in improving the overall technical level of power line construction [9, 10]. dos Santos [11] developed a rope-climbing robot that can move on distribution lines, together with the corresponding motion planning to avoid collisions with insulators and other devices. A geometric motion planning control method is designed using the quintic polynomial interpolation method, so that the
robot's articulated suspension can retract when approaching obstacles and expand after crossing them. However, this method does not analyze the accurate control of the robot's motion speed. In view of the difficulty of calculating the sag of power lines, Zengin et al. [12] proposed a new method to accurately measure the sag of transmission lines based on a power line inspection robot, using the sensors carried by the robot to collect data and send it remotely. However, this method does not study the motion control of the robot itself. In view of the poor battery life of live working robots, Jiang et al. [13] constructed a method to optimize the motion energy consumption of the robot arm. It adopts a genetic algorithm and selects appropriate algorithm parameters to solve the energy-optimal motion planning of the robot, which improves the operation efficiency of the robot. However, this method takes only the lowest energy consumption as the objective function and therefore has limitations. Nguyen et al. [14] developed and simplified a robot for cleaning solar panels. By establishing the motion control equation of the robot during driving and combining it with a linear quadratic regulator, a scheme is proposed to ensure the stable movement of the system and track the desired trajectory. However, this method focuses only on the control of the robot's motion path and cannot guarantee the accuracy of speed control. Aiming at the difficulty and high risk of power grid fault detection, Zhao et al. [15] proposed a patrol robot that can detect equipment faults and identify infrared images of faulty equipment based on infrared imaging technology and support vector machine technology. Although the detection efficiency is enhanced, the accuracy of fault identification by the robot is not significantly improved. Xie et al. [16] proposed an integrated 3D-printed tube-climbing robot composed of a new ordered soft bending mechanism. The finite element method is used to predict the maximum bending angle of the module, and the output torque and recovery torque are obtained by building a torque test bench; on this basis, the model of the whole robot is established. However, the calculation process of this method is complicated, and the cost is too high to apply in practical engineering. Song et al. [17] proposed an automatic inspection mechanical diagnostic robot based on a fuzzy search method for acoustic signals. The intelligent control of the inspection path of the mechanical diagnostic robot is realized through rough sets, a fuzzy neural network (FNN), and self-positioning azimuth correction, and a navigation and search method is obtained by using the extracted fault sound signal for fuzzy reasoning. However, this method does not analyze the control complexity and motion accuracy of the robot itself.
Based on the above analysis, in view of the low efficiency and high cost of current old-line dismantling work, a design method of an intelligent ropeway type line changing robot based on lifting force control and a synovial film controller is proposed. The basic idea is as follows: the mechanical model of robot load and line sag is established to analyze the robot's forces and the calculation method of the lifting force it requires, and by introducing the RBF neural network into the synovial controller, an adaptive algorithm that can accurately control the robot motion is proposed. Compared with traditional methods, the innovations of the proposed method are as follows:

• A new method is proposed to calculate the lifting force of the robot. The lifting force required by the robot is calculated based on the vertical radian of the position of the overhead line where the robot is located and the attitude sensor.
• The RBF neural network is introduced into the synovial controller, which greatly improves the precision of robot motion control in complex operating environments.
• On the basis of not increasing the weight of the robot, the reliability of the robot is improved by optimizing and adjusting the shape, position, and quantity of different structures.
THE PROPOSED METHOD

Automatic Control of Lifting Force Based on Vertical Radian of Overhead Line

When the robot moves on the wire, it needs a certain climbing ability to ensure that the wheels do not slip at any time as far as possible, which means that the wheel surface and the overhead line remain relatively static. This requires that the friction force provided by the driving wheel and the fixed wheel is enough to overcome the influence of the gravity torque and the pulling torque of the rope, so that the robot can remain relatively static [18]. Since the load of the robot is closely related to the sag of the overhead line, the mechanical model of the load and sag of the robot should be established first. The statics analysis of the robot is the same in the ascending and descending stages; the following takes robot climbing as an example. The static model of the robot when it goes uphill is shown in Figure 1.
Figure 1. Statics model of the robot going uphill.
In Figure 1, M represents the mass of the robot, g represents the acceleration of gravity, v represents the speed of the robot, d0 represents the distance between the front and rear wheels of the robot, Nf and Nb, respectively, represent the positive pressure of the front and rear wheels of the robot on the line, Ffs and Fbs, respectively, represent the static friction forces of the front and rear wheels of the robot, Ms1 and Ms2, respectively, represent the braking torque applied by the front and rear wheels of the robot, δfs and δbs, respectively, represent the static friction coefficients of the front and rear wheels of the robot, and Mfs and Mbs are, respectively, the rolling friction moments suffered by the front and rear wheels of the robot, whose values are very small and negligible. It can be regarded as Mfs = Mbs = 0. Combined with the studies [19, 20] on the dynamic modeling of the shock contact phenomenon in a closed-loop robot chain and the kinetic models based on kinematic control of ellipsoids and cubic nanoparticles, according to Figure 1, the static analysis equation of the robot when it is uphill can be obtained as follows.
(1) In (1), rw is the angular velocity of the robot wheel.
It can be seen from (1) that in the process of wire climbing, when the driving torque is constant, the running state of the robot is mainly affected by friction. The dip angle of the transmission wire and the support force provided by the lifting device to the driving wheel are two main factors affecting the friction. Therefore, the lifting force required by the lifting device can be calculated by detecting the vertical radian of the position of the overhead wire where the robot is located. For the robot that adopts the lifting grasping principle with fixed stroke, the friction between the driving wheel and overhead line is constant and cannot be changed according to the change of load, so the adaptability is poor [21, 22]. When the load of the robot is large, because the friction force is not enough to keep the driving wheel and the wire relatively still, it will affect the traveling speed or even cannot climb the slope. In addition, when downhill or under a small load, excessive friction will lead to waste of power and can affect the duration of continuous operation of the robot. Because there are certain differences in the sag of different overhead lines, it is necessary to detect the inclination of the wire before calculating the power required for the robot to move and control the pressure output of the lifting mechanism to adapt to the current load condition. In order to solve the above problems, the attitude sensor is used to sense the own attitude of robot so as to obtain the inclination of the wire indirectly [23, 24]. The outputs of these sensors are calibrated by the lowpower processor in the sensor, and then, these outputs are fused by the complementary filtering algorithm or extended Kalman algorithm, and the attitude quaternion characterizing the inclination of the transmission wire is obtained. Finally, the attitude quaternion is transformed into Euler angle (pitch angle, roll angle, and azimuth angle) to lay a foundation for the
subsequent control of lifting force. The torque to be overcome by the driving wheel is shown in Figure 2.
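To make the attitude-to-inclination step above concrete, the following is a minimal sketch, assuming the fused attitude is available as a unit quaternion (w, x, y, z); it converts the quaternion to roll and pitch angles, with the pitch angle taken as the local inclination of the wire. The function name and the ZYX Euler convention are illustrative assumptions, not part of the original design.

```python
import math

def quaternion_to_roll_pitch(w, x, y, z):
    """Convert a unit attitude quaternion to roll and pitch (radians),
    using the common ZYX (yaw-pitch-roll) Euler convention."""
    # roll (rotation about the x-axis)
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # pitch (rotation about the y-axis), clamped to avoid domain errors
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    return roll, pitch

# Example: a quaternion describing a 10-degree pitch about the y-axis
half = math.radians(10.0) / 2.0
w, x, y, z = math.cos(half), 0.0, math.sin(half), 0.0
roll, pitch = quaternion_to_roll_pitch(w, x, y, z)
print(math.degrees(pitch))  # ~10.0, taken here as the wire inclination
```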
Figure 2. The torque to be overcome by the driving wheel.
It can be seen from Figure 2 that the lifting forces required from the two driving wheels are different. After measuring the inclination of the wire, the friction force needed by the driving wheel and the fixed wheel can be calculated by combining the weight of the robot with the real-time drag force of the tow rope, and the lifting force needed by the robot can then be calculated on this basis. The specific design scheme of the lifting force control system is shown in Figure 3 and described below.
Figure 3. The design scheme of the lifting force control system.
The entire lifting force control system adopts a single closed-loop control method. The control system takes the torque calculated at the contact point between the driving wheel and the power line as the feedback and the torque to be exerted by the driving wheel as the reference; the difference between the feedback and the reference is the input of the controller. Since the output of the controller directly controls the brushless DC motor, the electric cylinder can indirectly adjust the force applied by the driving wheel. The two driving wheels share a set of torque calculation modules, but their control loops are separate. The design process of the controller consists of four steps (a minimal sketch of the resulting loop follows the list):

• Build the mathematical model of each part of the brushless DC motor and electric cylinder.
• Use automatic control theory to build the mathematical model of the controller.
• Use the Lyapunov method to analyze the stability of the controller.
• Use other related control theories to analyze the robustness and adaptability of the controller.
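The following is a minimal sketch of the single closed loop described above, assuming a simple PI law and placeholder functions for the torque feedback and the motor command; the function names (measure_contact_torque, reference_torque, send_motor_command) and the gains are illustrative assumptions, not the authors' implementation.

```python
import time

KP, KI = 2.0, 0.5          # illustrative PI gains
DT = 0.01                  # control period in seconds

def measure_contact_torque():
    """Placeholder: torque computed at the wheel/line contact point (feedback)."""
    return 0.0

def reference_torque():
    """Placeholder: torque the driving wheel should exert (reference)."""
    return 1.0

def send_motor_command(u):
    """Placeholder: command to the brushless DC motor driving the electric cylinder."""
    pass

integral = 0.0
for _ in range(100):                # one such loop runs per driving wheel
    error = reference_torque() - measure_contact_torque()
    integral += error * DT
    u = KP * error + KI * integral  # controller output -> motor -> electric cylinder
    send_motor_command(u)
    time.sleep(DT)
```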
Adaptive Speed Control Based on Neural Network Synovial Controller

When the robot moves along the overhead line, the vertical radian of the overhead line and the length of the traction rope change all the time. In addition, the operating environment is very complicated because of lines made slippery by rain or swing caused by wind. The complex environment will cause the robot load to change constantly. A robot controller that realizes speed control through a PID controller cannot achieve high-precision speed control under a time-varying load because of its low degree of freedom: when the load of the robot is small, its traveling speed will be fast, which will affect the fastening installation of the hook; when the load of the robot is large, its traveling speed will be very slow, which will affect the construction efficiency [25, 26]. Therefore, it is necessary to design a more accurate controller to control the walking speed of the robot. In this study, the speed of the brushless DC motor is controlled by using a synovial (sliding mode) controller, so as to achieve accurate control of the robot's walking speed. The synovial control method has good robustness and complete adaptability to system parameter variations and external disturbances. In practical applications, because of the back-and-forth switching of the control action, the inertia and delay of the system, measurement errors, and other influencing factors, the variable structure control will exhibit high-frequency chattering in the sliding mode, which seriously affects the control performance of the system. It is difficult to solve the above problems only by improving the synovial control method itself: a static error will remain, and the implementation of high-order sliding mode control algorithms is very complicated, so they are difficult to apply in practice. This study therefore applies the idea of adaptive control. Combining the radial basis function (RBF) neural network adaptive algorithm with sliding mode variable structure control, it designs the corresponding RBF neural network adaptive sliding mode position control algorithm and finally realizes high-precision motion control of the robot in the above complex operation environment. The RBF neural network is an advanced intelligent control algorithm with strong self-learning, self-adaptation, and self-organization functions, and it has good application prospects in dealing with the nonlinear and uncertain problems of control systems. In addition, the RBF neural network has good approximation ability, a simple network structure, and fast learning ability.
The main body of the synovial controller based on the RBF neural network is the synovial controller. By introducing the RBF neural network into the synovial controller, the synovial surface switching function of the controller is adjusted, and the external load disturbance component is added to the switching function. Therefore, the synovial controller generated by modifying the switching function with the neural network becomes an adaptive synovial controller. Once the external disturbance due to the external environment change, the external disturbance will make a sharp change in the switch function of the synovial surface, so that the adaptive synovial controller can respond to the external disturbance quickly and adjust the input current of the brushless DC motor timely; thus, the speed control accuracy is greatly improved. In order to improve the position control accuracy of the brushless DC motor, the following torque balance equation is given, considering the changes of internal parameters and external loads. (2) In (2), a = B/J, b = KT/J, z = TL/J. Δa, Δb, and Δz, respectively, represent the disturbance variation caused by the disturbance of the internal parameters of the system and the disturbance of the external load. In order to make the corresponding angle θ of the position controller track the set angle θd faster, the position tracking error of the controller can be expressed as (3) At this time, the following equation is established.
(4) It can be seen from (4) that θ ⟶ θd when e ⟶ 0, so the position controller meets the design requirements. At this time, the sliding mode surface switching function is set as (5)
In (5), x = [x1, x2]T represents the input of the neural network, and c is a constant. The structure of the neural network adaptive sliding mode controller can be divided into three parts: the sliding mode variable structure controller, the RBF network, and the adaptive law. The input of the neural network continuously changes the size of the weights through learning, so that the output function approximates the ideal nonlinear function f(x). The output of the RBF network and the ideal nonlinear function f(x) are shown in the following equations, respectively. (6) (7) In (6), hf(x) is the Gaussian basis function vector of the RBF neural network, which is multiplied by the network weight vector. In order to further reduce the chattering of the adaptive sliding mode based on the RBF network, the reaching law is optimized, and the optimized reaching law is shown as (8) In (8), the reaching law can be divided into a power part and an exponential part. When the distance between the moving point of the control system and the sliding mode surface is large, the value of s is large. In this case, the exponential part and the power part work simultaneously and the approaching speed is fast. When s ⟶ 0, the power part tends to zero, and only the exponential part plays a role. The jitter caused by the sign function sgn s will diminish with the decrease of the power part. Therefore, the design of the reaching law not only ensures the convergence speed but also makes the dynamic response of the control system more stable. Finally, the control law of the system can be obtained as (9)
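The display equations (2)-(9) were images in the source and did not survive extraction. For orientation only, the standard forms such an RBF sliding-mode design normally takes, written under the symbol definitions given in the text and offered as an assumption rather than the authors' exact equations, are:

```latex
% Tracking error and sliding surface (standard forms; an assumption, not the source equations)
e = \theta_d - \theta, \qquad
s = c\,x_1 + x_2, \quad x_1 = e,\; x_2 = \dot{e}, \; c > 0
% RBF approximation of the lumped uncertainty f(x)
\hat{f}(x) = \hat{W}^{\mathsf T} h_f(x), \qquad
h_{f,j}(x) = \exp\!\left(-\frac{\lVert x - c_j \rVert^2}{2 b_j^2}\right)
% Combined power/exponential reaching law
\dot{s} = -\varepsilon \lvert s \rvert^{\alpha} \operatorname{sgn}(s) - k s,
\qquad \varepsilon, k > 0,\; 0 < \alpha < 1
```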
Here, the gradient descent method is used to learn the RBF neural network. If the learning rate η is set to a fixed value in the process of weight update, it will cause problems such as low learning efficiency and slow convergence speed. Therefore, the adaptive learning rate is used to adjust the learning rate online, which can speed up the learning rate, while ensuring the stability of the system and the stability of the learning process. The recursive error is
(10) The rules for adjusting the learning rate according to the size of the recursive error are as
(11) In (11), γ1, γ2, and γ3 are the proportional constants.
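The learning-rate rules in (10)-(11) were likewise lost to extraction. The sketch below shows the general idea under stated assumptions: an RBF output layer trained by gradient descent on the error, with the learning rate increased while the error keeps shrinking and decreased when it grows. The thresholds and scaling factors are illustrative, not the γ1, γ2, γ3 of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(-1.0, 1.0, 7).reshape(-1, 1)   # RBF centers (assumed)
width = 0.5
w = np.zeros(len(centers))                           # output-layer weights
eta = 0.05                                           # initial learning rate
prev_err = None

def rbf_features(x):
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))

target = lambda x: np.sin(2.0 * x)                   # stand-in nonlinear function
for k in range(500):
    x = rng.uniform(-1.0, 1.0)
    h = rbf_features(np.array([x]))
    y = w @ h
    err = target(x) - y
    w += eta * err * h                               # gradient-descent weight update
    # adaptive learning rate: speed up while the error shrinks, back off otherwise
    if prev_err is not None:
        eta *= 1.05 if abs(err) < abs(prev_err) else 0.7
        eta = float(np.clip(eta, 1e-4, 0.5))
    prev_err = err
```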
Thus, the specific design scheme of the controller can be obtained as shown in Figure 4.
Figure 4. Design scheme of synovial controller based on the neural network.
In Figure 4, SMC represents the synovial controller, BLDCM represents the brushless DC motor, the RBF network represents the radial basis function neural network, and adaptive law represents the law of adaptive adjustment.
Design of Line Changing Robot

The design of the line changing robot needs to satisfy the following seven technical indicators:

• Compact structure: an uncompact structure affects the flexibility of the robot's movement and reduces work efficiency.
• Weight: a heavier robot increases the time and energy consumed in raising and lowering it on the tower and during line work, and limits the continuous working time.
• Reliability: improving the reliability of robot operation is very important for safety and for reducing maintenance cost in engineering applications.
• Mobility: the mobility of a robot affects its adaptability to the operating environment.
• Load capacity: to ensure its working ability, the robot needs to bear a maximum load of at least 5 kg in addition to carrying its own weight.
• Speed: the robot should raise its maximum moving speed as far as possible under the premise of ensuring stability and reliability, and reduce the influence of external factors such as wind on work efficiency.
• Self-protection: to prevent the robot from falling, it should have certain safety self-protection measures.

Aiming at the above seven indicators, the optimized design is mainly carried out from four aspects: topology optimization, size optimization, shape optimization, and morphology optimization.

Topology optimization: for the critical load path, an aluminum alloy and polypropylene composite lightweight laminate is used. For the noncritical path, a polymer plate is used to minimize the body weight while ensuring the payload. Designing based on the payload transfer path and optimal material distribution can improve the overall structure and reduce the overall design cost.

Dimension optimization: the volume of the robot parts is set as the objective function, and the combination of optimal design parameters is calculated based on dimension parameters of the parts such as plate thickness and pillar section area.
Shape optimization: based on the designed robot topology, the geometry of different parts is optimized to improve their strength.

Morphology optimization: with the weight of the designed robot fixed, the shape, position, and quantity of different concave-convex structures are adjusted and the stiffness and modes of the sheet-metal structural parts are optimized, so as to eliminate potential weak links and improve the reliability of the robot.

The main process of robot design includes software and hardware design. The hardware design covers four aspects:

• Hardware design of the robot body structure
• Hardware design of the robot control system
• Design of the main control module and the peripheral basic hardware circuit
• Hardware circuit design

The specific design content is given in Table 1.

Table 1. The design content of the line changing robot.
No.  Content                        Purpose
1    Simplify                       Reduce structural complexity
2    Redundancy                     Redundancy of key structural parameters and functional realization
3    Derating                       Reduce the failure rate of components
4    Components outline             Control and manage electronic components and mechanical parts
5    Critical and important parts   Utilize existing resources to improve the reliability of key and important parts
6    Environmental protection       Choose appropriate materials or solutions to meet requirements for waterproofing, electromagnetic protection, ambient temperature, and humidity
7    Software reliability           Adopt N-version programming and implement software engineering specifications to improve critical reliability
8    Packaging, shipping, storage   Determine packaging protection measures and meet shipping and storage requirements
9    Ergonomics                     Make the equipment easy to operate and maintain
The hardware design drawing and the line changing robot obtained are shown in Figures 5 and 6, respectively.
Figure 5. Hardware design of line changing robot.
Figure 6. Actual object of line changing robot.
The software design mainly includes the realization of the robot control system, the display of robot running state, and the realization of robot remote
control. The control flowchart of the line changing robot is shown in Figure 7. The workflow of the robot mainly includes the following steps (a schematic sketch of this loop follows the list):

• First, initialize the robot before work, start the subroutine that receives instructions, and connect it with the ground control device.
• During work, the robot starts the synovial controller based on the RBF neural network to realize adaptive speed control.
• The robot starts the subroutine that collects and sends its own state and the subroutine that collects and sends video, and sends the above data to the ground receiving device.
• The robot receives instructions from the ground, judges the instruction type, and executes it, until it stops working after receiving the disconnect instruction.
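A minimal sketch of that workflow as a command-dispatch loop is shown below; all function and command names (connect_ground_station, telemetry_loop, and so on) are hypothetical placeholders, not the robot's actual software interface.

```python
import queue
import threading
import time

commands = queue.Queue()          # instructions received from the ground station

def connect_ground_station():
    """Placeholder: initialize the radio link and the instruction-receiving subroutine."""
    pass

def telemetry_loop(stop):
    """Placeholder: periodically collect and send robot state and video."""
    while not stop.is_set():
        time.sleep(0.5)           # send_state(); send_video_frame()

def execute(cmd):
    """Placeholder: dispatch a ground command (move, stop, adjust speed, ...)."""
    print("executing", cmd)

connect_ground_station()
stop = threading.Event()
threading.Thread(target=telemetry_loop, args=(stop,), daemon=True).start()

# For demonstration only: pretend the ground station queued two instructions.
commands.put("move_forward")
commands.put("disconnect")

while True:
    try:
        cmd = commands.get(timeout=1.0)
    except queue.Empty:
        continue
    if cmd == "disconnect":       # stop working after the disconnect instruction
        stop.set()
        break
    execute(cmd)
```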
Figure 7. Control flowchart of line changing robot.
EXPERIMENT AND ANALYSIS

Practical Application Experiment

Experiments are carried out on a 220 kV overhead transmission line with the designed line changing robot, as shown in Figure 8.
Figure 8. Practical application experiment of line changing robot.
In the process of practical application, a number of performances of the robot are tested. It measures and analyzes the indicators including the operation success rate of the line changing robot, the online moving speed, the speed control accuracy, the climbing angle, the continuous operation time, and the environmental adaptation. The experimental results are given in Table 2.

Table 2. The experimental data of the line changing robot.

Number  Runtime (min)  Success rate (%)  Velocity (m/s)  Climbing angle (°)  Wind (level)  Temperature
1       28             100               0.5             5                   2             −3°C
2       30             100               0.6             6                   2             −5°C
3       32             100               0.8             5                   2             −10°C
4       34             100               1.0             7                   3             30°C
5       36             100               0.7             8                   4             40°C
6       37             100               0.6             8                   5             −7°C
7       40             100               0.8             9                   5             −8°C
8       42             100               1.0             10                  4             20°C
9       44             100               1.0             10                  5             −10°C
10      45             100               1.0             10                  3             40°C
The experimental results show that, on overhead lines of 220 kV and below, the robot can move at up to 1 m/s, with a remote control distance of up to 1 km, a climbing angle of up to 10°, a continuous operation time of up to 45 minutes, and a waterproof grade of up to 6. Its environmental adaptability is as follows: it can work normally when the wind is level 5 or less, the precipitation is 8 mm or less per hour, and the temperature is in the range of −10°C to 40°C. In addition, the proposed intelligent ropeway type line changing robot based on lifting force control and synovial film controller does not need scaffolding and does not hinder the normal operation of the facilities to be crossed. Compared with on-site construction, the time cost is shortened from days to hours, and the process is fast, efficient, simple, safe, and reliable. It can be reused after a one-time investment, which greatly reduces the cost.
Performance Comparison Analysis

Since the main external factor affecting the high-altitude operation of the robot is the wind, the robot developed in this study and the robots developed in [11, 13, 14] are compared and analyzed in terms of operation duration, climbing angle, and traveling speed under different wind levels. The relationships between the maximum operating time, maximum climbing angle, and maximum traveling speed of the different robots and the wind level are shown in Figures 9–11, respectively.
Figure 9. The relationship between the maximum runtime of different robots and the wind level.
Figure 10. The relationship between the maximum climbing angle of different robots and the headwind level.
Figure 11. The relationship between the maximum velocity of different robots and the headwind level.
It can be seen from Figures 9–11 that the developed robot has the largest working time, forward speed, and climbing angle compared with other robots, no matter in no wind or in different wind levels. It indicates that the developed robot is better able to work outdoors at high altitude. The
reason is that the sliding mode controller is used to control the robot, with the RBF neural network introduced into it, which can self-learn, self-adapt, and self-organize to achieve high-precision motion control of the robot in the complex operation environment. In contrast, the robot developed in [11] does not take into account accurate control of the movement speed. The robot developed in [13] has good endurance when the external environment is not considered, but does not address other aspects of robot performance. The robot developed in [14] overemphasizes path control while neglecting the optimization of the robot's speed and climbing ability.
CONCLUSION

To solve the problem of low efficiency in dismantling old lines, an intelligent ropeway line changing robot design method based on lifting force control and a synovial controller is proposed. Simulation results are compared between this method and three other methods. The results show that the lifting force to be provided to the robot can be accurately calculated by using the vertical radian of the overhead line where the robot is located. By introducing the radial basis function (RBF) neural network into the synovial controller, self-learning, adaptive, high-precision motion control of the robot can be realized. Optimizing the shape, position, and number of different structures without changing the weight of the robot can effectively eliminate potential weak links and improve its reliability. In future research on related robots, the RBF neural network can be introduced into the control module to improve control stability, and reliability can be improved through reasonable structural adjustment and optimization. On the basis of ensuring the good functioning of the robot, further work will focus on the battery module so that the endurance time can be extended as far as possible to enhance its capacity for continuous work.
ACKNOWLEDGMENTS

This work was supported by State Grid Jiangsu Electric Power Co., Ltd. (Incubate project: Research on the Intelligent Cable Method-Based Robot Used for Power Line Removing and Changing under grant JF2021022).
REFERENCES 1.
J. Yerramsetti, D. S. Paritala, and R. Jayaraman, “Design and implementation of automatic robot for floating solar panel cleaning system using AI technique,” in Proceedings of the 11th International Conference of Computer Communication and Informatics (ICCCI), pp. 105–110, Sri Shakthi Institute of Engineeing and Technology, Coimbatore, India, 2021. 2. D. Hrubý, D. Marko, M. Olejár, V. Cviklovič, and D. Horňák, “Comparison of control algorithms by simulating power consumption of differential drive mobile robot motion control in vineyard row,” Acta Technologica Agriculturae, vol. 24, no. 4, pp. 195–201, 2021. 3. W. Liao, K. Nagai, and J. Wang, “An evaluation method of electromagnetic interference on bio-sensor used for wearable robot control,” IEEE Transactions on Electromagnetic Compatibility, vol. 62, no. 1, pp. 36–42, 2020. 4. W. Q. Zou, X. Shu, Q. Q. Tang, and S. Lu, “A survey of the application of robots in power system operation and maintenance management,” in Proceedings of the 2019 Chinese Automation Congress (CAC), pp. 4614–4619, Hangzhou, China, November 2019. 5. M. B. Khan, T. Chuthong, C. D. Do et al., “iCrawl: an inchworminspired crawling robot,” IEEE Access, vol. 8, no. 32, pp. 200655– 200668, 2020. 6. N. B. David and D. Zarrouk, “Design and analysis of FCSTAR, a hybrid flying and climbing sprawl tuned robot,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6188–6195, 2021. 7. T. He, Y. Zeng, and Z. Hu, “Research of multi-rotor UAVs detailed autonomous Inspection Technology of Transmission lines based on route planning,” IEEE Access, vol. 7, no. 3, pp. 114955–114965, 2019. 8. Z. P. Wang, B. He, Y. M. Zhou, K. Liu, and C. Zhang, “Design and implementation of a cable inspection robot for cable-stayed bridges,” Robotica, vol. 39, no. 8, pp. 1417–1433, 2021. 9. J. Katrasnik, F. Pernus, and B. Likar, “A survey of mobile robots for distribution power line inspection,” IEEE Transactions on Power Delivery, vol. 25, no. 1, pp. 485–493, 2010. 10. A. Papadimitrio, G. Andrikopoulos, and G. Nikolakopoulos, “On path following evaluation for a tethered climbing robot,” in Proceedings of
the 46th Annual Conference of the IEEE-Industrial-Electronics-Society (IECON), pp. 656–661, ELECTR Network, Singapore, October 2020. 11. C. H. F. dos Santos, M. H. Abdali, D. Martins, and C. B. Aníbal Alexandre, “Geometrical motion planning for cable-climbing robots applied to distribution power lines inspection,” International Journal of Systems Science, vol. 52, no. 8, pp. 1646–1663, 2021. 12. A. T. Zengin, G. Erdemir, T. C. Akinci, and S. Seker, “Measurement of power line sagging using sensor data of a power line inspection robot,” IEEE Access, vol. 8, no. 5, pp. 99198–99204, 2020. 13. W. Jiang, G. C. Ye, D. Zou et al., “Dynamic model based energy consumption optimal motion planning for high-voltage transmission line mobile robot manipulator,” Proceedings of the Institution of Mechanical Engineers, Part K: Journal of Multi-Body Dynamics, vol. 235, no. 1, pp. 93–105, 2021. 14. T. P. Nguyen, H. Nguyen, V. H. Phan, and H. Q. Thinh Ngo, “Modeling and practical implementation of motion controller for stable movement in a robotic solar panel dust-removal system,” Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, vol. 23, no. 2, pp. 57–66, 2021. 15. X. Zhao, Z. Peng, and S. Zhao, “Substation electric power equipment detection based on patrol robots,” Artificial Life and Robotics, vol. 25, no. 3, pp. 482–487, 2020. 16. D. Xie, J. Liu, R. Kang, and S. Zuo, “Fully 3D-printed modular pipe-climbing robot,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 462–469, 2020. 17. L. Song, H. Wang, and P. Chen, “Automatic patrol and inspection method for machinery diagnosis robot-sound signal-based fuzzy search approach,” IEEE Sensors Journal, vol. 20, no. 15, pp. 8276–8286, 2020. 18. X. Liu, X. Lu, and S. Zhao, “Kinematics and singularity analysis of an electricity pylon climbing robot,” Journal of Machine Design, vol. 33, no. 5, pp. 7–13, 2019. 19. M. Ahmadizadeh, A. M. Shafei, and M. Fooladi, “Dynamic modeling of closed-chain robotic manipulators in the presence of frictional dynamic forces: a planar case,” Mechanics Based Design of Structures and Machines, no. 1, pp. 1–21, 2021.
20. M. H. Korayem and H. Khaksar, “Estimation of critical force and time required to control the kinematics and friction of rough ellipsoidal and cubic nanoparticles using mechanics of contact surfaces,” Tribology International, vol. 137, pp. 11–21, 2019. 21. X. Lu, S. Zhao, D. Yu, and X. Liu, “Pylon-climber: a novel climbing assistive robot for pylon maintenance,” Industrial Robot: An International Journal, vol. 44, no. 1, pp. 38–48, 2017. 22. A. T. Zengin, G. Erdemir, T. C. Akinci, F. A. Selcuk, M. N. Erduran, and S. S. N. Seker, “ROSETLineBot: one-wheel-drive low-cost power line Inspection robot design and control,” Journal of Electrical Systems, vol. 15, no. 4, pp. 626–634, 2019. 23. T. Q. Mao, K. Huang, X. W. Zeng et al., “Development of power transmission line defects diagnosis system for UAV inspection based on binocular depth imaging technology,” in Proceedings of the 2nd International Conference on Electrical Materials and Power Equipment (ICEMPE), pp. 478–481, Guangzhou, China, April 2019. 24. M. U. Sumagayan, C. Premachandra, R. B. Mangorsi, C. J. Salaan, H. W. H. Premachandra, and H. Kawanaka, “Detecting power lines using point Instance network for distribution line Inspection,” IEEE Access, vol. 9, no. 4, pp. 107998–108008, 2021. 25. Z. Pu, Y. Xiong, H. Wang et al., “Design and construction of a new insulator detection robot for application in 500 kV strings: electric field analysis and field testing,” Electric Power Systems Research, vol. 173, no. 7, pp. 48–55, 2019. 26. Y. Li, S. He, L. Xu, X. Wang, and A. Zou, “Effect of patrol robot on electric field distribution of human body in 500 kV substation,” Advanced Technology of Electrical Engineering and Energy, vol. 39, no. 9, pp. 74–80, 2021.
Chapter 4
A Summary of PID Control Algorithms Based on AI-Enabled Embedded Systems
Yi Zhou

Aircraft Design and Engineering, Northwestern Polytechnical University, Xi'an, 710000, China
ABSTRACT

Proportional-integral-derivative (PID) controllers are extensively used in engineering practice for their simple structures, robustness to model errors, and easy operation. At present, there is a great variety of PID controllers. Companies have developed intelligent regulators with functions for automatically tuning PID parameters. In present PID controllers, strategies such as intelligence, self-adaptation, and self-correction are extended to traditional PID. PID controllers and their improved variants are utilized in 90% of industrial control processes. In this paper, PID control algorithms are summarized. This paper focuses on advanced control
Citation: Yi Zhou, “A Summary of PID Control Algorithms Based on AI-Enabled Embedded Systems”, Security and Communication Networks, vol. 2022, Article ID 7156713, 7 pages, 2022. https://doi.org/10.1155/2022/7156713. Copyright: © 2022 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
strategies such as PID control, predictive PID control, adaptive PID control, fuzzy PID control, neural network PID control, expert intelligent PID control, PID control based on genetic algorithms, and PID control based on ant colony algorithms. Besides, these kinds of algorithms are compared, and prospects of PID algorithms are forecast at the end of this paper.
INTRODUCTION

Minorsky [1] put forward a method for designing PID controllers based on output feedback in 1922. By the 1940s, PID controllers had become the most widely used regulators in engineering practice. Almost 70 years have passed since the emergence of PID controllers. Because of their simple algorithms, high stability, robustness, reliable operation, and convenient regulation, PID controllers have become one of the leading technologies for industrial control. PID regulation is the most technologically mature and most widely employed form of continuous control. PID control is the most suitable means when we do not completely understand a system and its controlled objects, or cannot determine the system parameters by effective measurement methods. The use of PID control algorithms is relatively satisfactory in many control fields. Digital PID control algorithms realized by microcomputers, single-chip microcomputers, and DSPs have been further corrected and improved owing to the flexibility of their software. There are many types of PID control algorithms, whose requirements differ somewhat across applications. With the development of industry, controlled objects have become more and more complicated. Particularly for systems with large time delay, time-varying behavior, and non-linearity, some parameters are unknown, change slowly, involve time delay or random disturbance, or cannot be captured by relatively accurate digital models. Meanwhile, as requirements for control quality become increasingly rigorous, the deficiencies of routine PID control have gradually been exposed. Conventional PID control is rarely effective for time-varying objects and non-linear systems, so routine PID control is considerably limited. In view of this, it has been improved in different respects, which are mainly introduced as follows. On the one hand, routine PID is structurally improved; on the other hand, fuzzy control,
neural network control, and expert control are the most active among existing intelligent controls. Once they are used in combination with routine PID control, they can learn from each other, give play to their respective strengths, and constitute intelligent PID control. This paper primarily sums up development and classification of PID algorithms.
BASIC PRINCIPLES OF PID

Basic Components

PID control is a linear combination of the proportion (P), integral (I), and differential (D) of deviations in a feedback system. These three basic control laws have their respective features [2].
Proportional (P) Control

Proportional controllers only change the amplitude of the input signal e(t) without affecting its phase. Proportional control increases the open-loop gain of the system, and this part of the control is dominant.
Differential (D) Control

Differential controllers differentiate the input signal, and the derivative reflects the rate of change of the system, so differential control, a leading mode of predictive regulation, forecasts system variations, increases system damping, and enhances the phase margin, thus improving system performance.
Integral (I) Control

Integration is an additive effect that records the history of system changes, so integral control reflects the effect of this history upon the current system. In general, integral control is not adopted separately but is combined with PD control, as shown in Figure 1.
Figure 1. Schematic diagram of PID.
PID Control Laws

The basic input/output relationship of the PID control law may be conveyed by the differential equation as follows:
u(t) = K_p \left[ e(t) + \frac{1}{T_i} \int_0^t e(\tau)\,d\tau + T_d \frac{de(t)}{dt} \right]  (1)

where e(t) is the input bias (error) of the controller; Kp is the gain in proportional control; Ti is the integral time constant; and Td is the differential time constant. The corresponding transfer function is as follows:

G(s) = K_p \left( 1 + \frac{1}{T_i s} + T_d s \right)  (2)
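As an illustration of how the law in (1) is typically discretized in an embedded implementation, here is a minimal positional PID sketch; the sample time, gains, and stand-in plant are arbitrary example values, not taken from the paper.

```python
class PID:
    """Positional discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*(e - e_prev)/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        derivative = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order process toward a setpoint of 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(500):
    u = pid.update(1.0, y)
    y += 0.01 * (-y + u)      # stand-in plant: dy/dt = -y + u
print(round(y, 3))            # approaches 1.0
```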
CLASSIFICATION OF PID CONTROL

There are many PID control algorithms and improved PID control algorithms. In this paper, only some classical algorithms are summarized and elaborated.
Predictive PID Control

Smith's (1958) predictive compensator was one of the first schemes for pure time-delay compensation. His basic idea was to move the pure time delay out of the control loop [3].
A Summary of PID Control Algorithms Based on AI-Enabled Embedded ...
79
In his algorithm, it is hypothesized that past input variations are the same at each step and equal to the current input variation. In practice, these relationships are not always tenable during the dynamic response of a system. The impact of such an approximation can be ignored if the system has no delay or only a short delay, but as the number of delayed steps increases, the effect on system robustness is undoubtedly aggravated. Therefore, Smith's predictor is integrated into the system to compensate for the time lag, so that the delayed regulated variables are reported to the regulator in advance. The regulator then acts ahead of time to eliminate the impact of the system time delay, reduce overshoot, improve system stability, speed up regulation, and improve the effectiveness of large-time-delay systems [4, 5]. In principle, the outputs of the PID controller are fed back to the input terminal of the PID through a compensation element, in order to compensate for the lag of the controlled object. In engineering practice, a Smith predictor is fed back to the PID regulator to overcome the pure time delay of the controlled objects, as shown in Figure 2.
Figure 2. Schematic diagram of Smith’s predictive PID controller.
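The sketch below simulates the structure just described for a first-order plant with a pure transport delay: a PI controller acts on the measured output corrected by an internal delay-free model minus a delayed copy of that model. The plant parameters, gains, and delay length are illustrative assumptions, not values from the paper.

```python
from collections import deque

# Assumed discrete first-order plant with a pure delay of `delay` samples:
# y[k+1] = a*y[k] + b*u[k - delay]
a, b, delay, dt = 0.95, 0.05, 20, 0.01
kp, ki = 1.5, 2.0

y = 0.0                                       # real (delayed) plant output
ym = 0.0                                      # internal delay-free model output
u_hist = deque([0.0] * delay, maxlen=delay)   # delay line for the plant input
ym_hist = deque([0.0] * delay, maxlen=delay)  # delayed copy of the model output
integral = 0.0
setpoint = 1.0

for k in range(2000):
    # Smith predictor: feed back y + (model without delay - model with delay)
    feedback = y + (ym - ym_hist[0])
    e = setpoint - feedback
    integral += e * dt
    u = kp * e + ki * integral

    # update the internal delay-free model and its delayed copy
    ym_hist.append(ym)
    ym = a * ym + b * u

    # update the real plant, which only sees the delayed input
    u_delayed = u_hist[0]
    u_hist.append(u)
    y = a * y + b * u_delayed

print(round(y, 3))           # settles near the setpoint despite the delay
```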
Adaptive PID Control

In the actual process of industrial control, many controlled mechanisms are highly non-linear and time-varying with pure time delay. Affected by various factors, the process parameters may change, and adaptive PID control is effective for solving these problems. Adaptive PID controllers have the strengths of both adaptive control and routine PID controllers. Besides being helpful for automatically identifying process parameters, automatically tuning controller parameters, and adapting to changes in the controlled process parameters, they are also structurally simple, highly robust, and fairly reliable, like conventional PID controllers. Field staff and design engineers
are familiar with adaptive PID controllers. With these strengths, adaptive PID controllers have developed into relatively ideal automatic devices for process control [6]. They are classified into two major categories. PID controllers whose tuning is based on identification of the controlled process are known as parametric adaptive PID controllers; their parameter design depends on parameter estimation for controlled process models. The other type of adaptive PID controller is based on some characteristic parameters of the controlled process, such as the critical oscillation gain and critical oscillation frequency; these are called non-parametric adaptive PID controllers, whose parameters are adjusted directly according to the characteristic parameters of the process. Parametric adaptive PID control [2] includes the following (a sketch of the identification-based idea follows the list):

• Adaptive PID pole-placement control: the adaptive pole-placement control algorithm was first put forward by Wellstead et al. in 1979 and was subsequently improved and deepened by Astrom and Wittenmark [7].
• Adaptive PID control based on cancellation principles: Wittenmark and Astrom first put forward parametric adaptive PID control algorithms based on cancellation principles, and further development has been achieved since.
• Adaptive PID control based on quadratic performance indices: extensive development has also been achieved in non-parametric adaptive PID control, where parameters are optimized by artificial intelligence.
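To make the identification-based (parametric) idea concrete, the following is a minimal sketch, not any of the specific algorithms cited above: a first-order ARX model is estimated online by recursive least squares, and PI gains are recomputed from the estimated model at every step. All numbers are illustrative assumptions.

```python
import numpy as np

# Unknown "true" first-order process (assumed): y[k] = a*y[k-1] + b*u[k-1]
a_true, b_true = 0.9, 0.2

theta = np.array([0.5, 0.1])       # estimates of [a, b]
P = np.eye(2) * 100.0              # RLS covariance
lam = 0.99                         # forgetting factor
kp, ki, dt = 0.5, 0.1, 1.0
integral, y, u = 0.0, 0.0, 0.0
setpoint = 1.0

for k in range(300):
    y_prev, u_prev = y, u
    y = a_true * y_prev + b_true * u_prev            # process response

    # --- recursive least squares update of [a, b] ---
    phi = np.array([y_prev, u_prev])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

    # --- retune a PI controller from the estimated model ---
    a_hat, b_hat = theta
    if abs(b_hat) > 1e-3:
        k_proc = b_hat / (1.0 - min(a_hat, 0.999))   # estimated static gain
        kp = 0.5 / k_proc                            # crude gain scheduling
        ki = 0.1 / k_proc

    e = setpoint - y
    integral += e * dt
    u = kp * e + ki * integral

print(np.round(theta, 3))          # approaches [0.9, 0.2]
```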
Fuzzy PID Control

In 1965, Zadeh [8], an expert in cybernetics, developed fuzzy set theory as a new tool for describing, studying, and dealing with fuzzy phenomena. Fuzzy control adopts the theory of fuzzy sets. In particular, it is impossible to obtain systematic and precise mathematical models of some complicated time-varying and non-linear systems with large time delay. Fuzzy control does not need precise mathematical models of the controlled objects, yet, like PID controllers, these controllers achieve high control precision. In addition, the controllers are flexible and adaptive, being highly effective for controlling complicated control systems and high-precision servo systems. They have been quite active in the control field over the past years [9–11].
Their basic principles are as follows. Based on traditional PID control algorithms, self-tuning of the PID parameters is performed. Fuzzy control rules are set up for adaptive tuning of the control parameters, driven by the error E and the error change Ec, in order to satisfy the requirements on E and Ec in different control periods. For fuzzy PID control algorithms, functional relationships between Kp, KI, Kd and the error E and error change Ec are established according to fuzzy set theory:
K_p = f_1(E, E_c), \quad K_I = f_2(E, E_c), \quad K_d = f_3(E, E_c)  (3)
During self-tuning of Kp, KI, and Kd, the numerical values of E and Ec are determined. The control parameters are self-tuned online in accordance with the fuzzy control rules to satisfy the control requirements in different control periods, so that the control system of the controlled object maintains good dynamic and static performance. At present, there are some common fuzzy PID controllers such as fuzzy PI controllers, fuzzy PD controllers, fuzzy PI + D controllers, fuzzy PD + I controllers, fuzzy (P + D)2 controllers, and fuzzy PID controllers. Figure 3 shows online self-tuning of PID parameters based on fuzzy control rules.
Figure 3. Architecture of fuzzy PID control system.
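As a minimal illustration of rule-based self-tuning, the sketch below adjusts only Kp from the error E and error change Ec, using triangular membership functions over {Negative, Zero, Positive} and a 3 x 3 rule table; the membership ranges and rule values are illustrative assumptions, not a rule base from the literature.

```python
def tri(x, left, center, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def memberships(x):
    """Degrees of Negative / Zero / Positive for a signal normalized to [-1, 1]."""
    return {
        "N": tri(x, -2.0, -1.0, 0.0),
        "Z": tri(x, -1.0, 0.0, 1.0),
        "P": tri(x, 0.0, 1.0, 2.0),
    }

# Rule table: delta-Kp suggested for each (E label, Ec label) pair
RULES = {
    ("N", "N"): -0.4, ("N", "Z"): -0.2, ("N", "P"): 0.0,
    ("Z", "N"): -0.1, ("Z", "Z"): 0.0,  ("Z", "P"): 0.1,
    ("P", "N"): 0.0,  ("P", "Z"): 0.2,  ("P", "P"): 0.4,
}

def delta_kp(e, ec):
    """Weighted-average (centroid-like) defuzzification of the rule outputs."""
    me, mec = memberships(e), memberships(ec)
    num = den = 0.0
    for (le, lec), out in RULES.items():
        w = min(me[le], mec[lec])          # rule firing strength
        num += w * out
        den += w
    return num / den if den > 0 else 0.0

kp = 1.0
kp += delta_kp(e=0.6, ec=0.1)              # one self-tuning step
print(round(kp, 3))
```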
Neural Network PID Control

The adaptive neurons proposed by Widrow [12] are structurally simple and operate in real time without precisely modelling the controlled objects. Based on neural model-free control, scholars [13–15] brought forth neural model-free adaptive PID control methods, identified the input signals of the neural networks, and designed online correction algorithms in combination with the strengths of
PID control, thus achieving some outcomes in obtaining sound dynamic and static properties in control systems. RBF (radial basis function) neural networks [16] are feedforward networks with three layers, namely, an input layer, a hidden layer, and an output layer. The structure of an RBF neural network is shown in Figure 4. The inputs are processed by the hidden layer, weighted, and sent to the output layer, where a single neuron produces the output.
Figure 4. Schematic diagram of neural network.
To gain desirable outcomes in PID control, the roles of the proportional, integral, and differential actions shall be properly regulated so that they coordinate with and restrict each other. These relationships are not necessarily simple linear combinations, and the optimal relationship can be sought among non-linear combinations with countless variations. By studying system performance, PID control with the optimal combination can be performed, thereby finding P, I, and D under certain optimal control laws. The architecture of the PID control system based on BP neural networks is shown in Figure 5. The controller is made up of two parts:

• Classical PID controller: it exercises closed-loop control over the controlled object, with the three parameters Kp, KI, and Kd tuned online.
• Neural network: the parameters of the PID controller are regulated according to the operating state of the system, in the hope of optimizing certain performance indices.
Figure 5. Architecture of neural network PID control system.
The output states of the neurons on the output layer correspond to the three adjustable parameters (Kp, KI, and Kd) of the PID controller. Through the self-learning of the neural network and the adjustment of the weighting coefficients, the controller reaches a steady state in which its parameters correspond to some optimal control law.
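A minimal sketch of this arrangement is given below, assuming a tiny two-layer network that maps [r, y, e] to (Kp, Ki, Kd) for an incremental PID law and is trained online by gradient descent with the common sign approximation for the plant sensitivity; the plant, network size, and learning rate are illustrative, not from the paper.

```python
import numpy as np

def plant_step(y, u, a=0.9, b=0.1):
    """Hypothetical first-order plant used only to exercise the tuner."""
    return a * y + b * u

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(5, 3))   # hidden layer, input: [r, y, e]
W2 = rng.normal(scale=0.1, size=(3, 5))   # output layer: [Kp, Ki, Kd]
eta = 0.01
r, y, u = 1.0, 0.0, 0.0
e_prev = e_prev2 = 0.0

for k in range(200):
    e = r - y
    x = np.array([r, y, e])
    h = np.tanh(W1 @ x)
    gains = 1.0 / (1.0 + np.exp(-(W2 @ h)))   # sigmoid keeps gains in (0, 1)
    Kp, Ki, Kd = gains

    # incremental PID law
    du = Kp * (e - e_prev) + Ki * e + Kd * (e - 2 * e_prev + e_prev2)
    u = u + du
    y = plant_step(y, u)

    # gradient step on J = 0.5*e^2, using sign(dy/du) ~ +1 (common BP-PID approximation)
    dJ_du = -e
    du_dgains = np.array([e - e_prev, e, e - 2 * e_prev + e_prev2])
    dJ_dgains = dJ_du * du_dgains
    delta2 = dJ_dgains * gains * (1 - gains)  # sigmoid derivative
    W2 -= eta * np.outer(delta2, h)
    delta1 = (W2.T @ delta2) * (1 - h ** 2)
    W1 -= eta * np.outer(delta1, x)
    e_prev2, e_prev = e_prev, e

print(round(float(y), 3))                     # output approaches the setpoint r
```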
PID Control Based on Genetic Algorithms

Genetic algorithms [17], abbreviated as GAs, are efficient, parallel, global search methods first put forward by Professor Holland of the University of Michigan in the 1960s. They are adaptive, globally optimizing probabilistic search algorithms formed by simulating the genetics and evolution of living creatures in the natural environment. Their basic idea is as follows: the problem to be solved is encoded, a population of individuals is operated on by a set of genetic operators, and the process of generation, evaluation, selection, and operation is repeated until the optimum solution is found. The optimization of PID parameters based on genetic algorithms helps to simplify analytical calculations [18]. Genetic algorithms select and approach optimum solutions by simulating natural evolution. For PID control based on genetic algorithms [19–22], the actual problem is first converted into a genetic code. In practice, coding methods such as binary coding, floating-point coding, and parametric coding are used. Subsequently, initial
populations are generated, and after searching, PID regulation is performed. The schematic diagram is shown in Figure 6.
Figure 6. Architecture of PID control system based on genetic algorithms.
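The sketch below shows the generic GA loop applied to PID tuning: floating-point individuals (Kp, Ki, Kd) are scored by the integral of absolute error of a simulated step response, then selected, crossed over, and mutated. The plant model, population size, and operator settings are illustrative assumptions.

```python
import random

random.seed(0)

def step_response_iae(kp, ki, kd, dt=0.02, steps=500):
    """Integral of absolute error for a unit step on a stand-in first-order plant."""
    y = integral = e_prev = iae = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integral += e * dt
        u = kp * e + ki * integral + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u)              # plant: dy/dt = -y + u
        if abs(y) > 1e6:
            return 1e9                  # penalize unstable gain sets
        iae += abs(e) * dt
    return iae

def random_individual():
    return [random.uniform(0.0, 5.0) for _ in range(3)]   # [Kp, Ki, Kd]

population = [random_individual() for _ in range(30)]
for generation in range(40):
    scored = sorted(population, key=lambda g: step_response_iae(*g))
    parents = scored[:10]                       # truncation selection
    children = []
    while len(children) < len(population) - len(parents):
        a, b = random.sample(parents, 2)
        alpha = random.random()
        child = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]  # blend crossover
        if random.random() < 0.2:               # mutation
            i = random.randrange(3)
            child[i] = max(0.0, child[i] + random.gauss(0.0, 0.3))
        children.append(child)
    population = parents + children

best = min(population, key=lambda g: step_response_iae(*g))
print([round(v, 2) for v in best])
```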
PID Control Based on Ant Colony Algorithms

In the early 1990s, the Italian scholar Marco Dorigo and others [23] put forward ant colony algorithms by simulating the collective path-seeking behavior of ants in nature. Ant colony algorithms are a type of population-based, random-search algorithm for simulating evolution, inspired by studies of the behavior of real ant colonies in nature. Ants transfer information by means of a substance known as pheromone. In the course of their movement, ants leave this substance on the paths they have passed, and they can also perceive it, which guides their movement direction. Therefore, the collective behavior of a colony composed of numerous ants exhibits a kind of positive feedback. The more the ants walk
on certain path, the higher the likelihood for late comers to choose the path. In this case, intensity of pheromone is enhanced. This selection process is referred to as ants’ auto-catalysis, and its principle is a positive feedback mechanism, so the ant system is also called enhanced learning system [24, 25]. The optimization of ant colony algorithms for PID control parameters can be schematically described by Figure 7. A set of three number sequences, namely, Kp, Ti, Td, is reckoned as a set of three cities. Artificial ants start from S and separately pass by a city of the set. At last, they arrive at point D and conform to criterion functions, thereby finding the optimal path. The path searched by ant colony algorithms for optimizing parameters of PID control [26, 27] reflects that the system has the optimal performance index, which is reflected from node value of three control parameters in an ant colony system. Pheromone is released on nodes that ants have passed, and its concentration changes based on criterion functions rather than path length. The criterion functions shall contain information about nodes that ants have passed and current performance indices of systems.
Figure 7. Parameters of PID control based on ant colony algorithms.
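The node-based search of Figure 7 can be illustrated with the following minimal, assumed implementation: each of Kp, Ti, and Td is discretized into candidate node values, pheromone is kept per node, and the deposit is proportional to the inverse of a criterion function rather than to path length. The toy plant, the candidate grids, and the update constants are illustrative assumptions, not values from the cited works.

```python
import random

def closed_loop_cost(kp, ti, td, dt=0.01, t_end=5.0):
    """ITAE-like criterion of a unit-step response for the toy plant
    y'' = u - 2*y' - y, with the PID law u = kp*(e + int(e)/ti + td*de)."""
    y = y_dot = integ = prev_err = 0.0
    cost = t = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        u = kp * (err + integ / ti + td * (err - prev_err) / dt)
        prev_err = err
        y_dot += (u - 2.0 * y_dot - y) * dt
        y += y_dot * dt
        if abs(y) > 1e6:                     # penalise unstable candidates
            return 1e9
        t += dt
        cost += t * abs(err) * dt
    return cost

def aco_tune_pid(n_ants=20, iterations=30, evaporation=0.1):
    """Each parameter has a grid of candidate node values; pheromone is kept
    per node and reinforced in proportion to 1/cost of the ant's tour."""
    nodes = {
        "kp": [0.5 * i for i in range(1, 21)],
        "ti": [0.2 * i for i in range(1, 21)],
        "td": [0.05 * i for i in range(1, 21)],
    }
    pher = {k: [1.0] * len(v) for k, v in nodes.items()}
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        for _ in range(n_ants):
            # the ant picks one node per parameter with probability ~ pheromone
            picks = {k: random.choices(range(len(nodes[k])), weights=pher[k])[0]
                     for k in nodes}
            kp, ti, td = (nodes[k][picks[k]] for k in ("kp", "ti", "td"))
            cost = closed_loop_cost(kp, ti, td)
            if cost < best_cost:
                best, best_cost = (kp, ti, td), cost
            for k in nodes:                  # deposit pheromone on visited nodes
                pher[k][picks[k]] += 1.0 / cost
        for k in nodes:                      # evaporation
            pher[k] = [(1.0 - evaporation) * p for p in pher[k]]
    return best, best_cost

if __name__ == "__main__":
    (kp, ti, td), c = aco_tune_pid()
    print("Best (Kp, Ti, Td):", kp, ti, td, "cost:", c)
```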
Expert Intelligent PID Control With the development of artificial intelligence, many forms of expert control systems have emerged. Naturally, people have thought of developing PID
parameters based on expert experience [28]. The EXACT expert self-tuning controllers launched by Foxboro (United States) in 1984 are the classic example; the company applied expert systems inside PID controllers. Functions such as self-tuning and self-taught learning, realized by combining expert control with routine PID control, can be used to capture the characteristics of complex systems, and the corresponding control strategies can be developed and identified through learning and self-organization. Scholars have investigated design methods and applications of expert self-tuning PID controllers [29]. To address the defects of ordinary expert self-tuning PID controllers, they have additionally developed intelligent self-tuning controllers and proposed using staircase signals as system inputs, so that the system need not be restarted repeatedly during parameter training. In addition, the number of stairs in the given signal can be chosen flexibly according to actual system changes to satisfy the control requirements of special situations. Because of their high self-tuning capacity, the structures and parameters of the object models may vary over a relatively wide range. An expert system comprises two elements:
•	Knowledge base: it stores knowledge entries about a specific field, summarized in advance and represented in a fixed format.
•	Inference mechanism: entries from the knowledge base are used for making inferences, judgments, and decisions, by methods similar to those an expert uses to solve problems.
Concerning the principle of expert intelligent PID control, measured characteristic parameters are compared with predetermined ones, and their deviations are fed into the expert system. The expert system determines the corrections of the modulator parameters required to eliminate the characteristic deviations and passes them to the routine PID modulator, thereby correcting its parameters. Meanwhile, the modulator operates on the system error with the tuned parameters, and the control signal acting on the generalized plant is output until the characteristic parameters of the response curve of the controlled process meet expectations. The schematic diagram of expert intelligent PID control is shown in Figure 8.
Figure 8. Schematic diagram of expert intelligent PID.
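A minimal sketch of the knowledge-base/inference-mechanism structure described above is given below. The characteristic quantities (overshoot and settling time), the target values, and the correction factors in the IF-THEN entries are hypothetical examples for illustration, not the rules of the EXACT controller or of reference [29].

```python
def extract_features(t, y, setpoint=1.0):
    """Characteristic quantities of a recorded step response: relative
    overshoot and a crude settling time (last exit from a 2% band)."""
    overshoot = max(0.0, (max(y) - setpoint) / setpoint)
    settling = 0.0
    for ti, yi in zip(t, y):
        if abs(yi - setpoint) > 0.02 * setpoint:
            settling = ti
    return overshoot, settling

def expert_correction(kp, ki, kd, overshoot, settling,
                      max_overshoot=0.10, max_settling=2.0):
    """Knowledge base of simple IF-THEN entries; the inference step compares
    the measured characteristics with predetermined targets and returns the
    corrected PID parameters for the routine PID modulator."""
    if overshoot > max_overshoot:
        # Rule 1: excessive overshoot -> soften P and I action, add damping.
        kp *= 0.8
        ki *= 0.9
        kd *= 1.2
    elif settling > max_settling:
        # Rule 2: sluggish but well-damped response -> speed it up.
        kp *= 1.2
        ki *= 1.1
    # Otherwise the response meets both targets and the parameters are kept.
    return kp, ki, kd
```

In a real expert PID loop these corrections would be applied repeatedly, each time the response characteristics of the controlled process are re-measured.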
Other PID Controls With the rapid development of computer technologies and intelligent machines, people have begun to make PID control algorithms intelligent for the purpose of further improving control [30–33]. Thus, a series of new improved PID algorithms have been developed, including fuzzy PID controllers, intelligent PID controllers, and neural network PID controllers. With the emergence of these algorithms, PID algorithms have become more functionally complete and played more important roles in controlling industrial processes. Their applications will be further promoted. In addition, there are some other PID control algorithms such as self-tuning PID control, non-linear PID control, and PID control based on the combination of GA and BP neural networks.
COMPARISONS OF KEY ALGORITHMS FOR PID CONTROL In this paper, the aforementioned algorithms are compared and summarized. As shown in Table 1, the strengths and weaknesses of each algorithm are summarized to give a full picture of its performance.
Table 1. Summarized strengths and weaknesses of PID algorithms.

Predictive PID control
	Tuning technique: Smith predictor.
	Strengths: Reduces overshoot, increases system stability, facilitates regulation, and improves control of large time-delay systems.
	Weaknesses: A precise mathematical model of the controlled object is required, and the controlled system is rather sensitive to model errors.

Adaptive PID control
	Tuning technique: Adaptive control systems.
	Strengths: Automatic identification, decision making, and modification.
	Weaknesses: Conflicts between control accuracy and parameter estimation.

Fuzzy PID control
	Tuning technique: Calculates control outputs according to the type of input information or by checking the list of fuzzy rules.
	Strengths: Highly precise, flexible, and adaptive.
	Weaknesses: Inapplicable to systems with severe non-linearity and uncertainty.

Neural network PID control
	Tuning technique: Control parameters are corrected online by BP networks.
	Strengths: Parameters P, I, and D can be determined under certain optimal control laws by the self-taught learning of neural networks.
	Weaknesses: Low rate of convergence.

PID control based on genetic algorithms
	Tuning technique: Converts actual problems into genetic codes for iteration.
	Strengths: Simplifies optimized analytical calculations.
	Weaknesses: The parameter range is so wide that the initial search for optimal solutions is somewhat undirected.

PID control based on ant colony algorithms
	Tuning technique: Criterion functions updated by deposited pheromone.
	Strengths: Determines the optimal performance index.
	Weaknesses: The algorithms are rather complex and computation is time consuming; if parameters are improperly chosen, lag or local optima may result.

Expert intelligent PID control
	Tuning technique: Adjustments are made automatically according to the response characteristics and control requirements of the system, or controller parameters are determined by the vendor.
	Strengths: Describes the characteristics of complex systems and develops corresponding control strategies by self-tuning and self-taught learning.
	Weaknesses: Sufficiently large knowledge bases and inference mechanisms are required as a foundation.
CONCLUSION PID control is widely used and, when enhanced, exhibits considerable capability. Successfully using PID controllers to control complex objects remains a main research area. Optimal PID control methods are sought among various models, such as fuzzy models, non-parametric predictive models, and expert systems, so that the design of PID controllers can be simplified. Moreover, in order to solve some existing problems in automation and control, PID control is combined with other algorithms to develop more functionally complete controllers. With the rapid development of computer technology and smart machines, the application of these controllers will be further promoted. Therefore, further study of PID algorithms is necessary.
REFERENCES
1.	N. Minorsky, "Directional stability of automatically steered bodies," Naval Engineers Journal, vol. 34, no. 2, pp. 280–309, 1922.
2.	Y. Tao and Y. Yin, New PID Control and Applications, China Machine Press, Beijing, China, 1998.
3.	Y. Hu, T. Guo, and P. Han, "Research on Smith's predictive control algorithms and their applications in DCS," Computer Simulation, vol. 33, no. 5, pp. 409–412, 2016.
4.	D. Liang, J. Deyi, Z. Ma, and W. Zhang, "Variable-pitch controllers for wind turbine generators based on an improved Smith prediction algorithm," Electric Drive Automation, vol. 38, no. 6, pp. 25–29, 2016.
5.	P. Lu, H. Zhang, and R. Mao, "Comparative research on Smith predictive compensation control and PID control," Journal of China University of Metrology, vol. 20, no. 2, pp. 171–179, 2009.
6.	K. J. Åström, T. Hägglund, C. C. Hang, and W. K. Ho, "Automatic tuning and adaptation for PID controllers - a survey," Control Engineering Practice, vol. 1, no. 4, pp. 699–714, 1993.
7.	K. J. Åström and B. Wittenmark, "Self-tuning controllers based on pole-zero placement," IEE Proceedings D - Control Theory and Applications, vol. 127, no. 3, pp. 120–130, 1980.
8.	L. A. Zadeh, "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems, vol. 1, no. 1, pp. 3–28, 1978.
9.	L. Xue, Y. Liu, E. Zhu, and X. Ma, "Design of intelligent fuzzy-PID temperature control systems," Information Recording Material, vol. 19, no. 11, pp. 118–120, 2018.
10.	Z. Guo, H. Yu, and L. Chen, "High-speed galvanometer motor control based on fuzzy PID," Small & Special Electrical Machines, vol. 47, no. 4, pp. 1–5, 2019.
11.	X. Bai, Research on Fuzzy Controllers and Their Applications in Host Steam Temperature Controllers, North China Electric Power University (Hebei), Beijing, China, 2006.
12.	B. Widrow and R. Winter, "Neural nets for adaptive filtering and adaptive pattern recognition," An Introduction to Neural and Electronic Networks, vol. 21, 1990.
13.	T. Liu and Y. Zhang, "Research on neural network PID control in speed control systems for motors of hydraulic pumps," Telecom Power Technology, vol. 35, no. 5, pp. 4–7, 2018.
14.	X. You, C. Su, and Y. Wang, "An overview of improvement of algorithms for BP neural networks," Minying Keji, vol. 34, no. 4, pp. 146–147, 2018.
15.	C. Peng, Y. Zheng, and Z. Hu, "Adaptive single neuron control over time-varying time delay systems," Computing Technology and Automation, no. 1, pp. 17–19, 2005.
16.	E. P. Maillard and D. Gueriot, "RBF neural network, basis functions and genetic algorithm," in Proceedings of the International Conference on Neural Networks (ICNN'97), vol. 4, pp. 2187–2192, IEEE, Houston, TX, USA, July 1997.
17.	D. E. Goldberg and J. H. Holland, "Genetic algorithms and machine learning," Machine Learning, vol. 3, no. 2-3, pp. 95–99, 1988.
18.	H. Liu, Q. Duan, N. Li, and Y. Zhou, "PID parameter tuning and optimization based on genetic algorithms," Journal of North China Electric Power University, vol. 3, pp. 31–33, 2001.
19.	C. Chen, C. Cheng, C. Luo, and R. Wang, "PID temperature control for reactors based on genetic algorithms," Scientific and Technological Innovation, vol. 28, no. 6, pp. 72–73, 2019.
20.	H. Wang, J. Meng, and R. Xu, "Design of ATO speed controllers based on PID control over genetic algorithms," Industrial Control Computer, vol. 31, no. 7, pp. 27–29, 31, 2018.
21.	X. Zheng, "Application of principles of genetic algorithms in mechanical engineering," China High-Tech Enterprises, vol. 34, pp. 62–63, 2014.
22.	Y. Zhao, L. Meng, and C. Peng, "A summary about principles of genetic algorithms and development orientations," Heilongjiang Science and Technology Information, vol. 13, pp. 79–80, 2010.
23.	M. Dorigo, G. Di Caro, and M. Gambardella, "Ant algorithms for solving the weapon-target assignment problem," Applied Soft Computing, vol. 2, pp. 39–47, 2002.
24.	Y. Xiao, J. Jiao, D. Qiao, J. Du, and K. Zhou, "An overview of basic principles and applications of ant colony algorithms," Light Industry Science and Technology, vol. 34, no. 3, pp. 69–72, 2018.
25.	R. Yang and Y. Wang, "Research on basic principles and parameter setting of ant colony algorithms," South Agricultural Machinery, vol. 49, no. 13, pp. 38–39, 2018.
26.	Y. Liu and B. Jiang, "Applied research on PID controllers based on ant colony algorithms," Electronic Design Engineering, vol. 20, no. 18, pp. 28–30, 2012.
27.	T. Zhang, S. Zhang, and C. Li, "Application of ant colony algorithms in PID control and impacts of their parameters," Modern Electronics Technique, vol. 38, no. 20, pp. 20–25, 2015.
28.	H. Yang and P. Zhu, "Application of expert intelligent PID control in 762CAN modulators," Liaoning Chemical Industry, vol. 30, no. 4, pp. 172–174, 2001.
29.	L. Wang and W. Song, "PID control," Automation Instrumentation, vol. 25, no. 4, pp. 3–8, 2004.
30.	Z. Yang, H. Zhu, and Y. Huang, "An overview of PID controller design and parameter tuning methods," Control and Instruments in Chemical Industry, no. 5, pp. 1–7, 2005.
31.	K. J. Åström and T. Hägglund, "The future of PID control," Control Engineering Practice, vol. 9, no. 11, pp. 1163–1175, 2001.
32.	Y. Tang, "Research on PID control methods," Electronics World, vol. 7, pp. 65–66, 2019.
33.	M. Zhu, Y. Zhan, and S. Zhang, "Application of PID algorithms in constant pressure water supply," Computer Knowledge and Technology, vol. 14, no. 22, p. 290, 2018.
SECTION 2: FUZZY CONTROL TECHNIQUES
Chapter 5
An Adaptive Fuzzy Sliding Mode Control Scheme for Robotic Systems
Abdel Badie Sharkawy and Shaaban Ali Salman Mechanical Engineering Department, Faculty of Engineering, Assiut University, Assiut, Egypt
ABSTRACT In this article, an adaptive fuzzy sliding mode control (AFSMC) scheme is derived for robotic systems. In the AFSMC design, the sliding mode control (SMC) concept is combined with a fuzzy control strategy to obtain a model-free fuzzy sliding mode control. The equivalent controller is replaced by a fuzzy system, and the uncertainties are estimated online. The AFSMC approach has the learning ability to generate the fuzzy control actions and adaptively compensates for the uncertainties. Despite the high nonlinearity and coupling effects, the control input of the proposed control algorithm is decoupled, leading to a simplified
Citation: A. Sharkawy and S. Salman, “An Adaptive Fuzzy Sliding Mode Control Scheme for Robotic Systems,” Intelligent Control and Automation, Vol. 2 No. 4, 2011, pp. 299-309. doi: 10.4236/ica.2011.24035. Copyright: © 2011 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http:// creativecommons.org/licenses/by/4.0
control mechanism for robotic systems. Simulations have been carried out on a two-link planar robot. Results show the effectiveness of the proposed control system.
Keywords: Sliding Mode Control (SMC), Adaptive Fuzzy Sliding Mode Control (AFSMC), Fuzzy Logic Control (FLC), Adaptive Laws, Robotic Control.
INTRODUCTION The performance of many tracking control systems is limited by parameter variations and disturbances. This applies especially to direct-drive robots with highly nonlinear dynamics and model uncertainties; payload changes and/or the payload's exact position in the end effector are examples of such uncertainties. The control methodologies that can be used range from classical adaptive control and robust control to newer methods that usually combine the good properties of classical control schemes with fuzzy [1,2], genetic algorithm [3], neuro-fuzzy [4,5], and neural network [6] based approaches. Classical adaptive control of manipulators requires a precise mathematical model of the system's dynamics and the property of linear parameterization of the system's uncertain physical parameters [7]. The study of output tracking problems has a long-standing history. Sliding mode control (SMC) is often a favored basic control approach because of its insensitivity to parametric uncertainties and external disturbances [7-10]. The theory is based on the concept of changing the structure of the controller to achieve a desired response of the system. By using a variable high-speed switching feedback gain, the trajectory of the system can be forced onto a chosen manifold, called the sliding surface or switching surface, and remains there thereafter. The design of proper switching surfaces to obtain the desired performance of the system is very important and has been the topic of many previous works [11,12]. With the desired switching surface, we need to design a SMC such that any state outside the switching surface can be driven to the switching surface in finite time. Generally, in the SMC design, the uncertainties are assumed to be bounded. This assumption may be reasonable for external disturbances, but it is rather restrictive as far as unmodelled dynamics are concerned. Nowadays, fuzzy logic control (FLC) systems have proved able to solve complex nonlinear control problems. They provide an effective means to capture the approximate nature of the real world. Examples
are numerous; see [13] for instance. While non-adaptive fuzzy control has proven its value in some applications [1,2,14], it is sometimes difficult to specify the rule base for some plants, or the need may arise to tune the rule-base parameters if the plant changes. This provides the motivation for adaptive fuzzy control, where the focus is on the automatic on-line synthesis and tuning of fuzzy controller parameters: on-line data are used to continually "learn" the fuzzy controller, which ensures that the performance objectives are met. This concept has proved to be a promising approach for solving complex nonlinear control problems [15,16]. Recently, adaptive fuzzy sliding mode control design has drawn the attention of many researchers. Because control chattering, an inherent problem of SMC, can excite unmodeled and undesired high-frequency dynamics, Ho et al. [17] have proposed an adaptive fuzzy sliding mode control with chattering elimination for nonlinear SISO systems. The adaptive laws, however, rely on projection algorithms, which can hardly be satisfied in practical problems. In [18], the authors have established an adaptive sliding controller design based on T-S fuzzy system models. The fuzzy system used is rather complicated, and the upper bound of the uncertainty is needed to synthesize the controller. A robust fuzzy tracking controller for robotic manipulators which uses sliding surfaces in the control context can be found in [19]. The control scheme, however, depends heavily on the properties of the dynamic model of robotic manipulators and, similar to [17], the authors use projection algorithms which have practical limitations. More recently, Li and Huang [20] have designed a MIMO adaptive fuzzy terminal sliding mode controller for robotic manipulators. In the first phase of their work, the fuzzy control part relied on some expert knowledge, and a trial-and-error procedure is needed to determine the output singletons. In the second phase, they designed an adaptive control scheme that determines these parameters on-line. The rule base, however, is restricted to five rules per joint, and the fuzzy singletons should have values within specified ranges to enforce stability. In this work, an adaptive fuzzy sliding mode control (AFSMC) scheme is proposed for robotic systems. The scheme is based on the universal approximation property of fuzzy systems and the power of SMC theory. A one-dimensional adaptive FLC is designed to generate the appropriate control actions so that the system's trajectories stick to the sliding surfaces. Adaptive control laws are developed to determine the fuzzy rule base and the uncertainties. With respect to SMC, the proposed algorithm
eliminates the usual assumptions needed to synthesize the SMC, and better performance can be achieved. The paper is organized as follows. In Section 2, the equivalent control method is used to derive a SMC for rigid robots. Section 3 introduces the proposed AFSMC, which is a model-free approach. Simulation results, including a comparison between the AFSMC and the SMC, are presented in Section 4. Section 5 offers our concluding remarks.
SLIDING MODE CONTROL (SMC) DESIGN In this Section, the well-developed literature is used to demonstrate the main features and assumptions needed to synthesize a SMC for robotic systems. SMC employs a discontinuous control effort to drive the system trajectories toward a sliding surface and then keep them switching on that surface, so that they gradually approach the control objective, the origin of the phase plane. To this end, consider a general n-link robot arm, which takes into account the friction forces, unmodeled dynamics, and disturbances, with the equation of motion given by
M(x)ẍ + C(x, ẋ)ẋ + G(x) + F_d ẋ + F_s(ẋ) + T_d = τ(t)   (1)
where x is the joint angular position vector of the robot; τ(t) the applied joint torques (or forces); M(x) the positive definite inertia matrix; C(x, ẋ)ẋ the effect of Coriolis and centrifugal forces; G(x) the gravitational torques; F_d the diagonal matrix of viscous and/or dynamic friction coefficients; F_s(ẋ) the vector of unstructured friction effects and static friction terms; and T_d the vector of generalized input due to disturbances or unmodeled dynamics.
The controller design problem is as follows. Given the desired trajectories with some (or all) system parameters being unknown, derive a control law for the torque (or force) input τ(t) such that the position vector
x and the velocity vector ẋ can track the desired trajectories, if not exactly then closely. For simplicity, let (1) be rewritten as
M(x)ẍ + f(x, ẋ) = τ(t)   (2)
where the vector f(x, ẋ) lumps together the Coriolis and centrifugal, gravitational, friction, and disturbance terms. The following assumptions are needed to synthesize a SMC:
Assumption 1: The matrix M(x) is bounded by a known positive definite matrix.
Assumption 2: There exists a known estimate function for the vector f(x, ẋ) in (2).
The tracking control problem is to force the state vector to follow the desired state trajectories xd(t). Let e(t) be the tracking error vector. Further, let us define the linear time-varying surface s(t) [21],
(3)
where β(t) is a time-varying linear function. Thus, from (2) and (3), we can get the equivalent control (also called the ideal controller):
(4)
where τeq(t) is equivalently the average value of τ(t) which maintains the system's trajectories (i.e., the tracking errors) on the sliding surface s(t) = 0. To ensure that the trajectories reach the sliding surface in finite time and thereafter keep the error e(t) on the sliding manifold, the control torque τ(t) generally consists of a low-frequency (average) component τeq(t) and a hitting (high-frequency) component τht(t):
τ(t) = τeq(t) + τht(t)   (5)
The role of τht(t) is to overcome the effects of the uncertainties and bend the entire system trajectories toward the sliding surface until sliding mode occurs. The hitting controller τht(t) is taken as [8,21]
(6)
To verify the control stability, let us first get an expression for the derivative of the sliding surface. Using (3)-(5), the first derivative of (3) is:
(7)
Choosing a Lyapunov function
(8)
and differentiating it using (6) and (7), we obtain:
(9)
which provides an exponentially stable system. Since the parameters of (2) depend on the manipulator structure and the payload it carries, it is difficult to obtain completely accurate values for these parameters. In SMC theory, estimated values are usually used in the control law instead of the exact parameters, so that (4) can be written as:
(10)
where bounded estimates of M(x) and f(x, ẋ) are used; as mentioned in Assumptions 1 and 2, they are assumed to be known in advance. In sliding mode, the system trajectories are governed by [9]:
(11)
so that the error dynamics are determined by the function β(t). If the coefficients of β(t) are chosen to correspond to the coefficients of a Hurwitz polynomial, then limt→∞ e(t) = 0. This suggests that β(t) take the following form:
(12)
so that, on the sliding manifold, the error dynamics are:
(13)
and the desired performance is governed by the coefficients c1 and c2.
In summary, the sliding mode control in (5), (6), and (10) can guarantee stability in the Lyapunov sense even under parameter variations. As a result, the system trajectories are confined to the time-varying surfaces (3). With this in hand, the error dynamics are decoupled, i.e., each degree of freedom depends on its respective error function, (13). The control law (10), however, shows that the coupling effects have not been eliminated, since the control signal for each degree of freedom depends on the dynamics of the other degrees of freedom. Independence is usually preferred in practice. Furthermore, to satisfy the existence condition, a large uncertainty bound must be chosen in advance. In this case, the controller results in a large implementation cost and leads to chattering.
DECOUPLED ROBOT TRACKING CONTROL DESIGN In this Section, we propose a fuzzy system that approximates the equivalent control (4). The main challenge facing the application of fuzzy logic is the development of the fuzzy rules. To overcome this problem, an adaptive control law is developed for the on-line generation of the fuzzy rules. The input of the fuzzy system is the sliding surfaces (3), and the output is a fuzzy controller, which substitutes for the equivalent control (4). With this choice, no bounds on the system functions are needed. Furthermore, the uncertainties are estimated and continuously compensated for, which means that the hitting controller τht (6) is adaptively determined on-line. The coming Subsection gives a brief introduction to fuzzy logic systems and characterizes the type utilized in this contribution.
Fuzzy Logic Systems A fuzzy logic system consists of a collection of L fuzzy IF-THEN rules. A one-input one-output fuzzy system has the following form:
R^l: IF s is A^l, THEN τf is θ^l, l = 1, …, L   (14)
where l is the rule number; s and τf are, respectively, the input and output variables; A^l is the antecedent linguistic term in rule l; and θ^l is the label of the rule conclusion, a real number called a fuzzy singleton. The conclusion of each rule (the control action), a numerical value rather than a fuzzy set, can be considered a pre-defuzzified output. Defuzzification maps output fuzzy sets defined over an output universe of discourse to a
crisp output τf. In this work, we have adopted the singleton fuzzifier, product inference, and the center-average defuzzifier, which reduce the fuzzy rules (14) to the following fuzzy logic system:
τf(s) = (Σ_{l=1}^{L} θ^l μ_{A^l}(s)) / (Σ_{l=1}^{L} μ_{A^l}(s))   (15)
where μ_{A^l}(s) is the membership grade of the input s in the fuzzy set A^l. In (15), if the θ^l are free (adjustable) parameters, then it can be rewritten as:
τf(s) = θ^T ξ(s)   (16)
where θ = (θ^1, …, θ^L)^T is the parameter vector and ξ(s) = (ξ^1(s), …, ξ^L(s))^T is a regression vector given by
ξ^l(s) = μ_{A^l}(s) / Σ_{j=1}^{L} μ_{A^j}(s)   (17)
Generally, there are two main reasons for using the fuzzy systems in (16) as building blocks for adaptive fuzzy controllers. Firstly, it has been proved that they are universal approximators [22]. Secondly, all the parameters in ξ(s) can be fixed at the beginning of the adaptive fuzzy system design procedure, so that the only free design parameter vector is θ; in this case, τf(θ, s) is linear in the parameters. This approach is adopted in synthesizing the adaptive control law in this paper. Without loss of generality, Gaussian membership functions have been selected for the input variables. A Gaussian membership function is specified by two parameters {c, σ}, where c represents the membership function's center and σ determines its width. The fuzzy system used in this contribution is a one-input one-output system, (14). The input of the fuzzy system is normalized, using L equally spaced Gaussian membership functions inside the universe of discourse. Slopes are identical; see Figure 1.
Figure 1. Input fuzzy sets.
The described fuzzy system is used to approximate the nonlinear dynamics of robotic systems. In a decoupled manner, the control action is computed for each degree of freedom, based on the corresponding sliding surface. The control actions θ^l (output singletons), which are contained in the parameter vector θ, should be known. In the coming Subsection, adaptive laws are derived to do this task. The antecedent part is fixed with Gaussian membership functions.
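As a concrete reading of (14)-(17), the following sketch evaluates a one-input fuzzy system with Gaussian antecedents and a center-average defuzzifier; the normalised membership grades play the role of the regression vector ξ(s) and the output singletons play the role of θ. The two-set configuration, the universe [-1, 1], and the exact Gaussian form are assumptions made here for illustration.

```python
import math

def gaussian(x, c, sigma):
    """Gaussian membership grade with center c and width sigma."""
    return math.exp(-((x - c) / sigma) ** 2)

def fbf_vector(s, centers, sigma):
    """Fuzzy basis function vector xi(s): with a singleton fuzzifier and
    product inference, a single input reduces to normalised membership grades."""
    grades = [gaussian(s, c, sigma) for c in centers]
    total = sum(grades) or 1e-12            # guard against an all-zero vector
    return [g / total for g in grades]

def fuzzy_output(s, theta, centers, sigma):
    """Centre-average defuzzified output, i.e. the inner product theta^T xi(s)."""
    return sum(t * x for t, x in zip(theta, fbf_vector(s, centers, sigma)))

# Two equally spaced Gaussian sets on a normalised universe [-1, 1], in the
# spirit of the two-rule-per-joint configuration used later in the simulations.
centers, sigma = [-1.0, 1.0], 1.0
theta = [0.0, 0.0]   # output singletons, to be adapted on-line
print(fuzzy_output(0.3, theta, centers, sigma))
```

Because ξ(s) is fixed, only the vector theta needs to be updated by the adaptive law, which is what keeps the controller linear in its adjustable parameters.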
The Adaptation Mechanism Fuzzy systems are universal function approximators. They can approximate any nonlinear function within a predefined accuracy if enough rules are used. This implies the necessity of using expert knowledge in the form of a large number of rules and suitable membership functions, and usually a trial-and-error procedure is needed to achieve the requested accuracy. Assigning the parameters of the fuzzy systems (or some of them) adaptively greatly facilitates the design (e.g., reduces the number of rules) and enhances the performance (saves computational resources). In this Subsection, we derive an adaptive control law to determine the consequent part (the control actions contained in the parameter vector θ) of the fuzzy system which is used to approximate the unknown nonlinear dynamics of robotic systems. The proposed scheme removes the need for expert knowledge and the tedious work needed to assign the parameters of the fuzzy system. Furthermore, disturbances, approximation errors, and uncertainties are estimated and compensated for on-line, leading to a stable closed-loop system. Lyapunov stability analysis is the most popular approach to prove and evaluate the convergence property of nonlinear controllers, e.g., sliding mode control and fuzzy control systems. Here, Lyapunov analysis is employed to investigate the stability property of the proposed control system. By the
universal approximation theorem [22], there exists a fuzzy controller τfi(s, θi) in the form of (16) to approximate the equivalent controller such that
(18)
where εi is the bounded approximation error. Employing a fuzzy controller with estimated parameters as
(19)
where θ̂i is the estimated value of the parameter vector θi, the SMC in (5) can be rewritten as:
(20)
where the fuzzy controller in (19) is designed to approximate the equivalent controller τeqi(t). Defining the parameter estimation error and using (17), it is obtained that
(21)
An expression for the derivative of the sliding surface can then be written as follows:
(22)
Substituting from (19)-(21):
(23)
Now, assume that M−1(x) can be approximated by a known constant positive definite diagonal matrix. Unlike constant control-gain schemes (see [23,24] for example), this assumption is taken into account as follows. Equation (23) can be rewritten as
(24)
where Ei is the sum of the approximation errors and uncertainties. A control goal is the on-line determination of its estimate; the estimation error is defined by
(25)
Define a Lyapunov function as
(26)
where η1 and η2 are positive constants. Differentiating (25) with respect to time and using (23), an expression for the derivative is obtained. To satisfy the stability condition, the adaptive laws can be selected as
(27)
(28)
Using (20),
(29)
then (22) can be rewritten as
(30)
Therefore, V2 decreases gradually and the control system is stable, which means that the system trajectories converge to the sliding surfaces s(t) while the parameter and estimation errors remain bounded. Now, if we let
(31)
and integrate Γ(t) with respect to time, then it is shown that
(32)
Because V2(0) is bounded and V2(t) is non-increasing and bounded, it implies that
(33)
Furthermore, the derivative of Γ(t) is bounded, so that by Barbalat's lemma [7] it can be shown that Γ(t) → 0; that is, s(t) → 0 as t → ∞. As a result, the proposed AFSMC is asymptotically stable. Hence, the control law can be rewritten as follows:
(34)
In summary, the adaptive fuzzy sliding mode controller (34) has two terms: the fuzzy term given in (19), with the parameters adjusted by (27), and the uncertainty and approximation bound adjusted by (29). By applying these adaptive laws, the AFSMC is model free and can be guaranteed to be stable for any nonlinear system of the form (2). It should be noted that implementing the algorithm implies that both the error dynamics and the control signals are decoupled, since each of them depends only on its respective sliding surface. Unlike SMC, the proposed AFSMC does not require any knowledge of the system functions or their bounds. It adaptively determines and compensates for the
unknown dynamics and external disturbances leading to a stable closed loop system. Figure 2 shows the main elements of the control system.
Figure 2. The closed loop control system utilizing AFSMC.
Remark 1. Since the control laws (6) and (34) contain the sign function, direct application of such control signals to the robotic system may result in chattering caused by the signal discontinuity. To overcome this problem, the control law is smoothed out within a thin boundary layer ϕ [7,21] by replacing the sign function by a saturation function defined as:
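A common definition of such a boundary-layer saturation function (assumed here; the chapter's own expression is not reproduced above) is sketched below.

```python
def sat(s, phi):
    """Boundary-layer saturation: linear inside |s| <= phi, +/-1 outside.
    Using sat(s, phi) in place of sign(s) smooths the switching term and
    suppresses chattering at the cost of a small boundary-layer tracking error."""
    if abs(s) <= phi:
        return s / phi
    return 1.0 if s > 0.0 else -1.0
```

The hitting term is then evaluated as, for example, -K*sat(s, phi) instead of -K*sign(s), with phi chosen as a trade-off between chattering suppression and steady-state precision.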
SIMULATION RESULTS In this Section, we simulate the AFSMC and the SMC on a two-link robot; see Figure 3. Simulation tests are carried out using MATLAB R2009a, version 7.8, under a Windows 7 environment. A two-link robot arm with varying loads is used to generate data in the simulation tests. The arm is treated as a 2-input, 2-output nonlinear system. The control architecture shown in Figure 2 represents the closed-loop system, in which the robot is the plant to be controlled. The detailed descriptions of the matrices M(x), C(x, ẋ), and G(x) in (1) for this robot are given in Appendix A. We consider the state variable vector as the joint positions, i.e., x = [x1, x2]T. They are usually available feedback signals through encoders mounted on the motor shafts.
Figure 3. A two link rigid robot.
The link parameters were chosen such that the mass of link one, m1, and of link two, m2, are randomly varied; rand(1) is a pseudo-random number ranging from 0.0 to 1.0. Figure 4(a) shows their time history. A random disturbance torque has been added to the gravity torque of link two, such that Td = [0, 7 × rand(1)]T; see Figure 4(b). Dynamic and static friction torques were also included.
The friction and disturbance torques were unknown to the algorithm. Random signals were generated by the rand function in MATLAB.
Figure 4. Mass of links (a) and disturbance (b) profiles.
The desired trajectories for x1 and x2 were set as:
Initially, the arm is assumed to be at rest, and the initial link positions were set such that there is an initial position error (in degrees) and an initial velocity error.
The AFSMC has been simulated under the following settings. Two rules were implemented to determine each of the two equivalent control components, i.e., L = 2 in (14). Each rule base has one input, si, and one output, τeqi, where the subscript i = 1, 2 denotes the joint number. This means that a total of 4 rules were used to determine the two equivalent torques, which is a relatively small number of rules. In a similar study on adaptive fuzzy sliding mode control for nonlinear systems [25], the rule base consists of 36 rules for a one-degree-of-freedom system (the inverted pendulum). Coefficients of the sliding surfaces in (12) were picked as c1 = [40, 40]T and c2 = [3, 3]T. After a few simulation tests, the learning rates were adjusted, and the estimated errors in (28) were initialized. As mentioned earlier, the sign function in (6) and (28) has been replaced by the saturation function with a thin boundary layer.
Evolution of the parameter vectors is given in Figure 5(a). Zeros were used to initiate their elements. The superscripts denote the rule number, 1 and 2. The rates of adaptation for the parameter vectors are depicted in Figure 5(b). As it can be noticed, the rate of adaptation of rule 1 is very close to rule 2 for the same joint. This remark was noticed by the authors from an enlarged version of Figure 5(b). Time history of the estimated errors is shown in Figure 6.
Figure 5. Time history of (a) parameter vectors (i.e. control actions) and (b) adaptation rate.
Figure 6. Time history of the estimated errors.
With respect to the SMC in (5), (6), and (10), we have simulated it under the following settings. The control system was initialized with the same initial conditions (i.e., e and ė) as used for the AFSMC. As with the AFSMC, the sign function in (6) was replaced by the saturation function. The gain K of the hitting controller in (6) was set as K = 70I, where I is the 2 × 2 identity matrix. This value of K was selected as the maximum possible one, which means the maximum possible rate of convergence; a larger value results in chattering. To synthesize the SMC, the estimates in (10) were selected as a time-independent matrix and an estimate function involving Fd, Fs, and Td, which are defined above. As with the AFSMC, the friction and disturbance torques were unknown to the control algorithm. Results are shown in Figures 7-12. A close look at these figures shows that the AFSMC was slightly faster than the SMC. Figure 12 depicts the control signals; in the transient phase, the maximum input torques of the SMC exhibit larger values than those of the AFSMC.
Figure 7. The desired joint angles, xd and actual angles x.
Figure 8. Time history of the sliding surfaces.
Figure 9. Phase plots.
Figure 10. Velocity tracking errors.
Figure 11. Trajectory tracking errors.
Figure 12. The input torques.
In order to quantify the performance of the two controllers, we have used the following three criteria.
1) Integral of the absolute value of the error (IAE);
2) Integral of time multiplied by the absolute value of the error (ITAE);
3) Integral of the square value (ISV) of the control input.
Both IAE and ITAE are used as objective numerical measures of tracking performance for an entire error curve, where tf represents the total running time (3 seconds). The IAE criterion gives an intermediate result. In ITAE, time appears as a factor; it will heavily emphasize errors that occur late in time. The criterion ISV shows the consumption of energy. Results are given in Table 1. Table 1. The performance indices.
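For reference, the three criteria can be approximated from sampled data as in the following sketch; the rectangular-rule integration and the variable names are assumptions of this illustration rather than the chapter's own implementation.

```python
def performance_indices(t, e, u):
    """Discrete (rectangular-rule) approximations of the three criteria for
    sampled error e(t) and control input u(t) over the run time t[0]..t[-1]."""
    iae = itae = isv = 0.0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        iae += abs(e[k]) * dt            # integral of |e|
        itae += t[k] * abs(e[k]) * dt    # time-weighted integral of |e|
        isv += u[k] ** 2 * dt            # integral of the squared control input
    return iae, itae, isv
```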
These results differ slightly when the software is run more than once under the same conditions. This is attributed to the random signals involved in the simulation (the masses of the links and the disturbances). Nevertheless, one can clearly notice that the AFSMC outperforms the SMC with respect to all the performance indices. Finally, it can be concluded that all signals of the proposed control system are bounded, the states have converged to the equilibrium points, and the control targets have been met.
CONCLUSIONS In this article, we utilized the universal approximation property of fuzzy systems and the power of SMC theory to compose an AFSMC scheme for robotic systems. Optimal parameters of the fuzzy system and the uncertainty bound are generated on-line. The proposed control scheme has the following
advantages: 1) it does not require the system model; 2) it guarantees the stability of the closed-loop system; 3) it uses a simple rule base (a one-input one-output fuzzy system). The adaptive control law generates the fuzzy rules on-line. Furthermore, the uncertainties are learned on-line and adaptively compensated for. In comparison with SMC, the proposed control scheme is decoupled and has eliminated the assumptions that are usually needed to synthesize a SMC. The control scheme has been simulated on a two-link planar robot. The fuzzy system needs only two rules per joint to determine the control signal. The approach significantly reduces the fuzzy-database burden and the computing time, thereby increasing the attainable sampling frequency for implementation. It should be emphasized that the developed adaptive laws learn the fuzzy rules and uncertainties; zeros have been used to initialize them. Results show the effectiveness of the overall closed-loop system performance.
APPENDIX A Assuming rigidity of links and joints and using the Lagrange method, it can be shown that the equation of motion of the robot arms is given by
where
and g = 9.8 m/sec2 is the acceleration of gravity.
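For readers who want to reproduce the simulations, the structure of these matrices for a generic rigid two-link planar arm (point masses at the link ends, angles measured from the horizontal) is sketched below. This is the standard textbook Lagrangian model and is not necessarily the exact parameterization used in Appendix A; masses, lengths, and sign conventions are assumptions of the sketch.

```python
import math

def two_link_dynamics(q, dq, m1, m2, l1, l2, g=9.8):
    """Generic rigid two-link planar arm with point masses at the link ends.
    Returns (M, Cdq, G): inertia matrix, Coriolis/centrifugal vector C(q,dq)*dq,
    and gravity torque vector, illustrating the structure of the model."""
    q1, q2 = q
    dq1, dq2 = dq
    c2, s2 = math.cos(q2), math.sin(q2)

    m11 = (m1 + m2) * l1 ** 2 + m2 * l2 ** 2 + 2.0 * m2 * l1 * l2 * c2
    m12 = m2 * l2 ** 2 + m2 * l1 * l2 * c2
    m22 = m2 * l2 ** 2
    M = [[m11, m12], [m12, m22]]

    h = m2 * l1 * l2 * s2
    Cdq = [-h * dq2 ** 2 - 2.0 * h * dq1 * dq2,   # Coriolis/centrifugal, joint 1
           h * dq1 ** 2]                          # Coriolis/centrifugal, joint 2

    G = [(m1 + m2) * g * l1 * math.cos(q1) + m2 * g * l2 * math.cos(q1 + q2),
         m2 * g * l2 * math.cos(q1 + q2)]
    return M, Cdq, G
```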
REFERENCES 1.
A. B. Sharkawy, H. El-Awady and K. A. F. Moustafa, “Stable Fuzzy Control for a Class of Nonlinear Systems,” Tranactions of the Institute of Measurement and Control, Vol. 25, No. 3, 2003, pp. 265-278. doi:10.1191/0142331203tm086oa 2. J. Lin, R.-J. Lian, C.-N. Huang and W.-T. Sie, “Enhanced Fuzzy Sliding Mode Controller for Active Suspension Systems,” Mechatronics, Vol. 19, No. 7, 2009, pp. 1178-1190. doi:10.1016/j.mechatronics.2009.03.009 3. P. C. Chen, C. W. Chen and W. L. Chiang, “GA-Based Modified Adaptive Fuzzy Sliding Mode Controller for Nonlinear Systems,” Expert Systems with Applications, Vol. 36, No. 3, 2009, pp. 58725879. doi:10.1016/j.eswa.2008.07.003 4. S. Kaitwanidvilai and M. Parnichkun, “Force Control in a Pneumatic System Using Hybrid Adaptive Neuro-Fuzzy Model Reference Control,” Mechatronics, Vol. 15, No. 1, 2005, pp. 23-41. doi:10.1016/j. mechatronics.2004.07.003 5. L.-C. Hung and H.-Y. Chung, “Decoupled Sliding-Mode with FuzzyNeural Network Controller for Nonlinear Systems,” International Journal of Approximate Reasoning, Vol. 46, No. 1, 2007, pp. 74-97. doi:10.1016/j.ijar.2006.08.002 6. Y. Zhiyong, W. Jiang and M. Jiangping, “Motor-Mechanism Dynamic Model Based Neural Network Optimized Computed Torque Control of a High Speed Parallel Manipulator,” Mechatronics, Vol. 17, 2007, pp. 381-390. doi:10.1016/j.mechatronics.2007.04.009 7. J.-J. Slotine and W. Li, “Applied Nonlinear Control,” Printice-Hall International, Inc., Upper Saddle River, 1991. 8. V. I. Utkin, “Sliding Modes and Their Applications in Variable Structure Systems,” Mir Publishers, Moscow, 1978. 9. V. I. Utkin, “Sliding Modes in Control and Optimization,” SpringerVerlag, Berlin, 1992. 10. A. Harifi, A. Aghagolzadeh, G. Alizadel and M. Sadeghi, “Designing a Sliding Mode Controller for Slip Control of Antilock Brake Systems,” Transportation Research Part C, Vol. 16, No. 6, 2008, pp. 731-741. doi:10.1016/j.trc.2008.02.003 11. B. Yao, S. P. Chan and D. Wang, “Variable Structure Adaptive Motion and Force Control of Robot Manipulator,” Automatica, Vol. 30, No. 9, 1994, pp. 1473-1477. doi:10.1016/0005-1098(94)90014-0
12. B. Yao, S.P. Chan and D. Wang, “Unified Formulation of Variable Structure Control Schemes for Robot Manipulators,” IEEE Transactions on Automatic Control, Vol. 39, No. 2, 1994, pp. 371-376. doi:10.1109/9.272337 13. K. Passino and S. Yurkovich, “Fuzzy Control,” Addison- Wesley Longman, Inc., Boston, 1998. 14. R. Ordonez, J. Zumberge, J. T. Spooner and K. M. Passino, “Adaptive Fuzzy Control: Experiments and Comparative Analysis,” IEEE Transactions on Fuzzy Systems, Vol. 5, No. 2, 1997, pp. 167-188. doi:10.1109/91.580793 15. F. H. Hsiao, C. W. Chen, Y. W. Liang, S. D. Xu and W. L. Chiang, “T-S Fuzzy Controllers for Nonlinear Interconnected TMD Systems with Multiple Time Delays,” IEEE Transactions on Circuits and Systems, Vol. 52, No. 9, 2005, pp. 1883-1893. doi:10.1109/TCSI.2005.852492 16. F. H. Hsiao, J. D. Hwang, C. W. Chen and Z. R. Tsai, “Robust Stabilization of Nonlinear Multiple Time-Delay Large-Scale Systems via Decentralized Fuzzy Control,” IEEE Transactions on Fuzzy Systems, Vol. 13, No. 1, 2005, pp. 152-163. doi:10.1109/TFUZZ.2004.836067 17. H. F. Ho, Y. K. Wong and A. B. Rad, “Adaptive Fuzzy Sliding Mode Control with Chattering Elimination for SISO Systems,” Simulation Modeling Practice and Theory, Vol. 17, No. 7, 2009, pp. 1199-1210. doi:10.1016/j.simpat.2009.04.004 18. C.-C. Cheng and S.-H. Chien, “Adaptive Sliding Mode Controller Design Based on T-S Fuzzy System Models,” Automatica, Vol. 42, No. 1, 2006, pp. 1005-1010. doi:10.1016/j.automatica.2006.02.016 19. H. F. Ho, Y. K. Wong and A. B. Rad, “Robust Fuzzy Tracking Control for Robotic Manipulators,” Simulation Modelling Practice and Theory, Vol. 15, No. 7, 2007, pp. 801-816. doi:10.1016/j.simpat.2007.04.008 20. T.-H. S. Li and Y.-C. Huang, “MIMO Adaptive Fuzzy Terminal SlidingMode Controller for Robotic Manipulators,” Information Sciences, Vol. 180, No. 23, 2010, pp. 4641-4660. doi:10.1016/j.ins.2010.08.009 21. Y. Stephaneko and C.-Y. Su, “Variable Structure Control of Robot Manipulators with Nonlinear Sliding Manifolds,” International Journal of Control, Vol. 58, No. 2, 1993, pp. 285-300. doi:10.1080/00207179308923003 22. L. X. Wang, “Adaptive Fuzzy Systems and Control,” PTR Prentice Hall, Upper Saddle River, 1994.
23. S.-J. Huang and W.-C. Lin, “Adaptive Fuzzy Controller with Sliding Surface for Vehicle Suspension Control,” IEEE Transactions on Fuzzy Systems, Vol. 11, No. 4, 2003, pp. 550-559. doi:10.1109/ TFUZZ.2003.814845 24. A. Poursamad and A. H. Davaie-Markazi, “Robust Adaptive Fuzzy Control of Unknown Chaotic Systems,” Applied Soft Computing, Vol. 9, No. 3, 2009, pp. 970-976. doi:10.1016/j.asoc.2008.11.014 25. M. Roopaei, M. Zolghadri and S. Meshksar, “Enhanced Adaptive Fuzzy Sliding mode Control for Uncertain Nonlinear Systems,” Communications in Nonlinear Science and Numerical Simulation, Vol. 14, No. 9-10, 2009, pp. 3670-3681. doi:10.1016/j.cnsns.2009.01.029
Chapter 6
Adaptive Backstepping Fuzzy Control Based on Type-2 Fuzzy System
Li Yi-Min1, Yue Yang1, and Li Li2
1 Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu 212013, China
2 School of Computer Science and Telecommunication Engineering, Zhenjiang, Jiangsu 212013, China
ABSTRACT A novel indirect adaptive backstepping control approach based on type-2 fuzzy system is developed for a class of nonlinear systems. This approach adopts type-2 fuzzy system instead of type-1 fuzzy system to approximate the unknown functions. With type-reduction, the type-2 fuzzy system is replaced by the average of two type-1 fuzzy systems. Ultimately, the adaptive laws, by means of backstepping design technique, will be developed to adjust the parameters to attenuate the approximation error and external disturbance. According to stability theorem, it is proved that the proposed
Citation: Ll Yi-Min, Yue Yang, Li Li, “Adaptive Backstepping Fuzzy Control Based on Type-2 Fuzzy System”, Journal of Applied Mathematics, vol. 2012, Article ID 658424, 27 pages, 2012. https://doi.org/10.1155/2012/658424. Copyright: © 2012 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Type-2 Adaptive Backstepping Fuzzy Control (T2ABFC) approach can guarantee the global stability of the closed-loop system and ensure that all the signals are bounded. Compared with the existing Type-1 Adaptive Backstepping Fuzzy Control (T1ABFC), because of its advantages in handling numerical and linguistic uncertainties, T2ABFC has the potential to produce better performance in many respects, such as stability and resistance to disturbances. Finally, a biological simulation example is provided to illustrate the feasibility of the control scheme proposed in this paper.
INTRODUCTION Early results on adaptive control for nonlinear systems are usually obtained based on the assumption that nonlinearities in systems satisfied matching conditions [1, 2]. To control the nonlinear systems with mismatched conditions, backstepping design technique has been developed in [3], and the papers [4, 5] addressed some robust adaptive control results by backstepping design. So far, backstepping approach has become one of the most popular design methods for a series of nonlinear systems. However, in many real plants, not only the nonlinearities in the system are unknown but also prior knowledge of the bounds of these nonlinearities is unavailable. In order to stabilize those nonlinear systems, many Approximator-Based Adaptive Backstepping Control (ABABC) methods have been developed by combining the concepts of adaptive backstepping and several universal approximators, like the Adaptive Backstepping Fuzzy Control (ABFC) [6– 9], Adaptive Backstepping Neural Network Control (ABNNC) [10–13], and Adaptive Backstepping Wavelet Control (ABWC) [14, 15]. So far, lots of important results on ABFC have been reported. In [16], direct ABFC scheme has been proposed by combining the modified integral Lyapunov functions and the backstepping technique. In [17], the author introduced the further extended ABFC scheme to the time-delay setting. Recently, in [18], the fuzzy systems are used as feedforward compensators to model some system functions depending on the reference signal. Based on backstepping technique, in [19], adaptive fuzzy controller has been proposed for temperature control in a general class of continuous stirred tank reactors. In [20], the author developed a fuzzy adaptive backstepping design procedure for a class of nonlinear systems with nonlinear uncertainties, unmodeled dynamics, and dynamics disturbances. In [21], a fuzzy adaptive backstepping output feedback control approach is developed for a class of MIMO nonlinear systems with unmeasured states.
However, all the existing ABFC schemes have the common problem that they cannot fully handle or accommodate the uncertainties, as they use precise type-1 fuzzy sets. In general, uncertainty enters the rules in the following three possible ways [22–28]: (i) the words that are used in antecedents and consequents of rules can mean different things to different people; (ii) consequents obtained by polling a group of experts will often be different for the same rule, because the experts will not necessarily be in agreement; (iii) noisy training data. So when something is uncertain and the circumstances are fuzzy, we have trouble determining the membership grade even as a crisp number in [0, 1]. To overcome this drawback, we consider using fuzzy sets of type-2 in this paper. The concept of Type-2 Fuzzy Sets (T2FSs) was first introduced in [29] as an extension of the well-known ordinary fuzzy set, the Type-1 Fuzzy Set (T1FS). A T2FS is characterized by a fuzzy membership function; that is, the membership grade for each element is itself a fuzzy set in [0, 1]. The membership functions of T2FSs are three-dimensional and include a Footprint of Uncertainty (FOU), which is a new third dimension of T2FSs, and the FOU provides an additional degree of freedom. Compared to a Type-1 Fuzzy Logic System (T1FLS), a Type-2 Fuzzy Logic System (T2FLS) therefore has the following advantages: (i) as T2FSs are able to handle numerical and linguistic uncertainties, a T2FLC based on T2FSs has the potential to produce a better performance than a T1FLC; (ii) using T2FSs to represent the FLC inputs and outputs also results in a reduction of the FLC rule base compared to using T1FSs; (iii) in a T2FLC each input and output is represented by a large number of T1FSs, which allows for greater accuracy in capturing the subtle behavior of the user in the environment; (iv) T2FSs enable us to handle the uncertainty encountered when trying to determine the exact membership functions for the fuzzy sets associated with the inputs and outputs of the FLC. The papers [30] by Hagras and [31] by Melin and Castillo were the first two papers on T2FLC. Subsequently, Castillo, Hagras, and Sepulveda presented T2FLC designs, respectively; for details see [32–37]. Also, some results on T2 fuzzy sliding-mode controllers have been presented in [38–40]. Moreover, in recent years, indirect adaptive interval T2 fuzzy control for SISO nonlinear systems has been proposed in [41] and direct adaptive interval T2 fuzzy control has been developed in [26] for MIMO nonlinear systems. Robust adaptive tracking control of multivariable nonlinear systems based on an interval T2 fuzzy approach is developed in [42]. Adaptive control of a two-axis motion control system using an interval T2 fuzzy neural network is
presented in [43]. The author introduced interval T2 fuzzy logic congestion control method for video streaming across IP networks in [44]. And in [45], optimization of interval T2 fuzzy logic controllers for a perturbed autonomous wheeled mobile robot has been developed. Inspired by all of that, a novel ABFC approach based on T2FSs is drawn in this paper. Compared with traditional T1ABFC, T2ABFC can fully handle or accommodate the uncertainties and achieve higher performances. So, T2ABFC method proposed in this paper succeeds in solving the control problem of a series of nonlinear systems with not only mismatched conditions but also complicated uncertainties. The rest of this paper is organized as follows. First is the problem formulation, with some preliminaries given in Section 2, and in Section 3, a brief introduction of the interval T2FLS. Indirect adaptive backstepping fuzzy controller design using interval T2FLS is presented in Section 4. In Section 5, a simulation example is provided to illustrate the feasibility of the proposed control scheme. In Section 6, we conclude the work of the paper.
PROBLEM FORMULATION Consider a class of SISO nonlinear systems described by the differential equations as
(2.1)
where x is the state vector and u ∈ R and y ∈ R are the input and output of the system, respectively; the system nonlinearities are unknown smooth functions. The control objective of this paper is formulated as follows: for a given bounded reference signal in a known compact set D ⊂ R, with continuous and bounded derivatives up to order ρ, utilize a fuzzy logic system and parameter adaptive laws such that (1) all the signals involved in the closed-loop system are uniformly ultimately bounded, (2) the tracking errors converge to a small neighborhood around zero, and (3) the closed-loop system is globally stable.
INTERVAL TYPE-2 FUZZY LOGIC SYSTEMS In this section, the interval type-2 fuzzy set and the inference of the type-2 fuzzy logic system are presented. Formally a type-2 fuzzy set is characterized by a type-2 membership function [22], where 𝑥 ∈ 𝑋 is the primary variable and 𝑢 ∈ 𝐽𝑥 ⊆ [0,1] is the secondary variable: (3.1)
in which
can also be expressed as follows [22]: (3.2)
Due to the facts that an Interval Type-2 Fuzzy Logic Control (IT2FLC) is computationally far less intensive than a general T2FLC and thus better suited for real-time computation in embedded computational artifacts, our learning and adaptation technique use an IT2FLC (using interval T2FSs to represent the inputs and outputs). The IT2FSs, currently the most widely used kind of T2FSs, are characterized by IT2 membership functions in which the secondary membership grades are equal to 1. The theoretic background of IT2 FLS can be seen in [23, 46–50]. It is described as
(3.3) Also, a Gaussian primary membership function with uncertain mean and fixed standard deviation having an interval type-2 secondary membership function can be called an interval type-2 Gaussian membership function. Consider the case of a Gaussian primary membership function having an uncertain mean in [𝑚1, 𝑚2] and a fixed standard deviation 𝜎. It can be expressed as (3.4) Uncertainty about can be expressed by the union of all the primary memberships, and is bounded by an upper membership function and a lower membership function [22–24], which is called the FOU of :
where
(3.5)
(3.6) The concept of FOU, associated with the concepts of lower and upper membership functions, models the uncertainties in the shape and position of the T1FSs. The distinction between T1 and T2 rules is associated with the nature of the membership functions; the structure of the rules remains exactly the same in the T1 case, but all the sets involved are T2 now. We can consider a T2FLS having 𝑛 inputs 𝑥1 ∈ 𝑋1,…, ∈ 𝑋𝑛 and one output 𝑦 ∈ 𝑌, and assuming there are 𝑀 rules, the 𝑖th rule of the IT2 SMC can be described as
(3.7)
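For the interval type-2 Gaussian set with uncertain mean in [m1, m2] and fixed standard deviation σ described in (3.4)-(3.6), the FOU bounds are commonly computed as in the sketch below. The exact Gaussian convention (the 1/2 factor in the exponent) and the function names are assumptions of this illustration; the switching structure of the upper and lower membership functions follows the standard interval type-2 literature rather than being copied from the chapter's (unreproduced) equations.

```python
import math

def gauss(x, m, sigma):
    """Primary Gaussian membership value with mean m and standard deviation sigma."""
    return math.exp(-0.5 * ((x - m) / sigma) ** 2)

def it2_gaussian_bounds(x, m1, m2, sigma):
    """Upper and lower membership grades (the FOU bounds) of an interval
    type-2 Gaussian set with uncertain mean in [m1, m2] and fixed sigma."""
    # upper membership function: equal to 1 between the two means
    if x < m1:
        upper = gauss(x, m1, sigma)
    elif x > m2:
        upper = gauss(x, m2, sigma)
    else:
        upper = 1.0
    # lower membership function: switches between the two shifted Gaussians
    # at the midpoint of the uncertain-mean interval
    lower = gauss(x, m2, sigma) if x <= (m1 + m2) / 2.0 else gauss(x, m1, sigma)
    return lower, upper
```

Every crisp input x therefore produces an interval [lower, upper] of membership grades, which is exactly the extra degree of freedom the FOU provides.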
Since the output of the inference-engine is a T2FS, it must be typereduced and the defuzzifier is used to generate a crisp output. The typereducer is an extension of T1 defuzzifier obtained by applying the extension principle [29]. There are many kinds of type-reduction methods [23, 24, 51, 52], such as the centroid, center of sets, center of sums, and height type-reduction, and these are elaborated upon in [25]. As in [23], the most commonly used type-reduction method is the center of sets type-reducer, as it has reasonable computational complexity that lies between the computationally expensive centroid type-reduction and the simple height and modified height type-reduction which have a problem when only one rule fires [25]. The type-reduced set using the center of sets type-reduction can be expressed as follows: (3.8) where 𝑌cos(𝑥) is an interval output set determined by its left-most point 𝑦𝑙 and its right-most point 𝑦𝑟, and
. In the meantime, an IT2FLS
with singleton fuzzification and meet under minimum or product 𝑡-norm and
can be obtained as
(3.9) and
(3.10) Also, 𝑦 ∈ 𝑌 and is the centroid of the IT2 consequent set centroid of a T2FS, and for any value 𝑦 ∈ 𝑌cos, 𝑦 can be expressed as 𝑖
𝑖
127
, the
(3.11) where 𝑦 is a monotonic increasing function with respect to 𝑦𝑖. Also, 𝑦𝑙 is the minimum associated only with , and 𝑦𝑟 is the maximum associated only with . Note that 𝑦𝑙 and 𝑦𝑟 depend only on the mixture of or values. Therefore, the left-most point 𝑦𝑙 and the right-most point 𝑦𝑟 can be expressed as a Fuzzy Basis Function (FBF) expansion, that is, (3.12)
and
(3.13) respectively,
where ,
and
In order to compute 𝑦𝑙 and 𝑦𝑟, the Karnik-Mendel iterative procedure is needed [25, 51]; its properties have been shown in [25, 26, 49, 51]. For illustrative purposes, we briefly provide the computation procedure for 𝑦𝑟. Without loss of generality, assume that the consequent centroids in (3.13) are arranged in ascending order.
Step 1. Compute 𝑦𝑟 in (3.13) by initially setting each firing strength, 𝑖 = 1, …, 𝑀, to the average of its lower and upper values, which have been precomputed by (3.10), and denote the result by 𝑦𝑟′.
Step 2. Find 𝑅 (1 ≤ 𝑅 ≤ 𝑀 − 1) such that 𝑦𝑟′ lies between the 𝑅th and the (𝑅 + 1)th consequent centroids.
Step 3. Compute 𝑦𝑟 in (3.13) again, using the lower firing strengths for 𝑖 ≤ 𝑅 and the upper firing strengths for 𝑖 > 𝑅, and denote the result by 𝑦𝑟″.
Step 4. If 𝑦𝑟″ is not equal to 𝑦𝑟′, then go to Step 5; if 𝑦𝑟″ is equal to 𝑦𝑟′, then stop and set 𝑦𝑟 = 𝑦𝑟″.
Step 5. Set 𝑦𝑟′ equal to 𝑦𝑟″ and return to Step 2.
The switch point 𝑅 determined by the above algorithm separates the rules into two sides, one side using the lower firing strengths and the other side using the upper firing strengths. Therefore, 𝑦𝑟 can be expressed as (3.14), with the corresponding weights defined accordingly.
The procedure to compute 𝑦𝑙 is similar to that for 𝑦𝑟. In Step 2, it determines 𝐿 (1 ≤ 𝐿 ≤ 𝑀 − 1) such that 𝑦𝑙′ lies between the 𝐿th and the (𝐿 + 1)th consequent centroids. In Step 3, the upper firing strengths are used for 𝑖 ≤ 𝐿 and the lower firing strengths for 𝑖 > 𝐿. Therefore, 𝑦𝑙 can be expressed as (3.15), with the corresponding weights defined accordingly. We defuzzify the interval set by using the average of 𝑦𝑙 and 𝑦𝑟; hence, the defuzzified crisp output becomes (3.16).
Lemma 3.1 (Wang [53]). Let 𝑓(𝑥) be a continuous function defined on a compact set Ω. Then for any constant 𝜀 > 0, there exists a fuzzy logic system of the form (3.16) such that (3.17) holds.
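The iterative procedure just described translates directly into code. The sketch below is a generic Karnik-Mendel center-of-sets type-reducer followed by the average defuzzifier of (3.16); the firing intervals and consequent centroids in the example are illustrative placeholders, not values from the chapter.

```python
import numpy as np

def km_endpoint(y, f_low, f_up, right=True):
    """Karnik-Mendel iteration for one endpoint of the type-reduced set.
    y: consequent centroids, f_low/f_up: lower/upper firing strengths."""
    order = np.argsort(y)
    y, f_low, f_up = y[order], f_low[order], f_up[order]
    w = (f_low + f_up) / 2.0                           # Step 1: average weights
    yk = np.dot(w, y) / np.sum(w)
    while True:
        k = np.searchsorted(y, yk, side="right") - 1   # Step 2: switch point
        k = min(max(k, 0), len(y) - 2)
        if right:                                       # Step 3: reassign weights
            w = np.concatenate([f_low[:k + 1], f_up[k + 1:]])
        else:
            w = np.concatenate([f_up[:k + 1], f_low[k + 1:]])
        y_new = np.dot(w, y) / np.sum(w)
        if np.isclose(y_new, yk):                       # Step 4: converged, stop
            return y_new
        yk = y_new                                      # Step 5: iterate again

def it2_defuzzify(yl_cons, yr_cons, f_low, f_up):
    yl = km_endpoint(np.asarray(yl_cons, float), np.asarray(f_low, float),
                     np.asarray(f_up, float), right=False)
    yr = km_endpoint(np.asarray(yr_cons, float), np.asarray(f_low, float),
                     np.asarray(f_up, float), right=True)
    return yl, yr, 0.5 * (yl + yr)                      # crisp output as in (3.16)

if __name__ == "__main__":
    # Illustrative firing intervals and rule centroids for M = 3 rules.
    print(it2_defuzzify([-1.0, 0.0, 1.0], [-0.8, 0.2, 1.2],
                        [0.2, 0.3, 0.1], [0.6, 0.9, 0.4]))
```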
ADAPTIVE BACKSTEPPING FUZZY CONTROLLER DESIGN USING IT2FLS
In this section, our objective is to use the IT2FLS to approximate the nonlinear functions. With type-reduction, the IT2FLS is replaced by the average of two T1FLSs. Ultimately, the adaptive laws, by means of the backstepping design technique, will be developed to adjust the parameters so as to attenuate the approximation error and external disturbance. To begin with, some assumptions are given as follows.
Assumption 4.1. There exist positive constants such that the stated bounds hold.
Assumption 4.2. There exists a positive constant such that the stated bound holds.
Assumption 4.3. Define the optimal parameter vectors as (4.1), where Ω𝑖 and 𝑈𝑖 are compact regions for the corresponding arguments, respectively. The fuzzy-logic-system minimum approximation errors 𝜔𝑖 are defined as (4.2).
The T2ABFC method proposed in this paper is summarized in Theorem 4.4.
Theorem 4.4. Suppose Assumptions 4.1–4.3 hold. Then the fuzzy adaptive output tracking design described by (2.1), together with the control law and the parameter adaptive laws based on the T2FLS, guarantees that the closed-loop system is globally uniformly ultimately bounded, that the output tracking error converges to a small neighborhood of the origin, and that the design is resistant to external disturbances. (𝑘𝑛, 𝛾𝑛𝑙, 𝛾𝑛𝑟, 𝑐𝑛𝑙, and 𝑐𝑛𝑟 are all design parameters.)
The detailed design and proof procedures are described in the following steps. Step 1. Define the tracking error for the system as
(4.3) The time derivative of 𝑒1 is
(4.4)
Take 𝑥2 as a virtual control, and define
(4.5)
where 𝑘1 is a positive constant.
Since the system nonlinearities are unknown, the ideal controller is not available in practice. By Lemma 3.1, fuzzy logic systems are universal approximators, so we can assume that the unknown function can be approximated by the following type-2 fuzzy logic system, and we obtain (4.6). Express 𝑥2 as 𝑥2 = 𝑒2 + 𝛼1 and define
(4.7)
then the time derivative of 𝑒1 is
(4.8) where
.
Consider the following Lyapunov function:
then the time derivative of 𝑉1 is
(4.9)
(4.10) Choose the intermediate adaptive laws as
(4.11) where 𝑐1𝑙 and 𝑐1𝑟 are given positive constants. Substituting (4.11) into (4.10) yields (4.12). We can obtain that
(4.13)
From (4.12), (4.13), it follows that
Step 2. Differentiating 𝑒2 yields
(4.14)
(4.15)
Take 𝑥3 as a virtual control, and define
(4.16)
where 𝑘2 is a positive constant. From (4.7), we obtain
(4.17) Since the system nonlinearities are unknown, the ideal controller is not available in practice, so we can assume that the unknown function can be approximated by the following type-2 fuzzy logic system, and we obtain (4.18). Express 𝑥3 as 𝑥3 = 𝑒3 + 𝛼2; the time derivative of 𝑒2 is then (4.19), with the residual terms defined analogously to Step 1. Consider the following Lyapunov function
(4.20) then the time derivative of 𝑉2 is (4.21). Choose the intermediate adaptive laws as (4.22). Substituting (4.22) into (4.21) yields
(4.23) Step i (3 ≤ 𝑖 ≤ 𝑛 − 1)
A similar procedure is employed recursively at each step. By defining (4.24) the time derivative of 𝑒𝑖 is
(4.25)
Take 𝑥𝑖+1 as a virtual control, and define (4.26) where 𝑘𝑖 is a positive constant:
(4.27) The unknown function can be approximated by the following type-2 fuzzy logic system, and we can obtain
(4.28) Express 𝑥𝑖+1 as 𝑥𝑖+1 = 𝑒𝑖+1 + 𝛼𝑖 and the time derivative of 𝑒𝑖 is (4.29)
where the residual terms are defined analogously to the previous steps.
Consider the following Lyapunov function:
(4.30) then the time derivative of 𝑉𝑖 is
(4.31) Choose the intermediate adaptive laws as
(4.32) Substituting (4.32) into (4.31) yields
(4.33)
Step n. In the final design step, the actual control input 𝑢 appears. Defining (4.34)
the time derivative of 𝑒𝑛 is
(4.35)
Define the actual control input 𝑢 as
(4.36)
where 𝑘𝑛 is a positive constant. Choose the whole Lyapunov function as
(4.37) then the time derivative of 𝑉 is
(4.38)
Choose the actual adaptive laws as
(4.39) Substituting (4.39) into (4.38) yields (4.40). If the design parameter is chosen such that the condition in (4.41) holds, where 𝜌 is a positive constant, then (4.40) becomes (4.42)
(4.43) From (4.43) we have (4.44). It can be shown that the signals of the closed-loop system, including Θ𝑙(𝑡), Θ𝑟(𝑡), and 𝑢(𝑡), are globally uniformly ultimately bounded. In order to make the tracking error converge to a small neighborhood around zero, the parameters 𝜌 and 𝛿 should be chosen appropriately; the bound can then be made as small as desired, and there exists a finite time 𝑇 after which the tracking error remains inside this neighborhood.
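Before turning to the simulation, the recursive structure of Steps 1–n can be summarized in code. The sketch below is a deliberately simplified surrogate (a second-order system, type-1-style normalized basis functions, illustrative gains, and a σ-modification-style adaptive law standing in for the chapter's exact laws); it is meant only to show how tracking errors, virtual controls, and parameter updates chain together, and it omits the derivative-of-virtual-control feedforward for brevity.

```python
import numpy as np

def fbf(x, centers, width=1.0):
    """Normalized Gaussian fuzzy basis functions xi(x)."""
    w = np.exp(-0.5 * ((x - centers) / width) ** 2)
    return w / np.sum(w)

def backstepping_step(x, y_ref, dy_ref, theta, k, gamma, c, centers, dt):
    """One control/adaptation update for a 2nd-order strict-feedback system.
    Illustrative structure only: u = -k2*e2 - e1 - theta2^T xi2, with
    sigma-modified adaptive laws theta_i += dt*(gamma*e_i*xi_i - c*theta_i)."""
    e1 = x[0] - y_ref
    xi1 = fbf(x[0], centers)
    alpha1 = -k[0] * e1 - theta[0] @ xi1 + dy_ref       # virtual control
    e2 = x[1] - alpha1
    xi2 = fbf(x[1], centers)
    u = -k[1] * e2 - e1 - theta[1] @ xi2                # actual control input
    theta[0] += dt * (gamma * e1 * xi1 - c * theta[0])  # adaptive laws
    theta[1] += dt * (gamma * e2 * xi2 - c * theta[1])
    return u, theta

if __name__ == "__main__":
    centers = np.linspace(-2, 2, 5)
    theta = np.zeros((2, centers.size))
    u, theta = backstepping_step(np.array([0.4, -0.1]), y_ref=0.0, dy_ref=0.0,
                                 theta=theta, k=[2.0, 2.0], gamma=5.0, c=0.1,
                                 centers=centers, dt=0.01)
    print("u =", u)
```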
SIMULATION
In this section, we provide a biological simulation example to illustrate the feasibility of the control scheme proposed in this paper. In recent years, interest in adaptive control systems has increased rapidly along with interest and progress in control topics. Adaptive control has a variety of specific meanings, but it often implies that the system is capable of accommodating unpredictable environmental changes, whether these changes arise within the system or external to it. Adaptation is a fundamental characteristic of living organisms such as prey-predator systems and many other biological models, since these systems attempt to maintain physiological equilibrium in the midst of changing environmental conditions.
Background Knowledge
The Salton Sea, which is located in the southeast desert of California, came into the limelight due to deaths of fish and fish-catching birds on a massive scale. Recently, Chattopadhyay and Bairagi [54] proposed and analyzed an eco-epidemiological model of the Salton Sea. We assume that there are two populations.
• The prey population, Tilapia fish, whose population density is denoted by 𝑁, which is the number of Tilapia fish per unit designated area. In the absence of bacterial infection, the fish population grows according to a logistic law with carrying capacity 𝐾 (𝐾 ∈ 𝑅+) and intrinsic birth rate constant 𝑟 (𝑟 ∈ 𝑅+), such that (5.1)
• In the presence of bacterial infection, the total fish population 𝑁 is divided into two classes, namely, the susceptible fish population, denoted by 𝑆, and the infected fish population, denoted by 𝐼. Therefore, at any time 𝑡, the total density of the prey (i.e., fish) population is (5.2)
• Only the susceptible fish population 𝑆 is capable of reproducing with the logistic law, and the infected fish population 𝐼 dies before having the capacity of reproduction. However, the infected fish 𝐼 still contribute with 𝑆 to population growth towards the carrying capacity. Liu et al. [55] concluded that the bilinear mass action incidence rate, due to saturation or multiple exposures before infection, could lead to a nonlinear incidence rate 𝜆𝑆𝑝𝐼𝑞 with 𝑝 and 𝑞 near 1 and without a periodic forcing term, which has a much wider range of dynamical behaviors in comparison to the bilinear incidence rate 𝜆𝑆𝐼. Here, 𝜆 ∈ 𝑅+ is the force of infection or rate of transmission. Therefore, the evolution equation for the susceptible fish population 𝑆 can be written as (5.3)
• The predator population, Pelican birds, whose population density is denoted by 𝑃, which is the number of birds per unit designated area. It is assumed that Pelicans cannot distinguish between the infected and healthy fish; they consume the fish that are readily available. Since the prey population is infected by a disease, infected prey are weakened and become easier to predate, while susceptible (healthy) prey easily escape predation. Considering this fact, it is assumed that the Pelicans mostly consume the infected fish only. The natural death rate of infected prey (not due to predation) is denoted by 𝜇 (𝜇 ∈ 𝑅+), 𝑑 is the total death rate of the predator population (including natural death and death due to predation of infected prey), 𝑚 is the search rate, 𝜃 is the conversion factor, and 𝑎 is the half saturation coefficient.
The system appears to exhibit chaotic behavior for a range of parametric values. The range of the system parameters for which the subsystems converge to limit cycles is determined. In Figure 1, a typical chaotic attractor for the model system is obtained for the following parameter values (see [54]): (5.4)
Figure 1. Typical chaotic attractor for the model system.
Practically, the populations of prey and predator are supposed to be stable or constant, so a controller will be used to achieve the intended target. In this model, we add a controller to the third differential equation; that is to say, we control the population of Tilapia fish to be constant by changing the population of Pelican birds.
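To make the open-loop model concrete, the sketch below integrates a susceptible-infected-predator system assembled from the assumptions above (logistic growth of S, incidence taken as bilinear for simplicity, predation of infected fish with half-saturation a). The parameter values are placeholders chosen for illustration, since the values in (5.4) are not reproduced here.

```python
import numpy as np

def salton_sea_rhs(state, p):
    """S: susceptible fish, I: infected fish, P: pelicans."""
    S, I, P = state
    dS = p["r"] * S * (1.0 - (S + I) / p["K"]) - p["lam"] * S * I
    dI = p["lam"] * S * I - p["m"] * I * P / (p["a"] + I) - p["mu"] * I
    dP = p["theta"] * p["m"] * I * P / (p["a"] + I) - p["d"] * P
    return np.array([dS, dI, dP])

def simulate(state0, p, dt=0.01, steps=6000):
    traj = [np.array(state0, dtype=float)]
    for _ in range(steps):
        s = traj[-1]
        k1 = salton_sea_rhs(s, p)                  # classical RK4 integration
        k2 = salton_sea_rhs(s + 0.5 * dt * k1, p)
        k3 = salton_sea_rhs(s + 0.5 * dt * k2, p)
        k4 = salton_sea_rhs(s + dt * k3, p)
        traj.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

if __name__ == "__main__":
    # Placeholder parameter values for illustration only (not those of (5.4)).
    params = dict(r=3.0, K=100.0, lam=0.06, m=0.9, a=15.0,
                  mu=0.24, theta=0.4, d=0.09)
    out = simulate([40.0, 10.0, 5.0], params)
    print("final (S, I, P):", out[-1].round(3))
```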
From the above assumptions, and substituting 𝑆, 𝐼, and 𝑃 by 𝑥1, 𝑥2, and 𝑥3, respectively, we can write down the following differential equations:
(5.5) The backstepping design algorithm for type-2 adaptive fuzzy control is proposed as follows. Step 1. Define the type-2 membership functions as
(5.6) then the fuzzy basis functions are obtained as
(5.7) Step 2. We assume that there exist some linguistic rules for the unknown functions 1, 2, and 3, respectively. The unknown function 1: (5.8) The unknown function 2:
(5.9) The unknown function 3: (5.10)
The fuzzy rules of the Unknown Function 1 (UF1) are as follows.
R1: if 𝑥1 is Small (S), then UF1 is Powerful (P).
R2: if 𝑥1 is Medium (M), then UF1 is Weak (W).
R3: if 𝑥1 is Large (L), then UF1 is Modest (M).
The fuzzy rules of the Unknown Function 2 (UF2) are as follows.
R1: if 𝑥1 is S, 𝑥2 is S, then UF2 is W.
R2: if 𝑥1 is S, 𝑥2 is M, then UF2 is M.
R3: if 𝑥1 is S, 𝑥2 is L, then UF2 is P.
R4: if 𝑥1 is M, 𝑥2 is S, then UF2 is W.
R5: if 𝑥1 is M, 𝑥2 is M, then UF2 is M.
R6: if 𝑥1 is M, 𝑥2 is L, then UF2 is P.
R7: if 𝑥1 is L, 𝑥2 is S, then UF2 is P.
R8: if 𝑥1 is L, 𝑥2 is M, then UF2 is M.
R9: if 𝑥1 is L, 𝑥2 is L, then UF2 is W.
The fuzzy rules of the Unknown Function 3 (UF3) are as follows.
R1: if 𝑥2 is S, 𝑥3 is S, then UF3 is M.
R2: if 𝑥2 is S, 𝑥3 is M, then UF3 is W.
R3: if 𝑥2 is S, 𝑥3 is L, then UF3 is P.
R4: if 𝑥2 is M, 𝑥3 is S, then UF3 is W.
R5: if 𝑥2 is M, 𝑥3 is M, then UF3 is M.
R6: if 𝑥2 is M, 𝑥3 is L, then UF3 is P.
R7: if 𝑥2 is L, 𝑥3 is S, then UF3 is M.
R8: if 𝑥2 is L, 𝑥3 is M, then UF3 is W.
R9: if 𝑥2 is L, 𝑥3 is L, then UF3 is P.
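These linguistic rule tables can be stored directly as data. The sketch below encodes the nine UF2 rules and evaluates a type-1 simplification (product inference with triangular input sets and a weighted-average output); the input ranges and the numeric centers assigned to the labels W/M/P are hypothetical, not the type-2 definitions of (5.6).

```python
def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Hypothetical input sets for x1 and x2 (Small, Medium, Large).
SETS = {"S": (0.0, 10.0, 50.0), "M": (10.0, 50.0, 90.0), "L": (50.0, 90.0, 100.0)}
# Hypothetical crisp centers for the output labels Weak/Modest/Powerful.
OUT = {"W": -1.0, "M": 0.0, "P": 1.0}

# The nine UF2 rules transcribed from the table above: (x1 label, x2 label) -> output.
UF2_RULES = {("S", "S"): "W", ("S", "M"): "M", ("S", "L"): "P",
             ("M", "S"): "W", ("M", "M"): "M", ("M", "L"): "P",
             ("L", "S"): "P", ("L", "M"): "M", ("L", "L"): "W"}

def eval_uf2(x1, x2):
    num, den = 0.0, 0.0
    for (l1, l2), out in UF2_RULES.items():
        w = tri(x1, *SETS[l1]) * tri(x2, *SETS[l2])   # product firing strength
        num += w * OUT[out]
        den += w
    return num / den if den > 0 else 0.0

if __name__ == "__main__":
    print(eval_uf2(20.0, 80.0))   # mostly "S"/"L" inputs -> leans toward Powerful
```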
Step 3. Specify positive design parameters as follows:
(5.11)
The first intermediate controller is chosen as (5.12) The first adaptive laws are designed as
(5.13) The second intermediate controller is chosen as (5.14) The second adaptive laws are designed as
(5.15) The actual controller is chosen as (5.16) The actual adaptive laws are designed as
(5.17) The reference output is specified as the constant value 93. The performance of T2ABFC is compared with that of the traditional T1ABFC in terms of state variable outputs, controller trajectory, tracking error, and resistance to disturbances. The simulation results are shown in Figures 1–11.
• Figures 2 and 3 show the state variable outputs with T2ABFC and T1ABFC, respectively. We can see that T2ABFC achieves higher performance in terms of response speed than T1ABFC.
• Figure 4 shows the trajectories of the controller based on T2ABFC and T1ABFC, respectively.
• Figure 5 shows the trajectories of the tracking error based on T2ABFC and T1ABFC, respectively. By comparison, the superiority of T2ABFC in tracking performance is obvious.
In order to show the property of resistance to disturbances, the training data are corrupted by random noise of ±0.05𝑥 and ±0.5𝑥, that is, 𝑥 is replaced by (1 ± random(0.05))𝑥 and (1 ± random(0.5))𝑥.
• Figures 6 and 7 show the responses of the state variable 𝑥1 with disturbances based on T2ABFC and T1ABFC, respectively.
• Figures 8 and 9 show the responses of the state variable 𝑥2 with disturbances based on T2ABFC and T1ABFC, respectively.
• Figures 10 and 11 show the responses of the state variable 𝑥3 with disturbances based on T2ABFC and T1ABFC, respectively.
It is obvious that the system based on T2ABFC has the better property of resistance to disturbances. From all the simulation outputs, we can conclude that the T2ABFC method proposed in this paper guarantees higher tracking performance and resistance to external disturbances than the traditional T1ABFC method.
Figure 2. State variable outputs with T2ABFC.
Figure 3. State variable outputs with T1ABFC.
Figure 4. Trajectories of controller.
Figure 5. Trajectories of tracking error.
Figure 6. Trajectories of the state variable 𝑥1 by T2ABFC with external disturbances.
Figure 7. Trajectories of the state variable 𝑥1 by T1ABFC with external disturbances.
Figure 8. Trajectories of the state variable 𝑥2 by T2ABFC with external disturbances.
Figure 9. Trajectories of the state variable 𝑥2 by T1ABFC with external disturbances.
Figure 10. Trajectories of the state variable 𝑥3 by T2ABFC with external disturbances.
Figure 11. Trajectories of the state variable 𝑥3 by T1ABFC with external disturbances.
CONCLUSION
In this paper, we solve the globally stable adaptive backstepping fuzzy control problem for a class of nonlinear systems, and T2ABFC is recommended in this approach. From the simulation results, the main conclusions can be drawn. (1) T2ABFC guarantees that the outputs of the closed-loop system follow the reference signal and that all the signals in the closed-loop system are uniformly ultimately bounded. (2) Compared with the traditional T1ABFC, T2ABFC has higher performance in terms of stability, response speed, and resistance to external disturbances.
ACKNOWLEDGMENT
This work was supported by the National Natural Science Foundation of China (11072090) and the project of advanced talents of Jiangsu University (10JDG093).
REFERENCES 1.
I. Kanellakopoulos, P. V. Kokotović, and R. Marino, “An extended direct scheme for robust adaptive nonlinear control,” Automatica, vol. 27, no. 2, pp. 247–255, 1991. 2. S. S. Sastry and A. Isidori, “Adaptive control of linearizable systems,” IEEE Transactions on Automatic Control, vol. 34, no. 11, pp. 1123– 1131, 1989. 3. I. Kanellakopoulos, P. V. Kokotović, and A. S. Morse, “Systematic design of adaptive controllers for feedback linearizable systems,” IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241– 1253, 1991. 4. M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic,, Nonlinear and Adaptive Control Design, John Wiley & Sons, New York, NY, USA, 1995. 5. R. A. Freeman and P. V. Kokotović, Robust Nonlinear Control Design, Birkhäuser, Boston, Mass, USA, 1996. 6. W.-Y. Wang, M.-L. Chan, T.-T. Lee, and C.-H. Liu, “Adaptive fuzzy control for strict-feedback canonical nonlinear systems with H tracking performance,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 30, no. 6, pp. 878–885, 2000. 7. W.-Y. Wang, M.-L. Chan, T.-T. Lee, and C.-H. Liu, “Recursive backstepping design of an adaptive fuzzy controller for strict output feedback nonlinear systems,” Asian Journal of Control, vol. 4, no. 3, pp. 255–264, 2002. 8. S. S. Zhou, G. Feng, and C. B. Feng, “Robust control for a class of uncertain nonlinear systems: adaptive fuzzy approach based on backstepping,” Fuzzy Sets and Systems, vol. 151, no. 1, pp. 1–20, 2005. 9. B. Chen and X. P. Liu, “Fuzzy approximate disturbance decoupling of MIMO nonlinear systems by backstepping and application to chemical processes,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 6, pp. 832–847, 2005. 10. M. M. Polycarpou, “Stable adaptive neural control scheme for nonlinear systems,” IEEE Transactions on Automatic Control, vol. 41, no. 3, pp. 447–451, 1996. 11. W. Chen and Y. P. Tian, “Neural network approximation for periodically disturbed functions and applications to control design,” Neurocomputing, vol. 72, no. 16–18, pp. 3891–3900, 2009.
12. S. S. Ge and J. Wang, “Robust adaptive neural control for a class of perturbed strict feedback nonlinear systems,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1409–1419, 2002. 13. C. Wang, D. J. Hill, S. S. Ge, and G. Chen, “An ISS-modular approach for adaptive neural control of pure-feedback systems,” Automatica, vol. 42, no. 50, pp. 723–731, 2006. 14. D. W. Ho, J. Li, and Y. Niu, “Adaptive neural control for a class of nonlinearly parametric time-delay systems,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 625–635, 2005. 15. C.-F. Hsu, C.-M. Lin, and T.-T. Lee, “Wavelet adaptive backstepping control for a class of nonlinear systems,” IEEE Transactions on Neural Networks, vol. 17, no. 5, pp. 1175–1183, 2006. 16. M. Wang, B. Chen, and S.-L. Dai, “Direct adaptive fuzzy tracking control for a class of perturbed strict-feedback nonlinear systems,” Fuzzy Sets and Systems, vol. 158, no. 24, pp. 2655–2670, 2007. 17. M. Wang, B. Chen, X. P. Liu, and P. Shi, “Adaptive fuzzy tracking control for a class of perturbed strict-feedback nonlinear time-delay systems,” Fuzzy Sets and Systems, vol. 159, no. 8, pp. 949–967, 2008. 18. W. Chen and Z. Zhang, “Globally stable adaptive backstepping fuzzy control for output-feedback systems with unknown high-frequency gain sign,” Fuzzy Sets and Systems, vol. 161, no. 6, pp. 821–836, 2010. 19. S. Salehi and M. Shahrokhi, “Adaptive fuzzy backstepping approach for temperature control of continuous stirred tank reactors,” Fuzzy Sets and Systems, vol. 160, no. 12, pp. 1804–1818, 2009. 20. S. Tong, Y. Li, and P. Shi, “Fuzzy adaptive backstepping robust control for SISO nonlinear system with dynamic uncertainties,” Information Sciences, vol. 179, no. 9, pp. 1319–1332, 2009. 21. T. Shaocheng, L. Changying, and L. Yongming, “Fuzzy adaptive observer backstepping control for MIMO nonlinear systems,” Fuzzy Sets and Systems, vol. 160, no. 19, pp. 2755–2775, 2009. 22. J. Mendel and R. John, “Type-2 fuzzy sets made simple,” IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 117–127, 2002. 23. Q. Liang and J. M. Mendel, “Interval type-2 fuzzy logic systems: theory and design,” IEEE Transactions on Fuzzy Systems, vol. 8, no. 5, pp. 535–550, 2000. 24. N. Karnik, J. Mendel, and Q. Liang, “Type-2 fuzzy logic systems,” IEEE Transactions on Fuzzy Systems, vol. 7, no. 6, pp. 643–658, 1999.
25. J. M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions, Prentice-Hall, Upper-Saddle River, NJ, USA, 2001. 26. T. C. Lin, H. L. Liu, and M. J. Kuo, “Direct adaptive interval type2 fuzzy control of multivariable nonlinear systems,” Engineering Applications of Artificial Intelligence, vol. 22, no. 3, pp. 420–430, 2009. 27. J. M. Mendel, “Type-2 fuzzy sets and systems: an overview,” IEEE Computational Intelligence Magazine, vol. 2, no. 1, pp. 20–29, 2007. 28. D. Wu and W. W. Tan, “A simplified type-2 fuzzy logic controller for real-time control,” ISA Transactions, vol. 45, no. 4, pp. 503–516, 2006. 29. L. A. Zadeh, “The concept of a linguistic variable and its application to approximate reasoning-I,” Information Sciences, vol. 8, pp. 199–249, 1975. 30. H. A. Hagras, “A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots,” IEEE Transactions on Fuzzy Systems, vol. 12, no. 4, pp. 524–539, 2004. 31. P. Melin and O. Castillo, “A new method for adaptive control of non-linear plants using Type-2 fuzzy logic and neural networks,” International Journal of General Systems, vol. 33, no. 2-3, pp. 289– 304, 2004. 32. O. Castillo, N. Cázarez, D. Rico, and L. T. Aguilar, “Intelligent control of dynamic systems using type-2 fuzzy logic and stability issues,” International Mathematical Forum, vol. 1, no. 28, pp. 1371–1382, 2006. 33. O. Castillo, N. Cazarez, and P. Melin, “Design of stable type-2 fuzzy logic controllers based on a fuzzy lyapunov approach,” in Proceedings of the IEEE International Conference on Fuzzy Systems, pp. 2331– 2336, Vancouver, Canada, July 2006. 34. O. Castillo, L. Aguilar, N. Cázarez, and S. Cárdenas, “Systematic design of a stable type-2 fuzzy logic controller,” Applied Soft Computing Journal, vol. 8, no. 3, pp. 1274–1279, 2008. 35. H. Hagras, “Type-2 FLCs: a new generation of fuzzy controllers,” IEEE Computational Intelligence Magazine, vol. 2, no. 1, pp. 30–43, 2007. 36. R. Sepúlveda, O. Castillo, P. Melin, A. Rodríguez-Díaz, and O. Montiel, “Experimental study of intelligent controllers under uncertainty using
type-1 and type-2 fuzzy logic,” Information Sciences, vol. 177, no. 10, pp. 2023–2048, 2007.
37. R. Sepúlveda, O. Castillo, P. Melin, and O. Montiel, “An efficient computational method to implement type-2 fuzzy logic in control applications,” Advances in Soft Computing, vol. 41, pp. 45–52, 2007.
38. P.-Z. Lin, C.-M. Lin, C.-F. Hsu, and T.-T. Lee, “Type-2 fuzzy controller design using a sliding-mode approach for application to DC-DC converters,” IEE Proceedings—Electric Power Applications, vol. 152, no. 6, pp. 1482–1488, 2005.
39. M.-Y. Hsiao, T. H. S. Li, J.-Z. Lee, C.-H. Chao, and S.-H. Tsai, “Design of interval type-2 fuzzy sliding-mode controller,” Information Sciences, vol. 178, no. 6, pp. 1696–1716, 2008.
40. T.-C. Lin, “Based on interval type-2 fuzzy-neural network direct adaptive sliding mode control for SISO nonlinear systems,” Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 12, pp. 4084–4099, 2010.
41. K. Chafaa, L. Saïdi, M. Ghanaï, and K. Benmahammed, “Indirect adaptive interval type-2 fuzzy control for nonlinear systems,” International Journal of Modelling, Identification and Control, vol. 2, no. 2, pp. 106–119, 2007.
42. T. C. Lin, M. J. Kuo, and C. H. Hsu, “Robust adaptive tracking control of multivariable nonlinear systems based on interval type-2 fuzzy approach,” International Journal of Innovative Computing, Information and Control, vol. 6, no. 3, pp. 941–961, 2010.
43. F. J. Lin and P. H. Chou, “Adaptive control of two-axis motion control system using interval type-2 fuzzy neural network,” IEEE Transactions on Industrial Electronics, vol. 56, no. 1, pp. 178–193, 2009.
44. E. A. Jammeh, M. Fleury, C. Wagner, H. Hagras, and M. Ghanbari, “Interval type-2 fuzzy logic congestion control for video streaming across IP networks,” IEEE Transactions on Fuzzy Systems, vol. 17, no. 5, pp. 1123–1142, 2009.
45. R. Martínez, O. Castillo, and L. T. Aguilar, “Optimization of interval type-2 fuzzy logic controllers for a perturbed autonomous wheeled mobile robot using genetic algorithms,” Information Sciences, vol. 179, no. 13, pp. 2158–2174, 2009.
46. Q. Liang and J. M. Mendel, “Designing interval type-2 fuzzy logic systems using an SVD-QR method: rule reduction,” International Journal of Intelligent Systems, vol. 15, no. 10, pp. 939–957, 2000. 47. J. M. Mendel, H. Hagras, and R. I. John, “Standard background material about interval type-2 fuzzy logic systems that can be used by all authors,” http://www.ieee-cis.org/_files/standards.t2.win.pdf. 48. J. M. Mendel, R. I. John, and F. Liu, “Interval type-2 fuzzy logic systems made simple,” IEEE Transactions on Fuzzy Systems, vol. 14, no. 6, pp. 808–821, 2006. 49. J. M. Mendel and H. Wu, “New results about the centroid of an interval type-2 fuzzy set, including the centroid of a fuzzy granule,” Information Sciences, vol. 177, no. 2, pp. 360–377, 2007. 50. H. Wu and J. M. Mendel, “Uncertainty bounds and their use in the design of interval type-2 fuzzy logic systems,” IEEE Transactions on Fuzzy Systems, vol. 10, no. 5, pp. 622–639, 2002. 51. N. N. Karnik and J. M. Mendel, “Centroid of a type-2 fuzzy set,” Information Sciences, vol. 132, no. 1–4, pp. 195–220, 2001. 52. N. N. Karnik and J. M. Mendel, “Introduction to type-2 fuzzy logic systems,” IEEE World Congress on Computational Intelligence, pp. 915–920, 1998. 53. L. X. Wang, Adaptive Fuzzy Systems and Control: Design and Stability Analysis, Prentice Hall, Englewood Cliffs, NJ, USA, 1994. 54. J. Chattopadhyay and N. Bairagi, “Pelicans at risk in Salton sea—an eco-epidemiological model,” Ecological Modelling, vol. 136, no. 2-3, pp. 103–112, 2001. 55. W. Liu, H. Hethcote, and S. A. Levin, “Dynamical behavior of epidemiological models with nonlinear incidence rates,” Journal of Mathematical Biology, vol. 25, no. 4, pp. 359–380, 1987.
Chapter 7
Fuzzy PID Control for Respiratory Systems
Ibrahim M. Mehedi1,2, Heidir S. M. Shah1, Ubaid M. Al-Saggaf1,2, Rachid Mansouri3, and Maamar Bettayeb4
1 Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Laboratoire de Conception et Conduite des Systemes de Production (L2CSP), Tizi Ouzou, Algeria
4 Electrical Engineering Department, University of Sharjah, Sharjah, UAE
ABSTRACT
This paper presents the implementation of a fuzzy proportional integral derivative (FPID) control design to track the airway pressure during the mechanical ventilation process. A respiratory system is modeled as a combination of a blower-hose-patient system and a single compartmental lung system with nonlinear lung compliance. For comparison purposes, the classical PID controller is also designed and simulated on the same system. According to the proposed control strategy, the ventilator will provide airway flow that maintains the peak pressure below critical levels when there are unknown parameters of the patient's hose leak and patient breathing effort. Results show that FPID is a better controller in the sense of quicker response, lower overshoot, and smaller tracking error. This provides valuable insight for the application of the proposed controller.
Citation: Ibrahim M. Mehedi, Heidir S. M. Shah, Ubaid M. Al-Saggaf, Rachid Mansouri, Maamar Bettayeb, “Fuzzy PID Control for Respiratory Systems”, Journal of Healthcare Engineering, vol. 2021, Article ID 7118711, 6 pages, 2021. https://doi.org/10.1155/2021/7118711.
Copyright: © 2021 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
The world has been shocked by the COVID-19 disease since its outbreak was first detected in Wuhan, China, in December 2019. It was declared a global pandemic by the World Health Organization (WHO) three months later, and at the time of writing, more than 174 million people worldwide have been infected with the disease, with close to 4 million deaths recorded [1, 2]. The analysis shows that acute respiratory failure (ARF) is the leading cause of death [3], and one study found that 40 percent of critically ill COVID-19 patients developed acute respiratory distress syndrome (ARDS), which requires invasive intubation and ventilation [4–7]. Such therapies can be provided by an intensive care unit (ICU) device called a mechanical ventilator that is used to assist or replace the spontaneous breathing of a patient [8]. Mechanical ventilators were first used to assist in ventilation as early as the 18th century, but the first closed-loop system for mechanical ventilation did not become available until the 1950s. Such a ventilator used mechanical bellows and valves to cycle gas into the lungs, while a simple proportional (P) or proportional-integral (PI) controller was used [9]. Later, microprocessors were used to implement those controllers, and since then, there have been numerous closed-loop control proposals. Closed-loop control in mechanical ventilators can be categorized based on how much the system interacts with patients. A class 1 control loop features no backward interaction from the patient to the device, whereas in a class 2 control loop, interaction between the patient and the device is possible. In both classes, control signals are measured inside the device. A class 3 control loop is called a physiological compensatory control loop because it uses a physiological parameter as its control variable instead of a physical one [10]. In this paper, a pressure-based ventilation
controller under the class 2 category is developed, where the control objective is to track a set-point target airway pressure. Having a reputation as the most reliable industrial controller, the PID controller has been used widely in mechanical ventilators. One of the earliest implementations of a PID controller on a mechanical ventilator since the introduction of the microprocessor can be found in Ohlson's work [11]. PID, however, has some limitations: it does not perform well when the system's dynamics are not constant. An example of this is the relationship between ventilation and pressure. During ventilation, pressure must be adjusted according to the level of ventilation to prevent lung injury. To improve controller performance, Dai et al. [12] used two separate algorithms, where a PD algorithm is used during the initial phase while a PI algorithm is activated when the output pressure starts to become constant. Besides this, other techniques have also been used to improve PID controller performance in mechanical ventilation systems, including the use of an optimization technique called the pressure evaluate correction module (PECM) [13], automatic tuning of PID gains using particle swarm optimization (PSO) [14, 15], and repetitive control [8]. In this paper, we propose a fuzzy PID (FPID) controller for airway pressure set-point tracking of mechanical ventilation. Fuzzy reasoning is used to evaluate the changes in the system's dynamics through the measured set-point error and the rate of change of error, which, in turn, updates the PID tuning parameters based on the rule set. The process of updating the tuning parameters is done in an online manner. The proposed controller is then simulated on a respiratory system model which consists of a blower-hose-patient system model and a single compartmental lung model, obtained from the works of Hunnekens et al. [16] and Bates [17], respectively. Fuzzy logic-based controllers have been implemented in many applications, including the longitudinal autopilot of an unmanned aerial vehicle (UAV) [18], controlling the speed of a conveyor system [19], simulating the tissue differentiation process [20], and induction motor control [21]. The primary purpose of the proposed controller is to enhance the performance of the PID controller on a respiratory system where some of the mechanical parameters are not constant, specifically, lung compliance, which can increase or decrease according to the lung volume. The rest of this paper is organized as follows: Section 2 presents the details of the mathematical model for the blower-hose-patient system and the single compartmental lung model and also presents a brief explanation of lung compliance. The details of the proposed controller design are discussed
in Section 3, while the simulation results, analysis, and comparison between PID and FPID are presented in Section 4. Section 5 concludes the work.
MATHEMATICAL MODEL OF RESPIRATORY SYSTEMS The respiratory system model used in this paper is based on the blowerhose-patient system model presented in [16] with a single compartmental lung model obtained from [17]. As shown in Figure 1, the system consists of 3 main components: the blower which compresses ambient air to the desired pressure, (pout), the hose which connects the respiratory module to the patient, and the patient’s lung. The airway pressure paw is measured using a pressure sensor that is placed inside the module. The control objective is to track the measured pressure so that it follows the target set-point ptarget. Therefore, the error equation can be described as follows: (1)
Figure 1. Blower-hose-patient system.
The air from the blower flows (Qout) through the hose, with resistance Rhose, into the lung (Qpat), with resistance Rlung, during the inhalation process. The patient then exhales the air back into the hose, where some of it flows out of the leak (Qleak), with resistance Rleak. The leak also prevents some of the exhaled air from being inhaled back by the patient in the next cycle. Thus, we can write the patient flow equation as follows: (2)
Here, the blower flow, leak flow, and patient flow can be obtained by pressure differences over resistance as follows:
(3)
(4)
(5) The lung pressure can be described by the following differential equation:
(6) The lung dynamic can be written by combining (3)–(6) as follows:
(7) Substituting and rewriting (3)–(5) in (2) results in the following relation for the airway pressure:
(8) By substituting the airway pressure expression in (8) into the differential equation for the lung dynamic (6), the following may be achieved:
(9)
Now, equations (5), (7), and (8) can be arranged into a state-space form with pout as input, [paw, Qpat]T as outputs, and plung as state: (10)
(11) where
(12) A state-space model of the blower, however, can be expressed as follows: (13) (14) By coupling the expressions in (10), (13), and (14), the general state-space model for the respiratory system is obtained:
(15)
Equation (6) shows that one of the factors that determine the dynamics of the lung pressure is a parameter called lung compliance (Clung). It is a measure of the change in lung volume per change in transpulmonary pressure or, in simpler words, the ease with which the lung can expand under pressure. Clinical data show that lung compliance values are not constant all the time and sometimes increase or decrease according to the lung volume. The value can, however, be constant within a certain transition region of lung volume [22].
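A minimal numerical sketch of the model above follows, assuming each flow is a pressure difference over its resistance and the leak discharges to ambient pressure, so that p_aw follows from the flow balance in (8) and p_lung from (6). The blower input, time step, and constant compliance value are illustrative.

```python
import numpy as np

def airway_pressure(p_out, p_lung, R_hose, R_leak, R_lung):
    """Solve the flow balance Q_out = Q_leak + Q_pat for p_aw (cf. (8)),
    with each flow taken as a pressure difference over its resistance and
    the leak discharging to ambient (0 mbar)."""
    num = p_out / R_hose + p_lung / R_lung
    den = 1.0 / R_hose + 1.0 / R_leak + 1.0 / R_lung
    return num / den

def simulate(p_out_fn, C_lung, R_hose=0.0045, R_leak=0.06, R_lung=0.005,
             dt=0.001, t_end=2.0):
    """Forward-Euler simulation of the single-compartment lung (cf. (6))."""
    t = np.arange(0.0, t_end, dt)
    p_lung = 0.0
    p_aw_hist = np.empty_like(t)
    for i, ti in enumerate(t):
        p_aw = airway_pressure(p_out_fn(ti), p_lung, R_hose, R_leak, R_lung)
        q_pat = (p_aw - p_lung) / R_lung          # flow into the lung
        p_lung += dt * q_pat / C_lung             # dp_lung/dt = Q_pat / C_lung
        p_aw_hist[i] = p_aw
    return t, p_aw_hist

if __name__ == "__main__":
    # Square-wave blower pressure as a stand-in for the (not yet closed-loop) input.
    blower = lambda ti: 20.0 if (ti % 1.0) < 0.5 else 5.0
    t, paw = simulate(blower, C_lung=20.0)        # constant compliance, 20 mL/mbar
    print("peak airway pressure ~", round(float(paw.max()), 2), "mbar")
```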
CONTROLLER DESIGN The structure of the PID controller is shown in Figure 2, where three parameters—proportional gain Kp, integral gain Ki, and derivative gain Kd— are used to manually tune the controller. The output of the controller is the blower’s pressure which is given by the following equation:
(16)
where
(17)
(18)
Figure 2. PID control of the respiratory system.
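A minimal discrete-time realization consistent with (16)–(18) is sketched below in positional form; the sampling time is an assumed value, and the gains are the Ciancone-tuned ones reported in the following paragraph. The exact discretization used in the chapter may differ.

```python
class DiscretePID:
    """Positional discrete PID: u[n] = Kp*e[n] + Ki*sum(e*Ts) + Kd*(e[n]-e[n-1])/Ts."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.acc = 0.0          # accumulated (integrated) error
        self.prev_e = 0.0

    def update(self, e):
        self.acc += e * self.ts
        de = (e - self.prev_e) / self.ts
        self.prev_e = e
        return self.kp * e + self.ki * self.acc + self.kd * de

if __name__ == "__main__":
    # Gains as reported below for the Ciancone-tuned controller; Ts is assumed.
    pid = DiscretePID(kp=1.1e-3, ki=12e-3, kd=15e-6, ts=0.001)
    print(pid.update(20.0))     # first sample of a 20 mbar set-point error
```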
Ti and Td in equations (17) and (18) are the integral and derivative times, respectively, while Ts is the sampling time. The Ciancone correlation technique with fine tuning is used to tune the PID parameters. The PID parameter values obtained are Kp = 1.1 × 10−3, Ki = 12 × 10−3, and Kd = 15 × 10−6. In the fuzzy PID controller, whose structure is shown in Figure 3, a fuzzy inference system (FIS) is used to tune all three PID parameters. The input to the FIS is the error e and the rate of change of error Δe, while the output is the PID parameters Kp, Ki, and Kd, whose initial values are the same as the ones previously obtained via the Ciancone correlation. Each of the
inputs and outputs consists of 3 membership functions labeled as negative (N), zero (Z), and positive (P). The range of values for these functions was determined from the experience in manually tuning the PID controller. Figures 4 and 5 show the membership function used for the input and output of the FIS, respectively.
Figure 3. Fuzzy PID control of the respiratory system.
Figure 4. Membership function for error e and Δe.
Figure 5. Membership function for Kp, Ki, and Kd.
The values of the PID parameters were determined based on the following 4 basic rules:
• When e is large and Δe is negative, the system's response is still far from the set-point, though it is heading in the right direction. Therefore, Kp must be large, while Ki and Kd should be small to quickly close in on the set-point.
• When e is negative and Δe is large, the system's response has surpassed the set-point and the error is rising. Kd is set to large, while Ki and Kp are set to small in order to limit the overshoot.
• When e is small and Δe is positive, the system's response is closing in on steady state. Therefore, Kp should be big to reach steady state quickly; however, to decrease the overshoot and avoid oscillation, Kd should be increased while Ki should be diminished.
• When e is large and Δe is positive, the system's response is overshooting on the negative side. Kd should be large to reduce the error, while Kp and Ki should also increase.
The complete fuzzy rules for the system are shown in Tables 1–3.
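A minimal sketch of the gain-scheduling idea for one gain follows. The 3-by-3 rule table and the input scaling are hypothetical placeholders (the entries of the chapter's Tables 1–3 are not reproduced here); only the nominal Kp value is taken from the text.

```python
import numpy as np

# Normalized N/Z/P memberships on [-1, 1].
def mu_N(x): return float(np.clip(-x, 0.0, 1.0))
def mu_Z(x): return float(np.clip(1.0 - abs(x), 0.0, 1.0))
def mu_P(x): return float(np.clip(x, 0.0, 1.0))

MFS = {"N": mu_N, "Z": mu_Z, "P": mu_P}

# Hypothetical 3x3 rule table for Kp only; entries are scaling levels
# (Small/Medium/Large) applied to the nominal gain.
KP_RULES = {("N", "N"): "L", ("N", "Z"): "M", ("N", "P"): "S",
            ("Z", "N"): "M", ("Z", "Z"): "S", ("Z", "P"): "M",
            ("P", "N"): "S", ("P", "Z"): "M", ("P", "P"): "L"}
LEVEL = {"S": 0.5, "M": 1.0, "L": 1.5}

def schedule_kp(e, de, kp_nominal=1.1e-3, e_scale=20.0, de_scale=100.0):
    """Weighted-average (COA-style) gain scheduling for Kp."""
    en, dn = np.clip(e / e_scale, -1, 1), np.clip(de / de_scale, -1, 1)
    num = wsum = 0.0
    for (le, lde), lvl in KP_RULES.items():
        w = MFS[le](en) * MFS[lde](dn)            # product inference
        num += w * LEVEL[lvl] * kp_nominal
        wsum += w
    return num / wsum if wsum > 0 else kp_nominal

if __name__ == "__main__":
    # Large positive error that keeps growing -> Kp pushed above nominal.
    print(f"Kp = {schedule_kp(e=15.0, de=60.0):.2e}")
```

The same structure, repeated with separate rule tables, yields the scheduled Ki and Kd used by the FPID controller.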
Table 1. Fuzzy rules for Kp.
Table 2. Fuzzy rules for Ki.
Table 3. Fuzzy rules for Kd.
SYSTEM'S SIMULATIONS AND RESULTS
The performance of the controllers developed in Section 3 is evaluated by simulating them on the model described in Section 2. The integral time absolute error (ITAE), shown as follows, is used to measure the set-point tracking performance of both controllers. (19) The maximum target pressure ptarget is set at 20 mbar, and the following system parameters were used in the simulation: RLung = 0.005 mbar/mL/s, RLeak = 0.06 mbar/mL/s, and Rhose = 0.0045 mbar/mL/s. First, we simulate the PID controller with two different lung compliance values, one constant at 20 mL/mbar throughout the inhalation and exhalation process and the other a function of lung volume, as shown in Figure 6.
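The ITAE index in (19) can be computed directly from logged samples. A small sketch, assuming a uniformly sampled set-point and a toy first-order response in place of real simulation data:

```python
import numpy as np

def itae(t, p_target, p_measured):
    """Integral of time-weighted absolute error, approximated with the
    trapezoidal rule over the sampled trajectory."""
    e = np.abs(np.asarray(p_target) - np.asarray(p_measured))
    return float(np.trapz(np.asarray(t) * e, t))

if __name__ == "__main__":
    t = np.linspace(0.0, 5.0, 501)
    target = np.full_like(t, 20.0)                 # 20 mbar set-point
    measured = 20.0 * (1.0 - np.exp(-3.0 * t))     # illustrative response
    print("ITAE ~", round(itae(t, target, measured), 3))
```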
Figure 6. Compliance function.
The results in Figure 7 show that it is more difficult to track the pressure set-point when the lung compliance value changes with lung volume. Under this condition, the controller responds with more oscillation and higher overshoot compared with when the PID controller is simulated on the model with a constant lung compliance value.
Figure 7. PID controller response on respiratory system with constant vs. nonlinear compliance value.
The fuzzy PID controller is then simulated using the same parameters, and the results compared with the PID controller are illustrated in Figure 8. Here, we can observe that FPID has a quicker response and lower overshoot. The FPID controller is also better at tracking the set-point based on the calculated ITAE, where FPID scored 7.467 while PID scored 8.293.
Figure 8. PID vs. FPID controller response.
CONCLUSION This paper has presented the simulation results of applying FPID control design in a pressure-based mechanical ventilation system. The respiratory system was modeled by combining the blower-hose-patient system with a single compartmental lung system with nonlinear lung compliance. It has been shown in the simulation results that the poor performance of the PID controller in tracking the airway pressure of the modeled system can be improved by applying fuzzy reasoning to tune the PID parameters online automatically. The performance was improved in terms of quicker response, lower overshoot, and small tracking error (ITAE). However, there is still some room for improvement. Further study on the number and type of membership function used in the FIS could improve the performance further.
ACKNOWLEDGMENTS This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant no. GCV19-3-1441. The authors, therefore, acknowledge with thanks DSR for technical and financial support.
REFERENCES 1.
D. Singh, V. Kumar, and M. Kaur, “Densely connected convolutional networks-based COVID-19 screening model,” Applied Intelligence, vol. 51, no. 3, pp. 3044–3051, 2021. 2. D. Singh, V. Kumar, V. Yadav, and M. Kaur, “Deep neural networkbased screening model for COVID-19-infected patients using chest x-ray images,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 35, no. 3, Article ID 2151004, 2021. 3. K. Wang, Z. Qiu, J. Liu et al., “Analysis of the clinical characteristics of 77 COVID-19 deaths,” Scientific Reports, vol. 10, no. 1, pp. 1–11, 2020. 4. M. Kaur, V. Kumar, V. Yadav et al., “Metaheuristic-based deep COVID-19 screening model from chest x-ray images,” Journal of Healthcare Engineering, vol. 2021, Article ID 8829829, 9 pages, 2021. 5. World Health Organization, Middle East Respiratory Syndrome Coronavirus (Mers-cov), World Health Organization, Geneva, Switzerland, 2019. 6. S. Singh, C. Prakash, and S. Ramakrishna, “Three-dimensional printing in the fight against novel virus COVID-19: technology helping society during an infectious disease pandemic,” Technology in Society, vol. 62, 2020. 7. C. Wu, X. Chen, Y. Cai et al., “Risk factors associated with acute respiratory distress syndrome and death in patients with coronavirus disease 2019 pneumonia in Wuhan, China,” JAMA internal medicine, vol. 180, no. 7, pp. 934–943, 2020. 8. J. Reinders, R. Verkade, B. Hunnekens, Nathan van de Wouw, and O. Tom, “Improving mechanical ventilation for patient care through repetitive control,” 2020, https://arxiv.org/abs/2004.00312. 9. M. Borrello, “Modeling and control of systems for critical care ventilation,” in Proceedings of the 2005, American Control Conference, pp. 2166–2180, IEEE, Portland, OR, USA, June 2005. 10. M. Walter and S. Leonhardt, “Control applications in artificial ventilation,” in Proceeding of the 2007 Mediterranean Conference on Control & Automation, pp. 1–6, IEEE, Athens, Greece, June 2007.
11. K. B. Ohlson, D. R. Westenskow, and W. S. Jordan, “A microprocessor based feedback controller for mechanical ventilation,” Annals of Biomedical Engineering, vol. 10, no. 1, pp. 35–48, 1982. 12. M. Dai, Z. S. Zhang, Z. G. Liu, and D Fu Yin, “Control module design for a continuous positive airway pressure ventilator,” Applied Mechanics and Materials, vol. 321, pp. 1657–1661, 2013. 13. Y. Xu, L. Li, Y. Jia, and Y. Luo, “An optimized controller for bilevel positive airway pressure ventilator,” in Proceeding of the 2014 International Conference on Future Computer and Communication Engineering (ICFCCE 2014), Atlantis Press, Tianjin, China, March 2014. 14. D. Acharya and D. K. Das, “Swarm optimization approach to design pid controller for artificially ventilated human respiratory system,” Computer Methods and Programs in Biomedicine, vol. 198, Article ID 105776, 2020. 15. S. Ghosh, P. Shivakumara, P. Roy, and T. Lu, “Graphology based handwritten character analysis for human behaviour identification,” CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 55– 65, 2019. 16. B. Hunnekens, S. Kamps, and N. Van De Wouw, “Variable-gain control for respiratory systems,” IEEE Transactions on Control Systems Technology, vol. 28, no. 1, pp. 163–171, 2018. 17. J. H. T. Bates, Lung Mechanics: An Inverse Modeling Approach, Cambridge University Press, Cambridge, UK, 2009. 18. S. Yang, K. Li, and J. Shi, “Design and simulation of the longitudinal autopilot of uav based on self-adaptive fuzzy pid control,” in Proccedings of the 2009 International Conference on Computational Intelligence and Security, vol. 1, pp. 634–638, IEEE, Beijing, China, December 2009. 19. M. H. Mohd Shah, M. F. Rahmat, K. A. Danapalasingam, and N. Abdul Wahab, “Plc based adaptive fuzzy pid speed control of dc belt conveyor system,” International Journal on Smart Sensing and Intelligent Systems, vol. 6, no. 3, 2013. 20. M. Wang and N. Yang, “Three-dimensional computational model simulating the fracture healing process with both biphasic poroelastic finite element analysis and fuzzy logic control,” Scientific Reports, vol. 8, no. 1, pp. 1–13, 2018.
21. M. A. Hannan, J. A. Ali, M. S. Hossain Lipu et al., “Role of optimization algorithms based fuzzy controller in achieving induction motor performance enhancement,” Nature Communications, vol. 11, no. 1, pp. 1–11, 2020. 22. B. Jonson and C. Svantesson, “Elastic pressure–volume curves: what information do they convey?” Thorax, vol. 54, no. 1, pp. 82–87, 1999.
Chapter 8
A Parameter Varying PD Control for Fuzzy Servo Mechanism
Nader Jamali Soufi1, Mohsen Kabiri Moghaddam2, Saeed Sfandiarpour Boroujeni3, and Alireza Vahidifar1
1 Sama Technical and Vocational Training College, Islamic Azad University, Islamshahr Branch, Tehran, Iran
2 Department of Electrical and Electronic Engineering, K. N. Toosi University of Technology, Tehran, Iran
3 Department of Electrical and Electronic Engineering, University of Shiraz, Shiraz, Iran
ABSTRACT
This paper presents the formulation of a novel implementation method based on a parameter varying PD controller for fuzzy servo controllers. This formulation uses the approximation of the fuzzy nonlinear function of the error and the error derivative at an operation point. The obtained fuzzy control law has been employed to control the angular position of a servo using a digital control technique applied to a typical microcontroller like the AVR. The performance and robustness of the modified fuzzy controller in comparison with a PID controller are studied under no-load conditions and with applied external disturbances of different magnitudes. The simulation results showed that the proposed fuzzy controller has a considerable advantage in rise time, settling time, and overshoot with respect to the PID controller when the servo system encounters nonlinear features like saturation and friction.
Keywords: Parameter Varying PD Controller, Fuzzy Position Control (FPC) System, FLC, Servo Motor
Citation: Soufi, N., Moghaddam, M., Boroujeni, S. and Vahidifar, A. (2014), “A Parameter Varying PD Control for Fuzzy Servo Mechanism”. Intelligent Control and Automation, 5, 156-169. doi: 10.4236/ica.2014.53018.
Copyright: © 2014 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0
INTRODUCTION
All control systems suffer from problems related to undesirable overshoot, long settling times, vibrations, and stability while going from one state to another. Real-world systems are nonlinear, and accurate modeling is difficult, costly, and even impossible in most cases. On the other side, a servo control system is one of the most important and widely used control systems. The role of the control system may include maintaining the position of a motor within certain limits even when the load on the output of the motor varies, which is called regulation, or varying the position of the motor and load according to an externally set program of values, which is called set-point tracking [1]. Today, applications of servomechanisms include their use in automatic machine tools, satellite-tracking antennas, celestial-tracking systems on telescopes, automatic navigation systems, the manufacturing industry, transportation, computers, communication networks, and antiaircraft-gun control systems. In many applications, servomechanisms allow high-powered devices to be controlled by signals from devices of much lower power. The operation of the high-powered device results from a signal (called the error, or difference, signal) generated from a comparison of the desired position of the high-powered device with its actual position. A servomotor is a specific type of motor and rotary encoder combination, usually with a dedicated controller, that forms a servomechanism. This assembly may in turn form part of another servomechanism. The encoder provides position and usually speed feedback, which, by the use of a PID controller, allows more precise control of position and thus faster achievement of a stable position (for a given motor power). However, the conventional digital control systems
like classic PID solve the above problems only approximately and generally do not work well for nonlinear systems, particularly complex and vague systems that have no precise mathematical models [2]. Therefore, we need intelligent and precise control systems to acquire the desired response. Moreover, it is known that the aforementioned controllers produce more noise; therefore, more advanced control techniques need to be used, which will minimize the noise effects [3]. To overcome these difficulties, various types of modified conventional PID controllers, such as auto-tuning and adaptive controllers, have been developed lately [4]-[6]. Here we suggest an intelligent control approach. There are three basic approaches to intelligent control: knowledge-based expert systems, fuzzy logic, and neural networks. All three approaches are interesting and very promising areas of research and development. In this paper, we present only the fuzzy logic approach. Fuzzy logic was proposed by Lotfi A. Zadeh in 1973. Zadeh introduced the concept of linguistic variables [7], which can be described simply as "computing with words rather than numbers" or "control with sentences rather than equations" [8]. Fuzzy logic is a multivalued logic that allows intermediate values to be defined between conventional evaluations like true/false, yes/no, and high/low, and it emerged as a tool to deal with uncertain, imprecise, or qualitative decision-making problems. Fuzzy logic control (FLC) is a control method based on fuzzy logic. Today, fuzzy control applications cover a variety of practical systems, such as the control of cement kilns [9], train operation [10], parking control of a car [11], heat exchangers, and robots, and also many other systems, such as home appliances, video cameras, elevators, and aerospace. Many different approaches are used for fuzzy controller implementation, among which can be cited hybrid methods such as neuro-fuzzy [12] [13], PI-like fuzzy [14], multivariate regression [15], and non-linear bond graphs (BG) [16]. In the neuro-fuzzy approach, the controller learns fuzzy rules incrementally or decrementally, and the learning algorithm uses a linguistic error measure that is expressed by fuzzy rules. In the PI-like fuzzy approach, the proposed fuzzy controller first adopts the step response of phase trajectories to derive the corresponding linguistic control rules of such a system, and then applies the relationship between input and output signals in a proportional integral controller to convert the relationship of linguistic control rules into a decision table. Multivariate regression analysis attempts to determine a formula that can describe how elements in a vector of variables respond simultaneously to changes in
others. For linear relations, regression analyses here are based on forms of the general linear model. Also, a bond graph is a graphical representation of a physical dynamic system. It is similar to the better-known block diagram and signal-flow graph, with the major difference that the arcs in bond graphs represent the bi-directional exchange of physical energy, while those in block diagrams and signal-flow graphs represent uni-directional flow of information. Furthermore, bond graphs are multi-domain and domain neutral. This means a bond graph can incorporate multiple domains seamlessly. In particular, for servo systems, fuzzy logic PID [17], the Taguchi method [18], DSP-based fuzzy logic [19], and neuro-fuzzy controllers [20] have been used in the literature. In this paper, we propose a novel method based on a parameter varying PD controller for fuzzy servo controllers that uses the approximation of the fuzzy nonlinear function of the error and its derivative at an operation point. This paper is organized as follows: firstly, the overall framework of the theory of fuzzy control and its defuzzification methods is presented. Then the fuzzy control algorithm based on a PD controller is approximated. Then the dynamic modeling of the servo motor system and its experimental implementation using the proposed algorithm are presented. Finally, simulation results of the proposed controller are compared with the PID controller.
MOTIVATION FOR FUZZY CONTROL
Classical mathematics and conventional control theory are quite limited in modeling and controlling complex nonlinear dynamical systems, particularly ill-formulated and partially described physical systems. The motivation for using fuzzy logic technology in control systems stems from the fact that it allows designers to build a controller even when their understanding of the system is still in a vague, incomplete, and developing phase. Moreover, it can be noted that the servo controller design does not require explicit knowledge of the motor and load characteristics [21]. Since fuzzy controllers are designed directly from the properties of the process, the development time is often shorter than for conventional controllers [22]. The overall procedure for developing a fuzzy control system for a model servo system is shown in Figure 1. The system uses measured variables as inputs to describe the error or derivation of error from the servo. These
inputs are then “fuzzified” using membership functions supplied by an expert operator to determine the degree of membership in each input class. The resulting “fuzzy inputs” are evaluated using a linguistic rule base and fuzzy logic operations to yield an appropriate output and an associated degree of membership. This “fuzzy output” is then defuzzified to give a crisp output response that can be applied to the drive servo.
Figure 1. General structure of the fuzzy part.
According to real-world requirements, the linguistic variables have to be transformed into a crisp output. There are many defuzzification methods (e.g., center of gravity, bisector of area, mean of maximum, and center of average), which are described in the following [23] [24].
Center of Gravity (COG)
For discrete sets, COG is called center of gravity for singletons (COGS), where the crisp control value is the abscissa of the center of gravity of the fuzzy set; each point in the universe of the conclusion is weighted by its membership value in the resulting conclusion set. For continuous sets, the summations are replaced by integrals.
(1)
Bisector of Area (BOA)
Here, each candidate point lies in the universe (U) of the conclusion. There may be several solutions xi; thus, this method is computationally complex. For discrete sets, uBOA is the abscissa xj that minimizes the presented formula, where imax is the index of the largest abscissa and the weights are the membership values of the resulting conclusion set.
(2)
Mean of Maximum (MOM) In this method the crisp value is chosen as the point with the highest membership. There may be several points in the overall implied fuzzy set that share the maximum membership value, so it is common practice to take the mean of these points. With $I$ the (crisp) set of indices $i$ where $\mu_c(x_i)$ reaches its maximum $\mu_{max}$,
$$u_{MOM} = \frac{1}{|I|}\sum_{i \in I} x_i. \qquad (3)$$
Center of Average (COA) Here $u^{lm}$ is a singleton value of the fuzzy set of the control output, and $\mu_l(e)$ and $\mu_m(\dot e)$ are the membership functions for the error and the derivative of error, respectively:
$$u_{COA} = \frac{\sum_{l}\sum_{m} u^{lm}\,\mu_l(e)\,\mu_m(\dot e)}{\sum_{l}\sum_{m}\mu_l(e)\,\mu_m(\dot e)}. \qquad (4)$$
It must be mentioned that in this paper the defuzzification method chosen for implementation is COA.
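To make the four discrete defuzzification formulas concrete, the following Python sketch implements them directly from Equations (1)-(4). It is an illustrative reconstruction rather than code from the original paper; the example fuzzy set at the end is assumed.

```python
import numpy as np

def cogs(x, mu):
    """Center of gravity for singletons (COGS), Equation (1)."""
    return np.sum(x * mu) / np.sum(mu)

def boa(x, mu):
    """Bisector of area (BOA), Equation (2): abscissa that best splits the area."""
    left = np.cumsum(mu)                      # sum of mu(x_i) for i <= j
    right = np.cumsum(mu[::-1])[::-1]         # sum of mu(x_i) for i >= j
    return x[np.argmin(np.abs(left - right))]

def mom(x, mu):
    """Mean of maximum (MOM), Equation (3)."""
    return np.mean(x[mu == mu.max()])

def coa(singletons, w):
    """Center of average (COA), Equation (4): weighted mean of rule singletons
    with firing strengths w (products of the two input memberships)."""
    return np.sum(singletons * w) / np.sum(w)

# Example: a discrete conclusion set on the universe [-1, 1] (assumed data)
x = np.linspace(-1.0, 1.0, 21)
mu = np.maximum(0.0, 1.0 - np.abs(x - 0.3) / 0.5)   # triangular conclusion set
print(cogs(x, mu), boa(x, mu), mom(x, mu))
```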
PROBLEM DESCRIPTION AND METHODOLOGY Fuzzy Controller Based on PD Controller A fuzzy logic controller has four main components, as shown in Figure 1: fuzzification, inference, rule base, and defuzzification. Implementation of an FLC requires the choice of four key factors [25]: the number of fuzzy sets that constitute the linguistic variables, the mapping of the measurements onto the support sets, the control protocol that determines the controller behavior, and the shape of the membership functions. Thus, FLCs can be tuned not just by adjusting the controller parameters but also by changing the control rules and membership functions. The rule base, inference mechanism, and defuzzification method are the sources of nonlinearity in FLCs, but it is possible to construct a rule base with a linear input-output characteristic so that the FLC becomes a linear controller with a corresponding control signal [26]. It is constructive to apply a similar analysis to fuzzy PID control in order to accommodate integral action, but the integrator creates problems by increasing the order of the closed-loop system. Here, the output signal is a function of two variables: the error and the derivative of the error. The control signal U(n) is a nonlinear function of "error" and "derivative of error", as shown in Figure 2.
Figure 2. Fuzzy PD controller.
The simplest membership functions are formed using straight lines. In much of the literature, triangular membership functions are used in the design of fuzzy controllers because of the simplicity of their theory and implementation. In this study, a triangular membership function is chosen for each fuzzy linguistic value of the error and of the derivative of error, as shown in Figure 3 and Figure 4, respectively.
Figure 3. Membership function for error (e).
Figure 4. Membership function for derivative of error (ė).
When the error and its derivative lie in given regions of their membership functions, we have
(5)
Considering Equation (4), there are eight conditions to discuss, as mentioned in [24].
To design the FPC, we arbitrarily consider the fifth condition:
Thus
(6) In the latter equation, the terms refer to the mapping of the membership function (Z) of the error to the output, to the mapping of the membership function of the derivative of error to the output, and so on. Substituting (5) into (6), we have:
(7) We denote the input-output relation of the controller u in the following model:
(8) For sufficiently small perturbations δe, δė and δu, Equation (8) can be approximated by the following linear equation: (9) where the coefficients are the corresponding partial derivatives evaluated at the operating point. Note that the operating point (OP) considered for the linearization here is
(10)
Also
(11)
Considering Equation (8), we have:
(12)
Then, after some algebraic manipulation, Equation (12) becomes the expression below. Thus
We can see that the above controller behaves like a parameter varying PD controller; thus the FLC performs like a PID controller when there is no external load or disturbance. A Takagi-Sugeno or Mamdani [27] fuzzy inference system can be used to design the fuzzy controller. In this paper, the Mamdani fuzzy inference system is used. The inputs of the FPD are "Error (E)" and "Derivative of Error (DE)", while the output is "control". The input and output variables of the FPD each consist of three fuzzy sets, namely N (negative), Z (zero), and P (positive), as shown in Figures 5-7, respectively.
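The linearization idea behind the parameter varying PD interpretation can be made explicit. Writing the fuzzy control surface as u = f(e, ė), a first-order expansion around the operating point (e₀, ė₀) gives (a sketch consistent with Equations (8)-(12), not a reproduction of the original equations):

$$\delta u \;\approx\; \underbrace{\left.\frac{\partial f}{\partial e}\right|_{(e_0,\dot e_0)}}_{K_P(e_0,\dot e_0)}\,\delta e \;+\; \underbrace{\left.\frac{\partial f}{\partial \dot e}\right|_{(e_0,\dot e_0)}}_{K_D(e_0,\dot e_0)}\,\delta \dot e ,$$

so the equivalent proportional and derivative gains depend on the operating point, which is exactly the parameter varying PD behaviour discussed above.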
Figure 5. Fuzzy input variable “error”.
Figure 6. Fuzzy input variable "derivative of error".
Figure 7. Fuzzy output variable “control”.
The fuzzy rule base used in the design of the controller for the servo system is the following:
Rule 1: If e is N and de is N then control is N.
Rule 2: If e is Z and de is N then control is N.
Rule 3: If e is P and de is N then control is Z.
Rule 4: If e is N and de is Z then control is N.
Rule 5: If e is Z and de is Z then control is Z.
Rule 6: If e is P and de is Z then control is P.
Rule 7: If e is N and de is P then control is Z.
Rule 8: If e is Z and de is P then control is P.
Rule 9: If e is P and de is P then control is P.
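The nine rules above, together with the triangular membership functions of Figures 3 and 4 and the COA defuzzification of Equation (4), can be combined into a compact controller. The Python sketch below is illustrative only: the membership breakpoints, the output singletons, and the product-type firing strength are assumptions, not values taken from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(0.0, min((x - a) / (b - a) if b != a else 1.0,
                        (c - x) / (c - b) if c != b else 1.0))

# Assumed breakpoints for N, Z, P on a normalized universe [-1, 1]
MF = {'N': (-1.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 1.0)}
# Assumed output singletons for the three output sets
OUT = {'N': -1.0, 'Z': 0.0, 'P': 1.0}
# The nine rules: (e label, de label) -> control label
RULES = {('N', 'N'): 'N', ('Z', 'N'): 'N', ('P', 'N'): 'Z',
         ('N', 'Z'): 'N', ('Z', 'Z'): 'Z', ('P', 'Z'): 'P',
         ('N', 'P'): 'Z', ('Z', 'P'): 'P', ('P', 'P'): 'P'}

def fuzzy_pd(e, de):
    """Fuzzy PD control value: product firing strengths + COA of Equation (4)."""
    num, den = 0.0, 0.0
    for (le, lde), lout in RULES.items():
        w = tri(e, *MF[le]) * tri(de, *MF[lde])   # mu_l(e) * mu_m(de)
        num += w * OUT[lout]
        den += w
    return num / den if den > 0 else 0.0

print(fuzzy_pd(0.4, -0.2))
```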
Flowchart for Real-Time Implementation of the FPC Figure 8 shows the flow chart of the hierarchical fuzzy control algorithm, where θd and θ denote the desired position and the displacement reported by a position sensor such as an encoder or potentiometer, respectively.
Figure 8. Flow chart for the position fuzzy control system.
PID Controller When the digital controller is used, one could replace the derivative term with a backward difference and the integral term may be replaced with a sum. For a small constant sampling time, u(t) can be approximated as:
$$u(n) = K_P\left[e(n) + \frac{T_s}{T_i}\sum_{j=0}^{n} e(j) + \frac{T_D}{T_s}\big(e(n) - e(n-1)\big)\right] \qquad (13)$$
PID controllers can be tuned using the Ziegler-Nichols (ZN) rules or manually. Hand-tuning is generally used by experienced control engineers based on special rules, but these rules are not always valid. In recent years, many studies have addressed the optimization of PID coefficients, such as extremum seeking [4], stochastic algorithms [5], genetic algorithms [6], and so on. The derivative gain TD and the integral gain Ti are then adjusted to improve and optimize the response of the system. In this paper, the PID controller coefficients obtained using the ZN tuning method are as follows: KP = 1.43, Ti = 0.00067 and TD = 1.04.
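For reference, a minimal discrete positional PID in the form of Equation (13) with the reported ZN gains is sketched below; the sampling time Ts is an assumed value, since the paper does not state it.

```python
class DiscretePID:
    """Positional PID: u(n) = Kp*(e + (Ts/Ti)*sum(e) + (Td/Ts)*(e - e_prev))."""
    def __init__(self, kp=1.43, ti=0.00067, td=1.04, ts=0.001):
        self.kp, self.ti, self.td, self.ts = kp, ti, td, ts
        self.e_sum = 0.0
        self.e_prev = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.e_sum += e
        u = self.kp * (e
                       + (self.ts / self.ti) * self.e_sum
                       + (self.td / self.ts) * (e - self.e_prev))
        self.e_prev = e
        return u

pid = DiscretePID()        # ZN-tuned gains from the text, Ts = 1 ms assumed
u = pid.update(1.0, 0.0)   # one control step for a unit step reference
```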
MODELLING AND IMPLEMENTATION Software is the heart of the system: it integrates the system components and control operations and defines the system's functionality using the related techniques and methods. The overall schematic diagram of this system, including the controller, actuator, process, and position sensor, is shown in Figure 9. The printed circuit board (PCB) designed in the Altium Designer Summer (ADS) environment and a photograph of the implemented fuzzy-logic position tracking control system are shown in Figure 10 and Figure 11, respectively. The system dynamics are represented by the state-space equations of a typical servo motor:
(14)
where x(t) is the state vector, u(t) is the control input signal, and w is an external disturbance. The system matrix A and the vectors B and C of the linearized benchmark problem are given by:
(15) where Va = armature voltage (V), Ra = armature resistance (Ω), La = armature inductance (H), Jm = rotor inertia (kg×m2), Bm = viscous friction coefficient (N×m×s/rad), KT = torque constant (N×m/A), and Kb = back-emf constant (V×s/rad).
Figure 9. Schematic diagram of control system.
Figure 10. Circuit of FLC in ADS environment.
Figure 11. A view of the implemented system in lab.
Remark. For applications where the load is to be rapidly accelerated or decelerated frequently, the electrical and mechanical time constants of the motor play an important role. The mechanical time constants in these motors are reduced by reducing the rotor inertia. In this study, a DC motor with these specifications was used: (moment of inertia of the rotor: 1e-3 j, rated motor voltage: 6 v DC, armature inductance: 0.01 H, armature resistance: 0.005 Ω, Electromotive force constant: 0.22, Back E.M.F. Constant: 1.5, Damping ratio: 1.91 N×s/m).
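The state-space model (14)-(15) can be simulated directly. The sketch below uses the parameter values reported in the Remark; the mapping of the two reported constants onto the torque constant and the back-EMF constant, the explicit-Euler integration, and the step size are assumptions made for illustration.

```python
import numpy as np

# Parameters taken from the Remark above; the assignment of the two reported
# constants to Kt (torque) and Kb (back-EMF) is an assumption.
J, B = 1e-3, 1.91        # rotor inertia, damping
L, R = 0.01, 0.005       # armature inductance, resistance
Kt, Kb = 0.22, 1.5       # torque constant, back-EMF constant (assumed mapping)

def step(x, v, dt=1e-4, t_load=0.0):
    """One explicit-Euler step of the DC servo model x = [theta, omega, i]."""
    theta, omega, i = x
    dtheta = omega
    domega = (Kt * i - B * omega - t_load) / J
    di = (v - R * i - Kb * omega) / L
    return np.array([theta + dt * dtheta,
                     omega + dt * domega,
                     i + dt * di])

# Open-loop response to the 6 V rated voltage for 0.5 s
x = np.zeros(3)
for _ in range(5000):
    x = step(x, 6.0)
print("theta =", x[0], "rad")
```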
SIMULATION RESULTS In this paper, a classic PID controller is first designed for the servo mechanism as described in Section 3.3; the chosen PID gains are those given in that section. Then a fuzzy controller with nine rules and triangular membership functions is designed in the Matlab/Simulink environment using the Mamdani approach of Section 3.1. The output responses of both the PID and fuzzy logic controllers are presented. Figure 12 shows the comparison of the output response (i.e., the servo position) for the PID and fuzzy logic controllers under the no-load condition. Figure 13 and Figure 14 show the comparison between the PID controller and the FLC for an applied external disturbance (η) with different amplitudes, η = 0.1 + sin(t) and η = 0.9 + 0.9 sin(t), respectively.
Figure 12. Output response in no load condition.
Figure 13. Output with external disturbance η = 0.1 + sin(t).
Figure 14. Output with external disturbance η = 0.9 + 0.9 sin(t).
Simulation results show that the FLC performs like a PID controller when there is no load, parameter change, or disturbance [23], but when both controllers are compared under an injected load, changed parameters, or a disturbance, the FLC performs better than the PID controller [28]. The performance of PID controllers depends heavily on the operating parameters of the system; moreover, if there is any change in the system, a significant amount of time is required to retune the controller. For the proposed method, the output responses using different defuzzification methods, namely center of gravity (COG), bisector of area (BOA), mean of maximum (MOM), and center of average (COA), under the no-load, constant-load, and external-disturbance (η) conditions are shown in Figures 15-17, respectively. One can conclude that the center of average (COA) method gives better results than the other methods (i.e., COG, BOA, and MOM).
Figure 15. Output response in no load condition.
Figure 16. Output response with constant load.
Figure 17. Output response with external disturbance.
The PID controller is designed at an operating point, and when it encounters uncertainty or unmodeled dynamics in the system, it cannot compensate for these changes effectively. The fuzzy controller, in contrast, can accommodate these changes and handle them.
CONCLUSION The formulation of a novel implementation method based on a parameter varying PD controller for the fuzzy logic servo controller has been presented, together with the results obtained from the comparison between the PID controller and the FLC with the proposed method. Simulation results show that the FLC performs like a PID controller when there is no load, changed parameters, or disturbance, but when both controllers are compared under an injected load, changed parameters, or a disturbance, the fuzzy logic controller performs better.
REFERENCES 1.
Heidari, Y., Noee, A.R., Shayanfar, H.A. and Salehi, S. (2010) Robust Control of DC Motor Using Fuzzy Sliding Mode Control with Fractional PID Compensator. The Journal of Mathematics and Computer Science, 1, 238-246. 2. Vikas, S.W., Mithun, M.B., Tulasi, R.D. and Kumar, A.D. (2010) A New Fuzzy Logic Based Modelling and Simulation of a Switched Reluctance Motor. Journal of Electrical Engineering & Technology, 5, 276-281. http://dx.doi.org/10.5370/JEET.2010.5.2.276 3. Killingsworth, N.J. and Krstic, M. (2006) PID Tuning Using Extremum Seeking. IEEE Control Systems Magazine, 26, 70-79. http://dx.doi. org/10.1109/MCS.2006.1580155 4. Karthikraja, A. (2009) Stochastic Algorithm for PID Tuning of Bus Suspension System. International Conference on Control Automation Communication and Energy Conservation, Perundurai, 4-6 June 2009, 1-6. 5. Altinten, A., Karakurt, S., Erdogan, S. and Alpbaz, M. (2010) Application of Self-Tuning PID Control with Genetic Algorithm to a Semi-Batch Polystyrene Reactor. Indian Journal of Chemical Technology (IJCT), 17, 356-365. 6. Zadeh, L.A. (1973) Outline of a New Approach to the Analysis of Complex Systems and Decision Processes. IEEE Transactions Systems Man and Cybernetics, SMC-3, 28-44. http://dx.doi.org/10.1109/ TSMC.1973.5408575 7. Natsheh, E. and Buragga, K.A. (2010) Comparison between Conventional and Fuzzy Logic PID Controllers for Controlling DC Motors. International Journal of Computer Science Issues (IJCSI), 7, 128-134. 8. Wang, X., Meng, Q., Yu, H., Yuan, Z. and Xu, X. (2005) Distributed Control for Cement Production of Vertical Shaft Kiln. International Journal for Information & Systems Sciences (IJISS), 1, 264-274. 9. Bing, G., Hairong, D. and Yanxin, Z. (2009) Speed Adjustment Braking of Automatic Train Operation System Based on Fuzzy-PID Switching Control. Sixth International Conference on Fuzzy Systems and Knowledge Discovery, Tianjin, 14-16 August 2009, 577-580. 10. Hanafy, M., Gomaa, M.M., Taher, M. and Wahba, A. (2011) Path Generation and Tracking for Car Automatic Parking Employing Swarm
Algorithm. IEEE International Conference on Computer Engineering Systems, Cairo, 29 November 2011-1 December 2011, 99-104.
11. Mudi, R.K., Dey, C. and Lee, T.T. (2006) Neuro-Fuzzy Implementation of a Self-Tuning Fuzzy Controller. IEEE International Conference on Systems, Man and Cybernetics, 6, 5065-5070.
12. Akcayol, M.A. and Sagiroglu, S. (2007) Neuro Fuzzy Controller Implementation for an Adaptive Cathodic Protection on Iraq-Turkey Crude Oil Pipeline. Applied Artificial Intelligence: An International Journal, 21, 241-256. http://dx.doi.org/10.1080/08839510701196345
13. Azzouna, A., Sakly, A., Trimeche, A., Mtibaa, A. and Benrejeb, M. (2010) PI-Like Fuzzy Control Implementation Using FPGA Technology. 5th International Conference on Design and Technology of Integrated Systems in Nanoscale Era (DTIS), Hammamet, 23-25 March 2010, 1-6. http://dx.doi.org/10.1109/DTIS.2010.5487595
14. Baleghy, A.N. and Mashhadi, M.S.K. (2012) Design and Implementation Fuzzy Controller in the Frost-Free Refrigerator by Using Multivariate Regression. 20th Iranian Conference on Electrical Engineering (ICEE), Tehran, 15-17 May 2012, 840-844. http://dx.doi.org/10.1109/IranianCEE.2012.6292470
15. Linkens, D.A., Wang, H., Bennett, S. and Xia, S. (1992) Using Qualitative Bond Graph Reasoning to Derive Look-Up Tables for Fuzzy Logic Controllers. First International Conference on Intelligent Systems Engineering, Edinburgh, 19-21 August 1992, 226-231.
16. Yu, J.Z., Hu, X.L. and Ding, R. (2009) Fuzzy Logic PID Based Control Design for Permanent Magnet Synchronous Motor Servo System. Second International Conference on Intelligent Computation Technology and Automation, 2, 728-731.
17. Tzeng, C.B., Liu, Y.C. and Young, M.S. (1995) A Preliminary Study of Fuzzy Control Parameters and Taguchi-Method on DC Servo Motor Control. IEEE/IAS International Conference on Industrial Automation and Control: Emerging Technologies, Taipei, 22-27 May 1995, 170-174.
18. Hao, Y. (2012) DSP-Based Fuzzy Logic Servo Motor Control. International Conference on Control Engineering and Communication Technology (ICCECT), Liaoning, 7-9 December 2012, 556-559.
19. Sang, H.K. and Lark, K.K. (2001) Design of a Neuro-Fuzzy Controller for Speed Control Applied to AC Servo Motor. IEEE Proceedings International Symposium on Industrial Electronics (ISIE), 1, 435-440.
20. Kumar, N.S. and Kumar, C.S. (2010) Design and Implementation of Adaptive Fuzzy Controller for Speed Control of Brushless DC Motors. International Journal of Computer Applications, 1, 36-41. 21. Claudia, P. and Miguel, S. (2008) Speed Control of a DC Motor by Using Fuzzy Variable Structure Controller. Proceedings of the 27th Chinese Control Conference, Kunming, 16-18 July 2008, 311-315. 22. Namazov, M. and Basturk, O. (2010) DC Motor Position Control Using Fuzzy Proportional-Derivative Controllers with Different Defuzzification Methods. Turkish Journal of Fuzzy Systems, 1, 36-54. 23. Huang, T.T., Chung, H.Y. and Lin, J.J. (1999) A Fuzzy PID Controller Being Like Parameter Varying PID. IEEE International Fuzzy Systems Conference Proceedings, 1, 269-276. 24. Jantzen, J. (2007) Foundations of Fuzzy Control. WS: John Wiley & Sons, Ltd. http://dx.doi.org/10.1002/9780470061176 25. Pahuja, R., Verma, H.K. and Uddin, M. (2011) Design and Implementation of Fuzzy Temperature Control System for WSN Applications. IJCSNS International Journal of Computer Science and Network Security, 11, 1-10. 26. Mamdani, E.H. (1977) Application of Fuzzy Logic to Approximate Reasoning Using Linguistic Synthesis. IEEE Transactions on Computers, C-26, 1182-1191. http://dx.doi.org/10.1109/TC.1977.1674779 27. Alassar, A.Z., Abuhadrous, I.M. and Elaydi, H.A. (2010) Comparison between FLC and PID Controller for 5DOF Robot Arm. International Conference on Advanced Computer Control (ICACC), 5, 277-281. 28. Aslam, F. and Kaur, G. (2011) Comparative Analysis of Conventional, P, PI, PID and Fuzzy Logic Controllers for the Efficient Control of Concentration in CSTR. IJCA, 17, 12-16.
SECTION 3: NEURAL NETWORKS-BASED CONTROL
Chapter 9
Neural Network Supervision Control Strategy for Inverted Pendulum Tracking Control
Hongliang Gao1, Xiaoling Li1, Chao Gao2, and Jie Wu1
1 School of Electrical Engineering and Automation, Hubei Normal University, Huangshi 435002, China
2 The China Ship Development and Design Center, Wuhan 430064, China
ABSTRACT This paper presents several control methods and realizes the stable tracking for the inverted pendulum system. Based on the advantages of RBF and traditional PID, a novel PID controller based on the RBF neural network supervision control method (PID-RBF) is proposed. This method realizes the adaptive adjustment of the stable tracking signal of the system. Furthermore, an improved PID controller based on RBF neural network supervision control strategy (IPID-RBF) is presented. This control strategy adopts the supervision control method of feed-forward and feedback. The response speed of the system is further improved, and the overshoot of the tracking signal is further reduced. The tracking control simulation of the inverted pendulum system under three different signals is given to illustrate the effectiveness of the proposed method.
Citation: Hongliang Gao, Xiaoling Li, Chao Gao, Jie Wu, "Neural Network Supervision Control Strategy for Inverted Pendulum Tracking Control", Discrete Dynamics in Nature and Society, vol. 2021, Article ID 5536573, 14 pages, 2021. https://doi.org/10.1155/2021/5536573.
Copyright: © 2021 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION Inverted pendulum system has been widely investigated in the past few decades based on two important characteristics of high order and strong coupling, which are important problems in control field. And it is an unstable, nonlinear, and multivariable system. Inverted pendulum control methods have a wide range of applications in military, aerospace, robotics, and general industrial processes, such as balancing problems during robot walking, verticality issues during rocket launch, and attitude control issues during satellite flight. The RBF neural network learning control algorithm has been a hot topic in current academic research. This algorithm can solve nonlinear problems, tracking problems, and external interference problems. Therefore, using this method to study the tracking problem of the inverted pendulum is of great significance. The RBF (radial basis function) neural network controllers of the nonlinear system are designed based on proportion integration differentiation (PID), and these methods have good control results. Here are some related research results. In [1], it is presented that RBF network to estimate complex and precise dynamics mainly solves the problem of uncertainty and external interference in the context of complex space. This method is used to solve the problem of model uncertainty and input error. In [2], a neural network adaptive control algorithm with PID is proposed. The self-learning ability and self-adaptation uncertain system dynamic characteristics are used to significantly reduce the impact of resistance disturbance on speed. The system has strong robustness under the parameter variations and external disturbances. In [3], a scheme which combines a proportion differentiation control and a RBF neural network adaptive control algorithm is used. Among them, it uses the PD control to track the trajectory of the end effector of the wire-driven parallel robot (WDPR). The RBF neural network control algorithm is applied to approximate parameters. The combination of these two methods reduces the approximation error, enhances the robustness, and improves the accuracy of the WDPR. In [4], a fuzzy logic-based offline control strategy of a single-wheeled inverted pendulum robot (SWIPR) is presented to study the error, system set time, and rise time, etc. In the end, a good control effect was achieved. In [5], RBF network is applied to reduce chatter and increase stability. In [6], it proposes a single-layer
nonlinear controller to achieve that the inverted pendulum can be adjusted to a stable state from any initial position and achieves four degrees of freedom. Hossein [7] put forward a kind of improved PID control method based on RBF neural network. It shows that the algorithm has a better control effect than the traditional PID for the tension. In [8, 9], adaptive control methods are applied to the input or output of nonlinear systems. They are, respectively, applicable to a class of nonsmooth nonlinear systems and a class of multi-input multioutput nonlinear time-delay systems with input saturation. In [10, 11], two different adaptive methods were proposed to solve the unknown and stochastic nonlinear tracking problems of nonlinear systems. And other scholars have studied the stabilization and convergence of the system [12–14]. Kumar et al. [15] use adaptive control technology to deal with tracking problems. At present, many researchers [16–20] have conducted in-depth studies on various inverted pendulum models using different control methods. In fact, no matter what the control system is, the system will tend to be stable under the action of the controller. Therefore, RBF neural network is used as a model for approximating uncertainty, to study error convergence, and achieved good convergence results in [21]. In [22], it is proposed to use the RBF algorithm to estimate the residual error, reduce the error through the design of the controller, increase the control signal, and obtain good heating requirements. In [23], the dynamic characteristics of machine system are explored and the characteristics of the RBF neural network are used to study the tracking problem. Under the large structure of the RBF neural network, different basis functions are selected for a comparative study to eliminate chattering in [24]. In [25], the RBF neural network implements self-feedback control, accurate prediction, and real-time control of reasonable data. It has improved tracking accuracy and estimated unmodeled dynamics and external interference issues in [26]. As more and more academic researchers understand the approximation characteristics of RBF, they add RBF neural networks to various fields to study the dynamic characteristics of different systems. It mainly solves the problems of nonlinearity, uncertainty, and external interference and uses the Lyapunov function to ensure the effectiveness of the algorithm, so that it reduces system errors and reaches a stable state [27–34]. In recent years, the footprint of PID applications can be seen in different fields. From simple PID control algorithm to complex PID algorithm control, it has played a role in different control fields. In [35, 36], the PID control algorithm and operation rules are studied, respectively. The characteristics
of the PID algorithm are explored, and the PID has a certain degree of adaptability through simulation. According to the characteristics of the PID algorithm, some scholars have studied the tuning of the PID controller [37, 38]. Tuning PID parameters are used to optimize system performance according to actual conditions. As PID parameter tuning technology becomes more and more mature, some interested scholars use PID as a controller to study system stability and tracking issues [39–42]. In order to adapt PID to more situations, some scholars have launched the control research of fuzzy PID [43, 44] and fractional PID [45, 46]. It can be seen that the PID control algorithm is a relatively classic control method. According to the references, we can find that the RBF neural network control algorithm basically uses the Lyapunov function to determine the stability conditions. In [47], based on the nonlinear U model, RBF neural network and PD parallel control algorithm are proposed. The Lyapunov function determines the conditions of system stability, and under this condition, the tracking effect has been improved. However, from the tracking effect, the error between the system output and the tracking signal is large, and the tracking situation with external interference is not considered. Therefore, in this article, we consider these problems based on the inverted pendulum model to study its tracking problem. In short, the control of the inverted pendulum model mainly includes three major control performances, namely, stability, accuracy, and rapidity. Then, for the tracking problem of the inverted pendulum, the three comprehensive performances also need to be considered. Therefore, we designed a supervision control method PID-RBF. We further design another supervision control strategy IPID-RBF. The stable tracking of the signal is achieved by supervision control strategy. In general, the main innovations of this paper include the following: (1) PID-RBF strategy ensures the stability of the system. The overshoot of the system is reduced and the robustness of the system is enhanced. In the case of interference, the parameters can be adjusted adaptively to control signal tracking. (2) IPID-RBF strategy further solves the problem of large overshoot in the control process. The adjustment time of the system is further reduced. This strategy has strong anti-interference ability, fast stability, and small error with the tracking curve. (3) In the control process, we can use the PID-RBF strategy to replace the traditional PID control strategy. This way can make
the system overshoot smaller and system stability better. IPIDRBF strategy further improves the overall performance of the system. In the IPID-RBF control strategy, the system has a faster response speed, better stability, and robustness. The rest of the paper is organized as follows. The relevant control objective is presented in Section 2. Neural network supervision control design is presented in Section 3. The simulation study is discussed in Section 4. Finally, the conclusions are given in Section 5.
CONTROL OBJECTIVE The inverted pendulum system can be difficult to control as the order increases. At the same time, the inverted pendulum system itself has the characteristics of complexity, instability, and nonlinearity. The inverted pendulum system is often used as an experimental project in real society. At the same time, the effectiveness of some control methods in the introduction part has been verified by controlling the inverted pendulum system. Therefore, it has important significance for the research of inverted pendulum. In order to study the signal tracking problem, we consider the inverted pendulum model based on PID and RBF neural network control. The inverted pendulum model is similar to [48]. The force analysis of the inverted pendulum system is shown in Figure 1.
Figure 1. Force analysis diagram of trolley and swing rod.
In Figure 1, the external force exerted on the trolley and its displacement are denoted by u and x, respectively, and θ is the angle between the pendulum and the vertical direction. Differentiating the displacement gives the trolley velocity ẋ, and with b denoting the friction coefficient between the trolley and the guide rail, the horizontal resistance of the guide rail on the trolley is bẋ. In addition, the interaction between the trolley and the pendulum is decomposed into two mutually perpendicular forces in the vertical plane, where fV and fH represent the vertical and horizontal components, respectively. The motion of the pendulum bar is expressed by three motions: the horizontal motion of the center of gravity, the vertical motion of the center of gravity, and the rotation around the center of gravity. According to Newton's laws of mechanics, we obtain three equations of motion, where ϕ = θ + π represents the angle between the pendulum and the vertical downward direction: (1) (2) (3) Equation (1) is equivalent to the following equation: (4) Equation (2) is also equivalent to the following equation: (5) The resultant force in the horizontal direction of the trolley can be expressed as follows: (6) Substituting (4) into (6), the external force u can be written as
Similarly, substituting (4) and (5) into (3)yields
(7)
(8) Formulas (7) and (8) are the nonlinear equations of motion of the vehicle-mounted inverted pendulum system. In order to facilitate the control, the system is linearized. Suppose that θ ≤ 20° is within the error range for keeping stability. Because ϕ = θ + π and θ is small, cos ϕ ≈ −1, sin ϕ ≈ −θ, and $\dot\theta^2 \approx 0$. After linearization, the system is transformed into the following mathematical model:
(9) Taking the Laplace transform in (9), one obtains
(10) By eliminating X(s) from the equation set (10), the transfer function of the trolley to the pendulum angle is obtained as follows:
(11)
where q = (M + m)(I + ml²) − (ml)² is a constant.
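For comparison, in the widely used linearized cart-pendulum derivation that this section follows, the transfer function (11) from the input force to the pendulum angle has the form below; this is a reconstruction from the surrounding definitions, not the equation as printed in the original article:

$$\frac{\Phi(s)}{U(s)} \;=\; \frac{\dfrac{ml}{q}\,s}{\;s^{3} + \dfrac{b\,(I + ml^{2})}{q}\,s^{2} - \dfrac{(M + m)\,m\,g\,l}{q}\,s - \dfrac{b\,m\,g\,l}{q}\;},
\qquad q = (M + m)(I + ml^{2}) - (ml)^{2}.$$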
NEURAL NETWORK SUPERVISION CONTROL DESIGN PID-RBF and IPID-RBF Control Design Here, we introduce the PID-RBF control and IPID-RBF control. As we know, PID controller consists of three important parameters, which are proportional regulation coefficient kp, integral regulation coefficient ki, and differential regulation coefficient kd. The proportional regulation coefficient kp can change the response speed of the system and improve the regulation precision of the system. The integral adjustment coefficient ki can eliminate the residual error. The dynamic performance of the system can be improved by differential adjustment coefficient kd. As shown in Figure 2, different PID
parameters have different response speeds and stability. When the response curve oscillates significantly, kp should be increased, ki should be increased, and kd should be smaller. When the error of the response curve is large, kp should be reduced, ki should be reduced, and kd should be increased appropriately. According to this method, the best parameters are selected to achieve the best control effect of the inverted pendulum system.
Figure 2. Step response curves under different PID parameters.
The RBF neural network is a three-layer feed-forward neural network in which the mapping from the hidden layer to the output is linear, which greatly speeds up learning and avoids the problem of local minima. RBF neural network supervision control supervises the traditional controller, adjusts the weights of the network online, and makes the feedback control input up(k) tend to zero. The structure of the RBF neural network supervision control system is shown in Figure 3.
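The supervision scheme of Figure 3 can be summarized in the following Python sketch (an illustrative reconstruction, not the authors' code): a conventional feedback controller produces up(k), an RBF network with Gaussian hidden units produces un(k), the plant receives u(k) = up(k) + un(k), and the RBF weights are trained by gradient descent with a momentum term so that un(k) approaches u(k), driving up(k) toward zero. For simplicity the network input is taken as the scalar reference r(k); the centers, widths, learning rate, and momentum factor are assumed values.

```python
import numpy as np

class RBFSupervisor:
    """RBF network u_n(k) = sum_j w_j * h_j, trained so that u_n tracks u."""
    def __init__(self, centers, widths, eta=0.25, alpha=0.05):
        self.c = np.asarray(centers, dtype=float)   # one center per hidden node
        self.b = np.asarray(widths, dtype=float)    # base width of each node
        self.w = np.zeros(len(self.c))              # output weights
        self.w_prev = np.zeros(len(self.c))
        self.eta, self.alpha = eta, alpha           # learning rate, momentum

    def hidden(self, r):
        # Gaussian basis functions h_j = exp(-(r - c_j)^2 / (2 b_j^2))
        return np.exp(-((r - self.c) ** 2) / (2.0 * self.b ** 2))

    def output(self, r):
        return float(self.w @ self.hidden(r))

    def train(self, r, u_total):
        # Gradient step on E = 0.5 * (u_n - u)^2, plus a momentum term
        h = self.hidden(r)
        u_n = float(self.w @ h)
        delta = self.eta * (u_total - u_n) * h
        new_w = self.w + delta + self.alpha * (self.w - self.w_prev)
        self.w_prev, self.w = self.w, new_w

# One step of the supervision loop (pid_step and plant are placeholders):
#   u_p = pid_step(r_k, y_k); u_n = net.output(r_k); u = u_p + u_n
#   y_next = plant(u); net.train(r_k, u)
```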
Figure 3. Structure of neural network supervision control system.
In the RBF network structure, the input signal of the network is taken as r(k), H = [h1, . . . , hm]T is the radial basis function vector, and the Gaussian basis function hj is expressed as follows:
$$h_j = \exp\!\left(-\frac{\lVert x(k) - C_j \rVert^{2}}{2\,b_j^{2}}\right), \qquad (12)$$
where x(k) denotes the network input vector, j = 1, . . . , m, bj is the base width parameter of node j, bj > 0, Cj is the center vector of node j, Cj = [c11, . . . , c1m]T, and B = [b1, . . . , bm]T. The weight vector of the network is given by
$$W = [w_1, \ldots, w_m]^{T}. \qquad (13)$$
The output of the RBF network is denoted by
$$u_n(k) = \sum_{j=1}^{m} w_j\, h_j, \qquad (14)$$
where m is the number of hidden layer neurons in the network. The control law is given by
$$u(k) = u_p(k) + u_n(k). \qquad (15)$$
The performance indicators of the neural network adjustment are given
$$E(k) = \tfrac{1}{2}\big(u_n(k) - u(k)\big)^{2}. \qquad (16)$$
The approximation is as follows:
$$u_n(k) \approx u(k). \qquad (17)$$
The error caused by the approximation is compensated by weight adjustment. The gradient descent method is adopted to adjust the weights of the network.
(18) where η is the learning rate and α is the momentum factor, and we obtain the adjustment process of the neural network weights as follows: (19) (1) The PID controller based on the RBF neural network supervision control method (PID-RBF) processes the error signal e1(k), the cumulative error signal $\sum_{s=0}^{k} e_1(s)$, and the difference between the current and previous errors Δe1(k) = e1(k) − e1(k − 1). The PID-RBF control law can be expressed as follows:
(20)
The network input vector of the PID-RBF control is given by
(21)
where T1 represents sampling time, e1(k) = r(k) − y1(k), the input signal is r(k), and y1(k) is the output sequence of the system response, s = 0, 1, . . . , k. The network output of the PID-RBF control is (22) According to (15), (20), and (22), we can express its total control law as
(23) The weight adjustment process of RBF neural network supervision is expressed as
(24) (2) An improved PID controller based on RBF neural network supervision control strategy (IPID-RBF) includes error signal e2(k), cumulative error signal , current error, and last time error difference signal Δe2(k) = e2(k) − e2(k − 1) processing. (25) The network input vector of the IPID-RBF control is
(26)
where y2(k) is the output sequence of the system response and T2 represents the sampling time.
The network output of the IPID-RBF control is as follows: (27) According to (15), (25), and (27), we can express its total control law as
(28)
The weight adjustment process of RBF neural network supervision is expressed as
(29)
Control Algorithm Stability Analysis For the stability analysis of the control algorithm, we fully consider the performance index adjusted by the neural network, E(k) of equation (16), E(k) = (1/2)(un(k) − u(k))². The state equation for the free motion of a linear stationary discrete system is
$$x(k+1) = G\, x(k). \qquad (30)$$
(34) . From
Neural Network Supervision Control Strategy for Inverted Pendulum ...
207
(35) Combining the previous error and weight adjustment methods, we can express e1(k + 1) as (36) The gradient descent method is used to adjust the weight of the network which can be rewritten as (37) Combining equations (36) and (37) and performance indicators we can get the following equation:
(38) where
.
According to equation (38), (35) can be expressed as follows:
(39) When the value field of ηhj is of
we can get
. We analyse equation (39) and find that the product and is positive. Therefore, we can conclude . When the system is stable. Because
of the relation between the preceding and the following is expressed as and then
208
Intelligent Control and Automation
. Therefore, the control algorithm is
convergent.
SIMULATION STUDIES This section provides some simulations to show the inverted pendulum tracking effect of PID-RBF supervision control and IPID-RBF supervision control. We consider the inverted pendulum model and take the swing angle of the swing rod as the controlled object. Under zero initial conditions, and . The transfer function can be expressed as (40) The transfer function of the inverted pendulum system is discretized by z-transformation. The discretized object after z-transformation is
(41)
The base width parameter is given by (42) The center vector is as follows: (43) We consider .
and
In Figure 4, the chart is given a square wave signal using neural network supervision control and traditional PID control. Its amplitude is one. From the diagram, it can be seen that the amplitude oscillation of the neural network supervision control is smaller than that of the traditional PID controller. The IPID-RBF supervision control tends to be stable, fastest, and more gentle. Obviously, PID-RBF and IPID-RBF supervision control have stable speed and high accuracy compared with pure PID control.
Figure 4. Square wave tracking.
In Figure 5, the graph shows the RBF neural network supervision control tracking the input square wave signal parameter curve, the un is the RBF network supervision control online learning adjustment curve, and the up is the PID adjustment curve in the RBF network supervision control; superposed curves u are the sum of un and up in RBF networks supervised control. Comparing two adjustments of three curves, because of the transformation of square wave signal from positive to negative, the value of RBF network supervision control online learning adjusting curve is changed from zero to positive, the PID adjusting curve in RBF network supervision control is negative at the jump instant, and the variation of the superposition curve is relatively smooth.
Figure 5. Square wave tracking RBF control parameter.
In Figure 6, the diagram is given step signal using RBF neural network supervision control and traditional PID control. Its amplitude is one. From the diagram, the amplitude oscillation of the RBF network supervisory control is smaller than that of the conventional PID control. PID-RBF supervision control is stable after 3.8 s. IPID-RBF supervision control is stable after 2.5 s. Obviously, IPID-RBF supervision control is more stable and accurate.
Figure 6. Step signal tracking.
In Figure 7, the graph shows the RBF neural network supervision control tracking the input parameter curve of the step signal. RBF neural network supervision control online learning adjustment curve un and superposition curve u change trend are basically consistent.
Figure 7. Square wave tracking RBF control parameter.
In Figure 8, the graph is given sine wave signal r using RBF neural network supervision control and traditional PID control to track. Its amplitude is one. As we can see from the picture, the amplitude oscillation of the RBF network supervision control is smaller than that of the pure PID, and the effect of the RBF neural network supervision control is better than that of pure PID control in the time period from 0s to 20 s. There is no error coincidence between the RBF neural network supervision control curve (y1 or y2) and the r curve. The RBF network supervision control has less error and better accuracy than the pure PID control. It can be clearly observed from the partially enlarged view that the IPID-RBF control has the highest coincidence and the best tracking effect.
Figure 8. Sine wave tracking.
In Figure 9, the diagram shows the RBF neural network supervision control tracking the parameter curve of the input sine wave signal. In RBF network supervision control, the change of PID regulating curve up is small, and the change of u in RBF network supervision control is gentle.
Figure 9. Sine wave tracking RBF control parameter.
From Figures 8 and 9, the RBF neural network supervision control online learning adjustment curve un and the input signal r show the opposite trend change under the input sine wave signal. As the input signal increases, the adjustment curve un decreases. The input signal r decreases, and the adjustment curve un increases. The adjustment trend of un is related to the change of weight w. In the process of adjustment, the value of h is positive, . When the value of the input signal is positive, the weight w changes
correspondingly and the value is negative; when
the value of the input signal is negative, the weight w changes correspondingly and the value is positive. Figure 10 shows that, for a given step signal with an amplitude of one, a given pulse-type disturbance amplitude is one. Interference time is 0.5 s. The pure PID and the RBF neural network supervision control are used for tracking, respectively. The disturbance is added when the time is 5 s in Figure 10. And the amplitude oscillation of the RBF network supervision control is smaller than that of the pure PID control when the disturbance appears. Then, the system can adjust to the stable state quickly after the disturbance disappears to realize the step signal tracking.
Figure 10. Step signal tracking.
Figure 11 shows the RBF neural network supervision control with known disturbance signal and step signal input. There are three largescale adjustments in total: the adjustment of the system stability before the disturbance is added, the adjustment of the error caused by the disturbance compensation, and the readjustment of the system stability. In RBF network supervision control, the curves un and u tend to be stable after online learning.
Figure 11. Step signal tracking RBF control parameter.
In Figure 12, the graph is given a square wave signal. Its signal amplitude is one. Given pulse-type disturbance amplitude is two, and the time duration is 0.5 s. When time is 5 s, the disturbance is added. When the disturbance appears, the amplitude oscillation of RBF network supervision control is smaller than that of the pure PID. It is observed in the picture that the system quickly adjusts to the stable state without oscillation and realizes the square wave signal tracking under the IPID-RBF control.
Figure 12. Square wave tracking.
In Figure 13, the picture shows the RBF neural network supervision control with the input of square wave signal and known disturbance signal. In the diagram, there are three large-scale adjustments between 0 and 10 seconds, such as the adjustment of system stability before the disturbance is added, the adjustment of the error caused by disturbance compensation, and the adjustment of system stability. Combined with the regulation of compensating disturbance in the case of step input, traditional PID and the RBF neural network supervision control are used in the regulation of compensating disturbance for the time of 0.5 s. After 0.5 s, the stability of the system is adjusted to stable tracking signal. Among them, the IPID-RBF control had the least amount of speed to adjust.
Figure 13. Square wave tracking RBF control parameter.
In Figure 14, the graph is given sine wave signal. Its signal amplitude is one. Given pulse-type disturbance amplitude is two, and the time duration is 0.5 s. When time is 5 s, the disturbance is added. It can be seen that the deviation of y1 or y2 from r is less than that of the pure PID, and the curve of RBF neural network supervision control has a higher coincidence with the given sine signal after the disturbance disappears.
Figure 14. Sine wave tracking.
Figure 15 shows that the disturbance amplitude is given to one, and the time duration is 0.5 s, which is monitored by RBF network with the input of sine wave. Compared with the unperturbed condition, the value of the online learning adjustment curve un of RBF neural network supervision control is larger. There are three large pulse-type jumps of un and up superposed curves u in RBF network supervisory control. The other time RBF network supervision control online learning adjustment curve un and the RBF neural network supervision control superposition curve u change trend are basically consistent.
Figure 15. Sine wave tracking RBF control parameter.
Combining Table 1 and the error graphs from Figures 16 to 21, we can find that the traditional PID control has the worst effect, the longest adjustment time, and a greater error than the other two control algorithms. The IPID-RBF control can reach a stable state in a short time whether or not there is interference. Table 1. The values of error when k = 20000.
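Table 1 reports accumulated tracking errors at k = 20000 for the three controllers. A sketch of how such an error measure can be computed from logged reference and output sequences is given below; the sum-of-squared-error metric is an assumption, since the table's exact definition is not reproduced here.

```python
import numpy as np

def tracking_errors(r, outputs):
    """Accumulated squared tracking error for each controller's logged output."""
    r = np.asarray(r, dtype=float)
    return {name: float(np.sum((r - np.asarray(y, dtype=float)) ** 2))
            for name, y in outputs.items()}

# r, y_pid, y_pidrbf, y_ipidrbf would be the logged sequences of length k:
# errs = tracking_errors(r, {"PID": y_pid, "PID-RBF": y_pidrbf, "IPID-RBF": y_ipidrbf})
```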
Figure 16. Step signal tracking error without interference.
Figure 17. Step signal tracking error under interference.
Figure 18. Sinusoidal signal tracking error without interference.
Figure 19. Sinusoidal signal tracking error under interference.
Figure 20. Wave signal tracking error without interference.
Figure 21. Tracking error of wave signal under interference.
CONCLUSIONS From the simulation effect, the IPID-RBF control applied to the model has better performance. At the same time, the input signal tracking is well achieved. Under three different input signals, real-time tracking of the input signal is realized through online learning, and the error of the input and output is continuously adjusted, so that the system error eventually approaches zero.
Compared with the traditional PID control, the IPID-RBF control has the best tracking effect with the input signal under three different input signals. It has improved the characteristics of traditional PID with low accuracy. Compared with the PID-RBF supervision control, the IPID-RBF control has smaller curve oscillation amplitude during the adjustment process, and the system reaches a stable state in a short time and has strong anti-interference ability. This algorithm has simple control and good tracking accuracy. Therefore, the improved control algorithms have good robustness, and the stability of the system is good. Simulation graphics and data show that the IPID-RBF controls the controlled object through online learning to achieve online identification and control. It has high control accuracy, good dynamic characteristics, and anti-interference ability.
ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China under Grants 61971181 and 61602163 and the Natural Science Foundation of Hubei Province under Grant 2016CFC735.
REFERENCES 1.
S. M. Attaran, R. Yusof, and H. Selamat, “A novel optimization algorithm based on epsilon constraint-RBF neural network for tuning PID controller in decoupled HVAC system,” Applied Thermal Engineering, vol. 99, pp. 613–624, 2016. 2. Z. Aydin, U. Erol, and Y. Hseyin, “Nonlinear control of an inverted pendulum on a cart by a single control law,” Applied Mechanics and Materials, vol. 464, pp. 279–284, 2014. 3. R. M. Brisilla and V. Sankaranarayanan, “Nonlinear control of mobile inverted pendulum,” Robotics and Autonomous Systems, vol. 70, pp. 145–155, 2015. 4. Z. Chen, X. Yang, and X. Liu, “RBFNN-based nonsingular fast terminal sliding mode control for robotic manipulators including actuator dynamics,” Neurocomputing, vol. 362, pp. 72–82, 2019. 5. Y. Chu, Y. Fang, and J. Fei, “Adaptive neural dynamic global PID sliding mode control for MEMS gyroscope,” International Journal of Machine Learning and Cybernetics, vol. 8, no. 5, pp. 1707–1718, 2017. 6. S. Han, H. Wang, and Y. Tian, “Time-delay estimation based computed torque control with robust adaptive RBF neural network compensator for a rehabilitation exoskeleton,” ISA Transactions, vol. 97, pp. 171– 181, 2019. 7. M. Hossein, “Robust predictive control of wheel slip in antilock braking systems based on radial basis function neural network,” Applied Soft Computing Journalhttps, vol. 70, pp. 318–329, 2018. 8. C. Y. Jia, Y. C. Chen, and Z. G. Ding, “Application in composite machine using RBF neural network based on PID control,” Automation Control and Intelligent Systems, vol. 2, no. 6, pp. 100–104, 2014. 9. F. S. T. Jao, P. C. Tiago, S. Samuel da, and S. R. Jean M de, “Attitude control of inverted pendulums using reaction wheels: comparison between using one and two actuators,” Systems and Control Engineering, vol. 234, no. 3, pp. 420–429, 2019. 10. T. Kunitoshi and N. Sumito, “Posture stability control of a small inverted pendulum robot in trajectory tracking using a control moment gyro,” Advanced Robotics, vol. 34, no. 9, 2020.
11. A. Kharola, P. Dhuliya, and P. Sharma, “Anti-swing and position control of single wheeled inverted pendulum robot (SWIPR),” International Journal of Applied Evolutionary Computation, vol. 9, no. 4, 2018. 12. A. Wu and Z. Zeng, “Global Mittag-Leffler stabilization of fractionalorder memristive neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 1, pp. 206–217, 2017. 13. A. Wu and Z. Zeng, “Output convergence of fuzzy neurodynamic system with piecewise constant argument of generalized type and timevarying input,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 12, pp. 1689–1702, 2016. 14. A. Wu and Z. Zeng, “Exponential stabilization of memristive neural networks with time delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 12, pp. 1919–1929, 2012. 15. P. Kumar, N. Kumar, and V. Panwar, “RBF neural control design for SISO nonaffine nonlinear systems,” Procedia Computer Science, vol. 125, pp. 25–33, 2018. 16. X. Lian, J. Liu, T. Yuan, and N. Cui, “RBF network based adaptive sliding mode control for solar sails,” Aircraft Engineering and Aerospace Technology, vol. 90, no. 8, pp. 1180–1191, 2018. 17. Z. Li and Y. Bo, “PID adaptive control in the application of the induction motor system based on the RBF neural network inverse,” Applied Mechanics and Materials, vol. 2014, pp. 2393–2396, 2014. 18. C. X. Liu, Z. W. Ping, Y. Z. Huang, J. G. Lu, and H. Wang, “Position control of spherical inverted pendulum via improved discrete-time neural network approach,” Nonlinear Dynamics, vol. 2020, 2020. 19. H. B. Liang, Z. L. Li, and G. L. Li, “Neural network prediction model to achieve intelligent control of unbalanced drillings underpressure value,” Springer Nature, vol. 2018, 2018. 20. P. Liu, H. Yu, and S. Cang, “Adaptive neural network tracking control for underactuated systems with matched and mismatched disturbances,” Nonlinear Dynamics, vol. 98, no. 2, pp. 1447–1464, 2019. 21. P. Morasso, A. Cherif, and J. Zenzeri, “The Single Inverted Pendulum model is not so bad after all,” PLoS ONE, vol. 14, no. 3, 2019. 22. M. J. Mahmoodabadi and H. Khoobroo Haghbayan, “An optimal adaptive hybrid controller for a fourth-order under-actuated nonlinear inverted pendulum system,” Transactions of the Institute of Measurement and Control, vol. 42, no. 2, pp. 285–294, 2020.
23. O. C. Oliveira, A. De Medeiros Martins, and A. D. De Araujo, “A dualmode control with a RBF network,” Transactions of the Institute of Measurement and Control, vol. 28, no. 2, pp. 180–188, 2017. 24. Z. P. Shen, Y. Wang, H. M. Yu, and C. Guo, “Finite-time adaptive tracking control of marine vehicles with complex unknowns and input saturation,” Ocean Engineering, vol. 198, 2020. 25. X. Y. Shi, Y. Y. Cheng, C. Yin, X. G. Huang, and S. M. Zhong, “Design of adaptive back stepping dynamic surface control method with RBF neural network for uncertain nonlinear system,” Neurocomputing, vol. 2018, 2018. 26. F. Wang, Z.-Q. Chao, L.-B. Huang, H.-Y. Li, and C.-Q. Zhang, “Trajectory tracking control of robot manipulator based on RBF neural network and fuzzy sliding mode,” Cluster Computing, vol. 22, no. S3, pp. 5799–5809, 2017. 27. H. Wang, H. R. Karimi, P. X. Liu, and H. Yang, “Adaptive neural control of nonlinear systems with unknown control directions and input deadzone,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 11, pp. 1897–1907, 2018. 28. H. Wang, P. X. Liu, and B. Niu, “Robust fuzzy adaptive tracking control for nonaffine stochastic nonlinear switching systems,” IEEE Transactions on Cybernetics, vol. 48, no. 8, pp. 2462–2471, 2018. 29. Y. Q. Wang, Q. Lin, X. G. Wang, and F. G. Zhou, “Adaptive PD control based on RBF neural network for a wire-driven parallel robot and prototype experiments,” Aircraft Engineering and Aerospace Technology, vol. 2019, 2019. 30. R. L. Wang, B. C. Lu, and W. B. Ni, “Integral separation PID control of certain electro-hydraulic servo system based on RBF neural network supervision,” in Proceedings of the 2nd International Conference on Automation, Mechanical Control and Computational Engineering,(AMCCE 2017), pp. 52–57, Beijing, China, March 2017. 31. H. Wang, P. Shi, H. Li, and Q. Zhou, “Adaptive neural tracking control for a class of nonlinear systems with dynamic uncertainties,” IEEE Transactions on Cybernetics, vol. 47, no. 10, pp. 3075–3087, 2017. 32. L. L. Wan, Y. X. Su, H. J. Zhang, Y. C. Tang, and B. H. Shi, “Global fast terminal sliding mode control based on radial basis function neural network for course keeping of unmanned surface vehicle,” International Journal of Advanced Robotic Systems, vol. 16, no. 2, 2019.
33. A. Wu and Z. Zeng, “Lagrange stability of memristive neural networks with discrete and distributed delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 4, pp. 690–703, 2014. 34. A. Wu, H. Liu, and Z. Zeng, “Observer design and H∞ performance for discrete-time uncertain fuzzy-logic systems,” IEEE Transactions on Cybernetics, vol. 1, 2020. 35. B. Yang, Z.-X. Liu, H.-K. Liu, Y. Li, and S. Lin, “A GPC-based multivariable PID control algorithm and its application in anti-swing control and accurate positioning control for bridge cranes,” International Journal of Control, Automation and Systems, vol. 18, no. 10, pp. 2522– 2533, 2020. 36. A. R. Sergio and F. Gerardo, “PID principles to obtain adaptive variable gains for a Bi-order sliding mode control,” International Journal of Control, Automation and Systems, vol. 18, pp. 2456–2467, 2020. 37. S. Ulusoy, S. M. Nigdeli, and G. Bekda, “Novel metaheuristic-based tuning of PID controllers for seismic structures and verification of robustness,” Journal of Building Engineering, vol. 33, 2020. 38. P. Fišer and J. Fiaer, “A universal map of three-dominant-pole assignment for PID controller tuning,” International Journal of Control, vol. 93, no. 9, pp. 2267–2274, 2020. 39. J.-W. Perng, Y.-C. Kuo, and K.-C. Lu, “Design of the PID controller for hydro-turbines based on optimization algorithms,” International Journal of Control, Automation and Systems, vol. 18, no. 7, pp. 1758– 1770, 2020. 40. M. H. Zhao, X. B. Xu, H. Yang, and Z. J. Pan, “Design of a predictive RBF compensation fuzzy PID controller for 3D laser scanning system,” Applied Sciences, vol. 10, no. 13, 2020. 41. J. Zhang, H. C. Sun, Y. M. Qi, and S. P. Deng, “Temperature control strategy of incubator based on RBF neural network PID,” World Scientific Research Journal, vol. 6, no. 1, 2020. 42. M. K. Debnath, R. Agrawal, S. R. Tripathy, and S. Choudhury, “Artificial neural network tuned PID controller for LFC investigation including distributed generation,” International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 33, no. 5, 2020. 43. Y. W. Wang, W. A. Zhang, H. Dong, and L. Yu, “A LADRC based fuzzy PID approach to contour error control of networked motion
control system with time‐varying delays," Asian Journal of Control, vol. 22, no. 5, pp. 1973–1985, 2020.
44. M. Taghdisi and S. Balochian, "Maximum power point tracking of variable-speed wind turbines using self-tuning fuzzy PID," Technology and Economics of Smart Grids and Sustainable Energy, vol. 5, no. 1, pp. 357–366, 2020.
45. A. Jamali, R. Shahnazi, and A. Maheri, "Load mitigation of a class of 5-MW wind turbine with RBF neural network based fractional-order PID controller Asgharnia," ISA Transactions, vol. 96, pp. 272–286, 2020.
46. T. K. Mohapatra, A. K. Dey, and B. K. Sahu, "Employment of quasi oppositional SSA-based two-degree-of-freedom fractional order PID controller for AGC of assorted source of generations," IET Generation, Transmission & Distribution, vol. 14, no. 7, pp. 3365–3376, 2020.
47. F. Xu, D. Tang, and S. Wang, "Research on parallel nonlinear control system of PD and RBF neural network based on U model," Taylor & Francis Group, vol. 61, no. 2, pp. 284–294, 2020.
48. X. Zhao, X. Wang, G. Zong, and H. Li, "Fuzzy-approximation-based adaptive output-feedback control for uncertain nonsmooth nonlinear systems," IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3847–3859, 2018.
Chapter 10
Neural PID Control Strategy for Networked Process Control
Jianhua Zhang1 and Junghui Chen2
1 State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources, North China Electric Power University, Beijing 102206, China
2 Department of Chemical Engineering, Chung-Yuan Christian University, Chung-Li 320, Taiwan
Citation: Jianhua Zhang, Junghui Chen, “Neural PID Control Strategy for Networked Process Control”, Mathematical Problems in Engineering, vol. 2013, Article ID 752489, 11 pages, 2013. https://doi.org/10.1155/2013/752489. Copyright: © 2013 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT
A new method with a two-layer hierarchy is presented based on a neural proportional-integral-derivative (PID) iterative learning method over the communication network for the closed-loop automatic tuning of a PID controller. It can enhance the performance of the well-known simple PID feedback control loop in the local field when real networked process control is applied to systems with uncertain factors, such as external disturbances or randomly delayed measurements. The proposed PID iterative learning method is implemented by backpropagation neural networks whose weights
are updated via minimizing tracking error entropy of closed-loop systems. The convergence in the mean square sense is analysed for closed-loop networked control systems. To demonstrate the potential applications of the proposed strategies, a pressure-tank experiment is provided to show the usefulness and effectiveness of the proposed design method in network process control systems.
INTRODUCTION
Networked control systems (NCSs) make it convenient to control large distributed systems. Process control can integrate the controlled process and the communication network of computational devices, but sensors and actuators cannot be directly used in a conventional way because there are some inherent issues in NCSs, such as delay, packet loss, quantization, and synchronization. Recently, some efforts have been made to deal with these issues in NCSs. Zhang et al. (2012) investigated the stability problem of a class of delayed neural networks with stabilizing or destabilizing time-varying impulses [1]. The controlling regions in neural networks were identified in three scales using single-objective evolutionary computation methods [2]. Tian et al. (2008) investigated the observer-based output feedback control algorithm for networked control systems with two quantizers using a set of nonlinear matrix inequalities [3]. In robust design, a robust and reliable 𝐻∞ filter was designed for a class of nonlinear networked control systems with random sensor failure [4].
In addition, some investigations have been made to incorporate the network into process control systems in the literature. In the work of [5], a model predictive control algorithm was extended for processes with random delays using a communication delay model, along with time-stamping and buffering techniques. Two novel networked model predictive control schemes based on neighborhood optimization were presented for online optimization and control of a class of serially connected processes [6]. A two-tier control architecture was presented [7]. A lower-tier control system relying on point-to-point communication and continuous measurements was first designed to stabilize the closed-loop system. Also, an upper-tier networked control system was subsequently designed using the Lyapunov-based model predictive control theory to profit from both the continuous and the asynchronous, delayed measurements as well as from additional networked control actuators to improve the closed-loop system performance. A design methodology for fault-tolerant control systems of chemical plants with distributed interconnected processing units was presented [8]. This approach incorporated Lyapunov-
based nonlinear control with the hybrid system theory. It was developed based on a hierarchical architecture that integrated lower-level feedback control of the individual units with upper-level logic-based supervisory control over communication networks. An output feedback controller was proposed. It combined a Lyapunov-based controller with a high-gain observer for nonlinear systems subject to sensor data losses [9]. Since NCS operates over a network, data transfer between the controller and the remote system, in addition to the controlled processing delay, will inevitably introduce network delays, the controller-to-actuator delay and the sensor-to-controller delay. Random access networks, such as CAN and Ethernet, have random network delays, which may cause deterioration of the system performance [10]. The packet loss caused by communication networks also affects the control system performance; that is, process control systems under the communication environment are stochastic in nature because of the random time delays or packet loss caused by the communications on the employed networks. The output variable of a network process is usually subjected to the network delay of the uncertain duration and/or stochastic disturbances. It can be treated as a random variable that follows a specific probability density function (PDF). Consequently, the output tracking error is also a random variable. PDF of the output tracking error can be determined if the information about the process model and PDF of both disturbances and network delays is available. The dynamic stochastic distribution control theory has been proposed [11]. It is generally true for only linear systems with Gaussian random inputs or at least symmetric PDF that the minimum variance of the output tracking error indicates optimization of controller tuning. However, for non-Gaussian stochastic systems, the control method that is focused only on mean and variance of the output tracking error is not sufficient to capture the probabilistic behavior of this stochastic system. A more general measure of uncertainty should be used to characterize the uncertainty of the output tracking error in the control design. If the network is connected with the regulatory control level and the advanced control level is located at the remote side to cooperate with other plants, the whole control system will have a two-level hierarchy. The control structure of the regulatory control level in the plant site is unchanged. Thus, the simple and robust proportional-integral-derivative (PID) controller can be applied to the operating plant. The conventional PID controller can get the support from the networked process control systems when the operating plants encounter a new operation condition or uncertain factors. On the
other hand, neural networks have recently attracted very large research interest. They have great capability for solving complex mathematical problems, have received considerable attention in the field of chemical process control, and have been applied to system identification and controller design [12]. They are effectively used in the control field for modeling nonlinear processes, especially in model-based control, such as direct and indirect neural network model based control [13], nonlinear internal model control [14], and recurrent neural network model control [15]. Although the control performances of the above methods are satisfactory, the control design still focuses on point-to-point wired communication links between the designed controllers and the controlled processes. This makes the implementation strategy realistic only for in-site control systems. An optimal fuzzy PID controller for NCSs was proposed by minimizing the sum of the integral of time multiplied by absolute errors and the squared controller output [16]. Neural controllers for NCSs were also applied based on minimum tracking error entropy for tuning the PID controller [17], but it was not easy to implement them in practical applications because they needed a set of tracking errors at the instant time point. Ghostine et al. (2011) used Monte Carlo simulation to demonstrate the effectiveness of the traditional PID controller, but they did not show how to adjust the PID controller parameters [18]. Indeed, for networked process control systems, it is the random time delays and packet loss, besides random disturbances, that lead to stochastic tracking errors for closed-loop systems. As such, it is natural to represent networked process control systems in a stochastic control framework.
The main objective of this work is to propose a two-level hierarchical control system based on a PID-like neural network adaptive scheme for networked process control systems. The tracking error entropy of closed-loop control systems is utilized to construct the performance index so as to update the weights of the neural PID controller. The rest of this paper is organized as follows. Section 2 investigates the stochastic characteristics in networked process control systems and formulates the control problem and objective. The reason for selecting the minimum entropy of the closed-loop tracking error as a performance index is then fully described. Section 3 presents the neural PID control law for process control systems with network-induced time delays and packet dropout, and then the convergence condition in the mean square sense of the proposed controller is presented and proved. Section 4 shows the application of the proposed controller to a networked chemical process control system, and Section 5 concludes this paper.
STOCHASTIC CHARACTERISTICS OF NCS IN OPERATION PROCESSES
A networked process control system with the two-level hierarchical control architecture shown in Figure 1, in which a process plant is controlled by a PID controller at the plant site and the controller can also be self-tuned by a neural network at the remote site, is proposed to improve the performance of the closed-loop system. In this system, the output signal 𝑦(𝑘) of the plant is synchronously measured with an ideal sampler 𝑆 at a sampling rate 1/𝑇. The digital controller uses the information of the plant transmitted through the networks to update the new set of control parameters. The parameter values transmitted through the networks and held by a zero-order holder 𝐻 are then transmitted to drive the plant. 𝑦(𝑘) ∈ 𝑅1 is the input to the controller, and 𝑟(𝑘) is the setpoint. Since the NCS operates over a network, data transfers between the controller and the remote system will inevitably introduce network delays; that is, the output 𝑦(𝑘) from the current time point 𝑘 is often unavailable before the measurement is sent to the controller at the current time point due to network delays. Likewise, the control input 𝑢(𝑘) calculated at the current time point 𝑘 cannot be applied immediately after the control action is computed. 𝜏ca denotes the induced delay between the controller and the actuator, and 𝜏sc stands for the induced delay between the sensor and the controller.
Figure 1. Networked process control system.
The controlled process is nonlinear, no matter whether it is a continuous or batch process. The PID control variable 𝑢(𝑘) at the plant site is
(1)
where 𝑢𝑠 is the bias value and 𝑒(𝑘) = 𝑟(𝑘) − 𝑦(𝑘) is the error of the output 𝑦(𝑘) deviated from the setpoint. 𝐾𝑃, 𝐾𝐼, and 𝐾𝐷 are known as the proportional gain, the integral time constant, and the derivative time constant, respectively. In Figure 1, the control structure is similar to an adaptive control structure, but the parameters of the PID controller are adjusted via the backpropagation network control system. The control parameters 𝐾𝑃, 𝐾𝐼, and 𝐾𝐷 transmitted through the networks and held by a zero-order holder 𝐻 are
(2)
And they are then transmitted to the controller to drive the plant. In networked control systems, a continuous signal is sampled, encoded in digital format, transmitted over the network, and finally decoded at the receiver’s side. The overall delay between sampling and eventual decoding at the receiver can be highly variable because both the network access delays and the transmission delays depend on highly variable network conditions, such as congestion and channel quality [19]. In a networked control system, message transmission delay can be divided into two parts: device delay and network delay. The device delay includes the time delay at the source and the destination nodes. The time delay at the source node includes the preprocessing time, 𝑇pre, and the waiting time, 𝑇wait. The time delay at the destination node is only the postprocessing time, 𝑇post. The network time delay includes the total transmission time of a message, 𝑇tx, and the propagation delay of the network (𝑇𝑝). The total time delay can be expressed as [20] (3) The waiting time 𝑇wait may be significant due to the amount of data sent by the source node, the transmission protocol, and the traffic on the network. 𝑇wait is random in most cases. The postprocessing time 𝑇post is negligible in a networked control system compared with other time delays. The above delays in networked control systems can be obtained by time-stamping techniques although they are normally not available [5].
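The interplay of (1) and (2) can be made concrete with a short sketch: the plant-side controller always applies the most recently received gain set, which is what the zero-order holder 𝐻 amounts to, while gain packets from the remote tuner may arrive late or be lost. This is a minimal illustration under stated assumptions only; the positional PID form, the class and method names, and the dropout handling below are not taken from the original equations, which are not reproduced in this reprint.

```python
import numpy as np

class PlantSidePID:
    """Discrete PID whose gains arrive over the network (zero-order hold)."""

    def __init__(self, kp, ki, kd, u_bias=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd   # last gains received (held by H)
        self.u_bias = u_bias                     # bias value u_s in (1)
        self.err_sum = 0.0                       # running sum approximating the integral
        self.err_prev = 0.0

    def receive_gains(self, packet):
        """Apply a remotely tuned gain packet; ignore lost packets (None)."""
        if packet is not None:                   # packet dropout -> keep the held gains
            self.kp, self.ki, self.kd = packet

    def control(self, setpoint, measurement):
        """Positional discrete PID, an assumed form of equation (1)."""
        e = setpoint - measurement
        self.err_sum += e
        u = (self.u_bias + self.kp * e + self.ki * self.err_sum
             + self.kd * (e - self.err_prev))
        self.err_prev = e
        return u

# Example: the remote tuner's gain packet for this sample is delayed or dropped.
pid = PlantSidePID(kp=1.2, ki=0.05, kd=0.3)
pid.receive_gains(None)          # lost packet -> previously held gains stay in force
u = pid.control(setpoint=3.0, measurement=2.4)
print(round(u, 3))
```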
In the networking area, random network delays have been modeled by using various formulations based on probability and the characteristics of sources and destinations [21]. In most cases, the network-induced time delays are random and are not Gaussian. Packet dropouts result from transmission errors in physical network links, which is far more common in wireless than in wired networks, or from buffer overflows due to congestion. Long transmission delays sometimes result in packet reordering, which essentially amounts to a packet dropout if the receiver discards “outdated” arrivals [22]. The packet loss is another random factor in networked control systems. This means that in general the system output will be very unlikely to be a Gaussian noise, leading to a non-Gaussian closed-loop tracking error when the control input is applied to the system. Thus, the randomness measure of the tracking error via the use of variance would not be sufficient in characterizing the performance of the closed-loop system. Therefore, the problems considered in this paper can be formulated to design a proper controller such that the shape of the tracking error of NCSs is made as narrow as possible.
DESIGN OF AN NCS CONTROLLER
The output variable of the controlled processes over the communication system is subjected to network delays of uncertain duration and/or stochastic disturbances. It can be treated as a random variable that follows a specific PDF. To adjust the shape of the PDF, entropy is used here. Entropy has a more general meaning for arbitrary random variables than the mean or variance. When entropy is minimized, all the moments of the error PDF (not only the second moment) are constrained [22]. Consequently, it can be used to measure and form a design criterion for general dynamic stochastic systems subject to arbitrary random inputs whose PDF can be of any shape. The application of entropy in control and estimation is developed in this section.
Estimation of Error PDF and Its Entropy
In practical applications, the PDF of the tracking error in NCSs is often unknown a priori. A data-driven method that obtains the error PDF from the error samples 𝑒(𝑘) is employed, where 𝑧 represents the current error, which is a random variable. The error data are obtained by subtracting the process output as actually received by the controller from the setpoint in a given time period. Using the Parzen method [22], the error PDF
is approximated as the weighted sum of the 𝐾 Gaussian kernel functions 𝜅(𝑧, 𝜎²) = (1/√(2𝜋𝜎²)) exp(−𝑧²/(2𝜎²)), with the mean of each kernel located at the error data point 𝑒(𝑘), as
(4)
where the multidimensional Gaussian function with a radially symmetric variance 𝜎2 is used here for simplicity. To minimize the range of the error and, at the same time, maximize the concentration of the error about the mean of the error PDF, the entropy of the error should be minimized. The entropy (𝐻) of the error PDF can be calculated using the Shannon entropy
(5)
When the Shannon entropy definition is used along with this PDF estimate, the calculation of the Shannon entropy becomes very complex, whereas the calculation of the Rényi quadratic entropy leads to a simpler form. Rényi’s entropy with order 𝑆 of the tracking error is given by
(6)
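As a concrete illustration of the Parzen estimate referenced in (4) and the entropy measures of (5)–(6), the following sketch builds the error PDF from a window of error samples with Gaussian kernels and evaluates the Shannon and Rényi quadratic entropies by numerical integration. The kernel width, grid, and function names are illustrative assumptions; the authors' exact expressions are in the original equations, which are not reproduced in this reprint.

```python
import numpy as np

def gaussian_kernel(z, sigma2):
    """Gaussian kernel kappa(z, sigma^2)."""
    return np.exp(-z**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

def parzen_pdf(z, errors, sigma=0.5):
    """Parzen estimate of the error PDF: mean of kernels centred at each e(k)."""
    z = np.atleast_1d(z)[:, None]
    return gaussian_kernel(z - errors[None, :], sigma**2).mean(axis=1)

def shannon_entropy(errors, sigma=0.5):
    """H = -integral p(z) log p(z) dz, evaluated on a grid (cf. equation (5))."""
    grid = np.linspace(errors.min() - 4 * sigma, errors.max() + 4 * sigma, 2000)
    p = np.clip(parzen_pdf(grid, errors, sigma), 1e-300, None)
    return -np.trapz(p * np.log(p), grid)

def renyi_quadratic_entropy(errors, sigma=0.5):
    """H2 = -log integral p(z)^2 dz (order-2 Renyi entropy, cf. (6) with S = 2)."""
    grid = np.linspace(errors.min() - 4 * sigma, errors.max() + 4 * sigma, 2000)
    p = parzen_pdf(grid, errors, sigma)
    return -np.log(np.trapz(p**2, grid))

errors = np.random.default_rng(0).normal(0.0, 0.3, size=50)  # a window of tracking errors
print(shannon_entropy(errors), renyi_quadratic_entropy(errors))
```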
General Minimum Entropy Based Tuning Neural PID Network
Unlike the conventional deterministic controller design, which seeks a control input that minimizes the difference between the process output and the desired output at the next time step, the goal of the proposed controller design is to determine control parameters that make the shape of the error PDF as narrow as possible, since the output over the NCS is stochastic. The objective is expressed as (7) where the order of Rényi’s entropy is set to 𝑆 = 2 owing to its computational efficiency. The above objective function is calculated for the present instant time point 𝑘 only. 𝑉𝑘 is the information potential (IP) of the quadratic Rényi entropy at time point 𝑘. It can be calculated in closed form from the samples using Gaussian kernels as
(8)
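Equation (8) itself is not reproduced in this reprint; in the information-theoretic learning literature, the quadratic information potential of a Parzen estimate with Gaussian kernels takes the closed form 𝑉 = (1/𝐾²) Σ𝑖 Σ𝑗 𝜅(𝑒(𝑖) − 𝑒(𝑗), 2𝜎²). The sketch below assumes that standard form and is meant as an illustration rather than a transcription of (8).

```python
import numpy as np

def information_potential(errors, sigma=0.5):
    """Quadratic information potential V of a window of error samples.

    Assumes the standard closed form for Gaussian kernels:
    V = (1/K^2) * sum_i sum_j kappa(e_i - e_j, 2*sigma^2).
    Maximising V is equivalent to minimising the Renyi quadratic entropy,
    since H2 = -log V.
    """
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]                      # all pairwise differences
    two_sigma2 = 2.0 * sigma**2
    kernel = np.exp(-diff**2 / (2.0 * two_sigma2)) / np.sqrt(2.0 * np.pi * two_sigma2)
    return kernel.mean()                                # (1/K^2) * sum over the K x K grid

errors = [0.12, -0.05, 0.30, 0.02, -0.18]
print(information_potential(errors, sigma=0.5))
```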
Although the PID controller is widely utilized because of the simple structure and the intuitive physical meanings, it is difficult to tune proper PID parameters in NCSs owing to induced random delays. Consequently, in this research, a self-tuning neural PID control scheme is proposed in Figure 2, where a neural network is used to tune the parameters of the conventional PID controller similar to the adjustment made by an experienced operator. Like the human operator who has accumulated past experience and knowledge on the controlled system and knows how to do the adjustment, the neural network can store such information and digest it for the final adjustment because neural networks have the function approximation abilities, learning abilities, and versatilities that can adapt to unknown environments. Thus, the operator’s experience and knowledge can be included into a neural network and the network is trained based on the past data history. Finally, the trained neural network can be used as means to tune the PID controller parameters online.
Figure 2. Structure of a neural network in NCS.
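Before the detailed equations, a compact sketch of the forward pass of the network in Figure 2 may help. It follows the description in the next paragraph and in (9)–(11): inputs 𝑟(𝑘), 𝑒(𝑘), 𝜏ca, and 𝜏sc, one hidden layer with the tanh-type activation, and an output layer with the sigmoid-type activation producing 𝐾𝑃, 𝐾𝐼, and 𝐾𝐷. The bias handling, output scaling, and function names in the sketch are assumptions, since the original equations are not reproduced in this reprint.

```python
import numpy as np

def f(x):
    """Hidden-layer activation (e^x - e^-x) / (e^x + e^-x), i.e. tanh."""
    return np.tanh(x)

def g(x):
    """Output-layer activation e^x / (e^x + e^-x), a sigmoid with values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def pid_gains(r_k, e_k, tau_ca, tau_sc, w_pq, w_lp):
    """Forward pass of the single-hidden-layer tuning network.

    w_pq : (P, 5) hidden-layer weights (last column multiplies a constant bias input)
    w_lp : (3, P+1) output-layer weights (last column multiplies a hidden bias node)
    Returns (K_P, K_I, K_D), here in (0, 1); rescale as needed for the plant.
    """
    x = np.array([r_k, e_k, tau_ca, tau_sc, 1.0])   # input layer with bias node
    hidden = f(w_pq @ x)                            # hidden-layer outputs
    hidden = np.append(hidden, 1.0)                 # bias into the output layer
    k_p, k_i, k_d = g(w_lp @ hidden)
    return k_p, k_i, k_d

rng = np.random.default_rng(1)
P = 3                                               # hidden neurons, as in the case study
w_pq = rng.uniform(-1.0, 1.0, size=(P, 5))          # initial weights drawn from [-1, 1]
w_lp = rng.uniform(-1.0, 1.0, size=(3, P + 1))
print(pid_gains(r_k=3.0, e_k=0.4, tau_ca=0.08, tau_sc=0.05, w_pq=w_pq, w_lp=w_lp))
```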
Using the three-layered neural network shown in Figure 2, the learning rule that finds suitable PID parameters is realized. The inputs of the network are the setpoint 𝑟(𝑘), the current error 𝑒(𝑘), the induced delay between the controller and the actuator 𝜏ca, and the induced delay between the sensor and the controller 𝜏sc. The outputs at the output layer are 𝐾𝑃, 𝐾𝐼, and 𝐾𝐷. The outputs of this single-hidden-layer neural network can be computed via the following steps. The input layer is
(9) The input and the output of the hidden layer are
(10) where 𝑃 is the number of hidden neurons. The input and the output of the output layer are
(11)
where 𝑤𝑝𝑞 and 𝑤𝑙𝑝 are the weights trained in the neural network. The initial values of the weights are selected from a uniform distribution over the small range [−1, 1]. The activation functions 𝑓(𝑥) = (e^𝑥 − e^−𝑥)/(e^𝑥 + e^−𝑥) and 𝑔(𝑥) = e^𝑥/(e^𝑥 + e^−𝑥) are applied to the weighted sums of the incoming signals from the input layer and the hidden layer, respectively. In the input layer and the hidden layer, the bias is introduced through the first node. In order to derive the self-tuning algorithm of the PID controller, the cost function defined in (7) should be minimized. Based on the steepest descent approach, at the output layer, we have
(12) Consider the following: where 𝜂1 or 𝜂 is the learning factor.
(13)
(14)
where the IP shown above is computed using the measurement errors of the past samples. The Jacobian information 𝜕𝑦(𝑘)/𝜕𝑢(𝑘) can be replaced by sgn(𝜕𝑦(𝑘)/𝜕𝑢(𝑘)), or it can be calculated by the model prediction algorithm of the plant under the assumption 𝑢(𝑘 − 𝑖) ≈ 𝑢(𝑘 − 𝑖 + 1) when proper sampling of the control system is applied, as
(15)
Also,
(16)
At the hidden layer, (17)
(18)
Using the chain rule, the corresponding partial derivatives with respect to 𝑤𝑝𝑞 can be expressed as
(19)
(20)
where
Although the IP in (8) can be calculated through the Monte Carlo method, it is preferable to find a more practical solution using a sliding window. Supposing that the estimate 𝑉𝑘(𝑒) of the tracking error IP has been obtained at the 𝑘th run, the recursive estimate can be updated by using the new sample 𝑒(𝑘 + 1) as follows:
(21)
where 0 ≤ 𝜆 < 1 is the forgetting factor and 𝐿 is the window length. The size of the window 𝐿 should be selected to cover the dynamic characteristics of the plant and the network-induced delays. Note that even if the sliding window is used to produce the information potential, the objective function is still optimized for the present instant 𝑘 only.
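A hedged sketch of the sliding-window recursion of (21) follows: each new error sample updates the information-potential estimate over the last 𝐿 samples, with older contributions discounted by the forgetting factor 𝜆. Because (21) is not reproduced here, the exponentially weighted form below is an assumption intended only to convey the idea.

```python
from collections import deque
import numpy as np

class SlidingWindowIP:
    """Recursive information-potential estimate over a sliding window."""

    def __init__(self, window=50, sigma=0.5, forgetting=0.9):
        self.window = deque(maxlen=window)   # window length L
        self.sigma2 = 2.0 * sigma**2         # kernel variance 2*sigma^2
        self.lam = forgetting                # forgetting factor 0 <= lambda < 1
        self.v = 0.0                         # current estimate V_k

    def _kernel(self, d):
        return np.exp(-d**2 / (2.0 * self.sigma2)) / np.sqrt(2.0 * np.pi * self.sigma2)

    def update(self, e_new):
        """Incorporate the new tracking error e(k+1) and return the updated estimate."""
        if self.window:
            past = np.asarray(self.window)
            new_contrib = self._kernel(e_new - past).mean()
            self.v = self.lam * self.v + (1.0 - self.lam) * new_contrib
        self.window.append(e_new)
        return self.v

ip = SlidingWindowIP(window=50, sigma=0.5, forgetting=0.9)
for e in [0.2, 0.1, -0.05, 0.02, 0.0]:
    v = ip.update(e)
print(round(v, 4))
```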
Online Updating PID Controller Based on SPC Monitoring Chart
The conventional control parameter adjustment is triggered on noise-free or noise-filtered data. However, it has the disadvantage of postponing the parameter adjustment until after an influential transient has subsided. Noise, on the other hand, can induce unjustified control parameter changes, which would, in turn, induce incorrect control action. Process variations in our problem may indeed be a classical electronic type of instrumentation noise and/or packet loss caused by the communications on the employed networks, but they can also be successive, small but real transients in the process output, commonly because of external disturbance changes. Thus, in the past, process variations were classified into common cause and assignable cause variations in the manufacturing and service industries. A common cause variation is inherent in the process and is impossible or hard to eliminate. It is assumed that the sample comes from a known probability distribution. Based on Shewhart’s classification, Deming (1982) argued that a special cause of variation, “something special, not part of the system of common causes” [23], should be identified and removed at the root. If an error value is calculated from noisy measurements, the error variable will be a randomly distributed variable. Even if controlled processes are well designed and carefully maintained, a certain amount of inherent natural variability is unavoidable. The
mechanism that assesses the performance of the control loop for the current operation is the control chart. The chart summarizes the results obtained for the process conducted under the minimum variance. To detect if there is deviation from the minimum bounds in the current operations, a statistical hypothesis testing approach can be applied to the control output error at each time point. If the controlled process is at a steady state, one can conventionally calculate the error distribution function (4). Based on the traditional statistical process control (SPC) method, the upper (𝑈) and the lower (𝐿) control limits for the error are
(22)
where 𝛼 represents the probability of falsely rejecting the hypothesis when, in fact, it should be accepted. The selected area 𝛼 under normal operation is quite small; in this study, 𝛼 is 0.05. The limits 𝐿 and 𝑈 in (22) indicate that the error will fall between 𝐿 and 𝑈 with the integral of the probability density function over [𝐿, 𝑈] equal to 1 − 𝛼. Thus, whenever the process is around the setpoint, it is highly unlikely that the discrepancy between the observed and the setpoint values will be out of the lower and the upper limits. One aspect of the SPC philosophy is to accept the normal process variability without reporting any change in value unless the cause is identified [24, 25]. Equation (22) is numerically tractable using the probability model to calculate the thresholds (𝐿 and 𝑈) and detect any departure of the process from its standard behavior. Applying the above control limits to the measured error variable, one would retain the PID controller parameters in the local plant until there is statistically sufficient evidence that the error value has changed. This means that an SPC control chart is superimposed to monitor when the controlled system departs from the target so that the values of the NN-PID controller parameters can be regulated. Therefore, the proposed two-level hierarchical control architecture can be summarized by the following steps.
Step 1. Calculate the tracking errors in NCSs {𝑒(𝑘1), 𝑒(𝑘2), . . . , 𝑒(𝑘𝑁)} and the IP based on the sliding window using (21).
Step 2. Train the neural networks offline based on the historical data and a conventional tuning method, like ITAE (integral of the time-weighted absolute error). Of course, the historical data should cover the desired operation region so that a good initial model can be achieved. Also, use (21) to
construct 𝐿 and 𝑈 control limits. Then the weights of the neural networks are updated by (12) and (17).
Step 3. Check whether the controlled variable is “in statistical control” or not. If the process, operating with only chance causes of variation present, is “in statistical control,” keep the same controller parameters; otherwise, update 𝐾𝑃, 𝐾𝐼, and 𝐾𝐷 with the values obtained via the neural networks.
Step 4. Formulate the control input 𝑢(𝑘) by (1), apply the control input 𝑢(𝑘) to actuate the plant, set 𝑘 ← 𝑘 + 1, and repeat Step 1.
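The four steps can be summarized in code. The sketch below assumes that the limits 𝐿 and 𝑈 of (22) are taken as the 𝛼/2 and 1 − 𝛼/2 quantiles of the recent error samples (so that the mass between them is 1 − 𝛼) and that the gains suggested by the tuning network are adopted only when the error leaves those limits; the plant, network, and tuning-network functions are placeholders, not the models used in the paper.

```python
import numpy as np

def control_limits(error_window, alpha=0.05):
    """Approximate L and U of (22) as quantiles holding 1 - alpha of the error mass."""
    return np.quantile(error_window, [alpha / 2.0, 1.0 - alpha / 2.0])

def networked_pid_loop(plant_step, nn_gains, setpoint, n_steps, alpha=0.05, window=50):
    """Two-level loop: plant-side PID, remote NN tuning gated by the SPC chart."""
    kp, ki, kd = 1.0, 0.05, 0.1                 # initial gains from offline (ITAE-like) tuning
    err_sum, err_prev, y = 0.0, 0.0, 0.0
    errors = []
    for k in range(n_steps):
        e = setpoint - y                        # Step 1: track the error window
        errors.append(e)
        recent = np.array(errors[-window:])
        if len(recent) >= window:               # Step 3: is the loop "in statistical control"?
            low, up = control_limits(recent, alpha)
            if not (low <= e <= up):            # out of control -> adopt NN-suggested gains
                kp, ki, kd = nn_gains(setpoint, e)
        err_sum += e                            # Step 4: apply the PID law (1)
        u = kp * e + ki * err_sum + kd * (e - err_prev)
        err_prev = e
        y = plant_step(y, u)                    # plant plus network, abstracted away here
    return errors

# Placeholder plant (first-order lag) and tuner, for illustration only.
demo_plant = lambda y, u: 0.9 * y + 0.1 * u
demo_tuner = lambda r, e: (1.5, 0.08, 0.2)
errs = networked_pid_loop(demo_plant, demo_tuner, setpoint=3.0, n_steps=200)
print(round(errs[-1], 3))
```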
Note 1. The neural network adjusts the parameters of the controller in terms of the tracking error. The change in the tracking error cannot be observed immediately when the operating point changes or a disturbance occurs. Nevertheless, the change in the tracking error can be obtained eventually, after a period of transmission time, due to the closed-loop feedback.
Note 2. The kernel size (the window width of the Parzen estimator) can be set experimentally after a preliminary analysis of the dynamic range of the tracking error. Some clustering analysis techniques can be applied to obtain a proper kernel size.
To ensure that the proposed algorithm can track the desired output of the controlled system, the convergence condition for the proposed algorithm should be satisfied and is derived as follows.
Theorem 1. If the learning factor satisfies 0 < 𝜂 < 1/max𝑖|𝜆𝑖|, the weights of the neural networks are convergent in the mean square sense.
The weights of the neural networks can be rearranged in a line and denoted as a vector 𝑊. Equations (12) and (17) can be summarized as follows: (23)
where ∇𝑉(𝑊(𝑘)) is the gradient of 𝑉𝑘 given in (13) or (18) evaluated at 𝑊(𝑘). Perform the Taylor series expansion of the gradient ∇(𝑊(𝑘)) around the optimal weight vector 𝑊∗ as (24)
Define a new weight vector space
and yield
(25)
where the Hessian matrix 𝑅 = 𝜕∇𝑉(𝑊∗)/2𝜕𝑊 = 𝜕²𝑉(𝑊∗)/2𝜕𝑊². Let the weight vector be transformed by the orthonormal matrix consisting of the eigenvectors of 𝑅. Thus,
(26)
where Λ is the diagonal eigenvalue matrix with entries ordered in correspondence with the ordering of the eigenvectors. It yields
(27)
Define ΔΩ𝑖(𝑘) = Ω𝑖(𝑘 + 1) − Ω𝑖(𝑘). Since the eigenvalues of the Hessian matrix 𝑅 are negative, |1 + 𝜂𝜆𝑖| < 1 guarantees that the 𝑖th weight of the neural networks has stable dynamics in the mean square sense. Therefore, the following inequality can ensure that the proposed algorithm is convergent in the mean square sense:
(28) This means (29) where 𝐸(⋅) is mathematical expectation.
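Theorem 1 can be checked numerically once an estimate of the Hessian of the information potential with respect to the weights is available. The sketch below uses a finite-difference Hessian of an arbitrary placeholder objective and only illustrates the bound 0 < 𝜂 < 1/max𝑖|𝜆𝑖|; it is not part of the authors' derivation.

```python
import numpy as np

def numerical_hessian(fun, w, eps=1e-4):
    """Finite-difference Hessian of a scalar function at w."""
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            wpp = w.copy(); wpp[i] += eps; wpp[j] += eps
            wpm = w.copy(); wpm[i] += eps; wpm[j] -= eps
            wmp = w.copy(); wmp[i] -= eps; wmp[j] += eps
            wmm = w.copy(); wmm[i] -= eps; wmm[j] -= eps
            H[i, j] = (fun(wpp) - fun(wpm) - fun(wmp) + fun(wmm)) / (4.0 * eps**2)
    return H

def learning_rate_bound(fun, w):
    """Upper bound 1 / max_i |lambda_i| on the learning factor eta (Theorem 1)."""
    eigvals = np.linalg.eigvalsh(numerical_hessian(fun, w))
    return 1.0 / np.max(np.abs(eigvals))

objective = lambda w: -np.exp(-np.sum(w**2))       # placeholder information-potential surrogate
w = np.array([0.2, -0.1, 0.4])
eta_max = learning_rate_bound(objective, w)
print(0.0 < 0.01 < eta_max)                         # eta = 0.01, as used in the case study
```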
CASE STUDIES: AIR-PRESSURE TANK
Although several schemes of self-tuning PID controllers have been proposed in the past for dealing with highly nonlinear and time-varying chemical processes, those approaches were limited to process disturbances under the Gaussian distribution assumption. If the collected data have a non-Gaussian distribution, those methods would produce poor responses and would not be able to achieve the expected performance by minimizing the variance. In this case study, comparisons between the proposed network control method and a conventional PID tuning method are made.
A laboratory-scale air-pressure tank system shown in Figure 3 is made accessible for control over the computer network. At the local site, the tank is equipped with differential-pressure-to-current (DP/I) transducers to provide a continuous measurement of the pressure. The computer is connected to a PCI-1710 analog/digital I/O expansion card from Advantech. The expansion board uses a 12-bit converter; therefore, the digital signals are 12 bit. The analog signals from the measured pressure are amplified and conditioned by EDM35 (4–20 mA/0–5 V) modules. The embedded applications, including the data acquisition software and the PID controller algorithm, are developed using MATLAB. At the remote site, the Windows operating system and TCP/IP over wireless networks, which are implemented on the laboratory floors to support laptop mobility, are chosen. The Linux operating system and TCP/IP over Ethernet are implemented for the data storage server. The time constant of the air-pressure tank is significantly larger than the time needed for the controller computation. Thus, the controller under the Microsoft Windows operating system is quite close to a real-time environment. Furthermore, with the configuration mentioned above, it is easy to apply this test structure to the laboratory process and demonstrate queuing and buffering delays in the network system, although an industrial system should incorporate a real-time OS using an appropriate network scheme. Moreover, in the discrete control system, if the controller can actually act at each fixed time point and the proper information is collected in the interval between two time points, the operating system can be treated as real-time control.
The distributions of the delay between the controller and the actuator and those of the delay between the sensor and the controller are shown in Figures 4(a) and 4(b), respectively. The sensor-to-controller time delay can be calculated from the time difference between the time point when the controlled variable is measured and the time point when the controlled variable is stored on the data server. Likewise, the controller-to-actuator time delay is obtained by computing the time difference between the time point when the output of the neural network is sent out and the time point when the local controller receives the controller parameters. The objective is to assess the control performance in the presence of communication and measurement delays when the proposed NCS method is applied. The variation of the process output with control inputs over time will be used to evaluate the performance of the proposed method.
Figure 3. Schematic diagram of the air-pressure tank control loop over the network.
Figure 4. The distributions of (a) delay between the controller and the actuator and (b) delay between the sensor and the controller in network control of the air-pressure tank process.
In this application, the sampling period 𝑇 = 1 sec. The setpoint of the pressure has been changed from 3.5 kg/cm2 to 2.5 kg/cm2 and then from 2.5 kg/cm2 to 3.0 kg/cm2. The number of nodes in the hidden layer of the neural controller is 3, the learning factor is 𝜂 = 0.01, the kernel size used to estimate the entropy is experimentally set at 𝜎 = 0.5, and the neural controller is trained with a segment of 𝐿 = 50 samples at each instant. The procedures of designing the controller are implemented according to the algorithm summarized in Section 3. The proposed controller uses an instant performance index, shown as (7), whereas the conventional PID controller tuning method, ITAE, uses a cumulative performance index (the integral accumulates a series of data). To avoid improper initial NN weights in our controller producing a large overshoot, the NN is initially trained using the historical operation data. In this case, the NN is trained before time point 580, during which the controller parameters are held fixed. The performance using the proposed neural network PID controller is shown in Figure 5, where the 95% control limit is also included. Due to the fluctuation of the upstream pressure, the controlled pressure is still oscillatory even though the response to the new setpoint change is completed within the required time. With
each newly arriving measurement, the PDF is regularly updated. The control limit is recalculated based on (22) when the PDF is significantly changed. Thus, the control limits shown in Figure 5 are different at different time periods. In order to detect nonrandom changes in operating conditions, the rules superimposed on the chart are used to decide whether the pressure is “in statistical control” or not. The following decision-making rules are adopted [26] when the control parameters are updated (a sketch of these checks follows the list):
• eight consecutive points increasing in value;
• eight consecutive points decreasing in value;
• three consecutive points above the control limit;
• three consecutive points below the control limit.
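These run rules can be checked mechanically. The sketch below is one possible encoding, assuming that “above/below the control limit” refers to the upper and lower limits of (22) and that at least eight recent error samples are available.

```python
import numpy as np

def monotone_run(values, length, increasing=True):
    """True if the last `length` values are strictly increasing (or decreasing)."""
    if len(values) < length:
        return False
    tail = np.diff(values[-length:])
    return bool(np.all(tail > 0)) if increasing else bool(np.all(tail < 0))

def out_of_statistical_control(errors, lower, upper):
    """Apply the four decision rules to the most recent error samples."""
    e = list(errors)
    return (monotone_run(e, 8, increasing=True)
            or monotone_run(e, 8, increasing=False)
            or all(x > upper for x in e[-3:])          # three consecutive points above U
            or all(x < lower for x in e[-3:]))         # three consecutive points below L

recent_errors = [0.1, 0.3, 0.5, 0.9, 1.2, 1.4, 1.5, 1.7]
print(out_of_statistical_control(recent_errors, lower=-1.0, upper=1.0))
```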
Figure 5. Network PID controller in setpoint change results in the air-pressure tank process: pressure (dashed line) and setpoint (solid line). The dotted line represents the 95% control limits.
The control parameters shown in Figure 6 are calculated with each newly arriving measurement, but the PID controller is not updated until the pressure has significantly changed. Using the proposed rules to automatically adjust the control parameters, the controller will not react aggressively to random changes unless the pressure is no longer “in statistical control.”
Figure 6. Network PID controller parameters in the air-pressure tank process: (a) 𝐾𝑃; (b) 𝐾𝐼; (c) 𝐾𝐷.
If the controlled system is tested with the same fixed controller parameters, the performance is as shown in Figure 7. Note that the controller parameters are computed based on the process reaction curve [27]. First, based on the minimum ITAE tuning formula, the controller parameters are computed when the system is operated at a pressure around 3.5 kg/cm2. Figure 7 indicates that, after the setpoint changes, the fixed tuning parameters result in a larger overshoot than the proposed neural network PID controller shown in Figure 5. The large oscillations in the process output are caused by the oscillatory control inputs. In contrast, the process output under the neural network PID controller is less oscillatory and reaches the desired setpoint much more quickly. To clearly illustrate the improvement in performance, Figure 8 shows the comparison of the error distributions between the PID and neural network PID controllers at time point 1,500, where the setpoint has been changed for a while. The error distribution of the neural network PID controller is sharper and narrower after the controlled process reaches the desired setpoint. To clearly illustrate the improvement of the PDF based on the proposed method, the PDF at time points
1,250, 1,350, and 1,500 is replotted in Figure 9. It is shown that the PDF distribution obtained from the last time points is narrower and sharper than that from the early time points. As would be expected, the adaptive tuning parameters are needed for the nonlinear air-pressure tank process.
Figure 7. The PID controller with the fixed controller parameters in setpoint change results in the air-pressure tank process: pressure (dashed line) and setpoint (solid line). The dotted line represents the 95% control limits.
Figure 8. The estimated error distributions of the PID controller (dashed line) and the network PID controller (solid line) in the air-pressure tank process.
Figure 9. Plot of PDF for the time point at 1,250, 1,350, and 1,500 in the airpressure tank process.
CONCLUSIONS
In recent years, there has been a growing interest in the study of the effects of NCSs, mainly because of the broad use of NCSs. NCSs can provide Web clients a platform not only for remote monitoring of the current behavior of the operating plants but also for remote control of the plant. These advantages make NCSs applicable to many fields, including spacecraft, automotive systems, remote robots, and manufacturing processes, but not much work has been done in the process control field so far. In a networked environment, the communication delays (also called network-induced time delays) and packet dropouts (probabilistic information missing, missing measurements) are two main problems that degrade the system performance. In most relevant literature, the network-induced time delays have been commonly assumed to be deterministic, which is fairly unrealistic as, by nature, delays resulting from network transmissions are typically time varying and random. In this paper, a neural network PID controller is developed for controlled processes with communication delays of various probabilistic forms. In comparison with past work, the merits of the proposed method are as follows.
• Instead of Monte Carlo methods that generate a set of tracking errors at the instant time point, the proposed method uses the sliding window technique to obtain the estimates of the PDF, the entropy, and the information potential of the tracking error. The design of the controller is then straightforward, based on minimizing the entropy of the tracking error or maximizing the information potential of the tracking error. This is a more practical solution.
• Due to the stochastic behavior, the SPC monitoring chart is introduced to solve practical problems. It improves the control performance of the nonlinear process in the network. The implementation is much simpler and less computationally demanding. The control strategy is a balance between handling the network delays and the conventional nonlinear control designs.
• To verify the proposed practice-oriented algorithm, one practical case is conducted on an air-pressure tank. The results show that the proposed method is effective in following the setpoint and reducing the variance of the output caused by unknown stochastic network delays. This strategy is particularly suitable for industrial processes that are characterized by high nonlinearity and analytical difficulty in network process control.
Because of communication delays, packet dropouts, noise, and disturbances, the tracking error may not obey the Gaussian distribution, so the Rényi quadratic entropy of the tracking error is used to design the neural network PID controller for networked operation processes. Hence, the method proposed in this paper can be directly extended to deal with the controller design of NCSs with nonlinear plants, noises, and random delays. All the results presented in this paper yield encouraging outcomes in the single variable process, but sensitivity analysis on different initial PID parameters is not considered. The extension of the neural network PID to the sensitivity problem using theoretic analysis and MIMO systems is needed in the future.
ACKNOWLEDGMENTS
This work was supported by the National Basic Research Program of China under Grant (973 Program 2011 CB710706) and the Doctoral Fund of the Ministry of Education of China (20110036110005). These are gratefully acknowledged.
REFERENCES
1. W. Zhang, Y. Tang, J. Fang, and X. Wu, “Stability of delayed neural networks with time-varying impulses,” Neural Networks, vol. 36, pp. 59–63, 2012.
2. Y. Tang, H. Gao, W. Zou, and J. Kurths, “Identifying controlling nodes in neuronal networks in different scales,” PLoS ONE, vol. 7, no. 7, Article ID e41375, 2012.
3. E. Tian, D. Yue, and C. Peng, “Quantized output feedback control for networked control systems,” Information Sciences, vol. 178, no. 12, pp. 2734–2749, 2008.
4. E. Tian and D. Yue, “Reliable H∞ filter design for T-S fuzzy model-based networked control systems with random sensor failure,” International Journal of Robust and Nonlinear Control, vol. 23, pp. 15–32, 2013.
5. D. Srinivasagupta, H. Schättler, and B. Joseph, “Time-stamped model predictive control: an algorithm for control of processes with random delays,” Computers and Chemical Engineering, vol. 28, no. 8, pp. 1337–1346, 2004.
6. Y. Zhang and S. Li, “Networked model predictive control based on neighbourhood optimization for serially connected large-scale processes,” Journal of Process Control, vol. 17, no. 1, pp. 37–50, 2007.
7. J. Liu, D. M. de la Peña, B. J. Ohran, P. D. Christofides, and J. F. Davis, “A two-tier architecture for networked process control,” Chemical Engineering Science, vol. 63, no. 22, pp. 5394–5409, 2008.
8. N. H. El-Farra, A. Gani, and P. D. Christofides, “Fault-tolerant control of process systems using communication networks,” AIChE Journal, vol. 51, no. 6, pp. 1665–1682, 2005.
9. D. M. de la Peña and P. D. Christofides, “Output feedback control of nonlinear systems subject to sensor data losses,” Systems and Control Letters, vol. 57, no. 8, pp. 631–642, 2008.
10. R. Luck and A. Ray, “Experimental verification of a delay compensation algorithm for integrated communication and control systems,” International Journal of Control, vol. 59, no. 6, pp. 1357–1372, 1994.
11. L. Guo and H. Wang, “Generalized discrete-time PI control of output PDFs using square root B-spline expansion,” Automatica, vol. 41, no. 1, pp. 159–162, 2005.
12. H. T. Su and T. J. McAvoy, “Integration of multilayer perceptron networks and linear dynamic models: a Hammerstein modeling approach,” Industrial and Engineering Chemistry Research, vol. 32, no. 9, pp. 1927–1936, 1993.
13. D. C. Psichogios and L. H. Ungar, “Direct and indirect model based control using artificial neural networks,” Industrial & Engineering Chemistry Research, vol. 30, no. 12, pp. 2564–2573, 1991.
14. E. P. Nahas, M. A. Henson, and D. E. Seborg, “Nonlinear internal model control strategy for neural network models,” Computers and Chemical Engineering, vol. 16, no. 12, pp. 1039–1057, 1992.
15. M. Nikolaou and V. Hanagandi, “Control of nonlinear dynamical systems modeled by recurrent neural networks,” American Institute of Chemical Engineers Journal, vol. 39, no. 1, pp. 1890–1894, 1993.
16. I. Pan, S. Das, and A. Gupta, “Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay,” ISA Transactions, vol. 50, no. 1, pp. 28–36, 2011.
17. J. H. Zhang, “Neural controllers for networked control systems based on minimum tracking error entropy,” Journal of Systems and Control Engineering, vol. 222, no. 7, pp. 671–679, 2008.
18. R. Ghostine, J. M. Thiriet, and J. F. Aubry, “Variable delays and message losses: influence on the reliability of a control loop,” Reliability Engineering and System Safety, vol. 96, no. 1, pp. 160–171, 2011.
19. J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent results in networked control systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 138–162, 2007.
20. P. Wen, J. Cao, and Y. Li, “Design of high-performance networked real-time control systems,” IET Control Theory and Applications, vol. 1, no. 5, pp. 1329–1335, 2007.
21. Y. Tipsuwan and M. Y. Chow, “Control methodologies in networked control systems,” Control Engineering Practice, vol. 11, no. 10, pp. 1099–1111, 2003.
22. D. Erdogmus and J. C. Principe, “An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems,” IEEE Transactions on Signal Processing, vol. 50, no. 7, pp. 1780–1786, 2002.
23. W. E. Deming, Quality, Productivity and Competitive Position, MIT Center for Advanced Engineering Study, Cambridge, Mass, USA, 1982.
24. D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, NY, USA, 4th edition, 2007.
25. ASTM, Manual on Presentation of Data and Control Chart Analysis, Publication MNL-7, American Society for Testing & Materials, Philadelphia, Pa, USA, 6th edition, 1990.
26. P. C. Badavas, Real-Time Statistical Process Control, Prentice-Hall, Englewood Cliffs, NJ, USA, 1993.
27. C. A. Smith and A. B. Corripio, Principles and Practices of Automatic Process Control, John Wiley & Sons, New York, NY, USA, 3rd edition, 2006.
Chapter 11
Control Loop Sensor Calibration Using Neural Networks for Robotic Control
Kathleen A. Kramer1 and Stephen C. Stubberud2
1 Department of Engineering, University of San Diego, 5998 Alcalá Park, San Diego, CA 92110, USA
2 Advanced Programs, Oakridge Technology, Del Mar, CA 92014, USA
Citation: Kathleen A. Kramer, Stephen C. Stubberud, “Control Loop Sensor Calibration Using Neural Networks for Robotic Control”, Journal of Robotics, vol. 2011, Article ID 845685, 8 pages, 2011. https://doi.org/10.1155/2011/845685. Copyright: © 2011 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT
Whether a sensor model’s inaccuracies are the result of poor initial modeling or of sensor damage or drift, the effects can be just as detrimental. Sensor modeling errors result in poor state estimation. This, in turn, can cause a control system relying upon the sensor’s measurements to become unstable, such as in robotics where the control system is applied to allow autonomous navigation. A technique referred to as a neural extended Kalman filter (NEKF) is developed to provide both state estimation in a control loop and to learn the difference between the true sensor dynamics and the sensor model. The technique requires multiple sensors on the control system so
that the properly operating and modeled sensors can be used as truth. The NEKF trains a neural network on-line using the same residuals as the state estimation. The resulting sensor model can then be reincorporated fully into the system to provide the added estimation capability and redundancy.
INTRODUCTION
In target tracking applications, sensor systems are placed at various locations about the region of interest and provide measurements that are combined over time to provide an improved picture of the region. One of the significant problems with target tracking systems is that the sensors might not be properly calibrated, aligned, or modeled. This can wreak havoc with the performance of the involved tracking systems. When calibration and alignment are the main factors, the problem is referred to as sensor registration [1, 2]. Since the real-time activities of a tracking problem cannot be postponed, the correction of the registration or the mismodeling must be performed online. This problem can extend to other applications, including that of feedback-control systems.
While some control applications use a single sensor to provide measurements that are used in the feedback loop, many control systems use several sensors to provide the necessary measurements. Multiple sensors are used for a variety of reasons. Oftentimes the sensors add redundancy, such as with navigation systems where multiple inertial navigation systems (INSs) are employed, or for safety considerations to provide back-up in case of failure [3]. Sometimes multiple sensors are employed because more accurate sensors cannot provide measurements at the necessary update rates. In such cases, less accurate sensors might be used to provide reports at the sampling times between the measurement reports of the more accurate system. Finally, additional sensors are added because they provide measurements that contain directly observable state information. In mobile robotic systems, for example, sensors include position sensors to determine absolute position, such as a global positioning system (GPS), or relative position, such as radars, sonars, and imagery equipment, as well as speedometers and possibly accelerometers that provide the velocity states more directly than the indirectly observed values from position.
While additional sensors are often used to provide improved overall accuracy, errors can arise in the accuracy of the measurements provided to the control law by individual sensors. This occurs with tracking sensors and is well known with gyroscopes in an INS. While a sensor that drifts from its
calibration point in a nonrandom and identifiable manner can be recalibrated, there is no remedy other than redundancy or replacing a sensor that fails or breaks completely since it is unable to provide a meaningful measurement. For a sensor that can be recalibrated, online calibration is desired because taking the inaccurate sensor offline may not be possible in the short term or could have a deleterious effect on the performance of the controller. When a closed-loop control system relies upon a sensor with an error, the result is that the control signal varies from the value needed to achieve the desired response. These effects are evident, even in a linear system [4]. Sensor calibration is thus an important issue for the control design [5]. For nonlinear systems, these issues become even more pronounced and can lead to a more significant effect on stability, as the feedback may not fall within the Lyapunov stability criteria [6]. In this work, an online calibration technique is proposed for the case where the control law utilizes a state estimator [7]. The approach uses a technique referred to as a neural extended Kalman filter (NEKF) [8–10]. The NEKF has the benefit of being able to train a function approximation online and provide the state estimation simultaneously. The NEKF provides a unified approach using the same residual for both components. Techniques such as in [11], in contrast, can provide the desired sensor correction for sensors in a control loop but are not integrated to the state estimator and corrections are performed outside of the control loop. The technique applied to this sensor correction problem is based upon an approach that was developed for a multisensor tracking system where the sensor model was in error or a sensor registration was detected [12]. In that development, one sensor was considered to be local while the other was considered to be off-board. All corrections were made to the off-board reports relative to the local reference frame. Unlike the tracking problem where the estimator is open loop, control applications require the consideration of closed-loop issues including stability. The NEKF is used to recalibrate the sensor model based on online operational measurements while in the control loop. It also provides the state estimation for the control law. Thus, the proposed technique estimates the states and provides the training paradigm in a single algorithmic implementation [13, 14], unlike other neural network sensor modeling techniques [15–17]. The software correction to the actual sensor measurements can be applied to the case where the sensor is still operational but poorly calibrated. The NEKF algorithm requires a truth measurement from which to calibrate the sensor. Using additional sensors on the dynamic system, a
poorly calibrated sensor can be modeled such that its reporting errors are removed prior to their incorporation to the feedback loop. In Section 2, the NEKF algorithm and its implementation in the control loop is developed. Section 3 provides the first example system and the performance results of the online calibration using the NEKF for a twosensor moving platform that has a range-bearing sensor and a miscalibrated velocity sensor. A navigation problem where an intermittent use of a position sensor such as GPS is available is presented in Section 4. These examples show the benefits of the NEKF calibration.
RECALIBRATION APPROACH In this effort, a recalibration technique that can be used online is developed. The approach relies upon other sensors in the multisensor system operating properly which are used to provide a level of truth from which to calibrate. The sensor to be recalibrated may have some inaccuracy but cannot be irreparably damaged. The dynamics of a nonlinear system can be modeled as a set of recursive difference equations:
(1)
where 𝐱𝑘 is the state vector representing the system dynamics, 𝐟(⋅) is the state-coupling model, 𝐫𝑘 is the reference input vector, 𝐳𝑘 is the measurement vector, also considered the report from the sensor system, 𝐮𝑘 is an external input to the sensor system, often simply the reference signal, and 𝐡(⋅) is the output-coupling function [18]. The vectors 𝝂 and 𝜼 represent noise on the system states and the measurement states, respectively.
If the reference input 𝐫 is assumed to be both the reference signal to the system and the external input to the sensor, then 𝐫 is considered to be the same as 𝐮. Then, the state estimation model used in the control law would be rewritten as
(2)
The accuracy of the models in the estimator determines the accuracy of the state estimates. For this effort, it is assumed that the state-coupling function is very accurate while at least one of the sensor models has some inaccuracy. The error in this model is defined as (3) To reduce the measurement error, the function 𝜀 needs to be identified. A neural network that satisfies the function approximation requirements of the Stone-Weierstrass theorem [9] is proposed to approximate the error function 𝜀. Such a neural network can be defined as
(4)
where 𝐰 denotes the weights of the neural network, which are decomposed into the set of input weights 𝝎 and the set of output weights β. The hidden-layer function is a sigmoid function: (5) This creates a new and more accurate measurement model:
(6)
where ‖𝑒‖ ≪ ‖𝜀‖.
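As a small illustration of (3)–(6), the sketch below wraps a mismodeled sensor function 𝐡 with a single-hidden-layer sigmoid network that approximates the residual error 𝜀(𝐱, 𝐮), so that the corrected model is 𝐡(𝐱, 𝐮) + NN(𝐱, 𝐮; 𝐰). The network size, weight layout, and example sensor are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nn_error_term(x, u, w_in, w_out):
    """Single-hidden-layer approximation of the sensor-model error epsilon(x, u)."""
    inp = np.concatenate([x, np.atleast_1d(u), [1.0]])   # states, input, bias
    hidden = sigmoid(w_in @ inp)
    return w_out @ hidden

def corrected_measurement_model(h, x, u, w_in, w_out):
    """Corrected output-coupling function: h(x, u) + NN(x, u; w), as in (6)."""
    return h(x, u) + nn_error_term(x, u, w_in, w_out)

# Illustrative mismodeled speed sensor: nominal model reports the first state directly.
nominal_h = lambda x, u: np.array([x[0]])
rng = np.random.default_rng(2)
n_hidden, n_out = 6, 1                                   # x has 2 states, u is scalar (+ bias)
w_in = rng.uniform(-1, 1, size=(n_hidden, 4))
w_out = rng.uniform(-1, 1, size=(n_out, n_hidden))
z_hat = corrected_measurement_model(nominal_h, x=np.array([1.0, 0.2]), u=0.0,
                                    w_in=w_in, w_out=w_out)
print(z_hat)
```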
The extended Kalman filter (EKF) is a standard for nonlinear estimation in a control loop [4, 19]. An approach developed in [7] referred to as a neural extended Kalman filter provides both the state estimation of the system and the training of the weights [20] of a neural network using the same measurement estimates. The NEKF is a coupled state-estimation and neural-network training algorithm and has similarities to parameter estimation approaches [21]. In such implementations, the state vector of the EKF contains both the system states and the parameters of the model being estimated. Parameter estimation, though, is based on a known model structure. The neural network of (4), in contrast, is a general function with its weights as the parameters. Therefore, the augmented state of this NEKF is the state estimate and the input and output weights of the neural network:
(7) Incorporating this new state and the neural network affects all of the Kalman filter equations. In the Kalman gain equation and the state error covariance update equation, (8) and (10), the Jacobian of the output-coupling function is properly augmented. In (9), the neural network is incorporated into the state update. Finally, the prediction equations, (11) and (12), must be properly augmented for the augmented state. Thus, the NEKF equations for the sensor modeling become (8)
(9)
(10)
(11)
(12)
The augmented Jacobian of the sensor function model is therefore defined as
(13)
(14)
The necessary coupling between the weights and the dynamic states occurs as a result of the output-coupling Jacobian. This coupling permits the weights to be completely observable to the measurements and to train off of the same residual as the system states. This also implies that the NEKF trains the neural network online without the need for additional off-line training sets. The NEKF defined in (8)–(12) requires a truth measurement for the neural network to learn. For a target tracking application where a surveyed target can be used to provide ground truth, this implementation is appropriate [9]. Control applications are different in that such “truth” is unavailable. Instead, the state of the system dynamics must be generated using sensors in the control system that are more accurately modeled. This requires a modification of the NEKF design to utilize the existing state estimates and the measurements from the other sensors to correct the output-coupling function of the mismodeled sensor using the neural network. For this implementation, the detection of a sensor failure is assumed to have occurred. Such detection techniques abound in tracking theory and similar techniques exist in dynamic system implementation [22]. The detection of the faulty sensor initiates two changes in the NEKF. First, the process noise matrix is modified. The ratio of the process noise 𝐐 to the measurement noise 𝐑 is reduced to significantly favor the dynamic state estimate over the residual. This reduces the effect of the measurements from the poor sensor on the system estimate. The process noise of the weights is increased to favor the residual over the weight estimates so that the weights will train to learn the error function. A similar approach of reducing the ratio of 𝐐 to 𝐑 could be used for the reduction of state-coupling error [7, 8]. The second modification is that, for the first 𝑛 steps of the correction of the NEKF, the dynamic state estimates are decoupled from the control loop. The resulting modified NEKF equations are given as (15)
(16) (17)
(18) (19) where the superscript acc indicates the accurate covariance from the estimator using the other sensors. If more than one poor measurement is produced between accurate measurements, only the last measurement was used for the state estimator [7]. Two approaches for decoupling have been considered. In the first, the measurement noise for the faulty sensor is kept artificially high throughout the experiment. This reduces the effect of the poor sensor even as the neural network trains. In the second approach, as the weights of the neural network settle, the measurement noise is reduced from its artificially high values to values closer to the calibrated sensor noise covariance. This allows the improved model of the sensor to have greater effect on the control loop.
CONTROL EXAMPLE I
A simulated control example is used to demonstrate the capability of the NEKF technique to model a sensor change while the system is in operation. A small motorized vehicle is operating in a two-dimensional space, as seen in Figure 1.
Figure 1. Example scenario of an autonomous vehicle using two sensors to achieve a specific end location.
The goal is for the vehicle to move from one point in the operational grid to a second point in the operational grid. The vehicle maintains its position via a sensor system that provides a range-bearing measurement to a surveyed location in the operational grid. The platform also has a speedometer and compass system that is used to provide the vehicle’s heading relative to a local magnetic pole. This is similar to an INS operating in a GPS-denied environment. The state-space model of the vehicle dynamics is defined as
(20)
(21) where 𝑎 indicates acceleration, 𝜃 denotes the heading angle, and 𝜓 denotes the change in heading angle. For a sample rate of 𝑑𝑡, the discretized representation becomes
(22) The speed sensor and heading sensor have been combined into a single sensor package modeled in (23). The package has an update rate of 0.2 seconds. The accuracy of the compass heading is 2.0 degrees, while the speedometer accuracy is 0.001 m/s. The position sensor, providing a range (𝑟) and bearing (𝛼) relative to the surveyed location, is modeled in (24), also with an update rate of 0.2 seconds. The position sensor reports are interleaved with the velocity measurements, as shown in the time line of Figure 2. The accuracy of the position sensor is given as 1.0 degrees for bearing and 0.005 m for range:
(23)
(24)
For this effort, the control input has an input rate of 0.1 seconds to synchronize with the sensor reports. The initial location of the vehicle is defined at (−5 m, −6 m) with a heading of 0 degrees, while the desired final point is (0 m, 0 m) with a heading of 45.0 degrees. The range-bearing beacon is placed at (+3 m, −8 m), while the local north pole is defined at (−2 m, +3 m). The position sensor measurement uncertainty matrix had standard deviations of 0.005 m for the range and 1.0 radian for the bearing. For the velocity sensor, the measurement error covariance had standard deviations of 0.001 m/s for the speed and 2 radians for the heading. The broken velocity sensor reported speeds increased by a factor of 1.9 and added a 0.5-radian bias to the heading, in addition to the additive Gaussian noise.
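A minimal sketch of the two measurement models is given below. The exact forms of (23) and (24) are not reproduced in this layout, so the functions assume a conventional range-bearing observation of the surveyed beacon at (+3 m, −8 m), a state ordering of [x, x-velocity, y, y-velocity], and a heading computed from the velocity components; the 1.9 speed factor and 0.5-radian heading bias of the broken sensor follow the description above.

```python
import numpy as np

BEACON = np.array([3.0, -8.0])        # surveyed range-bearing location (m)

def range_bearing_measurement(state):
    """Assumed form of the position sensor model (24): range and bearing from the
    vehicle position to the surveyed beacon, with state = [x, vx, y, vy]."""
    dx, dy = BEACON[0] - state[0], BEACON[1] - state[2]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def broken_velocity_measurement(state, rng=None, speed_factor=1.9, heading_bias=0.5,
                                sigma_speed=0.001, sigma_heading=np.deg2rad(2.0)):
    """Assumed form of the broken speed/heading package (23): speed scaled by 1.9,
    a 0.5-radian heading bias, and additive Gaussian noise, as described above."""
    rng = np.random.default_rng() if rng is None else rng
    speed = np.hypot(state[1], state[3])
    heading = np.arctan2(state[3], state[1])
    return np.array([speed_factor * speed + rng.normal(0.0, sigma_speed),
                     heading + heading_bias + rng.normal(0.0, sigma_heading)])
```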
Figure 2. Sensor reports from different sensors are interleaved to provide uniform reporting.
For the NEKF, the process noise for the weights of the neural networks, 𝐐w, and its initial error covariance, 𝐏wts, were increased to allow for changes based on the residuals using the broken sensor. The process noise for the input weights was set to 1.0 and for the output weights, it was set to 2.0. The initial state error covariance, 𝐏states, was set to 100.0. The process noise, Q, was set to the integrated white noise model [15]:
(25)
with a factor 𝑞 of 0.0017. For the NEKF, the initial value for 𝐏wts was set to 1000. For comparison purposes, an EKF using the same values for the dynamic state components was generated as well. Four separate cases, two with the EKF and two with the NEKF, were implemented. The four cases are (1) an EKF with an inflation factor of 100 throughout the scenario, (2) an EKF with an inflation factor of 10,000,000 throughout the scenario, (3) an NEKF with an inflation factor of 100 throughout the scenario, and (4) an NEKF with an initial inflation factor of 1000 held for 80 iterations and then decayed by 15% for later iterations down to a minimum value of 1. Inflating the uncertainty matrix, 𝐑, of the velocity sensor by a given inflation factor allows the state update component of the EKF and of the NEKF to be less affected by these poor measurements. For the NEKF, a 4-node hidden layer was used. The results are shown in Figures 3(a)–3(d).
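The hold-then-decay profile of case (4) can be written down directly; the helper below (a hypothetical name) reproduces the schedule described above.

```python
import numpy as np

def nekf_inflation_schedule(n_steps, initial=1000.0, hold=80, decay=0.15, floor=1.0):
    """Case (4) schedule: hold the initial inflation factor for `hold` iterations,
    then decay it by 15% per iteration down to a minimum value of 1."""
    factors = np.empty(n_steps)
    value = initial
    for k in range(n_steps):
        if k >= hold:
            value = max(floor, value * (1.0 - decay))
        factors[k] = value
    return factors

# e.g. nekf_inflation_schedule(200)[79:83] -> [1000., 850., 722.5, 614.125]
```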
Figure 3. (a) Position coordinates using the EKF and a low inflation rate. (b) Position coordinates using the EKF and a high inflation rate. (c) Position coordinates using the NEKF and a low inflation rate. (d) Position coordinates using the NEKF and a higher inflation rate that is allowed to decay after 80 iterations.
Figure 3(a) indicates that the EKF is unable to remain stable with the poor measurements having a reduced, but not completely insignificant, effect on the state estimate. By overcompensating with a significant increase in the measurement covariance, the EKF basically eliminates all of the poor sensor reports. This is seen in Figure 3(b). However, the vehicle never settles in at the desired location. The NEKF results are shown in Figures 3(c) and 3(d), where the filter learns the sensor errors online to provide clearly improved control performance. Even with the same measurement covariance as the EKF in the first case, as in Figure 3(c), the NEKF implementation remains stable. In both Figures 3(c) and 3(d), the results are quite similar, with more changes after the measurement noise decays. With multiple sensor systems, the NEKF is able to provide a fault-correcting mechanism for sensors that are still providing information, although that information needs correction. The research of this effort has also shown that the slower the decay rate is on the inflation factor, the lower the initial inflation factor can be.
CONTROL EXAMPLE II
The small motorized vehicle operating in a two-dimensional space shown in Figure 1 is also the basis for the second control example. In this case, the
vehicle maintains its position via a two-sensor system. One sensor provides a range-bearing measurement to a surveyed location in the operational grid and another, slower reporting, position sensor provides a linear position report. This would be similar to a GPS system updating an INS. The state-space model of the vehicle-dynamics is defined as in (20)– (22). The range-bearing error is described as in (23). The position sensor has an update rate of 0.6 seconds and provides a linear report as seen in (26). The accuracy of the position sensor is assumed to be ±0.01 m in both directions. The other position sensor, providing a range (𝑟), and bearing (𝛼) relative to the surveyed location, is modeled in (24) and has an update rate of 0.2 seconds. The accuracy of the faster, range-bearing position sensor is given as 1.0 degree of bearing accuracy and 0.005 m of range accuracy:
(26) The sensor reports of positions are interleaved as shown in the time line of Figure 4. For this effort, the control input has an input rate of 0.2 seconds to synchronize with the range-bearing sensor report.
Figure 4. Sensor reports from different sensors are interleaved to provide uniform reporting.
The initial location of the vehicle is defined at (−5 m, −6 m) with a heading of 0 degrees, while the desired final point is (0 m, 0 m) with a heading of 45.0 degrees. The range-bearing beacon is placed at (+3 m, −8 m). The position sensor providing linear reports provides measurements with a bias of (0.5 m, −0.25 m) added to the coordinates and is considered to be miscalibrated. For the NEKF, the process noise for the weights of the neural networks, 𝐐w, and its initial error covariance, 𝐏wts, were increased to allow for changes based on the residuals using the broken sensor. The process noise for the input weights was set to 1.0 and for the output weights, it was set to 2.0. The initial state error covariance, 𝐏states, was set to 10.0. The process noise, 𝐐, was set to the integrated white noise model [14] of (25) with a factor 𝑞 of
0.0017. For the NEKF, the initial value for 𝐏wts was set to 0.1. The inflation factor for the broken sensor in both the NEKF and the EKF tests was set to 100. For comparison purposes, an EKF was generated using the same values for the dynamic state components. In this example case, when the system had two fully functional sensors, the vehicle took 3.6 seconds to reach its desired endpoint with a 0.001 m total distance error. A comparison between the EKF with the bad sensor and the NEKF with the bad sensor was performed. Two different comparisons were run. In the first comparison, an inflation factor of 10 for the measurement error covariance matrix of the broken sensor was used. Inflating the uncertainty matrix, 𝐑, of the faulty sensor by a given inflation factor allows the state update component of the EKF and of the NEKF to be less affected by these poor measurements. For the NEKF, a 4-node hidden layer was used. In this case, both the NEKF and the EKF went unstable. In the second comparison, an inflation factor of 1000 was used. The results of the four states, x-position, x-velocity, y-position, and y-velocity, are shown for the EKF in Figure 5. While the individual elements are hard to discern, the important fact is that none of the values converge. This causes the vehicle to leave the region of interest. Figure 6 depicts the NEKF results for the four states. As is clearly seen, both techniques achieve a result, but the NEKF trajectory is much smoother and the states are less erratic and require significantly fewer iterations to converge to the desired result (84 iterations versus 498 iterations).
Figure 5. EKF state responses.
Figure 6. NEKF state responses.
If the position sensor is miscalibrated and not corrected but only has a reduced effect on the navigation estimator, as would be the case when using the EKF alone and treating the sensor as inaccurate, the vehicle position becomes unstable, causing it to leave the grid area within 10 seconds of beginning the test. When the NEKF was used, the vehicle was able to reach its desired endpoint in 38.4 seconds with a 0.45 m total distance error and a 0.3 m/s velocity error. One of the interesting comparisons is that of the residual errors for heading in this example compared to a 3-sigma error based upon the system and sensor covariances. Figure 7 shows the results for the EKF, and Figure 8 shows the results for the NEKF. In both cases, the residuals are within the bounds, but the NEKF has a more accurate bound in the steady-state portion, allowing the Kalman gain to permit a greater residual effect on the state update for the corrected sensor. Thus, without the NEKF, the system would go unstable with deleterious effects almost immediately. With the NEKF, the vehicle remains in the operational area and slowly converges to the desired point.
Figure 7. Comparison of EKF residual and the error covariance weighting.
Figure 8. Comparison of NEKF residual and the error covariance weighting.
CONCLUSION
A new neural extended Kalman filter algorithm has been developed that can improve sensor modeling for a poorly modeled sensor in the control loop. The technique expands on an approach originally developed for target tracking that relied upon the availability of truth. In the algorithm, properly modeled operating sensors are used to provide the truth used to recalibrate the sensor that is poorly modeled. This NEKF approach decouples its state estimates from the accurate estimates using the measurement error covariance. The NEKF was shown to have improved performance over that of the EKF in its application to estimate the states of the control loop of an autonomous vehicle for two example cases. Performance of the algorithm will continue to be evaluated in other applications and with different training parameters.
REFERENCES
1.
R. Lobbia, E. Frangione, and M. Owen, “Noncooperative target sensor registration in a multiple-hypothesis tracking architecture,” in Signal and Data Processing of Small Targets, vol. 3163 of Proceedings of SPIE, San Diego, Calif, USA, July 1997. 2. S. Blackman, Multiple-Target Tracking with Radar Applications, Artech House, Norwood, Mass, USA, 1986. 3. D. H. Titterton and J. L. Weston, Strapdown Inertial Navigation Technology, Institute of Electrical Engineers, London, UK, 1997. 4. W. S. Levine, Ed., The Control Handbook, CRC Press, Boca Raton, Fla, USA, 1996. 5. A. Martinelli, “State estimation on the concept of continuous symmetry and observability analysis: the case of calibration,” IEEE Transactions on Robotics, vol. 27, no. 2, pp. 239–255, 2011. 6. K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems, Prentice Hall, Englewood Cliffs, NJ, USA, 1989. 7. S. C. Stubberud and K. A. Kramer, “Control loop sensor calibration using neural networks,” in Proceedings of the IEEE International Instrumentation and Measurement Technology Conference (I2MTC ‘08), pp. 472–477, Victoria, Canada, May 2008. 8. S. C. Stubberud, R. N. Lobbia, and M. Owen, “Adaptive extended Kalman filter using artificial neural networks,” International Journal of Smart Engineering System Design, vol. 1, no. 3, pp. 207–221, 1998. 9. A. R. Stubberud, “A validation of the neural extended kalman filter,” in Proceedings of the International Conference on Systems Engineering, pp. 3–8, Coventry, UK, September 2006. 10. S. C. Stubberud, R. N. Lobbia, and M. Owen, “Adaptive extended Kalman filter using artificial neural networks,” in Proceedings of the 34th IEEE Conference on Decision and Control, pp. 1852–1856, New Orleans, La, USA, December 1995. 11. D. Klimánek and B. Šulc, “Evolutionary detection of sensor discredibility in control loops,” in Proceedings of the 31st Annual Conference of the IEEE Industrial Electronics Society (IECON ‘05), vol. 2005, pp. 136–141, Raleigh, NC, USA, November 2005.
12. S. C. Stubberud, K. A. Kramer, and J. A. Geremia, “On-line sensor modeling using a neural Kalman filter,” in Proceedings of the 23rd IEEE Instrumentation and Measurement Technology Conference (IMTC ‘06), pp. 969–974, Sorrento, Italy, April 2006. 13. K. A. Kramer, S. C. Stubberud, and J. A. Geremia, “Target registration correction using the neural extended kalman filter,” IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 7, Article ID 5280378, pp. 1964–1971, 2010. 14. S. C. Stubberud, K. Kramer, and A. R. Stubberud, “Parameter estimation using a novel nonlinear constrained sequential state estimator,” in Proceedings of the UKACC International Conference on Control, pp. 1031–1036, Coventry, UK, September 2010. 15. M. J. Gao, J. W. Tian, and K. Li, “The study of soft sensor modeling method based on wavelet neural network for sewage treatment,” in Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR ‘07), vol. 2, pp. 721–726, November 2007. 16. Enderle, G. K. Kraetzschmar, Sablatnoeg, and Palm, “One sensor learning from another,” in Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN ‘99), vol. 2, no. 470, pp. 755–760, September 1999. 17. J. W. M. van Dam, B. J. A. Krose, and F. C. A. Groen, “Adaptive sensor models,” in Proceedings of the IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 705–712, December 1996. 18. M. S. Santina, A. R. Stubberud, and G. H. Hostetter, Digital Control System Design, Saunders College Publishing, Fort Worth, Tex, USA, 2nd edition, 1994. 19. A. K. Mahalanabis, “Introduction to random signal analysis and kalman filtering. Robert G. Brown,” Automatica, vol. 22, no. 3, pp. 387–388, 1986. 20. S. Singhal and L. Wu, “Training multilayer perceptrons with the extended kalman algorithm,” in Advances in Neural Processing Systems I, D. S. Touretsky, Ed., pp. 133–140, Morgan Kaufmann, 1989. 21. S. C. Iglehart and C. T. Leondes, “Estimation of a dispersion parameter in discrete kalman filtering,” IEEE Transactions on Automatic Control, vol. 19, no. 3, pp. 262–263, 1974.
22. Brotherton, Johnson, and Chadderdon, “Classification and novelty detection using linear models and a class dependent—elliptical basis function neural network,” in Proceedings of the IEEE World Congress on Computational Intelligence (IJCNN ‘98), vol. 2, pp. 876–879, Anchorage, Alaska, USA, 1998.
Chapter 12
Feedforward Nonlinear Control Using Neural Gas Network
Iván Machón-González and Hilario López-García Departamento de Ingeniería Eléctrica, Electrónica de Computadores y Sistemas, Universidad de Oviedo, Edificio Departamental 2, Zona Oeste, Campus de Viesques s/n, 33204 Gijón/Xixón, Spain
ABSTRACT Nonlinear systems control is a main issue in control theory. Many developed applications suffer from a mathematical foundation not as general as the theory of linear systems. This paper proposes a control strategy of nonlinear systems with unknown dynamics by means of a set of local linear models obtained by a supervised neural gas network. The proposed approach takes advantage of the neural gas feature by which the algorithm yields a very robust clustering procedure. The direct model of the plant constitutes a piece-wise linear approximation of the nonlinear system and each neuron represents a local linear model for which a linear controller is designed. Citation: Iván Machón-González, Hilario López-García, “Feedforward Nonlinear Control Using Neural Gas Network”, Complexity, vol. 2017, Article ID 3125073, 11 pages, 2017. https://doi.org/10.1155/2017/3125073. Copyright: © 2017 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The neural gas model works as an observer and a controller at the same time. A state feedback control is implemented by estimation of the state variables based on the local transfer function provided by the local linear model. The gradient vectors obtained by the supervised neural gas algorithm provide a robust procedure for feedforward nonlinear control, that is, assuming the absence of disturbances.
INTRODUCTION
Although some physical systems can be approximated by a linear model, almost all real plants behave nonlinearly. A broad understanding of the behavior of nonlinear processes is available, but it is sometimes difficult to choose the appropriate control method. Lyapunov theory is a classic method for nonlinear system control: if there is a positive definite continuous function whose derivative is negative under the suitable conditions of the control design, then asymptotic stability of the control is guaranteed. However, this method is often impractical because obtaining the Lyapunov function is difficult. The problem is even worse when dealing with unknown plants that are not defined mathematically. Therefore, it is usually not easy to guarantee the stability of a complex nonlinear system [1]. However, if the local linear system corresponding to an equilibrium point is controllable, then sufficient conditions can be stated for local stability [2]. The Hartman-Grobman theorem states that the behavior of a nonlinear system in the neighborhood of an equilibrium point can be approximated by its linearized model. Systems theory provides many mathematical procedures concerning stability, controllability, and observability of linear systems. The stability and, to a great extent, the dynamic response of a linear system can be described in terms of the eigenvalues of the system matrix in state-space design or the poles of the transfer function. No such method exists for nonlinear systems. For this reason, industrial control processes are still usually designed using linear control theory. After linearization, the typical approach is to design a linear controller, such as a PID with fixed parameters. The classical approach to obtaining local linear models is the RLS (Recursive Least Squares) method. However, this method sometimes yields unfavorable results due to the intrinsic nonlinearities of the process to be controlled. The problem is to establish the different operating points for a nonlinear system. Here, the proposed algorithm can establish each operating point as a cluster centre of the neural gas network.
It is for this reason that artificial intelligence techniques can improve control performance. Research into identification and control of nonlinear systems by means of neural networks (NN) began over two decades ago [3]. One of the major advantages of control by NN is that precise knowledge of the plant, such as a mathematical model, is not needed. Initially, control applications using NN were based on a trial-and-error approach. Research efforts have improved the control algorithms and several journals have published special issues with a strong mathematical foundation [4]. Many applications are based on a combination of feedforward and recurrent NN. Recurrency, also known as dynamic backpropagation, is necessary due to the dependency of the output on the previous values of the same output, which are also functions of the weights [5]. Zhang and Wang [6] proposed a pole assignment control using recurrent NN. The typical design procedure is to carry out the system identification in order to model the plant and, secondly, to obtain the controller. Traditional methods rely heavily on models extracted from physical principles, whereas approaches based on NN theory usually create black-box models as function approximators using data obtained from the plant. Knowledge about the mathematical model of the plant or any other physical principle is not necessary. Neural gas (NG) is an unsupervised prototype-based method [7] in which the prototype vectors are the weights and carry out a partition of the training data space. It relies on cooperative-competitive computation, which prevents the algorithm from getting trapped in local minima. In addition, the batch NG allows fast training, so that convergence is achieved in a small number of epochs [8]. Supervised versions of NG have also been developed, especially for classification [9, 10]. The algorithm is highly robust for clustering tasks and has also been proven robust for obtaining direct models of plants [11]. After years of work in identification and control of dynamical systems by means of NN, there is agreement among researchers that linear identifiers and controllers should be used as a first attempt, as stated in Chen and Narendra [2]. If a set of local linear models corresponding to several equilibrium points can approximate a nonlinear system with certain accuracy, then linear controllers can be designed for each model and the global control is related to control by switched linear models.
This divide-and-conquer approach is applied in this work. The resulting model is a set of local linear maps. Each neuron of the NG model corresponds to one local model. These local models are obtained after NG training. In this way, a direct model of the plant is obtained. After obtaining this NG model, the design of the local linear controller is simpler than that of the global nonlinear controller. Local linear mapping using another prototype-based algorithm, SOM, was successfully tested at the NASA facilities [12]. This paper aims to apply the robust modeling capability of NG to control a nonlinear plant such as a typical robot manipulator. The paper presents the learning rules of the considered NG algorithm in Section 2, the model of the plant and the control strategy are explained in Sections 3 and 4, respectively, and the proposed technique is tested in Section 5.
NEURAL GAS APPROACH
The unsupervised version of the NG algorithm is based on the energy cost function (1) according to the Euclidean metric. The notation used for the squared Euclidean distance is given in (2). Moreover,
(1) (2)
A neighborhood function (3) is needed to implement the algorithm. The rank function of (v, 𝑤𝑖), taking values in 0, . . . , 𝑚 − 1, represents the rank distance between prototype 𝑤𝑖 and data vector v. The minimum distance takes the value 0 and the rank for the maximum distance is equal to 𝑚 − 1, where 𝑚 is the number of neurons or prototypes; the neighborhood radius is a function of the epoch 𝑡:
(3)
The neighborhood radius is usually chosen to decrease exponentially according to (4), going from an initial positive value to a smaller final positive value:
(4)
where 𝑡 is the epoch step, 𝑡max is the maximum number of epochs, and the initial radius was chosen as half the number of map units, as in Arnonkijpanich et al. [13]. In addition, the final radius is chosen to be small in order to minimize the quantization error at the end of the training. The learning rule of the batch version is obtained in Cottrell et al. [8]. The batch algorithm can be obtained by means of Newton’s method using the Jacobian and Hessian matrices, 𝐽 and 𝐻, respectively, of the cost function 𝐸NG. The adaptation of the prototype 𝑤𝑖 is formulated accordingly based on this method:
(5)
The kernel function ℎNG can be considered locally constant [8]. In this way, the Jacobian and Hessian matrices are
(6) Substituting (6) into (5), the increment can be obtained
(7) Finally, the updating rule for each prototype vector appears in
(8)
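A compact sketch of the batch NG update described by (1)-(8) is shown below. The prototype initialization from data samples and the final radius value are assumptions for illustration, not settings taken from the paper.

```python
import numpy as np

def batch_neural_gas(data, m=49, t_max=50, sigma0=None, sigma_f=0.01,
                     rng=np.random.default_rng(0)):
    """Minimal batch NG sketch: prototypes are updated as neighborhood-weighted
    means of the data; the radius decays exponentially from sigma0 (half the
    number of neurons, as in the text) to a small assumed final value sigma_f."""
    n, d = data.shape
    w = data[rng.choice(n, m, replace=False)].astype(float)            # initial prototypes
    sigma0 = m / 2.0 if sigma0 is None else sigma0
    for t in range(t_max):
        sigma = sigma0 * (sigma_f / sigma0) ** (t / (t_max - 1))       # radius decay (4)
        dist = ((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)   # squared distances (2)
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)           # rank distances
        h = np.exp(-ranks / sigma)                                     # neighborhood function (3)
        w = (h[:, :, None] * data[:, None, :]).sum(axis=0) / h.sum(axis=0)[:, None]  # update (8)
    return w
```

Each epoch recomputes the rank distances for all data points and moves every prototype to a neighborhood-weighted mean, which is what lets the batch version converge in a small number of epochs.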
Supervised Learning
Supervised learning with NG is possible by means of local linear mapping over each Voronoi region defined by prototype vector 𝑤𝑖. A constant 𝑦𝑖 and a vector ∇𝑓𝑖 with the same dimension as 𝑤𝑖 are assigned to each neuron 𝑖. The goal is to approximate the function 𝑦 = 𝑓(v) from ℜ𝐷 to ℜ, where 𝐷 is the number of training variables, that is, the dimension of data vector v. The training thus becomes supervised and the dataset contains input-output pairs of data vector v and variable 𝑦 as the objective function. The estimation is carried out by
(9)
where the left-hand side is the estimated output value, 𝑦𝑖 is the reference value learned for 𝑤𝑖, ∇𝑓𝑖 is the gradient of the approximated function obtained in the 𝑖th Voronoi region defined by 𝑤𝑖, and 𝑖∗ is the neuron 𝑖 whose 𝑤𝑖 is closest to data vector v, that is, the best matching unit (BMU). The asterisk superscript denotes the winning neuron for input data vector v. The probability distribution of the input data is represented by the prototype vectors 𝑤, which are previously updated according to the typical rule of the unsupervised version of the algorithm [14] using (8). Each prototype vector 𝑤𝑖 can be considered as the centroid of the 𝑖th Voronoi region. After unsupervised training, 𝑚 regions are well defined by these vectors. At this point, local models are created in each of these regions so that 𝑚 local models represent the whole data distribution. The energy cost function of the supervised version of the algorithm is based on the mean square error of the output variable estimation averaged over each Voronoi region [15] according to (10). Prototypes 𝑤𝑖 are already obtained in (8), whereas the adaptation rules of 𝑦𝑖 and ∇𝑓𝑖 are calculated considering Newton’s method for the energy cost (10). The learning rules for 𝑦𝑖 and ∇𝑓𝑖 are shown in (11) and (12), respectively:
(10)
(11)
(12)
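To make the supervised stage concrete, the sketch below fits the local constants y_i and gradients ∇f_i by per-region least squares, which minimizes the same regional squared error as (10); the batch Newton updates (11)-(12) are the authors' formulation, and the least-squares fit here is only a stand-in under that assumption.

```python
import numpy as np

def fit_local_linear_models(data, targets, w):
    """For each Voronoi region of the trained prototypes w, fit an affine local
    model y ~ y_i + grad_i . (v - w_i), mirroring the estimation (9)."""
    m, d = w.shape
    bmu = np.argmin(((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2), axis=1)
    y0 = np.zeros(m)
    grad = np.zeros((m, d))
    for i in range(m):
        v = data[bmu == i]
        t = targets[bmu == i]
        if len(t) == 0:
            continue                                      # empty region, keep zeros
        A = np.hstack([np.ones((len(t), 1)), v - w[i]])   # columns: [1, (v - w_i)]
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        y0[i], grad[i] = coef[0], coef[1:]
    return y0, grad

def ng_predict(v, w, y0, grad):
    """Local linear estimate (9) using the best matching unit of v."""
    i_star = np.argmin(((v - w) ** 2).sum(axis=1))
    return y0[i_star] + grad[i_star] @ (v - w[i_star])
```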
PLANT MODEL
After NG training, the plant is modeled as a set of linear systems whose output 𝑌 depends on the previous values of both the output 𝑌 and the input 𝑈. The Nonlinear Autoregressive-Moving Average (NARMA) model has been proven effective for nonlinear identification [16, 17] and can be expressed as
(13)
where 𝑌𝑘 is the system output at the sampling instant 𝑘, 𝑈𝑘 is the system input at instant 𝑘, and 𝑑 is the system delay. Considering a zero-delay system and substituting (13) into (9) yields
Hereafter, the gradients will be denoted as coefficients 𝑎𝑖 and 𝑏𝑖.
(14)
And the following terms will be gathered to form variable 𝜂:
(15)
Denoting the polynomials with the backward shift operator 𝑧−1 by
(16)
which reminds one of an ARMAX model, 𝜂 is not only a zero-mean independent identically distributed white noise process but also a known disturbance calculated according to (15), and it depends on the input and output values since it is obtained through the BMU 𝑖∗. The internal noise of the system can be included in 𝜂. Using the 𝑧-transform, 𝑌(𝑧) is the system output and 𝑈(𝑧) is the system input, where the controller must be connected.
LOCAL LINEAR CONTROL BY STATE FEEDBACK
If the system is linear (locally), then the superposition theorem can be applied, leaving the linear transfer function between the system output and the control input as follows:
(17)
Define
(18)
and choose the following relationship between the state variables: 𝑥1(𝑧) = 𝑧−𝑛𝑄(𝑧), 𝑥2(𝑧) = 𝑧−𝑛+1𝑄(𝑧), . . . , 𝑥𝑛(𝑧) = 𝑧−1𝑄(𝑧).
Transfer function (17) can be expressed in control canonical form for linear state space design as
(19)
(20) Assuming that the system is controllable, the purpose of the control by state feedback via pole placement is to assign a set of pole locations for the closed-loop system that will correspond to satisfactory dynamic response in terms of rise time, settling time, and overshoot of the transient response. The control law is a linear combination of the state variables 𝑥𝑖 which are estimated in (19) by way of local transfer function (17). The characteristic polynomial of the closed-loop system depending on system matrix 𝐹, input matrix 𝐺, and gain vector 𝐾 is (21)
whereas the characteristic polynomial of the desired pole locations is
(22)
For an 𝑛th-order system, the gain vector 𝐾 = [𝐾1 𝐾2 ⋅⋅⋅ 𝐾𝑛] for state feedback is obtained by matching coefficients in (21) and (22) forcing the closed-loop poles to be placed at the desired locations:
(23)
It is possible that there are enough degrees of freedom to choose arbitrarily any desired root location by selecting the proper values 𝐾𝑖. This is an inexact procedure that may require some iteration by the designer. The solution of the local linear model lies in finding the matrix or the regulator coefficients that implement the state feedback control. The stability condition for linear discrete-time systems is that all the eigenvalues must be inside the unit circle. Obviously, this criterion is not valid for nonlinear systems but there is a region inside the stable linear area where the asymptotic stability of the switched linear systems is achieved [18]. Thus, not only a desired dynamic response can be designed, but also stability criteria will be accomplished. In this work, this stability region was found by means of trial-and-error with different eigenvalues. The proposed control strategy scheme is shown in Figure 1. Gain vector 𝐾 is calculated to fulfill the dynamics according to (22) depending on the local linear model defined by the current winning neuron 𝑖∗ or BMU. State variables 𝑥𝑖 are also obtained by the local linear model of the NG in (19). Tracking of the setpoint reference is possible using the inverse static gain of the feedback loop. In addition, since disturbance 𝜂 is known (it is included by the model), it can be compensated as −𝜂/𝐴𝑖∗ (𝑧−1). The transfer function of the prefilter has been chosen as (1 − 𝜆prefilter)/(𝑧 − 𝜆prefilter)𝑛 and determines the switching rate of the local linear models. Although the pole assignment method does not affect the zeros of the plant, the prefilter can be optionally designed in order to cancel dominant zeros located inside the unit circle.
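For one local model, the pole-placement step of (19)-(23) can be sketched as follows; the A(z) coefficients used in the quick check are invented for illustration only.

```python
import numpy as np

def canonical_form(a):
    """Companion (control canonical) realization of A(z) = z^n + a1*z^(n-1) + ... + an,
    matching the state choice x1 = z^-n Q, ..., xn = z^-1 Q used in (19)."""
    n = len(a)
    F = np.zeros((n, n))
    F[:-1, 1:] = np.eye(n - 1)
    F[-1, :] = -np.asarray(a)[::-1]          # last row holds -an, ..., -a1
    G = np.zeros((n, 1))
    G[-1, 0] = 1.0
    return F, G

def pole_placement_gain(a, lam):
    """Gain vector K of (23): match the coefficients of (21) and (22) with all
    desired closed-loop eigenvalues placed at lam, i.e. alpha_d(z) = (z - lam)^n."""
    n = len(a)
    alpha = np.poly(np.full(n, lam))[1:]      # desired coefficients alpha_1 .. alpha_n
    return (alpha - np.asarray(a))[::-1]      # K1 .. Kn in the canonical coordinates

# quick check with illustrative (assumed) A(z) coefficients a1..a4
a = [-1.6, 1.1, -0.4, 0.05]
F, G = canonical_form(a)
K = pole_placement_gain(a, lam=0.5)
print(np.linalg.eigvals(F - G @ K[None, :]))  # all eigenvalues close to 0.5
```

At run time the gain would be recomputed whenever the BMU, and hence the local A(z) coefficients, changes.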
Figure 1. Control strategy by state feedback and local linear models.
EXPERIMENTAL TESTING
The aim is to control the typical robot arm problem depicted in Figure 2. Hagan et al. [19] focused on this plant to be controlled by a dynamic backpropagation algorithm using a Model Reference Control architecture. Obviously, the proposed control is not based on the mathematical model of the plant, but this is a well-known second-order nonlinear differential equation:
(24)
where 𝑀 = 1 kg, 𝐽 = 1 kgm2, 𝐿 = 1 m, 𝐵 = 1 kgm2/s, and 𝑔 = 10 m/s2. The viscous friction coefficient is important regarding stability and dynamic response. If 𝐵 = 0, then the system is unstable in open loop and the necessary training data cannot be obtained by open-loop simulation of the plant. In the present work, the system is simulated in open loop to acquire the training data considering 𝐵 = 1 in order to propose a plant with a more oscillatory response in comparison to Hagan et al. [19], where 𝐵 = 2. Plant input 𝑢 is a uniformly distributed random signal with −4 and 8.1 as minimum and maximum values of amplitude, respectively. The pulse width must be carefully selected in order to model the transient and steady states correctly. Thus, the pulse width is equal to 14 seconds. Since NG is a vector quantization algorithm where the neurons are updated according to the probability distribution function of
the training data, it is important to obtain a uniform distribution of the output value 𝜃 in the training data. After plant simulation, it was observed that the present system output 𝜃𝑘 did not depend on the present system input 𝑢𝑘 in the training dataset. If 𝑛 = 2 is considered, then the plant is not correctly modeled: only the steady state is approximated, not the transient. If 𝑛 = 3, then the plant model is quite accurate but the control does not perform well due to the system nonlinearities. The optimum value is 𝑛 = 4. Batch NG training was carried out considering 50 epochs and 49 neurons to obtain the direct model of the plant. Fewer neurons cause a similar effect to that mentioned above for 𝑛 = 2, and using more neurons does not improve the control. Figures 3 and 4 show the results for testing data after training for 𝑛 = 4. Obviously, the sampling time must meet the criteria of the Nyquist-Shannon theorem. The considered sampling time was 0.1 seconds.
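A possible way to generate such training data is sketched below: the arm equation (24) is integrated in open loop under a random pulse input with the amplitudes and pulse width quoted above. The Euler integration scheme and the number of pulses are assumptions; regression vectors with n = 4 delayed outputs and inputs would then be assembled from the recorded series.

```python
import numpy as np

def generate_training_data(n_pulses=200, pulse_width=14.0, dt=0.1,
                           u_min=-4.0, u_max=8.1, rng=np.random.default_rng(1)):
    """Open-loop simulation of the robot arm (24) with M = J = L = B = 1 and g = 10:
    a uniformly random pulse input of width 14 s and amplitude in [-4, 8.1],
    sampled every 0.1 s (simple Euler integration is an assumption)."""
    steps_per_pulse = int(pulse_width / dt)
    theta, omega = 0.0, 0.0
    U, THETA = [], []
    for _ in range(n_pulses):
        u = rng.uniform(u_min, u_max)
        for _ in range(steps_per_pulse):
            # J*theta_ddot = u - M*g*L*sin(theta) - B*theta_dot
            accel = u - 10.0 * np.sin(theta) - 1.0 * omega
            omega += dt * accel
            theta += dt * omega
            U.append(u)
            THETA.append(theta)
    return np.array(U), np.array(THETA)
```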
Figure 2. Plant used to test the proposed control.
Figure 3. Results for testing data using 𝑛 = 4.
Figure 4. Results for testing data using 𝑛 = 4.
Eigenvalues 𝜆𝑗 in (22) provide asymptotic stability and a suitable dynamic response. We consider that all the eigenvalues are equal; that is, the desired characteristic polynomial is (𝑧 − 𝜆)4. In order to tune 𝜆, the control system was tested for different eigenvalues and prefilter poles. Steps of amplitude 0.9 at the reference setpoint were
used since 𝜃 values close to 0.9 represent the worst system working zone with high nonlinearity. Obviously, 𝜃 values over 0.9 are even worse, but we refer to the training data range. Eigenvalue 𝜆 was incremented from 0.2 to 0.9 in steps of 0.1. Figures 5 and 6 show the results. If the prefilter has a wide bandwidth (𝜆prefilter is low), then lower values of 𝜆 produce instability. In Figure 5, lower 𝜆 values yield a considerable effect of the modeled disturbance 𝜂 and the overshoot is high, whereas there are some rebounds between two BMU linear models for higher 𝜆 values. The lower 𝜆, the lower the rise time (the wider the bandwidth). The optimum 𝜆 range is [0.5–0.6] for 𝜆prefilter = 0.4. Thus, when using switched linear models there is a stability region of 𝜆 within the global stability area of linear systems theory (the unit circle) depending on the switching rate of the NG local linear models [18]. Here the switching rate is determined by the prefilter transfer function. If 𝜆prefilter is increased, then good results are obtained for 𝜆 = 0.2; see Figure 6.
Figure 5. Results for 𝜆prefilter = 0.4.
Figure 6. Results for 𝜆prefilter = 0.9.
Once the parameters had been determined, the NG approach was tested to track a constant reference for control of position in Figure 7 and a variable reference such as a sinusoidal signal to check control of velocity in Figure 8. The linear estimation output is not the NG estimation in (9) but it is calculated by means of (20) and adding the value of known disturbance 𝜂.
Figure 7. Results for 𝜆prefilter = 0.7 and 𝜆 = 0.2.
Figure 8. Results for 𝜆prefilter = 0.7 and 𝜆 = 0.2.
The worst control occurs when the reference value is close to 0.9. To illustrate this problem, PI control with fixed parameters was compared to the NG approach. Obviously, PID control would achieve a fast and stable response due to the derivative action, but at the cost of an unrealizable control action, and the system control would be affected by noisy signals. The linearized model using the 𝑠-transform in the neighborhood of 𝜃0 is
(25)
A suitable PI design regarding settling time and smoothness of response is PI(𝑠) = 𝑘𝑝 ⋅ (𝑠 + (𝑘𝑖/𝑘𝑝))/𝑠 = 0.3 ⋅ (𝑠 + 10)/𝑠 considering 𝜃0 = 0.5. Figure 9 shows that the system controlled by this PI with fixed parameters becomes more oscillatory when considering 𝜃0 = 0.9 because the two complex poles of the plant change their location, shifting the closed-loop poles. The two complex closed-loop poles become more dominant and, therefore, the system oscillates more. In this design the trade-off is between the settling time and the oscillatory component of the response. The tuning is to change 𝑘𝑝 while keeping the location of the zero constant at −10; that is, 𝑘𝑖/𝑘𝑝 = 10. Figure 10 shows the influence of adjusting 𝑘𝑝 on the control design, with the NG approach displayed for comparison.
Figure 9. PI control at two different linearization points.
Figure 10. NG in comparison to PI.
The control strategy described above is valid for feedforward control, that is, assuming the absence of disturbances. Under these conditions, the algorithm promises very good performance. However, disturbance rejection can be achieved by adding an extra state variable driven by the tracking error 𝑒 and following the steps described in (21) to (23); Figure 11 shows the rejection of a constant output disturbance of amplitude 0.05.
Figure 11. Disturbance rejection.
CONCLUSIONS
In this paper, a supervised version of the neural gas (NG) algorithm is proposed to control nonlinear systems whose dynamic mathematical models are unknown. The identification of the plant is achieved with the NG model. In comparison to other types of neural networks, the formation of the NG model is a robust procedure since there are neither problems of local minima nor overfitting. The training data must be carefully selected in order to model the transient and steady states correctly. The NG algorithm tends to model the steady states quite well. Obviously, the transient must be correctly modeled in order to control the plant. In this way, the number of delayed samples 𝑛 and the number of neurons 𝑚 are key parameters. There must be a sufficient number of neurons, but the control is not improved if their number is too large. The trained NG network produces a set of piece-wise local linear models, each of which is represented by a neuron. The global controller is a set of linear controllers obtained by state feedback via pole assignment. This control does not affect the zeros, but if these are inside the unit circle, then they can be cancelled by the poles of the prefilter.
Eigenvalues 𝜆 inside the unit circle do not guarantee asymptotic stability because the plant to be controlled is nonlinear. Therefore, the stability corresponds to a region inside the unit circle [18]. This set of 𝜆 values was assigned by means of trial-and-error. The worst performance occurs for the highest setpoint values, where the nonlinearities arise. The proposed approach provides a smoother and faster response than the typical PI with fixed parameters. To conclude, the NG algorithm provides a robust procedure not only for clustering tasks, but also for feedforward nonlinear control using the gradient vectors obtained by the supervised version. These gradient vectors constitute the coefficients of the local transfer function of the plant and thus determine its poles and zeros. The computational complexity is linear in the number of samples, neurons, and variables because of the efficient implementation of the batch procedure.
REFERENCES
1.
B. Jakubczyk and E. D. Sontag, “Controllability of nonlinear discretetime systems: a lie-algebraic approach,” SIAM Journal on Control and Optimization, vol. 28, no. 1, pp. 1–33, 1990. 2. L. Chen and K. S. Narendra, “Identification and control of a nonlinear discrete-time system based on its linearization: a unified framework,” IEEE Transactions on Neural Networks, vol. 15, no. 3, pp. 663–673, 2004. 3. K. S. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990. 4. K. S. Narendra and F. L. Lewis, “Editorial: introduction to the special issue on neural network feedback control,” Automatica, vol. 37, no. 8, pp. 1147–1148, 2001. 5. O. De Jesús and M. T. Hagan, “Backpropagation algorithms for a broad class of dynamic networks,” IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 14–27, 2007. 6. Y. Zhang and J. Wang, “Recurrent neural networks for nonlinear output regulation,” Automatica. A Journal of IFAC, the International Federation of Automatic Control, vol. 37, no. 8, pp. 1161–1173, 2001. 7. T. M. Martinetz, S. G. Berkovich, and K. J. Schulten, “‘Neural-gas’ network for vector quantization and its application to time-series prediction,” IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 558–569, 1993. 8. M. Cottrell, B. Hammer, A. Hasenfuß, and T. Villmann, “Batch and median neural gas,” Neural Networks, vol. 19, no. 6-7, pp. 762–771, 2006. 9. B. Hammer, M. Strickert, and T. Villmann, “Supervised neural gas with general similarity measure,” Neural Processing Letters, vol. 21, no. 1, pp. 21–44, 2005. 10. B. Hammer, A. Hasenfuss, F. M. Schleif, and T. Villmann, “Supervised batch neural gas,” in Artificial Neural Networks in Pattern Recognition, Lecture Notes in Computer Science, pp. 33–45, Springer, Berlin, Germany, 2006. 11. I. Machon-Gonzalez, H. Lopez-Garcia, J. Rodriguez-Iglesias, E. Maranõn-Maison, L. Castrillon-Pelaez, and Y. Fernandez-Nava, “Comparing feed-forward versus neural gas as estimators: application
to coke wastewater treatment,” Environmental Technology, vol. 34, no. 9, pp. 1131–1140, 2013. 12. D. Erdogmus, J. Cho, J. Lan, M. Motter, and J. C. Principe, “Adaptive local linear modelling and control of nonlinear dynamical systems,” in Intelligent Control Systems Using Computational Intelligence Techniques, A. Ruano, Ed., pp. 119–152, IET, 2005. 13. B. Arnonkijpanich, A. Hasenfuss, and B. Hammer, “Local matrix adaptation in topographic neural maps,” Neurocomputing, vol. 74, no. 4, pp. 522–539, 2011. 14. I. Machón-González, H. López-García, and J. Luís Calvo-Rolle, “A hybrid batch SOM-NG algorithm,” in Proceedings of the 6th IEEE World Congress on Computational Intelligence, WCCI 2010—2010 International Joint Conference on Neural Networks (IJCNN ‘10), pp. 1–5, July 2010. 15. M. J. Crespo-Ramos, I. Machón-González, H. López-García, and J. L. Calvo-Rolle, “Detection of locally relevant variables using SOM-NG algorithm,” Engineering Applications of Artificial Intelligence, vol. 26, no. 8, pp. 1992–2000, 2013. 16. L. Chen and K. S. Narendra, “Nonlinear adaptive control using neural networks and multiple models,” Automatica, vol. 37, no. 8, pp. 1245–1255, 2001. 17. K. S. Narendra and S. Mukhopadhyay, “Adaptive control using neural networks and approximate models,” IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 475–485, 1997. 18. V. F. Montagner, V. J. Leite, R. C. Oliveira, and P. L. Peres, “State feedback control of switched linear systems: an LMI approach,” Journal of Computational and Applied Mathematics, vol. 194, no. 2, pp. 192–206, 2006. 19. M. T. Hagan, H. B. Demuth, and O. De Jesús, “An introduction to the use of neural networks in control systems,” International Journal of Robust and Nonlinear Control, vol. 12, no. 11, pp. 959–985, 2002.
SECTION 4: INTELLIGENT CONTROL APPLICATIONS
Chapter 13
Ship Steering Control Based on Quantum Neural Network
Wei Guan, Haotian Zhou, Zuojing Su, Xianku Zhang, and Chao Zhao Navigation College, Dalian Maritime University, Dalian 116026, China
ABSTRACT
During a mission at sea, steering control of the yaw motion of an intelligent autonomous surface vessel (IASV) is a very challenging task. In this paper, a quantum neural network (QNN), which offers strong learning capability and a fast learning rate, is proposed to act as the foundation feedback control hierarchy module of the IASV planning and control strategy. Numerical simulations show that the QNN steering controller improves the learning-rate performance significantly compared with conventional neural networks. Furthermore, numerical and practical steering control experiments on the IASV BAICHUAN show control performance similar to that of a conventional PID steering controller, confirming the feasibility of the QNN steering controller for future IASV planning and control engineering applications.
Citation: Wei Guan, Haotian Zhou, Zuojing Su, Xianku Zhang, Chao Zhao, “Ship Steering Control Based on Quantum Neural Network”, Complexity, vol. 2019, Article ID 3821048, 10 pages, 2019. https://doi.org/10.1155/2019/3821048. Copyright: © 2019 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
In the past decade, research on intelligent autonomous surface vessel (IASV) technology in academia and the marine industry has continued to grow. These developments have been fuelled by advanced sensing, communication, and computing technology, together with the potentially transformative impact on autonomous sea transportation and the perceived social and economic benefits [1–5]. The ship planning and control strategy for an IASV shown in Figure 1, which is based on a module-based hierarchical structure, would be a good navigation strategy. It includes the global route planning module, behaviour decision-making module, local motion planning module, and feedback control module. These modules are in charge of different tasks; in particular, the feedback control module is the foundation module, acting as the action part of the IASV navigation process. The key function of this module is the IASV steering operation to maintain or change the ship course. In this paper, a quantum neural network (QNN) for ship steering control is proposed to address the ship steering control problem based on the IASV planning and control concept.
Figure 1. IASV planning and control strategy.
As a good research foundation for the IASV steering control problem, many effective steering feedback control methods have been surveyed. Ship steering control based on the proportional-integral-derivative (PID) strategy is simple and easy to construct. The conventional PID still occupies the role of the basic controller in process control, but it is not the trend in controller design due to its lack of learning and adaptation capabilities. In addition, the controller parameters require adjustment under varying conditions, which is time consuming and may not achieve accurate control performance. To solve these issues and obtain better performance, various advanced control strategies have been proposed for ship steering control in recent years, such as adaptive steering control strategies [6–8], steering control strategies based on fuzzy logic algorithms [9, 10], steering controllers based on the backstepping controller design method [11–13], and the adaptive backstepping method [14, 15]. Robust control schemes such as the sliding mode control method [16, 17] and the H∞ robust control algorithm [18] have also been utilized in ship steering control to achieve better ship course keeping and course changing manoeuvres. Since the 1990s, with the introduction of the artificial neural network into ship steering controller design, experts and scholars have gradually increased research on this issue. Witt et al. proposed a PID steering controller to train a neural network, where the output signal of the PID controller acts as the teacher signal; the simulation results showed that the PID controller and the neural network controller have basically the same course control effect [19]. Hearn et al. proposed an online course control neural network to improve on the conventional PID steering control, but the slow convergence of the neural-network-based ship steering controller remains a big problem to be solved [20]. In order to overcome the shortcomings of the conventional neural network, a ship steering controller based on the QNN is proposed in this paper under the concept of quantum computing [21–23]. The concept of the QNN was first proposed by Toth et al. in 1996 [24]. Then, Matsui et al. used a quantum bit and the quantum rotation gate to design a QNN for information processing and expression [25]. Another group of Japanese scholars, Kouda et al., summarized their previous research into an emerging model, a quantum neuron model based on general quantum logic gates [26]. In 2018, Jeswal and Chakraverty introduced the latest developments of the QNN and discussed its applications [22].
Xie et al. took general quantum logic gates as the basis function to design a quantum neural computing network, and the simulation results indicated that the QNN is superior to the classical BP neural network and the radial basis function (RBF) neural network computing model in financial data analysis [27]. Li and Li also pointed out that a QNN based on general quantum gate evolution can improve the convergence performance of the conventional BP neural network [28]. Besides, the QNN has been applied to signature verification [29], audio watermarking [30], cardiovascular disease risk prediction [31], classification and recognition of electronic shock faults [32], and English-to-Hindi machine translation [33], among other fields. Motivated by the above observations, a QNN steering controller is applied to the IASV steering to yaw control. The remainder of the paper is organized as follows. In Section 2, the mathematical model of the IASV steering to yaw motion is given. Section 3 is devoted to a systematic procedure for the QNN steering controller design. In Section 4, numerical comparison simulations of the QNN steering controller and a conventional neural network steering controller are first carried out to demonstrate the faster learning convergence of the proposed QNN steering controller. Then, a numerical and practical experiment on the smart IASV BAICHUAN shows the feasibility of the QNN steering controller in engineering practice. Finally, Section 5 gives the conclusions of the paper.
IASV MATHEMATICAL MODEL
While a mathematical model of the IASV is fully described by coupled nonlinear differential equations, a simple model with predictive capability is usually preferred for the design of a ship-steering autopilot. A three-degree-of-freedom plane motion including surge, sway, and yaw motion is considered satisfactory. However, roll motion cannot be neglected due to couplings, and hence a four-degree-of-freedom plane motion including surge, sway, yaw, and roll motion is used to describe the motion of a ship. Consequently, a fourth-order transfer function relating the yaw rate to the rudder deflection is derived based on the linearized equations of motion. Nevertheless, the fourth-order transfer function is further reduced to a second-order Nomoto model and then to a first-order Nomoto model for ease of controller design. The first-order Nomoto model, from rudder deflection δ to yaw rate r, is presented as
(1)
where r is the yaw rate, δ is the rudder deflection, T is the time constant of IASV maneuverability, and K is defined as the steering control gain constant of IASV maneuverability. The parameters K and T that describe the ship steering to yaw dynamics can be identified from standard maneuvering tests. Since r is the time derivative of the yaw angle ψ, the transfer function relating the yaw angle to the steering movement can be obtained by adding an integrator (1/s) to the first-order Nomoto model of (1); then we can get
(2)
and the corresponding differential equation can be expressed as
(3)
The model presented in (3) is modified to include a nonlinear steering condition as discussed in [6], wherein the yaw rate term is replaced by the nonlinear function defined in (5). Then, we can get the following equation:
(4)
where
(5)
Because of the symmetrical structure of ships, the parameters a0 = a2 = 0 [34] and a1 is set as +1 for stable ships and −1 for unstable ones, while the value of a3, known as the Norrbin coefficient [14], can be determined via the ship turning test.
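For reference, one integration step of the resulting nonlinear yaw model can be sketched as below, using the Norrbin-type substitution into (3) and the BAICHUAN parameters quoted later in the chapter; the explicit Euler discretization and this exact form of (4)-(5) are assumptions for illustration.

```python
def nomoto_step(psi, r, delta, dt=0.05, K=0.6, T=1.866, a1=1.0, a3=-9.44e-6):
    """One Euler step of the assumed nonlinear yaw dynamics
    T*r_dot + a1*r + a3*r**3 = K*delta, with psi the yaw angle and r the yaw rate."""
    r_dot = (K * delta - (a1 * r + a3 * r ** 3)) / T
    return psi + dt * r, r + dt * r_dot
```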
QNN STEERING CONTROLLER DESIGN
In this section, a quantum neural network model was constructed for the ship steering controller design to enhance the convergence performance of the conventional neural network steering controller.
The Quantum Neuron Model
The structure of the quantum neuron model based on the quantum logic gate is shown in Figure 2 and includes the input part, phase rotation part, aggregation part, reverse rotation part, and output part. The details of the
quantum neuron working process are shown in the following steps.
Step 1: let the input be given as a qubit and define the qubit phase rotation gate as
(6) Then, with the aggregation, we can get
(7)
Step 2: the result of equation (7) undergoes the reverse rotation operation by the controlled-NOT gate as follows:
(8) where f is the sigmoid function; then, we can get
(9) Therefore, the relationship between the input and output of the quantum neuron model can be described as
(10)
Figure 2. The quantum neuron model.
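Since equations (6)-(10) did not survive the page layout, the sketch below shows one common realization of such a qubit neuron (in the style of Matsui and Kouda et al. [25, 26]): inputs encoded as qubit phases are rotated by trainable phases, the rotated amplitudes are aggregated, the reverse-rotation parameter shifts the aggregate phase through a sigmoid, and the squared sine of the resulting phase is the output. It illustrates the structure in Figure 2, not necessarily the exact equations of this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qubit_neuron(phi, theta, gamma):
    """Illustrative qubit-neuron forward pass: phase rotation, aggregation,
    reverse rotation (via gamma and a sigmoid), and a probability-like output."""
    amplitudes = np.exp(1j * (phi + theta))                  # phase rotation gates
    aggregate = np.angle(np.sum(amplitudes))                 # aggregation
    out_phase = 0.5 * np.pi * sigmoid(gamma) - aggregate     # reverse rotation
    return np.sin(out_phase) ** 2

# tiny usage example with assumed values
phi = np.array([0.3, 1.1])        # encoded inputs (qubit phases)
theta = np.array([0.2, -0.4])     # trainable rotation phases
print(qubit_neuron(phi, theta, gamma=0.5))
```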
QNN Model
Based on the quantum neuron model, a quantum neural network for the ship steering controller design is constructed as shown in Figure 3. The proposed neural network has three layers including an input layer, hidden layer, and output layer. The concept of QNN is applied to the layer that is between the input layer and the hidden layer; there are n quantum neurons in the input layer, m quantum neurons in the hidden layer, and p conventional neurons in the output layer.
Figure 3. The quantum neuron networks model.
Assuming the input variable is |xi〉, the output of the hidden layer is hj, the output of the QNN is yk, R(θij) is the quantum rotation gate between the input layer and the hidden layer to update the qubits, and wjk is the network
weight for the hidden layer and the output layer. Taking the qubit controlled-NOT gate U(γj) as the transfer function of the hidden layer, the output of the QNN can be expressed as
(11) where i = 1, 2, . . . , n; j = 1, 2, . . . , m; and k = 1, 2, . . . , p.
The Learning Algorithm of QNN
To apply the QNN in practical engineering, the training samples should be transformed into quantum states. For example, an n-dimensional Euclidean-space training sample can be transformed into the corresponding quantum state as
(12)
where
(13)
In the three layers of the QNN model described in Figure 3, there are three groups of parameters to be updated, namely, the phase rotation parameters θij, the reverse rotation parameters γj, and the network weights wjk. Firstly, define the error evaluation function as
E = (1/2) Σk (dk − yk)²   (14)
where dk and yk are the desired and actual outputs of the normalized quantum neural network, respectively. Let |xi〉 = (cos φi, sin φi)T; then equation (11) can be rewritten as
(15)
Let
(16) According to the gradient descent method, we can get
(17)
Therefore, the update rules for the phase rotation parameters θij, the reverse rotation parameters γj, and the network weights wjk are
(18) where η is the learning rate of the QNN.
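The analytic gradients of (15)–(17) are not reproduced here; the sketch below only illustrates the shape of the update rule (18), a gradient-descent step with learning rate η on the error function of (14), using central-difference numerical gradients in place of the paper's analytic expressions. All names and the toy loss are assumptions.

```python
import numpy as np

def gd_update(params, loss_fn, eta=0.1, eps=1e-6):
    """One gradient-descent update of the three QNN parameter groups
    (theta, gamma, w), using central-difference gradients of the error
    function instead of the analytic gradients of (15)-(17)."""
    new_params = {}
    for name, value in params.items():
        grad = np.zeros_like(value)
        it = np.nditer(value, flags=["multi_index"])
        for _ in it:
            idx = it.multi_index
            plus, minus = value.copy(), value.copy()
            plus[idx] += eps
            minus[idx] -= eps
            grad[idx] = (loss_fn({**params, name: plus})
                         - loss_fn({**params, name: minus})) / (2.0 * eps)
        new_params[name] = value - eta * grad   # descent step, mirroring the form of (18)
    return new_params

# Toy usage with a quadratic error surface standing in for E of equation (14)
params = {"theta": np.zeros((2, 5)), "gamma": np.zeros(5), "w": np.zeros((5, 1))}
loss = lambda p: float(sum(((v - 1.0) ** 2).sum() for v in p.values()))
params = gd_update(params, loss, eta=0.1)
```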
Teacher Controller of QNN
In this paper, the conventional PID steering controller acts as the teacher of the QNN controller. The input variable of the PID controller is the heading deviation Δψ, and a linear combination of the proportion, integration, and differentiation of the heading deviation Δψ is used as the output of the PID controller. The command steering angle δ(k) can be expressed as
δ(k) = kp Δψ(k) + ki Σ Δψ(j) + kd [Δψ(k) − Δψ(k−1)]   (19)
where kp, ki, and kd are the controller proportion parameter, integral parameter, and differential parameter, respectively, and k is the sampling time (k = 0, 1, 2, . . .).
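A minimal sketch of this teacher controller follows, assuming the usual positional discrete PID form; whether the sum and the difference of Δψ are scaled by the sampling period is not stated in the text, so the scaling here is an assumption.

```python
class HeadingPID:
    """Discrete PID course-keeping teacher: the command rudder angle is a linear
    combination of the heading deviation, its accumulated sum, and its first
    difference, sampled with period dt (scaling by dt is an assumption)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.acc = 0.0        # accumulated (integrated) heading deviation
        self.prev = 0.0       # previous heading deviation

    def __call__(self, dpsi):
        self.acc += dpsi * self.dt
        delta = (self.kp * dpsi
                 + self.ki * self.acc
                 + self.kd * (dpsi - self.prev) / self.dt)
        self.prev = dpsi
        return delta
```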
Design of the QNN Steering Controller
In this section, a three-layer 2-5-1 QNN model is constructed. The structure of the QNN steering control system is shown in Figure 4. The two inputs of the QNN steering controller are the heading deviation Δψ(k) and the yaw rate r(k), and the output is the command steering angle. The difference between the QNN steering controller output and the PID course-keeping controller output is defined as the system error, and the mean square of the system error (MSE) is used as the performance evaluation function and optimization target of the QNN; its target value is set to 0.00001. The activation function of the QNN hidden layer and the output layer is the hyperbolic tangent sigmoid function (tan-sigmoid), chosen to accelerate training and convergence. Gradient descent with a quasi-Newton algorithm [35] is adopted for the QNN training, and the momentum parameter of the quasi-Newton algorithm is set to 0.8. The initial values of the QNN weights are randomly generated in the interval (−1, 1), and the learning rate η of the QNN is set to 0.1.
Figure 4. IASV QNN-steering autopilot structure.
SIMULATIONS AND ANALYSIS
In this section, a series of simulations is used to illustrate the fast convergence and practical engineering effectiveness of the proposed controller. In particular, the IASV BAICHUAN is used as a practical platform for validation of the proposed QNN steering controller. Take K = 0.6, T = 1.866, a1 = 1, and a3 = −9.44 × 10−6 as the dynamic parameters of the second-order Nomoto ship model (4) for the IASV BAICHUAN, and set kp = 2, ki = 0.00001, and kd = 1.5 as the tuning parameters of the teacher controller (19). In the simulations, the initial course of the IASV was set to 000° and the desired course-keeping angle was set to 090°. The simulation time was 50 s and the sampling period 0.05 s. The simulation results in Figure 5 show that the PID steering controller, which acts as the teacher of the QNN controller, tracks the desired course after about 13 s; this confirms that the PID controller is a suitable teacher for the QNN steering controller. To realize the proposed QNN steering control system of Figure 4, the phase rotation parameters θij, reverse rotation parameters γj, and network weights wjk are updated during QNN training according to equation (18), using the training data set extracted from the PID controller simulation results in Figure 5.
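The following sketch combines the nomoto_step and HeadingPID sketches given earlier with the parameters quoted above (K, T, a1, a3, PID gains, 0.05 s sampling, 50 s run, 000° to 090° course change); the ±35° rudder saturation is an assumption, not a value from the text.

```python
import numpy as np

K, T, a1, a3 = 0.6, 1.866, 1.0, -9.44e-6          # BAICHUAN model parameters
dt, t_end = 0.05, 50.0                            # sampling period and run time
pid = HeadingPID(kp=2.0, ki=1e-5, kd=1.5, dt=dt)  # teacher controller settings

psi, r = 0.0, 0.0                                 # initial course 000 deg
psi_ref = np.deg2rad(90.0)                        # desired course 090 deg
log = []
for k in range(int(t_end / dt)):
    dpsi = psi_ref - psi                          # heading deviation
    delta = float(np.clip(pid(dpsi), np.deg2rad(-35), np.deg2rad(35)))  # assumed rudder limit
    psi, r = nomoto_step(psi, r, delta, K, T, a1, a3, dt)
    log.append((k * dt, np.rad2deg(psi), np.rad2deg(delta)))
print(log[-1])                                    # final time, heading, rudder angle
```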
Figure 5. The PID steering control results.
For comparison, a conventional BP neural network steering controller was also trained using the same training data set extracted from the PID control results in Figure 5. To highlight the advantages of faster convergence and fewer learning iterations, the QNN steering controller and the BP neural network steering controller were each trained 8 times, and the number of epochs in each training run is shown in Figure 6. For the BP neural network, the maximum number of training epochs is 9565 (6th run), the minimum is 4325 (2nd run), and the average over the 8 runs is 7022. For the QNN, the maximum number of training epochs is 4625 (2nd run), the minimum is 1526 (4th run), and the average over the 8 runs is 3302. Therefore, it can be concluded that the QNN steering controller significantly improves the convergence rate compared with the conventional BP neural network steering controller.
Figure 6. The epochs of eight training times.
To validate the effectiveness of the trained QNN steering controller, an IASV BAICHUAN QNN steering control simulation was carried out. The weights wjk of the QNN controller extracted from the 2nd training run were selected as the initial weights of the IASV BAICHUAN QNN steering controller; the values are detailed in Table 1. The simulation results are shown in Figure 7.
Table 1. The weights and parameters of the QNN steering controller: w13–w17, w23–w27, w38, w48, w58, w68, w78, and b1–b6, with trained values −0.0075, −0.0133, 0.1705, −0.0001, 0.0230, −0.0233, −0.0190, 0.0038, 0.0010, 0.0271, −0.0275, 0.0013, 0.0003, −0.0003, 0.0026, 6.4116, 6.4114, 0.0026, −0.165, −0.0012, and 0.0074.
Figure 7. The course keeping simulation results for the PID and QNN steering controller.
It can be seen from Figure 7 that the QNN steering controller tracks the desired course at about 13 s, a result very similar to that of the PID controller. It can be concluded that the proposed QNN steering controller has a strong learning ability and could be applied widely. To further confirm the performance of the proposed QNN steering controller, an IASV BAICHUAN course-keeping practical engineering experiment was carried out; the experiment environment is shown in Figure 8. The wind direction at the experiment site was northwest (310°–330°), the wind velocity varied from 0 to 0.20 m/s, the temperature was about 8°C, and the maximum wave height was about 0–0.05 m. The initial course of the IASV was set to 000°, and the desired course-keeping angle was set to 090°. The initial parameters of the QNN steering controller were set as in Table 1. A PID steering control experiment was also carried out for comparison, with the PID parameters again set to kp = 2, ki = 0.00001, and kd = 1.5, as mentioned above. The sampling period of both the QNN and PID steering controllers was set to 0.05 s. The experiment results are shown in Figure 9.
Figure 8. The scene of the experiment.
Figure 9. The practical course keeping control results of IASV BAICHUAN.
The left side of Figure 9 shows the course-keeping control effect, and the right side shows the steering output of the IASV BAICHUAN. The rise time of the QNN steering controller is slightly longer than that of the PID controller, but both controllers reach the target course rapidly and both achieve a good course-keeping effect. As seen on the right side of Figure 9, the outputs of the two steering controllers are basically the same and the times to reach static stability are also similar, although the response of the QNN steering controller is again slightly slower (by about one second) than that of the PID steering controller. To further quantify the controller performance, the controller efficiency function (CEF) is defined as
(20)
Then we obtain a CEF of 0.323 for the QNN steering controller and 0.299 for the PID steering controller. Hence, it is concluded that the QNN steering controller achieves a control efficiency similar to that of the PID steering controller for IASV course keeping.
Remark 1. From the numeric simulation and the practical engineering experiment, it can be seen that the proposed QNN steering controller has a slightly delayed response compared with the PID steering controller,
although the delayed response is not obvious in the numeric simulations. The delayed response might be caused by the larger computational burden of the QNN steering controller, which is a potential disadvantage of the QNN. However, thanks to its strong learning ability and fast convergence, the proposed QNN steering controller could learn from other advanced controllers as well, not only the PID controller. Therefore, the proposed QNN steering controller might serve as a universal controller design structure and scheme for the future IASV steering feedback control module.
CONCLUSIONS
In this paper, a QNN steering controller design method based on the planning and control concept is proposed. The numeric simulations of steering controllers based on the conventional BP neural network and on the QNN show that the QNN steering controller has a faster convergence rate than the conventional BP neural network steering controller. The numeric simulation results also show that the QNN steering controller has a course-keeping performance similar to that of its training teacher, the PID steering controller. Furthermore, the practical QNN steering control experiment on the IASV BAICHUAN has shown that the proposed QNN steering controller can feasibly be deployed on a practical IASV for steering-to-yaw control in future IASV planning and control engineering. In particular, the strong learning characteristics and efficient convergence of the QNN steering controller point to a development trend for advanced IASV steering controllers. The QNN steering controller proposed in this paper is only a first step toward applying advanced AI controllers to the IASV, and the proposed QNN controller structure could be applied to other marine control engineering practices.
ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China (Nos. 51409033 and 51679024) and the Fundamental Research Funds for the Central Universities of China (No. 3132019343).
REFERENCES
1. J. E. Manley, “Unmanned surface vehicles 15 years of development,” in Proceedings of the OCEANS, pp. 1–4, Quebec City, QC, Canada, September 2008.
2. F. Yunsheng, G. Zenglu, Z. Yongsheng, and W. Guofeng, “Design of information network and control system for USV,” in Proceedings of the 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 1126–1131, Hangzhou, China, July 2015.
3. N. H. Tran, H. S. Choi, J. Y. Oh, and S.-K. Jeong, “Design and implementation of dynamic positioning control system for USV,” in AETA 2015. Recent Advances in Electrical Engineering and Related Sciences, pp. 633–644, Springer, Cham, Switzerland, 2016.
4. E. I. Sarda, I. R. Bertaska, A. Qu, and K. D. von Ellenrieder, “Development of a USV station-keeping controller,” in Proceedings of the OCEANS 2015, pp. 1–10, Genova, Italy, May 2015.
5. I. D. Couzin, J. Krause, R. James, G. D. Ruxton, and N. R. Franks, “Collective memory and spatial sorting in animal groups,” Journal of Theoretical Biology, vol. 218, no. 1, pp. 1–11, 2002.
6. J. Van Amerongen, “Adaptive steering of ships-a model reference approach,” Automatica, vol. 20, no. 1, pp. 3–14, 1984.
7. J. Van Amerongen and A. J. Udink Ten Cate, “Model reference adaptive autopilots for ships,” Automatica, vol. 11, no. 5, pp. 441–449, 1975.
8. C. Y. Tzeng and K. F. Lin, “Adaptive ship steering autopilot design with saturating and slew rate limiting actuator,” International Journal of Adaptive Control and Signal Processing, vol. 14, no. 4, pp. 411–426, 2000.
9. G. Rigatos and S. Tzafestas, “Adaptive fuzzy control for the ship steering problem,” Mechatronics, vol. 16, no. 8, pp. 479–489, 2006.
10. Y. Yang and J. Ren, “Adaptive fuzzy robust tracking controller design via small gain approach and its application,” IEEE Transactions on Fuzzy Systems, vol. 11, no. 6, pp. 783–795, 2003.
11. T. I. Fossen and J. P. Strand, “Tutorial on nonlinear backstepping: applications to ship control,” Modeling, Identification and Control: A Norwegian Research Bulletin, vol. 20, no. 2, pp. 83–135, 1999.
12. K. D. Do, Z. P. Jiang, and J. Pan, “Underactuated ship global tracking under relaxed conditions,” IEEE Transactions on Automatic Control, vol. 47, no. 9, pp. 1529–1536, 2002.
13. S. Das, A. Bhatt, and S. E. Talole, “UDE based backstepping design for ship autopilot,” in Proceedings of the 2015 International Conference on Industrial Instrumentation and Control (ICIC), pp. 417–422, Pune, India, May 2015.
14. J. Du, C. Guo, S. Yu, and Y. Zhao, “Adaptive autopilot design of time-varying uncertain ships with completely unknown control coefficient,” IEEE Journal of Oceanic Engineering, vol. 32, no. 2, pp. 346–352, 2007.
15. J. Li, T. Li, Z. Fan, R. Bu, Q. Li, and J. Hu, “Robust adaptive backstepping design for course-keeping control of ship with parameter uncertainty and input saturation,” in Proceedings of the 2011 International Conference of Soft Computing and Pattern Recognition (SoCPaR), pp. 63–67, Dalian, China, October 2011.
16. E. W. McGookin, D. J. Murray-Smith, Y. Li, and T. I. Fossen, “Ship steering control system optimisation using genetic algorithms,” Control Engineering Practice, vol. 8, no. 4, pp. 429–443, 2000.
17. L. Yuan and H.-s. Wu, “Terminal sliding mode fuzzy control based on multiple sliding surfaces for nonlinear ship autopilot systems,” Journal of Marine Science and Application, vol. 9, no. 4, pp. 425–430, 2010.
18. L. Sheng, Y. Ping, L. Yan-yan, and D. Yan-chun, “Application of H infinite control to ship steering system,” Journal of Marine Science and Application, vol. 5, no. 1, pp. 6–11, 2006.
19. N. A. J. Witt, R. Sutton, and K. M. Miller, “A track keeping neural network controller for ship guidance,” IFAC Proceedings Volumes, vol. 28, no. 2, pp. 385–392, 1995.
20. G. E. Hearn, Y. Zhang, and P. Sen, “Alternative designs of neural network based autopilots: a comparative study,” IFAC Proceedings Volumes, vol. 30, no. 22, pp. 83–88, 1997.
21. R. P. Feynman, “Quantum mechanical computers,” Foundations of Physics, vol. 16, no. 6, pp. 507–531, 1986.
22. S. K. Jeswal and S. Chakraverty, “Recent developments and applications in quantum neural network: a review,” Archives of Computational Methods in Engineering, vol. 26, no. 4, pp. 793–807, 2019.
23. S. C. Kak, “On quantum neural computing,” in Advances in Imaging and Electron Physics, vol. 94, pp. 259–313, Elsevier, Amsterdam, Netherlands, 1995.
24. G. Toth, C. S. Lent, P. D. Tougaw et al., “Quantum cellular neural networks,” Superlattices and Microstructures, vol. 20, no. 4, pp. 473–478, 1996.
25. N. Matsui, M. Takai, and H. Nishimura, “A network model based on qubitlike neuron corresponding to quantum circuit,” Electronics and Communications in Japan (Part III: Fundamental Electronic Science), vol. 83, no. 10, pp. 67–73, 2000.
26. N. Kouda, N. Matsui, H. Nishimura, and F. Peper, “Qubit neural network and its learning efficiency,” Neural Computing and Applications, vol. 14, no. 2, pp. 114–121, 2005.
27. G. J. Xie, D. Zhou, and H. Q. Fan, “A neural network model based-on quantum gates cell and its applications,” System Engineering Theory and Practice, vol. 25, no. 5, pp. 113–117, 2005.
28. P. C. Li and S. Y. Li, “Learning algorithm and application of quantum BP neural networks based on universal quantum gates,” Journal of Systems Engineering and Electronics, vol. 19, no. 1, pp. 167–174, 2008.
29. M. J. Cao, P. C. Li, and H. Xiao, “Application of quantum neural network in PID parameter adjustment,” Computer Engineering, vol. 37, no. 12, pp. 182–189, 2011.
30. Y. P. Zhang, L. Chen, and H. A. Hao, “New audio watermarking algorithm based on quantum neural network,” Signal Processing, vol. 29, no. 6, pp. 684–690, 2013.
31. N. Renu, S. Sanjai, and G. Achal, “Cardiovascular risk prediction: a comparative study of Framingham and quantum neural network based approach,” Patient Preference and Adherence, vol. 10, pp. 1259–1270, 2016.
32. H. Guan, M. Liu, C. Li et al., “Classification recognition model of electric shock fault based on wavelet packet transformation and quantum neural network,” Transactions of the Chinese Society of Agricultural Engineering, vol. 34, no. 5, pp. 183–190, 2018.
33. R. Narayan, S. Chakraverty, and V. P. Singh, “Quantum neural network based machine translator for English to Hindi,” Applied Soft Computing, vol. 38, pp. 1060–1075, 2016.
34. W. Guan, W. Cao, J. Sun, and Z. Su, “Steering controller design for smart autonomous surface vessel based on CSF L2 gain robust strategy,” IEEE Access, vol. 7, pp. 109982–109989, 2019.
35. P. X. Lu, “Research on BP neural network algorithm based on quasi-Newton method,” Applied Mechanics and Materials, vol. 686, pp. 388–394, 2014.
Chapter 14
Human-Simulating Intelligent PID Control
Zhuning Liu Qingdao No.2 Middle School of Shandong Province, Qingdao, China
ABSTRACT
Motivated by the simplicity, robustness, and validity of PID control in dealing with nonlinearity and uncertainties of dynamics, and by simulating the intelligent behavior of human manual control using only the elementary information at hand, this paper introduces a simple formulation to represent prior knowledge and experiences of human manual control and proposes a simple and practicable control law, named Human-Simulating Intelligent PID (HSI-PID) control, together with simple tuning rules that have explicit physical meaning. HSI-PID control can not only easily incorporate prior knowledge and experiences of expert control into the controller but also automatically acquire knowledge of control experiences from the
Citation: Liu, Z. (2017), “Human-Simulating Intelligent PID Control”. International Journal of Modern Nonlinear Theory and Application, 6, 74-83. doi: 10.4236/ijmnta.2017.62007. Copyright: © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http:// creativecommons.org/licenses/by/4.0
past control behavior to correct the control action online. The theoretical analysis and simulation results show that HSI-PID control has better flexibility, stronger robustness, and, especially, a faster self-learning ability; it can make the motion of the system identically track the desired response, whether or not the controlled system has strong nonlinearity and dynamic uncertainties, even under uncertain, time-varying, and strong disturbances.
Keywords: Nonlinear Control, PID Control, Manual Control, Knowledge-Based Control, Intelligent Control
INTRODUCTION
It is well known that PID controllers [1] [2] [3] are still used extensively in industrial control and studied intensively in the current control area, because PID control not only has exceeding simplicity and strong robustness but can also effectively deal with nonlinearity and uncertainties of dynamics, so that asymptotic stability can be achieved. A major drawback of PID control is that it often suffers a serious loss of performance, that is, it causes large overshoot and long settling time, and may even lead to instability due to unlimited integral action. To mitigate this drawback, various PID-like control laws have been proposed to improve the transient performance. For example, a saturated-P and differential feedback plus a PI controller driven by a bounded nonlinear function of position errors [4], a linear PD plus an integral action of a nonlinear function of position errors [5], and a linear PD plus a double integral action driven by the position error and the filtered position [6] have been presented recently. A remaining, hard-to-overcome drawback of the PID-like controllers above is that their tuning procedure is very complex and tedious, and it is difficult to obtain satisfactory transient control performance because these PID-like controllers often produce surge and big overshoot, and may even lead to instability. It is obvious that all the PID-like control laws above can only improve the control performance to some extent; the intrinsic shortcomings of PID control are not eliminated at their root. In 2009, general integral control [7] appeared. After that, various general integral control laws were presented. For example, general concave integral control [8], general convex integral control [9], general bounded integral control [10], and the generalization of the integrator and integral control
action [11] were all developed by resorting to an ordinary control law along with a known Lyapunov function. Although these general integral control laws can effectively deal with the intrinsic shortcomings of PID control and have better control performance, the PID-like and general integral controllers above are all unintelligent. Hence, it is very interesting and challenging to seek PID control laws with intelligence, especially an online self-learning ability. Based on the statement above, it is obvious that the error e and its derivative ė are the fundamental elements of PID control and the essential information used in manual control, and the human manual control law can therefore be described as follows:
(1)
where the two functions of e and ė can be arbitrary linear or nonlinear functions.
Based on the analysis above, and through simulating the intelligent behavior of human manual control, this paper proposes a simple and practicable control law, named Human-Simulating Intelligent PID (HSI-PID) control. The main contributions are as follows: 1) a simple and practical formulation to represent prior knowledge and experiences of manual control is developed, which makes it easy to design a controller; 2) simple and practical tuning rules with explicit physical meaning are introduced, which makes it easy to tune a high-performance controller; 3) two entirely new and simple kinds of integrator and integral control action are proposed, which removes the intrinsic shortcomings of PID control in principle; 4) HSI-PID control not only can easily incorporate prior knowledge and experiences of expert control into the controller but also can automatically acquire knowledge of control experiences from past control behavior to correct the control action online, and can therefore make the motion of the system identically track the desired response. Moreover, the simulation results verify these conclusions. The remainder of the paper is organized as follows: Section 2 describes the Human-Simulating PID control law and definitions. Section 3 presents the tuning rules of Human-Simulating Intelligent PID control. An example and simulation are provided in Section 4. Conclusions are presented in Section 5.
HUMAN-SIMULATING PID CONTROL LAW
An experienced operator will anticipate all types of disturbances to the system. It would be very difficult to reproduce in an automatic controller the many judgments that an average person makes daily and unconsciously, so it is more realistic to simulate the control procedure of human manual control in order to obtain as nearly perfect operation as possible. A general human manual control procedure can be described as follows: 1) when the error is large and tends to increase, the human controller produces the largest control action and attempts to restrain the error from increasing further; 2) when the error decreases, the human controller gradually reduces the control action to limit the rate at which the error decreases, to avoid the system going out of control; 3) when the error is small and tends to zero, the human controller produces a small or even inverted control action to prevent overshoot; 4) when the error is large, the human controller produces a coarse control action, and when the error is small, a precise control action; 5) the amplitude of the control action is determined by the size of the error and the tendency of the error to change. Based on the statements above, it is easy to see that the human manual control law is in good agreement with a sigmoid-type function, so a sigmoid-type function can be used to simulate the procedure of human manual control, that is, to approximate Equation (1). For the purpose of this paper, it is convenient to introduce the following definitions.
Definition 1. Tow Error [12]. For simulating the way the human controller deals with the error information in the manual control procedure, the Tow Error is defined by
(2)
where the saturation bound is a positive constant, the maximum of the Tow Error.
The schematic graph of Tow Error is shown in Figure 1.
Figure 1. Tow error.
Definition 2. Desired Acceleration [12]. In the human manual control procedure, one usually has an explicit expectation for the speed of response and the overshoot. Namely, under the control action, the error e will change in conformity with the desired response; it can be neither too fast nor too slow, and eventually the error e and its derivative ė both tend to zero at the same time. In other words, there is a kind of cooperative relationship between the error and its rate in the human manual control procedure. For simulating this behavior, it is indispensable to introduce the concept of Desired Acceleration, which can be taken as the sigmoid-type function
(3)
where the bound in (3) is a positive constant, the maximum of the Desired Acceleration, and kp and kd are positive constants.
Discussion 1. When the initial condition and the parameters of Equation (3) are given, the desired velocity ėd and the desired error ed can be obtained. This shows that Equation (3) not only describes the desired response speed and desired response error of the human manual control procedure but also constructs a kind of cooperative relationship between the error and its rate. So, Equation (3) can be viewed as a representation of the prior knowledge and experiences of human manual control. The schematic graph of the Desired Acceleration is shown in Figure 2.
Figure 2. Desired acceleration.
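The exact analytic forms of (2) and (3) are given in the original figures; the sketch below is one plausible tanh-based realization of both sigmoid-type quantities. The argument names e_max (maximum Tow Error), a_max (maximum Desired Acceleration), and the sign convention are assumptions.

```python
import numpy as np

def tow_error(e, e_max):
    """Sigmoid-type Tow Error of definition (2) (exact shape assumed here):
    follows the raw error for small errors and saturates smoothly at +/- e_max."""
    return e_max * np.tanh(e / e_max)

def desired_acceleration(e, e_dot, kp, kd, a_max):
    """Sigmoid-type Desired Acceleration of definition (3) (shape and sign
    convention assumed): a bounded, cooperative function of the error and its rate."""
    return a_max * np.tanh((kp * e + kd * e_dot) / a_max)
```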
PD-Type Human-Simulating Intelligent Control Law
From Equation (3), if the controller output is regarded as the force acting on the moving object, then using Newton’s laws of motion the controller output u can be given by the following equation:
u = km a   (4)
where a denotes the Desired Acceleration of (3) and km is a positive constant: in motion control it is the mass or inertia, and in process control it is a mass-like constant. Combining Equations (2) and (3) into Equation (4) results in the PD-type Human-Simulating intelligent control law, as follows:
(5)
Discussion 2. From Figure 2, Equation (2), and control law (5), it is easy to see that: 1) when the error is large and e and ė have the same sign, control law (5) produces the largest (and coarse) control action u to restrain the error from increasing further; 2) when the error decreases and e and ė have opposite signs, u gradually decreases to limit the rate at which the error decreases, avoiding the system going out of control; 3) when the error is small and e and ė have opposite signs, u becomes a small or even inverted control action to prevent overshoot; 4) when e and ė both tend to zero, u tends to zero too. In other words, the control action u makes the error decrease along the desired law, and eventually the error, its rate, and the control action all tend to zero at the same time. This shows that the PD-type Human-Simulating intelligent control law (5) is in good agreement with the optimal procedure of manual operation.
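A minimal sketch of the PD-type law follows, reusing the tow_error and desired_acceleration sketches above; the exact composition in control law (5) is assumed (Tow Error substituted for the raw error inside the Desired-Acceleration sigmoid, scaled by km).

```python
def hsi_pd(e, e_dot, km, kp, kd, e_max, a_max):
    """PD-type human-simulating control action (control law (5), sketch only):
    the Tow Error replaces the raw error inside the Desired-Acceleration
    sigmoid, and the result is scaled by the mass-like constant km."""
    return km * desired_acceleration(tow_error(e, e_max), e_dot, kp, kd, a_max)
```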
PID-Type Human-Simulating Intelligent Control Law
It is obvious that the PD-type Human-Simulating intelligent control law described by Equation (5) is not capable of reducing the error to zero and making the error strictly change along the desired law when constant disturbances act on the controlled system. However, an experienced operator can automatically adjust the control output and produce an accurate control action to eliminate or reject all kinds of disturbances. This procedure can be described as follows: according to the current state of the controlled system and former control experiences, the human controller produces a control action; if the control action just counteracts the disturbance, he maintains it; if the control action is too large, he decreases it, and vice versa. In other words, by correcting the control action again and again, the operator eventually makes the error change along the desired law and stabilizes the error at zero or within an acceptable limit, and the control action then tends to a constant or changes along with the change of the disturbances. Based on the statements above, it is easy to see that: 1) simulating the control procedure by which human manual control eliminates or rejects all
kinds of disturbances is very challenging because the disturbances are often uncertain and time-varying; 2) human manual control is a procedure of automatically accumulating control experiences and self-adapting to the action of all kinds of disturbances and dynamic uncertainties by exploration. Integral control has exactly this kind of ability. So, for simulating the manual control behavior above, the integral action must be introduced into the control law, as follows:
(6)
(7) where ki is a positive constant.
Discussion 3. Compared with the PID-like control and general integral control laws reported in [1]–[11], the striking features of the control laws (6) and (7) are:
1) The prior knowledge and experiences of manual control can be incorporated into the controller easily;
2) When the error changes along the desired law, the integrator output remains constant; if the integral action is larger than the one needed, ė consequently increases, and the integrator output will instantly decrease, and vice versa. In contrast, the integrator output of PID-like control [1]–[6] continues to increase unless the error passes through zero, and for the integral control action to tend to a constant, the error usually needs to pass through zero repeatedly. Therefore, the intrinsic shortcoming of PID control can be removed in principle;
3) As stated above, the basic principle of the integrator in (6) and (7) is similar to the general integrator in [7]–[11], but the main difference is that the Tow Error is introduced into the integrator here. Therefore, the integral control action proposed here constrains the response rate, whereas the integral control action in [7]–[11] places no restrictions on the response rate, which can easily lead to instability since the response rate could be too rapid. Therefore, the two kinds of integrator and integral action in (6) and (7) are entirely new;
4) The integral control action is a compensation for the shortcoming of the PD control action, or an accumulation of the past PD control action. This method of adjusting the integral action can be viewed as an accumulation of past control experiences, or learning from past control experiences;
5) Combining the demonstration above with Discussion 2, it is easy to see that the control laws (6) and (7) strictly constrain the motion of the system along the trajectories of the desired response.
All this shows that the control laws proposed here not only can easily incorporate prior knowledge and experiences of expert control into the controller but also can automatically acquire knowledge of control experiences from past control behavior to correct the control action online, and can therefore make the motion of the system identically track the desired response. Therefore, the control laws (6) and (7) should have a faster adaptive or self-learning ability, better flexibility, and stronger robustness, and can easily yield higher control performance. This is why the approach is called Human-Simulating Intelligent PID control.
TUNING CONTROLLER
For practical applications in industrial control, it is very necessary and interesting to develop a simple and efficient method to tune the controller proposed here. So, the purpose of this section is to address the tuning rules of Human-Simulating Intelligent PID control. In the control laws (6) and (7), there are six parameters. The parameters km and the maximum of the Desired Acceleration have explicit physical meaning and are usually fixed constants, so they are easy to determine. In practice, if km or the maximum of the Desired Acceleration is uncertain and the biggest control action umax is known, their product as a whole can directly be taken as umax or an appropriate value relative to umax.
The maximum of the Tow Error is a characteristic variable which indicates that the controller should apply a braking action when the error rate is large and e is slightly below this maximum, in order to avoid big overshoot, oscillation, and instability; it can therefore be determined from the experience of human manual control or be measured directly. Hence, only three parameters, kp, kd, and ki, need to be determined. From the control law (5), it is easy to know that: 1) when the error is at its largest admissible value and the error rate is zero, the output of the sigmoid-type function should reach its maximum, from which kp can be obtained; 2) when e = 0 and the error rate reaches its acceptable maximum, the controller output should also reach its maximum, and kd can then be determined by the following formula:
(8)
in which the maximum of the acceptable speed of response appears; this maximum can be measured directly when the biggest control action acts on the controlled system. The purpose of introducing the integral action is to completely reject uncertain disturbances and dynamics of the controlled system and to yield zero steady-state error, so the integral action in the control laws is indispensable for most industrial control applications. This makes ki a very key parameter, whose tuning rules are very difficult, even impossible, to describe by an accurate formulation. In general, if ki is too small, the integral action increases the correction slowly, which easily results in a slower speed of response and a longer settling time; if ki is too large, the integral action increases the correction more rapidly, which could cause large overshoot and may even lead to instability. The optimum response is always achieved through some sort of compromise. In general, the settings above are usually close to the final values; however, a realistic expectation is that some tweaking of the parameters will be required to obtain high transient control performance. In practice, if the overshoot is large, which means that the speed of response is too rapid or the maximum of the Tow Error is too small, we can fix kd and increase the maximum of the Tow Error, or simultaneously increase both it and kd, and vice versa.
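The exact formula (8) is not reproduced in the extracted text; the helper below is only a heuristic sketch of the tuning rules just described, under the assumed tanh realization of the sigmoid used in the earlier sketches. All names and the saturation threshold are assumptions.

```python
def tune_hsi_pid(u_max, km, e_max, edot_max, sat=3.0):
    """Heuristic tuning sketch: the maximum Desired Acceleration is fixed by the
    biggest control action, kp saturates the sigmoid when the error reaches
    e_max (with zero error rate), and kd saturates it when e = 0 and the error
    rate reaches edot_max. 'sat' is the argument at which tanh is treated as
    saturated (tanh(3) ~ 0.995)."""
    a_max = u_max / km
    kp = sat * a_max / e_max
    kd = sat * a_max / edot_max
    return a_max, kp, kd

# Example: a drive limited to 10 N of force moving a 2 kg load
print(tune_hsi_pid(u_max=10.0, km=2.0, e_max=0.5, edot_max=1.0))
```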
EXAMPLE AND SIMULATION
To illustrate the effect of the control laws, the two-link manipulator shown in Figure 3 is considered. Its dynamics take the standard rigid two-link manipulator form, with inertia, Coriolis/centrifugal, and gravity terms driven by the joint torques.
Figure 3. The two link robot manipulators.
The normal parameter values of the system are specified, and the desired (set point) positions are piecewise-constant commands that change at prescribed times during the run.
The hyperbolic tangent function is used as the sigmoid function, and the parameters of control law (6) are set to fixed values.
Simulations with a sampling period of 1 ms are implemented. Under the normal and perturbed parameters, the motion trajectories of Link 1 and Link 2 are shown in Figure 4 and Figure 5, respectively. The dotted lines are the simulation results under normal parameters. The solid lines are the simulation results under perturbed parameters, that is, m2 = 2 kg is replaced by m2 = 7 kg partway through the run, corresponding to a moving payload of 5 kg. From the simulation results, it is easy to see that: 1) the optimum response can be achieved by a single set of control parameters over the whole domain of interest, even when the payload is changed abruptly; 2) the motion trajectories shown in Figure 4 and Figure 5 are almost completely identical, which illustrates that the motion of the system can identically track the desired response, whether or not the controlled system has strong nonlinearity and dynamic uncertainties, even under uncertain, time-varying, and strong disturbances. All this demonstrates that: 1) the HSI-PID controller not only removes the intrinsic shortcomings of PID control but also has a faster self-learning ability, better flexibility, and stronger robustness with respect to uncertain nonlinear systems; 2) HSI-PID control not only can effectively deal with uncertain nonlinear systems but is also a powerful and practical tool for solving the control design problem of dynamics with nonlinear and uncertain actions.
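The manipulator dynamics and parameter values are omitted from the extracted text, so the sketch below only shows the structure of the robustness test just described (abrupt payload change partway through the run); both `controller` and `two_link_dynamics` are hypothetical callables that a user would supply, not functions from the original paper.

```python
import numpy as np

def run_payload_test(controller, two_link_dynamics, t_switch=2.0, dt=1e-3, t_end=4.0):
    """Structure of the robustness experiment only: the same controller is run
    while the link-2 mass parameter jumps from 2 kg to 7 kg at t_switch.
    'two_link_dynamics' must return the joint accelerations for given state,
    torque, and m2; 'controller' returns the joint torque command."""
    q = np.zeros(2)      # joint positions
    qd = np.zeros(2)     # joint velocities
    history = []
    for k in range(int(t_end / dt)):
        t = k * dt
        m2 = 2.0 if t < t_switch else 7.0          # abrupt 5 kg payload change
        tau = controller(q, qd, t)                 # HSI-PID torque command
        qdd = two_link_dynamics(q, qd, tau, m2)    # plant response
        qd = qd + qdd * dt                         # explicit Euler integration
        q = q + qd * dt
        history.append((t, q.copy()))
    return history
```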
Figure 4. Simulation results of Link 1.
Figure 5. Simulation results of Link 2.
CONCLUSIONS
In this paper, a novel Human-Simulating Intelligent PID control law is developed by simulating the intelligent behavior of human manual control. The main contributions are as follows: 1) the formulation used to represent prior knowledge and experiences of manual control is very simple and practical, and is therefore easy to use in controller design; 2) simple and practical tuning rules with explicit physical meaning are introduced, making it easy to tune a high-performance controller; 3) two entirely new and simple kinds of integrator and integral action are proposed, which removes the intrinsic shortcomings of PID control in principle; 4) the control laws proposed here not only can easily incorporate prior knowledge and experiences of expert control into the controller but also can automatically acquire knowledge of control experiences from past control behavior to correct the control action online, and can therefore make the motion of the system identically track the desired response. Moreover, the simulation results verify these conclusions. For HSI-PID control, this paper is only a starting point, and further research is needed to make progress in this area; for example, the design theory to ensure system stability has not yet been developed.
REFERENCES
1. Liu, J.K. (2004) Advanced PID Control MATLAB Simulation. Publishing House of Electronics Industry, Beijing.
2. Knospe, C.R. (2006) PID Control. IEEE Control Systems Magazine, 26, 30-31. https://doi.org/10.1109/MCS.2006.1580151
3. Hu, B.G. (2006) A Study on Nonlinear PID Controllers—Proportional Component Approach. Acta Automatica Sinica, 32, 219-227.
4. Arimoto, S. (1994) A Class of Quasi-Natural Potentials and Hyper-Stable PID Servo-Loops for Nonlinear Robotic System. Transactions of the Society of Instrument and Control Engineers, 30, 1005-1012. https://doi.org/10.9746/sicetr1965.30.1005
5. Kelly, R. (1998) Global Positioning of Robotic Manipulators via PD Control plus a Class of Nonlinear Integral Actions. IEEE Transactions on Automatic Control, 43, 934-938. https://doi.org/10.1109/9.701091
6. Ortega, R., Loria, A. and Kelly, R. (1995) A Semiglobally Stable Output Feedback PI2D Regulator for Robot Manipulators. IEEE Transactions on Automatic Control, 40, 1432-1436. https://doi.org/10.1109/9.402235
7. Liu, B.S. and Tian, B.L. (2009) General Integral Control. Proceedings of the International Conference on Advanced Computer Control, Singapore, 22-24 January 2009, 136-143. https://doi.org/10.1109/icacc.2009.30
8. Liu, B.S., Luo, X.Q. and Li, J.H. (2013) General Concave Integral Control. Intelligent Control and Automation, 4, 356-361. https://doi.org/10.4236/ica.2013.44042
9. Liu, B.S., Luo, X.Q. and Li, J.H. (2014) General Convex Integral Control. International Journal of Automation and Computing, 11, 565-570. https://doi.org/10.1007/s11633-014-0813-6
10. Liu, B.S. (2014) Constructive General Bounded Integral Control. Intelligent Control and Automation, 5, 146-155. https://doi.org/10.4236/ica.2014.53017
11. Liu, B.S. (2014) On the Generalization of Integrator and Integral Control Action. International Journal of Modern Nonlinear Theory and Application, 3, 44-52. https://doi.org/10.4236/ijmnta.2014.32007
12. Peng, L. (1995) Neural Network Control of the Intelligent Underwater Vehicle. Ocean Engineering, 13, 38-46.
Chapter 15
Intelligent Situational Control of Small Turbojet Engines
Rudolf Andoga, Ladislav Főző, Jozef Judičák, Róbert Bréda, Stanislav Szabo, Róbert Rozenberg, and Milan Džunda Faculty of Aeronautics, Technical University of Košice, Rampová 7, 041 21 Košice, Slovakia
ABSTRACT Improvements in reliability, safety, and operational efficiency of aeroengines can be brought in a cost-effective way using advanced control concepts, thus requiring only software updates of their digital control systems. The article presents a comprehensive approach in modular control system design suitable for small gas turbine engines. The control system is based on the methodology of situational control; this means control of the engine under all operational situations including atypical ones, also integrating a diagnostic system, which is usually a separate module. The resulting concept has been evaluated in real-world laboratory conditions using a unique design of small
Citation: Rudolf Andoga, Ladislav Főző, Jozef Judičák, Róbert Bréda, Stanislav Szabo, Róbert Rozenberg, Milan Džunda, “Intelligent Situational Control of Small Turbojet Engines”, International Journal of Aerospace Engineering, vol. 2018, Article ID 8328792, 16 pages, 2018. https://doi.org/10.1155/2018/8328792. Copyright: © 2018 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
turbojet engine, the iSTC-21v, as well as a state-of-the-art small turbojet engine, the TJ-100. Our results show that such an advanced control system can bring the operational quality of an engine with an old turbocompressor core, the iSTC-21v, on par with state-of-the-art engines.
Dedicated to the memory of Ladislav Madarász
INTRODUCTION
From the cybernetic point of view, aeroengines are nonlinear systems with complex dynamics operating in a stochastic environment over a broad range of conditions, as principally described in traditional handbooks [1, 2]. Energy efficiency, ecological efficiency, and safety are today the key factors in their development. Small gas turbine aeroengines are specifically characterized by compact dimensions, weights from 0.5 to 50 kilograms, and relatively high static thrust outputs of up to 1500 N, producing very high power-to-weight ratios as defined in [3]. Such engines represent a great potential for commercial applications in different areas of aviation, for propulsion of drones and small UAVs as described in [4], as well as small airplanes or helicopters [5]. Because of their compact dimensions and cost-effective operation and production, these engines can serve as experimental testbeds for research purposes, as can be seen in the work of Benini and Giacometti [3], where a small turbojet engine with a static thrust of 200 N has been designed. The idea of utilizing small turbojet engines for rapid prototyping of algorithms and construction has also been pursued by other authors: Pecinka and Jilek designed a cost-effective test cell for small turbojet engines [6], and the application of small turbojet engines in education is described in [7, 8]. Small turbojet engines have also been used as testbeds in research on the application of alternative fuels for gas turbine engines, using synthetic fuels as well as biofuels, in the works of Badami et al. [9, 10], and in the optimization of jet fuel combustion processes described in [11]. Small turbojet engines can also be used for aeroengine component design and optimization, as illustrated by the optimization of turbine blades [12] or centrifugal compressors [13], and small gas turbine engines have also been utilized in general gas path optimization described in [14, 15]. This research review shows that small gas turbine engines are a rapidly developing technology, useful not only for propulsion of different aerial vehicles but also as testbeds for rapid prototyping of components for normal-sized gas turbine engines.
In order to improve operational efficiency of gas turbine engines, constructional design changes can be done, like compressor redesign [13], combustion chamber material and design improvements, fuel nozzle redesign [11], or turbine blade redesign [12]. The other approach is to increase efficiency through design and implementation of progressive control algorithms utilizing digital control systems with a self-tuning control system as designed by Adibhatla and Lewis [16]. This allows improving operation of the controlled engine only through software algorithm updates, which may be a cost-effective solution especially in the area of unmanned systems, where certification is not as complicated as in general aviation legislation. Based on the previous state-of-the-art review, it can be concluded that utilizing small gas turbine engines for prototyping of new control methodologies and approaches is a new area, which has not been thoroughly explored and has a great potential even for applications in gas turbine engines used in civil general aviation. Gas turbine engines currently used in general aviation are mainly controlled using digital engine control systems with full control authority— FADEC (full authority digital engine control), utilizing certified standard algorithms for handling the engine’s operational states as well as managing its auxiliary systems and thrust as is systematically described in the textbooks [17, 18]. Algorithms, which are certified and commonly used in control of gas turbine engines, are based on the proven methodology of PID control [17] as well as scheduling strategies for higher level controllers like power management [18]. On top of that, special strategies like limiters and auxiliary systems are used to protect the engine from departing its operational envelope [19]. Transient states of the engine are handled by acceleration/deceleration min/max schedulers [19], typical architecture of the baseline digital engine control using engine protection logic being shown in the recent work of Connolly et al. in [20]. Diagnostics and health management are usually running as a separate process influencing operation and control of the engine using limiters [18, 19]. Life-extending measures like employment of the N_dot control methodology can be taken as progressive [21]. Utilization of digital control systems having the engine’s control algorithms and laws running as software allows improvement of its efficiency only by modification of those algorithms or updating the control laws. This has been pursued by many authors in the recent decades bringing advances in the field of control theory and computational intelligence into
control systems of gas turbine engines in order to increase their efficiency and reliability without the need of structural optimizations of the engine’s components as comprehensively described in a survey of intelligent technologies for application in gas turbine engines [21, 22]. An approach, which is quite often used in advanced concepts of gas turbine engine control, is the use of the methodology of adaptive control [23, 24], using specialized model-based algorithms to change or compute coefficients of PID controllers according to environmental or model parameters as presented for example in [16]; adaptive control methodology can also utilize engine performance and health computational models to adapt controllers as shown in the conceptual designs described in [25] and, on theoretical level, analyzed in [26]. These works are providing a link between engine diagnostic and control systems, while also using methodologies of computational intelligence (ARMAX model) as for example applied by Mu et al. in [27]. Other approaches which have been taken in advanced control of gas turbine engines were robust, LQ, and optimal control. These methodologies have been already found to be beneficial in solution of complex control problems as illustrated for example in [28, 29]. They were successfully applied in the envisioned engine control systems, the robust control algorithm being described in [30], and LQ control algorithms applied in gas turbine engine control [31, 32]. Even more advanced methodologies like model predictive control [33] or hybrid fuzzy-genetic algorithms have been proposed in adaptation of gas turbine engine controllers as described in [34]. The works from the area of adaptive control link control algorithms to diagnostic algorithms or engine health evaluation algorithms as described in [25, 26]. Digital diagnostic systems are directly connected to safety and reliability of any control system operation but are however often investigated as separate systems utilizing progressive methodologies and algorithms like support vector machines and decision trees as designed in [35] or application of neural networks in engine fault detection as described in [36] connecting the diagnostic system to the engine’s control system. Fuzzy inference rulebased diagnostic combined with neural networks system has been also proposed for application in fault detection system of gas turbine engines in [37]. These works illustrate that by application of progressive control methodologies, efficiency, reliability, and safety of operation of gas turbine engines can be improved. However, most of them base their results and conclusions solely on simulations like [19, 27, 30, 34] or present only
conceptual designs like [21, 22, 25]. Application of methodologies from the field of computational intelligence can also be seen together with interconnection of diagnostic and control systems [21, 22, 27], which can further enhance the control system’s reliability and efficiency. The aim of the research presented in this paper is to design a highly integrated control and diagnostic system suitable for control of gas turbine engines, which is modular and able to combine different progressive as well as classical control methodologies in a unified architecture, thus increasing efficiency of the complete system. Emphasis in this design is put on strong integration of the control system with diagnostics and the ability to control the engine also during atypical operational states using the most efficient controller in the current situation. The designed system is presented as a framework architecture using the methodology of situational control modified specifically for digital control of gas turbine engines. Contrary to most results presented in research papers, the aim is to prove the operability of the framework control system’s design in real-world laboratory conditions utilizing a small turbojet engine iSTC-21v with static thrust of 500 N based on an old turbostarter TS-21. The resulting prototype architecture of the control system is aimed to bring the efficiency of this old turbocompressor engine design to modern standards by application of the designed situational control framework. This framework is aimed to be specifically applied on small turbojet engines; however, it is expected to be also applicable on normal sized gas turbine engines, using the small engine as a demonstrator of the approach.
SITUATIONAL CONTROL METHODOLOGY FRAMEWORK DESIGN It is very difficult to design a single controller, which will be able to control a complex dynamic nonlinear system operating in a stochastic environment with high quality in all its operational states. Progressive methodologies like robust control [30], linear quadratic control [31, 32], and model predictive and fuzzy control [33, 34] are able to produce controllers robust enough to cover a large spectrum of states and uncertainties; they are however often computationally too complex as shown in the references. Another approach, which has already been often used in solution of control problems, is to design simpler specific controllers for specific operational states of the investigated dynamic system for example in an application using different
controllers for gas turbine generator and power system of aeropropulsion system as described in [38]. In aviation, this approach is widely used in flight control systems with a broad spectrum of applicable algorithms [39] combined in modern avionic digital control systems [40] as well as engine control system switching control algorithms for optimal dynamic characteristics [41]. In flight control, the control laws are different for different flight phases like ground, takeoff, flight, and landing [40]. A similar pattern can be found in aeroengine control algorithms—the oldest approach is the application of acceleration schedulers, which are specialized controllers to handle acceleration as a special control strategy as shown in the traditional engine control concept [21]. Setpoint/trim controllers are then used to keep the engine at a stable operating speed or thrust [17, 21]. It is a logical step in solution of control problems being able to design simpler and more specialized controllers for certain operational states as defined in [21] having different control loops for limiting, governing, and acceleration/deceleration. These control approaches are however not integrated; individual controllers are not interacting and are switched only by the use of minimum or maximum selection logic [21, 22] or using energybased switching as shown in [41]. On the other hand, situational control represents an interconnected controllers’ framework usable in control of complex dynamic systems under all operational states, emphasizing control in the atypical ones as defined by Madarász et al. in [42], not just using limiters in order to protect the engine’s envelope but allow active control during atypical operational state, like high angle of attack flight, engine overheat conditions, and compressor stalls. The basic foundation design of a situational control system suitable for general application on turbojet engines has been developed in [42] and expanded in [43]; the resulting basic control concept is shown in Figure 1. Analyzers process the measured data describing the operational state of the system and are defined as input (ANX), output (ANY), state (ANZ), and control (ANR).
Figure 1. The basic concept of the situational control system [42, 43].
All operational states of the controlled complex system are then decomposed into n situational frames, which represent specific groups of operational states. In case of a gas turbine engine, these can be states like start-up, acceleration, and stable operation. The situational classifier is acting here as a decision-making element, selecting controllers in an intelligent manner to handle certain situational frames with the highest control efficiency. This high-level modular concept represents the design base for the situational control system, which can be further modified for application in gas turbine aeroengines with possible combination of any control algorithms nowadays used or proposed for gas turbine engines. The concept of situational control methodology designed by the authors also creates a platform for strong integration of the engine’s diagnostic system into its control this integration being described in [43]. Development of the diagnostic system and its integration into the situational control system is further described in [44]. On the basis of these previous development iterations of the situational control system as presented in [45], a new complex general architecture has been developed, which is suitable for turbojet, turboshaft, and turboprop engines presented in Figure 2. The new core part of the system is the intelligent supervisory element, which largely expands functions of the situational classifier replacing the one designed in [45] and previous publications [42, 43]. The resulting design is original and based on practical experiments with the iSTC-21v engine in laboratory conditions. Intelligent supervisory element is integrating diagnostic module, selecting individual
controllers, and computes command signals for the individual situational controllers Ci. The diagnostic module generates disturbance indications, thus aiding situational classification, and validates the selected parameters of the engine and its environment; these data are then used by the individual controllers as well as by the intelligent supervisory element. A description of the designed diagnostic module can be found in [44, 45].
Figure 2. The designed situational control system for gas turbine aeroengines.
The newly designed intelligent supervisory element has to solve the following tasks in the proposed expanded implementation shown in Figure 3:
• Power management: setting the optimal command values for speed, exhaust nozzle diameter, propeller pitch, or shaft load
• Situational classifier: classification of the engine's operational state into a situational frame
• Situational selector: an algorithm securing fluid switching between individual controllers and generating controller selection gating signals
• Adaptive limiters: adaptive limits in order to keep the engine operating within a changing operational envelope for increased safety
Figure 3. Intelligent supervisory element.
The main task of the power management is to calculate the optimal commands for the engine's main state parameters. These are commands for the controlled variables, such as the turbocompressor setpoint defined by its shaft speed, or the propeller setpoint defined by its speed or blade angle of attack. The block also contains a situational classifier, which finds the situational frame (or mode) in which the engine currently operates. This indication is then sent to the situational selector, which secures fluent transitions between the switched controllers on the lower level and assigns the corresponding controller to the actual situational frame. The composition of the system, including the lower level controllers, is shown in Figure 4.
Figure 4. Connection of the intelligent supervisory element with lower level controllers.
Safety is maintained by operating the engine within its operational envelope using a set of adaptive limiters; the commanded parameter computed by the lower level controllers is compared to the output of the adaptive limiters, and a minimum or maximum is selected (a minimal sketch of this selection logic follows the design steps below). Adaptivity of the limiters relies on the principle that the maximal shaft speed or exhaust gas temperature of the engine can be adjusted according to flight or environmental conditions. The expanded situational control system developed in Figures 2, 3, and 4 is, as a framework architecture, the foundation of an intelligent FADEC, or i-FADEC, to be implemented and tested on a small turbojet engine. In order to design similar situational control frameworks and to define their elements, the following design steps are proposed:
• Selection of operational parameters for individual data analyzers
• Decomposition of all operational states of the system into situational frames
• Selection of the methodologies used in the subsystems of the intelligent supervisory element
• Selection of the methodologies for the diagnostic and backup system
These steps will be taken in order to use this framework for the design of a situational control system for the small turbojet engine iSTC-21v.
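Returning to the adaptive limiters described above, the following minimal Python sketch shows how a commanded fuel flow could be clipped by limits derived from ambient conditions. The parameter names, the limit schedules, the gains, and all numerical values are illustrative assumptions, not the authors' implementation.

```python
def adaptive_limits(ambient_temp_c: float, altitude_m: float) -> dict:
    """Illustrative adaptive limits; the scaling rules are assumptions."""
    # Assume the allowed maximum shaft speed drops slightly with ambient temperature.
    n_max = 52_000.0 - 100.0 * max(0.0, ambient_temp_c - 15.0)   # rpm
    # Assume the exhaust gas temperature limit drops with altitude.
    egt_max = 700.0 - 0.005 * altitude_m                          # deg C
    return {"n_max": n_max, "egt_max": egt_max}


def limit_command(ff_cmd: float, n_act: float, egt_act: float, limits: dict,
                  k_n: float = 1e-5, k_t: float = 1e-3) -> float:
    """Minimum selection between the controller command and limiter outputs.

    Each limiter produces its own fuel-flow ceiling from the margin to its
    limit (simple proportional laws with assumed gains k_n and k_t); the
    lowest value wins, mirroring the minimum-selection logic described above.
    """
    ff_from_speed = ff_cmd + k_n * (limits["n_max"] - n_act)
    ff_from_egt = ff_cmd + k_t * (limits["egt_max"] - egt_act)
    return min(ff_cmd, ff_from_speed, ff_from_egt)


if __name__ == "__main__":
    lim = adaptive_limits(ambient_temp_c=30.0, altitude_m=0.0)
    print(limit_command(ff_cmd=1.4, n_act=50_500.0, egt_act=690.0, limits=lim))
```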
A SMALL TURBOJET ENGINE: AN EXPERIMENTAL OBJECT
Small turbojet engines present an ideal platform for development and testing of advanced control algorithms for gas turbine engines, as mentioned in the introduction. The experimental small turbojet engine iSTC-21v has been developed from the TS-21 turbostarter, a turboshaft engine used for starting up full-sized aircraft engines in legacy aircraft powered by the Lyulka AL-21F and Tumansky R-29; its characteristics are described in [46]. It is a single-spool, single-stream engine with a radial compressor and a single-stage uncooled turbine of old design, in a standard configuration of a small gas turbine engine as described, for example, in [2, 3]. It has been modified by the authors, who implemented a digital control system with direct control of the fuel flow supply using a BLDC oil/fuel pump as an actuator, in an arrangement similar to the most modern small
turbine engines such as the TJ-100, together with a digital data acquisition system [5, 46]. Moreover, the engine was expanded and redesigned with a digitally controlled variable exhaust nozzle, which is a unique design in this class of turbojet engines [46]. The engine on a test bench, as used in laboratory conditions, is shown in Figure 5. It has to be noted that its turbocompressor core components have an unknown history, as the engine was salvaged from a non-airworthy aircraft. Using auxiliary power units or turbostarters from non-airworthy aircraft is a cost-effective solution for testing and design of progressive algorithms for this class of engines.
Figure 5. iSTC-21v during operation in laboratory conditions.
The following basic engine parameters are measured at the basic sampling frequency of fs = 10 Hz using a National Instruments Compact DAQ system and are used in control of the engine in individual data analyzers, the measurement system being described in [45, 46]:
• Outside air temperature T0 (°C) and atmospheric pressure P0 (Atm)
• Total air temperature at the inlet of the radial compressor T1C (°C)
• Total air temperature at the outlet from the diffuser of the radial compressor T2C (°C)
• Total gas temperature at the inlet of the gas turbine T3C (°C)
• Total gas temperature at the outlet of the gas turbine T4C (°C)
• Total pressure of air at the outlet of the compressor P2C (Atm)
• Total pressure of gases at the inlet of the gas turbine P3C (Atm)
• Fuel flow supply FF (l/min)
• Thrust Th (kg)
• Shaft speed of the turbine/compressor n1 (rpm)
• Exhaust nozzle diameter A5 (mm)
The measured temperatures and pressures correspond to the standard measurement points [1, 2], shown for a small turbojet engine in Figure 6.
Figure 6. Measurement points on a small turbojet engine.
One run of the engine in laboratory conditions is shown in Figure 7. The graph shows operation of the engine with a stable fuel flow supply command set at FFcmd = {0.9 l/min} without the situational control system, using a traditional closed-loop PI controller to meter the fuel flow at the desired level through the electromechanical servo valve LUN 6743 [46]. This basically shows the state of the engine with only one available setpoint, preselected before starting the engine, without situational control or any other complex control algorithm. The results are illustrative, showing the operational parameters of the engine running at a shaft speed setpoint ncmd = {40,000 rpm}.
Figure 7. Dynamic engine data using the nonsituational PI control system.
Fluctuations and disturbances in shaft speed, temperatures, and pressures, as well as other dynamic parameters, can be seen because of the old construction and the technical state of the engine’s core components. The aim of the proposed situational control system is to considerably improve the engine’s operational qualitative characteristics by application of the situational control methodology.
SITUATIONAL CONTROL SYSTEM FOR A SMALL TURBOJET ENGINE
Situational Control System Architecture
In order to design a situational control system with an integrated backup diagnostic module, the general framework shown and designed in Figure 2 has been applied. The whole operation of the engine is systematically decomposed into the following macrosituational frames as envisioned in [44, 45]:
• Prestart control
• Start-up control
• Operational control
• Shutdown
Macrosituational frames are frames that incorporate several situational frames. The scheme in Figure 8 shows a set of eleven controllers specifically designed to control the iSTC-21v engine in the same number of situational frames, expanding and reevaluating the situational frames
designed in [44]. The resulting architecture of the control system depicted in Figure 8 is highly modular, although specifically designed for a small turbojet engine. Modularity of the control system is one of its key design points; controller elements and other algorithms can be easily added without disrupting the functionality of other subsystems. This design will be further expanded by implementation of the exhaust nozzle controller and of thrust management algorithms.
Figure 8. The situational control system of the iSTC-21v engine.
Control Strategies and Situational Decomposition
In the design of the control system for a small turbojet engine with a fixed exhaust nozzle, only the turbocompressor controllers block (see Figure 2 and Figure 8) will be defined. The performed situational decomposition consists of situational frames designated as Si (i = 1,…,n; n is the number of situational frames) with the corresponding controllers designated as Cj (j = 1,…,n). If each situational frame has a controller directly assigned to it, then i = j and the controller can be designated as Ci. The situational frames are organized in a horizontal decomposition and cover the following situations in control of the small turbojet engine iSTC-21v. The situations S7, S8, S9, and S10 represent atypical operational conditions with specialized control strategies to handle them and are shown in grey in Figure 8 as well as in Table 1. All feedback and command signals are
transferred through the diagnostic block, which validates them; in case of any sensor failure, it is able to replace the sensor's value with a computed synthetic value and can also trigger atypical situations for the situational selector block [44, 45].

Table 1. Definition of situational frames and the corresponding controllers.

Situation | Strategy | Controller
S1 | Prestart control | C1
S2 | Launch control | C2
S3 | Ignition control | C3
S4 | Acceleration/deceleration | C4
S5 | Stable operation of the engine | C5
S6 | Idle control | C6
S7 | Compressor stall | C7
S8 | Turbocompressor overspeed | C8
S9 | Turbine overheat | C9
S10 | Unspecific degraded mode of operation | C10
S11 | Engine shutdown | C11
The following standard operational situational frames and the corresponding control strategies have been implemented and tested so far in order to demonstrate the functionality of the envisioned design:
• Prestart control strategy C1: the control system performs a check of all aggregates and powers up all corresponding subsystems needed for launch [44, 45]
• Launch control C2: preignition control, spinning the electric starter up to 12,000 rpm, opening the fuel valve, and energizing the spark plugs [44, 45]
• Postignition control C3: control of the start-up after ignition, where a feedback adaptive fuzzy controller is used to meter the fuel into the combustion chamber in order to minimize the EGT peak during start-up [43]
• Restricted acceleration/deceleration C4: a PID controller is used to handle the fastest possible acceleration of the engine without exceeding its operational envelope
• Stable operation of the engine C5: a specialized PID controller is used for handling the constant speed operation of the engine
• Idle control C6: precise fuel metering near idle speeds of the engine, where flameout might happen
• Engine shutdown C11: disabling the fuel flow supply and closing the electromagnetic fuel and oil valves [45]
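As an illustration of how such a frame-to-strategy mapping could be organized in software, the sketch below registers each control strategy as a callable keyed by its situational frame; the function names, the return values, and the simple dictionary dispatch are illustrative assumptions rather than the authors' implementation.

```python
from typing import Callable, Dict

# Each strategy maps the current measurements to a fuel-flow command (l/min).
Strategy = Callable[[dict], float]

def prestart_c1(meas: dict) -> float:
    return 0.0                      # no fuel before launch

def acceleration_c4(meas: dict) -> float:
    # Placeholder for the restricted acceleration PID described in the text.
    return min(1.6, meas.get("ff_request", 0.75))

def idle_c6(meas: dict) -> float:
    return 0.75                     # lowest preset fuel flow for idle

CONTROLLERS: Dict[str, Strategy] = {
    "S1": prestart_c1,
    "S4": acceleration_c4,
    "S6": idle_c6,
    # The remaining frames S2, S3, S5, and S7-S11 would be registered the same way.
}

def dispatch(frame: str, meas: dict) -> float:
    """Select the controller assigned to the active situational frame."""
    return CONTROLLERS[frame](meas)

print(dispatch("S6", {}))           # -> 0.75
```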
Intelligent Supervisory Element
This subsystem is the core element of the complex situational control system; it acts as a supervisory control system as well as a decision-making element. It is designed to solve the following tasks:
• Compute commands for the engine's speed and exhaust nozzle diameter in case of optimal thrust control.
• Evaluate the current state of the engine using all observed and measured parameters and perform situational classification.
• Create switching signals for fluent transitions between controllers and assign controllers to the corresponding situational frames (performed by the situational selector).
• Limit the computed commands and actions for certain control strategies (performed by the block of adaptive limiters).
In order for the concept of the situational control system to work, situational classification and the interconnection of control strategies to situational frames are the key factors; therefore, these will be addressed further. The whole operational space of the engine can be described by the vector of parameters defined as Os = [E, S, O, I, C], which are the inputs of the situational classifier and represent the following parameters:
• E is the set of environmental parameters: T0 and P0
• S is the set of state parameters: T1C, T2C, T3C, T4C, P2C, P3C, n1
• O is the set of output parameters: Th, EPR
• I is the set of input control parameters: FF, A5
• C is the set of command parameters: n1_cmd, Th_cmd
The purpose of the situational classifier is to transform this multidimensional vector of parameters into a signal which will indicate a
situational frame as its output. An ith situational frame Si either occurs or not; using Boolean logic, this means Si = {0; 1}. Fuzzy logic can, however, also be used, where the indication of the ith situational frame is represented by a value from the closed interval Si = ⟨0; 1⟩. The resulting output of the situational classifier can be formalized as a vector of indicated situational frames: Scl = [S1, S2, S3, … , Si, … , Sn], i = 1, … , n, where n is the number of situational frames. The situational classifier is then a function which transforms the multidimensional vector of the engine's parameters into the output classifier signal at any given time:

Scl(t) = f(Os(t))    (1)

Any decision-making algorithm can be used in the role of the situational classifier, be it a rule-based system or a neural network. In the case of situational control of a small turbojet engine, a suitable and computationally less intensive approach was taken in the form of a combination of a rule-based expert system for indication of the start-up and shutdown situations S1, S2, S3, and S11 and a fuzzy inference system for indication of the remaining operational situations S4, … , S10.
This combination is necessary in order to secure fluent transitions between the controllers during operational feedback control of the engine; Boolean indication is sufficient for the start-up and shutdown operational states, where direct feedback control signals are not used. The start-up and shutdown situational frames with the corresponding control strategies are described in [44, 45]. To further enhance the quality of switching of the individual controllers, the output of the situational classifier is transformed through time-based first-order differential equations by the situational selector, whose output is designated as Ssel(i). The selector transforms the indication-gating signal for the ith situational frame according to the following differential equation in time t:
T · dSsel(i)(t)/dt + Ssel(i)(t) = Si(t)    (2)
where T is an arbitrary time constant. The time constant defines the speed of the situational frame switch; for a fast system like a small turbojet, values from the interval T = ⟨0.1, 0.7⟩ are suitable. The resulting architecture of the situational classifier and selector systems suitable for small turbojet engines is shown in Figure 9. The situational
frames correspond to the designed situational frames for the small turbojet engine iSTC-21v as described in the previous chapter.
Figure 9. The situational classifier and the situational selector for the iSTC-21v engine.
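A discrete-time sketch of the selector behaviour described by (2) is given below: each Boolean or fuzzy frame indication is low-pass filtered with time constant T so that the gating signals fade in and out instead of switching abruptly. The sampling period, the filter discretization, and the example indication sequence are illustrative assumptions.

```python
import numpy as np

def selector_gate(indications: np.ndarray, T: float = 0.3, dt: float = 0.1) -> np.ndarray:
    """First-order smoothing of a frame indication signal, a discretization of Eq. (2).

    indications: samples of S_i(t), each in [0, 1]
    T:           selector time constant (the text suggests roughly 0.1 to 0.7)
    dt:          sampling period (10 Hz assumed, matching the engine's data rate)
    """
    alpha = dt / (T + dt)                 # exponential-smoothing factor
    s_sel = np.zeros_like(indications, dtype=float)
    for k in range(1, len(indications)):
        s_sel[k] = s_sel[k - 1] + alpha * (indications[k] - s_sel[k - 1])
    return s_sel

# Frame S5 becomes active at sample 10 and inactive again at sample 30.
s5 = np.zeros(50)
s5[10:30] = 1.0
print(np.round(selector_gate(s5), 2))     # the gating signal ramps smoothly between 0 and 1
```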
The output of the situational selector is used to gate the individual situational controllers, thus fluently switching them on or off. The output of the ith controller can be expressed through its transfer function after the Laplace transformation:
FFi(s) = Ci(s) · ei(s)    (3)
where, in (3), FFi(s) is the fuel flow supply calculated by the ith situational controller, Ci(s) is the transfer function of the ith controller, and ei(s) is the control error expressed as the difference between the commanded and actual value of the controlled parameter. The final commanded fuel flow FFcmd for the digitally controlled fuel pump of the engine is then the sum of all fuel flows computed by the individual situational controllers, expressed as
FFcmd(s) = ∑_{i=1}^{n} FFi(s)    (4)
It needs to be noted here that both the input and the output of the controllers are multiplied by a signal from the situational selector in order to stop individual integrators at their inputs and to reset them at their outputs.
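The gating and summation expressed by (3) and (4), together with the integrator freezing mentioned above, could look roughly like the following sketch; the tiny PI controllers used here as stand-ins for the Ci blocks, and their gains and example values, are assumptions for illustration only.

```python
class GatedPI:
    """A PI controller whose input and output are scaled by a selector gate.

    Gating the input stops the integrator from winding up while the frame is
    inactive; gating the output removes the controller's contribution from the
    summed fuel-flow command, mirroring the scheme described in the text.
    """

    def __init__(self, kp: float, ki: float, dt: float = 0.1):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error: float, gate: float) -> float:
        gated_error = gate * error
        self.integral += gated_error * self.dt
        if gate == 0.0:
            self.integral = 0.0          # reset when the frame is fully inactive
        return gate * (self.kp * gated_error + self.ki * self.integral)


def fuel_flow_command(errors, gates, controllers) -> float:
    """Eq. (4): sum the gated outputs of all situational controllers."""
    return sum(c.step(e, g) for c, e, g in zip(controllers, errors, gates))


controllers = [GatedPI(0.002, 0.001), GatedPI(0.001, 0.0005)]   # e.g. stand-ins for C4 and C5
print(fuel_flow_command(errors=[1200.0, 300.0], gates=[0.8, 0.2],
                        controllers=controllers))
```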
Design of Individual Controllers
For operational control after ignition and start-up of the small turbojet engine, the controllers C4 (acceleration control), C5 (constant shaft speed control), and C6 (idle control) have been designed and tested. In order to compute the parameters of these regulators, linearized dynamic models of the iSTC-21v engine have been used, as well as nonlinear dynamic models used for testing and tuning, as described in [47]. The simplest of these is the idle regime controller designated as C6. The control law of this controller is defined according to (5), which represents a feedback PI controller operating in a fuel flow feedback loop, maintaining the fuel flow at the lowest preset possible value for the iSTC-21v engine:
(5)
where FFact(s) is the actual value of the current fuel flow, K and Ti are the coefficients of the PI controller. The idle fuel flow is set to FFidle = 0.75 l/min for zero speed and zero altitude conditions. The resulting designed controller has been computed by the Naslin method [48] using a dynamic transfer function model of the fuel supply system. The Naslin method provided the best control quality during tests of the fuel flow control feedback loop. The resulting controller has the following control law:
(6)
In operational control, the acceleration and constant speed controllers are the most important ones, as they are responsible for tracking of command signals [1, 2, 20]. Both controllers can also be designed as a single robust feedback controller [30, 31] or a fuel flow scheduler [19, 20]; however, the situational control approach allows designing them as two specialized feedback controllers aimed at improving the resulting control quality. The situational control system with intelligent switching can fluently switch between those controllers and even have them operate in a cooperative mode to further enhance control quality and remove any transients which could appear during a switch between the acceleration and constant speed controllers. The acceleration/deceleration controller C4 is defined as a feedback PID controller with the control law defined as follows:
(7)
where ncmd(s) represents the shaft speed of the engine commanded by the operator or by the higher level engine thrust management algorithm, and nact(s) represents the actual shaft speed. K, Ti, and Td are the coefficients of the PID controller. The fuel flow computed by the controller is constrained to the interval FF4(s) = ⟨0.75, 1.6⟩ l/min. An antiwindup algorithm is also implemented for the integrator of the controller. In order to tune the controller, the Ziegler-Nichols algorithm has been used [49], as the engine is a nonlinear system with complex dynamics and hysteresis. The individual Ci controllers have afterwards been validated using a nonlinear simulation model based on transfer functions obtained by means of experimental identification [47], with follow-up experimental tuning of the computed PID coefficients during operation of the engine in the region of ±5% around the computed design point. The resulting acceleration controller has the following transfer function:
(8)
The constant speed controller C5 has also been designed by means of the Ziegler-Nichols tuning rule, resulting in a different tuning of the PID controller corresponding to the different control strategy of constant speed hold, where the resulting fuel flow supply command is the sum of a direct setpoint of the fuel flow supply and the action of a less aggressive PID controller with its derivative gain equal to zero:
(9)
(10)
(11)
(12)
The setpoint fuel flow FFsetpoint is obtained from a direct linear inverse model of the engine, and the PI controller is tuned for a limited response in the interval FFPI = ⟨−0.15, 0.15⟩ l/min with antiwindup protection. The resulting controller's output with computed coefficients is defined as follows:
(13)
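A discrete-time sketch of the controller structure described for C4 and C5 (a PID with output clamping and a simple anti-windup scheme, plus a setpoint-feedforward-plus-limited-PI variant) is shown below. The gains, limits, and sampling period are placeholder assumptions; the published coefficients in (6)-(13) are not reproduced here.

```python
class ClampedPID:
    """PID with output limits and conditional-integration anti-windup."""

    def __init__(self, kp, ki, kd, out_min, out_max, dt=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float) -> float:
        derivative = (error - self.prev_error) / self.dt
        unclamped = (self.kp * error + self.ki * self.integral
                     + self.kd * derivative)
        # Integrate only if the output is not saturated in the error's direction.
        if not ((unclamped >= self.out_max and error > 0) or
                (unclamped <= self.out_min and error < 0)):
            self.integral += error * self.dt
        self.prev_error = error
        output = (self.kp * error + self.ki * self.integral
                  + self.kd * derivative)
        return max(self.out_min, min(self.out_max, output))


# C4-like acceleration controller: fuel flow limited to 0.75-1.6 l/min.
c4 = ClampedPID(kp=2e-4, ki=1e-4, kd=2e-5, out_min=0.75, out_max=1.6)

# C5-like constant-speed law: feedforward fuel-flow setpoint plus a limited PI correction.
c5_pi = ClampedPID(kp=1e-4, ki=5e-5, kd=0.0, out_min=-0.15, out_max=0.15)

def c5_command(ff_setpoint: float, error: float) -> float:
    return ff_setpoint + c5_pi.step(error)

print(c4.step(5000.0))                       # large speed error -> clamped command
print(c5_command(ff_setpoint=1.2, error=150.0))
```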
EXPERIMENTAL EVALUATION OF THE DESIGNED CONTROL SYSTEM
The proposed framework has been tested in laboratory experiments in a pilot testing program to verify that the situational control system with situational frame switching works as designed. In these experiments, the small turbojet engine iSTC-21v has been employed with its exhaust nozzle fixed [46]. These tests have been executed as a proof of concept of the designed control system and compared to experimental results measured with the TJ-100 engine in our laboratory, which represents a current modern production small turbojet engine with a fixed exhaust nozzle and a state-of-the-art FADEC control system [5]. The functionality of the situational control system is demonstrated in Figures 10 and 11, with the iSTC-21v operating from start-up to shutdown with all situational controllers employed. The figures show efficient control of the engine's shaft speed and exhaust gas temperature.
Figure 10. A single run of the iSTC-21v engine with the situational control system.
Figure 11. Exhaust gas temperature during operation of the situational control system.
In order to further experimentally evaluate the efficiency of the situational control system, the following experimental pilot testing set-up has been designed:
• The operational controllers are tested at the commanded speeds defining three operational points ncmd = {40,000 rpm; 45,000 rpm; 50,000 rpm}.
• Each operational point is held by a command for time T = 15 sec.
• The resulting acceleration, rise, and settling times of the situationally controlled iSTC-21v and the FADEC-controlled TJ-100 are compared.
• The resulting constant speed deviations of the situationally controlled iSTC-21v and the FADEC-controlled TJ-100 are compared.
Experimental results as measured in laboratory conditions are shown in Figures 12–14. The figures illustrate that the control systems of both engines have acceptable control quality, as there are no undamped oscillations or large overshoots. Figure 12 shows acceleration of the engine from the stable operating point ncmd = {40,000 rpm} to the operating point ncmd = {45,000 rpm}; the time plot depicts the dynamic characteristics of the situationally controlled iSTC-21v engine and the TJ-100 engine. The TJ-100 is slightly slower in acceleration and has a small overshoot and undershoot, while it is very good at maintaining a steady shaft speed. In comparison, the situational controller of the iSTC-21v engine is quite fluent in accelerations and decelerations, with slightly quicker rise and settling times.
Figure 12. Acceleration from 40,000 rpm to 45,000 rpm.
Figure 13. Acceleration from 40,000 rpm to 50,000 rpm.
Figure 14. The comparison of mean absolute error and standard deviations of the error during the steady state.
The second comparison, shown in Figure 13, depicts accelerations from the operating point ncmd = {40,000 rpm} to the operating point ncmd = {50,000 rpm}, which is near the maximum speed of the iSTC-21v engine. The situational control system employed on the iSTC-21v has very good acceleration characteristics here; its acceleration is considerably faster than that of the TJ-100, and both control systems are fluent; moreover, the settling characteristics of the iSTC-21v are qualitatively better here, as the controller does not have any overshoots. In order to quantify the differences between the control systems of both engines, mean absolute errors and standard deviations [50] of both engines have been compared during the steady state of their operation, with results depicted in Figure 14. The steady state is defined as the state where the deviation of the actual shaft speed from the setpoint is within the interval ⟨−200, 200⟩ rpm.
As can be seen in Figure 14, the iSTC-21v engine performs slightly worse at the stable operating points ncmd = {40,000 rpm} and ncmd = {45,000 rpm} as it has larger deviation from the commanded setpoint speed. However, it performs better at the operating point ncmd = {50,000 rpm} where it has lower average deviation from the setpoint speed. Standard deviations at these setpoints are nearly equal; this means that the controlled shaft speeds oscillate in a similar way. It can be concluded that both control systems are comparable in maintaining the steady operational point.
In order to compare and evaluate acceleration times, the rise and settling times of both engines in accelerations to the previously defined operating points ncmd = {45,000 rpm; 50,000 rpm} have been measured. The rise time is defined as the time it takes for the response to rise from 10% to 90% of the steady-state response. The settling time is defined as the time it takes for the shaft speed error to fall into the interval ⟨−200, 200⟩ rpm. The average acceleration times measured during the three performed accelerations shown in Figures 12 and 13 are presented in Table 2.

Table 2. Comparison of rise and settling times.

Rise/settling times | ncmd = 45,000 rpm | ncmd = 50,000 rpm
Rise time: iSTC-21v | 1.7 sec | 2.1 sec
Rise time: TJ-100 | 1.8 sec | 3.1 sec
Settling time: iSTC-21v | 4.5 sec | 6 sec
Settling time: TJ-100 | 5 sec | 8.5 sec
The table shows that the situational control system of the iSTC-21v engine is dynamically better than the acceleration control of the TJ-100 engine. This can also be attributed to the fact that the control system of the TJ-100 engine has to be more robust, as it is also tuned for flight conditions. In an overall evaluation, it can be stated that both control systems are comparable in quality; however, the iSTC-21v engine uses a turbocompressor core of old construction and in a worse technical state than the TJ-100 engine, which uses state-of-the-art technical and design solutions in its construction; moreover, the TJ-100 has only about 2 hours of runtime in laboratory conditions, so its technical state is flawless. In this regard, the results obtained with the intelligent situational control system employed on the iSTC-21v engine can be considered very positive. The control system of the iSTC-21v has to exert much more effort just to keep a stable operational point in order to compensate for the engine's old constructional and material faults. In order to illustrate this point, the situational control system is compared with the classical nonsituational control system in Figure 15, with the engine accelerating to the shaft speed ncmd = {40,000 rpm}.
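For reference, the comparison metrics used in this section (rise time, settling time, and the steady-state mean absolute error and standard deviation) could be computed from a logged shaft-speed trace roughly as in the sketch below; the 10 Hz sampling and the synthetic test trace are assumptions.

```python
import numpy as np

def step_metrics(t, n_act, n_start, n_cmd, band=200.0):
    """Rise time (10-90% of the commanded step), settling time (error within
    +/- band rpm), and steady-state mean absolute error / standard deviation."""
    span = n_cmd - n_start
    lo, hi = n_start + 0.1 * span, n_start + 0.9 * span
    rise_time = t[np.argmax(n_act >= hi)] - t[np.argmax(n_act >= lo)]

    err = n_act - n_cmd
    outside = np.where(np.abs(err) > band)[0]
    if len(outside) == 0:
        settling_time = 0.0
    elif outside[-1] + 1 < len(t):
        settling_time = t[outside[-1] + 1] - t[0]
    else:
        settling_time = float("nan")      # never settled within the record

    steady = err[np.abs(err) <= band]
    return rise_time, settling_time, float(np.mean(np.abs(steady))), float(np.std(steady))

# Synthetic 10 Hz trace: first-order rise from 40,000 rpm to 45,000 rpm plus noise.
t = np.arange(0.0, 15.0, 0.1)
n = 45_000.0 - 5_000.0 * np.exp(-t / 1.2) + np.random.normal(0.0, 50.0, t.size)
print(step_metrics(t, n, n_start=40_000.0, n_cmd=45_000.0))
```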
Figure 15. Comparison of the situational control system on the iSTC-21v with the nonsituational PI control system.
As can be seen in the figure, the iSTC-21v engine with the situational control system has much better characteristics during its start-up, with an exhaust gas temperature T4C lower by 200°C and with speeds and pressures free of oscillations, which makes its running a lot smoother.
CONCLUSIONS
The paper presents a comprehensive description of an approach which can be taken in the design of a complex intelligent control system suitable for small
turbojet engines. The system can be considered an evolution of FADEC systems and can be designated as an intelligent i-FADEC. Its main asset is its modularity, as each module can be further improved by utilizing better or more advanced algorithms, be it the algorithms of the situational classifier or improved individual controller modules. The main aim of the paper was to present the framework architecture as a working solution, the operability and efficiency of which has been demonstrated in real-world laboratory conditions with a small turbojet engine, not just as a simulation example. This aim has been fulfilled; moreover, the obtained control quality equals or even surpasses the control quality of the modern turbojet engine TJ-100, lowering the acceleration times of the engine by up to 30%. The designed control system needs to be taken as a proof of concept, and it is expected that a similar or even higher increase in control quality can be achieved using a state-of-the-art engine like the TJ-100, this being one of the next research aims. The efficiency of the control system can be further enhanced by follow-up design of different control strategies for individual situational states, utilization of methodologies from the areas of robust or LQ control, and a focus on control strategies during atypical modes of operation as well as cooperative control strategies with intelligent controller switching, where several cooperating controllers can be used to handle a certain situation. In conclusion, the proposed situational control system has to cope with an engine core from the TS-21 engine, which is a very old design from the last century; it is hard to control, being in a flawed technical condition, and the engine was not originally designed as a multiregime engine. It can be concluded that the designed intelligent FADEC situational control system was able to bring its dynamic characteristics to a level comparable to modern standards, which can be considered a substantial engineering and design success. The core control system is a real-time software implementation, and our results show that by application of progressive control methodologies, the efficiency and reliability of an old turbojet engine can be considerably improved, and that the methodology of situational control can be applied and operated successfully in real-world conditions.
NOMENCLATURE
A5: Exhaust nozzle diameter, mm
BLDC: Brushless DC motor
Ci: Transfer function of the ith controller
EGT: Exhaust gas temperature
EPR: Engine pressure ratio
FADEC: Full authority digital engine control
FF: Fuel flow, l/min
i-FADEC: Intelligent full authority digital engine control
iSTC-21v: Intelligent small turbocompressor engine-21 with variable exhaust nozzle
n1: Turbocompressor shaft speed, rpm
N_dot: Derivative of shaft speed stabilization algorithm
PID: Proportional integral derivative controller
P0: Atmospheric pressure, Atm
P2: Compressor outlet pressure, Atm
P3: Turbine inlet pressure, Atm
P4: Turbine outlet pressure, Atm
Scl: A vector of indicated situational frames
Ssel: Output of the situational selector
Si: ith situational frame
T0: Outside air temperature, °C
T2C: Air temperature at the outlet from the diffuser of the radial compressor, °C
T3C: Gas temperature in front of the gas turbine, °C
T4C: Total gas temperature aft of the gas turbine, °C
Th: Thrust, kg
TJ-100: Turbojet 100
TS-21: Turbostarter-21
ACKNOWLEDGMENTS
The work was supported by the project ESPOSA (Efficient Systems and Propulsion for Small Aircraft), funded by the European Commission in the Seventh Framework Programme under Grant Agreement no. ACP1-GA2011-284859-ESPOSA, and by the Slovak Research and Development Agency APVV under Grant Agreement no. DO7RP-0023-11.
REFERENCES
1. M. Boyce, Gas Turbine Engineering Handbook, Elsevier, Oxford, UK, Third edition, 2006.
2. Rolls-Royce, The Jet Engine, Wiley-Blackwell, London, UK, 5th edition, 2015.
3. E. Benini and S. Giacometti, “Design, manufacturing and operation of a small turbojet-engine for research purposes,” Applied Energy, vol. 84, no. 11, pp. 1102–1116, 2007.
4. B. C. Min, C. H. Cho, K. M. Choi, and D. H. Kim, “Development of a micro quad-rotor UAV for monitoring an indoor environment: advances in robotics,” in Advances in Robotics. FIRA 2009, J. H. Kim, S. S. Ge, P. Vadakkepat et al., Eds., vol. 5744 of Lecture Notes in Computer Science, pp. 262–271, Springer, Berlin, Heidelberg, 2009.
5. PBS Velká Bíteš Aircraft Engines website, http://www.pbsvb.com/customer-industries/aerospace/aircraft-engines.
6. J. Pecinka and A. Jilek, “Preliminary design of a low-cost mobile test cell for small gas turbine engines,” in Proceedings of ASME Turbo Expo 2012: Turbine Technical Conference and Exposition, pp. 471–478, Copenhagen, Denmark, 2012.
7. A. J. B. Jackson, P. Laskaridis, and P. Pilidis, “A test bed for small aero gas turbines for education and for university: industry collaboration,” in Proceedings of ASME Turbo Expo 2004: Power for Land, Sea, and Air, pp. 901–909, Vienna, Austria, 2004.
8. C. R. Davison and A. M. Birk, “Set up and operational experience with a micro-turbine engine for research and education,” in Proceedings of ASME Turbo Expo 2004: Power for Land, Sea, and Air, pp. 849–858, Vienna, Austria, 2004.
9. M. Badami, P. Nuccio, and A. Signoretto, “Experimental and numerical analysis of a small-scale turbojet engine,” Energy Conversion and Management, vol. 76, pp. 225–233, 2013.
10. M. Badami, P. Nuccio, D. Pastrone, and A. Signoretto, “Performance of a small-scale turbojet engine fed with traditional and alternative fuels,” Energy Conversion and Management, vol. 82, pp. 219–228, 2014.
11. M. Makida, Y. Kurosawa, H. Yamada et al., “Emission characteristics through rich–lean combustor development process for small aircraft engine,” Journal of Propulsion and Power, vol. 32, no. 6, pp. 1315–1324, 2016.
12. J. Michalek and P. Straka, “A comparison of experimental and numerical studies performed on a low-pressure turbine blade cascade at high-speed conditions, low Reynolds numbers and various turbulence intensities,” Journal of Thermal Science, vol. 22, no. 5, pp. 413–423, 2013.
13. S. Guo, F. Duan, H. Tang, S. C. Lim, and M. S. Yip, “Multi-objective optimization for centrifugal compressor of mini turbojet engine,” Aerospace Science and Technology, vol. 39, pp. 414–425, 2014.
14. A. O. Pugachev, A. V. Sheremetyev, V. V. Tykhomirov, and O. I. Shpilenko, “Structural dynamics optimization of rotor systems for a small-size turboprop engine,” Journal of Propulsion and Power, vol. 31, no. 4, pp. 1083–1093, 2015.
15. F. K. Lu and E. M. Braun, “Rotating detonation wave propulsion: experimental challenges, modeling, and engine concepts,” Journal of Propulsion and Power, vol. 30, no. 5, pp. 1125–1142, 2014.
16. S. Adibhatla and T. Lewis, “Model-based intelligent digital engine control (MoBIDEC),” in 33rd Joint Propulsion Conference and Exhibit, pp. 1–10, Seattle, WA, USA, 1997.
17. L. C. Jaw and J. D. Mattingly, Aircraft Engine Controls – Design, System Analysis, and Health Monitoring, American Institute of Aeronautics and Astronautics, Reston, VA, USA, 2009.
18. A. L. Diesinger, Systems of Commercial Turbofan Engines, Springer, Berlin, Heidelberg, 2008.
19. J. Csank, R. May, J. Litt, and T.-H. Guo, Control Design for a Generic Commercial Aircraft Engine, NASA, Glenn Research Center, Cleveland, OH, USA, 2010.
20. J. W. Connolly, J. Csank, A. Chicatelli, and K. Franco, “Propulsion controls modeling for a small turbofan engine,” in AIAA Propulsion and Energy Forum - 53rd AIAA/SAE/ASEE Joint Propulsion Conference, pp. 1–15, Atlanta, GA, USA, 2017.
21. J. S. Litt, D. L. Simon, S. Garg et al., “A survey of intelligent control and health management technologies for aircraft propulsion systems,” Journal of Aerospace Computing, Information, and Communication, vol. 1, no. 12, pp. 543–563, 2004.
22. J. S. Litt, J. Turso, N. Shah, T. Sowers, and A. Owen, “A demonstration of a retrofit architecture for intelligent control and diagnostics of a turbofan engine,” in Infotech@Aerospace Conferences, pp. 1–18, Arlington, Virginia, 2005.
23. I. D. Landau, R. Lozano, M. M’Saad, and A. Karimi, Adaptive Control - Algorithms, Analysis and Applications, Springer, London, UK, 2nd edition, 2011.
24. R. C. Roman, M. B. Radac, R. E. Precup, and E. M. Petriu, “Data-driven model-free adaptive control tuned by virtual reference feedback tuning,” Acta Polytechnica Hungarica, vol. 13, no. 1, pp. 83–96, 2016.
25. S. Garg, “Controls and health management technologies for intelligent aerospace propulsion systems,” NASA, Glenn Research Center, Cleveland, OH, USA, 2004.
26. Y. Diao and K. M. Passino, “Stable fault-tolerant adaptive fuzzy/neural control for a turbine engine,” IEEE Transactions on Control Systems Technology, vol. 9, no. 3, pp. 494–509, 2001.
27. J. Mu, D. Rees, and G. P. Liu, “Advanced controller design for aircraft gas turbine engines,” Control Engineering Practice, vol. 13, no. 8, pp. 1001–1015, 2005.
28. T. A. Várkonyi, J. Tar, and I. Rudas, “Improved stabilization for robust fixed point transformations-based controllers,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 17, no. 3, pp. 418–424, 2013.
29. J. K. Tar, J. F. Bitó, and I. J. Rudas, “Contradiction resolution in the adaptive control of underactuated mechanical systems evading the framework of optimal controllers,” Acta Polytechnica Hungarica, vol. 13, no. 1, pp. 97–121, 2016.
30. D. K. Frederick, S. Garg, and S. Adibhatla, “Turbofan engine control design using robust multivariable control technologies,” IEEE Transactions on Control Systems Technology, vol. 8, no. 6, pp. 961–970, 2000.
31. Z. Knoll and G. Tao, “Multivariable adaptive LQ control of jet engines,” in 2015 American Control Conference (ACC), pp. 1193–1198, Chicago, IL, USA, 2015.
32. I. Kisszolgyemi, K. Beneda, and Z. Faltin, “Linear quadratic integral (LQI) control for a small scale turbojet engine with variable exhaust nozzle,” in 2017 International Conference on Military Technologies (ICMT), pp. 507–513, Brno, Czech Republic, 2017.
33. B. J. Brunell, R. R. Bitmead, and A. J. Connolly, “Nonlinear model predictive control of an aircraft gas turbine engine,” in Proceedings of the 41st IEEE Conference on Decision and Control, 2002, pp. 4649–4651, Las Vegas, NV, USA, 2002.
34. M. Montazeri-Gh and A. Safari, “Tuning of fuzzy fuel controller for aero-engine thrust regulation and safety considerations using genetic algorithm,” Aerospace Science and Technology, vol. 15, no. 3, pp. 183–192, 2011.
35. W. Yan, C. J. Li, and K. F. Goebel, “A multiple classifier system for aircraft engine fault diagnosis,” in Proceedings of the 60th Meeting of the Society For Machinery Failure Prevention Technology (MFPT), pp. 271–279, Virginia Beach, Virginia, 2006.
36. J. B. Armstrong and D. L. Simon, Implementation of an Integrated On-Board Aircraft Diagnostic System, NASA, Glenn Research Center, Cleveland, OH, USA, 2012.
37. A. Kyriazis and K. Mathioudakis, “Gas turbine fault diagnosis using fuzzy-based decision fusion,” Journal of Propulsion and Power, vol. 25, no. 2, pp. 335–343, 2009.
38. J. Seok, I. Kolmanovsky, and A. Girard, “Coordinated model predictive control of aircraft gas turbine engine and power system,” Journal of Guidance, Control, and Dynamics, vol. 40, no. 10, pp. 2538–2555, 2017.
39. J. Hu and H. Gu, “Survey on flight control technology for large-scale helicopter,” International Journal of Aerospace Engineering, vol. 2017, Article ID 5309403, 14 pages, 2017.
40. I. Moir, A. Seabridge, and M. Jukes, Civil Avionics Systems, AIAA Education Series, Reston, VA, USA, Second edition, 2013.
41. X. Wang, J. Zhao, and X. Sun, “Overshoot-free acceleration of aero-engines: an energy-based switching control method,” Control Engineering Practice, vol. 47, pp. 28–36, 2016.
42. L. Madarász, R. Andoga, L. Fozo, and T. Lazar, “Situational control, modeling and diagnostics of large scale systems,” in Towards Intelligent Engineering and Information Technology, I. J. Rudas, J. Fodor, and J. Kacprzyk, Eds., vol. 243 of Studies in Computational Intelligence, pp. 153–164, Springer, Berlin, Heidelberg, 2009.
43. L. Madarász, R. Andoga, and L. Főző, “Intelligent technologies in modeling and control of turbojet engines,” in New Trends in Technologies: Control, Management, Computational Intelligence and Network Systems, M. J. Er, Ed., pp. 17–38, Sciyo, Rijeka, Croatia, 2010.
44. R. Andoga, L. Főző, L. Madarász, and T. Karo, “A digital diagnostic system for a small turbojet engine,” Acta Polytechnica Hungarica, vol. 10, no. 4, pp. 45–58, 2013.
45. R. Andoga, L. Madarász, L. Főző, T. Lazar, and V. Gašpar, “Innovative approaches in modeling, control and diagnostics of small turbojet engines,” Acta Polytechnica Hungarica, vol. 10, no. 5, pp. 81–99, 2013.
46. L. Főző, R. Andoga, L. Madarász, J. Kolesár, and J. Judičák, “Description of an intelligent small turbo-compressor engine with variable exhaust nozzle,” in 2015 IEEE 13th International Symposium on Applied Machine Intelligence and Informatics (SAMI), pp. 22–24, Herl’any, Slovakia, 2015.
47. L. Főző, R. Andoga, K. Beneda, and J. Kolesár, “Effect of operating point selection on non-linear experimental identification of iSTC–21v and TKT–1 small turbojet engines,” Periodica Polytechnica Transportation Engineering, vol. 45, no. 3, pp. 141–147, 2017.
48. P. Naslin, Essentials of Optimal Control, Boston Technical Publishers, Boston, MA, USA, 1969.
49. J. Graf, PID Control: Ziegler-Nichols Tuning, CreateSpace Independent Publishing Platform, Cambridge, MA, USA, 2013.
50. R. R. Wilcox, Basic Statistics: Understanding Conventional Methods and Modern Insights, Oxford University Press, Hannover, Germany, 1st edition, 2009.
Chapter 16
An Antilock-Braking Systems (ABS) Control: A Technical Review
Ayman A. Aly1,2, El-Shafei Zeidan1,3, Ahmed Hamed1,3, and Farhan Salem1
1 Department of Mechanical Engineering, Faculty of Engineering, Taif University, Al-Haweiah, Saudi Arabia
2 Department of Mechanical Engineering, Faculty of Engineering, Assiut University, Assiut, Egypt
3 Department of Mechanical Power Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt
ABSTRACT
Many different control methods for ABS systems have been developed. These methods differ in their theoretical basis and performance under changes of road conditions. The present review is a part of a research project entitled “Intelligent Antilock Brake System Design for Road-Surfaces of Saudi Arabia”. In the present paper we review the methods used in the design of ABS systems. We highlight the main difficulties and summarize the more recent developments in their control techniques. Intelligent control systems like fuzzy control can be used in ABS control to emulate the qualitative aspects of human knowledge, with several advantages such as robustness, the universal approximation theorem, and rule-based algorithms.
Keywords: ABS, Intelligent Control, Fuzzy Control
Citation: A. Aly, E. Zeidan, A. Hamed and F. Salem, “An Antilock-Braking Systems (ABS) Control: A Technical Review,” Intelligent Control and Automation, Vol. 2 No. 3, 2011, pp. 186-195. doi: 10.4236/ica.2011.23023.
Copyright: © 2011 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0
INTRODUCTION
Since the development of the first motor-driven vehicle in 1769 and the occurrence of the first driving accident in 1770, engineers have been determined to reduce driving accidents and improve the safety of vehicles [1]. It is obvious that efficient design of braking systems reduces accidents. Vehicle experts have advanced this field through the invention of the first mechanical antilock-braking system (ABS), which was designed and produced in the aerospace industry in 1930 [2,3]. In 1945, the first set of ABS brakes was put on a Boeing B-47 to prevent spin-outs and tires from blowing out, and later in the 1950s, ABS brakes were commonly installed in airplanes [4,5]. Soon after, in the 1960s, high-end automobiles were fitted with rear-only ABS, and with the rapid progress of microcomputers and electronics technologies, the trend exploded in the 1980s. Today, all-wheel ABS can be found on the majority of late model vehicles and even on select motorcycles [6-10]. ABS is recognized as an important contribution to road safety, as it is designed to keep a vehicle steerable and stable during heavy braking by preventing wheel lock. It is well known that wheels will slip and lock up during severe braking or when braking on a slippery (wet, icy, etc.) road surface. This usually causes a long stopping distance, and sometimes the vehicle will lose steering stability [11-13]. The objective of ABS is to manipulate the wheel slip so that maximum friction is obtained and the steering stability (also known as lateral stability) is maintained; that is, to make the vehicle stop in the shortest distance possible while maintaining directional control. The ideal goal for the control design is to regulate the wheel velocity. The technologies of ABS are also applied in traction control systems (TCS) and vehicle dynamic stability control (VDSC) [14]. Typical ABS components include the vehicle's physical brakes, wheel speed sensors (up to 4), an electronic control unit (ECU), a brake master cylinder, and a hydraulic modulator unit with pump and valves, as shown in
Figure 1. Some of the advanced ABS systems include an accelerometer to determine the deceleration of the vehicle. This paper is intended to present a literature review of research works done by many researchers concerning various aspects of ABS technology in an effort to improve the performance of its applications.
Figure 1. Typical ABS components [4].
PRINCIPLES OF ANTILOCK-BRAKE SYSTEM
The reason for the development of antilock brakes is in essence very simple. Under braking, if one or more of a vehicle's wheels lock (begin to skid), then this has a number of consequences: a) braking distance increases, b) steering control is lost, and c) tire wear will be abnormal. The obvious consequence is that an accident is far more likely to occur. The application of the brakes generates a force that impedes the vehicle's motion by acting in the opposite direction. During severe braking scenarios, a point is reached at which the tangential velocity of the tire surface and the velocity on the road surface are not the same, such that an optimal slip which corresponds to the maximum friction can be obtained. The ABS controller must deal with the brake dynamics and the wheel dynamics as a whole plant [15]. The wheel slip S is defined as:
S = (V − ωR) / V    (1)
where ω, R, and V denote the wheel angular velocity, the wheel rolling radius, and the vehicle forward velocity, respectively. In normal driving conditions, V = ωR and therefore S = 0. In severe braking, it is common to have ω = 0 while S = 1, which is called wheel lockup. Wheel lockup is undesirable, since it prolongs the stopping distance and causes the loss of directional control [16,17]. Figure 2 shows the relationship between the braking coefficient and wheel slip. It is shown that the slide values for stopping/traction force are proportionately higher than the slide values for cornering/steering force. A locked-up wheel provides low road handling force and minimal steering force. Consequently, the main benefit of ABS operation is to maintain directional control of the vehicle during heavy braking. In rare circumstances the stopping distance may be increased; however, the directional control of the vehicle is substantially greater than if the wheels were locked up.
Figure 2. Illustration of the relationship between braking coefficient and wheel slip [14].
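A small sketch of the slip computation in Eq. (1) and of a simple lockup check is given below; the threshold value and the example numbers are illustrative assumptions.

```python
def wheel_slip(omega: float, radius: float, v: float) -> float:
    """Longitudinal wheel slip S from Eq. (1): S = (V - omega*R) / V."""
    if v <= 0.0:
        return 0.0                      # vehicle at rest: slip undefined, treat as zero
    return (v - omega * radius) / v


def near_lockup(omega: float, radius: float, v: float, threshold: float = 0.2) -> bool:
    """Flag slip beyond an assumed near-optimum region of the friction-slip curve."""
    return wheel_slip(omega, radius, v) > threshold


# Example: 25 m/s vehicle speed, 0.3 m rolling radius, wheel spinning at 60 rad/s.
print(wheel_slip(60.0, 0.3, 25.0))      # -> 0.28
print(near_lockup(60.0, 0.3, 25.0))     # -> True
```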
The main difficulty in the design of ABS control arises from the strong nonlinearity and uncertainty of the problem. It is difficult, and in many cases impossible, to solve this problem by using classical linear, frequency-domain methods [17]. ABS systems are designed around system hydraulics, sensors, and control electronics. These systems are dependent on each other, and the different system components are interchangeable with minor changes in the controller software [18]. The wheel sensor feeds the wheel spin velocity to the electronic control unit, which, based on some underlying control approach, gives an output signal to the brake actuator control unit. The brake actuator control unit then controls the brake actuator based on the output from the electronic control unit. The control logic is based on the objective of keeping the wheels from locking up and maintaining the traction between the tire and road surface at an optimal maximum. The task of keeping the wheels operating at maximum traction is complicated given that the friction-slip curve changes with vehicle, tire, and road changes. The block diagram in Figure 3 shows the block representation of an antilock brake system. It shows the basic functionality of the various components in ABS systems and also shows the data/information flow.
Figure 3. Block representation of an ABS.
The ABS (shown in Figure 4) consists of a conventional hydraulic brake system plus antilock components.
Figure 4. Anti-lock braking system [14].
The conventional brake system includes a vacuum booster, master cylinder, front disc brakes, rear drum brakes, interconnecting hydraulic brake pipes and hoses, brake fluid level sensor and the brake indicator. The ABS components include a hydraulic unit, an electronic brake control module (EBCM), two system fuses, four wheel speed sensors (one at each wheel), interconnecting wiring, the ABS indicator, and the rear drum brake. Most ABS systems employ hydraulic valve control to regulate the brake pressure during the anti-lock operation. Brake pressure is increased, decreased or held. The amount of time required to open, close or hold the hydraulic valve is the key point affecting the brake efficiency and steering controllability.
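The increase/hold/decrease pressure modulation just described could be expressed, in a very simplified form, as the following rule: if slip exceeds an upper threshold, release pressure; if it falls below a lower threshold, build pressure; otherwise hold. The thresholds and names below are assumptions used only to illustrate the idea, not an actual production algorithm.

```python
from enum import Enum

class ValveAction(Enum):
    INCREASE = "build pressure"
    HOLD = "hold pressure"
    DECREASE = "release pressure"

def valve_command(slip: float, low: float = 0.1, high: float = 0.25) -> ValveAction:
    """Simplified ABS modulation rule based on measured wheel slip."""
    if slip > high:          # wheel approaching lockup: release brake pressure
        return ValveAction.DECREASE
    if slip < low:           # wheel rolling freely: pressure can be increased
        return ValveAction.INCREASE
    return ValveAction.HOLD  # slip near the assumed optimum: hold pressure

for s in (0.05, 0.18, 0.40):
    print(s, valve_command(s))
```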
ABS CONTROL
ABS brake controllers pose unique challenges to the designer:
a) For optimal performance, the controller must operate at an unstable equilibrium point.
b) Depending on road conditions, the maximum braking torque may vary over a wide range.
c) The tire slippage measurement signal, crucial for controller performance, is both highly uncertain and noisy.
d) On rough roads, the tire slip ratio varies widely and rapidly due to tire bouncing.
e) The brake pad coefficient of friction changes.
f) The braking system contains transportation delays which limit the control system bandwidth [19].
As stated in the previous section of this paper, the ABS consists of a conventional hydraulic brake system plus antilock components, which affect the control characteristics of the ABS. ABS control is a highly nonlinear control problem due to the complicated relationship between friction and slip. Another impediment in this control problem is that the linear velocity of the wheel is not directly measurable and has to be estimated. Friction between the road and tire is also not readily measurable or might need complicated sensors. Researchers have employed various control approaches to tackle this problem. A sampling of the research done for different control approaches is shown in Figure 5. One of the technologies that has been applied in the various aspects of ABS control is soft computing. A brief review of the ideas of soft computing and how they are employed in ABS control is given below.
Figure 5. Sampling of ABS control.
Classical Control Methods Based on PID Control
Out of all control types, the well-known PID has been used to improve the performance of the ABS. Song et al. [20] presented a mathematical model designed to analyze and improve the dynamic performance of a vehicle. A PID controller for rear wheel steering is designed to enhance the stability,
steerability, and driveability of the vehicle during transient maneuvers. The braking and steering performances of the controllers are evaluated for various driving conditions, such as straight and J-turn maneuvers. The simulation results show that the proposed full car model is sufficient to predict vehicle responses accurately. The developed ABS reduces the stopping distance and increases the longitudinal and lateral stability of both two- and four-wheel steering vehicles. The results also demonstrate that the use of a rear wheel controller as a yaw motion controller can increase lateral stability and reduce the slip angle at high speeds. The PID controller is simple in design, but there is a clear limitation to its performance: it does not possess enough robustness for practical implementation. To solve this problem, Jiang [21] applied a new nonlinear PID (NPID) control algorithm to a class of truck ABS problems. The NPID algorithm combines the advantages of robust control and easy tuning. Simulation results in various situations using TruckSim show that the NPID controller achieves a shorter stopping distance and better velocity performance than the conventional PID controller and a loop-shaping controller.
Optimal Control Methods Based on Lyapunov Approach
The optimal control of a nonlinear system such as ABS is one of the most challenging and difficult subjects in control theory. Tanelli et al. [22] proposed a nonlinear output feedback control law for active braking control systems. The control law guarantees bounded control action and can also cope with input constraints. Moreover, the closed-loop system properties are such that the control algorithm allows detecting, without the need for a friction estimator, whether the closed-loop system is operating in the unstable region of the friction curve, thereby allowing enhancement of both braking performance and safety. The design is performed via Lyapunov-based methods and its effectiveness is assessed via simulations on a multibody vehicle simulator. The change in road conditions implies continuous adaptation of the controller parameters. In order to resolve this issue, an adaptive control-Lyapunov approach is suggested by R. R. Freeman [23], and similar ideas are pursued in [24,25]. Sontag's formula is applied in the adaptive control-Lyapunov approach in [26], which includes gain scheduling on vehicle speed and experimental testing. Feedback linearization in combination with gain scheduling is suggested by Liu and Sun [27]. PID-type approaches to wheel
slip control are considered in [28-32]. A gain-scheduled LQ control design approach with associated analysis is also reported and, apart from [26] and [32], is the only one that contains detailed experimental evaluation using a test vehicle. In [33], an optimum seeking approach is taken to determine the maximum friction using sliding modes. Sliding mode control is also considered in [34,35]. Another nonlinear modification was suggested by Ünsal and Kachroo [36] for an observer-based design to control vehicle traction, which is important in providing safety and obtaining the desired longitudinal vehicle motion. The direct state feedback is then replaced with nonlinear observers to estimate the vehicle velocity from the output of the system (i.e., the wheel velocity). The nonlinear model of the system is shown to be locally observable. The effects and drawbacks of extended Kalman filters and sliding observers are shown via simulations. The sliding observer is found promising, while the extended Kalman filter is unsatisfactory due to unpredictable changes in the road conditions.
Nonlinear Control Based on Backstepping Control Design
The complex nature of ABS, requiring feedback control to obtain a desired system behavior, also gives rise to dynamical systems. Ting and Lin [37] developed an anti-lock braking control system integrated with active suspensions applied to a quarter car model by employing nonlinear backstepping design schemes. In an emergency, although the braking distance can be reduced by the control torque from disk/drum brakes, the braking time and distance can be further improved if the normal force generated by the active suspension system is considered simultaneously. An individual controller is designed for each subsystem, and an integrated algorithm is constructed to coordinate these two subsystems. As a result, the integration of anti-lock braking and active suspension systems indeed enhances the system performance because of the reduction of braking time and distance. Wang et al. [38] compared the design process of a backstepping-approach ABS with multiple model adaptive control (MMAC) controllers. The high adhesion fixed model, medium adhesion fixed model, low adhesion fixed model, and adaptive model were the four models used in MMAC. The switching rules of the different model controllers were also presented. Simulation was conducted for the ABS control system using the MMAC method based on a quarter vehicle model. Results show that this method can control the wheel slip ratio more accurately and has higher robustness; therefore, it improves ABS performance effectively.
Johansen et al. [39] contributed a nonlinear adaptive backstepping scheme with estimator resetting using multiple observers. A multiple-model-based observer/estimator was used to reset the parameter estimate in a conventional Lyapunov-based nonlinear adaptive controller. Transient performance can be improved without increasing the gain of the controller or estimator, which allows performance to be tuned without compromising robustness or sensitivity to noise and disturbances. The advantages of the scheme are demonstrated on an automotive wheel slip controller.
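The resetting idea can be paraphrased as follows: keep a conventional adaptive estimate, but if one of several parallel parameter hypotheses currently explains the measurements markedly better, jump to it. The sketch below illustrates this idea only; the variable names and the reset criterion are assumptions, not the exact mechanism of [39].

```python
def reset_estimate(theta_hat, predict, candidates, y_measured, margin=0.05):
    """Illustrative estimator-resetting step (not the exact mechanism of [39]).

    predict(theta) returns the model output implied by parameter hypothesis theta.
    If some candidate hypothesis explains the latest measurement markedly better
    than the current adaptive estimate theta_hat, the estimate is reset to it;
    otherwise the smoothly adapted estimate is kept.
    """
    current_error = abs(y_measured - predict(theta_hat))
    best = min(candidates, key=lambda th: abs(y_measured - predict(th)))
    if abs(y_measured - predict(best)) + margin < current_error:
        return best          # reset: jump to the better hypothesis
    return theta_hat         # keep the conventional adaptive estimate
```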
Robust Control Based on Sliding Mode Control Method

Sliding mode control is an important robust control approach. For the class of systems to which it applies, sliding mode controller design provides a systematic way to maintain stability and consistent performance in the face of modeling imprecision. Moreover, by allowing the tradeoffs between modeling effort and performance to be quantified in a simple fashion, it can illuminate the whole design process. Several results have been published coupling the ABS problem with variable structure system (VSS) design techniques [40,41]. These works introduced sliding-mode controllers designed under the assumption that the optimal value of the target slip is known. A concern here is the lack of direct slip measurements. In these earlier investigations a separation approach was used: the problem was divided into estimating the optimal slip and tracking the estimated optimal value. Hedrick et al. [42,43] suggested a modification of sliding mode control, chosen for its robustness to modeling errors and its disturbance rejection capabilities. Simulation results illustrate the ability of a vehicle using this controller to follow a desired speed trajectory while maintaining constant spacing between vehicles. Kayacan and Kaynak [44] proposed a grey sliding-mode controller to regulate the wheel slip as a function of vehicle forward velocity. The proposed controller anticipates the upcoming values of wheel slip and takes the necessary action to keep the slip at the desired value. The performance of the control algorithm as applied to a quarter vehicle is evaluated through simulations and experimental studies that include sudden changes in road conditions. The proposed controller achieves faster convergence and better noise response than conventional approaches. It is concluded that the use of grey system theory,
which has certain prediction capabilities, can be a viable alternative when conventional control methods cannot meet the desired performance specifications. In real systems, a switched controller has imperfections that limit switching to a finite frequency. The resulting oscillation in the neighborhood of the switching surface causes chattering. Chattering is undesirable, since it involves extremely high control activity and may excite high-frequency dynamics neglected during modeling. Chattering must therefore be reduced or eliminated for the controller to perform properly.
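A common remedy is to replace the discontinuous sign function with a saturation inside a thin boundary layer around the sliding surface. The hedged sketch below illustrates this standard device for slip regulation; the gains, layer width, and sign convention are illustrative assumptions, not the design of any cited paper.

```python
import numpy as np

def smc_brake_torque(Tb_nominal, lam, lam_ref, K=800.0, phi=0.02):
    """Sliding-mode-style slip regulation with a boundary layer (illustrative gains).

    Sliding variable s = lam - lam_ref. The discontinuous sign(s) term is replaced by
    the saturation sat(s / phi), so the control is linear inside a layer of width phi
    around the surface, which is the standard way to reduce chattering. The sign
    convention (excess slip -> release brake torque) is an assumption of this sketch.
    """
    s = lam - lam_ref
    sat = np.clip(s / phi, -1.0, 1.0)
    return max(Tb_nominal - K * sat, 0.0)   # commanded brake torque, kept non-negative
```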
Adaptive Control Based on Gain Scheduling Control Method

Ting and Lin [45] presented an approach that incorporates the wheel slip constraint a priori into the control design so that skidding can be avoided. A control structure combining wheel torque and wheel steering is proposed to transform the original problem into one of state regulation with an input constraint. For the transformed problem, a low-and-high-gain technique is applied to construct the constrained controller and to improve the utilization of the allowable wheel slip. Simulations show that during tracking on a snowy road the proposed control scheme limits the wheel slip and coordinates wheel torque and wheel steering satisfactorily.
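Gain scheduling of the kind cited in [26,27] is usually realized by interpolating locally tuned gains over a measured scheduling variable such as vehicle speed. The sketch below illustrates only the mechanism, with made-up gain tables rather than gains from any cited design.

```python
import numpy as np

# Hypothetical gain table: PI gains tuned at a few vehicle speeds [m/s].
SPEEDS = np.array([5.0, 15.0, 30.0])
KP = np.array([900.0, 600.0, 400.0])
KI = np.array([4000.0, 2500.0, 1500.0])

class GainScheduledSlipPI:
    """PI wheel-slip controller whose gains are interpolated over the measured vehicle speed."""

    def __init__(self):
        self.integral = 0.0

    def torque(self, lam, lam_ref, v, dt):
        kp = float(np.interp(v, SPEEDS, KP))   # linear interpolation between tuning points
        ki = float(np.interp(v, SPEEDS, KI))
        e = lam_ref - lam
        self.integral += e * dt
        return max(kp * e + ki * self.integral, 0.0)   # brake torque command
```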
Intelligent Control Based on Fuzzy Logic

Fuzzy control (FC) has been proposed to tackle the ABS problem in the presence of unknown environmental parameters [46-50]. However, the large number of fuzzy rules makes the analysis complex. Some researchers have therefore proposed fuzzy control design methods based on the sliding-mode control (SMC) scheme, referred to as fuzzy sliding-mode control (FSMC) design methods [51,52]. Since only one variable is defined as the fuzzy input, the main advantage of FSMC is that it requires fewer fuzzy rules than FC does. Moreover, the FSMC system is more robust against parameter variation [52]. Although FC and FSMC are both effective, their major drawback is that the fuzzy rules must be tuned in advance by time-consuming trial-and-error procedures. To tackle this problem, adaptive fuzzy control (AFC) based on the Lyapunov synthesis approach has been studied extensively [52-55]. With this approach, the fuzzy rules are adjusted automatically by an adaptive law to achieve a satisfactory system response. Kumar et al. [56] investigated the integrated control of an ABS and a collision avoidance system (CAS) in an electric vehicle. Fuzzy logic
techniques are applied for integrated control of the two subsystems. The control algorithm is implemented and tested in a prototype electric vehicle in a laboratory environment using a Freescale HCS12 microcontroller. The high-level network protocol CAN is used to integrate all sensors, the ABS and the CAS. The results show that integrated control of ABS and CAS maintains a safe distance from the obstacle without sacrificing the performance of either system. Several researchers [57-59] developed adaptive PID-type fuzzy controllers for the ABS. A platform is built to carry out a series of ABS control experiments: a commercial ABS module, driven by the controller, is installed and tested on the platform, while the vehicle and tire models are simulated on a personal computer for real-time control. Road surface conditions, vehicle weight and control schemes are varied in the experiments to study braking properties. Lin and Hsu [60] proposed a self-learning fuzzy sliding-mode control (SLFSMC) design method for ABS. In the proposed SLFSMC system, a fuzzy controller is the main tracking controller, used to mimic an ideal controller, and a robust controller is derived to compensate for the difference between the ideal controller and the fuzzy controller. The SLFSMC can automatically adjust the fuzzy rules, like AFC, and reduces the number of fuzzy rules, like FSMC. Moreover, an error estimation mechanism is investigated to observe the bound of the approximation error. All parameters of the SLFSMC are tuned in the Lyapunov sense, so the stability of the system can be guaranteed. Finally, two simulation scenarios are examined and a comparison among SMC, FSMC, and the proposed SLFSMC is made. The ABS performance is examined on a quarter-vehicle model with a nonlinear elastic suspension. The parallelism of the fuzzy logic evaluation process ensures rapid computation of the controller output signal, requiring less time and fewer computation steps than controllers with adaptive identification. The robustness of the braking system is investigated on rough roads and in the presence of large measurement noise. The simulation results present the system performance on various road types and under rapidly changing road conditions. While conventional control approaches and even direct fuzzy/knowledge-based approaches [61-67] have been implemented successfully, their performance still degrades when adverse road conditions are encountered. The basic reason for this degradation is that the control algorithms have limited ability to learn how to compensate for the wide variety of road conditions that exist.
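To make the single-input FSMC idea concrete, the hedged sketch below fuzzifies one sliding variable with simple membership functions and defuzzifies by a weighted average. The rule base, membership shapes, and scaling are illustrative assumptions, not the designs of [51,52] or [60].

```python
def shoulder_low(x, a, b):
    """Membership that is 1 for x <= a and falls linearly to 0 at b."""
    return 1.0 if x <= a else (0.0 if x >= b else (b - x) / (b - a))

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fsmc_correction(s, u_max=1.0):
    """Single-input fuzzy sliding-mode controller (illustrative rule base and scaling).

    The only fuzzy input is the sliding variable s = lam - lam_ref. Three rules:
    s negative -> apply more brake torque, s near zero -> hold, s positive -> release.
    Defuzzification by the weighted average of the rule consequents.
    """
    w_neg = shoulder_low(s, -0.1, 0.0)
    w_zero = tri(s, -0.1, 0.0, 0.1)
    w_pos = shoulder_low(-s, -0.1, 0.0)        # mirror image of the negative shoulder
    num = w_neg * u_max + w_zero * 0.0 + w_pos * (-u_max)
    den = w_neg + w_zero + w_pos
    return num / den if den > 1e-9 else 0.0    # normalized torque correction in [-1, 1]
```

Because only s is fuzzified, three rules suffice here, which is exactly the rule-count advantage of FSMC over a two-input FC rule table.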
Layne et al. [68] and Layne and Passino [69] introduced the idea of using the fuzzy model reference learning control (FMRLC) technique to maintain adequate performance even under adverse road conditions. This controller utilizes a learning mechanism which observes the plant outputs and adjusts the rules of a direct fuzzy controller so that the overall system behaves like a "reference model" that characterizes the desired behavior (a simplified sketch of this rule-adaptation step is given at the end of this section). The performance of the FMRLC-based ABS is demonstrated by simulation for various road conditions (wet asphalt, icy) and "split road conditions" (where, e.g., emergency braking occurs and the road switches from wet to icy or vice versa). Precup et al. [70] developed a Takagi-Sugeno fuzzy controller and an interpolative fuzzy controller for tire slip control in ABS. By employing local linearized models of the controlled plant, the local controllers are developed in the frequency domain, and development methods for the two fuzzy controllers are offered. Simulation results show the performance enhancement ensured by the fuzzy controllers in comparison with conventional PI ones. Stan et al. [71] performed a critical analysis of five fuzzy control solutions dedicated to ABS. A detailed mathematical model of the controlled plant is derived and simplified for control design, with a focus on tire slip control. A new fuzzy control solution based on a class of Takagi-Sugeno fuzzy controllers is proposed; this class of fuzzy controllers combines separately designed PI and PID controllers corresponding to a set of simplified plant models linearized in the vicinity of important operating points. Simulation results validate the suggested fuzzy control solution in controlling the relative slip of a single wheel. Keshmiri and Shahri [72] designed an intelligent fuzzy ABS controller to adjust slip performance for a variety of roads. The proposed control system has two major features: a fuzzy logic controller that provides the optimal brake torque for both front and rear wheels, and a second FLC that provides the required slip and torque references for different kinds of road. Simulation results show more reliable and better performance compared with other brake systems. Karakose and Akin [73] proposed a different fuzzy control algorithm for dynamical control problems, which used a dynamical fuzzy logic system and a block-based neural network; its effectiveness is illustrated by simulation results for a DC motor position control problem. In the same direction, Aly [74] designed an intelligent fuzzy ABS controller to adjust slip performance for a variety of roads. The fuzzy
optimizer immediately finds the optimal wheel slips for the new surface and forces the actual wheel slips to track these optimal references. The simulation results show that the proposed ABS algorithm avoids wheel lock-up even under different road conditions. Moreover, as a model-free strategy, the obtained fuzzy control is advantageous in terms of reduced design complexity as well as the anti-saturation, anti-chattering and robustness properties of the controlled system.
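The FMRLC rule-adaptation step referred to above can be paraphrased as: shift the consequents of the rules that fired in proportion to how far the plant output is from the reference-model output. The sketch below is an illustrative stand-in for that idea; the update law, gain, and variable names are assumptions, not the exact mechanism of [68,69], where the update is generated by a separate fuzzy inverse model of the plant.

```python
import numpy as np

def fmrlc_rule_update(centers, activations, y_plant, y_model, gain=0.5):
    """One illustrative FMRLC-style learning step (not the exact law of [68,69]).

    centers     : output centers (consequents) of the direct fuzzy controller's rules
    activations : current firing strengths of those rules, each in [0, 1]
    y_plant     : measured plant output (e.g., wheel slip)
    y_model     : output of the reference model describing the desired behavior
    The consequents of the rules that fired are shifted in proportion to the
    reference-model tracking error, so the learned rule base gradually makes the
    closed loop imitate the reference model.
    """
    ye = y_model - y_plant   # reference-model tracking error
    return np.asarray(centers, dtype=float) + gain * ye * np.asarray(activations, dtype=float)
```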
CONCLUSIONS

ABS control is a highly nonlinear control problem due to the complicated relationships between its components and parameters. The research carried out on ABS control systems covers a broad range of issues and challenges. Many different control methods for ABS have been developed, and research on improved control methods is continuing. Most of these approaches require system models, and some of them cannot achieve satisfactory performance under varying road conditions, whereas soft computing methods such as fuzzy control do not need a precise model. A brief overview of how soft computing is employed in ABS control has been given.
ACKNOWLEDGEMENT

This study was supported by Taif University under contract No. 1-432-1168. The financial support of Taif University is highly appreciated.
REFERENCES

1. P. M. Hart, "Review of Heavy Vehicle Braking Systems Requirements (PBS Requirements)," Draft Report, 24 April 2003.
2. M. Maier and K. Muller, "The New and Compact ABS Unit for Passenger Cars," SAE Paper No. 950757, 1996.
3. P. E. Wellstead and N. B. O. L. Pettit, "Analysis and Redesign of an Antilock Brake System Controller," IEE Proceedings Control Theory and Applications, Vol. 144, No. 5, 1997, pp. 413-426. doi:10.1049/ip-cta:19971441
4. A. G. Ulsoy and H. Peng, "Vehicle Control Systems," Lecture Notes, ME 568, 1997.
5. P. E. Wellstead, "Analysis and Redesign of an Antilock Brake System Controller," IEE Proceedings Control Theory and Applications, Vol. 144, No. 5, September 1997, pp. 413-426. doi:10.1049/ip-cta:19971441
6. R. Fling and R. Fenton, "A Describing-Function Approach to Antiskid Design," IEEE Transactions on Vehicular Technology, Vol. VT-30, No. 3, 1981, pp. 134-144. doi:10.1109/T-VT.1981.23895
7. S. Yoneda, Y. Naitoh and H. Kigoshi, "Rear Brake Lock-Up Control System of Mitsubishi Starion," SAE Paper, Washington, 1983.
8. T. Tabo, N. Ohka, H. Kuraoka and M. Ohba, "Automotive Antiskid System Using Modern Control Theory," IEEE Proceedings, San Francisco, 1985, pp. 390-395.
9. H. Takahashi and Y. Ishikawa, "Anti-Skid Braking Control System Based on Fuzzy Inference," U.S. Patent No. 4842342, 1989.
10. R. Guntur and H. Ouwerkerk, "Adaptive Brake Control System," Proceedings of the Institution of Mechanical Engineers, Vol. 186, No. 68, 1972, pp. 855-880. doi:10.1243/PIME_PROC_1972_186_102_02
11. G. F. Mauer, "A Fuzzy Logic Controller for an ABS Braking System," IEEE Transactions on Fuzzy Systems, Vol. 3, No. 4, 1995, pp. 381-388. doi:10.1109/91.481947
12. W. K. Lennon and K. M. Passino, "Intelligent Control for Brake Systems," IEEE Transactions on Control Systems Technology, Vol. 7, No. 2, 1999, pp. 188-202.
13. B. Lojko and P. Fuchs, "The Control of ASR System in a Car Based on the TMS320F243 DSP," Diploma Thesis, Dept. of Radio & Electronics, Bratislava, 2002.
14. P. Hart, "ABS Braking Requirements," Hartwood Consulting Pty Ltd, Victoria, June 2003.
15. Q. Ming, "Sliding Mode Controller Design for ABS System," MSc Thesis, Virginia Polytechnic Institute and State University, 1997.
16. M. Stan, R.-E. Precup and A. S. Paul, "Analysis of Fuzzy Control Solutions for Anti-Lock Braking Systems," Journal of Control Engineering and Applied Informatics, Vol. 9, No. 2, 2007, pp. 11-22.
17. S. Drakunov, Ü. Özgüner and P. Dix, "ABS Control Using Optimum Search via Sliding Modes," IEEE Transactions on Control Systems Technology, Vol. 3, No. 1, March 1995, pp. 79-85.
18. National Semiconductor Inc., "Adaptive Braking Systems (ABS)," US Patent No. 3825305, 1974.
19. G. F. Mauer, "A Fuzzy Logic Controller for an ABS Braking System," IEEE Transactions on Fuzzy Systems, Vol. 3, No. 4, 1995, pp. 381-388. doi:10.1109/91.481947
20. J. Song, H. Kim and K. Boo, "A Study on an Anti-Lock Braking System Controller and Rear-Wheel Controller to Enhance Vehicle Lateral Stability," Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, Vol. 221, No. 7, 2007, pp. 777-787. doi:10.1243/09544070JAUTO225
21. F. Jiang, "An Application of Nonlinear PID Control to a Class of Truck ABS Problems," Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, 2001, pp. 516-521.
22. M. Tanelli, A. Astolfi and S. M. Savaresi, "Robust Nonlinear Output Feedback Control for Brake by Wire Control Systems," Automatica, Vol. 44, No. 4, 2008, pp. 1078-1087. doi:10.1016/j.automatica.2007.08.020
23. R. Freeman, "Robust Slip Control for a Single Wheel," University of California, Santa Barbara, 1995.
24. J. S. Yu, "A Robust Adaptive Wheel-Slip Controller for Antilock Brake System," Proceedings of the 36th IEEE Conference on Decision and Control, San Diego, 1997, pp. 2545-2546.
25. J. Yi, L. Alvarez, R. Horowitz and C. C. DeWit, "Adaptive Emergency Braking Control Using a Dynamical Tire/Road Friction Model," Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, 2000, pp. 456-461.
26. J. Lüdemann, "Heterogeneous and Hybrid Control with Application in Automotive Systems," Ph.D. Dissertation, Glasgow University, 2002.
27. Y. Liu and J. Sun, "Target Slip Tracking Using Gain-Scheduling for Braking Systems," Proceedings of the 1995 American Control Conference, Seattle, 1995, pp. 1178-1182.
28. S. Taheri and E. H. Law, "Slip Control Braking of an Automobile during Combined Braking and Steering Manoeuvres," American Society of Mechanical Engineers (ASME), Vol. 40, No. 1, 1991, pp. 209-227.
29. C. Jun, "The Study of ABS Control System with Different Control Methods," Proceedings of the 4th International Symposium on Advanced Vehicle Control, Nagoya, 1998, pp. 623-628.
30. F. Jiang, "A Novel Approach to a Class of Antilock Brake Problems," Ph.D. Dissertation, Cleveland State University, Cleveland, 2000.
31. Y. Wang, T. Schmitt-Hartmann, M. Schinkel and K. J. Hunt, "A New Approach to Simultaneous Stabilization and Strong Simultaneous Stabilization with D Stability and Its Application to ABS Control Systems Design," European Control Conference, Porto, 2001, pp. 1291-1294.
32. S. Solyom, "Synthesis of a Model-Based Tire Slip Controller," Vehicle System Dynamics, Vol. 41, No. 6, 2004, pp. 475-499.
33. S. Drakunov, Ü. Özgüner, P. Dix and B. Ashrafi, "ABS Control Using Optimum Search via Sliding Modes," IEEE Transactions on Control Systems Technology, Vol. 3, 1995, pp. 79-85. doi:10.1109/87.370698
34. M. Schinkel and K. Hunt, "Anti-lock Braking Control Using a Sliding Mode Like Approach," Proceedings of the 2002 American Control Conference, Anchorage, 2002, pp. 2386-2391.
35. M. C. Wu and M. C. Shih, "Hydraulic Anti-Lock Braking Control Using the Hybrid Sliding-Mode Pulse Width Modulation Pressure Control Method," Proceedings of the Institution of Mechanical Engineers, Vol. 215, 2001, pp. 177-187.
36. C. Ünsal and P. Kachroo, "Sliding Mode Measurement Feedback Control for Antilock Braking Systems," IEEE Transactions on Control Systems Technology, Vol. 7, No. 2, March 1999, pp. 271-281.
37. W. Ting and J. Lin, "Nonlinear Control Design of Anti-lock Braking Systems Combined with Active Suspensions," Technical Report, Department of Electrical Engineering, National Chi Nan University, 2005.
38. R.-G. Wang, Z.-D. Liu and Z.-Q. Qi, "Multiple Model Adaptive Control of Antilock Brake System via Backstepping Approach," Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, 2005, pp. 591-595.
39. T. A. Johansen, J. Kalkkuhl, J. Lüdemann and I. Petersen, "Hybrid Control Strategies in ABS," Proceedings of the 2001 American Control Conference, Arlington, 2001, pp. 1704-1705.
40. H. S. Tan and M. Tomizuka, "An Adaptive Sliding Mode Vehicle Traction Controller Design," Proceedings of the 1989 American Control Conference, Pittsburgh, 1989, pp. 1053-1058.
41. Y. K. Chin, W. C. Lin and D. Sidlosky, "Sliding-Mode ABS Wheel Slip Control," Proceedings of the 1992 American Control Conference, Chicago, 1992, pp. 1-6.
42. J. C. Gerdes, A. S. Brown and J. K. Hedrick, "Brake System Modeling for Vehicle Control," Proceedings of the International Mechanical Engineering Congress and Exposition, San Francisco, 1995, pp. 4756-4763.
43. D. Cho and J. K. Hedrick, "Automotive Powertrain Modeling for Control," Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control, Vol. 111, No. 4, December 1989, pp. 568-576. doi:10.1115/1.3153093
44. E. Kayacan and O. Kaynak, "A Grey System Modeling Approach for Sliding Mode Control of Antilock Braking System," IEEE Transactions on Industrial Electronics, Vol. 56, No. 8, August 2009, pp. 3244-3252. doi:10.1109/TIE.2009.2023098
45. W. Ting and J. Lin, "Nonlinear Control Design of Anti-lock Braking Systems Combined with Active Suspensions," Technical Report, Department of Electrical Engineering, National Chi Nan University, 2005.
46. B. Ozdalyan, "Development of a Slip Control Anti-Lock Braking System Model," International Journal of Automotive Technology, Vol. 9, No. 1, 2008, pp. 71-80. doi:10.1007/s12239-008-0009-6
47. A. B. Will and S. H. Zak, "Antilock Brake System Modelling and Fuzzy Control," International Journal of Vehicle Design, Vol. 24, No. 1, 2000, pp. 1-18. doi:10.1504/IJVD.2000.001870
48. J. R. Layne, K. M. Passino and S. Yurkovich, "Fuzzy Learning Control for Antiskid Braking Systems," IEEE Transactions on Control Systems Technology, Vol. 1, No. 2, 1993, pp. 122-129. doi:10.1109/87.238405
49. G. F. Mauer, "A Fuzzy Logic Controller for an ABS Braking System," IEEE Transactions on Fuzzy Systems, Vol. 3, No. 4, 1995, pp. 381-388. doi:10.1109/91.481947
50. K. Lee and K. Park, "Optimal Robust Control of a Contactless Brake System Using an Eddy Current," Mechatronics, Vol. 9, No. 6, 1999, pp. 615-631. doi:10.1016/S0957-4158(99)00008-2
51. W. K. Lennon and K. M. Passino, "Intelligent Control for Brake Systems," IEEE Transactions on Control Systems Technology, Vol. 7, No. 2, 1999, pp. 188-202. doi:10.1109/87.748145
52. C. Ünsal and P. Kachroo, "Sliding Mode Measurement Feedback Control for Antilock Braking Systems," IEEE Transactions on Control Systems Technology, Vol. 7, No. 2, 1999, pp. 271-280. doi:10.1109/87.748153
53. C. C. Lee, "Fuzzy Logic in Control Systems: Fuzzy Logic Controller, Parts I and II," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 20, No. 2, 1990, pp. 404-435. doi:10.1109/21.52551
54. S. W. Kim and J. J. Lee, "Design of a Fuzzy Controller with Fuzzy Sliding Surface," Fuzzy Sets and Systems, Vol. 71, No. 3, 1995, pp. 359-369. doi:10.1016/0165-0114(94)00276-D
55. B. J. Choi, S. W. Kwak and B. K. Kim, "Design of a Single-Input Fuzzy Logic Controller and Its Properties," Fuzzy Sets and Systems, Vol. 106, No. 3, 1999, pp. 299-308. doi:10.1016/S0165-0114(97)00283-2
56. S. Kumar, K. L. Verghese and K. K. Mahapatra, "Fuzzy Logic Based Integrated Control of Anti-Lock Brake System and Collision Avoidance System Using CAN for Electric Vehicles," IEEE International Conference on Industrial Technology, Gippsland, 2009, pp. 1-5. doi:10.1109/ICIT.2009.4939720
57. L. X. Wang, "Adaptive Fuzzy Systems and Control: Design and Stability Analysis," Prentice-Hall, Upper Saddle River, 1994.
58. H. Lee and M. Tomizuka, "Robust Adaptive Control Using a Universal Approximator for SISO Nonlinear Systems," IEEE Transactions on Fuzzy Systems, Vol. 8, No. 1, 2001, pp. 95-106.
59. C. K. Chen and M. C. Shih, "PID Type Fuzzy Control for Antilock Brake System with Parameter Adaptation," JSME International Journal, Series C, Vol. 47, No. 2, 2004, pp. 675-685. doi:10.1299/jsmec.47.675
60. C.-M. Lin and C.-F. Hsu, "Self-Learning Fuzzy Sliding-Mode Control for Antilock Braking Systems," IEEE Transactions on Control Systems Technology, Vol. 11, No. 2, 2003, pp. 273-278. doi:10.1109/TCST.2003.809246
61. H. Tan and M. Tomizuka, "A Discrete-Time Robust Vehicle Traction Controller Design," Proceedings of the 1989 American Control Conference, Pittsburgh, 1989, pp. 1053-1058.
62. H. Tan and M. Tomizuka, "Discrete-Time Controller Design for Robust Vehicle Traction," IEEE Control Systems Magazine, Vol. 10, No. 3, 1990, pp. 107-113. doi:10.1109/37.55132
63. R. Fling and R. Fenton, "A Describing-Function Approach to Antiskid Design," IEEE Transactions on Vehicular Technology, Vol. 30, No. 3, 1981, pp. 134-144. doi:10.1109/T-VT.1981.23895
64. S. Yoneda, Y. Naitoh and H. Kigoshi, "Rear Brake Lock-Up Control System of Mitsubishi Starion," SAE Paper 830482, 1983.
65. T. Tabo, N. Ohka, H. Kuraoka and M. Ohba, "Automotive Antiskid System Using Modern Control Theory," IECON, Vol. 1, 1985, pp. 390-395.
66. H. Takahashi and Y. Ishikawa, "Anti-Skid Braking Control System Based on Fuzzy Inference," US Patent No. 4842342, 1989.
67. R. Guntur and H. Ouwerkerk, "Adaptive Brake Control System," Proceedings of the Institution of Mechanical Engineers, 1972, pp. 855-880.
68. J. R. Layne, K. M. Passino and S. Yurkovich, "Fuzzy Learning Control for Anti-Skid Braking Systems," IEEE Transactions on Control Systems Technology, Vol. 1, No. 2, 1993, pp. 122-129. doi:10.1109/87.238405
69. J. R. Layne and K. M. Passino, "Fuzzy Model Reference Learning Control for Cargo Ship Steering," IEEE Control Systems Magazine, Vol. 13, No. 6, 1993, pp. 23-34. doi:10.1109/37.248001
70. R.-E. Precup, St. Preitl, M. Balas and V. Balas, "Fuzzy Controllers for Tire Slip Control in Anti-lock Braking Systems," Proceedings of the IEEE International Conference on Fuzzy Systems, Budapest, 2004, pp. 1317-1322.
71. M. Stan, R.-E. Precup and S. A. Paul, "Analysis of Fuzzy Control Solutions for Anti-Lock Braking Systems," Journal of Control Engineering and Applied Informatics, Vol. 9, No. 2, 2007, pp. 11-22.
72. R. Keshmiri and A. M. Shahri, "Intelligent ABS Fuzzy Controller for Diverse Road Surfaces," World Academy of Science, Engineering and Technology, Vol. 2, No. 2, 2007, pp. 62-67.
73. M. Karaköse and E. Akin, "Dynamical Fuzzy Control with Block Based Neural Network," Technical Report, Department of Computer Engineering, Fırat University, 2006.
74. A. A. Aly, "Intelligent Fuzzy Control for Antilock Brake System with Road-Surfaces Identifier," Proceedings of the 2010 IEEE International Conference on Mechatronics and Automation, Xi'an, 2010, pp. 2292-2299.
INDEX
A accelerometer 371 AC motor 7, 8 acute respiratory distress syndrome (ARDS) 156 acute respiratory failure (ARF) 156 Adaptation 136 adaptive algorithm 52, 54, 59 Adaptive Backstepping Fuzzy Control (ABFC) 122 Adaptive Backstepping Neural Network Control (ABNNC) 122 Adaptive Backstepping Wavelet Control (ABWC) 122 adaptive fuzzy control (AFC) 379 adaptive fuzzy sliding mode control (AFSMC) scheme 95, 97 Advanced Analysis of Critical Processes (AACP) 42 advanced control system 336 aeroengines 335, 336, 341, 342 Agent Oriented Programming Language (AOPL) 35 Agent Oriented Software Engineering (AOSE) 35
Altium Designer Summer (ADS) 183 antiaircraft-gun control systems 172 antilock-braking system (ABS) 370 Approximator-Based Adaptive Backstepping Control (ABABC) 122 Artificial Intelligence (AI) 28 automatic machine tools 172 automatic navigation systems 172 B backpropagation network control system 230 BDI (Belief-Desire-Intention) 38 behaviour decision-making module 300 best matching unit (BMU) 278 Big data technology 5 bisector of area (BOA) 187 bond graphs (BG) 173 C celestial-tracking systems 172 Center of Average (COA) 176 Center of Gravity (COG) 175
center of gravity for singletons (COGS) 175 Chattering 379 Classic control theory 4 closed-loop networked control systems 226 closed-loop system 281 collision avoidance system (CAS) 379 communication network 225, 226 Computational Intelligent (CI) 27 computers 172 computing technology 300 controller efficiency function (CEF) 314 control systems 254 conventional brake system 374 conventional control parameter adjustment 237 Coriolis 98 cubic nanoparticles 55, 73 current systems 77 D DC motor 7, 8 Deanship of Scientific Research (DSR) 167 Derivate of Error (DE) 181 Desired Acceleration 325, 326 diagnostic system 335, 338, 339, 341, 367 Differential controllers 77 digital control system 344 Distributed Artificial Intelligence (DAI) 38 Distributed Problem Solving (DPS) 38 disturbance 196, 212, 213, 214, 215
dMARS (Distributed Multi-Agent Reasoning System) 35 driveability 376 Driving and Control of Manufacturing Processes (DCMP) 42 dynamic backpropagation 275 dynamic parameters 347 E electric cylinder 57, 58 electronic brake control module (EBCM) 374 electronic control unit (ECU) 370 ellipsoids 55 extended Kalman filter (EKF) 257 F FADEC (full authority digital engine control) 337 feedback control module 300, 315 feedback-control systems 254 flight unworthy aircraft 345 Footprint of Uncertainty (FOU) 123 Fuzzy Basis Function (FBF) 127 fuzzy control law 171 fuzzy inference system (FIS) 161 Fuzzy Logic 173, 190, 191, 192 Fuzzy logic control (FLC) 173 fuzzy logic control (FLC) systems 96 fuzzy logic system 101, 102 fuzzy model reference learning control (FMRLC) 381 fuzzy neural network (FNN) 53 fuzzy nonlinear function 171, 174 fuzzy proportional integral derivative (FPID) 155 fuzzy sliding-mode control (FSMC) 379
393
intensive care unit’s (ICU) 156 Interval Type-2 Fuzzy Logic Control (IT2FLC) 125 inverted pendulum system 195, 199, 201, 202, 208, 221 L Linear systems 273, 274, 279, 282, 286, 295 local motion planning module 300 Lyapunov method 58 Lyapunov stability analysis 103 Lyapunov theory 274 M macrosituational frames 347 manufacturing industry 172 mean of maximum (MOM) 187 measurement error 257, 262, 266, 269 microcomputers 370 microprocessors 156 miscalibrated velocity sensor 256 Modern control theory 4 Monte Carlo method 237 Multiagent system 6, 7 Multiagent technology 5, 6 multiple model adaptive control (MMAC) 377 Multivariate regression analysis 173 N navigation systems 254 Networked control systems (NCSs) 226 network process control systems 226 neural extended Kalman filter (NEKF) 253, 255
neural gas network 273, 274 neural gas (NG) algorithm 293 neural network 96 neural networks (NN) 275 neuro-fuzzy 96 Nonlinear Autoregressive-Moving Average (NARMA) 279 Nonlinear systems control 273 nonsmooth nonlinear systems 197, 224 O of gas turbine engine control 338 operating point (OP) 180 Operational control 347 P physiological compensatory control loops 156 Planning and Strategies of Management Processes (PSMP) 42 pressure evaluate correction module (PECM) 157 Prestart control 347, 349 printed circuit board (PCB) 183 probability density function (PDF) 227 Process Master Agent (PMA) 39 progressive algorithms 345 Proportional controllers 77 proportional gain 230 Proportional-integral-derivative (PID) 75 Proportional-integral-derivative (PID) controllers 75 proportional-integral-derivative (PID) strategy 301 proportional-integral (PI) 156
proportion integration differentiation (PID) 196 Q quadratic performance indices 80 quantum neural network (QNN) 299, 300 quasi-Newton algorithm 308 R radial basis function (RBF) 52, 59, 70 range-bearing sensor 256, 265 RBF (radial basis function) 82 Real-world systems 172 Recurrency 275 redundancy 254, 255 relative position 254 Rényi quadratic entropy 232, 248 respiratory system 155, 157, 158, 160, 161, 162, 165, 166, 169 RLS (Recursive Least Squares) 274 robot 51, 52, 54, 55, 56, 57, 59, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73 S Safety 344 satellite-tracking antennas 172 self-learning fuzzy sliding-mode control (SLFSMC) 380 sensor systems 254, 264 servomechanisms 172 Shannon entropy 232 Shutdown 347 single-wheeled inverted pendulum robot (SWIPR) 196 situational control 335, 339, 340,
341, 342, 344, 346, 347, 348, 350, 351, 353, 355, 356, 358, 359, 360, 361 Sliding mode control (SMC) 96 Smith predictor 79, 88 Space vector pulse width modulation (SVPWM) 15 stability 370, 375, 378, 380 Start-up control 347 steerability 376 steering stability 370 Stone-Weierstrauss Theorem 257 Supply Chain Management (SCM) 42 support vector machine technology 53 T Task Master Agent (TMA) 39 traction control system (TCS) 370 Traditional control theory 4 transmission lines 52, 53 transportation 172 transpulmonary pressure 160 turbocompressor controllers block 348
Type-1 Adaptive Backstepping Fuzzy Control (T1ABFC) 122 Type-1 Fuzzy Logic System (T1FLS) 123 Type-1 Fuzzy Sets (T1FSs) 123 Type-2 Adaptive Backstepping Fuzzy Control (T2ABFC) 122 Type-2 Fuzzy Logic System (T2FLS) 123 Type-2 Fuzzy Sets (T2FSs 123 U Unknown Function 1 (UF1) 141 unmanned aerial vehicle (UAV) 157 V vehicle dynamic stability control (VDSC) 370 W wire-driven parallel robot (WDPR) 196 World Health Organization (WHO) 156 Z Ziegler Nichols (ZN) 183