Studies in Systems, Decision and Control 389
Maciej Ławryńczuk
Nonlinear Predictive Control Using Wiener Models: Computationally Efficient Approaches for Polynomial and Neural Structures
Studies in Systems, Decision and Control Volume 389
Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control: quickly, up to date and with high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.
More information about this series at http://www.springer.com/series/13304
Maciej Ławryńczuk
Nonlinear Predictive Control Using Wiener Models: Computationally Efficient Approaches for Polynomial and Neural Structures
Maciej Ławryńczuk Institute of Control and Computation Engineering Faculty of Electronics and Information Technology Warsaw University of Technology Warsaw, Poland
ISSN 2198-4182 ISSN 2198-4190 (electronic) Studies in Systems, Decision and Control ISBN 978-3-030-83814-0 ISBN 978-3-030-83815-7 (eBook) https://doi.org/10.1007/978-3-030-83815-7 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my children, young scientists
Foreword
And another new book ... Aren’t there already so many books about Model Predictive Control (MPC)? Isn’t everything already explained in Wikipedia? Maybe. The monograph written by Prof. Maciej Ławryńczuk takes a different path than usual and does not repeat what has already been written many times elsewhere. Aiming at application to a large class of nonlinear SISO and MIMO systems approximable by well-known Wiener models, Prof. Ławryńczuk presents his approaches and procedures developed over the years, which are ideally suited for modern, complex, nonlinear tasks in control and automation engineering.

For several decades, model predictive control has been one of the promising mathematically based but strongly algorithmic research branches of modern control engineering. After and besides the phases of rigorous mathematical research aimed at solving control tasks of nonlinear systems and processes by nonlinear control, with final proofs of stability, convergence and error behavior, mathematically based approaches for the same problem classes but with more complex requirements, e.g. concerning manipulated-variable constraints, local model approximations and optimization characterized by (programmable) algorithms, have also been developed in the last decades. Besides the developments of so-called model-free control or model-adaptive control, model predictive control is characterized by a very high adaptability to temporal and physical local conditions of the system to be controlled. The development in recent years, as well as the specificity of the approach regarding the prediction of the controlled-system behavior with simultaneous optimization or search for the suitable local control strategy, has led to the development of more and more complex MPCs with more and more complex algorithms. Accordingly, typical fields of application are systems or processes with relatively slow dynamics.
The complexity of the plant dynamics, the complexity of the MPC and the available capabilities of the microprocessor or computer hardware finally determine the technical feasibility. Consequently, in the last decades, the development of computationally expensive algorithms has dominated.
With this background as motivation, and facing the goal of developing numerically efficient algorithms for nonlinear SISO and MIMO plants, Prof. Maciej Ławryńczuk’s monograph addresses an alternative path. His detailed attention to numerical efficiency down to the equation level, using online (nowadays also denoted as data-driven) modeling and trajectory linearization combined with parameterization of the decision variables using the Laguerre functions, reduces the complexity of the MPC optimization task. A few structures of Wiener models are discussed for input-output and state-space systems. The author prefers Wiener models with a neural static part and is able to show why. The author’s MPC algorithms based on underlying neural Wiener models show better performance and robustness with respect to modeling errors and disturbances than classical MPC approaches based on inverse static models. Building on this core, explained in detail, efficient algorithms are developed as highly effective alternatives, which finally leads to a textbook for both fundamental researchers and implementation-oriented practitioners. Besides parametrizing the approximating models for short, computationally efficient horizons, the author uses two complex application examples, presented and developed in detail, to impressively demonstrate the presented MPC approaches, the setting of parameters and the resulting performance.

Yes, there are many books on MPC and its diverse manifestations, but only a few approaches pursue the author’s goals of computationally efficient realization with application to practically unknown nonlinear systems. The author shares his knowledge fully and in detail, enabling the reader to follow (and thus develop) the approaches in detail and apply them directly. Very impressive. Highly recommended reading. Perhaps someone has to add this book, with the described potential to enable many new MPC realizations ... also to Wikipedia as a new standard.
Duisburg, Germany May 2021
Dirk Söffker
Preface
Good control is necessary for economically efficient and safe operation of various processes, including industrial plants, e.g. distillation columns, chemical reactors or paper machines, and processes with embedded control systems, e.g. drones or autonomous vehicles. The Model Predictive Control (MPC) methodology is a very powerful tool which may be used to control complicated processes. MPC is an advanced control method in which a dynamical model of the process is repeatedly used online to predict its future behaviour and an optimisation procedure finds the best control policy. Typically, MPC algorithms lead to much better control quality than the classical Proportional-Integral-Derivative (PID) controller, particularly in the case of Multiple-Input Multiple-Output (MIMO) processes with strong cross-couplings, possibly with delays, and when some constraints must be imposed on process variables. Numerous classical MPC algorithms for processes described by various linear models have been developed over the years; they are widely used in practice.

Many processes are inherently nonlinear. In such cases, the rudimentary MPC algorithms which use linear models are likely to result in unacceptable control quality or may even fail to work. This book aims to present a few computationally efficient nonlinear MPC algorithms for processes described by input-output and state-space Wiener models, defined by a serial connection of a linear dynamic block followed by a nonlinear static one. The considered class of models can approximate the properties of many processes very well using a limited number of parameters. Furthermore, due to the Wiener models’ specialised structure, implementation of the presented MPC algorithms is relatively uncomplicated. For two technological processes, i.e. a neutralisation reactor and a proton exchange membrane fuel cell, the effectiveness of polynomial and neural Wiener models is thoroughly compared.

The key issue in this book is the computational efficiency of MPC.
When a nonlinear model, including the Wiener model, is used for prediction in MPC, a nonlinear constrained optimisation problem must be solved at each sampling instant online. In order to reduce computational complexity and computation time, two concepts are used. Firstly, a few approaches using online model or trajectory linearisation are possible. As a result, relatively simple quadratic optimisation tasks are obtained
(they have only one global solution), and nonlinear online optimisation is unnecessary. Secondly, parameterisation of the computed decision variables using Laguerre functions is possible to reduce the number of actually optimised variables.

This book consists of nine chapters. It also includes the list of symbols and acronyms used, the lists of references and the index.

Chapter 1 is an introduction to the field of MPC. Its basic idea and the rudimentary MPC optimisation problems are defined, and the parameterisation of the decision variables using Laguerre functions is described. A literature review on computational complexity issues of nonlinear MPC is given; many example applications of MPC algorithms in different fields are reported.

Chapter 2 describes the considered structures of Wiener models: six input-output configurations and three state-space ones. A short review of the identification methods of Wiener models is given, possible internal structures of both model parts are discussed and example applications of Wiener models are reported. Alternative structures of cascade models are mentioned.

Chapter 3 details MPC algorithms for processes described by input-output Wiener models: the classical simple MPC method based on the inverse static model, the rudimentary MPC algorithm with nonlinear optimisation, two MPC schemes with online model linearisation and two MPC methods with advanced trajectory linearisation. Variants of all algorithms with parameterisation using Laguerre functions are also described.

Chapter 4 thoroughly discusses implementation details and simulation results of the considered MPC algorithms applied to five input-output benchmark processes. Different features of the algorithms are emphasised, including set-point tracking ability and robustness when the process is affected by disturbances and modelling errors. All algorithms are compared in terms of control quality and computational time.
Chapters 5 and 6 compare the effectiveness of different Wiener model configurations to approximate the properties of two simulated technological processes: a neutralisation reactor and a proton exchange membrane fuel cell. Polynomials and neural networks are used in the nonlinear static block of the models. Properties of both model classes are thoroughly discussed. Next, simulations of various MPC algorithms are presented. A few variants of constraints imposed on the predicted value of the controlled variable, including soft approaches, are considered for the neutralisation reactor.

Chapter 7 details variants of all MPC algorithms presented in Chap. 3 for processes described by state-space Wiener models. The classical prediction method and an original, very efficient one, both of which allow for offset-free control, are presented.

Chapter 8 thoroughly discusses implementation details and simulation results of the considered MPC algorithms applied to three state-space benchmark processes. In particular, the efficiency of different methods allowing for offset-free control is compared.

Chapter 9 summarises the whole book; some future research ideas are also given.
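The parameterisation of decision variables using Laguerre functions mentioned above can be illustrated with a small numerical sketch. This is not code from the book: the pole value, the horizon and the coefficient values are arbitrary illustrative choices, and the basis construction follows the standard discrete Laguerre network. The input-increment trajectory over the control horizon is expressed as a combination of a few Laguerre functions, so only the combination coefficients need to be optimised.

```python
import numpy as np

def laguerre_basis(a, n_funcs, horizon):
    """Columns are discrete Laguerre functions evaluated over the horizon.

    a is the Laguerre pole (0 <= a < 1); the functions are orthonormal
    over an infinite horizon.
    """
    beta = 1.0 - a ** 2
    # Lower-triangular generator of the discrete Laguerre network
    A = np.zeros((n_funcs, n_funcs))
    for i in range(n_funcs):
        A[i, i] = a
        for j in range(i):
            A[i, j] = beta * (-a) ** (i - j - 1)
    # Initial values of the functions at sample 0
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(n_funcs)])
    basis = np.zeros((horizon, n_funcs))
    for m in range(horizon):
        basis[m] = L
        L = A @ L   # propagate the network one sample ahead
    return basis

# Parameterise a 20-step input-increment trajectory with 3 coefficients
Nu, n = 20, 3
M = laguerre_basis(a=0.5, n_funcs=n, horizon=Nu)
eta = np.array([1.0, -0.4, 0.1])  # the only optimised variables
du = M @ eta                      # full Nu-step trajectory from 3 numbers
print(du.shape)
```

The optimisation problem then has 3 decision variables instead of 20; the trade-off is that the achievable trajectories are restricted to the span of the chosen basis.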
This book is intended to be useful for everyone interested in advanced control, particularly graduate and Ph.D. students, researchers and practitioners who want to implement nonlinear MPC solutions in practice.

Warsaw, Poland
May 2021
Maciej Ławryńczuk
Acknowledgements
I acknowledge the permission of Elsevier to reproduce portions of the following journal article: Ławryńczuk, M., Tatjewski, P.: Offset-free state-space nonlinear predictive control for Wiener systems. Information Sciences, vol. 511, pp. 127–151, 2020.

Warsaw, Poland
May 2021
Maciej Ławryńczuk
Contents
Part I
Preliminaries
1 Introduction to Model Predictive Control
   1.1 Formulation of the Basic MPC Problem
   1.2 How to Cope with Infeasibility Problem
   1.3 Parameterisation of Decision Variables
   1.4 Computational Complexity of MPC Algorithms
   1.5 Example Applications of MPC Algorithms
   References
2 Wiener Models
   2.1 Structures of Input-Output Wiener Models
      2.1.1 SISO Wiener Model
      2.1.2 MIMO Wiener Model I
      2.1.3 MIMO Wiener Model II
      2.1.4 MIMO Wiener Model III
      2.1.5 MIMO Wiener Model IV
      2.1.6 MIMO Wiener Model V
   2.2 Structures of State-Space Wiener Models
      2.2.1 State-Space SISO Wiener Model
      2.2.2 State-Space MIMO Wiener Model I
      2.2.3 State-Space MIMO Wiener Model II
   2.3 Identification of Wiener Models
   2.4 Possible Structures of Linear and Nonlinear Parts of Wiener Models
   2.5 Example Applications of Wiener Models
   2.6 Other Structures of Cascade Models
   References
Part II
Input-Output Approaches
3 MPC Algorithms Using Input-Output Wiener Models
   3.1 MPC-inv Algorithm
   3.2 MPC-NO Algorithm
   3.3 MPC-NO-P Algorithm
   3.4 MPC-NPSL and MPC-SSL Algorithms
   3.5 MPC-NPSL-P and MPC-SSL-P Algorithms
   3.6 MPC-NPLT Algorithm
   3.7 MPC-NPLT-P Algorithm
   3.8 MPC-NPLPT Algorithm
   3.9 MPC-NPLPT-P Algorithm
   References
4 MPC of Input-Output Benchmark Wiener Processes
   4.1 Simulation Set-Up and Comparison Methodology
   4.2 The SISO Process
      4.2.1 Description of the SISO Process
      4.2.2 Implementation of MPC Algorithms for the SISO Process
      4.2.3 MPC of the SISO Process
   4.3 The SISO Process with Complex Dynamics: Classical MPC Versus MPC with Parameterisation
      4.3.1 Description of the SISO Process with Complex Dynamics
      4.3.2 Implementation of MPC Algorithms for the SISO Process with Complex Dynamics
      4.3.3 MPC of the SISO Process with Complex Dynamics
   4.4 The MIMO Process A with Two Inputs and Two Outputs: Model I
      4.4.1 Description of the MIMO Process A
      4.4.2 Implementation of MPC Algorithms for the MIMO Process A
      4.4.3 MPC of the MIMO Process A
   4.5 The MIMO Process B with Ten Inputs and Two Outputs: Model I Versus Model III
      4.5.1 Description of the MIMO Process B
      4.5.2 Implementation of MPC Algorithms for the MIMO Process B
      4.5.3 MPC of the MIMO Process B
   4.6 The MIMO Process C with Two Inputs, Two Outputs and Cross Couplings: Model II
      4.6.1 Description of the MIMO Process C
      4.6.2 Implementation of MPC Algorithms for the MIMO Process C
      4.6.3 MPC of the MIMO Process C
   4.7 The Influence of Process Dimensionality on the Calculation Time
   References
5 Modelling and MPC of the Neutralisation Reactor Using Wiener Models
   5.1 Description of the Neutralisation Reactor
   5.2 Modelling of the Neutralisation Reactor for MPC
   5.3 Implementation of MPC Algorithms for the Neutralisation Reactor
   5.4 MPC of the Neutralisation Reactor
   5.5 MPC of the Neutralisation Reactor with Constraints Imposed on the Predicted Controlled Variable
   References
6 Modelling and MPC of the Proton Exchange Membrane Fuel Cell Using Wiener Models
   6.1 Control of Proton Exchange Membrane Fuel Cells
   6.2 Description of the Proton Exchange Membrane Fuel Cell
   6.3 Modelling of the Proton Exchange Membrane Fuel Cell for MPC
   6.4 Implementation of MPC Algorithms for the Proton Exchange Membrane Fuel Cell
   6.5 MPC of the Proton Exchange Membrane Fuel Cell
   References
Part III State-Space Approaches

7 MPC Algorithms Using State-Space Wiener Models
   7.1 MPC-inv Algorithm in State-Space
   7.2 MPC-NO Algorithm in State-Space
   7.3 MPC-NO-P Algorithm in State-Space
   7.4 MPC-NPSL and MPC-SSL Algorithms in State-Space
   7.5 MPC-NPSL-P and MPC-SSL-P Algorithms in State-Space
   7.6 MPC-NPLT Algorithm in State-Space
   7.7 MPC-NPLT-P Algorithm in State-Space
   7.8 MPC-NPLPT Algorithm in State-Space
   7.9 MPC-NPLPT-P Algorithm in State-Space
   7.10 State Estimation
   References
8 MPC of State-Space Benchmark Wiener Processes
   8.1 The State-Space SISO Process
      8.1.1 Description of the State-Space SISO Process
      8.1.2 Implementation of MPC Algorithms for the State-Space SISO Process
      8.1.3 MPC of the State-Space SISO Process
   8.2 The State-Space MIMO Process A with Two Inputs and Two Outputs: Model I
      8.2.1 Description of the State-Space MIMO Process A
      8.2.2 Implementation of MPC Algorithms for the State-Space MIMO Process A
      8.2.3 MPC of the State-Space MIMO Process A
   8.3 The State-Space MIMO Process B with Ten Inputs and Two Outputs: Model I
      8.3.1 Description of the State-Space MIMO Process B
      8.3.2 Implementation of MPC Algorithms for the State-Space MIMO Process B
      8.3.3 MPC of the State-Space MIMO Process B
   8.4 The Influence of Process Dimensionality on the Calculation Time
9 Conclusions
Index
Notation
General Notation

a, b, ...                     Variables or constants, scalars or vectors
A, B, ...                     Matrices
a^T, A^T                      The transpose of the vector a and the matrix A
diag(a_1, ..., a_n)           The diagonal matrix with a_1, ..., a_n on the diagonal
dy(x)/dx|_{x=x̄}               The derivative of the function y(x) at the point x̄
∂y(x)/∂x_i|_{x=x̄}             The partial derivative of the function y(x) = y(x_1, ..., x_{n_x}) with respect to the scalar x_i at the point x̄
f(·), g(·), ...               Scalar or vector functions
0_{m×n}, I_{m×n}              Zeros and identity matrices of dimensionality m × n
q^{-1}                        The discrete unit time-delay operator
‖x‖²_A                        x^T A x
Processes and Models

a_i, b_i, a_i^m, b_i^{m,n}    The parameters of the linear dynamic part of the input-output Wiener model
a_{i,j}, b_{i,j}, c_{i,j}     The parameters of the linear dynamic part of the state-space Wiener model
A(q^{-1}), B(q^{-1})          The polynomial matrices that describe the linear dynamic part of the input-output Wiener model
A, B, C                       The matrices that describe the linear dynamic part of the state-space Wiener model
g(·), g^m(·)                  The functions that describe the nonlinear static blocks of the Wiener model
g̃(·), g̃^m(·)                  The functions that describe the inverse models of the nonlinear static blocks of the Wiener model
k                             The discrete time sampling instant (k = 0, 1, 2, ...)
K                             The number of hidden nodes in a neural network, the degree of a polynomial
n_A, n_B, n_A^m, n_B^{m,n}    The constants that define the order of dynamics of the linear dynamic part of the input-output Wiener model
n_u                           The number of inputs (manipulated variables)
n_v                           The number of outputs of the linear dynamic block of the Wiener model
n_x                           The number of state variables
n_y                           The number of outputs (controlled variables)
u(k)                          The input vector at the sampling instant k
v(k)                          The vector of outputs of the linear dynamic block of the Wiener model at the sampling instant k
x(k)                          The state vector at the sampling instant k
x̃(k)                          The estimated state vector at the sampling instant k
y(k)                          The output vector at the sampling instant k
Δu(k+p|k)                     u(k+p|k) − u(k+p−1|k)
MPC Algorithms

d(k)                          The vector of unmeasured output disturbances at the sampling instant k
G(k)                          The step-response matrix of the model linearised at the sampling instant k
G                             The constant step-response matrix of the linear dynamic part of the Wiener model
H(k), H^t(k)                  The matrix of derivatives of the predicted output trajectory with respect to the future input trajectory at the sampling instant k
J(k)                          The cost-function minimised in MPC
K(k), K^m(k), K^{m,n}(k)      The gains of the nonlinear static part of the Wiener model at the sampling instant k
N                             The prediction horizon
N_u                           The control horizon
s_p(k), s_p^{m,n}(k), S_p(k)  The step-response coefficients and the step-response matrix of the model linearised at the sampling instant k for the sampling instant p
s̄_p, s̄_p^{m,n}, S̄_p           The constant step-response coefficients and the constant step-response matrix of the linear dynamic part of the Wiener model for the sampling instant p
u(k+p|k)                      The process input vector calculated for the sampling instant k+p at the sampling instant k
u(k)                          The process input vector calculated at the sampling instant k over the control horizon
u_min, u_max                  The vectors of magnitude constraints imposed on process inputs
u^min, u^max                  The vectors of magnitude constraints imposed on process inputs over the control horizon
v(k+p|k)                      The output vector of the linear dynamic part of the Wiener model predicted for the sampling instant k+p at the sampling instant k
v(k)                          The output vector of the linear dynamic part of the Wiener model predicted at the sampling instant k over the prediction horizon
x(k), x̃(k)                    The vectors of measured and estimated state variables at the sampling instant k
x^0(k+p|k)                    The state free trajectory vector predicted for the sampling instant k+p at the sampling instant k
x^0(k)                        The state free trajectory vector predicted at the sampling instant k over the prediction horizon
x̂(k+p|k)                      The state trajectory vector predicted for the sampling instant k+p at the sampling instant k
x̂(k)                          The state trajectory vector predicted at the sampling instant k over the prediction horizon
y^0(k+p|k)                    The output free trajectory vector predicted for the sampling instant k+p at the sampling instant k
y^0(k)                        The output free trajectory vector predicted at the sampling instant k over the prediction horizon
ŷ(k+p|k)                      The output trajectory vector predicted for the sampling instant k+p at the sampling instant k
ŷ(k)                          The output trajectory vector predicted at the sampling instant k over the prediction horizon
y_min, y_max                  The vectors of magnitude constraints imposed on predicted output variables
y^min, y^max                  The vectors of magnitude constraints imposed on predicted output variables over the prediction horizon
y^sp(k+p|k)                   The output set-point trajectory vector for the sampling instant k+p known at the sampling instant k
y^sp(k)                       The output set-point trajectory vector at the sampling instant k over the prediction horizon
Δu(k+p|k)                     The vector of input increments calculated for the sampling instant k+p at the sampling instant k
Δu(k)                         The vector of input increments calculated for the sampling instant k over the control horizon (the vector of decision variables calculated in MPC)
Δu_min, Δu_max                The vectors of constraints imposed on increments of the input variables
Δu^min, Δu^max                The vectors of constraints imposed on increments of the input variables over the control horizon
ε^min(k), ε^max(k)            The vectors that define the degree of hard output constraints’ violation, constant over the prediction horizon
ε^min(k+p), ε^max(k+p)        The vectors that define the degree of hard output constraints’ violation, varying over the prediction horizon
ε^min(k), ε^max(k)            The vectors that define the degree of hard output constraints’ violation over the prediction horizon
λ, λ_{n,p}, Λ, Λ_p            The weighting coefficients and the weighting matrices related to control increments
μ_{m,p}, M, M_p               The weighting coefficient and the weighting matrices related to the predicted output control errors
ν(k)                          The vector of unmeasured state disturbances at the sampling instant k
ρ_min, ρ_max                  The weighting coefficients of the penalties related to violation of hard output constraints
Acronyms

DMC           Dynamic Matrix Control
GPC           Generalized Predictive Control
IMC           Internal Model Control
LMPC          MPC algorithm based on a linear model
LS-SVM        Least Squares Support Vector Machine
MIMO          Multiple-Input Multiple-Output
MISO          Multiple-Input Single-Output
MLP           Multi-Layer Perceptron feedforward neural network
MPC           Model Predictive Control
MPC-inv       MPC algorithm based on the inverse model of the nonlinear static part of the Wiener system
MPC-NO        MPC algorithm with Nonlinear Optimisation
MPC-NO-P      MPC-NO algorithm with Parameterisation
MPC-NPLPT     MPC algorithm with Nonlinear Prediction and Linearisation along the Predicted Trajectory
MPC-NPLPT-P   MPC-NPLPT algorithm with Parameterisation
MPC-NPLT      MPC algorithm with Nonlinear Prediction and Linearisation along the Trajectory
MPC-NPLT1     MPC-NPLT algorithm with linearisation along the trajectory defined by the input signals applied at the previous sampling instant
MPC-NPLT2     MPC-NPLT algorithm with linearisation along the trajectory defined by the optimal input signals calculated at the previous sampling instant
MPC-NPLT-P    MPC-NPLT algorithm with Parameterisation
MPC-NPSL      MPC algorithm with Nonlinear Prediction and Simplified model Linearisation for the current operating point
MPC-NPSL-P    MPC-NPSL algorithm with Parameterisation
MPC-SSL       MPC algorithm with Simplified Successive model Linearisation for the current operating point
MPC-SSL-P     MPC-SSL algorithm with Parameterisation
PID           Proportional-Integral-Derivative controller
RBF           Radial Basis Function feedforward neural network
SISO          Single-Input Single-Output
SQP           Sequential Quadratic Programming
SVM           Support Vector Machine
Part I
Preliminaries
Chapter 1
Introduction to Model Predictive Control
Abstract This chapter is an introduction to the field of MPC. Its basic idea and the rudimentary MPC optimisation problems are defined, at first for Single-Input Single-Output (SISO) processes and next for Multiple-Input Multiple-Output (MIMO) ones. A method to cope with infeasibility problems caused by constraints imposed on the predicted controlled variables is presented. Next, parameterisation of the decision variables using Laguerre functions, in order to reduce the number of actually optimised variables, is described. A classification of MPC algorithms is given and computational complexity issues are discussed. Finally, some example applications of MPC algorithms in different fields are reported.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_1

1.1 Formulation of the Basic MPC Problem

The objective of a good control algorithm is to calculate repeatedly on-line the value of the manipulated variable (or the values of many manipulated variables) that leads to good process behaviour [36]. Let us discuss the term good process behaviour using two examples.

The first process example is a residential building equipped with an underfloor radiant heating system based on electric heating foils [99]. From the point of view of control engineering, the process is very simple since it has only one manipulated variable (process input), which is the value of the current (or the voltage) applied to the foils, and only one controlled variable (process output), which is the average temperature inside the building. There are two objectives of the controller: (a) it must increase the temperature quickly when the user increases the temperature set-point, i.e. the value of the required temperature; (b) it must stabilise the temperature when the outside temperature drops. The first objective is set-point tracking, i.e. the process output must follow changes of its set-point. The second objective is compensation of disturbances, i.e. the process output must be (approximately) constant when the process is affected by external disturbances, also called uncontrolled process inputs. In our simple example, it is only possible to increase the temperature by increasing the current (or the voltage), but it is impossible to reduce the temperature. It means that it works fine in the
two above situations, but when the user wants to reduce the set-point or the outside temperature increases, the only possible action is to reduce the heating, switch it off or simply ventilate the building. Of course, in more advanced solutions, it is possible to both heat and cool. Furthermore, it may be necessary to stabilise not only the temperature but also the humidity. An important application of such a control system may be found in greenhouses, where it is necessary to maintain constant temperature and humidity values for the proper growth of plants. Different parts of the greenhouse may be heated separately to obtain different local temperature conditions. In such a case, there are many manipulated, controlled and disturbance variables. In addition to set-point tracking and compensation of disturbances, the calculated values of the manipulated signals must satisfy some constraints. Typically, they have limited values and rates of change caused by the physical limits of actuators. Moreover, one may imagine that some constraints are imposed on the controlled variables, e.g. temperature and humidity should stay within some ranges.

The second process example is a car. Its control is significantly more complicated than the simple temperature control task discussed above. It is because a driver must manipulate numerous variables, such as the accelerator, clutch and brake pedals, the steering wheel and the gear lever. There are many controlled variables, such as the position on the road, speed and acceleration. The driver controls the car in such a way that the position, speed and acceleration set-point trajectories are followed. Moreover, the influence of many external disturbances is compensated, e.g. variable road slope, type of surface, side wind. Unlike in the first process example, the driver not only controls the process but also calculates the set-point trajectories on-line, i.e. adjusts them to the current road conditions.
Of course, there are numerous constraints which must be taken into account during calculation of the values of the manipulated variables and adjusting the trajectories. Both manipulated and controlled variables must be constrained in this example.

The classical Proportional-Integral-Derivative (PID) controller in the continuous-time domain is described by the following rule

$$u(t) = u_0 + K\left(e(t) + \frac{1}{T_{\mathrm{i}}}\int_0^t e(\tau)\,\mathrm{d}\tau + T_{\mathrm{d}}\,\frac{\mathrm{d}e(t)}{\mathrm{d}t}\right) \qquad (1.1)$$
The control error is defined as the difference between the set-point and the current measured value of the controlled variable, i.e. e(t) = y^sp(t) − y(t). The value of the manipulated variable u for the current time t is a linear function of three parts: the proportional part, which takes into account the current control error e(t); the integral part, which takes into account the past errors; and the derivative part, which takes into account the rate of change of the error. The tuning parameters are: the proportional gain K, the integration time-constant Ti and the derivative time-constant Td. Using Euler's backward differentiation and trapezoidal integration, in the discrete-time domain, the value of the manipulated variable for the current sampling instant k is

$$u(k) = u(k-1) + r_0 e(k) + r_1 e(k-1) + r_2 e(k-2) \qquad (1.2)$$
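The incremental law (1.2) can be sketched directly in code. The following is a minimal illustration, not the book's implementation: the class name and the numerical values are assumptions, and the gains r0, r1, r2 are derived from K, Ti, Td and the sampling time Ts via the backward-difference and trapezoidal discretisation mentioned above.

```python
# A minimal sketch of the incremental (velocity-form) PID law (1.2).
# Gains r0, r1, r2 follow from K, Ti, Td and the sampling time Ts using
# Euler's backward differentiation and trapezoidal integration.
class IncrementalPID:
    def __init__(self, K, Ti, Td, Ts):
        # Coefficients of u(k) = u(k-1) + r0*e(k) + r1*e(k-1) + r2*e(k-2)
        self.r0 = K * (1.0 + Ts / (2.0 * Ti) + Td / Ts)
        self.r1 = K * (Ts / (2.0 * Ti) - 1.0 - 2.0 * Td / Ts)
        self.r2 = K * Td / Ts
        self.u_prev = 0.0    # u(k-1)
        self.e_prev = 0.0    # e(k-1)
        self.e_prev2 = 0.0   # e(k-2)

    def step(self, setpoint, y):
        e = setpoint - y     # e(k) = y_sp(k) - y(k)
        u = (self.u_prev + self.r0 * e
             + self.r1 * self.e_prev + self.r2 * self.e_prev2)
        self.u_prev, self.e_prev2, self.e_prev = u, self.e_prev, e
        return u
```

Note that, as discussed above, r0, r1 and r2 have no direct physical interpretation; only K, Ti and Td do.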
where e(k), e(k − 1) and e(k − 2) denote the values of the control error at the sampling instants k, k − 1 and k − 2, respectively, u(k − 1) is the value of the manipulated variable at the sampling instant k − 1 and r0, r1, r2 are parameters. They are calculated from the settings K, Ti, Td and the chosen sampling time of the controller. If the properties of the process are (approximately) linear, the PID controller proves to be very efficient in numerous applications. Nevertheless, the PID controller has the following limitations:

1. The PID control law (1.1) or (1.2) is linear. In the case of nonlinear processes, the quality of control may not be satisfactory, in particular when the set-point changes are significant and fast or the external disturbances are strong.
2. The PID controller works fine when the process delay is not significant. Conversely, PID control of delayed dynamical systems is usually not good.
3. In its basic version, the PID controller does not include constraints. Although simple limiters may easily enforce limits of the manipulated variable and constraints on its rate of change, there is no systematic way to enforce satisfaction of constraints imposed on the controlled variable.
4. The PID controller is a natural choice when the controlled process has one manipulated variable and one controlled variable. In the case of a dynamical process with many inputs and many outputs, the basic problem is finding out which manipulated variable has the strongest influence on each controlled one. Next, several classical single-loop PID controllers are used. Such an approach works correctly when the consecutive manipulated variables strongly impact the consecutive controlled ones, but when one process input impacts two or more outputs, such a control structure does not work. Moreover, the number of process inputs and outputs must be equal.
5. The current value of the manipulated variable generated by the PID controller depends only on the current and past errors, which is clear when we consider the discrete-time implementation (1.2). The derivative part tries to use some information about the future control error, but only through its current and previous measurements.
6. The PID controller is tuned in practice using some simple rules, e.g. the famous Ziegler-Nichols procedure, or simply by trial and error. Although the interpretation of the continuous-time parameters K, Ti and Td is straightforward, the parameters r0, r1 and r2 of the discrete-time controller have no physical interpretation.

Having discussed the objectives of a good control algorithm and the properties of the PID structure, we will discuss the basic formulation of MPC. Let us recall the problem of controlling a car by a driver. Humans do not use mathematical equations to calculate the values of the manipulated variables. Instead, in our mind, we repeatedly do the following:

1. We collect all possible information, i.e. we observe the road and monitor the car dashboard.
2. Using a model of the car, i.e. knowing how the car reacts, we predict the behaviour of the car, i.e. its position, speed and acceleration, over some time horizon.
3. We optimise the behaviour of the car, i.e. we find out how the car should be controlled in order to satisfy all control objectives. We find not only the current values of the manipulated variables but also assess their future values.
4. Prediction of the future car state and optimisation of the current and future control actions are coupled, i.e. we have many possible control policies, we assess how successful they are and we choose the best one.
5. We constantly repeat the above steps as we receive new information, assess the results of our actions and observe how the disturbances change. The traffic and road conditions are never constant. The horizon is moved each time we start prediction and optimisation.

Figure 1.1 illustrates the above. Let us consider the information collected by driver A (measurements) and the decisions taken at three different time instants denoted as t1, t2 and t3, respectively. Initially, at time t1, for the prediction horizon used, driver A is able to see the speed limit sign and his or her decision is to reduce speed to 50 km/h. The prediction horizon is too short to notice cars B and C. Next, at time t2, the prediction horizon makes it possible to notice car B, which is approaching from the right-side road. Because car B moves very slowly, driver A decides to continue driving at constant speed; he or she does not have to wait, as car B must give way. For the prediction horizon used, driver A does not notice car C. Finally, at time t3, driver A sees car C. He or she is unable to overtake it since car D approaches from the opposite direction, in the second lane. Probably, provided that no other obstacles exist, overtaking will be possible shortly. Let us point out that all decisions are made using predictions of the future behaviour of all drivers, possible drivers' actions are predicted using some models and all existing constraints are taken into account.
Now, it is time to formulate the basic MPC problem using mathematics. At first, let us consider a SISO process. The input of the controlled process is denoted by u, the output is denoted by y. At each consecutive sampling instant k, k = 1, 2, 3, . . ., the vector of the future increments of the manipulated variable

$$\Delta u(k) = \begin{bmatrix} \Delta u(k|k) \\ \vdots \\ \Delta u(k+N_{\mathrm{u}}-1|k) \end{bmatrix} \qquad (1.3)$$

is calculated on-line. The symbol Δu(k + p|k) denotes the increment of the manipulated variable for the sampling instant k + p calculated at the current sampling instant k; Nu is the control horizon, which defines the number of decision variables (1.3). The first increment is

$$\Delta u(k|k) = u(k|k) - u(k-1) \qquad (1.4)$$

and the following ones are

$$\Delta u(k+p|k) = u(k+p|k) - u(k+p-1|k) \qquad (1.5)$$
Fig. 1.1 Situations on the road and driver A's decisions for three example time instants t1, t2, t3
for p = 1, . . . , Nu − 1. The symbol u(k + p|k) denotes the value of the manipulated variable for the sampling instant k + p calculated at the current sampling instant k; u(k − 1) is the value of the manipulated variable used (applied to the process) at the previous sampling instant. In the simplest case, the vector of decision variables (1.3) is calculated on-line from an unconstrained optimisation problem

$$\min_{\Delta u(k)} \{ J(k) \} \qquad (1.6)$$

Typically, the minimised objective function (the cost-function) consists of two parts

$$J(k) = \sum_{p=1}^{N} \left( y^{\mathrm{sp}}(k+p|k) - \hat{y}(k+p|k) \right)^2 + \lambda \sum_{p=0}^{N_{\mathrm{u}}-1} \left( \Delta u(k+p|k) \right)^2 \qquad (1.7)$$
The first part of the MPC cost-function measures the predicted quality of control since the differences between the set-point trajectory and the predicted trajectory of the output variable (i.e. the predicted control errors) over the prediction horizon N ≥ Nu are taken into account. The set-point value for the sampling instant k + p known at the current sampling instant k is denoted by y^sp(k + p|k); the predicted value of the output variable for the sampling instant k + p calculated at the current instant is denoted by ŷ(k + p|k). The future values of the set-point are usually not known, hence only the scalar set-point value for the current sampling instant, denoted by y^sp(k), is used, i.e. y^sp(k + 1|k) = · · · = y^sp(k + N|k) = y^sp(k). Such an approach is typically used in control of industrial processes in which changes of the set-point are very rare, but the controller must compensate for changes of the disturbances. However, in some applications, e.g. in autonomous vehicles and robotics, the set-point trajectory may not be constant over the prediction horizon. The second part of the MPC cost-function is a penalty term. It is used to reduce excessive changes of the manipulated variable; λ > 0 is a weighting coefficient. The greater its value, the lower the increments of the manipulated variable and, hence, the slower the control. Because in practice the control horizon is shorter than the prediction one, it is assumed that u(k + p|k) = u(k + Nu − 1|k) for p = Nu, . . . , N, which means that Δu(k + Nu|k) = · · · = Δu(k + N|k) = 0. Although at each sampling instant as many as Nu future increments of the manipulated variable (1.3) are calculated, only the first element of this sequence is actually applied to the process, i.e. the increment for the current sampling instant k. Let the optimal vector calculated from the MPC optimisation problem be denoted by Δu^opt(k).
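The unconstrained formulation (1.6)-(1.7) together with the receding-horizon principle can be sketched numerically. The following is a minimal illustration, not the book's implementation: the first-order model y(k+1) = a y(k) + b u(k), the SciPy optimiser and all numerical values (a, b, horizons, λ) are assumptions made for the example.

```python
# A minimal sketch of unconstrained SISO MPC, problem (1.6)-(1.7):
# at every sampling instant the whole sequence of Nu increments is
# optimised, but only the first one is applied to the process.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.1   # illustrative linear model used for prediction
N, Nu = 10, 3     # prediction and control horizons
lam = 0.1         # weighting coefficient lambda

def predict(du, y0, u_prev):
    """Predicted outputs y_hat(k+1|k)..y_hat(k+N|k) for increments du."""
    y_hat, y, u = [], y0, u_prev
    for p in range(N):
        if p < Nu:
            u = u + du[p]   # u(k+p|k); held constant after the control horizon
        y = a * y + b * u
        y_hat.append(y)
    return np.array(y_hat)

def mpc_step(y0, u_prev, y_sp):
    """Minimise the cost (1.7) and apply only the first increment."""
    def cost(du):
        y_hat = predict(du, y0, u_prev)
        return np.sum((y_sp - y_hat) ** 2) + lam * np.sum(du ** 2)
    res = minimize(cost, np.zeros(Nu))
    return u_prev + res.x[0]

# Closed-loop simulation; the horizon recedes one step per iteration.
y, u = 0.0, 0.0
for k in range(40):
    u = mpc_step(y, u, y_sp=1.0)
    y = a * y + b * u   # here the "real" process is the model itself
```

In this toy closed loop the output settles at the set-point and the input at its corresponding steady-state value.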
The current optimal value of the manipulated variable is applied to the process

$$u(k) = \Delta u^{\mathrm{opt}}(k|k) + u(k-1) \qquad (1.8)$$

where Δu^opt(k|k) is the first element of the vector Δu^opt(k). At the next sampling instant (k + 1), the output value of the process is measured (the state variables may also be measured or estimated), the prediction horizon is shifted one step forward
Fig. 1.2 The general structure of the MPC algorithm
and the whole procedure described above is repeated. As a result, the MPC algorithm works in the closed loop, i.e. with feedback from the measured process output. Figure 1.2 depicts the general structure of the MPC algorithm. It is assumed that the time necessary to solve the MPC optimisation problem is much shorter than the sampling time.

In practical applications, it is necessary to take into account existing constraints. First of all, the magnitude of the manipulated variable may be constrained. Such constraints result from the physical limits of the actuator

$$u^{\mathrm{min}} \le u(k+p|k) \le u^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1 \qquad (1.9)$$

where u^min and u^max are the minimal and maximal values of the manipulated variable, respectively. It is interesting to notice that all calculated values of the manipulated variable over the whole control horizon are limited, not only the value for the current sampling instant, i.e. u(k|k). Secondly, the rate of change of the manipulated variable may be constrained

$$\Delta u^{\mathrm{min}} \le \Delta u(k+p|k) \le \Delta u^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1 \qquad (1.10)$$

where Δu^min and Δu^max are the maximal negative and maximal (positive) changes of the manipulated variable, respectively (usually Δu^min = −Δu^max). All calculated increments of the manipulated variable over the whole control horizon are limited, not only the increment for the current sampling instant, i.e. Δu(k|k). Thirdly, the predicted values of the process output variable may also be limited, which is usually
enforced by some technological reasons

$$y^{\mathrm{min}} \le \hat{y}(k+p|k) \le y^{\mathrm{max}}, \quad p = 1, \ldots, N \qquad (1.11)$$
where y^min and y^max are the minimal and maximal values of the predicted output variable, respectively. All predictions over the prediction horizon N are constrained.

When the constraints are present, the vector of decision variables (1.3) is calculated at each sampling instant from an optimisation problem in which the cost-function (1.7) is minimised and all the constraints (1.9), (1.10) and (1.11) are taken into account. Hence, the rudimentary MPC constrained optimisation problem is

$$\begin{aligned}
\min_{\Delta u(k)} \; & \left\{ J(k) = \sum_{p=1}^{N} \left( y^{\mathrm{sp}}(k+p|k) - \hat{y}(k+p|k) \right)^2 + \lambda \sum_{p=0}^{N_{\mathrm{u}}-1} \left( \Delta u(k+p|k) \right)^2 \right\} \\
\text{subject to} \; & u^{\mathrm{min}} \le u(k+p|k) \le u^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1 \\
& \Delta u^{\mathrm{min}} \le \Delta u(k+p|k) \le \Delta u^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1 \\
& y^{\mathrm{min}} \le \hat{y}(k+p|k) \le y^{\mathrm{max}}, \quad p = 1, \ldots, N
\end{aligned} \qquad (1.12)$$

The number of decision variables of the optimisation problem (1.12) is Nu; the number of constraints is 4Nu + 2N. All things considered, in the case of the SISO constrained MPC algorithm, at each sampling instant k, the following steps are performed on-line:

1. The current value of the controlled variable, y(k), is measured; the state variables may be measured or estimated when necessary.
2. The future sequence of increments of the manipulated variable is calculated from the optimisation problem (1.12).
3. The first element of the determined sequence is applied to the process (Eq. (1.8)).

Having discussed the MPC formulation for the SISO case, we will concentrate on the more general MIMO problem. Let us assume that the number of process inputs is denoted by nu and the number of process outputs is denoted by ny. In this book, we use two notation methods: scalars and vectors. When possible, it is very convenient to use vectors, but sometimes the consecutive scalar signals must be used. The vector of manipulated variables is u = [u1 . . . unu]^T and the vector of controlled variables is y = [y1 . . . yny]^T. The vector of decision variables of the MPC algorithm (1.3) is hence of length nuNu. The minimised MPC cost-function for the MIMO case is
$$J(k) = \sum_{p=1}^{N} \sum_{m=1}^{n_{\mathrm{y}}} \mu_{p,m} \left( y_m^{\mathrm{sp}}(k+p|k) - \hat{y}_m(k+p|k) \right)^2 + \sum_{p=0}^{N_{\mathrm{u}}-1} \sum_{n=1}^{n_{\mathrm{u}}} \lambda_{p,n} \left( \Delta u_n(k+p|k) \right)^2 \qquad (1.13)$$
In comparison with the SISO case (Eq. (1.7)), in the first part of the cost-function (1.13), we consider the predicted control errors for all ny controlled variables over the whole prediction horizon. Similarly, in the second part of the cost-function, the increments of all nu manipulated variables are taken into account over the whole control horizon. The weighting coefficients μ_{p,m} ≥ 0 make it possible to differentiate the influence of the predicted control errors of the consecutive outputs within the prediction horizon. The coefficients λ_{p,n} > 0 are used not only to differentiate the influence of the control increments of the consecutive inputs of the process within the control horizon but also to establish the necessary scale between both parts of the cost-function. The MPC cost-function and the resulting optimisation problems may be conveniently and compactly derived, formulated and implemented using vector-matrix notation rather than scalars. The cost-function (1.13) may be expressed in the following form

$$J(k) = \sum_{p=1}^{N} \left\| y^{\mathrm{sp}}(k+p|k) - \hat{y}(k+p|k) \right\|^2_{M_p} + \sum_{p=0}^{N_{\mathrm{u}}-1} \left\| \Delta u(k+p|k) \right\|^2_{\Lambda_p} \qquad (1.14)$$
Now, the set-point vector for the sampling instant k + p known at the current sampling instant k is denoted by y^sp(k + p|k) and the predicted vector of the output variables for the sampling instant k + p calculated at the current sampling instant k is denoted by ŷ(k + p|k); both vectors are of length ny. The matrix M_p = diag(μ_{p,1}, . . . , μ_{p,ny}) ≥ 0 is of dimensionality ny × ny, the matrix Λ_p = diag(λ_{p,1}, . . . , λ_{p,nu}) > 0 is of dimensionality nu × nu. For the process with nu manipulated variables, the magnitude constraints are

$$u_n^{\mathrm{min}} \le u_n(k+p|k) \le u_n^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1, \; n = 1, \ldots, n_{\mathrm{u}} \qquad (1.15)$$

where u_n^min and u_n^max are the minimal and maximal values of the manipulated variable u_n, respectively. The constraints imposed on the rate of change of the manipulated variables are

$$\Delta u_n^{\mathrm{min}} \le \Delta u_n(k+p|k) \le \Delta u_n^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1, \; n = 1, \ldots, n_{\mathrm{u}} \qquad (1.16)$$

where Δu_n^min and Δu_n^max are the maximal negative and maximal (positive) changes of the manipulated variable u_n, respectively. The constraints imposed on the predicted values of the process output variables are
$$y_m^{\mathrm{min}} \le \hat{y}_m(k+p|k) \le y_m^{\mathrm{max}}, \quad p = 1, \ldots, N, \; m = 1, \ldots, n_{\mathrm{y}} \qquad (1.17)$$

where y_m^min and y_m^max are the minimal and maximal values of the predicted variable y_m, respectively. If we use the vector notation, the constraints are defined by the following vectors of length nu

$$u^{\mathrm{min}} = \begin{bmatrix} u_1^{\mathrm{min}} \\ \vdots \\ u_{n_{\mathrm{u}}}^{\mathrm{min}} \end{bmatrix},\quad u^{\mathrm{max}} = \begin{bmatrix} u_1^{\mathrm{max}} \\ \vdots \\ u_{n_{\mathrm{u}}}^{\mathrm{max}} \end{bmatrix},\quad \Delta u^{\mathrm{min}} = \begin{bmatrix} \Delta u_1^{\mathrm{min}} \\ \vdots \\ \Delta u_{n_{\mathrm{u}}}^{\mathrm{min}} \end{bmatrix},\quad \Delta u^{\mathrm{max}} = \begin{bmatrix} \Delta u_1^{\mathrm{max}} \\ \vdots \\ \Delta u_{n_{\mathrm{u}}}^{\mathrm{max}} \end{bmatrix} \qquad (1.18)$$

and the following vectors of length ny

$$y^{\mathrm{min}} = \begin{bmatrix} y_1^{\mathrm{min}} \\ \vdots \\ y_{n_{\mathrm{y}}}^{\mathrm{min}} \end{bmatrix},\quad y^{\mathrm{max}} = \begin{bmatrix} y_1^{\mathrm{max}} \\ \vdots \\ y_{n_{\mathrm{y}}}^{\mathrm{max}} \end{bmatrix} \qquad (1.19)$$
We may notice that the above three scalar constraints given by Eqs. (1.15), (1.16) and (1.17) may be rewritten in the same way as it is done for the SISO case, i.e. by Eqs. (1.9), (1.10) and (1.11). Now we may formulate the general MPC optimisation problem for MIMO processes. Using the cost-function (1.14), the scalar constraints (1.15), (1.16), (1.17) and the definitions (1.18)-(1.19), we have

$$\begin{aligned}
\min_{\Delta u(k)} \; & \left\{ J(k) = \sum_{p=1}^{N} \left\| y^{\mathrm{sp}}(k+p|k) - \hat{y}(k+p|k) \right\|^2_{M_p} + \sum_{p=0}^{N_{\mathrm{u}}-1} \left\| \Delta u(k+p|k) \right\|^2_{\Lambda_p} \right\} \\
\text{subject to} \; & u^{\mathrm{min}} \le u(k+p|k) \le u^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1 \\
& \Delta u^{\mathrm{min}} \le \Delta u(k+p|k) \le \Delta u^{\mathrm{max}}, \quad p = 0, \ldots, N_{\mathrm{u}}-1 \\
& y^{\mathrm{min}} \le \hat{y}(k+p|k) \le y^{\mathrm{max}}, \quad p = 1, \ldots, N
\end{aligned} \qquad (1.20)$$

where the norm is defined as ‖x‖²_A = xᵀAx (the matrix A is square). The above optimisation problem corresponds with the task (1.12) for the SISO case. The number of decision variables of the optimisation problem (1.20) is nuNu; the number of constraints is 4nuNu + 2nyN. Although at each sampling instant as many as nuNu future increments of the manipulated variables (1.3) are calculated, only the first nu elements of this sequence are actually applied to the process, i.e. the increments for the current sampling instant k. The current optimal values of the manipulated variables applied to the process are calculated from Eq. (1.8), the same equation used in the SISO case, but now all vectors, i.e. u(k), Δu^opt(k|k) and u(k − 1), are of length nu.
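The weighted norm ‖x‖²_A = xᵀAx used in the cost-functions above reduces, for a diagonal weighting matrix, to a weighted sum of squares. A one-line numerical check (all values illustrative):

```python
# The weighted norm ||x||_A^2 = x^T A x from (1.14) and (1.20),
# evaluated for a diagonal weighting matrix with illustrative entries.
import numpy as np

x = np.array([1.0, -2.0, 3.0])
A = np.diag([0.5, 1.0, 2.0])
norm_sq = x @ A @ x
# For diagonal A this is just a weighted sum of squares.
assert np.isclose(norm_sq, 0.5 * 1.0 + 1.0 * 4.0 + 2.0 * 9.0)
```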
In the case of the MIMO constrained MPC algorithm, at each sampling instant k, the following steps are performed on-line:

1. The current values of the controlled variables, y1(k), . . . , yny(k), are measured; the state variables may be measured or estimated when necessary.
2. The future sequence of increments of the manipulated variables is calculated from the optimisation problem (1.20).
3. The first nu elements of the determined sequence are applied to the process (Eq. (1.8)).

Now, let us find a more compact representation of the rudimentary MIMO MPC optimisation problem (1.20). Let us define the set-point trajectory vector

$$y^{\mathrm{sp}}(k) = \begin{bmatrix} y^{\mathrm{sp}}(k+1|k) \\ \vdots \\ y^{\mathrm{sp}}(k+N|k) \end{bmatrix} \qquad (1.21)$$

and the predicted output trajectory vector

$$\hat{y}(k) = \begin{bmatrix} \hat{y}(k+1|k) \\ \vdots \\ \hat{y}(k+N|k) \end{bmatrix} \qquad (1.22)$$

Both vectors are of length nyN. The MPC cost-function (1.14) may be rewritten in the following compact form

$$J(k) = \left\| y^{\mathrm{sp}}(k) - \hat{y}(k) \right\|^2_{M} + \left\| \Delta u(k) \right\|^2_{\Lambda} \qquad (1.23)$$

The matrices M = diag(M1, . . . , MN) ≥ 0 and Λ = diag(Λ0, . . . , ΛNu−1) > 0 are of dimensionality nyN × nyN and nuNu × nuNu, respectively. It is necessary to find the relation between the future values of the manipulated variables and their increments, which are calculated on-line in MPC. From the definitions of the increments (Eqs. (1.4) and (1.5)), we have

$$\begin{aligned}
u(k|k) &= \Delta u(k|k) + u(k-1) \\
u(k+1|k) &= \Delta u(k|k) + \Delta u(k+1|k) + u(k-1) \\
&\;\;\vdots \\
u(k+N_{\mathrm{u}}-1|k) &= \Delta u(k|k) + \cdots + \Delta u(k+N_{\mathrm{u}}-1|k) + u(k-1)
\end{aligned} \qquad (1.24)$$

which may be expressed as a general rule

$$u(k+p|k) = \sum_{i=0}^{p} \Delta u(k+i|k) + u(k-1) \qquad (1.25)$$
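The general rule (1.25) is a cumulative sum of increments; equivalently, the stacked input vector is obtained by multiplying the stacked increments by the lower-triangular block matrix J introduced with Eq. (1.26). A small numerical check, with illustrative dimensions nu = 2, Nu = 3:

```python
# Building the lower-triangular block matrix J of Eq. (1.26) and checking
# the identity u(k) = J*du(k) + u(k-1) against the cumulative-sum rule
# (1.25). Dimensions n_u = 2, N_u = 3 are illustrative only.
import numpy as np

n_u, N_u = 2, 3
# J = kron(lower-triangular matrix of ones, identity of size n_u)
J = np.kron(np.tril(np.ones((N_u, N_u))), np.eye(n_u))

du = np.arange(1.0, n_u * N_u + 1)    # stacked increments du(k)
u_prev = np.full(n_u, 10.0)           # u(k-1), repeated for every block
u = J @ du + np.tile(u_prev, N_u)     # Eq. (1.26)

# Cross-check against (1.25): u(k+p|k) = sum of increments up to p + u(k-1)
for p in range(N_u):
    expected = du.reshape(N_u, n_u)[: p + 1].sum(axis=0) + u_prev
    assert np.allclose(u.reshape(N_u, n_u)[p], expected)
```

The Kronecker product reproduces exactly the block structure written out in the matrix J below.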
for p = 0, . . . , Nu − 1. The above observation may be rewritten compactly

$$u(k) = J \Delta u(k) + u(k-1) \qquad (1.26)$$

where

$$u(k) = \begin{bmatrix} u(k|k) \\ \vdots \\ u(k+N_{\mathrm{u}}-1|k) \end{bmatrix} \qquad (1.27)$$

is a vector of length nuNu that corresponds to the vector of increments Δu(k). Using Eq. (1.26), the scalar constraints (1.15) may be expressed compactly

$$u^{\mathrm{min}} \le J \Delta u(k) + u(k-1) \le u^{\mathrm{max}} \qquad (1.28)$$

where the vectors

$$u^{\mathrm{min}} = \begin{bmatrix} u^{\mathrm{min}} \\ \vdots \\ u^{\mathrm{min}} \end{bmatrix},\quad u^{\mathrm{max}} = \begin{bmatrix} u^{\mathrm{max}} \\ \vdots \\ u^{\mathrm{max}} \end{bmatrix} \qquad (1.29)$$

and

$$u(k-1) = \begin{bmatrix} u(k-1) \\ \vdots \\ u(k-1) \end{bmatrix} \qquad (1.30)$$

are of length nuNu, and the matrix

$$J = \begin{bmatrix}
I_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & 0_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & 0_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & \cdots & 0_{n_{\mathrm{u}} \times n_{\mathrm{u}}} \\
I_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & I_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & 0_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & \cdots & 0_{n_{\mathrm{u}} \times n_{\mathrm{u}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
I_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & I_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & I_{n_{\mathrm{u}} \times n_{\mathrm{u}}} & \cdots & I_{n_{\mathrm{u}} \times n_{\mathrm{u}}}
\end{bmatrix}$$

is of dimensionality nuNu × nuNu. The scalar constraints (1.16) may be expressed compactly

$$\Delta u^{\mathrm{min}} \le \Delta u(k) \le \Delta u^{\mathrm{max}} \qquad (1.31)$$

where the vectors

$$\Delta u^{\mathrm{min}} = \begin{bmatrix} \Delta u^{\mathrm{min}} \\ \vdots \\ \Delta u^{\mathrm{min}} \end{bmatrix},\quad \Delta u^{\mathrm{max}} = \begin{bmatrix} \Delta u^{\mathrm{max}} \\ \vdots \\ \Delta u^{\mathrm{max}} \end{bmatrix} \qquad (1.32)$$
1.1 Formulation of the Basic MPC Problem
15
− ymin ≤ ˆy(k) ≤ ymax ⎤ ⎡ max ⎤ y min y ⎥ ⎥ ⎢ ⎢ = ⎣ ... ⎦ , ymax = ⎣ ... ⎦ y min y max
(1.33)
⎡
where the vectors ymin
(1.34)
are of length n y Ny . Taking into account the minimised cost-function (1.23) and the constraints (1.28), (1.31), (1.33), the general MIMO MPC optimisation problem (1.20) is rewritten in a very compact vector-matrix form 2 min J (k) = ysp (k) − ˆy(k) M + u(k)2
u(k)
subject to u
min
u
≤ Ju(k) + u(k − 1) ≤ u
min
(1.35) max
≤ u(k) ≤ umax
ymin ≤ ˆy(k) ≤ ymax Since a mathematical model of the controlled process is used on-line for prediction and optimisation of the control policy, the MPC algorithms have the following advantages: 1. It is possible to control MIMO processes efficiently. When a series of classical single-loop PID controllers are used for the MIMO process, the consecutive controllers work independently; each of them has only one objective, i.e. control of only one controlled variable. When cross-couplings in the process (interactions of the consecutive manipulated variables with the consecutive controlled ones) are strong, such single-loop PID controllers do not work properly. Conversely, due to using a model for prediction, the MPC “knows” all interactions between process variables and calculates the best possible control policy. 2. The MPC algorithms may be used when the number of process inputs is different from the number of outputs. In such a case, it is practically impossible to use a set of single-loop PID controllers. 3. It is possible to take into account constraints imposed on both manipulated and predicted controlled variables in a simple way (MPC optimisation is simply carried out subject to all necessary constraints). 4. It is possible to control “difficult” processes, i.e. with significant time-delays or with the inverse step-response. Additional advantages of MPC are: 1. Tuning of MPC algorithms is relatively easy. It is only necessary to select appropriate horizons and some weighting coefficients. All these parameters have a clear physical interpretation.
16
1 Introduction to Model Predictive Control
2. It is possible to take into account the measured disturbances of the process, i.e. the uncontrolled inputs (the feed-forward action). 3. Unlike the PID algorithm, future changes of the set-point trajectory over the prediction horizon may be easily taken into account. 4. The core idea of MPC is straightforward, which is important when advanced methods are introduced in industry [112, 177]. Let us emphasise the very significant role of the process model in MPC. The model is used for prediction. Intuitively, the better the model, the better (potentially) the resulting control accuracy. Moreover, without the model, it is impossible to use MPC at all. Let us also mention some other advanced model-based computational methods: fault diagnosis [81, 83, 145, 192] and fault-tolerant control [118, 145, 192]. An important question is how to assess the quality of control. In addition to typically used indicators, such as the sum of squared errors, overshoot and setting time, we can use more sophisticated indices, including fractal and entropy measures [36]. Effectiveness of such methods is discussed in [38, 39, 41] (for MPC algorithms based on linear models) and in [40, 42] (for nonlinear MPC algorithms). A review of control performance assessment methods for MPC is given in [37]. We have presented above the classical formulation of MPC. In the next parts of the book, we will detail computationally efficient nonlinear approaches. At this point we have to mention a few important extensions of MPC. In numerous industrial applications, when the objective is maximisation of production profits, set-point optimisation that cooperates with MPC [50, 89, 91, 177, 181] and economic MPC [48, 49, 107, 132] must be used. An excellent review of possible architectures for distributed and hierarchical MPC is given in [163]. MPC algorithms may also offer fault-tolerant control [118, 145, 167], which means that safe process operation is guaranteed in the case of some faults, e.g. 
when a sensors’ or an actuators’ malfunction occurs. It is also possible to take into account in MPC not only control accuracy and economic issues but also the remaining useful life of the system considered (health-aware MPC) [150]. An important direction of theoretical research is concerned with stable and robust versions of MPC algorithms [128, 129]. Different versions of such approaches are presented in [58, 117, 144–146, 159, 174, 182]. In the last years MPC schemes for fractional-order systems have gained popularity [43– 46, 135, 169]. The fractional-order approach makes it possible to control processes for which classical differential (or difference) equations are insufficient as models used for prediction in MPC.
1.2 How to Cope with Infeasibility Problem

In this work, three different classes of constraints are taken into account in the MPC optimisation tasks (1.12), (1.20) and (1.35). The constraints may be imposed on: the values of the manipulated variables, the corresponding increments of those variables and on
the predicted values of the controlled variables. The first two classes of constraints simply limit the feasible set of possible solutions of the MPC optimisation task. The third type of constraints may cause some important problems. Let us imagine that we require no overshoot. In order to achieve that, we use the constraints

$$\hat{y}(k+p|k) \le y^{\mathrm{sp}}(k), \quad p = 1, \ldots, N \qquad (1.36)$$
If the model used for prediction is precise and there are no external disturbances, such constraints may work correctly, provided that the constraints imposed on the manipulated variables are not too restrictive. It is also possible that the constraints (1.36) are not satisfied because of the constraints imposed on the manipulated variables, even in the case of a perfect model and no disturbances. When the model is only a rough approximation of the process, which frequently happens, and/or the process is affected by a strong disturbance, it is very likely that it is impossible to calculate a decision variable vector which leads to satisfaction of the constraints (1.36). When such problems occur, the feasible set of the MPC optimisation problem is empty. In such a case, one may use for control at the current sampling instant the signals applied to the process at the previous sampling instant, i.e. u(k − 1), or the signals calculated at the previous sampling instant for the current one, i.e. u(k|k − 1).

A more mathematically sound approach is to use soft output constraints [112, 177]. The original hard constraints (in the vector notation for a general MIMO process)

$$y^{\min} \le \hat{y}(k+p|k) \le y^{\max}, \quad p = 1, \ldots, N \tag{1.37}$$

are relaxed when they cannot be satisfied. It means that the predicted values of the controlled variables may temporarily violate the hard constraints. As a result, the feasible set is not empty. Using the soft constraints, the rudimentary MPC optimisation problem (1.20) becomes

$$\min_{\substack{\Delta u(k),\\ \varepsilon^{\min}(k),\, \varepsilon^{\max}(k)}} J(k) = \sum_{p=1}^{N} \left\| y^{\mathrm{sp}}(k+p|k) - \hat{y}(k+p|k) \right\|_{M_p}^{2} + \sum_{p=0}^{N_{\mathrm{u}}-1} \left\| \Delta u(k+p|k) \right\|_{\Lambda_p}^{2} + \rho^{\min} \left\| \varepsilon^{\min}(k) \right\|^{2} + \rho^{\max} \left\| \varepsilon^{\max}(k) \right\|^{2} \tag{1.38}$$

subject to

$$u^{\min} \le u(k+p|k) \le u^{\max}, \quad p = 0, \ldots, N_{\mathrm{u}} - 1$$
$$\Delta u^{\min} \le \Delta u(k+p|k) \le \Delta u^{\max}, \quad p = 0, \ldots, N_{\mathrm{u}} - 1$$
$$y^{\min} - \varepsilon^{\min}(k) \le \hat{y}(k+p|k) \le y^{\max} + \varepsilon^{\max}(k), \quad p = 1, \ldots, N$$
$$\varepsilon^{\min}(k) \ge 0_{n_y \times 1}, \quad \varepsilon^{\max}(k) \ge 0_{n_y \times 1}$$
When the original hard constraints (1.37) cannot be satisfied, they are temporarily violated. It is done by relaxing the minimal and maximal predicted values of the controlled variables by ε^min(k) and ε^max(k), respectively. The MPC algorithm calculates not only the future control increments Δu(k) but also the vectors ε^min(k) and ε^max(k) of length n_y. Because it is natural that the original hard output constraints should be relaxed only when necessary, the degree of violation of the hard constraints is minimised in the cost-function by additional penalty terms; ρ^min, ρ^max > 0 are penalty coefficients. Additionally, the last two constraints require that the degree of constraint violation is non-negative. The number of decision variables of the optimisation problem (1.38) is n_u N_u + 2 n_y; the number of constraints is 4 n_u N_u + 2 n_y N + 2 n_y. Using the vector-matrix notation, the rudimentary MPC optimisation problem with soft output constraints (1.38) may be easily transformed to the following task in a compact vector-matrix notation, similar to the task (1.35)

$$\min_{\substack{\Delta\boldsymbol{u}(k),\\ \varepsilon^{\min}(k),\, \varepsilon^{\max}(k)}} J(k) = \left\| \boldsymbol{y}^{\mathrm{sp}}(k) - \hat{\boldsymbol{y}}(k) \right\|_{\boldsymbol{M}}^{2} + \left\| \Delta\boldsymbol{u}(k) \right\|_{\boldsymbol{\Lambda}}^{2} + \rho^{\min} \left\| \varepsilon^{\min}(k) \right\|^{2} + \rho^{\max} \left\| \varepsilon^{\max}(k) \right\|^{2} \tag{1.39}$$

subject to

$$\boldsymbol{u}^{\min} \le \boldsymbol{J}\Delta\boldsymbol{u}(k) + \boldsymbol{u}(k-1) \le \boldsymbol{u}^{\max}$$
$$\Delta\boldsymbol{u}^{\min} \le \Delta\boldsymbol{u}(k) \le \Delta\boldsymbol{u}^{\max}$$
$$\boldsymbol{y}^{\min} - \boldsymbol{\varepsilon}^{\min}(k) \le \hat{\boldsymbol{y}}(k) \le \boldsymbol{y}^{\max} + \boldsymbol{\varepsilon}^{\max}(k)$$
$$\varepsilon^{\min}(k) \ge 0_{n_y \times 1}, \quad \varepsilon^{\max}(k) \ge 0_{n_y \times 1}$$

where the vectors of length n_y N are

$$\boldsymbol{\varepsilon}^{\min}(k) = \begin{bmatrix} \varepsilon^{\min}(k) \\ \vdots \\ \varepsilon^{\min}(k) \end{bmatrix}, \quad \boldsymbol{\varepsilon}^{\max}(k) = \begin{bmatrix} \varepsilon^{\max}(k) \\ \vdots \\ \varepsilon^{\max}(k) \end{bmatrix} \tag{1.40}$$
In the soft output constraint approach, it is possible to allow the degree of relaxation of the same controlled variable to change over the prediction horizon. In such a case, in the optimisation problem (1.38), the soft constraints are

$$y^{\min} - \varepsilon^{\min}(k+p|k) \le \hat{y}(k+p|k) \le y^{\max} + \varepsilon^{\max}(k+p|k), \quad p = 1, \ldots, N \tag{1.41}$$

The vectors of additional decision variables of the MPC optimisation task are now

$$\varepsilon^{\min}(k) = \begin{bmatrix} \varepsilon^{\min}(k+1|k) \\ \vdots \\ \varepsilon^{\min}(k+N|k) \end{bmatrix}, \quad \varepsilon^{\max}(k) = \begin{bmatrix} \varepsilon^{\max}(k+1|k) \\ \vdots \\ \varepsilon^{\max}(k+N|k) \end{bmatrix} \tag{1.42}$$
Unfortunately, the number of decision variables then increases to n_u N_u + 2 n_y N, and the number of constraints is 4 n_u N_u + 4 n_y N. In practical applications of MPC, the assumption that the output constraints are relaxed by the same degree over the whole prediction horizon (for the consecutive controlled variables), so that only 2 n_y additional variables are used, gives very good results, very close to those obtained when as many as 2 n_y N additional variables are used [96].
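The mechanism described above can be illustrated numerically. The following sketch is a minimal one-step example (all numbers hypothetical, not taken from the book): the hard output constraint is infeasible under the input limits, but the softened problem, with a slack variable penalised in the cost, remains solvable.

```python
import numpy as np
from scipy.optimize import minimize

# One-step illustration of soft output constraints (hypothetical numbers).
# Predicted output: y_hat = g*u with gain g = 1; input limited to |u| <= 1;
# the hard constraint y_hat >= y_min = 1.5 cannot be met by any admissible u.
# Softening it with a slack eps >= 0 restores feasibility.
g, y_sp, y_min = 1.0, 2.0, 1.5
lam, rho = 0.1, 1.0e3            # input weight and slack penalty coefficient

def cost(x):                      # x = [u, eps]
    u, eps = x
    return (y_sp - g * u) ** 2 + lam * u ** 2 + rho * eps ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: g * x[0] - (y_min - x[1])},  # softened bound
    {"type": "ineq", "fun": lambda x: x[1]},                        # eps >= 0
]
res = minimize(cost, x0=[0.0, 0.1], bounds=[(-1.0, 1.0), (None, None)],
               constraints=constraints, method="SLSQP")
u_opt, eps_opt = res.x
# The input saturates at its upper bound and the output bound is relaxed
# by exactly the amount needed (about 0.5), since rho penalises any excess.
```

The large slack penalty ρ makes the optimiser violate the output bound only as much as the input limits force it to, which is precisely the purpose of the soft-constraint formulation.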
1.3 Parameterisation of Decision Variables

Laguerre, Kautz and other orthonormal functions may be successfully used for modelling of dynamical systems in the linear [137] and nonlinear [138] cases, respectively. Application of orthonormal Laguerre functions to parameterise the calculated future sequence of the manipulated variables may be used in MPC algorithms based on linear state-space models, in continuous-time [186] and discrete-time [187] versions, respectively, as well as in the DMC algorithm, in which a step-response model is used for prediction [178]. A systematic tuning methodology to find parameters of Laguerre functions in parameterised MPC is discussed in [61, 75]. MPC algorithms with Laguerre parameterisation have been developed for different technological processes. Example applications include: buildings [19], wave energy converters [69], magnetically actuated satellites [76], wind turbines [84], hexacopters [104] and power systems [202]. All cited MPC algorithms use linear models for prediction.

In this book, the Laguerre functions are used to parameterise the decision vector of all discussed nonlinear MPC algorithms, i.e. to reduce the number of decision variables that are actually optimised on-line. At first, let us consider the SISO case. Let l_1(k), …, l_{n_L}(k) denote n_L Laguerre functions. The transfer function of the Laguerre function of order n is [185]

$$G_n(z) = \frac{\sqrt{1 - a_{\mathrm{L}}^2}}{z - a_{\mathrm{L}}} \left( \frac{1 - a_{\mathrm{L}} z}{z - a_{\mathrm{L}}} \right)^{n-1} \tag{1.43}$$
where a_L is a scaling factor, often named a Laguerre pole. For stability, the condition 0 ≤ a_L < 1 must be satisfied. The transfer functions G_n(z) satisfy the following orthonormality conditions

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} G_n(\mathrm{e}^{\mathrm{j}\omega}) G_n(\mathrm{e}^{\mathrm{j}\omega})^{*} \,\mathrm{d}\omega = 1 \tag{1.44}$$

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} G_m(\mathrm{e}^{\mathrm{j}\omega}) G_n(\mathrm{e}^{\mathrm{j}\omega})^{*} \,\mathrm{d}\omega = 0 \quad \text{for } m \ne n \tag{1.45}$$
where G_n(e^{jω})* denotes the complex conjugate of the transfer function G_n(e^{jω}). The Laguerre functions are defined as inverse Z-transforms of the transfer functions G_n(z)

$$l_n(k) = \mathcal{Z}^{-1}(G_n(z)) \tag{1.46}$$

Taking into account the structure of the obtained Laguerre functions, it may be found that [187]

$$L(k+1) = \boldsymbol{A} L(k) \tag{1.47}$$

where the vector of length n_L is

$$L(k) = \begin{bmatrix} l_1(k) & \ldots & l_{n_{\mathrm{L}}}(k) \end{bmatrix}^{\mathrm{T}} \tag{1.48}$$

and the matrix of dimensionality n_L × n_L is

$$\boldsymbol{A} = \begin{bmatrix} a_{\mathrm{L}} & 0 & 0 & \ldots & 0 \\ \beta_{\mathrm{L}} & a_{\mathrm{L}} & 0 & \ldots & 0 \\ -a_{\mathrm{L}} \beta_{\mathrm{L}} & \beta_{\mathrm{L}} & a_{\mathrm{L}} & \ldots & 0 \\ a_{\mathrm{L}}^{2} \beta_{\mathrm{L}} & -a_{\mathrm{L}} \beta_{\mathrm{L}} & \beta_{\mathrm{L}} & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ (-a_{\mathrm{L}})^{n_{\mathrm{L}}-2} \beta_{\mathrm{L}} & (-a_{\mathrm{L}})^{n_{\mathrm{L}}-3} \beta_{\mathrm{L}} & \ldots & \beta_{\mathrm{L}} & a_{\mathrm{L}} \end{bmatrix} \tag{1.49}$$

The initial condition is

$$L(0) = \sqrt{1 - a_{\mathrm{L}}^{2}} \begin{bmatrix} 1 \\ -a_{\mathrm{L}} \\ a_{\mathrm{L}}^{2} \\ -a_{\mathrm{L}}^{3} \\ \vdots \\ (-a_{\mathrm{L}})^{n_{\mathrm{L}}-1} \end{bmatrix} \tag{1.50}$$

and β_L = 1 − a_L². The orthonormality conditions (1.44)–(1.45) may also be formulated for the discrete-time description

$$\sum_{k=0}^{\infty} l_i(k) l_j(k) = 0 \quad \text{for } i \ne j \tag{1.51}$$

$$\sum_{k=0}^{\infty} l_i(k) l_j(k) = 1 \quad \text{for } i = j \tag{1.52}$$
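The recurrence (1.47), the initial condition (1.50) and the discrete orthonormality conditions (1.51)–(1.52) can be verified numerically. The following sketch uses illustrative values (a_L = 0.5, n_L = 4); the construction follows the structure of the matrix in (1.49).

```python
import numpy as np

# Numerical sketch of the Laguerre recurrence and a check of the discrete
# orthonormality conditions; a_L = 0.5 and n_L = 4 are illustrative values.
a, nL = 0.5, 4
beta = 1.0 - a ** 2                       # beta_L = 1 - a_L^2

# Lower-triangular state matrix of the recurrence L(k+1) = A L(k):
# a_L on the diagonal and (-a_L)^(i-j-1) * beta_L below it.
A = np.zeros((nL, nL))
for i in range(nL):
    A[i, i] = a
    for j in range(i):
        A[i, j] = (-a) ** (i - j - 1) * beta

# Initial condition L(0) = sqrt(1 - a_L^2) * [1, -a_L, a_L^2, ...]^T.
Lk = np.sqrt(beta) * np.array([(-a) ** n for n in range(nL)])

# Generate l_1(k), ..., l_nL(k) for k = 0, ..., K-1 and accumulate the
# Gram matrix, which approximates sum_k l_i(k) l_j(k).
K = 200
samples = np.empty((K, nL))
for k in range(K):
    samples[k] = Lk
    Lk = A @ Lk
gram = samples.T @ samples                # close to the identity matrix
```

Since 0 ≤ a_L < 1, the functions decay geometrically, so truncating the infinite sums at K = 200 samples leaves a negligible tail and the Gram matrix is numerically indistinguishable from the identity.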
The idea of parameterisation is to eliminate the necessity of calculating at each sampling instant as many as N_u future increments Δu(k|k), …, Δu(k + N_u − 1|k),
i.e. the whole vector Δu(k) (Eq. (1.3)). The future control increments are parameterised using the Laguerre functions in the following way [187]

$$\Delta u(k+p|k) = \sum_{i=1}^{n_{\mathrm{L}}} l_i(p) c_i(k) \tag{1.53}$$

Using the vector notation, we have

$$\Delta u(k+p|k) = L^{\mathrm{T}}(p) c(k) \tag{1.54}$$

where the vector of coefficients is

$$c(k) = \begin{bmatrix} c_1(k) & \ldots & c_{n_{\mathrm{L}}}(k) \end{bmatrix}^{\mathrm{T}} \tag{1.55}$$

For the whole vector of future increments of the manipulated variable over the control horizon, we have

$$\Delta\boldsymbol{u}(k) = \boldsymbol{L} c(k) \tag{1.56}$$

where the matrix of dimensionality N_u × n_L is

$$\boldsymbol{L} = \begin{bmatrix} l_1(0) & l_2(0) & \ldots & l_{n_{\mathrm{L}}}(0) \\ l_1(1) & l_2(1) & \ldots & l_{n_{\mathrm{L}}}(1) \\ \vdots & \vdots & \ddots & \vdots \\ l_1(N_{\mathrm{u}}-1) & l_2(N_{\mathrm{u}}-1) & \ldots & l_{n_{\mathrm{L}}}(N_{\mathrm{u}}-1) \end{bmatrix} \tag{1.57}$$

In parameterised MPC, the vector of decision variables is c(k), not Δu(k). Since n_L < N_u, the number of decision variables used in the MPC optimisation problem solved on-line is reduced. Having calculated the optimal vector c^opt(k) from the MPC optimisation problem, using Eq. (1.56) and taking into account the structure of the matrix L given by Eq. (1.57), the current optimal value of the manipulated variable is calculated from

$$u(k) = \begin{bmatrix} l_1(0) & l_2(0) & \ldots & l_{n_{\mathrm{L}}}(0) \end{bmatrix} c^{\mathrm{opt}}(k) + u(k-1) \tag{1.58}$$
and applied to the process.

Having discussed the SISO case, we will now consider parameterisation using Laguerre functions for MIMO processes. In order to obtain a flexible solution, we assume that separate Laguerre poles a_L¹, …, a_L^{n_u} are used for the consecutive manipulated variables. Furthermore, we also assume that different numbers of Laguerre functions, n_L¹, …, n_L^{n_u}, are possible for the consecutive variables. Similarly to Eq. (1.53) used in the SISO case, the future control increments are parameterised in the following way
$$\Delta u_1(k+p|k) = \sum_{i=1}^{n_{\mathrm{L}}^{1}} l_{1,i}(p) c_{1,i}(k) \tag{1.59}$$
$$\vdots$$
$$\Delta u_{n_{\mathrm{u}}}(k+p|k) = \sum_{i=1}^{n_{\mathrm{L}}^{n_{\mathrm{u}}}} l_{n_{\mathrm{u}},i}(p) c_{n_{\mathrm{u}},i}(k) \tag{1.60}$$

In place of Eq. (1.54), we have

$$\Delta u_1(k+p|k) = L_1^{\mathrm{T}}(p) c_1(k) \tag{1.61}$$
$$\vdots$$
$$\Delta u_{n_{\mathrm{u}}}(k+p|k) = L_{n_{\mathrm{u}}}^{\mathrm{T}}(p) c_{n_{\mathrm{u}}}(k) \tag{1.62}$$

where the vectors of coefficients, of length n_L¹, …, n_L^{n_u}, respectively, are

$$c_1(k) = \begin{bmatrix} c_{1,1}(k) \\ \vdots \\ c_{1,n_{\mathrm{L}}^{1}}(k) \end{bmatrix}, \quad \ldots, \quad c_{n_{\mathrm{u}}}(k) = \begin{bmatrix} c_{n_{\mathrm{u}},1}(k) \\ \vdots \\ c_{n_{\mathrm{u}},n_{\mathrm{L}}^{n_{\mathrm{u}}}}(k) \end{bmatrix} \tag{1.63}$$

For all manipulated variables and the whole vector of future increments over the control horizon, for the MIMO process we also obtain Eq. (1.56), the same as in the SISO case, but now the vector Δu(k) is of length n_u N_u and the matrix of dimensionality n_u N_u × (n_L¹ + ⋯ + n_L^{n_u}) has the general structure

$$\boldsymbol{L} = \begin{bmatrix} \boldsymbol{L}_1 & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{2}} & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{3}} & \ldots & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{n_{\mathrm{u}}}} \\ 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{1}} & \boldsymbol{L}_2 & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{3}} & \ldots & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{n_{\mathrm{u}}}} \\ 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{1}} & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{2}} & \boldsymbol{L}_3 & \ldots & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{n_{\mathrm{u}}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{1}} & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{2}} & 0_{N_{\mathrm{u}} \times n_{\mathrm{L}}^{3}} & \ldots & \boldsymbol{L}_{n_{\mathrm{u}}} \end{bmatrix} \tag{1.64}$$

where the consecutive submatrices of dimensionality N_u × n_L^n are

$$\boldsymbol{L}_n = \begin{bmatrix} l_{n,1}(0) & l_{n,2}(0) & \ldots & l_{n,n_{\mathrm{L}}^{n}}(0) \\ l_{n,1}(1) & l_{n,2}(1) & \ldots & l_{n,n_{\mathrm{L}}^{n}}(1) \\ \vdots & \vdots & \ddots & \vdots \\ l_{n,1}(N_{\mathrm{u}}-1) & l_{n,2}(N_{\mathrm{u}}-1) & \ldots & l_{n,n_{\mathrm{L}}^{n}}(N_{\mathrm{u}}-1) \end{bmatrix} \tag{1.65}$$

for n = 1, …, n_u. The vector of optimised decision variables is of length n_L¹ + ⋯ + n_L^{n_u} and has the structure

$$c(k) = \begin{bmatrix} c_1(k) \\ \vdots \\ c_{n_{\mathrm{u}}}(k) \end{bmatrix} \tag{1.66}$$

where the subvectors are defined by Eq. (1.63). Having calculated the optimal vector c^opt(k) from the MPC optimisation problem, using Eq. (1.56) and taking into account the matrices L and L_n, given by Eqs. (1.64) and (1.65), respectively, the current optimal values of the manipulated variables are calculated from

$$u_1(k) = \begin{bmatrix} l_{1,1}(0) & l_{1,2}(0) & \ldots & l_{1,n_{\mathrm{L}}^{1}}(0) \end{bmatrix} c_1^{\mathrm{opt}}(k) + u_1(k-1) \tag{1.67}$$
$$\vdots$$
$$u_{n_{\mathrm{u}}}(k) = \begin{bmatrix} l_{n_{\mathrm{u}},1}(0) & l_{n_{\mathrm{u}},2}(0) & \ldots & l_{n_{\mathrm{u}},n_{\mathrm{L}}^{n_{\mathrm{u}}}}(0) \end{bmatrix} c_{n_{\mathrm{u}}}^{\mathrm{opt}}(k) + u_{n_{\mathrm{u}}}(k-1) \tag{1.68}$$

and applied to the process.
1.4 Computational Complexity of MPC Algorithms

In the simplest case, a linear model is used in MPC for prediction and no constraints are taken into account. A few such MPC methods have been developed, with different structures of linear models. To name the most important MPC approaches based on linear models, we have to mention the following ones:
1. The Predictive Functional Control (PFC) algorithm (also known under the name Model Heuristic Predictive Control (MHPC)) [156, 157], in which impulse-response process representations are used.
2. The Dynamic Matrix Control (DMC) algorithm [29], in which step-response models are used.
3. The Generalized Predictive Control (GPC) algorithm [27], in which discrete-time transfer functions are used.
4. The MPC algorithm with state-space models (MPCS) [112, 177], in which the classical linear state-space models are used.

The use of a linear model implies that the predicted trajectory of the controlled variables (Eq. (1.22)) is a linear function of the decision variable vector (1.3). Remembering that the typical minimised MPC cost-function is of the quadratic type (Eq. (1.13)), we obtain an unconstrained quadratic optimisation problem. It may be solved analytically, without on-line optimisation. The future increments of the manipulated variables are linear functions of the following: the model parameters, some values of the manipulated variables computed at the previous sampling instants and the values of the process controlled variables measured at the previous sampling instants.
Hence, such unconstrained MPC methods are named unconstrained linear explicit MPC algorithms. If a linear model is used for prediction but the constraints must be taken into account, it is necessary to solve on-line, at each sampling instant, a quadratic optimisation task (a quadratic minimised cost-function and linear constraints). Such methods are named constrained linear MPC algorithms or, better, constrained MPC algorithms based on linear models, since in the constrained case the explicit linear solution does not exist and the optimal solution is obtained as a result of on-line optimisation. Depending on the model used, we obtain constrained MHPC, DMC, GPC and MPCS algorithms. For linear models, provided that μ_{p,m} ≥ 0 and λ_{p,n} > 0, the optimisation task has only one solution, which is the global one. Different approaches may be used to find the solution of the quadratic MPC optimisation problem [171]: active-set methods, interior-point methods and first-order methods. It is necessary to point out that many very computationally efficient quadratic optimisation solvers are available, e.g. qpOASES [52], CVXGEN [126] and OSQP [171]. To speed up calculations, advanced quadratic optimisation algorithms may be specially tailored for MPC, i.e. the special form of the MPC optimisation task may be exploited. They may be used not only in industrial control applications [13] but also in embedded systems [16, 78, 158], for which sampling times are very short, of the order of hundreds, tens or even single milliseconds.

As described in Sect. 1.3, some basis functions, e.g. orthonormal Laguerre functions, may be used to reduce the number of decision variables of the MPC optimisation problem. The sequence of future manipulated variables is parameterised using a set of basis functions. The optimisation routine does not directly calculate the future manipulated variables or the corresponding increments but the coefficients of the basis functions.
In the literature, a few variants of MPC algorithms which use that concept are described [178, 186, 187]. The parameterisation approach may be used in unconstrained linear explicit MPC algorithms and in constrained MPC algorithms based on linear models. A similar approach is used in the PFC algorithm, which also uses linear models for prediction [156]. Finally, parameterisation may be used in the nonlinear MPC algorithms [98] which are discussed in the following chapters of this book.

Although the classical quadratic optimisation MPC problem is quite simple, in some applications it would be best to eliminate the necessity of on-line optimisation altogether. It can be proven that for a linear model and the typical quadratic cost-function, the optimal solution of the constrained quadratic MPC optimisation problem is a function of the state [15, 179]. That observation leads to constrained linear explicit MPC algorithms. The whole state domain is divided into a number of sets. For each set, the explicit control law is derived off-line. During on-line control, it is only necessary to determine to which set the current state of the process belongs and to use the corresponding precalculated control law; no on-line optimisation is necessary. Although the idea seems generally simple and intuitive, it may turn out that many (dozens or even hundreds of) sets and local control laws are required for typical processes.
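The analytical solution available in the unconstrained linear case can be sketched numerically. The example below uses a DMC-like formulation with notation assumed for illustration (a toy step response, a dynamic matrix G, a free trajectory y0 and an input-increment weight lambda), not taken verbatim from the book.

```python
import numpy as np

# Sketch of the explicit (analytical) solution of unconstrained linear MPC
# in a DMC-like setting; all numbers and the notation are illustrative.
N, Nu, lam = 10, 3, 0.1
s = 1.0 - 0.8 ** np.arange(1, N + 1)      # step response of a toy first-order plant

# Dynamic matrix: G[p, j] = s(p - j + 1) for p >= j, zero otherwise.
G = np.zeros((N, Nu))
for p in range(N):
    for j in range(min(p + 1, Nu)):
        G[p, j] = s[p - j]

y_sp = np.ones(N)                          # constant set-point trajectory
y0 = np.zeros(N)                           # free trajectory (toy initial state)

# Explicit law: du = (G^T G + lam*I)^{-1} G^T (y_sp - y0); in receding
# horizon control only the first increment du[0] is applied.
du = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T @ (y_sp - y0))
y_hat = y0 + G @ du                        # resulting predicted trajectory
```

Because the cost is strictly convex, this matrix formula is the unique minimiser; no iterative optimisation is needed, which is exactly the property exploited by the unconstrained linear explicit MPC algorithms.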
When a general nonlinear model is used for prediction, the predicted trajectory (1.22) is a nonlinear function of the decision variable vector (1.3). Thus, the minimised cost-function (Eq. (1.13)) is not quadratic but nonlinear. The constraints imposed on the magnitude and on the rate of change of the manipulated variables are linear, but the constraints put on the predicted values of the controlled variables are nonlinear. The general class of the discussed approach is known as fully-fledged constrained nonlinear MPC algorithms or constrained MPC algorithms with nonlinear optimisation. A constrained nonlinear MPC optimisation problem must be solved on-line at each sampling instant. There are two difficulties with this approach. Firstly, nonlinear optimisation algorithms must be used. They are much more complicated than the classical quadratic optimisation ones, and the solution of a constrained nonlinear optimisation task may take a lot of time. This is particularly important in the case of fast dynamical systems, for which very short sampling times are required. Secondly, it is possible that, apart from the global minimum, several local minima exist. When a suboptimal solution is used for control, the resulting control quality may be lower than expected.

Typically, Newton-like nonlinear optimisation algorithms are used. The Sequential Quadratic Programming (SQP) [151] and Interior Point (IP) [20] methods are the most frequently used ones in nonlinear MPC. Efficient implementation methods for SQP and IP algorithms have been developed which exploit the particular structure of the MPC optimisation task [53, 153]. Specialised nonlinear optimisation methods, developed with the aim of solving MPC optimisation problems, make it possible to carry out parallel calculations [31, 199]. When the model used for prediction comprises a set of differential-algebraic equations, specialised optimisation methods must be used [33].
An excellent review of possible approaches to nonlinear optimisation in MPC is given in [34]. Very infrequently, other algorithms may be used for nonlinear optimisation, e.g. the golden section method [114, 193] or the branch-and-bound approach [195]. When the process dynamics is slow, which makes it possible to use relatively long sampling periods, we may use heuristic global optimisation algorithms. For example, applications of genetic algorithms to solve the constrained nonlinear MPC optimisation task may be found in [103, 149]; specialised genetic operators (mutation and crossover) are used, tailored to the nature of MPC. An alternative is to use the particle swarm optimisation algorithm [25, 191]. Another option is to use simulated annealing for nonlinear optimisation [1]. It must be stressed that the applicability of heuristic optimisation methods is limited. There are, however, some deterministic global optimisation methods [164] that may be used in MPC [47]. The cited method is based on a convex relaxation of the MPC cost-function. It is reported to significantly reduce the dimensionality of the MPC optimisation task, which lowers the overall computational burden. To further reduce computational complexity, a neural multi-model is used rather than one dynamical model applied recurrently.

In practice, fuzzy MPC is a very important alternative. To control a nonlinear process, a set of simple local MPC controllers is used. The local controllers are switched on-line, taking into account the current operating point of the process and/or the set-point. Both the unconstrained linear explicit MPC methods and the constrained MPC
algorithms based on linear models may be used as local controllers. It is important that the local controllers are developed off-line. During on-line control, it is only necessary to combine, in a fuzzy way, the values of the manipulated variables computed by the local controllers. Fuzzy DMC algorithms [30, 119, 125] and fuzzy GPC methods [177] are given as examples of the described approach. Advanced methods utilised for prediction generation in the fuzzy DMC algorithm are discussed in [123, 124]. A similar idea is to use multi-linear models for prediction in MPC [200]. A specialised procedure is used to determine the multi-linear process representation from nonlinear Hammerstein or Wiener models.

There are numerous attempts to simplify the general nonlinear MPC optimisation task that must be solved on-line at each sampling instant. The following methods are reported in the literature:
1. The first n_u elements of the future control policy are computed from a nonlinear optimisation task, whereas the remaining ones are found from an explicit control law [201]. As a result, the optimisation problem is still nonlinear, but the number of decision variables is equal to n_u, not to n_u N_u as in the rudimentary approach.
2. The technique named move blocking may be used [21]. The degree of freedom is reduced by fixing the manipulated variables or their derivatives to be constant over several time-steps. Some such methods guarantee stability and satisfaction of constraints.
3. Compression of the constraint set is possible [102]. It simplifies the MPC optimisation task. Such an approach may be used together with the move blocking technique.
4. The domain of the calculated manipulated variable may be discretised [115] (in the cited approach, the control horizon is equal to 1). A simple procedure determines its best value and on-line optimisation is not necessary. A more advanced graph search method for finding the control policy is used in [155].
5. In the case of cascade models, the inverse of the static part of the model may be used in an attempt to cancel the effect of the nonlinearity. This makes it possible to formulate the classical quadratic optimisation MPC problem. For the Hammerstein structure, such an approach is discussed in [54]; for the Wiener structure, in [4, 23, 70, 133, 134, 168]. The same method may also be used for cascade models with three blocks, e.g. the Hammerstein–Wiener ones, as described in [35, 63, 147]. As pointed out in Sect. 3.1, the discussed approach has important structural disadvantages and limitations. Moreover, as demonstrated in simulations discussed in this book, it is very sensitive to model errors and disturbances.
6. In the fast MPC algorithm [190], the MPC optimisation task is not solved precisely but in an approximate way. Although this may have a negative effect on the resulting control quality, the time of calculations necessary at each sampling instant is likely to be significantly reduced. As proved in [165], for stability it is sufficient to use a feasible control strategy, i.e. one that satisfies all the existing constraints, not the optimal one.
7. The numerical optimisation procedure used in the MPC algorithm may be replaced by a specially designed neural network which acts as a neural optimiser. There are a few neural structures which solve the quadratic optimisation problem [109, 188]. The network described in [109] is used for optimisation in an MPC algorithm based on a linear model [141] and in an MPC algorithm with on-line model linearisation [140].
8. The MPC algorithm may be replaced by a specially designed neural network which acts as a neural approximator that attempts to mimic the whole MPC algorithm [2, 142]. At first, the classical nonlinear MPC algorithm is developed and run on-line (or off-line in simulations) for different operating conditions and set-points. A data set is collected and next used to train a neural approximator. For a given operating point of the process, determined by measurements of the process input and output variables, as well as the set-point, the approximator finds the current values of the manipulated variables.
9. An approximator may also be used to find the initial solution of the MPC optimisation problem [180]. Finding the initial solution is likely to significantly shorten the calculation time in embedded, microprocessor-based systems [77].
10. The prediction and control horizons may be equal to 1 and the current value of the manipulated variable may be computed by a simple binary search algorithm [160].
11. The Experience-driven Predictive Control (EPC) algorithm constructs a database of feedback controllers that are parameterised by the system dynamics [32]. When, for given conditions, the control law does not exist, it is calculated by a conventional MPC algorithm based on a linear model. In order to obtain a quadratic optimisation task, Locally-Weighted Projection Regression (LWPR) models are used for prediction, which allow for easy on-line model adaptation.
12. The nonlinear MPC optimisation problem is relaxed into a Mixed Integer Linear Programming (MILP) one. Next, the solution of the MILP problem is taken as a starting point of the nonlinear one [189].
13. Constrained explicit nonlinear MPC algorithms are possible [57, 71]. Unfortunately, a huge number of local control laws may be necessary.
14. A specialised model may be used in which the output values for the consecutive sampling instants within the prediction horizon are linear functions of the calculated future manipulated variables, but nonlinear functions of the past (the quasi-linear model) [106]. Such an approach results in a quadratic optimisation MPC task. Neural networks are used for modelling.
15. When Linear Parameter Varying (LPV) models are used for prediction, the general nonlinear optimisation problem is replaced by a convex Linear Matrix Inequalities (LMIs) optimisation task [203–205]. Neural networks may calculate coefficients of the LPV models.
16. Model convexity may be achieved when Input Convex Neural Networks (ICNNs) are used [8]. ICNNs are obtained by explicitly constraining the model outputs to be convex functions of the inputs during model development. As a result, convex MPC optimisation problems are obtained: unconstrained [26] or constrained ones [196].

A class of linear predictors may also be used to describe a nonlinear system [127]. The key step in obtaining such accurate predictions is to lift (or embed) the nonlinear dynamics into a higher-dimensional space in which the evolution of the lifted state is (approximately) linear. The idea corresponds to the Koopman operator [79, 80]. When such a model is used in MPC, we obtain a quadratic MPC optimisation task [82, 127]. An alternative method, named polyflows, is discussed in [72].

Finally, on-line linearisation must be discussed as the method which makes it possible to significantly reduce the computational burden of nonlinear MPC. Details of numerous such MPC methods are presented in Chaps. 3 and 7 for input-output and state-space Wiener process descriptions, respectively. Let us now only give a short literature review. In general, two categories of computationally efficient MPC algorithms may be distinguished: with on-line model linearisation and with on-line trajectory linearisation. In both cases, we obtain computationally simple quadratic optimisation problems; the necessity of on-line nonlinear optimisation is eliminated.

In the simplest approach, a linear approximation of the nonlinear model is computed on-line for the current operating point of the process. Typically, model linearisation is performed at each sampling instant but, for some "less nonlinear" processes or when changes of the set-point are slow and infrequent, model linearisation may be repeated less frequently. Next, the obtained linearised model is used to calculate the predicted trajectory of the controlled variables. Thanks to linearisation, the predicted trajectory is a linear function of the vector of decision variables (1.3), which is a characteristic feature of the classical MPC algorithms based on linear models. Hence, a quadratic optimisation problem is formulated when the constraints must be taken into account, or even the explicit unconstrained solution is possible. The MPC algorithms with on-line model linearisation may be divided into two categories [91, 177].
In the first category, the time-varying linear approximation of the rudimentary nonlinear model is used to calculate both the future predictions and the influence of the past, i.e. the free trajectory. In the second approach to MPC with successive linearisation, the linearised model is only used to calculate the future predictions, whereas the nonlinear model is used to find the nonlinear free trajectory. The first approach is used to control a spark-ignition engine in [28] and an aircraft gas turbine engine in [130]. Applications to a polymerisation reactor and a distillation column are presented in [85]. When necessary, the nonlinear model may be retrained on-line, as shown in [3], where applications of the algorithm to a fluidised bed furnace reactor and to the autopilot of the F-16 aircraft are described. An application to a boiler-turbine unit in a power plant, described by a state-space process model, is detailed in [96], where two variants of soft constraints are considered. Although the algorithm may be implemented for practically any differentiable model, a straightforward calculation is possible for Wiener structures, since the linearised model is found in a simplified way, as a multiplication of the linear dynamic part and the time-varying gain of the nonlinear static block [5]. A similar calculation method is possible for the Hammerstein model. The second approach, i.e. with the nonlinear free trajectory, is used to control a solar power plant in [9, 17], a spark-ignition engine [162], a yeast fermentation reactor [91], as well as a polymerisation reactor and a distillation column [85]. Also in the second approach, simple calculations are possible when Hammerstein [91, 121] or Wiener [87, 91, 120, 122] models are used.
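The simplified linearisation available for Wiener structures, i.e. scaling the linear dynamic part by the current gain of the static block, can be sketched as follows. Both blocks below are hypothetical examples chosen for illustration, not models taken from the book.

```python
import numpy as np

# Sketch of the simplified linearisation for a Wiener structure: the
# linearised model equals the linear dynamic block scaled by the current
# gain (derivative) of the static nonlinear block (illustrative blocks).
a1, b1 = -0.8, 0.2                       # linear part: v(k) = b1*u(k-1) - a1*v(k-1)

def static_block(v):
    """Nonlinear static block y = g(v)."""
    return np.tanh(v)

def static_gain(v, h=1e-6):
    """Time-varying gain dg/dv, here obtained numerically."""
    return (static_block(v + h) - static_block(v - h)) / (2.0 * h)

# Step response of the linear part and its Wiener linearisation at the
# current operating point v_op: each coefficient is simply scaled by dg/dv.
v_op = 0.5
s_lin = np.cumsum(b1 * (-a1) ** np.arange(10))
s_wiener = static_gain(v_op) * s_lin
```

Since only a scalar gain has to be updated at each sampling instant, this linearisation is much cheaper than differentiating a general nonlinear model, which is why the Wiener structure is attractive in the MPC algorithms with on-line model linearisation.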
In more advanced MPC algorithms with on-line trajectory linearisation, it is not the model itself that is linearised; instead, a linear approximation of the predicted trajectory of the controlled variables over the whole prediction horizon is directly calculated. Unlike in the simple MPC algorithms with model linearisation, linearisation is not performed for the current operating point of the process, defined by past measurements of the process input and output signals, but carried out along some future trajectory of the manipulated variables defined for the whole control horizon. Similarly to the simple algorithm with on-line model linearisation, a quadratic optimisation problem is next formulated. The explicit unconstrained solution is also possible. In practice, the classical MPC algorithm with model linearisation may be used when the process is close to the desired set-point. If that is not the case, the calculated solution defines the future trajectory of the manipulated variables along which a linear approximation of the predicted trajectory of the controlled variables is calculated. Such a hybrid MPC structure is presented in [88, 91], where an application to a high-pressure distillation column is discussed. An application of the algorithm to a solid oxide fuel cell is presented in [97], where the method of coping with infeasibility caused by linearisation of nonlinear technological constraints (fuel utilisation) is discussed. The MPC algorithm with trajectory linearisation is, of course, also possible when the process is described by cascade models, including: Hammerstein [91] (for a polymerisation reactor benchmark), Wiener [94] (for a neutralisation reactor) and [100] (for a proton exchange membrane fuel cell), Hammerstein–Wiener [93], as well as Wiener–Hammerstein [95] (for a heat exchanger) structures.

Although all the cited works are concerned with the input-output process representation, the MPC algorithm with trajectory linearisation is, of course, also possible for the state-space representation [101] (implementation details for the Wiener model are given).

Finally, let us mention computationally efficient MPC algorithms with on-line linearisation and approximation. The approximator is used in order to eliminate some calculations that must be repeated at each sampling instant and are necessary in the classical MPC algorithms with on-line linearisation. Successive model linearisation and prediction calculation may be simplified using an approximator which directly estimates, at each sampling instant, the time-varying matrix of step-response coefficients of the linearised model [91]. An application of that approach to a simulated distillation column is detailed in [90]. The same approximation method may be used in the nonlinear DMC algorithm [86, 91]. A significant reduction of computational complexity in comparison with the classical MPC algorithms with on-line linearisation may be obtained when explicit unconstrained versions of the discussed algorithms are considered. It may be proved [91, 92] that in such a case, the optimal decision variable vector (1.3) is a linear function of the set-point, the model parameters and some past measurements. The time-varying vector of coefficients of the control law is determined on-line by a neural approximator for the current operating point. As a result, on-line model linearisation and some other calculations are not necessary, which significantly speeds up calculations. A simulation study concerned with a high-pressure distillation process is presented in [91, 92]. In all the mentioned cases, neural networks are used as approximators, although other structures are also possible.
1 Introduction to Model Predictive Control
1.5 Example Applications of MPC Algorithms

MPC is regarded as the only advanced control technique, i.e. one more advanced than the classical PID controller, that is successfully used in numerous industrial applications [152]. Let us cite a number of typical applications. Traditionally, MPC algorithms may be successfully used for controlling the following industrial processes:
– chemical reactors [64, 166, 175, 198],
– distillation columns [11, 65, 74, 111, 116, 148, 184],
– combustion in pulverized-coal-fired boilers (in power plants) [62],
– greenhouses [60],
– hydraulic systems [12],
– solar power stations [9, 55],
– waste water treatment plants [131],
– electromagnetic mills [136],
– cement kilns [170].
Typically, the sampling period of industrial MPC algorithms used in process control is quite long: of the order of seconds, tens of seconds or even minutes. Programmable Logic Controllers (PLCs) are used for the implementation of MPC algorithms in industrial process control. In addition, thanks to the availability of fast microcontrollers, it is possible to develop MPC algorithms for fast dynamical systems (in embedded systems). In contrast to the mentioned industrial applications, they require short sampling times, shorter than one second, typically of millisecond order. Example applications of fast MPC include:
– fuel cells [59],
– active vibration attenuation [176],
– combustion engines [28, 73, 154],
– robots [22, 139, 183],
– servomotors [24],
– quadrotors [7],
– stratospheric airships [108],
– power converters [194],
– electrical inverters [110],
– induction machines [51].
Many research works are concerned with automotive applications. A few examples are: autonomous driving [105, 173], autonomous racing [6], traction control [68] and vehicle roll-over [67]. There are some applications of MPC in medicine, e.g. muscle relaxant anaesthesia [114] and the artificial pancreas [66]. In addition to industrial and embedded applications of MPC, it is interesting to mention a few original and less frequent applications in which MPC algorithms also turn out to be very efficient:
– drinking water transport networks [143],
– supermarket refrigeration systems [161],
– traffic on highways [14],
– high energy physics accelerators [18],
– inventory management in hospitals [113].
Important applications of MPC are concerned with building control. Typically, only temperature control (stabilisation despite changes of the outside temperature, which is a disturbance) is considered [56, 172]. In more advanced solutions, thermal comfort is controlled [197], i.e. temperature, humidity and other factors. MPC may also cooperate with on-line energy optimisation, which determines optimal set-points for MPC [10]. It is important to emphasise that all the works cited in Sect. 1.5 discuss real applications only. In addition, hundreds or even thousands of works annually discuss simulation results.
References

1. Aggelogiannaki, E., Sarimveis, H.: A simulated annealing algorithm for prioritized multiobjective optimization-implementation in an adaptive model predictive control configuration. IEEE Trans. Syst. Man Cybern.-Part B: Cybern. 37, 902–915 (2007)
2. Åkesson, B.M., Toivonen, H.T., Waller, J.B., Nyström, R.H.: Neural network approximation of a nonlinear model predictive controller applied to a pH neutralization process. Comput. Chem. Eng. 29, 323–335 (2005)
3. Akpan, V.A., Hassapis, G.D.: Nonlinear model identification and adaptive model predictive control using neural networks. ISA Trans. 50, 177–194 (2011)
4. Al-Duwaish, H., Karim, M., Chandrasekar, V.: Use of multilayer feedforward neural networks in identification and control of Wiener model. IEE Proc. Control Theory Appl. 143, 255–258 (1996)
5. Al Seyab, R.K., Cao, Y.: Nonlinear model predictive control for the ALSTOM gasifier. J. Process Control 16, 795–808 (2006)
6. Alcalá, E., Puig, V., Quevedo, J., Rosolia, U.: Autonomous racing using linear parameter varying-model predictive control (LPV-MPC). Control Eng. Practice 95, 104270 (2020)
7. Alexis, K., Nikolakopoulos, G., Tzes, A.: Switching model predictive attitude control for a quadrotor helicopter subject to atmospheric disturbances. Control Eng. Practice 19, 1195–1207 (2011)
8. Amos, B., Xu, L., Kolter, J.Z.: Input convex neural networks. In: Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 146–155. Sydney, NSW, Australia (2017)
9. Arahal, M.R., Berenguel, M., Camacho, E.F.: Neural identification applied to predictive control of a solar plant. Control Eng. Practice 6, 333–344 (1998)
10. Ascione, F., Bianco, N., De Stasio, C., Mauro, G.M., Vanoli, G.P.: Simulation-based model predictive control by the multi-objective optimization of building energy performance and thermal comfort. Energy Build. 111, 131–144 (2016)
11. Assandri, A.D., de Prada, C., Rueda, A., Martínez, J.S.: Nonlinear parametric predictive temperature control of a distillation column. Control Eng. Practice 21, 1795–1806 (2013)
12. Bakhshande, F., Spiller, M., King, Y.L., Söffker, D.: Computationally efficient model predictive control for real time implementation experimentally applied on a hydraulic differential cylinder. IFAC-PapersOnLine 53, 8979–8984 (2020)
13. Bartlett, R.A., Biegler, L.T., Backstrom, J., Gopal, V.: Quadratic programming algorithms for large-scale model predictive control. J. Process Control 12, 775–795 (2002)
14. Bellemans, T., De Schutter, B., De Moor, B.: Model predictive control for ramp metering of motorway traffic: a case study. Control Eng. Practice 14, 757–767 (2006)
15. Bemporad, A., Morari, M., Dua, V., Pistikopoulos, E.: The explicit linear quadratic regulator for constrained systems. Automatica 38, 3–20 (2002)
16. Bemporad, A., Patrinos, P.: Simple and certifiable quadratic programming algorithms for embedded linear model predictive control. IFAC Proc. Vol. 45, 14–20 (2012)
17. Berenguel, M., Arahal, M.R., Camacho, E.F.: Modelling the free response of a solar plant for predictive control. Control Eng. Practice 6, 1257–1266 (1998)
18. Blanco, E., de Prada, C., Cristea, S., Casas, J.: Nonlinear predictive control in the LHC accelerator. Control Eng. Practice 17, 1136–1147 (2009)
19. Bosschaerts, W., Van Renterghem, T., Hasan, O.A., Limam, K.: Development of a model based predictive control system for heating buildings. Energy Procedia 122, 519–528 (2017)
20. Byrd, R.H., Hribar, M.E., Nocedal, J.: An interior point algorithm for large-scale nonlinear programming. SIAM J. Optim. 9, 877–900 (1999)
21. Cagienard, R., Grieder, P., Kerrigan, E.C., Morari, M.: Move blocking strategies in receding horizon control. In: Proceedings of the 43rd IEEE Conference on Decision and Control (CDC 2004), pp. 2023–2028. Nassau, Bahamas (2004)
22. Castañeda, L.Á., Chairez, I., Guzmán-Vargas, L., Luviano-Juárez, A.: Output based bilateral adaptive control of partially known robotic systems. Control Eng. Practice 98, 104362 (2020)
23. Cervantes, A.L., Agamennoni, O.E., Figueroa, J.L.: A nonlinear model predictive control system based on Wiener piecewise linear models. J. Process Control 13, 655–666 (2003)
24. Chaber, P., Ławryńczuk, M.: Fast analytical model predictive controllers and their implementation for STM32 ARM microcontroller. IEEE Trans. Indus. Inf. 15, 4580–4590 (2019)
25. Chen, L., Du, S., He, Y., Liang, M., Xu, D.: Robust model predictive control for greenhouse temperature based on particle swarm optimization. Inf. Process. Agri. 5, 329–338 (2018)
26. Chen, Y., Shi, Y., Zhang, B.: Optimal control via neural networks: a convex approach. In: Proceedings of the International Conference on Learning Representations (ICLR 2019). New Orleans, USA (2019)
27. Clarke, D.W., Mohtadi, C., Tuffs, P.S.: Generalized predictive control-part I. The basic algorithm. Automatica 23, 137–148 (1987)
28. Colin, G., Chamaillard, Y., Bloch, G., Corde, G.: Neural control of fast nonlinear systems-application to a turbocharged SI engine with VCT. IEEE Trans. Neural Netw. 18, 1101–1114 (2007)
29. Cutler, C.R., Ramaker, B.L.: Dynamic matrix control-a computer control algorithm. Houston, Texas, USA (1979)
30. Dougherty, D., Cooper, D.: A practical multiple model adaptive strategy for single-loop MPC. Control Eng. Practice 11, 141–159 (2003)
31. Deng, H., Ohtsuka, T.: A parallel Newton-type method for nonlinear model predictive control. Automatica 109, 108560 (2019)
32. Desaraju, V.R., Michael, N.: Leveraging experience for computationally efficient adaptive nonlinear model predictive control. In: Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA 2017), pp. 5314–5320. Singapore (2017)
33. Diehl, M., Bock, H.G., Schlöder, J.P., Findeisen, R., Nagy, Z., Allgöwer, F.: Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J. Process Control 12, 577–585 (2002)
34. Diehl, M., Ferreau, H.J., Haverbeke, N.: Efficient numerical methods for nonlinear MPC and moving horizon estimation. In: Magni, L., Raimondo, D.M., Allgöwer, F. (eds.) Nonlinear Model Predictive Control. Lecture Notes in Control and Information Sciences, vol. 384, pp. 391–417. Springer, Berlin, Heidelberg (2009)
35. Ding, B., Ping, X.: Dynamic output feedback model predictive control for nonlinear systems represented by Hammerstein-Wiener model. J. Process Control 22, 1773–1784 (2012)
36. Domański, P.D.: Control Performance Assessment: Theoretical Analyses and Industrial Practice, Studies in Systems, Decision and Control, vol. 245. Springer, Cham (2020)
37. Domański, P.D.: Performance assessment of predictive control-a survey. Algorithms 13, 97 (2020)
38. Domański, P.D., Ławryńczuk, M.: Assessment of predictive control performance using fractal measures. Nonlinear Dyn. 89, 773–790 (2017)
39. Domański, P.D., Ławryńczuk, M.: Assessment of the GPC control quality using non-Gaussian statistical measures. Int. J. Appl. Math. Comput. Sci. 27, 291–307 (2017)
40. Domański, P.D., Ławryńczuk, M.: Control quality assessment for processes with asymmetric properties and its application to pH reactor. IEEE Access 8, 94535–94546 (2020)
41. Domański, P.D., Ławryńczuk, M.: Multi-criteria control performance assessment method for a multivariate MPC. In: Proceedings of the American Control Conference (ACC 2020), pp. 1968–1973. Denver, Colorado, USA (2020)
42. Domański, P.D., Ławryńczuk, M.: Quality assessment of nonlinear model predictive control using fractal and entropy measures. In: Lacarbonara, W., Balachandran, B., Ma, J., Tenreiro Machado, J., Stepan, G. (eds.) Nonlinear Dynamics and Control, pp. 147–156. Springer, Cham (2020)
43. Domek, S.: Switched state model predictive control of fractional-order nonlinear discrete-time systems. Asian J. Control 15, 658–668 (2013)
44. Domek, S.: Fractional-order model predictive control with small set of coincidence points. In: Latawiec, K., Łukaniszyn, M., Stanisławski, R. (eds.) Advances in Modelling and Control of Non-integer-Order Systems. Lecture Notes in Electrical Engineering, vol. 320, pp. 135–144. Springer, Cham (2015)
45. Domek, S.: Model-plant mismatch in fractional order model predictive control. In: Domek, S., Dworak, P. (eds.) Theoretical Developments and Applications of Non-Integer Order Systems. Lecture Notes in Electrical Engineering, vol. 357, pp. 281–291. Springer, Cham (2016)
46. Domek, S.: Switched fractional state-space predictive control methods for non-linear fractional systems. In: Malinowska, A.B., Mozyrska, D., Sajewski, Ł. (eds.) Advances in Non-Integer Order Calculus and Its Applications. Lecture Notes in Electrical Engineering, vol. 559, pp. 113–127. Springer, Cham (2020)
47. Doncevic, D.T., Schweidtmann, A.M., Vaupel, Y., Schäfer, P., Caspari, A., Mitsos, A.: Deterministic global nonlinear model predictive control with recurrent neural networks embedded. IFAC-PapersOnLine 53, 5273–5278 (2020)
48. Ellis, M., Christofides, P.D.: On finite-time and infinite-time cost improvement of economic model predictive control for nonlinear systems. Automatica 50, 2561–2569 (2014)
49. Ellis, M., Durand, H., Christofides, P.D.: A tutorial review of economic model predictive control methods. J. Process Control 24, 1156–1178 (2014)
50. Engell, S.: Feedback control for optimal process operation. J. Process Control 17, 203–219 (2007)
51. Englert, T., Graichen, K.: Nonlinear model predictive torque control and setpoint computation of induction machines for high performance applications. Control Eng. Practice 99, 104415 (2020)
52. Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Program. Comput. 6, 327–363 (2014)
53. Frasch, J.V., Sager, S., Diehl, M.: A parallel quadratic programming method for dynamic optimization problems. Math. Program. Comput. 7, 289–329 (2015)
54. Fruzzetti, K.P., Palazoğlu, A., McDonald, K.A.: Nonlinear model predictive control using Hammerstein models. J. Process Control 7, 31–41 (1997)
55. Gallego, A.J., Merello, G.M., Berenguel, M., Camacho, E.F.: Gain-scheduling model predictive control of a Fresnel collector field. Control Eng. Practice 82, 1–13 (2019)
56. Gorni, D., del Mar Castilla, M., Visioli, A.: An efficient modelling for temperature control of residential buildings. Build. Environ. 103, 86–98 (2016)
57. Grancharova, A., Johansen, T.A.: Explicit Nonlinear Model Predictive Control. Lecture Notes in Control and Information Sciences, vol. 429. Springer, Berlin (2012)
58. Griffith, D.W., Biegler, L.T., Patwardhan, S.C.: Robustly stable adaptive horizon nonlinear model predictive control. J. Process Control 70, 109–122 (2018)
59. Gruber, J.K., Doll, M., Bordons, C.: Design and experimental validation of a constrained MPC for the air feed of a fuel cell. Control Eng. Practice 17, 874–885 (2009)
60. Gruber, J.K., Guzmán, J.L., Rodríguez, F., Bordons, C., Berenguel, M., Sánchez, J.A.: Nonlinear MPC based on a Volterra series model for greenhouse temperature control using natural ventilation. Control Eng. Practice 19, 354–366 (2011)
61. Gutiérrez-Urquídez, R.C., Valencia-Palomo, G., Rodriguez-Elias, O.M., Trujillo, L.: Systematic selection of tuning parameters for efficient predictive controllers using a multiobjective evolutionary algorithm. Appl. Soft Comput. 31, 326–338 (2015)
62. Havlena, V., Findejs, J.: Application of model predictive control to advanced combustion control. Control Eng. Practice 13, 671–680 (2005)
63. Hong, M., Cheng, S.: Hammerstein-Wiener model predictive control of continuous stirred tank reactor. In: Hu, W. (ed.) Electronics and Signal Processing. Lecture Notes in Electrical Engineering, vol. 97, pp. 235–242. Springer, Berlin, Heidelberg (2011)
64. Hosen, M.A., Hussain, M.A., Mjalli, F.S.: Control of polystyrene batch reactors using neural network based model predictive control (NNMPC): an experimental investigation. Control Eng. Practice 19, 454–467 (2011)
65. Huyck, B., De Brabanter, J., De Moor, B., Van Impe, J.F., Logist, F.: Online model predictive control of industrial processes using low level control hardware: a pilot-scale distillation column case study. Control Eng. Practice 28, 34–48 (2014)
66. Incremona, G.P., Messori, M., Toffanin, C., Cobelli, C., Magni, L.: Model predictive control with integral action for artificial pancreas. Control Eng. Practice 77, 86–94 (2019)
67. Jalali, M., Hashemi, E., Khajepour, A., Chen, S.K., Litkouhi, B.: Model predictive control of vehicle roll-over with experimental verification. Control Eng. Practice 77, 256–266 (2018)
68. Jalali, M., Khajepour, A., Chen, S.K., Litkouhi, B.: Integrated stability and traction control for electric vehicles using model predictive control. Control Eng. Practice 54, 256–266 (2016)
69. Jama, M., Wahyudie, A., Noura, H.: Robust predictive control for heaving wave energy converters. Control Eng. Practice 77, 138–149 (2018)
70. Jia, L., Li, Y., Li, F.: Correlation analysis algorithm-based multiple-input single-output Wiener model with output noise. Complexity 9650254 (2019)
71. Johansen, T.A.: Approximate explicit receding horizon control of constrained nonlinear systems. Automatica 40, 293–300 (2004)
72. Jungers, R.M., Tabuada, P.: Non-local linearization of nonlinear differential equations via polyflows. In: Proceedings of the American Control Conference (ACC 2019), pp. 1906–1911. Philadelphia, Pennsylvania, USA (2019)
73. Kaleli, A.: Development of the predictive based control of an autonomous engine cooling system for variable engine operating conditions in SI engines: design, modeling and real-time application. Control Eng. Practice 100, 104424 (2020)
74. Kawathekar, R., Riggs, J.B.: Nonlinear model predictive control of a reactive distillation column. Control Eng. Practice 15, 231–239 (2007)
75. Khan, B., Rossiter, J.A.: Alternative parameterisation within predictive control: a systematic selection. Int. J. Control 86, 1397–1409 (2013)
76. Kim, J., Jung, Y., Bang, H.: Linear time-varying model predictive control of magnetically actuated satellites in elliptic orbits. Acta Astronaut. 151, 791–804 (2018)
77. Klaučo, M., Kalúz, M., Kvasnica, M.: Machine learning-based warm starting of active set methods in embedded model predictive control. Eng. Appl. Artif. Intell. 77, 1–8 (2019)
78. Kögel, M., Findeisen, R.: A fast gradient method for embedded linear predictive control. IFAC Proc. Vol. 44, 1362–1367 (2011)
79. Koopman, B.: Hamiltonian systems and transformation in Hilbert space. Proc. Natl. Acad. Sci. U. S. A. 17, 315–318 (1931)
80. Koopman, B., von Neumann, J.: Dynamical systems of continuous spectra. Proc. Natl. Acad. Sci. U. S. A. 18, 255–263 (1932)
81. Korbicz, J., Kościelny, J.M., Kowalczuk, Z.: Fault Diagnosis: Models, Artificial Intelligence, Applications. Springer, Heidelberg (2004)
82. Korda, M., Mezić, I.: Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica 93, 149–160 (2018)
83. Kościelny, J.M.: Fault Diagnosis of Automated Industrial Processes. Academic Publishing House EXIT, Warsaw (2001). In Polish
84. Lasheen, A., Saad, M.S., Emara, H.M., Elshafei, A.L.: Continuous-time tube-based explicit model predictive control for collective pitching of wind turbine. Energy 118, 1222–1233 (2017)
85. Ławryńczuk, M.: A family of model predictive control algorithms with artificial neural networks. Int. J. Appl. Math. Comput. Sci. 17, 217–232 (2007)
86. Ławryńczuk, M.: Neural dynamic matrix control algorithm with disturbance compensation. In: García Pedrajas, N., Herrera, F., Fyfe, C., Benítez, J.M., Ali, M. (eds.) Proceedings of the 23rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA-AIE 2010), Cordoba, Spain, Lecture Notes in Artificial Intelligence, vol. 6098, pp. 52–61. Springer, Berlin (2010)
87. Ławryńczuk, M.: Nonlinear predictive control based on multivariable neural Wiener models. In: Dobnikar, A., Lotrič, U., Šter, B. (eds.) Adaptive and Natural Computing Algorithms. Lecture Notes in Computer Science, vol. 6593, pp. 31–40. Springer, Berlin (2011)
88. Ławryńczuk, M.: On improving accuracy of computationally efficient nonlinear predictive control based on neural models. Chem. Eng. Sci. 66, 5253–5267 (2011)
89. Ławryńczuk, M.: On-line set-point optimisation and predictive control using neural Hammerstein models. Chem. Eng. J. 166, 269–287 (2011)
90. Ławryńczuk, M.: Predictive control of a distillation column using a control-oriented neural model. In: Dobnikar, A., Lotrič, U., Šter, B. (eds.) Adaptive and Natural Computing Algorithms. Lecture Notes in Computer Science, vol. 6593, pp. 230–239. Springer, Berlin (2011)
91. Ławryńczuk, M.: Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach, Studies in Systems, Decision and Control, vol. 3. Springer, Cham (2014)
92. Ławryńczuk, M.: Explicit nonlinear predictive control algorithms with neural approximation. Neurocomputing 129, 570–584 (2014)
93. Ławryńczuk, M.: Nonlinear predictive control for Hammerstein-Wiener systems. ISA Trans. 55, 49–62 (2015)
94. Ławryńczuk, M.: Modelling and predictive control of a neutralisation reactor using sparse support vector machine Wiener models. Neurocomputing 205, 311–328 (2016)
95. Ławryńczuk, M.: Nonlinear predictive control of dynamic systems represented by Wiener-Hammerstein models. Nonlinear Dyn. 86, 1193–1214 (2016)
96. Ławryńczuk, M.: Nonlinear predictive control of a boiler-turbine unit: a state-space approach with successive on-line model linearisation and quadratic optimisation. ISA Trans. 67, 476–495 (2017)
97. Ławryńczuk, M.: Constrained computationally efficient nonlinear predictive control of solid oxide fuel cell: tuning, feasibility and performance. ISA Trans. 99, 270–289 (2020)
98. Ławryńczuk, M.: Nonlinear model predictive control for processes with complex dynamics: a parameterisation approach using Laguerre functions. Int. J. Appl. Math. Comput. Sci. 30, 35–46 (2020)
99. Ławryńczuk, M., Ocłoń, P.: Model predictive control and energy optimisation in residential building with electric underfloor heating system. Energy 182, 1028–1044 (2019)
100. Ławryńczuk, M., Söffker, D.: Wiener structures for modeling and nonlinear predictive control of proton exchange membrane fuel cell. Nonlinear Dyn. 95, 1639–1660 (2019)
101. Ławryńczuk, M., Tatjewski, P.: Offset-free state-space nonlinear predictive control for Wiener systems. Inf. Sci. 511, 127–151 (2020)
102. Li, S.E., Jia, Z., Li, K., Cheng, B.: Fast online computation of a model predictive controller and its application to fuel economy-oriented adaptive cruise control. IEEE Trans. Ind. Inf. 16, 1199–1209 (2015)
103. Li, Y., Shen, J., Lu, J.: Constrained model predictive control of a solid oxide fuel cell based on genetic optimization. J. Power Sour. 196, 5873–5880 (2011)
104. Ligthart, J.A.J., Poksawat, P., Wang, L., Nijmeijer, H.: Experimentally validated model predictive controller for a hexacopter. IFAC-PapersOnLine 50, 4076–4081 (2017)
105. Lima, P.F., Pereira, G.C., Mårtensson, J., Wahlberg, B.: Experimental validation of model predictive control stability for autonomous driving. Control Eng. Practice 81, 244–255 (2018)
106. Liu, G.P., Kadirkamanathan, V., Billings, S.A.: Predictive control for non-linear systems using neural networks. Int. J. Control 71, 1119–1132 (1998)
107. Liu, S., Liu, J.: Economic model predictive control with extended horizon. Automatica 73, 180–192 (2016)
108. Liu, S., Sang, Y., Jin, H.: Robust model predictive control for stratospheric airships using LPV design. Control Eng. Practice 81, 231–243 (2018)
109. Liu, S., Wang, J.: A simplified dual neural network for quadratic programming with its KWTA application. IEEE Trans. Neural Netw. 17, 1500–1510 (2006)
110. Liu, Y., Ge, B., Abu-Rub, H., Sun, H., Peng, F.Z., Xue, Y.: Model predictive direct power control for active power decoupled single-phase quasi-Z-source inverter. IEEE Trans. Indus. Inf. 12, 1550–1559 (2016)
111. Lopez-Negrete, R., D'Amato, F.J., Biegler, L.T., Kumar, A.: Fast nonlinear model predictive control: formulation and industrial process applications. Comput. Chem. Eng. 51, 55–64 (2013)
112. Maciejowski, J.: Predictive Control with Constraints. Prentice Hall, Harlow (2002)
113. Maestre, J.M., Fernández, M.I., Jurado, I.: An application of economic model predictive control to inventory management in hospitals. Control Eng. Practice 71, 120–128 (2018)
114. Mahfouf, M., Linkens, D.A.: Non-linear generalized predictive control (NLGPC) applied to muscle relaxant anaesthesia. Int. J. Control 71, 239–257 (1998)
115. Makarow, A., Keller, M., Rösmann, C., Bertram, T.: Model predictive trajectory set control with adaptive input domain discretization. In: Proceedings of the American Control Conference (ACC 2018), pp. 3159–3164. Milwaukee, USA (2018)
116. Martin, P.A., Odloak, D., Kassab, F.: Robust model predictive control of a pilot plant distillation column. Control Eng. Practice 21, 231–241 (2013)
117. Martins, M.A.F., Odloak, D.: A robustly stabilizing model predictive control strategy of stable and unstable processes. Automatica 67, 132–143 (2016)
118. Marusak, P.M.: On easily reconfigurable analytical fuzzy predictive controllers: actuator faults handling. In: Kang, L., Cai, Z., Yan, X., Liu, Y. (eds.) Advances in Computation and Intelligence. Lecture Notes in Computer Science, vol. 5370, pp. 396–405. Springer, Berlin, Heidelberg (2008)
119. Marusak, P.M.: Advantages of an easy to design fuzzy predictive algorithm in control systems of nonlinear chemical reactors. Appl. Soft Comput. 9, 1111–1125 (2009)
120. Marusak, P.M.: Application of fuzzy Wiener models in efficient MPC algorithms. In: Szczuka, M., Kryszkiewicz, M., Ramanna, S., Jensen, R., Hu, Q. (eds.) Rough Sets and Current Trends in Computing. Lecture Notes in Artificial Intelligence, vol. 6086, pp. 669–677. Springer, Berlin, Heidelberg (2010)
121. Marusak, P.M.: On prediction generation in efficient MPC algorithms based on fuzzy Hammerstein models. In: Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing. Lecture Notes in Computer Science, vol. 6113, pp. 136–143. Springer, Berlin, Heidelberg (2010)
122. Marusak, P.M.: Efficient MPC algorithms based on fuzzy Wiener models and advanced methods of prediction generation. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing. Lecture Notes in Computer Science, vol. 7267, pp. 292–300. Springer, Berlin, Heidelberg (2012)
123. Marusak, P.M.: Numerically efficient fuzzy MPC algorithm with advanced generation of prediction-application to a chemical reactor. Algorithms 13, 143 (2020)
124. Marusak, P.M.: Advanced construction of the dynamic matrix in numerically efficient fuzzy MPC algorithms. Algorithms 14, 25 (2021)
125. Marusak, P.M.: A numerically efficient fuzzy MPC algorithm with fast generation of the control signal. Int. J. Appl. Math. Comput. Sci. 31, 59–71 (2021)
126. Mattingley, J., Boyd, S.: CVXGEN: a code generator for embedded convex optimization. Optim. Eng. 13, 1–27 (2012)
127. Mauroy, A., Mezić, I., Susuki, Y. (eds.): The Koopman Operator in Systems and Control: Concepts, Methodologies, and Applications. Lecture Notes in Control and Information Sciences, vol. 484. Springer, Cham (2020)
128. Mayne, D.Q.: Model predictive control: recent developments and future promise. Automatica 50, 2967–2986 (2014)
129. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.M.: Constrained model predictive control: stability and optimality. Automatica 36, 789–814 (2000)
130. Mu, J., Rees, D., Liu, G.P.: Advanced controller design for aircraft gas turbine engines. Control Eng. Practice 13, 1001–1015 (2005)
131. Mulas, M., Tronci, S., Corona, F., Haimi, H., Lindell, P., Heinonen, M., Vahala, R., Baratti, R.: Predictive control of an activated sludge process: an application to the Viikinmäki wastewater treatment plant. Control Eng. Practice 35, 89–100 (2015)
132. Müller, M.A., Grüne, L.: Economic model predictive control without terminal constraints for optimal periodic behavior. Automatica 70, 128–139 (2016)
133. Norquay, S.J., Palazoğlu, A., Romagnoli, J.A.: Model predictive control based on Wiener models. Chem. Eng. Sci. 53, 75–84 (1998)
134. Norquay, S.J., Palazoğlu, A., Romagnoli, J.: Application of Wiener model predictive control (WMPC) to an industrial C2 splitter. J. Process Control 9, 461–473 (1999)
135. Ntouskas, S., Sarimveis, H., Sopasakis, P.: Model predictive control for offset-free reference tracking of fractional order systems. Control Eng. Practice 71, 26–33 (2018)
136. Ogonowski, S., Bismor, D., Ogonowski, Z.: Control of complex dynamic nonlinear loading process for electromagnetic mill. Arch. Control Sci. 30, 471–500 (2020)
137. Oliveira, G.H.C., da Rosa, A., Campello, R.J.G.B., Machado, J.B., Amaral, W.C.: An introduction to models based on Laguerre, Kautz and other related orthonormal functions - part I: linear and uncertain models. Int. J. Model. Identif. Control 14, 121–132 (2011)
138. Oliveira, G.H.C., da Rosa, A., Campello, R.J.G.B., Machado, J.B., Amaral, W.C.: An introduction to models based on Laguerre, Kautz and other related orthonormal functions - part II: non-linear models. Int. J. Model. Identif. Control 16, 1–14 (2012)
139. Ortega, J.G., Camacho, E.F.: Mobile robot navigation in a partially structured static environment, using neural predictive control. Control Eng. Practice 4, 1669–1679 (1996)
140. Pan, Y., Wang, J.: Nonlinear model predictive control using a recurrent neural network. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2008), pp. 2296–2301. Hong Kong (2008)
141. Pan, Y., Wang, J.: Two neural network approaches to model predictive control. In: Proceedings of the American Control Conference (ACC 2008), pp. 1685–1690. Washington, USA (2008)
142. Parisini, T., Zoppoli, R.: A receding-horizon regulator for nonlinear systems and a neural approximation. Automatica 31, 1443–1451 (1995)
143. Pascual, J., Romera, J., Puig, V., Cembrano, G., Creus, R., Minoves, M.: Operational predictive optimal control of Barcelona water transport network. Control Eng. Practice 21, 1020–1034 (2013)
144. Patan, K.: Two stage neural network modelling for robust model predictive control. ISA Trans. 72, 56–65 (2018)
145. Patan, K.: Robust and Fault-Tolerant Control: Neural-Network-Based Solutions, Studies in Systems, Decision and Control, vol. 197. Springer, Cham (2019)
146. Patan, K., Korbicz, J.: Nonlinear model predictive control of a boiler unit: a fault tolerant control study. Int. J. Appl. Math. Comput. Sci. 22, 225–237 (2012)
147. Patikirikorala, T., Wang, L., Colman, A., Han, J.: Hammerstein-Wiener nonlinear model based predictive control for relative QoS performance and resource management of software systems. Control Eng. Practice 20, 49–61 (2012)
148. Porfírio, C., Odloak, D.: Optimizing model predictive control of an industrial distillation column. Control Eng. Practice 19, 1137–1146 (2011)
149. Potočnik, P., Grabec, I.: Nonlinear model predictive control of a cutting process. Neurocomputing 43, 107–126 (2002)
150. Pour, F.K., Puig, V., Ocampo-Martinez, C.: Multi-layer health-aware economic predictive control of a pasteurization pilot plant. Int. J. Appl. Math. Comput. Sci. 28, 97–110 (2018)
151. Powell, M.J.D.: A fast algorithm for nonlinearly constrained optimization calculations. In: Watson, G.A. (ed.) Numerical Analysis. Lecture Notes in Mathematics, vol. 630, pp. 144–157. Springer, Dundee (1978)
152. Qin, S.J., Badgwell, T.A.: A survey of industrial model predictive control technology. Control Eng. Practice 11, 733–764 (2003)
153. Rao, C.V., Wright, S.J., Rawlings, J.B.: Application of interior-point methods to model predictive control. J. Optim. Theory Appl. 99, 723–757 (1998)
154. Raut, A., Irdmousa, B.K., Shahbakhti, M.: Dynamic modeling and model predictive control of an RCCI engine. Control Eng. Practice 81, 129–144 (2018)
155. Reese, B.M., Collins, E.G.: A graph search and neural network approach to adaptive nonlinear model predictive control. Eng. Appl. Artif. Intell. 55, 250–268 (2016)
156. Richalet, J., O'Donovan, D.: Predictive Functional Control: Principles and Industrial Applications. Springer, London (2009)
157. Richalet, J., Rault, A., Testud, J.L., Papon, J.: Model predictive heuristic control: applications to industrial processes. Automatica 14, 413–428 (1978)
158. Richter, S., Morari, M., Jones, C.N.: Towards computational complexity certification for constrained MPC based on Lagrange relaxation and the fast gradient method. In: Proceedings of the 2011 IEEE 50th Annual Conference on Decision and Control (CDC) and European Control Conference (ECC), pp. 5223–5229. Orlando, Florida, USA (2011)
159. Rodrigues, M.A., Odloak, D.: An infinite horizon model predictive control for stable and integrating processes. Comput. Chem. Eng. 27, 1113–1128 (2003)
160. Saeed, J., Hasan, A.: Unit prediction horizon binary search-based model predictive control of full-bridge DC-DC converter. IEEE Trans. Control Syst. Technol. 26, 463–474 (2018)
161. Sarabia, D., Capraro, F., Larsen, L.F.S., de Prada, C.: Hybrid NMPC of supermarket display cases. Control Eng. Practice 17, 428–441 (2009)
162. Saraswati, S., Chand, S.: Online linearization-based neural predictive control of air-fuel ratio in SI engines with PID feedback correction scheme. Neural Comput. Appl. 19, 919–933 (2010)
163. Scattolini, R.: Architectures for distributed and hierarchical model predictive control - a review. J. Process Control 19, 723–731 (2009)
164. Schweidtmann, A.M., Mitsos, A.: Deterministic global optimization with artificial neural networks embedded. J. Optim. Theory Appl. 180, 925–948 (2019)
165. Scokaert, P.O.M., Mayne, D.Q., Rawlings, J.B.: Suboptimal model predictive control (feasibility implies stability). IEEE Trans. Automat. Control 44, 648–654 (1999)
166. Seki, H., Ogawa, M., Ooyama, S., Akamatsu, K., Ohshima, M., Yang, W.: Industrial application of a nonlinear model predictive control to polymerization reactors. Control Eng. Practice 9, 819–828 (2001)
167. Seybold, L., Witczak, M., Majdzik, P., Stetter, R.: Towards robust predictive fault-tolerant control for a battery assembly unit. Int. J. Appl. Math. Comput. Sci. 25, 849–862 (2015)
168. Shafiee, G., Arefi, M.M., Jahed-Motlagh, M.R., Jalali, A.A.: Nonlinear predictive control of a polymerization reactor based on piecewise linear Wiener model. Chem. Eng. J. 143, 282–292 (2008)
169. Sopasakis, P., Sarimveis, H.: Stabilising model predictive control for discrete-time fractional-order systems. Automatica 75, 24–31 (2017)
170. Stadler, K.S., Poland, J., Gallestey, E.: Model predictive control of a rotary cement kiln. Control Eng. Practice 19, 1–9 (2011)
171. Stellato, B., Banjac, G., Goulart, P., Bemporad, A., Boyd, S.: OSQP: an operator splitting solver for quadratic programs. Math. Program. Comput. (2020). In press
172. Sturzenegger, D., Gyalistras, D., Morari, M., Smith, R.S.: Model predictive climate control of a Swiss office building: implementation, results, and cost-benefit analysis. IEEE Trans. Control Syst. Technol. 24, 1–12 (2016)
173. Suh, J., Yi, K., Jung, J., Lee, K., Chong, H., Ko, B.: Design and evaluation of a model predictive vehicle control algorithm for automated driving using a vehicle traffic simulator. Control Eng. Practice 51, 256–266 (2016)
174. Sun, J., Kolmanovsky, I.V., Ghaemi, R., Chen, S.: A stable block model predictive control with variable implementation horizon. Automatica 43, 1945–1953 (2007)
References
39
175. Tahir, F., Mercer, E., Lowdon, I., Lovett, D.: Advanced process control and monitoring of a continuous flow micro-reactor. Control Eng. Practice 77, 225–234 (2018) 176. Takács, G., Batista, G., Gulan, M., Rohal’-Ilkiv, B.: Embedded explicit model predictive vibration control. Mechatronics 36, 54–62 (2016) 177. Tatjewski, P.: Advanced Control of Industrial Processes, Structures and Algorithms. Springer, London (2007) 178. Tatjewski, P.: DMC algorithm with Laguerre functions. In: Bartoszewicz, A., Kabzi´nski, J., Kacprzyk, J. (eds.) Advanced, Contemporary Control, Advances in Intelligent Systems and Computing, vol. 1196, pp. 1006–1017. Springer, Cham (2020) 179. Tøndel, P., Johansen, T.A., Bemporad, A.: An algorithm for multi-parametric quadratic programming and explicit mpc solutions. Automatica 39, 489–497 (2003) 180. Vaupel, Y., Hamacher, N.C., Caspari, A., Mhamdi, A., Kevrekidis, I.G., Mitsos, A.: Accelerating nonlinear model predictive control through machine learning. J. Process Control 92, 261–270 (2020) 181. Vega, P., Revollar, S., Francisco, M., Martın, J.M.: Integration of set point optimization techniques into nonlinear mpc for improving the operation of WWTPs. Comput. Chem. Eng. 68, 78–95 (2014) 182. Vermillion, C., Menezes, A., Kolmanovsky, I.: Stable hierarchical model predictive control using an inner loop reference model and λ-contractive terminal constraint sets. Automatica 50, 92–99 (2014) 183. Vivas, A., Poignet, P.: Predictive functional control of a parallel robot. Control Eng. Practice 13, 863–874 (2005) 184. Volk, U., Kniese, D.W., Hahn, R., Haber, R., Schmitz, U.: Optimized multivariable predictive control of an industrial distillation column considering hard and soft constraints. Control Eng. Practice 13, 913–927 (2005) 185. Wahlberg, B.: System identification using Laguerre models. IEEE Trans. Automat. Control 36, 551–562 (1991) 186. Wang, L.: Continuous time model predictive control design using orthonormal functions. Int. J. 
Control 74, 1588–1600 (2001) 187. Wang, L.: Discrete model predictive controller design using Laguerre functions. J. Process Control 14, 131–142 (2004) 188. Wang, L.X., Wan, F.: Structured neural networks for constrained model predictive control. Automatica 37, 1235–1243 (2001) 189. Wang, X., Mahalec, V., F., Q. : Globally optimal nonlinear model predictive control based on multi-parametric disaggregation. J. Process Control 52, 1–13 (2017) 190. Wang, Y., Boyd, S.: Fast model predictive control using online optimization. IEEE Trans. Control Syst. Technol. 18, 267–278 (2010) 191. Wang, Y., Luo, L., Zhang, F., Wang, S.: GPU-based model predictive control for continuous casting spray cooling control system using particle swarm optimization. Control Eng. Practice 84, 349–364 (2019) 192. Witczak, M.: Fault Diagnosis and Fault-Tolerant Control Strategies for Non-Linear Systems: Analytical and Soft Computing Approaches. Lecture Notes in Electrical Engineering, vol. 266. Springer, Cham (2014) 193. Wu, X., Zhu, X., Cao, G., Tu, H.: Predictive control of sofc based on a GA-RBF neural network model. J. Power Sour. 179, 232–239 (2008) 194. Xia, C., Liu, T., Shi, T., Song, Z.: A simplified finite-control-set model-predictive control for power converters. IEEE Trans. Indus. Inf. 10, 991–1002 (2014) 195. Yang, J., Li, X., Mou, H., Jian, L.: Predictive control of solid oxide fuel cell based on an improved takagi-sugeno fuzzy model. J. Power Sour. 193, 699–705 (2009) 196. Yang, S., Bequette, B.W.: Optimization-based control using input convex neural networks. Comput. Chem. Eng. 144, 107143 (2020) 197. Yang, S., Wan, M.P., Ng, B.F., Zhang, T., Babu, S., Zhang, Z., Chen, W., Dubey, S.: A statespace thermal model incorporating humidity and thermal comfort for model predictive control in buildings. Energy Build. 170, 25–39 (2018)
40
1 Introduction to Model Predictive Control
198. Yu, D.L., Gomm, J.B.: Implementation of neural network predictive control to a multivariable chemical reactor. Control Eng. Practice 11, 1315–1323 (2003) 199. Yu, Z., Biegler, L.T.: Advanced-step multistage nonlinear model predictive control: robustness and stability. J. Process Control 85, 15–29 (2020) 200. Zhang, J., Chin, K.S., Ławry´nczuk, M.: Multilinear model decomposition and predictive dontrol of MIMO two-block cascade systems. Indus. Eng. Chem. Res. 56, 14101–14114 (2017) 201. Zheng, A.: A computationally efficient nonlinear MPC algorithm. In: Proceedings of the American Control Conference (ACC 1997), pp. 1623–1627. Albuquerque, New Mexico, USA (1997) 202. Zheng, Y., Zhou, J., Xu, Y., Zhang, Y., Qian, Z.: A distributed model predictive control based load frequency control scheme for multi-area interconnected power system using discrete-time Laguerre functions. ISA Trans. 68, 127–140 (2017) 203. Zhou, F., Peng, H., Zeng, X., Tian, X., Peng, X.: RBF-ARX model-based robust MPC for nonlinear systems with unknown and bounded disturbance. J. Franklin Instit. 354, 8072–8093 (2017) 204. Zhou, F., Peng, H., Zhang, G., Zeng, X.: A robust controller design method based on parameter variation rate of RBF-ARX model. IEEE Access 7, 160284–160294 (2019) 205. Zhou, F., Peng, H., Zhang, G., Zeng, X., Peng, X.: Robust predictive control algorithm based on parameter variation rate information of functional-coefficient ARX model. IEEE Access 7, 27231–27243 (2019)
Chapter 2
Wiener Models
Abstract This chapter is concerned with Wiener models. At first, input-output structures are described: one SISO case and five MIMO ones. Next, state-space models are detailed: one SISO case and two MIMO ones. A short review of identification methods for Wiener models is given, possible internal structures of both model parts are discussed and example applications of Wiener models are reported. Finally, other structures of cascade models are briefly mentioned.
2.1 Structures of Input-Output Wiener Models

For prediction in MPC, i.e. to calculate the quantities \hat{y}(k+1|k), ..., \hat{y}(k+N|k) used in the minimised MPC cost function (1.7) or (1.13), a dynamical model of the process is necessary. In this work, Wiener models are used for this purpose. As far as input-output models are concerned, one SISO structure and as many as five MIMO model configurations are described.
2.1.1 SISO Wiener Model

The structure of the SISO input-output Wiener model [57] is depicted in Fig. 2.1. It consists of a linear dynamic block followed by a nonlinear static one. The linear dynamic part of the model is described by the equation

A(q^{-1}) v(k) = B(q^{-1}) u(k)    (2.1)

where the polynomials are

A(q^{-1}) = 1 + a_1 q^{-1} + \cdots + a_{n_A} q^{-n_A}    (2.2)
B(q^{-1}) = b_1 q^{-1} + \cdots + b_{n_B} q^{-n_B}    (2.3)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_2
Fig. 2.1 The structure of the SISO Wiener model
The auxiliary signal v is the output of the first block and the input of the second block. All signals u, v and y are scalars. The backward shift operator (the unit time delay) is denoted by q^{-1}, the integers n_A and n_B define the order of dynamics, and the constant parameters of the linear dynamic part are denoted by the real numbers a_j (j = 1, ..., n_A) and b_j (j = 1, ..., n_B). From Eqs. (2.1)–(2.3), the output of the linear part of the model is

v(k) = \sum_{i=1}^{n_B} b_i u(k-i) - \sum_{i=1}^{n_A} a_i v(k-i)    (2.4)
The nonlinear static part of the model is described by the general equation

y(k) = g(v(k))    (2.5)

where the function g: \mathbb{R} \to \mathbb{R} is required to be differentiable (for implementation of the computationally efficient nonlinear MPC algorithms described in Chap. 3). It means that polynomials, neural networks, fuzzy systems (with differentiable membership functions) or Support Vector Machines (SVM) may be used in the second model block. The output of the SISO Wiener model can be explicitly expressed as a function of the input signal and the auxiliary signal of the model at some previous sampling instants. Taking into account Eqs. (2.4) and (2.5), we obtain

y(k) = g\left( \sum_{i=1}^{n_B} b_i u(k-i) - \sum_{i=1}^{n_A} a_i v(k-i) \right)    (2.6)
Let us stress that the signal v is used in the model but, in general, we assume that it does not exist in the process. Hence, measurement of that signal is impossible; its value may, however, be assessed from the model for the current operating point of the process. Similarly, predictions of the signal v over the prediction horizon may also be computed.
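The recursion (2.4)–(2.5) is straightforward to simulate. As a minimal Python sketch (the first-order coefficient values and the tanh static block are invented for this illustration; they are not taken from the book):

```python
import math

def simulate_siso_wiener(u, a, b, g):
    """Simulate a SISO Wiener model: linear dynamics (2.4) + static block (2.5).

    u -- list of inputs u(0), u(1), ...
    a -- [a_1, ..., a_nA], b -- [b_1, ..., b_nB]
    g -- static nonlinearity applied to the auxiliary signal v
    Past values of u and v (before k = 0) are assumed to be zero.
    """
    nA, nB = len(a), len(b)
    v, y = [], []
    for k in range(len(u)):
        # v(k) = sum_i b_i u(k-i) - sum_i a_i v(k-i), Eq. (2.4)
        vk = sum(b[i - 1] * u[k - i] for i in range(1, nB + 1) if k - i >= 0)
        vk -= sum(a[i - 1] * v[k - i] for i in range(1, nA + 1) if k - i >= 0)
        v.append(vk)
        y.append(g(vk))          # y(k) = g(v(k)), Eq. (2.5)
    return y

# Example use: first-order linear dynamics with a tanh static block
y = simulate_siso_wiener([1.0] * 20, a=[-0.8], b=[0.2], g=math.tanh)
```

Note that the auxiliary signal v must be stored and fed back by the simulator, exactly as in Eq. (2.4), even though it is not measurable in the real process.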
Fig. 2.2 The structure of the MIMO Wiener model I
2.1.2 MIMO Wiener Model I

The structure of the MIMO Wiener model I [57] is depicted in Fig. 2.2. It consists of one linear dynamic MIMO block and n_y SISO nonlinear static ones. The linear dynamic part of the model is described by Eq. (2.1) but now u \in \mathbb{R}^{n_u} and v \in \mathbb{R}^{n_y}. Because SISO nonlinear static blocks are used, n_v = n_y. The polynomial model matrices are

A(q^{-1}) = \begin{bmatrix} 1 + a_1^1 q^{-1} + \cdots + a_{n_A}^1 q^{-n_A} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 + a_1^{n_y} q^{-1} + \cdots + a_{n_A}^{n_y} q^{-n_A} \end{bmatrix}    (2.7)

B(q^{-1}) = \begin{bmatrix} b_1^{1,1} q^{-1} + \cdots + b_{n_B}^{1,1} q^{-n_B} & \cdots & b_1^{1,n_u} q^{-1} + \cdots + b_{n_B}^{1,n_u} q^{-n_B} \\ \vdots & \ddots & \vdots \\ b_1^{n_y,1} q^{-1} + \cdots + b_{n_B}^{n_y,1} q^{-n_B} & \cdots & b_1^{n_y,n_u} q^{-1} + \cdots + b_{n_B}^{n_y,n_u} q^{-n_B} \end{bmatrix}    (2.8)
The constant parameters of the linear dynamic part are denoted by the real numbers a_j^m (j = 1, ..., n_A, m = 1, ..., n_y) and b_j^{m,n} (j = 1, ..., n_B, m = 1, ..., n_y, n = 1, ..., n_u). From Eqs. (2.1) and (2.7)–(2.8), we can calculate the consecutive outputs of the linear dynamic part of the model

v_1(k) = \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i)    (2.9)
⋮
v_{n_y}(k) = \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{n_y,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_y} v_{n_y}(k-i)    (2.10)

which may be compactly expressed as

v_m(k) = \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m v_m(k-i),  m = 1, ..., n_y    (2.11)
The nonlinear static parts of the model are described by the general equations

y_1(k) = g_1(v_1(k))    (2.12)
⋮
y_{n_y}(k) = g_{n_y}(v_{n_y}(k))    (2.13)

which may be compactly expressed as

y_m(k) = g_m(v_m(k)),  m = 1, ..., n_y    (2.14)
where the functions g_m: \mathbb{R} \to \mathbb{R} are required to be differentiable. From Eqs. (2.9)–(2.10) and (2.12)–(2.13), we obtain model outputs

y_1(k) = g_1\left( \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i) \right)    (2.15)
⋮
y_{n_y}(k) = g_{n_y}\left( \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{n_y,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_y} v_{n_y}(k-i) \right)    (2.16)

which may be compactly expressed as

y_m(k) = g_m\left( \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m v_m(k-i) \right),  m = 1, ..., n_y    (2.17)
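The per-output recursion (2.17) generalises the SISO simulation in an obvious way. A hedged Python sketch for a two-input, two-output instance (all coefficient values and the static functions below are invented for the illustration):

```python
import math

def simulate_mimo_wiener_1(u, a, b, g):
    """Simulate MIMO Wiener model I, Eq. (2.17).

    u -- list of input vectors [u_1(k), ..., u_nu(k)]
    a[m] -- [a_1^{m+1}, ..., a_nA^{m+1}] for output channel m
    b[m][n] -- [b_1^{m+1,n+1}, ..., b_nB^{m+1,n+1}] for channel m, input n
    g -- list of scalar static functions g_m
    Past signal values (before k = 0) are assumed to be zero.
    """
    ny, nu = len(g), len(u[0])
    nA, nB = len(a[0]), len(b[0][0])
    v = [[] for _ in range(ny)]
    y = []
    for k in range(len(u)):
        yk = []
        for m in range(ny):
            # v_m(k) per Eq. (2.11)
            vk = sum(b[m][n][i - 1] * u[k - i][n]
                     for n in range(nu)
                     for i in range(1, nB + 1) if k - i >= 0)
            vk -= sum(a[m][i - 1] * v[m][k - i]
                      for i in range(1, nA + 1) if k - i >= 0)
            v[m].append(vk)
            yk.append(g[m](vk))   # y_m(k) = g_m(v_m(k)), Eq. (2.14)
        y.append(yk)
    return y

u = [[1.0, 1.0]] * 20
a = [[-0.5], [-0.5]]
b = [[[0.5], [0.5]], [[1.0], [0.0]]]
y = simulate_mimo_wiener_1(u, a, b, g=[math.tanh, lambda x: x])
```

Each output channel keeps its own auxiliary-signal history v_m, exactly one per SISO static block.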
2.1.3 MIMO Wiener Model II

The structure of the MIMO Wiener model II is depicted in Fig. 2.3. Similarly to the MIMO Wiener model I shown in Fig. 2.2, it consists of one linear dynamic MIMO block and n_y nonlinear static ones. On the other hand, there are two important differences. Firstly, the number of auxiliary signals between the two model parts (n_v) may, in general, be different from the number of outputs (n_y). The number of auxiliary signals may be treated as an additional model parameter, but it is straightforward to choose n_v = n_y. Secondly, in the MIMO Wiener model II, the nonlinear static blocks are of the Multiple-Input Single-Output (MISO) type; each of them has n_v inputs and one output.

Fig. 2.3 The structure of the MIMO Wiener model II

The linear dynamic part of the model is described by Eq. (2.1) but now u \in \mathbb{R}^{n_u} and v \in \mathbb{R}^{n_v}. The polynomial model matrices are

A(q^{-1}) = \begin{bmatrix} 1 + a_1^1 q^{-1} + \cdots + a_{n_A}^1 q^{-n_A} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 + a_1^{n_v} q^{-1} + \cdots + a_{n_A}^{n_v} q^{-n_A} \end{bmatrix}    (2.18)

B(q^{-1}) = \begin{bmatrix} b_1^{1,1} q^{-1} + \cdots + b_{n_B}^{1,1} q^{-n_B} & \cdots & b_1^{1,n_u} q^{-1} + \cdots + b_{n_B}^{1,n_u} q^{-n_B} \\ \vdots & \ddots & \vdots \\ b_1^{n_v,1} q^{-1} + \cdots + b_{n_B}^{n_v,1} q^{-n_B} & \cdots & b_1^{n_v,n_u} q^{-1} + \cdots + b_{n_B}^{n_v,n_u} q^{-n_B} \end{bmatrix}    (2.19)

where the constant parameters of the linear dynamic part are denoted by the real numbers a_j^m (j = 1, ..., n_A, m = 1, ..., n_v) and b_j^{m,n} (j = 1, ..., n_B, m = 1, ..., n_v, n = 1, ..., n_u). Taking into account Eqs. (2.1), (2.18)–(2.19), the consecutive outputs of the linear dynamic part of the model are calculated from
v_1(k) = \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i)    (2.20)
⋮
v_{n_v}(k) = \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_v} v_{n_v}(k-i)    (2.21)
which may be compactly expressed as

v_m(k) = \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m v_m(k-i),  m = 1, ..., n_v    (2.22)

The nonlinear static parts of the model are described by the general equations

y_1(k) = g_1(v_1(k), ..., v_{n_v}(k))    (2.23)
⋮
y_{n_y}(k) = g_{n_y}(v_1(k), ..., v_{n_v}(k))    (2.24)

which may be compactly expressed as

y_m(k) = g_m(v_1(k), ..., v_{n_v}(k)),  m = 1, ..., n_y    (2.25)
where the functions g_m: \mathbb{R}^{n_v} \to \mathbb{R} are required to be differentiable. From Eqs. (2.20)–(2.21) and (2.23)–(2.24), we obtain model outputs

y_1(k) = g_1\left( \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i), ..., \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_v} v_{n_v}(k-i) \right)    (2.26)
⋮
y_{n_y}(k) = g_{n_y}\left( \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i), ..., \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_v} v_{n_v}(k-i) \right)    (2.27)

which may be compactly expressed as

y_m(k) = g_m\left( \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i), ..., \sum_{n=1}^{n_u} \sum_{i=1}^{n_B} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_v} v_{n_v}(k-i) \right),  m = 1, ..., n_y    (2.28)
Fig. 2.4 The structure of the MIMO Wiener model III
2.1.4 MIMO Wiener Model III

The structure of the MIMO Wiener model III is depicted in Fig. 2.4. In general, similarly to the MIMO Wiener model I, it consists of n_y SISO nonlinear static blocks defined by Eq. (2.14), but the linear dynamic part of the model is different: it is not represented by one MIMO block defined by Eq. (2.1). As a result of a model identification procedure or from fundamental knowledge of the process, the transfer functions of the consecutive input-output channels are typically found. They comprise the linear dynamic part of the Wiener model. The first block of the model is described by the array of transfer functions

\begin{bmatrix} v_1(k) \\ \vdots \\ v_{n_y}(k) \end{bmatrix} = \begin{bmatrix} G_{1,1}(q^{-1}) & \cdots & G_{1,n_u}(q^{-1}) \\ \vdots & \ddots & \vdots \\ G_{n_y,1}(q^{-1}) & \cdots & G_{n_y,n_u}(q^{-1}) \end{bmatrix} \begin{bmatrix} u_1(k) \\ \vdots \\ u_{n_u}(k) \end{bmatrix}    (2.29)

The transfer functions have the general form

G_{m,n}(q^{-1}) = \frac{N_{m,n}(q^{-1})}{D_{m,n}(q^{-1})}    (2.30)

for all inputs and outputs of the linear dynamic block, i.e. for m = 1, ..., n_y, n = 1, ..., n_u. The numerators and the denominators of the transfer functions (2.30) are the polynomials
N_{m,n}(q^{-1}) = b_1^{m,n} q^{-1} + \cdots + b_{n_B^{m,n}}^{m,n} q^{-n_B^{m,n}}    (2.31)
D_{m,n}(q^{-1}) = 1 + a_1^{m,n} q^{-1} + \cdots + a_{n_A^{m,n}}^{m,n} q^{-n_A^{m,n}}    (2.32)
The integer numbers n_A^{m,n} and n_B^{m,n} denote the order of dynamics of the consecutive denominators and numerators, respectively. Let us stress that the orders of dynamics of the consecutive transfer functions (2.30) may be different. In the MIMO Wiener model I, all input-output channels have the same order of dynamics, defined by n_A and n_B. The MIMO Wiener model I is usually considered in the literature [57, 74]. Even though, initially, we may have the simple rudimentary model comprised of SISO transfer functions as in Eq. (2.29), it is transformed to the MIMO Wiener model I. More specifically, the linear dynamic block of the model III is transformed. From Eq. (2.29) and Fig. 2.4, we have
v_1(k) = \sum_{i=1}^{n_u} G_{1,i}(q^{-1}) u_i(k)    (2.33)
⋮
v_{n_y}(k) = \sum_{i=1}^{n_u} G_{n_y,i}(q^{-1}) u_i(k)    (2.34)

Taking into account Eq. (2.30), the linear part of the model (2.33)–(2.34) becomes

v_1(k) = \sum_{i=1}^{n_u} \frac{N_{1,i}(q^{-1})}{D_{1,i}(q^{-1})} u_i(k)    (2.35)
⋮
v_{n_y}(k) = \sum_{i=1}^{n_u} \frac{N_{n_y,i}(q^{-1})}{D_{n_y,i}(q^{-1})} u_i(k)    (2.36)
Multiplying the consecutive equations of the linear block (2.35)–(2.36) by the common denominators \prod_{i=1}^{n_u} D_{1,i}(q^{-1}), ..., \prod_{i=1}^{n_u} D_{n_y,i}(q^{-1}), respectively, we obtain

\prod_{i=1}^{n_u} D_{1,i}(q^{-1}) v_1(k) = \sum_{j=1}^{n_u} N_{1,j}(q^{-1}) \prod_{i=1, i \ne j}^{n_u} D_{1,i}(q^{-1}) u_j(k)    (2.37)
⋮
\prod_{i=1}^{n_u} D_{n_y,i}(q^{-1}) v_{n_y}(k) = \sum_{j=1}^{n_u} N_{n_y,j}(q^{-1}) \prod_{i=1, i \ne j}^{n_u} D_{n_y,i}(q^{-1}) u_j(k)    (2.38)
Equations (2.37)–(2.38) may be rewritten in such a way that we obtain the linear dynamic block used in the MIMO Wiener model I (Eq. (2.1)), where the entries of the matrices A(q^{-1}) and B(q^{-1}) (Eqs. (2.7)–(2.8)) are

A_{1,1}(q^{-1}) = \prod_{i=1}^{n_u} D_{1,i}(q^{-1})    (2.39)
⋮
A_{n_y,n_y}(q^{-1}) = \prod_{i=1}^{n_u} D_{n_y,i}(q^{-1})    (2.40)

and

B_{1,1}(q^{-1}) = N_{1,1}(q^{-1}) \prod_{i=2}^{n_u} D_{1,i}(q^{-1})    (2.41)
⋮
B_{n_y,n_u}(q^{-1}) = N_{n_y,n_u}(q^{-1}) \prod_{i=1}^{n_u-1} D_{n_y,i}(q^{-1})    (2.42)
As a result of the multiplications in Eqs. (2.39)–(2.40) and (2.41)–(2.42), the linear part of the classical MIMO block (2.1) used in the MIMO Wiener model I is likely to be of a high order, even though the transfer functions (2.30) used in the rudimentary MIMO Wiener model III are of a low order. As demonstrated in Sect. 4.5, this may lead to serious numerical problems and make predictive control difficult or completely impossible. Hence, when the process has truly multiple inputs and outputs, it is strongly recommended to use the MIMO Wiener model III, not the classical model I. In the case of the MIMO Wiener model III, in order to explicitly express model outputs as functions of the input signals of the process and the auxiliary signals of the model at some previous sampling instants, we use Fig. 2.4 and Eqs. (2.29)–(2.30), which give

v_{1,1}(k) = G_{1,1}(q^{-1}) u_1(k) = \frac{N_{1,1}(q^{-1})}{D_{1,1}(q^{-1})} u_1(k)    (2.43)
⋮
v_{n_y,n_u}(k) = G_{n_y,n_u}(q^{-1}) u_{n_u}(k) = \frac{N_{n_y,n_u}(q^{-1})}{D_{n_y,n_u}(q^{-1})} u_{n_u}(k)    (2.44)
Taking into account Eqs. (2.31)–(2.32), we obtain

v_{1,1}(k) = \sum_{i=1}^{n_B^{1,1}} b_i^{1,1} u_1(k-i) - \sum_{i=1}^{n_A^{1,1}} a_i^{1,1} v_{1,1}(k-i)    (2.45)
⋮
v_{n_y,n_u}(k) = \sum_{i=1}^{n_B^{n_y,n_u}} b_i^{n_y,n_u} u_{n_u}(k-i) - \sum_{i=1}^{n_A^{n_y,n_u}} a_i^{n_y,n_u} v_{n_y,n_u}(k-i)    (2.46)

Equations (2.45)–(2.46) may be rewritten in the compact form

v_{m,n}(k) = \sum_{i=1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i),  m = 1, ..., n_y,  n = 1, ..., n_u    (2.47)

From Fig. 2.4, we have
v_1(k) = \sum_{n=1}^{n_u} v_{1,n}(k)    (2.48)
⋮
v_{n_y}(k) = \sum_{n=1}^{n_u} v_{n_y,n}(k)    (2.49)

which may be rewritten compactly

v_m(k) = \sum_{n=1}^{n_u} v_{m,n}(k),  m = 1, ..., n_y    (2.50)
Using Eqs. (2.14) and (2.48)–(2.49), the consecutive model outputs are

y_1(k) = g_1\left( \sum_{n=1}^{n_u} v_{1,n}(k) \right)    (2.51)
⋮
y_{n_y}(k) = g_{n_y}\left( \sum_{n=1}^{n_u} v_{n_y,n}(k) \right)    (2.52)

which may be rewritten compactly

y_m(k) = g_m\left( \sum_{n=1}^{n_u} v_{m,n}(k) \right),  m = 1, ..., n_y    (2.53)
From Eqs. (2.45)–(2.46), we obtain model outputs

y_1(k) = g_1\left( \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{1,n}} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A^{1,n}} a_i^{1,n} v_{1,n}(k-i) \right) \right)    (2.54)
⋮
y_{n_y}(k) = g_{n_y}\left( \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{n_y,n}} b_i^{n_y,n} u_n(k-i) - \sum_{i=1}^{n_A^{n_y,n}} a_i^{n_y,n} v_{n_y,n}(k-i) \right) \right)    (2.55)

which may be rewritten compactly

y_m(k) = g_m\left( \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i) \right) \right),  m = 1, ..., n_y    (2.56)
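The order inflation caused by the common-denominator conversion (2.39)–(2.42) is easy to verify numerically with polynomial multiplication. A sketch using NumPy's `numpy.polymul` (the pole values below are invented for the illustration):

```python
import numpy as np

def common_denominator(D_row):
    """Multiply the denominators of one output channel, i.e. the
    product prod_i D_{m,i}(q^{-1}) appearing in Eqs. (2.39)-(2.40).

    D_row -- list of coefficient arrays D_{m,1}, ..., D_{m,nu}, each in the
             form [1, a_1^{m,n}, ..., a_{nA}^{m,n}] (ascending powers of q^{-1}).
    Returns the coefficients of the product polynomial.
    """
    A = np.array([1.0])
    for d in D_row:
        A = np.polymul(A, d)
    return A

# Four first-order channel denominators (n_u = 4): each 1 - p q^{-1}
D_row = [np.array([1.0, -p]) for p in (0.9, 0.8, 0.7, 0.6)]
A11 = common_denominator(D_row)
# A_{1,1}(q^{-1}) has order n_u = 4, although every channel is first order
```

With, say, n_u = 10 first-order channels, the diagonal entries of A(q^{-1}) become tenth-order polynomials, which illustrates why the conversion to model I is discouraged for processes with many inputs.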
2.1.5 MIMO Wiener Model IV

A direct extension of the MIMO Wiener model III is the model IV depicted in Fig. 2.5. There are two extensions. Firstly, the number of auxiliary signals, n_v, may in general be different from the number of outputs, n_y. Hence, the linear MIMO dynamic part of the model is described by the general array of transfer functions

\begin{bmatrix} v_1(k) \\ \vdots \\ v_{n_v}(k) \end{bmatrix} = \begin{bmatrix} G_{1,1}(q^{-1}) & \cdots & G_{1,n_u}(q^{-1}) \\ \vdots & \ddots & \vdots \\ G_{n_v,1}(q^{-1}) & \cdots & G_{n_v,n_u}(q^{-1}) \end{bmatrix} \begin{bmatrix} u_1(k) \\ \vdots \\ u_{n_u}(k) \end{bmatrix}    (2.57)

The transfer functions G_{m,n}(q^{-1}) have the general form defined by Eq. (2.30), as in the case of the MIMO Wiener model III, but now m = 1, ..., n_v, n = 1, ..., n_u. Secondly, the nonlinear static blocks are of the MISO type. Each of them has n_v inputs and is characterised by Eq. (2.25), used in the MIMO Wiener model II.

Fig. 2.5 The structure of the MIMO Wiener model IV

Of course, the MIMO Wiener model IV may be transformed to the MIMO structure II (in a similar way as we transform the MIMO Wiener model III to the MIMO structure I), but in such a case, the resulting order of dynamics of the linear dynamic block, defined by n_A and n_B, may be very high, although the orders of the consecutive transfer functions, defined by n_A^{m,n} and n_B^{m,n}, are low. Because of that, for processes with truly multiple inputs and outputs, such a conversion is not recommended; it is advised to use the MIMO Wiener model IV in place of the model II. In order to explicitly express model outputs as functions of the input signals of the process and the auxiliary signals of the model at some previous sampling instants, we use Fig. 2.5 and Eq. (2.30), which give

v_{1,1}(k) = G_{1,1}(q^{-1}) u_1(k) = \frac{N_{1,1}(q^{-1})}{D_{1,1}(q^{-1})} u_1(k)    (2.58)
⋮
v_{n_v,n_u}(k) = G_{n_v,n_u}(q^{-1}) u_{n_u}(k) = \frac{N_{n_v,n_u}(q^{-1})}{D_{n_v,n_u}(q^{-1})} u_{n_u}(k)    (2.59)

Taking into account Eqs. (2.31)–(2.32), we obtain
v_{1,1}(k) = \sum_{i=1}^{n_B^{1,1}} b_i^{1,1} u_1(k-i) - \sum_{i=1}^{n_A^{1,1}} a_i^{1,1} v_{1,1}(k-i)    (2.60)
⋮
v_{n_v,n_u}(k) = \sum_{i=1}^{n_B^{n_v,n_u}} b_i^{n_v,n_u} u_{n_u}(k-i) - \sum_{i=1}^{n_A^{n_v,n_u}} a_i^{n_v,n_u} v_{n_v,n_u}(k-i)    (2.61)

Equations (2.60)–(2.61) may be rewritten in the compact form

v_{m,n}(k) = \sum_{i=1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i),  m = 1, ..., n_v,  n = 1, ..., n_u    (2.62)

From Fig. 2.5, we have
v_1(k) = \sum_{n=1}^{n_u} v_{1,n}(k)    (2.63)
⋮
v_{n_v}(k) = \sum_{n=1}^{n_u} v_{n_v,n}(k)    (2.64)

which may be rewritten compactly

v_m(k) = \sum_{n=1}^{n_u} v_{m,n}(k),  m = 1, ..., n_v    (2.65)
Using Eqs. (2.25) and (2.63)–(2.64), the consecutive model outputs are

y_1(k) = g_1\left( \sum_{n=1}^{n_u} v_{1,n}(k), ..., \sum_{n=1}^{n_u} v_{n_v,n}(k) \right)    (2.66)
⋮
y_{n_y}(k) = g_{n_y}\left( \sum_{n=1}^{n_u} v_{1,n}(k), ..., \sum_{n=1}^{n_u} v_{n_v,n}(k) \right)    (2.67)

which may be rewritten compactly

y_m(k) = g_m\left( \sum_{n=1}^{n_u} v_{1,n}(k), ..., \sum_{n=1}^{n_u} v_{n_v,n}(k) \right),  m = 1, ..., n_y    (2.68)
From Eqs. (2.60)–(2.61), we obtain model outputs

y_1(k) = g_1\left( \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{1,n}} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A^{1,n}} a_i^{1,n} v_{1,n}(k-i) \right), ..., \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{n_v,n}} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A^{n_v,n}} a_i^{n_v,n} v_{n_v,n}(k-i) \right) \right)    (2.69)
⋮
y_{n_y}(k) = g_{n_y}\left( \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{1,n}} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A^{1,n}} a_i^{1,n} v_{1,n}(k-i) \right), ..., \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{n_v,n}} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A^{n_v,n}} a_i^{n_v,n} v_{n_v,n}(k-i) \right) \right)    (2.70)

which may be rewritten compactly

y_m(k) = g_m\left( \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{1,n}} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A^{1,n}} a_i^{1,n} v_{1,n}(k-i) \right), ..., \sum_{n=1}^{n_u} \left( \sum_{i=1}^{n_B^{n_v,n}} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A^{n_v,n}} a_i^{n_v,n} v_{n_v,n}(k-i) \right) \right),  m = 1, ..., n_y    (2.71)
2.1.6 MIMO Wiener Model V

Finally, let us discuss the last structure of the input-output MIMO Wiener model, depicted in Fig. 2.6. Each model output is represented as a parallel connection of n_u transfer functions and n_u SISO nonlinear static blocks. The MISO version of the MIMO model V is discussed in [50, 61], i.e. assuming that the process has n_u inputs but only one output. The consecutive transfer functions are the same as in the model III shown in Fig. 2.4, i.e. they are described by Eq. (2.30), and the signals v_{m,n}(k) are characterised by Eq. (2.47), where m = 1, ..., n_y, n = 1, ..., n_u.

Fig. 2.6 The structure of the MIMO Wiener model V

The nonlinear static part of the model is represented by n_y n_u nonlinear functions described by the general equations
y_{1,1}(k) = g_{1,1}(v_{1,1}(k))    (2.72)
⋮
y_{n_y,n_u}(k) = g_{n_y,n_u}(v_{n_y,n_u}(k))    (2.73)

which may be rewritten compactly

y_{m,n}(k) = g_{m,n}(v_{m,n}(k)),  m = 1, ..., n_y,  n = 1, ..., n_u    (2.74)

The model outputs are

y_1(k) = \sum_{n=1}^{n_u} y_{1,n}(k) = \sum_{n=1}^{n_u} g_{1,n}(v_{1,n}(k))    (2.75)
⋮
y_{n_y}(k) = \sum_{n=1}^{n_u} y_{n_y,n}(k) = \sum_{n=1}^{n_u} g_{n_y,n}(v_{n_y,n}(k))    (2.76)

which may be rewritten compactly

y_m(k) = \sum_{n=1}^{n_u} y_{m,n}(k) = \sum_{n=1}^{n_u} g_{m,n}(v_{m,n}(k)),  m = 1, ..., n_y    (2.77)

From Eqs. (2.45)–(2.46) and (2.75)–(2.76), we have
y_1(k) = \sum_{n=1}^{n_u} g_{1,n}\left( \sum_{i=1}^{n_B^{1,n}} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A^{1,n}} a_i^{1,n} v_{1,n}(k-i) \right)    (2.78)
⋮
y_{n_y}(k) = \sum_{n=1}^{n_u} g_{n_y,n}\left( \sum_{i=1}^{n_B^{n_y,n}} b_i^{n_y,n} u_n(k-i) - \sum_{i=1}^{n_A^{n_y,n}} a_i^{n_y,n} v_{n_y,n}(k-i) \right)    (2.79)

which may be rewritten compactly

y_m(k) = \sum_{n=1}^{n_u} g_{m,n}\left( \sum_{i=1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i) \right),  m = 1, ..., n_y    (2.80)
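In Eq. (2.80), each branch carries its own first block and its own nonlinearity, and the output simply sums the branch contributions. A hedged Python sketch of one output channel (the branch coefficients and static functions are invented for the illustration):

```python
def simulate_wiener_v_output(u, a, b, g):
    """One output of the MIMO Wiener model V, Eq. (2.80):
    y_m(k) = sum_n g_{m,n}(v_{m,n}(k)), each branch driven by one input.

    u -- list of input vectors [u_1(k), ..., u_nu(k)]
    a[n], b[n] -- denominator/numerator coefficient lists of branch n
    g[n] -- static nonlinearity of branch n
    Past signal values (before k = 0) are assumed to be zero.
    """
    nu = len(g)
    v = [[] for _ in range(nu)]   # histories of v_{m,n} for this output m
    y = []
    for k in range(len(u)):
        yk = 0.0
        for n in range(nu):
            # branch dynamics, Eq. (2.47)
            vk = sum(b[n][i - 1] * u[k - i][n]
                     for i in range(1, len(b[n]) + 1) if k - i >= 0)
            vk -= sum(a[n][i - 1] * v[n][k - i]
                      for i in range(1, len(a[n]) + 1) if k - i >= 0)
            v[n].append(vk)
            yk += g[n](vk)        # sum of branch nonlinearities
        y.append(yk)
    return y

u = [[1.0, 1.0]] * 20
y = simulate_wiener_v_output(u, a=[[-0.5], [0.0]], b=[[0.5], [1.0]],
                             g=[lambda x: x, lambda x: x * x])
```

Unlike models I–IV, no interaction between branches occurs before the final summation, which is what makes this structure convenient when each input-output channel is identified separately.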
2.2 Structures of State-Space Wiener Models

As far as state-space Wiener models are concerned, one SISO structure and two MIMO model configurations are considered. The number of state variables is denoted by n_x; the state vector is x = [x_1 ... x_{n_x}]^T.

2.2.1 State-Space SISO Wiener Model

The general structure of the state-space SISO Wiener model, shown in Fig. 2.1, is the same as that used in the input-output approach. However, in the state-space description, the linear dynamic part of the model is characterised by the equations

x(k+1) = A x(k) + B u(k)    (2.81)
v(k) = C x(k)    (2.82)
The constant parameters of the linear dynamic part are characterised by the matrices A, B and C of dimensionality n_x × n_x, n_x × 1 and 1 × n_x, respectively. They have the following structure

A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n_x} \\ \vdots & \ddots & \vdots \\ a_{n_x,1} & \cdots & a_{n_x,n_x} \end{bmatrix},  B = \begin{bmatrix} b_{1,1} \\ \vdots \\ b_{n_x,1} \end{bmatrix},  C = \begin{bmatrix} c_{1,1} & \cdots & c_{1,n_x} \end{bmatrix}    (2.83)
The nonlinear static part of the model is described by the general equation (2.5). Using Eqs. (2.5), (2.81) and (2.82), the state-space Wiener model is compactly described by the following vector-matrix equations

x(k+1) = A x(k) + B u(k)    (2.84)
y(k) = g(v(k)) = g(C x(k))    (2.85)

Taking into account the structure of the model matrices (Eq. (2.83)), the model (2.84)–(2.85) may be rewritten in the following scalar form

x_i(k) = \sum_{j=1}^{n_x} a_{i,j} x_j(k-1) + b_{i,1} u(k-1),  i = 1, ..., n_x    (2.86)
y(k) = g(v(k)) = g\left( \sum_{i=1}^{n_x} c_{1,i} x_i(k) \right)    (2.87)

The signal between the two blocks of the model is

v(k) = \sum_{i=1}^{n_x} c_{1,i} x_i(k)    (2.88)
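The state-space form (2.84)–(2.85) is equally easy to simulate: the state is propagated linearly and the nonlinearity is applied only to the output. A minimal Python sketch (the scalar-state matrices and the tanh static block are invented for the illustration):

```python
import math

def simulate_ss_wiener(u, A, B, C, g):
    """Simulate the state-space SISO Wiener model (2.84)-(2.85):
    x(k+1) = A x(k) + B u(k),  y(k) = g(C x(k)).

    A -- n_x x n_x nested list, B -- length-n_x list, C -- length-n_x list,
    g -- static nonlinearity. The initial state x(0) is assumed to be zero.
    """
    nx = len(B)
    x = [0.0] * nx
    y = []
    for uk in u:
        v = sum(C[i] * x[i] for i in range(nx))   # v(k) = C x(k), Eq. (2.88)
        y.append(g(v))                            # y(k) = g(v(k)), Eq. (2.87)
        # state update: x(k+1) = A x(k) + B u(k), Eq. (2.86)
        x = [sum(A[i][j] * x[j] for j in range(nx)) + B[i] * uk
             for i in range(nx)]
    return y

y = simulate_ss_wiener([1.0] * 30, A=[[0.8]], B=[0.2], C=[1.0], g=math.tanh)
```

For n_x = 1, A = [[0.8]], B = [0.2], C = [1.0], this reproduces the same first-order behaviour as the input-output SISO example, which illustrates the equivalence of the two descriptions for appropriately chosen parameters.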
2.2.2 State-Space MIMO Wiener Model I

The general structure of the state-space MIMO Wiener model I is the same as the corresponding input-output representation shown in Fig. 2.2. Similarly to the state-space SISO case, in the state-space MIMO Wiener model I, the linear dynamic part is also described by Eqs. (2.81)–(2.82), but the constant-parameter model matrices are of dimensionality n_x × n_x, n_x × n_u and n_y × n_x, respectively, and they have the following structure

A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n_x} \\ \vdots & \ddots & \vdots \\ a_{n_x,1} & \cdots & a_{n_x,n_x} \end{bmatrix},  B = \begin{bmatrix} b_{1,1} & \cdots & b_{1,n_u} \\ \vdots & \ddots & \vdots \\ b_{n_x,1} & \cdots & b_{n_x,n_u} \end{bmatrix},  C = \begin{bmatrix} c_{1,1} & \cdots & c_{1,n_x} \\ \vdots & \ddots & \vdots \\ c_{n_y,1} & \cdots & c_{n_y,n_x} \end{bmatrix}    (2.89)

The nonlinear static part of the model is described by the general equation (2.14). For compactness of presentation, let us define the vector function

g(v(k)) = \begin{bmatrix} g_1(v_1(k)) \\ \vdots \\ g_{n_y}(v_{n_y}(k)) \end{bmatrix}    (2.90)
As a result, the state-space MIMO Wiener model I may be described by the general vector-matrix equations (2.84)–(2.85). Taking into account the structure of the model matrices (Eq. (2.89)), the following scalar description is obtained

x_i(k) = \sum_{j=1}^{n_x} a_{i,j} x_j(k-1) + \sum_{j=1}^{n_u} b_{i,j} u_j(k-1),  i = 1, ..., n_x    (2.91)
y_m(k) = g_m(v_m(k)) = g_m\left( \sum_{i=1}^{n_x} c_{m,i} x_i(k) \right),  m = 1, ..., n_y    (2.92)

The auxiliary signals between the two model blocks are

v_m(k) = \sum_{i=1}^{n_x} c_{m,i} x_i(k),  m = 1, ..., n_y    (2.93)
2.2.3 State-Space MIMO Wiener Model II

The general structure of the state-space MIMO Wiener model II is the same as the corresponding input-output representation shown in Fig. 2.3. Similarly to the state-space SISO and MIMO I structures, in the state-space MIMO Wiener model II, the linear dynamic part is also described by Eqs. (2.81)–(2.82), but the constant-parameter model matrices are of dimensionality n_x × n_x, n_x × n_u and n_v × n_x, respectively, and they have the following structure

A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n_x} \\ \vdots & \ddots & \vdots \\ a_{n_x,1} & \cdots & a_{n_x,n_x} \end{bmatrix},  B = \begin{bmatrix} b_{1,1} & \cdots & b_{1,n_u} \\ \vdots & \ddots & \vdots \\ b_{n_x,1} & \cdots & b_{n_x,n_u} \end{bmatrix},  C = \begin{bmatrix} c_{1,1} & \cdots & c_{1,n_x} \\ \vdots & \ddots & \vdots \\ c_{n_v,1} & \cdots & c_{n_v,n_x} \end{bmatrix}    (2.94)

The nonlinear static part of the model is described by the general equation (2.25). For compactness of presentation, let us define the vector function

g(v(k)) = g(v_1(k), ..., v_{n_v}(k)) = \begin{bmatrix} g_1(v_1(k), ..., v_{n_v}(k)) \\ \vdots \\ g_{n_y}(v_1(k), ..., v_{n_v}(k)) \end{bmatrix}    (2.95)

As a result, the state-space MIMO Wiener model II may be described by the general vector-matrix equations (2.84)–(2.85). Taking into account the structure of the model matrices (Eq. (2.94)), its scalar form is characterised by the same state equation that is used in the state-space MIMO Wiener model I (Eq. (2.91)); only the output equation differs. For completeness of presentation, we give both the state and output scalar equations
x_i(k) = \sum_{j=1}^{n_x} a_{i,j} x_j(k-1) + \sum_{j=1}^{n_u} b_{i,j} u_j(k-1),  i = 1, ..., n_x    (2.96)
y_m(k) = g_m(v_1(k), ..., v_{n_v}(k)) = g_m\left( \sum_{i=1}^{n_x} c_{1,i} x_i(k), ..., \sum_{i=1}^{n_x} c_{n_v,i} x_i(k) \right),  m = 1, ..., n_y    (2.97)

The auxiliary signals between the two model blocks are

v_m(k) = \sum_{i=1}^{n_x} c_{m,i} x_i(k),  m = 1, ..., n_v    (2.98)
2.3 Identification of Wiener Models

The following methods may be used for identification of Wiener models:

– correlation methods [15, 17, 113],
– linear regression methods [57–59, 108],
– nonparametric regression methods [46–48, 86],
– nonlinear optimisation methods [4, 8, 57, 72, 115, 117, 121],
– frequency analysis methods [20, 37],
– subspace approaches [9, 40, 42, 116].
Some methods use a linear approximation of the model to initialise the identification algorithm. A review of such techniques is presented in [105]. A thorough discussion of the identification of Wiener systems is given in the textbooks [38, 57, 86]. In practice, the so-called input injection method [89] makes it possible to identify a running process without stopping its usual operation: only slight random injections are added to the input signal, and they do not disturb the overall system's functionality.
2.4 Possible Structures of Linear and Nonlinear Parts of Wiener Models

Numerous different representations are used as the linear dynamic part of the Wiener model:
– transfer functions [19, 28, 29, 57, 64, 67, 70, 79, 103, 105, 119, 120],
– impulse responses [46–48],
– finite impulse responses [33, 113],
– state-space models [40, 42, 74, 116],
– Laguerre orthonormal basis functions [1, 60, 81, 108],
– generalized orthonormal basis functions [111].

As far as the nonlinear static part is concerned, the following representations are reported in the literature:
– polynomials [19, 28, 57, 59, 60, 64, 79, 105, 108, 111, 116, 119],
– Multi-Layer Perceptron (MLP) neural networks [4, 57, 67, 68],
– Radial Basis Function (RBF) neural networks [11],
– Support Vector Machines (SVMs) [112],
– Least Squares Support Vector Machines (LS-SVMs) [21, 70],
– piecewise linear functions [29, 33, 106],
– fuzzy models [82, 85],
– Legendre polynomials [9],
– cubic splines [7],
– sets of basis functions [40, 42, 103, 120],
– kernel expansions [113],
– nonparametric representations [47, 48].
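Because the two blocks are independent, the representations listed above can be mixed freely. A small sketch, with made-up coefficients, plugging two of the listed static-part choices (a polynomial and a piecewise-linear function) into the same second-order transfer-function linear part:

```python
import numpy as np

def linear_part(u_seq, a=(1.2, -0.4), b=(0.3, 0.2)):
    """Second-order transfer-function dynamics:
    v(k) = a1 v(k-1) + a2 v(k-2) + b1 u(k-1) + b2 u(k-2)."""
    v1 = v2 = u1 = u2 = 0.0
    vs = []
    for u in u_seq:
        v = a[0] * v1 + a[1] * v2 + b[0] * u1 + b[1] * u2
        v2, v1, u2, u1 = v1, v, u1, u
        vs.append(v)
    return np.array(vs)

poly_static = lambda v: 0.5 * v + 0.1 * v ** 3   # polynomial static block
pwl_static = lambda v: np.clip(v, -1.0, 1.0)     # piecewise-linear static block

v = linear_part([1.0] * 60)                      # unit step response
y_poly, y_pwl = poly_static(v), pwl_static(v)
```

For a unit step the linear part settles at v = 2.5; the polynomial block amplifies it to 2.8125, while the piecewise-linear block saturates it at 1, showing how the choice of static representation shapes the model's steady-state behaviour.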
2.5 Example Applications of Wiener Models

Wiener models are reported to approximate the behaviour of a wide array of processes. Examples are:
– chemical reactors [22, 43, 62, 68, 70, 106],
– distillation columns [18],
– gasifiers [6],
– chromatographic separation processes [8],
– proton exchange membrane fuel cells [73],
– solid oxide fuel cells [72],
– the relaxation processes during anaesthesia [80],
– the arterial pulse transmission phenomena [97].
2.6 Other Structures of Cascade Models

Figure 2.7 shows the structure of the SISO Hammerstein model [57]. In contrast to the Wiener model, the order of the model blocks is reversed, i.e. the nonlinear static block is followed by the linear dynamic one. In particular, Hammerstein models may be used when the actuator is nonlinear. For example, the Hammerstein structure can be used as a model of the following processes: neutralisation reactors (pH reactors) [34, 100, 107], distillation columns [30, 34, 77], heat exchangers [30] and diesel engines [10].
Fig. 2.7 The structure of the SISO Hammerstein model
Fig. 2.8 The structure of the SISO Hammerstein–Wiener model
In [68] an example is given which compares the efficiency of the Hammerstein and Wiener models for a polymerisation reactor; the Wiener structure turns out to be much better. Similarly to the Wiener systems, the following identification approaches are used in the case of the Hammerstein ones:
– correlation methods [14–17],
– linear optimisation methods [24, 41, 51, 91, 109],
– nonparametric regression methods [44, 45, 49, 53, 66],
– nonlinear optimisation methods [5, 24, 30, 56, 110, 114],
– subspace methods [40],
– combined parametric-nonparametric methods [54, 86].
An excellent review of identification methods for the Hammerstein model is given in [38, 57, 86]. MPC of dynamical processes represented by Hammerstein models is discussed in numerous works, e.g. in [3, 23, 34, 65, 68, 83, 84, 100, 107]. In addition to cascade models comprised of two blocks, we should briefly mention two examples of more complex representations. In the Hammerstein–Wiener structure depicted in Fig. 2.8, a linear dynamic block is embedded between two nonlinear static blocks. The Hammerstein–Wiener structure may be successfully used to describe numerous processes, e.g. the human muscle [2], a continuous stirred tank reactor [55], a micro-scale polymerase chain reaction reactor [75], temperature variations in a silage bale [90], a DC motor [92], a neutralisation reactor [95], a photovoltaic system [96] and even runtime management of quality-of-service performance and resource provisioning in shared-resource software environments [98]. Various identification algorithms for Hammerstein–Wiener models are discussed in [12, 32, 39, 78, 87, 118, 122]. MPC of dynamical processes represented by Hammerstein–Wiener models is discussed in a few works, e.g. in [18, 27, 55, 69, 98].
Fig. 2.9 The structure of the SISO Wiener–Hammerstein model
In the Wiener–Hammerstein structure depicted in Fig. 2.9, a nonlinear static block is embedded between two linear dynamic blocks. Although such a model is used significantly less frequently than the Wiener, Hammerstein and Hammerstein–Wiener structures, it may be employed to describe the following example processes: a superheater-desuperheater [13], an RF amplifier [25], a paralysed muscle under electrical stimulation [38], a heat exchanger [52], a DC-DC converter [93], an equalizer for optical communication [94] and an electronic circuit [99]. Various identification algorithms for Wiener–Hammerstein models are discussed in [31, 36, 52, 63, 76, 88, 101]. MPC of dynamical processes represented by Wiener–Hammerstein models is discussed in [26, 71]. Finally, let us briefly mention parallel cascade models. The parallel Wiener structure [103, 104] is depicted in Fig. 2.10. Similarly, we may obtain a parallel Hammerstein model, usually named the Uryson model [35, 102]. Parallel Wiener–Hammerstein structures are also possible [101]. All things considered, we may notice that the Wiener model is the most popular because of its simplicity and effectiveness. The Hammerstein model is considered slightly less frequently in the literature and in applications. The Hammerstein–Wiener structure is used much less frequently, and the Wiener–Hammerstein structure is the least popular.
Fig. 2.10 The structure of the SISO parallel Wiener model
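The difference between the Wiener and Hammerstein orderings can be seen in a short simulation: the same two blocks, connected in the two possible orders, generally produce different outputs (the blocks below are arbitrary examples, not taken from the book):

```python
import numpy as np

def first_order(seq, a=0.9, b=0.1):
    """Linear dynamic block: z(k) = a z(k-1) + b s(k-1)."""
    z, out = 0.0, []
    for s in seq:
        z = a * z + b * s
        out.append(z)
    return np.array(out)

f = lambda s: np.asarray(s) ** 2              # nonlinear static block

u = np.sin(0.3 * np.arange(100))
y_wiener = f(first_order(u))                  # linear -> nonlinear (Wiener)
y_hammerstein = first_order(f(u))             # nonlinear -> linear (Hammerstein)
```

Filtering the sine and then squaring is not the same as squaring and then filtering, so the two trajectories differ; only when the "nonlinear" block is actually linear do the two orderings commute.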
References
1. Aadaleesan, P., Miglan, N., Sharma, R., Saha, P.: Nonlinear system identification using Wiener type Laguerre-Wavelet network model. Chem. Eng. Sci. 63, 3932–3941 (2008)
2. Abbasi-Asl, R., Khorsandi, R., Farzampour, S., Zahedi, E.: Estimation of muscle force with EMG signals using Hammerstein-Wiener model. In: Proceedings of the 5th Kuala Lumpur International Conference on Biomedical Engineering 2011 (BIOMED 2011), pp. 157–160. Kuala Lumpur, Malaysia (2011)
3. Abonyi, J., Babuška, R., Ayala Botto, M., Szeifert, F., Nagy, L.: Identification and control of nonlinear systems using fuzzy Hammerstein models. Ind. Eng. Chem. Res. 39, 4302–4314 (2000)
4. Al-Duwaish, H., Karim, M., Chandrasekar, V.: Use of multilayer feedforward neural networks in identification and control of Wiener model. IEE Proc.: Control Theory Appl. 143, 255–258 (1996)
5. Al-Duwaish, H., Karim, M., Chandrasekar, V.: Hammerstein model identification by multilayer feedforward neural networks. Int. J. Syst. Sci. 18, 49–54 (1997)
6. Al Seyab, R.K., Cao, Y.: Nonlinear model predictive control for the ALSTOM gasifier. J. Process Control 16, 795–808 (2006)
7. Aljamaan, I., Westwick, D., Foley, M.: Identification of Wiener models in the presence of ARIMA process noise. IFAC-PapersOnLine 49, 1008–1013 (2016)
8. Arto, V., Hannu, P., Halme, A.: Modeling of chromatographic separation process with Wiener-MLP representation. J. Process Control 78, 443–458 (2001)
9. Ase, H., Katayama, T.: A subspace-based identification of two-channel Wiener systems. IFAC-PapersOnLine 48, 638–643 (2015)
10. Ayoubi, M.: Comparison between the dynamic multi-layered perceptron and generalised Hammerstein model for experimental identification of the loading process in diesel engines. Control Eng. Pract. 6, 271–279 (1998)
11. Azhar, A.S.S., Al-Duwaish, H.N.: Identification of Wiener model using radial basis functions neural networks. In: Dorronsoro, J.R. (ed.) Artificial Neural Networks (ICANN 2002). Lecture Notes in Computer Science, vol. 2415, pp. 344–350. Springer, Berlin (2002)
12. Bai, E.W.: A blind approach to the Hammerstein-Wiener model identification. Automatica 38, 967–979 (2002)
13. Benyó, I., Kovács, J., Mononen, J., Kortela, U.: Modelling of steam temperature dynamics of a superheater. Int. J. Simul. 6, 3–9 (2005)
14. Billings, S.A., Fakhouri, S.Y.: Identification of a class of nonlinear systems using correlation analysis. Proc. Inst. Electr. Eng. 125, 691–697 (1978)
15. Billings, S.A., Fakhouri, S.Y.: Theory of separable processes with applications to the identification of nonlinear systems. Proc. Inst. Electr. Eng. 125, 1051–1058 (1978)
16. Billings, S.A., Fakhouri, S.Y.: Non-linear system identification using the Hammerstein model. Int. J. Syst. Sci. 10, 567–578 (1979)
17. Billings, S.A., Fakhouri, S.Y.: Identification of systems containing linear dynamic and static nonlinear elements. Automatica 18, 15–26 (1982)
18. Bloemen, H.H.J., Chou, C.T., Boom, T.J.J., Verdult, V., Verhaegen, M., Backx, T.C.: Wiener model identification and predictive control for dual composition control of a distillation column. J. Process Control 11, 601–620 (2001)
19. Bottegai, G., Castro-Garcia, R., Suykens, J.A.K.: On the identification of Wiener systems with polynomial nonlinearity. In: Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, VIC, Australia, pp. 6475–6480 (2017)
20. Brouri, A., Slassi, S.: Frequency identification approach for Wiener systems. Int. J. Comput. Eng. Res. 5, 12–16 (2015)
21. Castro-Garcia, R., Suykens, J.A.K.: Wiener system identification using best linear approximation within the LS-SVM framework. In: Proceedings of the 2016 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Cartagena, Colombia, pp. 1–6 (2016)
22. Cervantes, A.L., Agamennoni, O.E., Figueroa, J.L.: A nonlinear model predictive control system based on Wiener piecewise linear models. J. Process Control 13, 655–666 (2003)
23. Chan, K.H., Bao, J.: Model predictive control of Hammerstein systems with multivariable nonlinearities. Ind. Eng. Chem. Res. 46, 168–180 (2007)
24. Chang, F.H.I., Luus, R.: A noniterative method for identification using Hammerstein model. IEEE Trans. Autom. Control 16, 464–468 (1971)
25. Crama, P., Rolain, Y.: Broadband measurement and identification of a Wiener-Hammerstein model for an RF amplifier. In: 60th ARFTG Conference Digest, Fall 2002, Washington, DC, USA, pp. 49–57 (2002)
26. Dasgupta, D., Patwardhan, S.C.: NMPC of a continuous fermenter using Wiener-Hammerstein model developed from irregularly sampled multi-rate data. In: Proceedings of the 9th International Symposium on Dynamics and Control of Process Systems (DYCOPS 2010), Leuven, Belgium, pp. 637–642 (2010)
27. Ding, B., Ping, X.: Dynamic output feedback model predictive control for nonlinear systems represented by Hammerstein-Wiener model. J. Process Control 22, 1773–1784 (2012)
28. Ding, F., Liu, X., Liu, M.: The recursive least squares identification algorithm for a class of Wiener nonlinear systems. J. Franklin Inst. 353, 1518–1526 (2015)
29. Dong, R., Tan, Q., Tan, Y.: Recursive identification algorithm for dynamic systems with output backlash and its convergence. Int. J. Appl. Math. Comput. Sci. 19, 631–638 (2009)
30. Eskinat, E., Johnson, S., Luyben, W.L.: Use of Hammerstein models in identification of nonlinear systems. AIChE J. 37, 255–268 (1991)
31. Falck, T., Dreesen, P., De Brabanter, K., Pleckmans, K., De Moor, B., Suykens, J.A.K.: Least-squares support vector machines for the identification of Wiener-Hammerstein systems. Control Eng. Pract. 20, 1165–1174 (2012)
32. Falkner, A.H.: Iterative technique in the identification of a non-linear system. Int. J. Control 48, 385–396 (1988)
33. Fan, D., Lo, K.: Identification for disturbed MIMO Wiener systems. Nonlinear Dyn. 55, 31–42 (2009)
34. Fruzzetti, K.P., Palazoğlu, A., McDonald, K.A.: Nonlinear model predictive control using Hammerstein models. J. Process Control 7, 31–41 (1997)
35. Gallman, P.: An iterative method for the identification of nonlinear systems using a Uryson model. IEEE Trans. Autom. Control 20, 771–775 (1975)
36. Giordano, G., Gros, S., Sjöberg, J.: An improved method for Wiener-Hammerstein system identification based on the fractional approach. Automatica 94, 349–360 (2018)
37. Giri, F., Radouane, A., Brouri, A., Chaoui, F.: Combined frequency-prediction error identification approach for Wiener systems with backlash and backlash-inverse operators. Automatica 50, 768–783 (2014)
38. Giri, F., Bai, E.W.: Block-Oriented Nonlinear System Identification. Lecture Notes in Control and Information Sciences, vol. 404. Springer, Berlin (2010)
39. Goethals, I., Pelckmans, K., Hoegaerts, L., Suykens, J.A.K., De Moor, B.: Subspace intersection identification of Hammerstein-Wiener systems. In: Proceedings of the 2005 44th IEEE Conference on Decision and Control/European Control Conference CDC-ECC, Seville, Spain, pp. 7108–7113 (2004)
40. Gómez, J.C., Baeyens, E.: Subspace identification of multivariable Hammerstein and Wiener models. IFAC Proc. Vol. 35, 55–60 (2002)
41. Gómez, J.C., Baeyens, E.: Identification of block-oriented nonlinear systems using orthonormal bases. J. Process Control 14, 685–697 (2004)
42. Gómez, J.C., Baeyens, E.: Subspace-based identification algorithms for Hammerstein and Wiener models. Eur. J. Control 11, 127–136 (2005)
43. Gómez, J.C., Jutan, A., Baeyens, E.: Wiener model identification and predictive control of a pH neutralisation process. Proc. IEE Part D Control Theory Appl. 151, 329–338 (2004)
44. Greblicki, W.: Identification of discrete Hammerstein systems using kernel regression estimates. IEEE Trans. Autom. Control 31, 74–77 (1986)
45. Greblicki, W.: Non-parametric orthogonal series identification of Hammerstein systems. Int. J. Syst. Sci. 20, 2355–2367 (1989)
46. Greblicki, W.: Nonparametric identification of Wiener systems by orthogonal series. IEEE Trans. Autom. Control 39, 2077–2086 (1994)
47. Greblicki, W.: Nonparametric approach to Wiener system identification. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 44, 538–545 (1997)
48. Greblicki, W.: Recursive identification of Wiener systems. Int. J. Appl. Math. Comput. Sci. 11, 977–991 (2001)
49. Greblicki, W., Pawlak, M.: Recursive nonparametric identification of Hammerstein systems. J. Franklin Inst. 326, 461–481 (1989)
50. Guo, F., Bretthauer, G.: Identification of MISO Wiener and Hammerstein systems. In: Proceedings of the European Control Conference, ECC 2003, Cambridge, UK, pp. 2144–2149 (2003). CD-ROM, paper 364
51. Haist, N.D., Chang, F.H.I., Luus, R.: Nonlinear identification in the presence of correlated noise using Hammerstein model. IEEE Trans. Autom. Control 18, 552–555 (1973)
52. Haryanto, A., Hong, K.S.: Maximum likelihood identification of Wiener-Hammerstein models. Mech. Syst. Signal Process. 41, 54–70 (2013)
53. Hasiewicz, Z.: Non-parametric estimation of nonlinearity in a cascade time series system by multiscale approximation. Signal Process. 81, 791–807 (2001)
54. Hasiewicz, Z., Mzyk, G.: Combined parametric-nonparametric identification of Hammerstein systems. IEEE Trans. Autom. Control 49, 1370–1375 (2004)
55. Hong, M., Cheng, S.: Hammerstein-Wiener model predictive control of continuous stirred tank reactor. In: Hu, W. (ed.) Electronics and Signal Processing. Lecture Notes in Electric Engineering, vol. 97, pp. 235–242. Springer, Berlin (2011)
56. Janczak, A.: Neural network approach for identification of Hammerstein systems. Int. J. Control 76, 1749–1766 (2003)
57. Janczak, A.: Identification of Nonlinear Systems Using Neural Networks and Polynomial Models: A Block-Oriented Approach. Lecture Notes in Control and Information Sciences, vol. 310. Springer, Berlin (2004)
58. Janczak, A.: Instrumental variables approach to identification of a class of MIMO Wiener systems. Nonlinear Dyn. 48, 275–284 (2007)
59. Janczak, A., Korbicz, J.: Two-stage instrumental variables identification of polynomial Wiener systems with invertible nonlinearities. Int. J. Appl. Math. Comput. Sci. 29, 571–580 (2019)
60. Jansson, D., Medvedev, A.: Identification of polynomial Wiener systems via Volterra-Laguerre series with model mismatch. IFAC-PapersOnLine 48, 831–836 (2015)
61. Jia, L., Li, Y., Li, F.: Correlation analysis algorithm-based multiple-input single-output Wiener model with output noise. Complexity 9650254 (2019)
62. Kalafatis, A.D., Wang, L., Cluett, W.R.: Linearizing feedforward-feedback control of pH processes based on the Wiener model. J. Process Control 15, 103–112 (2005)
63. Katayama, T., Ase, H.: Linear approximation and identification of MIMO Wiener-Hammerstein systems. Automatica 71, 118–124 (2016)
64. Kazemi, M., Arefi, M.: A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems. ISA Trans. 67, 382–388 (2017)
65. Knohl, T., Xu, W.M., Unbehauen, H.: Indirect adaptive dual control for Hammerstein systems using ANN. Control Eng. Pract. 11, 377–385 (2003)
66. Krzyżak, A., Partyka, M.A.: On identification of block-oriented systems by non-parametric techniques. Int. J. Syst. Sci. 24, 1049–1066 (1993)
67. Ławryńczuk, M.: Practical nonlinear predictive control algorithms for neural Wiener models. J. Process Control 23, 696–714 (2013)
68. Ławryńczuk, M.: Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach. Studies in Systems, Decision and Control, vol. 3. Springer, Cham (2014)
69. Ławryńczuk, M.: Nonlinear predictive control for Hammerstein-Wiener systems. ISA Trans. 55, 49–62 (2015)
70. Ławryńczuk, M.: Modelling and predictive control of a neutralisation reactor using sparse support vector machine Wiener models. Neurocomputing 205, 311–328 (2016)
71. Ławryńczuk, M.: Nonlinear predictive control of dynamic systems represented by Wiener-Hammerstein models. Nonlinear Dyn. 86, 1193–1214 (2016)
72. Ławryńczuk, M.: Identification of Wiener models for dynamic and steady-state performance with application to solid oxide fuel cell. Asian J. Control 21, 1836–1846 (2019)
73. Ławryńczuk, M., Söffker, D.: Wiener structures for modeling and nonlinear predictive control of proton exchange membrane fuel cell. Nonlinear Dyn. 95, 1639–1660 (2019)
74. Ławryńczuk, M., Tatjewski, P.: Offset-free state-space nonlinear predictive control for Wiener systems. Inf. Sci. 511, 127–151 (2020)
75. Lee, Y.J., Sung, S.W., Park, S., Park, S.: Input test signal design and parameter estimation method for the Hammerstein-Wiener processes. Ind. Eng. Chem. Res. 43, 7521–7530 (2004)
76. Li, L., Ren, X.: Identification of nonlinear Wiener-Hammerstein systems by a novel adaptive algorithm based on cost function framework. ISA Trans. 80, 146–159 (2018)
77. Ling, W.M., Rivera, D.: Nonlinear black-box identification of distillation column models design variable selection for model performance enhancement. Int. J. Appl. Math. Comput. Sci. 8, 793–813 (1998)
78. MacArthur, J.W.: A new approach for nonlinear process identification using orthonormal bases and ordinal splines. J. Process Control 22, 375–389 (2012)
79. Mahata, K., Schoukens, J., Cock, A.D.: Information matrix and D-optimal design with Gaussian inputs for Wiener model identification. Automatica 69, 65–77 (2016)
80. Mahfouf, M., Linkens, D.A.: Non-linear generalized predictive control (NLGPC) applied to muscle relaxant anaesthesia. Int. J. Control 71, 239–257 (1998)
81. Mahmoodi, S., Poshtan, J., Jahed-Motlagh, M.R., Montazeri, A.: Nonlinear model predictive control of a pH neutralization process based on Wiener-Laguerre model. Chem. Eng. J. 146, 328–337 (2009)
82. Marusak, P.M.: Application of fuzzy Wiener models in efficient MPC algorithms. In: Szczuka, M., Kryszkiewicz, M., Ramanna, S., Jensen, R., Hu, Q. (eds.) Rough Sets and Current Trends in Computing. Lecture Notes in Artificial Intelligence, vol. 6086, pp. 669–677. Springer, Berlin (2010)
83. Marusak, P.M.: Numerically efficient analytical MPC algorithm based on fuzzy Hammerstein models. In: Dobnikar, A., Lotrič, U., Šter, B. (eds.) Artificial Intelligence and Soft Computing. Lecture Notes in Computer Science, vol. 6593, pp. 31–40. Springer, Berlin (2010)
84. Marusak, P.M.: On prediction generation in efficient MPC algorithms based on fuzzy Hammerstein models. In: Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing. Lecture Notes in Computer Science, vol. 6113, pp. 136–143. Springer, Berlin (2010)
85. Marusak, P.M.: Efficient MPC algorithms based on fuzzy Wiener models and advanced methods of prediction generation. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing. Lecture Notes in Computer Science, vol. 7267, pp. 292–300. Springer, Berlin (2012)
86. Mzyk, G.: Combined Parametric-Nonparametric Identification of Block-Oriented Systems. Lecture Notes in Control and Information Sciences, vol. 454. Springer, Berlin (2014)
87. Mzyk, G., Biegański, M., Mielcarek, P.: Multi-level identification of Hammerstein-Wiener systems. IFAC-PapersOnLine 52, 174–179 (2019)
88. Mzyk, G., Wachel, P.: Kernel-based identification of Wiener-Hammerstein system. Automatica 83, 275–281 (2017)
89. Mzyk, G., Wachel, P.: Wiener system identification by input injection method. Int. J. Adapt. Control Signal Process. 34, 1105–1119 (2020)
90. Nadimi, E.S., Green, O., Blanes-Vidal, V., Larsen, J.J., Christensen, L.P.: Hammerstein-Wiener model for the prediction of temperature variations inside silage stack-bales using wireless sensor networks. Biosys. Eng. 112, 236–247 (2012)
91. Narendra, K.S., Gallman, P.G.: An iterative method for the identification of nonlinear systems using Hammerstein model. IEEE Trans. Autom. Control 11, 546–550 (1966)
92. Nemati, A., Faieghi, M.: The performance comparison of ANFIS and Hammerstein-Wiener models for BLDC motors. In: Hu, W. (ed.) Electronics and Signal Processing. Lecture Notes in Electric Engineering, vol. 97, pp. 29–37. Springer, Berlin (2011)
93. Oliver, J.A., Prieto, R., Cobos, J.A., Garcia, O., Alou, P.: Hybrid Wiener-Hammerstein structure for grey-box modeling of DC-DC converters. In: The 24th Annual IEEE Applied Power Electronics Conference and Exposition, Washington, DC, USA, pp. 280–285 (2009)
94. Pan, J., Cheng, C.: Wiener-Hammerstein model based electrical equalizer for optical communication systems. J. Lightwave Technol. 29, 2454–2459 (2011)
95. Park, H.C., Sung, S.W., Lee, J.: Modeling of Hammerstein-Wiener processes with special input test signals. Ind. Eng. Chem. Res. 45, 1029–1038 (2006)
96. Patcharaprakiti, N., Kirtikara, K., Monyakul, V., Chenvidhya, D., Thongpron, J., Sangswang, A., Muenpinij, B.: Modeling of single phase inverter of photovoltaic system using Hammerstein-Wiener nonlinear system identification. Curr. Appl. Phys. 10, S532–S536 (2010)
97. Patel, A.M., Li, J.K.J.: Validation of a novel nonlinear black box Wiener system model for arterial pulse transmission. Comput. Biol. Med. 88, 11–17 (2017)
98. Patikirikorala, T., Wang, L., Colman, A., Han, J.: Hammerstein-Wiener nonlinear model based predictive control for relative QoS performance and resource management of software systems. Control Eng. Pract. 20, 49–61 (2012)
99. Piroddi, L., Farina, M., Lovera, M.: Black box model identification of nonlinear input-output models: a Wiener-Hammerstein benchmark. Control Eng. Pract. 20, 1109–1118 (2012)
100. Patwardhan, R.S., Lakshminarayanan, S., Shah, S.L.: Constrained nonlinear MPC using Hammerstein and Wiener models: PSL framework. AIChE J. 44, 1611–1622 (1998)
101. Schoukens, M., Marconato, A., Pintelon, R., Vandersteen, G., Rolain, Y.: Parametric identification of parallel Wiener-Hammerstein systems. Automatica 51, 111–122 (2015)
102. Schoukens, M., Pintelon, R., Rolain, Y.: Parametric identification of parallel Hammerstein systems. IEEE Trans. Instrum. Meas. 60, 3931–3938 (2011)
103. Schoukens, M., Rolain, Y.: Parametric MIMO parallel Wiener identification. In: Proceedings of the 2011 50th IEEE Conference on Decision and Control/European Control Conference CDC-ECC, Orlando, FL, USA, pp. 5100–5105 (2011)
104. Schoukens, M., Rolain, Y.: Parametric identification of parallel Wiener systems. IEEE Trans. Instrum. Meas. 61, 2825–2832 (2012)
105. Schoukens, M., Tiels, K.: Identification of block-oriented nonlinear systems starting from linear approximations: a survey. Automatica 85, 272–292 (2017)
106. Shafiee, G., Arefi, M.M., Jahed-Motlagh, M.R., Jalali, A.A.: Nonlinear predictive control of a polymerization reactor based on piecewise linear Wiener model. Chem. Eng. J. 143, 282–292 (2008)
107. Smith, J.G., Kamat, S., Madhavan, K.P.: Modeling of pH process using wavenet based Hammerstein model. J. Process Control 17, 551–561 (2007)
108. Stanisławski, R., Latawiec, K., Gałek, M., Łukaniszyn, M.: Modeling and identification of a fractional-order discrete-time SISO Laguerre-Wiener system. In: Proceedings of the 19th International Conference on Methods and Models in Automation and Robotics (MMAR 2014), Międzyzdroje, Poland, pp. 165–168 (2014)
109. Stoica, P., Söderström, T.: Instrumental-variable methods for identification of Hammerstein systems. Int. J. Control 35, 459–476 (1982)
110. Su, H.T., McAvoy, T.J.: Integration of multilayer perceptron networks and linear dynamic models: a Hammerstein modeling approach. Ind. Eng. Chem. Res. 32, 1927–1936 (1993)
111. Tiels, K., Schoukens, J.: Wiener system identification with generalized orthonormal basis functions. Automatica 50, 3147–3154 (2014)
112. Tötterman, S., Toivonen, H.T.: Support vector method for identification of Wiener models. J. Process Control 19, 1174–1181 (2009)
113. Van Vaerenbergh, S., Via, J., Santamaria, I.: Blind identification of SIMO Wiener systems based on kernel canonical correlation analysis. IEEE Trans. Signal Process. 61, 2219–2230 (2013)
114. Vörös, J.: Parameter identification of discontinuous Hammerstein systems. Automatica 33, 1141–1146 (1997)
115. Vörös, J.: Identification of nonlinear cascade systems with output hysteresis based on the key term separation principle. Appl. Math. Model. 39, 5531–5539 (2015)
116. Westwick, D., Verhaegen, M.: Identifying MIMO Wiener systems using subspace model identification methods. Syst. Control Lett. 52, 235–258 (1996)
117. Wigren, T.: Recursive prediction error identification algorithm using the nonlinear Wiener model. Automatica 29, 1011–1025 (1993)
118. Willis, A., Schön, T.B., Ljung, L., Ninness, B.: Identification of Hammerstein-Wiener systems. Automatica 49, 70–81 (2013)
119. Xiong, W., Yang, X., Ke, L., Xu, B.: EM algorithm-based identification of a class of nonlinear Wiener systems with missing output data. Nonlinear Dyn. 80, 329–339 (2015)
120. Yang, X., Xiong, W., Ma, J., Wang, Z.: Robust identification of Wiener time-delay system with expectation-maximization algorithm. J. Franklin Inst. 354, 5678–5693 (2017)
121. Zhou, L., Li, X., Pan, F.: Gradient based iterative parameter identification for Wiener nonlinear systems. Appl. Math. Model. 37, 16–17 (2013)
122. Zhu, Y.: Estimation of an N-L-N Hammerstein-Wiener model. Automatica 38, 1607–1614 (2002)
Part II
Input-Output Approaches
Chapter 3
MPC Algorithms Using Input-Output Wiener Models
Abstract This Chapter details MPC algorithms for processes described by input-output Wiener models. At first, the simple MPC-inv method based on the inverse static model is recalled. The rudimentary MPC algorithm with Nonlinear Optimisation (MPC-NO) repeated at each sampling instant is described. Next, two computationally efficient MPC methods with on-line model linearisation are characterised: the MPC scheme with Simplified Successive Linearisation (MPC-SSL) and the MPC approach with Nonlinear Prediction and Simplified Linearisation (MPC-NPSL). Two MPC schemes with on-line trajectory linearisation are also detailed: the MPC method with Nonlinear Prediction and Linearisation along the Trajectory (MPC-NPLT) and the MPC scheme with Nonlinear Prediction and Linearisation along the Predicted Trajectory (MPC-NPLPT). All discussed MPC algorithms are first presented in their rudimentary versions; the variants with parameterisation using Laguerre functions, which reduces the number of decision variables, are described next.
3.1 MPC-inv Algorithm

The simplest and the most frequent approach to control of dynamical processes described by Wiener models is to use an inverse static model of the nonlinear part of the Wiener model together with a linear control algorithm [4]. The role of the inverse model is to try to cancel the nonlinear static behaviour of the controlled process. The resulting control system structure is depicted in Fig. 3.1. In the SISO case, the inverse model is

v(k) = \tilde{g}(y(k))        (3.1)

where the general function \tilde{g}: \mathbb{R} \to \mathbb{R}. The inverse model is used twice for control. Firstly, it is used to calculate the value of the auxiliary model signal on the basis of the measured process output

v^{mod}(k) = \tilde{g}(y(k))        (3.2)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_3
Fig. 3.1 The control system structure for dynamical processes described by Wiener models using the inverse static model; g˜ denotes the inverse static model
Typically, the real signal v does not exist in the controlled process and its measurement is impossible. Secondly, the inverse model is necessary to calculate the value of the auxiliary variable corresponding to the current set-point of the controlled variable

v^{sp}(k) = \tilde{g}(y^{sp}(k))        (3.3)
From the perspective of the control algorithm based on a linear model, the controlled output variable is v, not y. The described approach may be used with different kinds of linear controllers, not only MPC [1, 2, 5, 16–18] but also PID or Internal Model Control (IMC) [3, 6]. An alternative is to use dynamic inversion [19]. Although the whole idea seems simple and potentially efficient, it has important drawbacks:
1. The inverse model must exist, which means that the controller based on the inverse model cannot be used when it is impossible to find the inverse representation of the nonlinear part of the Wiener model. Of course, the problem occurs when the static characteristic of the nonlinear block is non-invertible. Furthermore, as discussed in Sect. 4.6, in some cases the inverse models may be very complex.
2. As will be shown in Chap. 4, the described structure may result in control accuracy worse than in other approaches developed especially for dynamical processes described by the Wiener model. In particular, it may give unacceptable control quality when the model used in MPC is not perfect and/or the process is affected by unmeasured disturbances. Typically, both the static and dynamic properties of processes are nonlinear. Hence, nonlinear process behaviour cannot be entirely cancelled by a nonlinear static block.

Depending on the dimensionality of the process and the model structure, we may distinguish the following cases:
1. In the simplest case, when the controlled process is described by the SISO Wiener model shown in Fig. 2.1, it is only necessary to find a SISO inverse model (3.1).
2. For MIMO Wiener models I and III, depicted in Figs. 2.2 and 2.4, respectively, we have to use as many as n_y inverse SISO models

v_1(k) = \tilde{g}_1(y_1(k))        (3.4)
\vdots
v_{n_y}(k) = \tilde{g}_{n_y}(y_{n_y}(k))        (3.5)

3. For MIMO Wiener models II and IV, shown in Figs. 2.3 and 2.5, respectively, we have to use n_v inverse MISO models

v_1(k) = \tilde{g}_1(y_1(k), \ldots, y_{n_y}(k))        (3.6)
\vdots
v_{n_v}(k) = \tilde{g}_{n_v}(y_1(k), \ldots, y_{n_y}(k))        (3.7)
In this case, the inverse models are likely to be complicated, in particular when there are many process outputs.

4. For the MIMO Wiener model V shown in Fig. 2.6, it is necessary to use as many as n_u n_y inverse models

v_{m,n}(k) = \tilde{g}_{m,n}(y_m(k))        (3.8)

for all m = 1, \ldots, n_y, n = 1, \ldots, n_u. In this case, we require that the inverse models calculate all n_u n_y auxiliary signals on the basis of only n_y process output signals, which may turn out to be very difficult or even impossible.

All things considered, provided that the inverse model exists, the MPC-inv algorithm may be used in the SISO case or in the MIMO case when all nonlinear static blocks are of the SISO type, i.e. when the MIMO Wiener models I or III are used. When the MIMO Wiener models II, IV or V are used, the inverse models may be very complicated, which makes implementation difficult or impossible.
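Provided the static block is invertible, the mapping of Eqs. (3.1)–(3.3) is straightforward to implement. A minimal sketch with an invented cubic static block (the specific g is an example, not a model from the book):

```python
import numpy as np

# Inverse-compensation idea of Eqs. (3.1)-(3.3): the static block g is
# inverted so that a linear controller can operate on the auxiliary
# signal v instead of the process output y.
g = lambda v: v ** 3                       # nonlinear static block
g_inv = lambda y: np.cbrt(y)               # its inverse, Eq. (3.1)

def to_auxiliary(y_measured, y_setpoint):
    """Map the measured output and the set-point into the v-domain,
    where the linear controller works (Eqs. (3.2) and (3.3))."""
    v_mod = g_inv(y_measured)              # Eq. (3.2)
    v_sp = g_inv(y_setpoint)               # Eq. (3.3)
    return v_mod, v_sp

v_mod, v_sp = to_auxiliary(y_measured=0.125, y_setpoint=8.0)
```

The cube is strictly monotonic, so its inverse exists everywhere; for a non-invertible characteristic (e.g. a dead zone or a non-monotonic curve), this construction fails, which is exactly drawback 1 above.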
3.2 MPC-NO Algorithm

In the MPC algorithm with Nonlinear Optimisation (MPC-NO), the decision variables, i.e. the future increments of the manipulated variable(s) (1.3), are calculated at each sampling instant k from an optimisation problem. In the SISO case, the formulation (1.12) is used, whereas in the general MIMO one, the optimisation task is (1.20). In both cases, it may be transformed to the compact vector-matrix form (1.35). When soft constraints are imposed on the controlled variables, the optimisation problem is defined by Eq. (1.38), which may be transformed to the compact vector-matrix form (1.39).
3 MPC Algorithms Using Input-Output Wiener Models
The model of the controlled process is used to calculate the predicted values of the controlled variables for the consecutive sampling instants over the prediction horizon, i.e. the quantities $\hat{y}(k+1|k), \ldots, \hat{y}(k+N|k)$. Provided that we have a perfect model, in the SISO case, the prediction equation for the sampling instant $k+p$ is

$$\hat{y}(k+p|k) = y(k+p|k) \tag{3.9}$$

where the symbol $y(k+p|k)$ denotes the output of the model for the sampling instant $k+p$ used at the current instant $k$. Unfortunately, for prediction calculation we must take into account that the model used in MPC is usually not perfect, i.e. there are differences between the properties of the process and its model, and that the measurement of the process output is not ideal. In order to compensate for all these factors, the following general prediction equation must be used [15, 20]

$$\hat{y}(k+p|k) = y(k+p|k) + d(k) \tag{3.10}$$

where $d(k)$ is the current estimate of the unmeasured disturbance which acts on the process output. In the most typical approach (named "the DMC disturbance model"), it is assumed that the disturbance is constant over the whole prediction horizon and its value is determined as the difference between the real (measured) value of the process output, $y(k)$, and the model output, $y^{\mathrm{mod}}(k)$

$$d(k) = y(k) - y^{\mathrm{mod}}(k) \tag{3.11}$$
It may be easily proved that when the unmeasured disturbance estimate is used in the prediction equation, the MPC algorithm has integral action, which leads to no steady-state error [15, 20]. In the MIMO case, the predictions are

$$\hat{y}_m(k+p|k) = y_m(k+p|k) + d_m(k) \tag{3.12}$$

for all process outputs, i.e. for $m = 1, \ldots, n_y$. The disturbance estimates are

$$d_m(k) = y_m(k) - y_m^{\mathrm{mod}}(k) \tag{3.13}$$
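The DMC disturbance model of Eqs. (3.10)–(3.13) reduces to a few lines of arithmetic: the whole model-based prediction is shifted by the current model-plant mismatch. A minimal Python sketch (all numerical values are illustrative assumptions, not data from the book):

```python
import numpy as np

y_meas = 2.07                 # measured process output y(k)
y_mod = 1.95                  # model output y_mod(k) for the same instant
d = y_meas - y_mod            # Eq. (3.11): held constant over the horizon

y_model_pred = np.array([2.00, 2.10, 2.20])   # y(k+p|k), p = 1..N
y_hat = y_model_pred + d                      # Eq. (3.10)
```

Because the same $d(k)$ is added to every element of the predicted trajectory, a constant output disturbance is rejected without steady-state error, which gives the MPC algorithm its integral action.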
Prediction Using SISO Wiener Model

At first, let us discuss the SISO case in which the Wiener model depicted in Fig. 2.1 is used. Using the general prediction equation (3.10) and the description of the nonlinear static block, i.e. Eq. (2.5), we have

$$\hat{y}(k+p|k) = g(v(k+p|k)) + d(k) \tag{3.14}$$

where $p = 1, \ldots, N$. From the description of the linear dynamic block, i.e. Eq. (2.4), we have
$$v(k+1|k) = b_1 u(k|k) + b_2 u(k-1) + b_3 u(k-2) + \cdots + b_{n_B} u(k-n_B+1) - a_1 v(k) - a_2 v(k-1) - a_3 v(k-2) - \cdots - a_{n_A} v(k-n_A+1) \tag{3.15}$$

$$v(k+2|k) = b_1 u(k+1|k) + b_2 u(k|k) + b_3 u(k-1) + \cdots + b_{n_B} u(k-n_B+2) - a_1 v(k+1|k) - a_2 v(k) - a_3 v(k-1) - \cdots - a_{n_A} v(k-n_A+2) \tag{3.16}$$

$$v(k+3|k) = b_1 u(k+2|k) + b_2 u(k+1|k) + b_3 u(k|k) + \cdots + b_{n_B} u(k-n_B+3) - a_1 v(k+2|k) - a_2 v(k+1|k) - a_3 v(k) - \cdots - a_{n_A} v(k-n_A+3) \tag{3.17}$$

$$\vdots$$
In general, Eqs. (3.15)–(3.17) may be rewritten in the following compact form

$$v(k+p|k) = \sum_{i=1}^{I_{\mathrm{uf}}(p)} b_i u(k-i+p|k) + \sum_{i=I_{\mathrm{uf}}(p)+1}^{n_B} b_i u(k-i+p) - \sum_{i=1}^{I_{\mathrm{vf}}(p)} a_i v(k-i+p|k) - \sum_{i=I_{\mathrm{vf}}(p)+1}^{n_A} a_i v(k-i+p) \tag{3.18}$$
for $p = 1, \ldots, N$. Taking into account the prediction of the auxiliary variable for the future sampling instant $k+p$ performed at the current instant $k$, the number of future values of the manipulated variable, from the sampling instant $k$ onwards, i.e. $u(k|k), u(k+1|k), \ldots$, is denoted by

$$I_{\mathrm{uf}}(p) = \max(\min(p, n_B), 0) \tag{3.19}$$

The number of future values of the signal $v$, from the sampling instant $k+1$ onwards, i.e. $v(k+1|k), v(k+2|k), \ldots$, is

$$I_{\mathrm{vf}}(p) = \min(p-1, n_A) \tag{3.20}$$
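The recursion (3.15)–(3.18), with the index split governed by (3.19)–(3.20), is straightforward to implement. The following Python sketch (a minimal illustration; all numerical values in the usage example are assumptions) predicts the auxiliary signal over the horizon:

```python
import numpy as np

def predict_v(b, a, u_future, u_past, v_past, N):
    """Predict v(k+1|k), ..., v(k+N|k) for the linear block
    v(k) = sum_i b_i u(k-i) - sum_i a_i v(k-i), cf. Eq. (3.18).
    Conventions: u_past[j] = u(k-1-j), v_past[j] = v(k-j),
    u_future[p] = u(k+p|k) for p = 0..N-1."""
    nB, nA = len(b), len(a)
    v_pred = []

    def u_at(t):                 # u(k+t): t >= 0 future, t < 0 past
        return u_future[t] if t >= 0 else u_past[-t - 1]

    def v_at(t):                 # v(k+t): t >= 1 predicted, t <= 0 known
        return v_pred[t - 1] if t >= 1 else v_past[-t]

    for p in range(1, N + 1):
        v_pred.append(sum(b[i - 1] * u_at(p - i) for i in range(1, nB + 1))
                      - sum(a[i - 1] * v_at(p - i) for i in range(1, nA + 1)))
    return np.array(v_pred)
```

For example, with $b = [0.5]$, $a = [-0.9]$ (i.e. $v(k) = 0.5u(k-1) + 0.9v(k-1)$) and a unit input held over the horizon from zero initial conditions, the sketch gives $v(k+1|k) = 0.5$ and $v(k+2|k) = 0.95$; the index functions (3.19)–(3.20) are handled implicitly by the sign of the time argument.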
The quantity $v(k)$ does not depend on the future manipulated variables but only on past ones, i.e. up to the sampling instant $k-1$, which is clear from Eq. (2.4). From Eqs. (2.6) and (3.11), the unmeasured disturbance is estimated from

$$d(k) = y(k) - g\!\left(\sum_{i=1}^{n_B} b_i u(k-i) - \sum_{i=1}^{n_A} a_i v(k-i)\right) \tag{3.21}$$
Prediction Using MIMO Wiener Model I

Next, we will discuss the MIMO case in which the first structure of the Wiener model, depicted in Fig. 2.2, is used. Using the general prediction equation (3.12) and from Eq. (2.14), we have

$$\hat{y}_m(k+p|k) = g_m(v_m(k+p|k)) + d_m(k) \tag{3.22}$$
where $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. Using the vector notation, Eq. (3.14) may be obtained, the same as that used in the SISO case (in such a case, all three components are vectors of length $n_y$). Next, from Eq. (2.11), we have

$$v_m(k+p|k) = \sum_{n=1}^{n_u}\left(\sum_{i=1}^{I_{\mathrm{uf}}(p)} b_i^{m,n} u_n(k-i+p|k) + \sum_{i=I_{\mathrm{uf}}(p)+1}^{n_B} b_i^{m,n} u_n(k-i+p)\right) - \sum_{i=1}^{I_{\mathrm{vf}}(p)} a_i^m v_m(k-i+p|k) - \sum_{i=I_{\mathrm{vf}}(p)+1}^{n_A} a_i^m v_m(k-i+p) \tag{3.23}$$
where $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. The quantities $I_{\mathrm{uf}}(p)$ and $I_{\mathrm{vf}}(p)$ (Eqs. (3.19) and (3.20)) are independent of the model input and output because all model channels have the same order of dynamics, defined by the same values of $n_A$ and $n_B$. From Eqs. (2.17) and (3.13), the unmeasured disturbances are estimated from

$$d_m(k) = y_m(k) - g_m\!\left(\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m v_m(k-i)\right) \tag{3.24}$$
Prediction Using MIMO Wiener Model II

Next, we will discuss the MIMO case in which the second structure of the Wiener model, depicted in Fig. 2.3, is used. Using the general prediction equation (3.12) and from Eq. (2.25), we have

$$\hat{y}_m(k+p|k) = g_m(v_1(k+p|k), \ldots, v_{n_v}(k+p|k)) + d_m(k) \tag{3.25}$$
where $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. The signals $v_m(k+p|k)$ are calculated from Eq. (3.23), in a similar way as for the MIMO Wiener model I, for $p = 1, \ldots, N$, but now $m = 1, \ldots, n_v$. From Eqs. (2.28) and (3.13), the unmeasured disturbances are estimated from

$$d_m(k) = y_m(k) - g_m\!\left(\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 v_1(k-i), \ \ldots, \ \sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_v} v_{n_v}(k-i)\right) \tag{3.26}$$
Prediction Using MIMO Wiener Model III

In the case of the third structure of the MIMO Wiener model, depicted in Fig. 2.4, using the general prediction equation (3.12) and from Eq. (2.53), we have

$$\hat{y}_m(k+p|k) = g_m\!\left(\sum_{n=1}^{n_u} v_{m,n}(k+p|k)\right) + d_m(k) \tag{3.27}$$
for $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. From Eq. (2.47), we have

$$v_{m,n}(k+p|k) = \sum_{i=1}^{I_{\mathrm{uf}}(m,n,p)} b_i^{m,n} u_n(k-i+p|k) + \sum_{i=I_{\mathrm{uf}}(m,n,p)+1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i+p) - \sum_{i=1}^{I_{\mathrm{vf}}(m,n,p)} a_i^{m,n} v_{m,n}(k-i+p|k) - \sum_{i=I_{\mathrm{vf}}(m,n,p)+1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i+p) \tag{3.28}$$

for all $m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$, $p = 1, \ldots, N$. Because the transfer functions of the consecutive input-output channels may have different orders of dynamics, in place of Eqs. (3.19)–(3.20), we use

$$I_{\mathrm{uf}}(m,n,p) = \max(\min(p, n_B^{m,n}), 0) \tag{3.29}$$

and

$$I_{\mathrm{vf}}(m,n,p) = \min(p-1, n_A^{m,n}) \tag{3.30}$$
From Eqs. (2.56) and (3.13), the unmeasured disturbances are estimated from

$$d_m(k) = y_m(k) - g_m\!\left(\sum_{n=1}^{n_u}\left(\sum_{i=1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i)\right)\right) \tag{3.31}$$
Prediction Using MIMO Wiener Model IV

In the case of the fourth structure of the MIMO Wiener model, depicted in Fig. 2.5, using the general prediction equation (3.12) and from Eq. (2.68), we have

$$\hat{y}_m(k+p|k) = g_m\!\left(\sum_{n=1}^{n_u} v_{1,n}(k+p|k), \ \ldots, \ \sum_{n=1}^{n_u} v_{n_v,n}(k+p|k)\right) + d_m(k) \tag{3.32}$$

for $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. The signals $v_{m,n}(k+p|k)$ are calculated from Eq. (3.28), in a similar way as for the MIMO model III, for $n = 1, \ldots, n_u$, $p = 1, \ldots, N$, but now $m = 1, \ldots, n_v$. From Eqs. (2.71) and (3.13), the unmeasured disturbances are estimated from
$$d_m(k) = y_m(k) - g_m\!\left(\sum_{n=1}^{n_u}\left(\sum_{i=1}^{n_B^{1,n}} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A^{1,n}} a_i^{1,n} v_{1,n}(k-i)\right), \ \ldots, \ \sum_{n=1}^{n_u}\left(\sum_{i=1}^{n_B^{n_v,n}} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A^{n_v,n}} a_i^{n_v,n} v_{n_v,n}(k-i)\right)\right) \tag{3.33}$$
Prediction Using MIMO Wiener Model V

In the case of the fifth structure of the MIMO Wiener model, depicted in Fig. 2.6, using the general prediction equation (3.12) and from Eq. (2.77), we have

$$\hat{y}_m(k+p|k) = \sum_{n=1}^{n_u} g_{m,n}(v_{m,n}(k+p|k)) + d_m(k) \tag{3.34}$$
for $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. The signals $v_{m,n}(k+p|k)$ are calculated from Eq. (3.28), in the same way as for the MIMO model III, for all $m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$, $p = 1, \ldots, N$. From Eqs. (2.80) and (3.13), the unmeasured disturbances are estimated from

$$d_m(k) = y_m(k) - \sum_{n=1}^{n_u} g_{m,n}\!\left(\sum_{i=1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i)\right) \tag{3.35}$$
Optimisation

One may easily note from Eqs. (3.14), (3.22), (3.25), (3.27), (3.32) and (3.34) that the predicted values of the controlled variables are nonlinear functions of the future values of the manipulated variable(s) or, in other words, nonlinear functions of the calculated future increments (1.3). Hence, the optimisation problems (1.12), (1.20), (1.35), (1.38) and (1.39) are in fact nonlinear ones and a nonlinear optimisation method must be used on-line in the MPC-NO approach. At first, let us discuss how the MPC-NO optimisation problem with hard output constraints defined by Eq. (1.35) should be reformulated in order to solve it in MATLAB. We will use the fmincon function for optimisation. Its syntax is

X = fmincon(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS)

It solves the general nonlinear optimisation task

$$\min_{x(k)} \{f(x(k))\}$$
subject to
$$A x(k) \le B(k)$$
$$A_{\mathrm{eq}} x(k) = B_{\mathrm{eq}}$$
$$C(x(k)) \le 0$$
$$C_{\mathrm{eq}}(x(k)) = 0$$
$$LB \le x(k) \le UB \tag{3.36}$$

The fmincon function makes it possible to take into account five types of constraints: linear inequalities ($A x(k) \le B(k)$), linear equalities ($A_{\mathrm{eq}} x(k) = B_{\mathrm{eq}}$), nonlinear inequalities ($C(x(k)) \le 0$), nonlinear equalities ($C_{\mathrm{eq}}(x(k)) = 0$) and bounds ($LB \le x(k) \le UB$). When compared with the general nonlinear optimisation problem (3.36) solved by the fmincon function, in our MPC-NO optimisation task (1.35), the decision variable vector is $x(k) = \Delta u(k)$ and there are only three types of constraints: linear inequalities defined by

$$A = \begin{bmatrix} -J \\ J \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \end{bmatrix} \tag{3.37}$$
nonlinear inequalities defined by

$$C(x(k)) = \begin{bmatrix} -\hat{y}(k) + y^{\min} \\ \hat{y}(k) - y^{\max} \end{bmatrix} \tag{3.38}$$
and bounds defined by

$$LB = \Delta u^{\min}, \quad UB = \Delta u^{\max} \tag{3.39}$$
Linear and nonlinear equality constraints are not present in our MPC-NO optimisation problem (1.35). The vector of predicted values of the controlled variable(s), $\hat{y}(k)$, is calculated at each sampling instant for the given vector $\Delta u(k)$ recurrently from Eqs. (3.14) and (3.18) (the SISO Wiener model), Eqs. (3.22) and (3.23) (the MIMO Wiener model I), Eqs. (3.23) and (3.25) (the MIMO Wiener model II), Eqs. (3.27) and (3.28) (the MIMO Wiener model III), Eqs. (3.28) and (3.32) (the MIMO Wiener model IV) or Eqs. (3.28) and (3.34) (the MIMO Wiener model V). Let us note that the vector $\hat{y}(k)$ is present both in the minimised cost-function and in the output constraints, which means that both of them are nonlinear.

Next, let us consider the MPC-NO optimisation problem with soft output constraints defined by Eq. (1.39). The algorithm calculates at each sampling instant not only the future increments of the manipulated variable(s) but also the optimal violations of the original hard output constraints necessary to guarantee feasibility. Hence, the vector of the decision variables

$$x(k) = \begin{bmatrix} \Delta u(k) \\ \varepsilon^{\min}(k) \\ \varepsilon^{\max}(k) \end{bmatrix} \tag{3.40}$$

is of length $n_u N_u + 2 n_y$. Let us also define the auxiliary matrices
$$N_1 = \begin{bmatrix} I_{n_u N_u \times n_u N_u} & 0_{n_u N_u \times 2 n_y} \end{bmatrix} \tag{3.41}$$
$$N_2 = \begin{bmatrix} 0_{n_y \times n_u N_u} & I_{n_y \times n_y} & 0_{n_y \times n_y} \end{bmatrix} \tag{3.42}$$
$$N_3 = \begin{bmatrix} 0_{n_y \times (n_u N_u + n_y)} & I_{n_y \times n_y} \end{bmatrix} \tag{3.43}$$

which are of dimensionality $n_u N_u \times (n_u N_u + 2 n_y)$, $n_y \times (n_u N_u + 2 n_y)$ and $n_y \times (n_u N_u + 2 n_y)$, respectively. The following relations are true

$$\Delta u(k) = N_1 x(k) \tag{3.44}$$
$$\varepsilon^{\min}(k) = N_2 x(k) \tag{3.45}$$
$$\varepsilon^{\max}(k) = N_3 x(k) \tag{3.46}$$
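The selector matrices (3.41)–(3.43) simply pick the corresponding sub-vectors out of $x(k)$, which can be checked numerically; the Python sketch below uses toy sizes chosen only for illustration:

```python
import numpy as np

nuNu, ny = 3, 2               # n_u*N_u = 3, n_y = 2 (toy sizes)
N1 = np.hstack([np.eye(nuNu), np.zeros((nuNu, 2 * ny))])                # (3.41)
N2 = np.hstack([np.zeros((ny, nuNu)), np.eye(ny), np.zeros((ny, ny))])  # (3.42)
N3 = np.hstack([np.zeros((ny, nuNu + ny)), np.eye(ny)])                 # (3.43)

x = np.array([0.1, 0.2, 0.3, 1.0, 2.0, 3.0, 4.0])   # [du; eps_min; eps_max]
du, eps_min, eps_max = N1 @ x, N2 @ x, N3 @ x        # Eqs. (3.44)-(3.46)
```

Each product recovers one sub-vector of the stacked decision vector, exactly as stated by the relations (3.44)–(3.46).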
We will also rewrite the vectors $\boldsymbol{\varepsilon}^{\min}(k)$ and $\boldsymbol{\varepsilon}^{\max}(k)$ defined by Eqs. (1.40). They may be expressed in the following way

$$\boldsymbol{\varepsilon}^{\min}(k) = I_{N \times 1} \otimes \varepsilon^{\min}(k) \tag{3.47}$$
$$\boldsymbol{\varepsilon}^{\max}(k) = I_{N \times 1} \otimes \varepsilon^{\max}(k) \tag{3.48}$$

where the symbol $\otimes$ denotes the Kronecker product of two vectors. Using Eqs. (3.45)–(3.46), the relations (3.47)–(3.48) may be expressed as

$$\boldsymbol{\varepsilon}^{\min}(k) = I_{N \times 1} \otimes N_2 x(k) \tag{3.49}$$
$$\boldsymbol{\varepsilon}^{\max}(k) = I_{N \times 1} \otimes N_3 x(k) \tag{3.50}$$
Using Eqs. (3.44)–(3.46) and (3.49)–(3.50), the optimisation task (1.39) becomes

$$\min_{x(k)} \Big\{ J(k) = \|y^{\mathrm{sp}}(k) - \hat{y}(k)\|^2_M + \|N_1 x(k)\|^2_\Lambda + \rho^{\min}\|N_2 x(k)\|^2 + \rho^{\max}\|N_3 x(k)\|^2 \Big\}$$
subject to
$$u^{\min} \le J N_1 x(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le N_1 x(k) \le \Delta u^{\max}$$
$$y^{\min} - I_{N \times 1} \otimes N_2 x(k) \le \hat{y}(k) \le y^{\max} + I_{N \times 1} \otimes N_3 x(k)$$
$$N_2 x(k) \ge 0_{n_y \times 1}, \quad N_3 x(k) \ge 0_{n_y \times 1} \tag{3.51}$$

When compared with the general nonlinear optimisation problem (3.36) solved by the fmincon function, in our MPC-NO optimisation task (3.51), the linear inequality constraints are defined by
$$A = \begin{bmatrix} -J N_1 \\ J N_1 \\ -N_1 \\ N_1 \\ -N_2 \\ -N_3 \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -\Delta u^{\min} \\ \Delta u^{\max} \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix} \tag{3.52}$$

and the nonlinear inequalities are defined by

$$C(x(k)) = \begin{bmatrix} -\hat{y}(k) - I_{N \times 1} \otimes N_2 x(k) + y^{\min} \\ \hat{y}(k) - I_{N \times 1} \otimes N_3 x(k) - y^{\max} \end{bmatrix} \tag{3.53}$$
There are no other constraints in our optimisation task, i.e. no linear equality constraints, nonlinear equality constraints or bounds. The MPC-NO optimisation problem may be solved using analytical or numerical derivatives. In the first case, it is necessary to derive formulas for the gradients of the minimised cost-function and of the constraints, both with respect to the future increments of the manipulated variables (1.3). When soft output constraints are used, all gradients must be calculated with respect to the decision variables (3.40). The idea and the details for neural dynamical models of the MLP structure may be found in [8]. In practical applications, however, all gradients are typically approximated numerically.
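Outside MATLAB, the same hard-constrained MPC-NO task can be posed with SciPy's minimize, an analogue of fmincon. The sketch below uses a toy first-order Wiener model with a tanh static block; every numerical value (model coefficients, bounds, set-point, disturbance estimate) is an illustrative assumption, not an example from the book:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

b1, a1, N = 0.5, -0.9, 5          # toy linear block: v(k+1) = b1*u(k) - a1*v(k)
g = np.tanh                       # assumed static nonlinearity
u_prev, v0, d = 0.0, 0.2, 0.05    # u(k-1), current v(k), disturbance estimate
y_sp = 0.6 * np.ones(N)           # set-point trajectory

def predict(du):                  # y_hat(k+p|k) from the increments, Nu = N
    u = u_prev + np.cumsum(du)    # u(k+p|k) recovered from increments (1.3)
    v, y_hat = v0, np.empty(N)
    for p in range(N):
        v = b1 * u[p] - a1 * v    # linear block recursion, cf. Eq. (3.15)
        y_hat[p] = g(v) + d       # Eq. (3.14)
    return y_hat

cost = lambda du: np.sum((y_sp - predict(du)) ** 2) + 0.1 * np.sum(du ** 2)
res = minimize(cost, np.zeros(N),                          # x(k) = increments
               constraints=[NonlinearConstraint(predict, -1.0, 1.0)],
               bounds=[(-0.2, 0.2)] * N)                   # cf. Eq. (3.39)
du_opt = res.x
```

As in any receding-horizon scheme, only the first increment du_opt[0] would be applied to the plant before the whole problem is solved again at the next sampling instant. Note how both the cost and the output constraint call predict, mirroring the observation that $\hat{y}(k)$ makes both of them nonlinear.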
3.3 MPC-NO-P Algorithm

Now, we will consider parameterisation using Laguerre functions, discussed in Sect. 1.3, in order to reduce the number of decision variables of the MPC-NO algorithm. In the resulting MPC-NO approach with Parameterisation (MPC-NO-P), $c(k)$ is the vector of decision variables. The original decision variables, $\Delta u(k)$, are calculated after solving the MPC-NO-P optimisation task for the actually calculated vector $c(k)$ from Eq. (1.56). It means that all prediction equations derived in Sect. 3.2 for the MPC-NO algorithm can be used in the MPC-NO-P method; it is only necessary to reformulate the optimisation tasks.

At first, let us consider hard constraints imposed on the controlled variables. Using the parameterisation defined by Eq. (1.56), from the general MPC optimisation problem (1.35), we obtain the following MPC-NO-P optimisation task

$$\min_{c(k)} \Big\{ J(k) = \|y^{\mathrm{sp}}(k) - \hat{y}(k)\|^2_M + \|L c(k)\|^2_\Lambda \Big\}$$
subject to
$$u^{\min} \le J L c(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le L c(k) \le \Delta u^{\max}$$
$$y^{\min} \le \hat{y}(k) \le y^{\max} \tag{3.54}$$
In the optimisation problem (1.35), the decision vector $\Delta u(k)$ is of length $n_u N_u$, whereas in the task (3.54) the decision vector $c(k)$ is of length $n_L^1 + \cdots + n_L^{n_u}$. Of course, we assume that $n_L^n < N_u$ for $n = 1, \ldots, n_u$, which means that $n_L^1 + \cdots + n_L^{n_u} < n_u N_u$. The prediction vector $\hat{y}(k)$ is a nonlinear function of the decision vector $c(k)$. The optimisation problem (3.54) is solved in MATLAB by means of the fmincon function. When compared with the general nonlinear optimisation problem (3.36) solved by the fmincon function, in our MPC-NO-P optimisation task (3.54), there are only two types of constraints: linear inequalities defined by

$$A = \begin{bmatrix} -J L \\ J L \\ -L \\ L \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -\Delta u^{\min} \\ \Delta u^{\max} \end{bmatrix} \tag{3.55}$$
and nonlinear inequalities defined by

$$C(x(k)) = \begin{bmatrix} -\hat{y}(k) + y^{\min} \\ \hat{y}(k) - y^{\max} \end{bmatrix} \tag{3.56}$$
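The effect of the parameterisation $\Delta u(k) = L c(k)$ from Eq. (1.56) is purely dimensional: the optimiser works with the short vector $c(k)$, and the full increment trajectory is recovered by one matrix-vector product. A minimal Python sketch (the matrix L below is a random placeholder standing in for the discrete Laguerre basis of Sect. 1.3, used only to show the shapes):

```python
import numpy as np

Nu, nL = 10, 3                          # Nu future increments, nL << Nu
rng = np.random.default_rng(0)
L = rng.standard_normal((Nu, nL))       # placeholder for the Laguerre basis
c = np.array([1.0, -0.5, 0.2])          # short decision vector c(k)
du = L @ c                              # full trajectory: Delta u(k) = L c(k)
```

The nonlinear optimisation now searches over 3 variables instead of 10, while the constraints of the task (3.54) are still expressed on the full trajectory du.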
At each sampling instant, for a given vector $c(k)$, at first the increments $\Delta u(k) = L c(k)$ are found. Next, the vector of predicted values of the controlled variable(s), $\hat{y}(k)$, is calculated recurrently from Eqs. (3.14) and (3.18) (the SISO Wiener model), Eqs. (3.22) and (3.23) (the MIMO Wiener model I), Eqs. (3.23) and (3.25) (the MIMO Wiener model II), Eqs. (3.27) and (3.28) (the MIMO Wiener model III), Eqs. (3.28) and (3.32) (the MIMO Wiener model IV) or Eqs. (3.28) and (3.34) (the MIMO Wiener model V).

Next, let us consider the MPC-NO-P problem with soft constraints imposed on the controlled variables. In such a case, in place of the decision vector $c(k)$ used in the rudimentary MPC-NO-P algorithm, the optimised vector

$$\tilde{x}(k) = \begin{bmatrix} c(k) \\ \varepsilon^{\min}(k) \\ \varepsilon^{\max}(k) \end{bmatrix} \tag{3.57}$$
is of length $n_L^1 + \cdots + n_L^{n_u} + 2 n_y$. Let us also define the auxiliary matrices

$$N_1 = \begin{bmatrix} I_{(n_L^1 + \cdots + n_L^{n_u}) \times (n_L^1 + \cdots + n_L^{n_u})} & 0_{(n_L^1 + \cdots + n_L^{n_u}) \times 2 n_y} \end{bmatrix} \tag{3.58}$$
$$N_2 = \begin{bmatrix} 0_{n_y \times (n_L^1 + \cdots + n_L^{n_u})} & I_{n_y \times n_y} & 0_{n_y \times n_y} \end{bmatrix} \tag{3.59}$$
$$N_3 = \begin{bmatrix} 0_{n_y \times (n_L^1 + \cdots + n_L^{n_u} + n_y)} & I_{n_y \times n_y} \end{bmatrix} \tag{3.60}$$

which are of dimensionality $(n_L^1 + \cdots + n_L^{n_u}) \times (n_L^1 + \cdots + n_L^{n_u} + 2 n_y)$, $n_y \times (n_L^1 + \cdots + n_L^{n_u} + 2 n_y)$ and $n_y \times (n_L^1 + \cdots + n_L^{n_u} + 2 n_y)$, respectively. The following relations are true
$$c(k) = N_1 \tilde{x}(k) \tag{3.61}$$
$$\varepsilon^{\min}(k) = N_2 \tilde{x}(k) \tag{3.62}$$
$$\varepsilon^{\max}(k) = N_3 \tilde{x}(k) \tag{3.63}$$
Using Eqs. (3.47)–(3.48), the vectors defined by Eq. (1.40) are

$$\boldsymbol{\varepsilon}^{\min}(k) = I_{N \times 1} \otimes N_2 \tilde{x}(k) \tag{3.64}$$
$$\boldsymbol{\varepsilon}^{\max}(k) = I_{N \times 1} \otimes N_3 \tilde{x}(k) \tag{3.65}$$
Using the parameterisation defined by Eq. (1.56), the relations (3.61)–(3.63) and (3.64)–(3.65), from the general MPC optimisation problem with soft output constraints (1.39), we obtain the following MPC-NO-P optimisation task

$$\min_{\tilde{x}(k)} \Big\{ J(k) = \|y^{\mathrm{sp}}(k) - \hat{y}(k)\|^2_M + \|L N_1 \tilde{x}(k)\|^2_\Lambda + \rho^{\min}\|N_2 \tilde{x}(k)\|^2 + \rho^{\max}\|N_3 \tilde{x}(k)\|^2 \Big\}$$
subject to
$$u^{\min} \le J L N_1 \tilde{x}(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le L N_1 \tilde{x}(k) \le \Delta u^{\max}$$
$$y^{\min} - I_{N \times 1} \otimes N_2 \tilde{x}(k) \le \hat{y}(k) \le y^{\max} + I_{N \times 1} \otimes N_3 \tilde{x}(k)$$
$$N_2 \tilde{x}(k) \ge 0_{n_y \times 1}, \quad N_3 \tilde{x}(k) \ge 0_{n_y \times 1} \tag{3.66}$$

When compared with the general nonlinear optimisation problem (3.36) solved by the fmincon function, in our MPC-NO-P optimisation task (3.66), there are only two types of constraints, namely the linear inequalities defined by

$$A = \begin{bmatrix} -J L N_1 \\ J L N_1 \\ -L N_1 \\ L N_1 \\ -N_2 \\ -N_3 \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -\Delta u^{\min} \\ \Delta u^{\max} \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix} \tag{3.67}$$
and the nonlinear inequalities defined by

$$C(\tilde{x}(k)) = \begin{bmatrix} -\hat{y}(k) - I_{N \times 1} \otimes N_2 \tilde{x}(k) + y^{\min} \\ \hat{y}(k) - I_{N \times 1} \otimes N_3 \tilde{x}(k) - y^{\max} \end{bmatrix} \tag{3.68}$$
3.4 MPC-NPSL and MPC-SSL Algorithms

The MPC-NO algorithm uses the full mathematical model of the controlled process for prediction, without any simplifications. On the one hand, this is good because it allows us to fully take into account the nonlinear process behaviour described by the model. On the other hand, a nonlinear constrained optimisation problem must be solved at each sampling instant. Although the parameterisation using Laguerre functions makes it possible to reduce the number of decision variables, the MPC-NO-P optimisation problem is still nonlinear. The numerical difficulty is of two kinds: qualitative and quantitative. Firstly, a nonlinear optimisation routine must be used; unfortunately, it may find a shallow local solution, not the global one. Secondly, the time necessary to find the solution may be long. This is particularly important for fast embedded systems, in which the sampling period is usually very short, of the order of hundreds, tens or even single microseconds. We must remember that the time necessary to solve the MPC optimisation task must be shorter than the sampling period.

In order to reduce the complexity of MPC, a few MPC algorithms with on-line successive model or trajectory linearisation are discussed next. They all lead to quadratic optimisation problems, which are much simpler than nonlinear ones. The first of the linearisation-based approaches is the MPC algorithm with Nonlinear Prediction and Simplified Linearisation (MPC-NPSL). Its general description is presented in [8]. In short, its idea is to find at each sampling instant a linear approximation of the Wiener model as a multiplication of the linear dynamic block and the time-varying gain of the nonlinear static block. Simplified linearisation refers to the fact that the model is not linearised using the Taylor series expansion.
The linearised model is next used to describe the influence of the calculated increments of the manipulated variable(s) on the predicted trajectory of the controlled variable(s), whereas the full nonlinear Wiener model is used to find the influence of the past (the nonlinear free trajectory). Implementation details of the MPC-NPSL algorithm for the SISO Wiener model are given in [7, 10]: in the first case, the nonlinear static block is a neural network of the MLP type; in the second case, the Least Squares Support Vector Machine (LS-SVM) approximator is used. Utilisation of the MIMO Wiener model I with a neural static block is discussed in [8]. Other model structures have not been considered since, in the publications cited in this paragraph, the MPC-NPSL algorithm is treated as an extension of the classical MPC algorithm based on a linear model (LMPC). In this chapter, the MPC-NPSL algorithm is derived for all Wiener structures discussed in Chap. 2. The MPC-NPSL algorithm takes advantage of the specific cascade structure of the model; hence, with some modifications, it may also be used for other cascade models, i.e. the Hammerstein [8], Hammerstein-Wiener [9] and Wiener-Hammerstein [11] structures.

If the linearised model is also used for finding the influence of the past (the free trajectory), we obtain the MPC scheme with Simplified Successive Linearisation (MPC-SSL). Of course, conceptually, it is better to use the MPC-NPSL scheme than
the MPC-SSL one, since the computational complexity of both approaches is similar (it mainly depends on the complexity of the quadratic optimisation tasks), but the use of a nonlinear model for free trajectory calculation is likely to give better results.

Prediction Using SISO Wiener Model

At first, let us discuss the SISO case in which the Wiener model depicted in Fig. 2.1 is used. The time-varying gain of the nonlinear static part of the model for the current operating point is defined by the derivative

$$K(k) = \frac{\mathrm{d}y(k)}{\mathrm{d}v(k)} \tag{3.69}$$

The current value of the gain is calculated for the specific form of the nonlinear static block defined by Eq. (2.5)

$$K(k) = \frac{\mathrm{d}g(v(k))}{\mathrm{d}v(k)} \tag{3.70}$$
During calculations, the current value of the model signal $v$ is found from Eq. (2.4) (we do not measure this signal from the process; it must be calculated using the model). Taking advantage of the serial structure of the Wiener model, we may easily define the relation between the model output and the output of the first model block, i.e. the auxiliary signal $v$

$$y(k) = K(k) v(k) \tag{3.71}$$

which gives

$$v(k) = \frac{y(k)}{K(k)} \tag{3.72}$$
Using the linear part of the model defined by Eq. (2.1), we obtain

$$A(q^{-1}) \frac{y(k)}{K(k)} = B(q^{-1}) u(k) \tag{3.73}$$

Because $K(k)$ is a scalar, a linear approximation of the whole nonlinear Wiener model for the current operating point is

$$A(q^{-1}) y(k) = K(k) B(q^{-1}) u(k) \tag{3.74}$$
Taking into account the polynomials (2.2)–(2.3), we have

$$y(k) = K(k) \sum_{i=1}^{n_B} b_i u(k-i) - \sum_{i=1}^{n_A} a_i y(k-i) \tag{3.75}$$
As a result of the simplified linearisation, we obtain the time-varying linear model (3.75), in which the current value of the process output is a function of previous values of the input and output. The coefficients associated with the input signal are time-varying, whereas the ones related to the output signal are constant. The linearised model may be expressed in the following general form

$$A(q^{-1}) y(k) = B(q^{-1}, k) u(k) \tag{3.76}$$

where the time-varying polynomial is

$$B(q^{-1}, k) = K(k)(b_1 q^{-1} + \cdots + b_{n_B} q^{-n_B}) \tag{3.77}$$
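Numerically, the simplified linearisation (3.69)–(3.77) costs one derivative evaluation per sampling instant. In the Python sketch below, the static block g = tanh and all coefficients are illustrative assumptions; the analytic gain can be cross-checked against a finite difference:

```python
import numpy as np

g = np.tanh                             # assumed static block g(v)
v_k = 0.4                               # current v(k), computed from Eq. (2.4)
K = 1.0 - np.tanh(v_k) ** 2             # K(k) = dg(v)/dv at v(k), Eq. (3.70)

b = np.array([0.5, 0.25])               # linear block numerator coefficients
a = np.array([-0.9])                    # linear block denominator coefficients
u_past = np.array([1.0, 0.8])           # u(k-1), u(k-2)
y_past = np.array([0.3])                # y(k-1)
y_lin = K * (b @ u_past) - a @ y_past   # linearised model output, Eq. (3.75)
```

Only the input-side coefficients are scaled by the time-varying gain $K(k)$, exactly as stated by the polynomial (3.77); the output-side coefficients $a_i$ stay constant.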
Let us discuss in detail why the presented linearisation approach is named the simplified one. Taking into account Eq. (2.6), for the nonlinear SISO Wiener model, the output signal for the current sampling instant $k$ is

$$y(k) = g(u(k-1), \ldots, u(k-n_B), v(k-1), \ldots, v(k-n_A)) \tag{3.78}$$

It is important to note that the model output signal is a nonlinear function of the input signal $u$ and of the output signal $v$ of the linear block of the model, both for some previous sampling instants. In order to eliminate the dependence on the previous auxiliary signal, we should use the inverse static model (3.1). The Wiener model (3.78) becomes

$$y(k) = g(u(k-1), \ldots, u(k-n_B), \tilde{g}(y(k-1)), \ldots, \tilde{g}(y(k-n_A))) \tag{3.79}$$
Let us define the arguments of the model (3.79) as the vector $x(k) = [u(k-1) \ldots u(k-n_B) \ y(k-1) \ldots y(k-n_A)]^{\mathrm{T}}$. The classical linearised model is

$$y(k) = \sum_{i=1}^{n_B} b_i(k) u(k-i) - \sum_{i=1}^{n_A} a_i(k) y(k-i) \tag{3.80}$$

Its coefficients are calculated in the following way

$$b_i(k) = \left.\frac{\partial g(x(k))}{\partial u(k-i)}\right|_{x(k) = \bar{x}(k)} \tag{3.81}$$

$$a_i(k) = -\left.\frac{\partial g(x(k))}{\partial y(k-i)}\right|_{x(k) = \bar{x}(k)} = -\left.\frac{\partial g(x(k))}{\partial \tilde{g}(y(k-i))}\right|_{x(k) = \bar{x}(k)} \left.\frac{\partial \tilde{g}(y(k-i))}{\partial y(k-i)}\right|_{x(k) = \bar{x}(k)} \tag{3.82}$$

where the vector $\bar{x}(k) = [\bar{u}(k-1) \ldots \bar{u}(k-n_B) \ \bar{y}(k-1) \ldots \bar{y}(k-n_A)]^{\mathrm{T}}$ defines the linearisation point. It is important to note that the coefficients $a_i(k)$ depend not
only on the parameters of the Wiener model but also on the parameters of the inverse of its static part. Of course, the inverse model must exist. Moreover, the calculation of the derivatives is quite complicated. Therefore, the simplified model linearisation, which takes advantage of the serial structure of the Wiener model, is recommended.

Using the linearised model (3.75) recurrently, from the general prediction equation (3.10), one can calculate the predictions over the whole prediction horizon ($p = 1, \ldots, N$)

$$\hat{y}(k+1|k) = K(k)(b_1 u(k|k) + b_2 u(k-1) + b_3 u(k-2) + \cdots + b_{n_B} u(k-n_B+1)) - a_1 y(k) - a_2 y(k-1) - a_3 y(k-2) - \cdots - a_{n_A} y(k-n_A+1) + d(k) \tag{3.83}$$

$$\hat{y}(k+2|k) = K(k)(b_1 u(k+1|k) + b_2 u(k|k) + b_3 u(k-1) + \cdots + b_{n_B} u(k-n_B+2)) - a_1 \hat{y}(k+1|k) - a_2 y(k) - a_3 y(k-1) - \cdots - a_{n_A} y(k-n_A+2) + d(k) \tag{3.84}$$

$$\hat{y}(k+3|k) = K(k)(b_1 u(k+2|k) + b_2 u(k+1|k) + b_3 u(k|k) + \cdots + b_{n_B} u(k-n_B+3)) - a_1 \hat{y}(k+2|k) - a_2 \hat{y}(k+1|k) - a_3 y(k) - \cdots - a_{n_A} y(k-n_A+3) + d(k) \tag{3.85}$$
$$\vdots$$

Because the model is linear (although some of its parameters are time-varying), it is possible to express the predictions as the sum of two parts (this is true when different linear models are used for prediction in MPC [20])

$$\hat{y}(k) = y(k) + y^0(k) \tag{3.86}$$

where the forced trajectory (response), $y(k)$, depends only on the future, i.e. on the currently calculated vector of increments, $\Delta u(k)$, whereas the free one, $y^0(k)$, depends only on the past. The predicted trajectory is defined by Eq. (1.22); the forced and free trajectories have the same structure and length as the predicted one, i.e.

$$y(k) = \begin{bmatrix} y(k+1|k) \\ \vdots \\ y(k+N|k) \end{bmatrix} \tag{3.87}$$

and

$$y^0(k) = \begin{bmatrix} y^0(k+1|k) \\ \vdots \\ y^0(k+N|k) \end{bmatrix} \tag{3.88}$$
From Eqs. (3.83)–(3.85), we obtain the forced trajectory

$$y(k+1|k) = s_1(k) \Delta u(k|k) + \cdots \tag{3.89}$$
$$y(k+2|k) = s_2(k) \Delta u(k|k) + s_1(k) \Delta u(k+1|k) + \cdots \tag{3.90}$$
$$y(k+3|k) = s_3(k) \Delta u(k|k) + s_2(k) \Delta u(k+1|k) + s_1(k) \Delta u(k+2|k) + \cdots \tag{3.91}$$
$$\vdots$$

where the step-response coefficients of the model are calculated recurrently, using the current parameters of the linearised model (3.75), over the whole prediction horizon ($p = 1, \ldots, N$) from the formula

$$s_p(k) = \sum_{i=1}^{\min(p, n_B)} K(k) b_i - \sum_{i=1}^{\min(p-1, n_A)} a_i s_{p-i}(k) \tag{3.92}$$
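The recursion (3.92) can be coded directly; in the sketch below the coefficients describe a toy first-order lag and are illustrative assumptions, not values from the book:

```python
import numpy as np

def step_response(K, b, a, N):
    """Step-response coefficients s_p(k) of the linearised model, Eq. (3.92)."""
    nB, nA = len(b), len(a)
    s = []
    for p in range(1, N + 1):
        sp = K * sum(b[i - 1] for i in range(1, min(p, nB) + 1))
        sp -= sum(a[i - 1] * s[p - i - 1] for i in range(1, min(p - 1, nA) + 1))
        s.append(sp)
    return np.array(s)

# First-order example: y(k) = u(k-1) + 0.5 y(k-1), static-block gain K = 1.
s = step_response(1.0, [1.0], [-0.5], 3)    # -> 1.0, 1.5, 1.75
```

The coefficients converge towards the steady-state gain of the linearised model as $p$ grows, which is the expected shape of a step response.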
Taking into account Eqs. (3.89)–(3.91), the prediction equation (3.86) becomes

$$\hat{y}(k) = G(k) \Delta u(k) + y^0(k) \tag{3.93}$$

The time-varying step-response matrix is

$$G(k) = K(k) G \tag{3.94}$$
where the constant matrix of dimensionality $N \times N_u$

$$G = \begin{bmatrix} \bar{s}_1 & 0 & \ldots & 0 \\ \bar{s}_2 & \bar{s}_1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \bar{s}_N & \bar{s}_{N-1} & \ldots & \bar{s}_{N-N_u+1} \end{bmatrix} \tag{3.95}$$

consists of the constant step-response coefficients $\bar{s}_p$ of the linear dynamic part of the model. From Eq. (3.92), we may notice that they are calculated for $p = 1, \ldots, N$ from

$$\bar{s}_p = \sum_{i=1}^{\min(p, n_B)} b_i - \sum_{i=1}^{\min(p-1, n_A)} a_i \bar{s}_{p-i} \tag{3.96}$$
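Given the coefficients $\bar{s}_1, \ldots, \bar{s}_N$, the dynamic matrix (3.95) is lower-triangular with constant diagonals; a short Python sketch with toy values:

```python
import numpy as np

s_bar = np.array([1.0, 1.5, 1.75, 1.875])   # toy coefficients s_bar_1..s_bar_4
N, Nu = 4, 2
G = np.zeros((N, Nu))
for p in range(N):                  # row p holds s_bar_{p+1}, s_bar_p, ...
    for j in range(min(p + 1, Nu)):
        G[p, j] = s_bar[p - j]
# In the MPC-NPSL/MPC-SSL algorithms, G(k) = K(k) * G, cf. Eq. (3.94).
```

Only this constant matrix needs to be stored; the time-varying matrix $G(k)$ is obtained at each sampling instant by a single scalar multiplication, as Eq. (3.94) states.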
In the MPC-NPSL algorithm, the nonlinear free trajectory is calculated using the full nonlinear Wiener model, not the linearised one. For this purpose, Eq. (2.5) is used. Hence, using the rudimentary prediction equation (3.10), we have

$$y^0(k+p|k) = g(v^0(k+p|k)) + d(k) \tag{3.97}$$

where $p = 1, \ldots, N$. Because the free trajectory depends only on the past, from Eq. (3.18), replacing $u(k+p|k)$ by $u(k-1)$ for $p \ge 0$ and $v(k+p|k)$ by $v^0(k+p|k)$ for $p \ge 1$, we obtain

$$v^0(k+p|k) = \sum_{i=1}^{I_{\mathrm{uf}}(p)} b_i u(k-1) + \sum_{i=I_{\mathrm{uf}}(p)+1}^{n_B} b_i u(k-i+p) - \sum_{i=1}^{I_{\mathrm{vf}}(p)} a_i v^0(k-i+p|k) - \sum_{i=I_{\mathrm{vf}}(p)+1}^{n_A} a_i v(k-i+p) \tag{3.98}$$
The unmeasured disturbance is estimated from Eq. (3.21), in the same way as in the MPC-NO algorithm.

In the MPC-SSL algorithm, the successively linearised model is used not only to calculate the time-varying step-response matrix $G(k)$ from Eq. (3.94) but also to find the free trajectory. In place of Eq. (3.97) used in the MPC-NPSL algorithm, we have

$$y^0(k+p|k) = K(k) v^0(k+p|k) + d(k) \tag{3.99}$$

The signals $v^0(k+p|k)$ are calculated from Eq. (3.98), in the same way as in the MPC-NPSL approach. The successively linearised model is also used for the estimation of the unmeasured disturbance. Hence, using Eqs. (3.11) and (3.75), in place of Eq. (3.21), we have

$$d(k) = y(k) - \left(K(k) \sum_{i=1}^{n_B} b_i u(k-i) - \sum_{i=1}^{n_A} a_i y(k-i)\right) \tag{3.100}$$
Alternatively, for the estimation of the disturbance, we may also use Eqs. (2.4), (3.11) and (3.71), which give

$$d(k) = y(k) - K(k)\left(\sum_{i=1}^{n_B} b_i u(k-i) - \sum_{i=1}^{n_A} a_i v(k-i)\right) \tag{3.101}$$
where the model signals $v(k-i)$ are calculated from Eq. (2.4). We can see that the complexity of the computation procedure for the MPC-SSL and MPC-NPSL schemes is practically the same. Hence, in general, we recommend the latter scheme, since the use of the nonlinear model for the calculation of the free trajectory is conceptually better than the use of the linearised one.
Taking into account Eq. (3.74) or Eq. (3.75), we have to point out that in the SISO case, the MPC-SSL algorithm is very similar to the classical MPC algorithm based on a linear model (LMPC), because the time-varying gain of the nonlinear static block simply multiplies the gain of the linear dynamic block.

Prediction Using MIMO Wiener Model I

Next, we will discuss prediction in the MPC-NPSL and MPC-SSL algorithms when the MIMO Wiener model I shown in Fig. 2.2 is used. All $n_y$ nonlinear static model blocks have only one input and one output. It means that the variable $v_1$ affects only the output $y_1$, $v_2$ affects only the output $y_2$, etc. Hence, the time-varying gains of the consecutive nonlinear static parts of the model for the current operating point are

$$K_1(k) = \frac{\mathrm{d}y_1(k)}{\mathrm{d}v_1(k)} \tag{3.102}$$
$$\vdots$$
$$K_{n_y}(k) = \frac{\mathrm{d}y_{n_y}(k)}{\mathrm{d}v_{n_y}(k)} \tag{3.103}$$

The current values of the gains are calculated for the specific form of the nonlinear static blocks defined by Eq. (2.14)

$$K_1(k) = \frac{\mathrm{d}g_1(v_1(k))}{\mathrm{d}v_1(k)} \tag{3.104}$$
$$\vdots$$
$$K_{n_y}(k) = \frac{\mathrm{d}g_{n_y}(v_{n_y}(k))}{\mathrm{d}v_{n_y}(k)} \tag{3.105}$$
During calculations, the current values of the model signals $v_1, \ldots, v_{n_v}$ are found from Eq. (2.11). Using the gains, we may easily formulate the equations that describe the outputs of the linearised model

$$y_1(k) = K_1(k) v_1(k) \tag{3.106}$$
$$\vdots$$
$$y_{n_y}(k) = K_{n_y}(k) v_{n_y}(k) \tag{3.107}$$
Let us define the diagonal gain matrix of dimensionality $n_y \times n_y$

$$K(k) = \begin{bmatrix} K_1(k) & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & K_{n_y}(k) \end{bmatrix} \tag{3.108}$$
Using the vector-matrix notation, the relations (3.106)–(3.107) lead to

$$y(k) = K(k) v(k) \tag{3.109}$$

From Eq. (3.109), it follows that

$$v(k) = K^{-1}(k) y(k) \tag{3.110}$$

Using the description of the linear part of the model given by Eq. (2.1), we have

$$A(q^{-1}) K^{-1}(k) y(k) = B(q^{-1}) u(k) \tag{3.111}$$

Because the gain matrix $K(k)$ is diagonal, it is true that

$$A(q^{-1}) y(k) = K(k) B(q^{-1}) u(k) \tag{3.112}$$
It is straightforward that Eq. (3.112) is an extension of Eq. (3.74) obtained in the SISO case. Taking into account the polynomials (2.7)–(2.8), we have
$$y_m(k) = K_m(k)\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m y_m(k-i) \qquad (3.113)$$
for all $m = 1, \ldots, n_y$. Following the prediction calculation methodology used in the SISO case (Eqs. (3.83)–(3.85) and (3.89)–(3.91)), we obtain the same general prediction equation (3.93). Of course, now the vectors $\hat{y}(k)$ and $y^0(k)$, defined by Eqs. (1.22) and (3.88), respectively, are of length $n_y N$. The time-varying step-response matrix is of dimensionality $n_y N \times n_u N_u$ and has the structure
$$G(k) = \begin{bmatrix} S_1(k) & 0_{n_y \times n_u} & \cdots & 0_{n_y \times n_u} \\ S_2(k) & S_1(k) & \cdots & 0_{n_y \times n_u} \\ \vdots & \vdots & \ddots & \vdots \\ S_N(k) & S_{N-1}(k) & \cdots & S_{N-N_u+1}(k) \end{bmatrix} \qquad (3.114)$$
where the $n_y \times n_u$ step-response sub-matrices are calculated as
$$S_p(k) = K(k)S_p = \begin{bmatrix} K_1(k)\bar{s}_p^{1,1} & \cdots & K_1(k)\bar{s}_p^{1,n_u} \\ \vdots & \ddots & \vdots \\ K_{n_y}(k)\bar{s}_p^{n_y,1} & \cdots & K_{n_y}(k)\bar{s}_p^{n_y,n_u} \end{bmatrix} \qquad (3.115)$$
The sub-matrices which contain constant step-response coefficients of the linear dynamic block are of dimensionality $n_y \times n_u$ and have the structure
$$S_p = \begin{bmatrix} \bar{s}_p^{1,1} & \cdots & \bar{s}_p^{1,n_u} \\ \vdots & \ddots & \vdots \\ \bar{s}_p^{n_y,1} & \cdots & \bar{s}_p^{n_y,n_u} \end{bmatrix} \qquad (3.116)$$
Constant scalar step-response coefficients of the linear part of the model are calculated recurrently over the whole prediction horizon ($p = 1, \ldots, N$) and for all inputs and outputs ($m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$) from the formula
$$\bar{s}_p^{m,n} = \sum_{i=1}^{\min(p,\,n_B)} b_i^{m,n} - \sum_{i=1}^{\min(p-1,\,n_A)} a_i^m \bar{s}_{p-i}^{m,n} \qquad (3.117)$$
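The recursion of Eq. (3.117) is easy to implement directly. The following sketch (not from the book) computes the constant step-response coefficients of one input-output channel; the polynomial coefficients used are made up for demonstration.

```python
# Illustrative sketch (not from the book): recurrent computation of the
# constant step-response coefficients of the linear dynamic block,
# following Eq. (3.117), for one input-output channel.

def step_response(a, b, N):
    """a: [a_1, ..., a_nA], b: [b_1, ..., b_nB] of one channel;
    returns the coefficients [s_1, ..., s_N]."""
    nA, nB = len(a), len(b)
    s = []
    for p in range(1, N + 1):
        val = sum(b[i - 1] for i in range(1, min(p, nB) + 1))
        val -= sum(a[i - 1] * s[p - i - 1] for i in range(1, min(p - 1, nA) + 1))
        s.append(val)
    return s

# made-up channel: A(q^-1) = 1 - 0.5 q^-1, B(q^-1) = q^-1
s = step_response([-0.5], [1.0], 5)
# the sequence converges to the steady-state gain b_1 / (1 + a_1) = 2
assert abs(s[0] - 1.0) < 1e-12 and abs(s[1] - 1.5) < 1e-12
```

For the MIMO models, the same function is simply called for every $(m, n)$ channel.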
In the MPC-NPSL algorithm, the nonlinear free trajectory is calculated using the full nonlinear Wiener model. For this purpose, Eq. (2.14) is used. Hence, using the rudimentary prediction equation (3.10), we have
$$y_m^0(k+p|k) = g_m(v_m^0(k+p|k)) + d_m(k) \qquad (3.118)$$
for $m = 1, \ldots, n_y$, $p = 1, \ldots, N$. Because the free trajectory depends only on the past, from Eq. (3.23), replacing $u_n(k+p|k)$ by $u_n(k-1)$ for $p \ge 0$ and $n = 1, \ldots, n_u$, as well as replacing $v_m(k+p|k)$ by $v_m^0(k+p|k)$ for $p \ge 1$ and $m = 1, \ldots, n_y$, we obtain
$$v_m^0(k+p|k) = \sum_{n=1}^{n_u}\left(\sum_{i=1}^{I_{\mathrm{uf}}(p)} b_i^{m,n} u_n(k-1) + \sum_{i=I_{\mathrm{uf}}(p)+1}^{n_B} b_i^{m,n} u_n(k-i+p)\right) - \sum_{i=1}^{I_{\mathrm{vf}}(p)} a_i^m v_m^0(k-i+p|k) - \sum_{i=I_{\mathrm{vf}}(p)+1}^{n_A} a_i^m v_m(k-i+p) \qquad (3.119)$$
The unmeasured disturbances are estimated from Eq. (3.24), in the same way as it is done in the MPC-NO algorithm. In the MPC-SSL algorithm, the successively linearised model is used to find the free trajectory. In place of Eq. (3.118) used in the MPC-NPSL algorithm, we have
$$y_m^0(k+p|k) = K_m(k)v_m^0(k+p|k) + d_m(k) \qquad (3.120)$$
The signals $v_m^0(k+p|k)$ are calculated from Eq. (3.119), in the same way as in the MPC-NPSL approach. The unmeasured disturbances are estimated not from Eq. (3.24) but from Eqs. (3.11) and (3.113), which give
$$d_m(k) = y_m(k) - \left(K_m(k)\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m y_m(k-i)\right) \qquad (3.121)$$
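As an illustrative sketch (not from the book), the disturbance estimate of Eq. (3.121) reduces to a few dot products per output channel; all signal values and coefficients below are made up for demonstration ($n_u = 1$, $n_A = n_B = 2$).

```python
# Illustrative sketch (not from the book): disturbance estimate of
# Eq. (3.121) for one output channel of the linearised MIMO Wiener model I.
# All numeric values are made up.
import numpy as np

K_m = 1.5                                # time-varying gain K_m(k)
b = np.array([0.3, 0.2])                 # b_1, b_2 of the channel
a = np.array([-0.6, 0.1])                # a_1, a_2 of the channel
u_past = np.array([1.0, 0.8])            # u(k-1), u(k-2)
y_past = np.array([0.5, 0.4])            # y_m(k-1), y_m(k-2)
y_meas = 0.9                             # measured output y_m(k)

# linearised model output, Eq. (3.113)
y_model = K_m * (b @ u_past) - a @ y_past
# unmeasured disturbance: measured output minus model output, Eq. (3.121)
d_m = y_meas - y_model
```

Here `y_model` evaluates to 0.95, so the estimate is $d_m(k) = -0.05$; this offset is then added to every predicted output over the horizon.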
Alternatively, for estimation of the disturbances, we may also use Eqs. (2.11), (3.13) and (3.106)–(3.107), which give
$$d_m(k) = y_m(k) - K_m(k)\left(\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m v_m(k-i)\right) \qquad (3.122)$$
where the model signals $v_m(k-i)$ are calculated from Eq. (2.11). Finally, similarly to the SISO case, taking into account Eq. (3.112) or Eq. (3.113), we may conclude that the presented MPC-SSL algorithm is very similar to the classical LMPC algorithm. This is because in the MPC-SSL control scheme, the classical linear model, although partly time-varying, is used for prediction. Let us discuss whether it is possible to perform linearisation using the Taylor series expansion method for the MIMO Wiener model I. In such a case, we need as many as $n_v = n_y$ inverse static models defined by Eqs. (3.4)–(3.5). From Eq. (2.17) we obtain
$$y_m(k) = g_m\left(\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{m,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^m \tilde{g}_m(y_m(k-i))\right) \qquad (3.123)$$
The classical linearised model is described by the equation
$$y_m(k) = \sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{m,n}(k) u_n(k-i) - \sum_{i=1}^{n_A} a_i^m(k) y_m(k-i) \qquad (3.124)$$
Next, we may compute the coefficients of the linearised model (3.124), in a similar way as it is done in the SISO case (Eqs. (3.81)–(3.82)). It is clear that in the classical linearisation approach based on the Taylor expansion, the coefficients of the linearised model depend not only on the parameters of the linear and nonlinear parts of the model but also on the inverse static models. This is much more complicated than the discussed simplified method, which is possible due to the specialised structure of the Wiener model.

Prediction Using MIMO Wiener Model II

If for prediction the MIMO Wiener model II shown in Fig. 2.3 is used, we must take into account as many as $n_y n_v$ time-varying gains of the nonlinear static blocks of the model
$$K_{1,1}(k) = \frac{\mathrm{d}y_1(k)}{\mathrm{d}v_1(k)} \quad \ldots \quad K_{1,n_v}(k) = \frac{\mathrm{d}y_1(k)}{\mathrm{d}v_{n_v}(k)} \qquad (3.125)$$
$$\vdots$$
$$K_{n_y,1}(k) = \frac{\mathrm{d}y_{n_y}(k)}{\mathrm{d}v_1(k)} \quad \ldots \quad K_{n_y,n_v}(k) = \frac{\mathrm{d}y_{n_y}(k)}{\mathrm{d}v_{n_v}(k)} \qquad (3.126)$$
The current values of the gains are calculated for the specific form of the nonlinear static blocks defined by Eq. (2.25)
$$K_{1,1}(k) = \frac{\mathrm{d}g_1(v_1(k), \ldots, v_{n_v}(k))}{\mathrm{d}v_1(k)} \quad \ldots \quad K_{1,n_v}(k) = \frac{\mathrm{d}g_1(v_1(k), \ldots, v_{n_v}(k))}{\mathrm{d}v_{n_v}(k)} \qquad (3.127)$$
$$\vdots$$
$$K_{n_y,1}(k) = \frac{\mathrm{d}g_{n_y}(v_1(k), \ldots, v_{n_v}(k))}{\mathrm{d}v_1(k)} \quad \ldots \quad K_{n_y,n_v}(k) = \frac{\mathrm{d}g_{n_y}(v_1(k), \ldots, v_{n_v}(k))}{\mathrm{d}v_{n_v}(k)} \qquad (3.128)$$
During calculations, the current values of the model signals $v_1, \ldots, v_{n_v}$ are found from Eq. (2.22). Taking into account the serial structure of the model, we find the equations for the outputs of the linearised model
$$y_1(k) = K_{1,1}(k)v_1(k) + \cdots + K_{1,n_v}(k)v_{n_v}(k) \qquad (3.129)$$
$$\vdots$$
$$y_{n_y}(k) = K_{n_y,1}(k)v_1(k) + \cdots + K_{n_y,n_v}(k)v_{n_v}(k) \qquad (3.130)$$
The gain matrix of dimensionality $n_y \times n_v$ is
$$K(k) = \begin{bmatrix} K_{1,1}(k) & \cdots & K_{1,n_v}(k) \\ \vdots & \ddots & \vdots \\ K_{n_y,1}(k) & \cdots & K_{n_y,n_v}(k) \end{bmatrix} \qquad (3.131)$$
In comparison with the MIMO Wiener model I, there are two differences. Firstly, the gain matrix $K(k)$ is not diagonal. Secondly, because in general $n_v \neq n_y$, it may not be square. Equations (3.129)–(3.130) may be expressed in the vector-matrix relation (3.109) obtained for the MIMO Wiener model I. Because the matrix $K(k)$ may not be square, we obtain
$$v(k) = K^{+}(k)y(k) \qquad (3.132)$$
where $K^{+}(k)$ denotes the pseudoinverse matrix. Hence, from the description of the linear block (Eq. (2.1)), the linearised model is
$$A(q^{-1})K^{+}(k)y(k) = B(q^{-1})u(k) \qquad (3.133)$$
Taking into account Eqs. (2.18)–(2.19), which define the structure of the polynomials $A(q^{-1})$ and $B(q^{-1})$, we obtain
$$\begin{bmatrix} A_1(q^{-1}) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & A_{n_v}(q^{-1}) \end{bmatrix} \begin{bmatrix} K^{+}_{1,1}(k) & \cdots & K^{+}_{1,n_y}(k) \\ \vdots & \ddots & \vdots \\ K^{+}_{n_v,1}(k) & \cdots & K^{+}_{n_v,n_y}(k) \end{bmatrix} \begin{bmatrix} y_1(k) \\ \vdots \\ y_{n_y}(k) \end{bmatrix} = \begin{bmatrix} B_{1,1}(q^{-1}) & \cdots & B_{1,n_u}(q^{-1}) \\ \vdots & \ddots & \vdots \\ B_{n_v,1}(q^{-1}) & \cdots & B_{n_v,n_u}(q^{-1}) \end{bmatrix} \begin{bmatrix} u_1(k) \\ \vdots \\ u_{n_u}(k) \end{bmatrix} \qquad (3.134)$$
where the polynomials are: $A_m(q^{-1}) = 1 + a_1^m q^{-1} + \cdots + a_{n_A}^m q^{-n_A}$ for $m = 1, \ldots, n_v$, $B_{m,n}(q^{-1}) = b_1^{m,n} q^{-1} + \cdots + b_{n_B}^{m,n} q^{-n_B}$ for $m = 1, \ldots, n_v$, $n = 1, \ldots, n_u$. The entries of the pseudoinverse matrix $K^{+}(k)$ are denoted by $K^{+}_{m,n}(k)$ for $m = 1, \ldots, n_v$, $n = 1, \ldots, n_y$. From Eq. (3.134) we obtain
$$\begin{bmatrix} K^{+}_{1,1}(k)A_1(q^{-1}) & \cdots & K^{+}_{1,n_y}(k)A_1(q^{-1}) \\ \vdots & \ddots & \vdots \\ K^{+}_{n_v,1}(k)A_{n_v}(q^{-1}) & \cdots & K^{+}_{n_v,n_y}(k)A_{n_v}(q^{-1}) \end{bmatrix} \begin{bmatrix} y_1(k) \\ \vdots \\ y_{n_y}(k) \end{bmatrix} = \begin{bmatrix} B_{1,1}(q^{-1}) & \cdots & B_{1,n_u}(q^{-1}) \\ \vdots & \ddots & \vdots \\ B_{n_v,1}(q^{-1}) & \cdots & B_{n_v,n_u}(q^{-1}) \end{bmatrix} \begin{bmatrix} u_1(k) \\ \vdots \\ u_{n_u}(k) \end{bmatrix} \qquad (3.135)$$
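The pseudoinverse of Eq. (3.132) is a standard numerical operation. The following sketch (not from the book) illustrates it with NumPy; the gain matrix values are made up, with $n_y = 2$ outputs and $n_v = 3$ linear-block signals.

```python
# Illustrative sketch (not from the book): recovering the linear-block
# signals via the Moore-Penrose pseudoinverse, v(k) = K+(k) y(k),
# as in Eq. (3.132), when K(k) is not square. Values are made up.
import numpy as np

K = np.array([[2.0, 0.5, 1.0],
              [0.0, 1.0, 3.0]])      # n_y x n_v gain matrix K(k)
K_pinv = np.linalg.pinv(K)           # n_v x n_y pseudoinverse K+(k)

v = np.array([0.3, -0.2, 0.1])       # some linear-block signals
y = K @ v                            # linearised output, Eq. (3.109)
v_hat = K_pinv @ y                   # minimum-norm reconstruction

# K+ satisfies the Moore-Penrose condition K K+ K = K,
# and for a wide K the reconstruction reproduces the outputs exactly
assert np.allclose(K @ K_pinv @ K, K)
assert np.allclose(K @ v_hat, y)
```

Note that for a wide $K(k)$ the reconstruction `v_hat` is the minimum-norm solution, not necessarily the original `v`; only the output relation is reproduced exactly.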
Let us stress that the classical linear MIMO model (2.1) with matrices (2.7)–(2.8) or (2.18)–(2.19) is comprised of a number of MISO models. The obtained linear approximation of the MIMO Wiener model I defined by Eq. (3.112) has the same structure. Unfortunately, Eq. (3.135) represents a complicated model which is not comprised of a number of MISO models because the left-side matrix of polynomials is not diagonal. Hence, the MPC-NPSL and MPC-SSL algorithms for the MIMO Wiener model II cannot be developed as an extension of the classical LMPC control scheme [8], whereas it is possible for the SISO Wiener model and the MIMO Wiener structure I. In order to develop the MPC-NPSL and MPC-SSL algorithms for the MIMO Wiener model II, we directly use for prediction Eqs. (3.129)–(3.130), which give
$$\hat{y}_1(k+p|k) = \sum_{i=1}^{n_v} K_{1,i}(k)v_i(k+p|k) + d_1(k) \qquad (3.136)$$
$$\vdots$$
$$\hat{y}_{n_y}(k+p|k) = \sum_{i=1}^{n_v} K_{n_y,i}(k)v_i(k+p|k) + d_{n_y}(k) \qquad (3.137)$$
Because from Eq. (3.23) the predicted signals $v_1(k+p|k), \ldots, v_{n_v}(k+p|k)$ are linear functions of the future manipulated variables, we obtain the basic prediction
equation (3.93), which is true when a time-varying linear model is used. We have to determine on-line the time-varying step-response matrix of the linearised model and the free trajectory. Since the whole model has $n_u$ inputs and $n_y$ outputs, the resulting step-response matrix is of dimensionality $n_y N \times n_u N_u$ and has the same structure as in the case of the MIMO Wiener model I (Eq. (3.114)). The time-varying sub-matrices are also similar to those used for the MIMO Wiener model I (Eq. (3.116)), i.e.
$$S_p(k) = \begin{bmatrix} s_p^{1,1}(k) & \cdots & s_p^{1,n_u}(k) \\ \vdots & \ddots & \vdots \\ s_p^{n_y,1}(k) & \cdots & s_p^{n_y,n_u}(k) \end{bmatrix} \qquad (3.138)$$
It is important to remember that in the Wiener model II, the number of outputs of the linear dynamic part is $n_v$; in general, it may be different from $n_y$. Using the model structure shown in Fig. 2.3 and considering all cross-couplings, the entries of the matrix (3.138) are calculated from
$$s_p^{m,n}(k) = \sum_{i=1}^{n_v} K_{m,i}(k)\bar{s}_p^{i,n} \qquad (3.139)$$
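Equation (3.139) is simply the matrix product of the gain matrix and the constant step-response sub-matrix. The sketch below (not from the book) demonstrates this with made-up values ($n_y = 2$, $n_v = 3$, $n_u = 2$).

```python
# Illustrative sketch (not from the book): the time-varying sub-matrix of
# Eq. (3.138) follows from Eq. (3.139), i.e. the matrix product of the
# gain matrix K(k) and the constant step-response sub-matrix.
# All numeric values are made up.
import numpy as np

K = np.array([[1.0, 0.5, 0.0],        # n_y x n_v time-varying gains
              [0.2, 0.0, 2.0]])
S_bar_p = np.array([[0.4, 0.1],       # n_v x n_u constant coefficients
                    [0.0, 0.3],
                    [0.7, 0.2]])

S_p = K @ S_bar_p                      # n_y x n_u sub-matrix of Eq. (3.138)

# entry-wise check of Eq. (3.139) for one (m, n) pair
m, n = 1, 0
assert np.isclose(S_p[m, n], sum(K[m, i] * S_bar_p[i, n] for i in range(3)))
```

The full time-varying matrix $G(k)$ is then assembled from the sub-matrices $S_p(k)$ exactly as in Eq. (3.114).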
Constant scalar step-response coefficients $\bar{s}_p^{m,n}$ of the linear dynamic part of the model are calculated for $n = 1, \ldots, n_u$ and $p = 1, \ldots, N$ from Eq. (3.117), used for the MIMO Wiener model I, but now $m = 1, \ldots, n_v$. The nonlinear free trajectory is calculated using the full nonlinear Wiener model. From Eqs. (3.25) and (3.12), we have
$$y_m^0(k+p|k) = g_m(v_1^0(k+p|k), \ldots, v_{n_v}^0(k+p|k)) + d_m(k) \qquad (3.140)$$
where the values of the predicted signals $v_m^0(k+p|k)$ are calculated from Eq. (3.119) for $p = 1, \ldots, N$ but now $m = 1, \ldots, n_v$. The unmeasured disturbances are estimated from Eq. (3.26), in the same way as it is done in the MPC-NO algorithm. In the MPC-SSL algorithm, the successively linearised model is used to find the free trajectory. In place of Eq. (3.140) used in the MPC-NPSL algorithm, from Eqs. (3.129)–(3.130), we have
$$y_m^0(k+p|k) = \sum_{i=1}^{n_v} K_{m,i}(k)v_i^0(k+p|k) + d_m(k) \qquad (3.141)$$
The signals $v_m^0(k+p|k)$ are calculated from Eq. (3.119) for all $m = 1, \ldots, n_v$, $p = 1, \ldots, N$. The unmeasured disturbances are estimated not from Eq. (3.26) but using Eqs. (2.22), (3.11) and (3.129)–(3.130), we have
$$d_m(k) = y_m(k) - \sum_{j=1}^{n_v} K_{m,j}(k)\left(\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{j,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^j v_j(k-i)\right) \qquad (3.142)$$
where the model signals $v_j(k-i)$ are determined from Eq. (2.22). Let us consider whether it is possible to perform linearisation using the Taylor series expansion method for the MIMO Wiener model II. In this case, the inverse models must take into account all interactions. The inverse models are defined by the general equations (3.6)–(3.7). From Eq. (2.28), we obtain
$$y_m(k) = g_m\left(\sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{1,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^1 \tilde{g}_1(y_1(k-i), \ldots, y_{n_y}(k-i)),\ \ldots,\ \sum_{n=1}^{n_u}\sum_{i=1}^{n_B} b_i^{n_v,n} u_n(k-i) - \sum_{i=1}^{n_A} a_i^{n_v} \tilde{g}_{n_v}(y_1(k-i), \ldots, y_{n_y}(k-i))\right) \qquad (3.143)$$
The classical linearised model is characterised by Eq. (3.124). Provided that it is possible to find the inverse static models (3.6)–(3.7), we may use the classical Taylor approach to linearisation. In practice, however, it may be very difficult to find such inverse models. Hence, for the discussed MIMO model II, the simplified model linearisation is recommended. The classical linearisation method may be used for the SISO Wiener model or the MIMO structure I. The MPC-NPSL and MPC-SSL algorithms are recommended in all more advanced model cases as they do not need any inverse models.

Prediction Using MIMO Wiener Model III

If for prediction the MIMO Wiener model III shown in Fig. 2.4 is used, the time-varying gains of the consecutive nonlinear static parts of the model for the current operating point are found from Eqs. (3.104)–(3.105), in the same way as it is done in the case of the MIMO Wiener model I. The difference is that the model signals $v_1(k), \ldots, v_{n_y}(k)$ are calculated from Eq. (2.50). Because after on-line linearisation the model output is a linear function of its inputs, the predicted trajectory is calculated from the general prediction equation (3.93). The constant step-response matrix of the linear dynamic block is of dimensionality $n_y N \times n_u N_u$ and has the structure
$$G = \begin{bmatrix} S_1 & 0_{n_y \times n_u} & \cdots & 0_{n_y \times n_u} \\ S_2 & S_1 & \cdots & 0_{n_y \times n_u} \\ \vdots & \vdots & \ddots & \vdots \\ S_N & S_{N-1} & \cdots & S_{N-N_u+1} \end{bmatrix} \qquad (3.144)$$
The sub-matrices which contain step-response coefficients of the linear dynamic block are of dimensionality $n_y \times n_u$ and are defined by Eq. (3.116). Because we assume that the functions which describe the consecutive input-output channels of the linear part of the model may have different orders of dynamics, the scalar step-response coefficients are not calculated from Eq. (3.117), used in the case of the MIMO Wiener model I, in which all input-output channels have equal order of dynamics, but they are determined from
$$\bar{s}_p^{m,n} = \sum_{i=1}^{\min(p,\,n_B^{m,n})} b_i^{m,n} - \sum_{i=1}^{\min(p-1,\,n_A^{m,n})} a_i^{m,n} \bar{s}_{p-i}^{m,n} \qquad (3.145)$$
for all $m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$, $p = 1, \ldots, N$. It is important to stress the fact that the step-response coefficients are not calculated from a possibly high-order classical representation defined by Eq. (2.1) with the matrices (2.7)–(2.8), i.e. from the MIMO Wiener model I, but from the consecutive transfer functions (2.30) characterised by the polynomials (2.31)–(2.32). Of course, the consecutive transfer functions have a lower order of dynamics than the dynamic block used in the classical MIMO representation (i.e. the MIMO Wiener model I). The nonlinear free trajectory is calculated using the full nonlinear Wiener model III. From Eqs. (3.12) and (3.27), we have
$$y_m^0(k+p|k) = g_m\left(\sum_{n=1}^{n_u} v_{m,n}^0(k+p|k)\right) + d_m(k) \qquad (3.146)$$
Using Eq. (3.28), we have
$$v_{m,n}^0(k+p|k) = \sum_{i=1}^{I_{\mathrm{uf}}(m,n,p)} b_i^{m,n} u_n(k-1) + \sum_{i=I_{\mathrm{uf}}(m,n,p)+1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i+p) - \sum_{i=1}^{I_{\mathrm{vf}}(m,n,p)} a_i^{m,n} v_{m,n}^0(k-i+p|k) - \sum_{i=I_{\mathrm{vf}}(m,n,p)+1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i+p) \qquad (3.147)$$
The unmeasured disturbances are estimated from Eq. (3.31), in the same way as it is done in the MPC-NO algorithm. In the MPC-SSL algorithm, the successively linearised model is used to find the free trajectory. In place of Eq. (3.146) used in the MPC-NPSL algorithm, from Eqs. (3.27) and (3.129)–(3.130), we have
$$y_m^0(k+p|k) = K_m(k)\sum_{n=1}^{n_u} v_{m,n}^0(k+p|k) + d_m(k) \qquad (3.148)$$
where the signals $v_{m,n}^0(k+p|k)$ are calculated from Eq. (3.147). The unmeasured disturbances are estimated not from Eq. (3.31), used in the MPC-NPSL algorithm, but from
$$d_m(k) = y_m(k) - K_m(k)\sum_{n=1}^{n_u} v_{m,n}(k) \qquad (3.149)$$
where the model signals $v_{m,n}(k)$ are calculated from Eq. (2.47).

Prediction Using MIMO Wiener Model IV

In the case of the MIMO Wiener model IV depicted in Fig. 2.5, the time-varying gains of the nonlinear blocks are calculated from Eqs. (3.127)–(3.128), in the same way as it is done in the case of the MIMO Wiener model II. Consequently, the linearised model is given by Eqs. (3.129)–(3.130). Because after on-line linearisation the model output is a linear function of its inputs, the predicted trajectory is calculated from the general prediction equation (3.93). The step-response matrix of the linear dynamic block is of dimensionality $n_v N \times n_u N_u$. The sub-matrices which contain step-response coefficients of the linear dynamic block are of dimensionality $n_v \times n_u$
$$S_p = \begin{bmatrix} \bar{s}_p^{1,1} & \cdots & \bar{s}_p^{1,n_u} \\ \vdots & \ddots & \vdots \\ \bar{s}_p^{n_v,1} & \cdots & \bar{s}_p^{n_v,n_u} \end{bmatrix} \qquad (3.150)$$
The scalar step-response coefficients are determined from Eq. (3.145) for all $n = 1, \ldots, n_u$, $p = 1, \ldots, N$ but now $m = 1, \ldots, n_v$. In both MIMO Wiener models III and IV, the step-response coefficients are not calculated from a possibly high-order classical representation (2.1) with the matrices (2.7)–(2.8) but from the consecutive transfer functions (2.30) characterised by the polynomials (2.31)–(2.32). Using the general prediction equation (3.12) and Eq. (3.32), the nonlinear free trajectory is calculated from
$$y_m^0(k+p|k) = g_m\left(\sum_{n=1}^{n_u} v_{1,n}^0(k+p|k),\ \ldots,\ \sum_{n=1}^{n_u} v_{n_v,n}^0(k+p|k)\right) + d_m(k) \qquad (3.151)$$
The signals $v_{m,n}^0(k+p|k)$ are calculated from Eq. (3.147), in the same way as it is done in the case of the MIMO model III, for all $n = 1, \ldots, n_u$, $p = 1, \ldots, N$ but now $m = 1, \ldots, n_v$. The unmeasured disturbances are estimated from Eq. (3.33), in the same way as it is done in the MPC-NO algorithm. In the MPC-SSL algorithm, the successively linearised model is used to find the free trajectory. In place of Eq. (3.151) used in the MPC-NPSL algorithm, from Eqs. (3.12), (3.32) and (3.136)–(3.137), we have
$$y_m^0(k+p|k) = \sum_{j=1}^{n_v} K_{m,j}(k)v_j^0(k+p|k) + d_m(k) = \sum_{j=1}^{n_v} K_{m,j}(k)\sum_{n=1}^{n_u} v_{j,n}^0(k+p|k) + d_m(k) \qquad (3.152)$$
where the signals $v_{j,n}^0(k+p|k)$ are calculated from Eq. (3.147) for all $n = 1, \ldots, n_u$, $p = 1, \ldots, N$ but now $j = 1, \ldots, n_v$. The unmeasured disturbances are estimated not from Eq. (3.33), used in the MPC-NPSL algorithm, but from
$$d_m(k) = y_m(k) - \sum_{j=1}^{n_v} K_{m,j}(k)\sum_{n=1}^{n_u} v_{j,n}(k) \qquad (3.153)$$
where the model signals $v_{j,n}(k)$ are calculated from Eq. (2.62).

Prediction Using MIMO Wiener Model V

In the case of the MIMO Wiener model V depicted in Fig. 2.6, the time-varying gains of the nonlinear static blocks are
$$K_{1,1}(k) = \frac{\mathrm{d}y_{1,1}(k)}{\mathrm{d}v_{1,1}(k)} \quad \ldots \quad K_{1,n_u}(k) = \frac{\mathrm{d}y_{1,n_u}(k)}{\mathrm{d}v_{1,n_u}(k)} \qquad (3.154)$$
$$\vdots$$
$$K_{n_y,1}(k) = \frac{\mathrm{d}y_{n_y,1}(k)}{\mathrm{d}v_{n_y,1}(k)} \quad \ldots \quad K_{n_y,n_u}(k) = \frac{\mathrm{d}y_{n_y,n_u}(k)}{\mathrm{d}v_{n_y,n_u}(k)} \qquad (3.155)$$
The current values of the gains are calculated for the specific form of the nonlinear static blocks defined by Eqs. (2.72)–(2.73)
$$K_{1,1}(k) = \frac{\mathrm{d}g_{1,1}(v_{1,1}(k))}{\mathrm{d}v_{1,1}(k)} \quad \ldots \quad K_{1,n_u}(k) = \frac{\mathrm{d}g_{1,n_u}(v_{1,n_u}(k))}{\mathrm{d}v_{1,n_u}(k)} \qquad (3.156)$$
$$\vdots$$
$$K_{n_y,1}(k) = \frac{\mathrm{d}g_{n_y,1}(v_{n_y,1}(k))}{\mathrm{d}v_{n_y,1}(k)} \quad \ldots \quad K_{n_y,n_u}(k) = \frac{\mathrm{d}g_{n_y,n_u}(v_{n_y,n_u}(k))}{\mathrm{d}v_{n_y,n_u}(k)} \qquad (3.157)$$
The linearised model is
$$y_1(k) = K_{1,1}(k)v_{1,1}(k) + \cdots + K_{1,n_u}(k)v_{1,n_u}(k) \qquad (3.158)$$
$$\vdots$$
$$y_{n_y}(k) = K_{n_y,1}(k)v_{n_y,1}(k) + \cdots + K_{n_y,n_u}(k)v_{n_y,n_u}(k) \qquad (3.159)$$
Because after on-line linearisation the model output is a linear function of its inputs, the predicted trajectory is calculated from the general prediction equation (3.93). The step-response matrix of the linear dynamic block is of dimensionality $n_y N \times n_u N_u$ and has the same structure as that used for the MIMO Wiener model III (Eq. (3.144)). The sub-matrices which contain step-response coefficients of the linear dynamic block are of dimensionality $n_y \times n_u$ (Eq. (3.116)). The scalar step-response coefficients are
$$s_p^{m,n}(k) = K_{m,n}(k)\bar{s}_p^{m,n} \qquad (3.160)$$
Constant scalar step-response coefficients $\bar{s}_p^{m,n}$ of the linear dynamic part of the model are calculated from Eq. (3.145) for $m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$, $p = 1, \ldots, N$. Using Eq. (2.77) and the general prediction equation (3.12), the nonlinear free trajectory is calculated from
$$y_m^0(k+p|k) = \sum_{n=1}^{n_u} g_{m,n}(v_{m,n}^0(k+p|k)) + d_m(k) \qquad (3.161)$$
The signals $v_{m,n}^0(k+p|k)$ are calculated from Eq. (3.147), in the same way as it is done in the case of the MIMO model III, for $m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$, $p = 1, \ldots, N$. The unmeasured disturbances are estimated from Eq. (3.35), in the same way as it is done in the MPC-NO algorithm. In the MPC-SSL algorithm, the successively linearised model is used to find the free trajectory. In place of Eq. (3.161) used in the MPC-NPSL algorithm, from Eqs. (3.34) and (3.158)–(3.159), we have
$$y_m^0(k+p|k) = \sum_{n=1}^{n_u} K_{m,n}(k)v_{m,n}^0(k+p|k) + d_m(k) \qquad (3.162)$$
where the signals $v_{m,n}^0(k+p|k)$ are computed from Eq. (3.147) for $m = 1, \ldots, n_y$, $n = 1, \ldots, n_u$, $p = 1, \ldots, N$. The unmeasured disturbances are estimated not from Eq. (3.35), used in the MPC-NPSL algorithm, but from
$$d_m(k) = y_m(k) - \sum_{n=1}^{n_u} K_{m,n}(k)v_{m,n}(k) \qquad (3.163)$$
where the model signals $v_{m,n}(k)$ are calculated from Eq. (2.47).

Optimisation

Having discussed model linearisation performed on-line in the MPC-NPSL and MPC-SSL algorithms, we are now ready to formulate the resulting optimisation problems. For this purpose, we use the general prediction equation (3.93). Because
the vector of predictions of the controlled variables is a linear function of the calculated increments, $\Delta u(k)$, from the general MPC optimisation problem with hard output constraints (Eq. (1.35)), we obtain the following quadratic optimisation MPC-NPSL and MPC-SSL tasks
$$\min_{\Delta u(k)} J(k) = \left\|y^{\mathrm{sp}}(k) - G(k)\Delta u(k) - y^0(k)\right\|_M^2 + \left\|\Delta u(k)\right\|_\Lambda^2$$
subject to
$$u^{\min} \le J\Delta u(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le \Delta u(k) \le \Delta u^{\max}$$
$$y^{\min} \le G(k)\Delta u(k) + y^0(k) \le y^{\max} \qquad (3.164)$$
The only difference between the MPC-NPSL and MPC-SSL algorithms is the free trajectory computation method (in the first case, the full nonlinear model is used; in the second one, its linear approximation). This means that the general form of the optimisation task is the same for these two algorithms. The optimisation problem (3.164) should be reformulated in order to solve it in MATLAB. We will use the quadprog function. Its syntax is

X = quadprog(H,f,A,b,Aeq,beq,LB,UB,X0,OPTIONS)

It solves the general quadratic optimisation task
$$\min_{x(k)} 0.5\,x^{\mathrm{T}}(k)H_{\mathrm{QP}}(k)x(k) + f_{\mathrm{QP}}^{\mathrm{T}}(k)x(k)$$
subject to
$$A(k)x(k) \le B(k)$$
$$A^{\mathrm{eq}}x(k) = B^{\mathrm{eq}}$$
$$LB \le x(k) \le UB \qquad (3.165)$$
The quadprog function makes it possible to impose three types of constraints: linear inequalities ($A(k)x(k) \le B(k)$), linear equalities ($A^{\mathrm{eq}}x(k) = B^{\mathrm{eq}}$) and bounds ($LB \le x(k) \le UB$). When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPSL optimisation task (3.164), the vector of decision variables is $x(k) = \Delta u(k)$ and there are only two types of constraints: linear inequalities defined by
$$A(k) = \begin{bmatrix} -J \\ J \\ -G(k) \\ G(k) \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -y^{\min} + y^0(k) \\ y^{\max} - y^0(k) \end{bmatrix} \qquad (3.166)$$
and bounds defined by Eq. (3.39). Now, it is necessary to transform the minimised cost-function used in the specific optimisation MPC-NPSL or MPC-SSL problem
(3.164) into that of the general quadratic optimisation task (3.165). Differentiating the cost-function $J(k)$ with respect to the decision variables $\Delta u(k)$, we have
$$\frac{\mathrm{d}J(k)}{\mathrm{d}\Delta u(k)} = -2G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - G(k)\Delta u(k) - y^0(k)\right) + 2\Lambda\Delta u(k) = 2\left(G^{\mathrm{T}}(k)MG(k) + \Lambda\right)\Delta u(k) - 2G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.167)$$
The second-order derivative matrix of the minimised cost-function is
$$H_{\mathrm{QP}}(k) = \frac{\mathrm{d}^2 J(k)}{\mathrm{d}(\Delta u(k))^2} = 2\left(G^{\mathrm{T}}(k)MG(k) + \Lambda\right) \qquad (3.168)$$
The vector $f_{\mathrm{QP}}(k)$ is the part of the right-hand side of Eq. (3.167) that does not depend on the decision vector $\Delta u(k)$
$$f_{\mathrm{QP}}(k) = -2G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.169)$$
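As an illustrative sketch (not from the book), the quadratic-programme data of Eqs. (3.168)–(3.169) can be assembled in a few lines; all numeric values ($G$, $M$, $\Lambda$, trajectories) below are made up, and for brevity the unconstrained minimiser $\Delta u = -H_{\mathrm{QP}}^{-1} f_{\mathrm{QP}}$ is used instead of a constrained solver.

```python
# Illustrative sketch (not from the book): assembling the Hessian and the
# linear term of the MPC-NPSL/MPC-SSL quadratic programme,
# Eqs. (3.168)-(3.169). All numeric values are made up.
import numpy as np

ny_N, nu_Nu = 4, 2                         # stacked dimensions n_y*N, n_u*N_u
rng = np.random.default_rng(0)
G = rng.standard_normal((ny_N, nu_Nu))     # time-varying step-response matrix
M = np.eye(ny_N)                           # output weighting matrix
Lam = 0.1 * np.eye(nu_Nu)                  # input-increment weighting matrix
ysp = np.ones(ny_N)                        # set-point trajectory
y0 = np.zeros(ny_N)                        # free trajectory

H_qp = 2.0 * (G.T @ M @ G + Lam)           # Eq. (3.168)
f_qp = -2.0 * G.T @ M @ (ysp - y0)         # Eq. (3.169)

du = np.linalg.solve(H_qp, -f_qp)          # unconstrained QP minimiser

# cross-check against the regularised least-squares normal equations
du_ls = np.linalg.solve(G.T @ M @ G + Lam, G.T @ M @ (ysp - y0))
assert np.allclose(du, du_ls)
```

With constraints present, the same `H_qp` and `f_qp` would be passed to a QP solver, exactly as the book does with MATLAB's quadprog.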
Next, let us consider soft output constraints. Using the prediction equation (3.93), from the general MPC optimisation problem (1.39) with soft output constraints, we obtain the following quadratic optimisation MPC-NPSL problem
$$\min_{\Delta u(k),\,\varepsilon^{\min}(k),\,\varepsilon^{\max}(k)} J(k) = \left\|y^{\mathrm{sp}}(k) - G(k)\Delta u(k) - y^0(k)\right\|_M^2 + \left\|\Delta u(k)\right\|_\Lambda^2 + \rho^{\min}\left\|\varepsilon^{\min}(k)\right\|^2 + \rho^{\max}\left\|\varepsilon^{\max}(k)\right\|^2$$
subject to
$$u^{\min} \le J\Delta u(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le \Delta u(k) \le \Delta u^{\max}$$
$$y^{\min} - \varepsilon^{\min}(k) \le G(k)\Delta u(k) + y^0(k) \le y^{\max} + \varepsilon^{\max}(k)$$
$$\varepsilon^{\min}(k) \ge 0_{n_y \times 1}, \quad \varepsilon^{\max}(k) \ge 0_{n_y \times 1} \qquad (3.170)$$
The above problem is solved in MATLAB by means of the quadprog function. We take into account that the decision vector of the MPC-NPSL problem, $x(k)$, is defined by Eq. (3.40), the auxiliary matrices $N_1$, $N_2$ and $N_3$ are defined by Eqs. (3.41)–(3.43), and we also use Eqs. (3.44)–(3.46) and (3.49)–(3.50). We obtain the optimisation task
$$\min_{x(k)} J(k) = \left\|y^{\mathrm{sp}}(k) - G(k)N_1 x(k) - y^0(k)\right\|_M^2 + \left\|N_1 x(k)\right\|_\Lambda^2 + \rho^{\min}\left\|N_2 x(k)\right\|^2 + \rho^{\max}\left\|N_3 x(k)\right\|^2 \qquad (3.171)$$
subject to
$$u^{\min} \le JN_1 x(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le N_1 x(k) \le \Delta u^{\max}$$
$$y^{\min} - I_{N\times 1}\otimes N_2 x(k) \le G(k)N_1 x(k) + y^0(k) \le y^{\max} + I_{N\times 1}\otimes N_3 x(k)$$
$$N_2 x(k) \ge 0_{n_y \times 1}, \quad N_3 x(k) \ge 0_{n_y \times 1}$$
When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPSL optimisation task (3.171), there are only linear inequality constraints defined by
$$A(k) = \begin{bmatrix} -JN_1 \\ JN_1 \\ -N_1 \\ N_1 \\ -G(k)N_1 - I_{N\times 1}\otimes N_2 \\ G(k)N_1 - I_{N\times 1}\otimes N_3 \\ -N_2 \\ -N_3 \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -\Delta u^{\min} \\ \Delta u^{\max} \\ -y^{\min} + y^0(k) \\ y^{\max} - y^0(k) \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix} \qquad (3.172)$$
Differentiating the cost-function $J(k)$ with respect to the decision variables $x(k)$, we have
$$\frac{\mathrm{d}J(k)}{\mathrm{d}x(k)} = -2N_1^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - G(k)N_1 x(k) - y^0(k)\right) + 2N_1^{\mathrm{T}}\Lambda N_1 x(k) + 2\rho^{\min}N_2^{\mathrm{T}}N_2 x(k) + 2\rho^{\max}N_3^{\mathrm{T}}N_3 x(k)$$
$$= 2\left(N_1^{\mathrm{T}}G^{\mathrm{T}}(k)MG(k)N_1 + N_1^{\mathrm{T}}\Lambda N_1 + \rho^{\min}N_2^{\mathrm{T}}N_2 + \rho^{\max}N_3^{\mathrm{T}}N_3\right)x(k) - 2N_1^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.173)$$
The second-order derivative matrix of the minimised cost-function is
$$H_{\mathrm{QP}}(k) = \frac{\mathrm{d}^2 J(k)}{\mathrm{d}(x(k))^2} = 2\left(N_1^{\mathrm{T}}G^{\mathrm{T}}(k)MG(k)N_1 + N_1^{\mathrm{T}}\Lambda N_1 + \rho^{\min}N_2^{\mathrm{T}}N_2 + \rho^{\max}N_3^{\mathrm{T}}N_3\right) \qquad (3.174)$$
The vector $f_{\mathrm{QP}}(k)$ is
$$f_{\mathrm{QP}}(k) = -2N_1^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.175)$$
3.5 MPC-NPSL-P and MPC-SSL-P Algorithms

We will consider parameterisation using Laguerre functions in order to reduce the number of decision variables of the MPC-NPSL and MPC-SSL algorithms. In the resulting MPC-NPSL algorithm with Parameterisation (MPC-NPSL-P) and MPC-SSL scheme with Parameterisation (MPC-SSL-P), using the parameterisation defined by Eq. (1.56) and the prediction equation (3.93) used in the rudimentary versions of the MPC-NPSL and MPC-SSL algorithms, we obtain
$$\hat{y}(k) = G(k)Lc(k) + y^0(k) \qquad (3.176)$$
Due to on-line linearisation, the predicted vector of controlled variables is a linear function of the decision vector $c(k)$. At first, let us consider MPC with hard constraints imposed on the controlled variables. From the general MPC optimisation problem (1.35), using the prediction equation (3.176), we obtain the following MPC-NPSL-P and MPC-SSL-P optimisation task
$$\min_{c(k)} J(k) = \left\|y^{\mathrm{sp}}(k) - G(k)Lc(k) - y^0(k)\right\|_M^2 + \left\|Lc(k)\right\|_\Lambda^2$$
subject to
$$u^{\min} \le JLc(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le Lc(k) \le \Delta u^{\max}$$
$$y^{\min} \le G(k)Lc(k) + y^0(k) \le y^{\max} \qquad (3.177)$$
When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPSL-P optimisation task (3.177), the decision variable is $x(k) = c(k)$ and there are only linear inequalities defined by
$$A(k) = \begin{bmatrix} -JL \\ JL \\ -L \\ L \\ -G(k)L \\ G(k)L \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -\Delta u^{\min} \\ \Delta u^{\max} \\ -y^{\min} + y^0(k) \\ y^{\max} - y^0(k) \end{bmatrix} \qquad (3.178)$$
Differentiating the cost-function $J(k)$ with respect to the decision variables $c(k)$, we have
$$\frac{\mathrm{d}J(k)}{\mathrm{d}c(k)} = -2L^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - G(k)Lc(k) - y^0(k)\right) + 2L^{\mathrm{T}}\Lambda Lc(k) = 2\left(L^{\mathrm{T}}G^{\mathrm{T}}(k)MG(k)L + L^{\mathrm{T}}\Lambda L\right)c(k) - 2L^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.179)$$
The second-order derivative matrix of the minimised cost-function is
$$H_{\mathrm{QP}}(k) = \frac{\mathrm{d}^2 J(k)}{\mathrm{d}(c(k))^2} = 2\left(L^{\mathrm{T}}G^{\mathrm{T}}(k)MG(k)L + L^{\mathrm{T}}\Lambda L\right) \qquad (3.180)$$
and
$$f_{\mathrm{QP}}(k) = -2L^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.181)$$
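The point of the parameterisation is dimensionality reduction: the Hessian of Eq. (3.180) has the (much smaller) size of $c(k)$. The sketch below (not from the book) demonstrates this; the matrix $L$ here is filled with made-up numbers purely to show the dimensions, not built from actual Laguerre functions.

```python
# Illustrative sketch (not from the book): with the parameterisation
# Δu(k) = L c(k), the quadratic-programme Hessian of Eq. (3.180) has the
# dimension of c(k), smaller than that of Eq. (3.168). Values are made up.
import numpy as np

ny_N, nu_Nu, n_c = 6, 4, 2                 # stacked dims; n_c < n_u*N_u
rng = np.random.default_rng(1)
G = rng.standard_normal((ny_N, nu_Nu))     # step-response matrix G(k)
L = rng.standard_normal((nu_Nu, n_c))      # parameterisation matrix
M = np.eye(ny_N)
Lam = 0.1 * np.eye(nu_Nu)

H_full = 2.0 * (G.T @ M @ G + Lam)                     # Eq. (3.168)
H_par = 2.0 * (L.T @ G.T @ M @ G @ L + L.T @ Lam @ L)  # Eq. (3.180)

assert H_full.shape == (nu_Nu, nu_Nu)
assert H_par.shape == (n_c, n_c)
# the parameterised Hessian is exactly L^T H_full L
assert np.allclose(H_par, L.T @ H_full @ L)
```

The number of decision variables thus drops from $n_u N_u$ to the number of columns of $L$, while the constraint structure is preserved through the products $JL$, $L$ and $G(k)L$.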
Next, let us consider soft output constraints. Using the prediction equation (3.176), from the general MPC optimisation problem (1.39) with soft output constraints, we obtain the following quadratic optimisation MPC-NPSL-P and MPC-SSL-P problem
$$\min_{c(k),\,\varepsilon^{\min}(k),\,\varepsilon^{\max}(k)} J(k) = \left\|y^{\mathrm{sp}}(k) - G(k)Lc(k) - y^0(k)\right\|_M^2 + \left\|Lc(k)\right\|_\Lambda^2 + \rho^{\min}\left\|\varepsilon^{\min}(k)\right\|^2 + \rho^{\max}\left\|\varepsilon^{\max}(k)\right\|^2$$
subject to
$$u^{\min} \le JLc(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le Lc(k) \le \Delta u^{\max}$$
$$y^{\min} - \varepsilon^{\min}(k) \le G(k)Lc(k) + y^0(k) \le y^{\max} + \varepsilon^{\max}(k)$$
$$\varepsilon^{\min}(k) \ge 0_{n_y \times 1}, \quad \varepsilon^{\max}(k) \ge 0_{n_y \times 1} \qquad (3.182)$$
Now, we take into account that the decision vector, $\tilde{x}(k)$, is defined by Eq. (3.57) and the auxiliary matrices $\tilde{N}_1$, $\tilde{N}_2$, $\tilde{N}_3$ are defined by Eqs. (3.58)–(3.60); we also use Eqs. (3.64)–(3.65). We obtain the optimisation task
$$\min_{\tilde{x}(k)} J(k) = \left\|y^{\mathrm{sp}}(k) - G(k)L\tilde{N}_1\tilde{x}(k) - y^0(k)\right\|_M^2 + \left\|L\tilde{N}_1\tilde{x}(k)\right\|_\Lambda^2 + \rho^{\min}\left\|\tilde{N}_2\tilde{x}(k)\right\|^2 + \rho^{\max}\left\|\tilde{N}_3\tilde{x}(k)\right\|^2$$
subject to
$$u^{\min} \le JL\tilde{N}_1\tilde{x}(k) + u(k-1) \le u^{\max}$$
$$\Delta u^{\min} \le L\tilde{N}_1\tilde{x}(k) \le \Delta u^{\max}$$
$$y^{\min} - I_{N\times 1}\otimes \tilde{N}_2\tilde{x}(k) \le G(k)L\tilde{N}_1\tilde{x}(k) + y^0(k) \le y^{\max} + I_{N\times 1}\otimes \tilde{N}_3\tilde{x}(k)$$
$$\tilde{N}_2\tilde{x}(k) \ge 0_{n_y \times 1}, \quad \tilde{N}_3\tilde{x}(k) \ge 0_{n_y \times 1} \qquad (3.183)$$
The above problem is solved in MATLAB by means of the quadprog function. When compared with the general quadratic optimisation problem (3.165), in our MPC-NPSL-P and MPC-SSL-P optimisation task (3.183), there are only linear inequality constraints defined by
$$A(k) = \begin{bmatrix} -JL\tilde{N}_1 \\ JL\tilde{N}_1 \\ -L\tilde{N}_1 \\ L\tilde{N}_1 \\ -G(k)L\tilde{N}_1 - I_{N\times 1}\otimes \tilde{N}_2 \\ G(k)L\tilde{N}_1 - I_{N\times 1}\otimes \tilde{N}_3 \\ -\tilde{N}_2 \\ -\tilde{N}_3 \end{bmatrix}, \quad B(k) = \begin{bmatrix} -u^{\min} + u(k-1) \\ u^{\max} - u(k-1) \\ -\Delta u^{\min} \\ \Delta u^{\max} \\ -y^{\min} + y^0(k) \\ y^{\max} - y^0(k) \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix} \qquad (3.184)$$
Differentiating the cost-function $J(k)$ with respect to the decision variables $\tilde{x}(k)$, we have
$$\frac{\mathrm{d}J(k)}{\mathrm{d}\tilde{x}(k)} = -2\tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - G(k)L\tilde{N}_1\tilde{x}(k) - y^0(k)\right) + 2\tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}\Lambda L\tilde{N}_1\tilde{x}(k) + 2\rho^{\min}\tilde{N}_2^{\mathrm{T}}\tilde{N}_2\tilde{x}(k) + 2\rho^{\max}\tilde{N}_3^{\mathrm{T}}\tilde{N}_3\tilde{x}(k)$$
$$= 2\left(\tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}G^{\mathrm{T}}(k)MG(k)L\tilde{N}_1 + \tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}\Lambda L\tilde{N}_1 + \rho^{\min}\tilde{N}_2^{\mathrm{T}}\tilde{N}_2 + \rho^{\max}\tilde{N}_3^{\mathrm{T}}\tilde{N}_3\right)\tilde{x}(k) - 2\tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.185)$$
The second-order derivative matrix of the minimised cost-function is
$$H_{\mathrm{QP}}(k) = \frac{\mathrm{d}^2 J(k)}{\mathrm{d}(\tilde{x}(k))^2} = 2\left(\tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}G^{\mathrm{T}}(k)MG(k)L\tilde{N}_1 + \tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}\Lambda L\tilde{N}_1 + \rho^{\min}\tilde{N}_2^{\mathrm{T}}\tilde{N}_2 + \rho^{\max}\tilde{N}_3^{\mathrm{T}}\tilde{N}_3\right) \qquad (3.186)$$
and
$$f_{\mathrm{QP}}(k) = -2\tilde{N}_1^{\mathrm{T}}L^{\mathrm{T}}G^{\mathrm{T}}(k)M\left(y^{\mathrm{sp}}(k) - y^0(k)\right) \qquad (3.187)$$
3.6 MPC-NPLT Algorithm

Let us notice that in both the MPC-NPSL and MPC-SSL algorithms discussed so far, a linear approximation of the nonlinear model is calculated on-line for the current operating point, which is defined by some previous signals. Next, the linearised model is used to determine the influence of the currently calculated future increments of the manipulated variable(s) (i.e. the decision variables of MPC) on the predicted values of the controlled variables. The influence of the past may be determined by means of the linearised model or the full nonlinear one. The linearised model is used for long-range prediction calculation over the whole prediction horizon. Because such
a predicted trajectory may be very different from the real nonlinear one (determined from the full nonlinear model), the MPC-NPSL and MPC-SSL algorithms may lead to much worse results than the "ideal" MPC-NO approach. This inefficiency of the MPC algorithms with on-line model linearisation is likely to occur for relatively long prediction horizons, when the set-point changes are significant and fast or when strong disturbances affect the process. Having identified the source of potential disadvantages of the simple MPC algorithms with on-line model linearisation, we will try to adopt a different approach, named the MPC Algorithm with Nonlinear Prediction and Linearisation along the Trajectory (MPC-NPLT). We still remember that it is desirable to obtain a quadratic optimisation MPC problem. The solution is not to perform model linearisation for the current operating point and then use such a model recurrently for prediction, but to carry out linearisation of the predicted output trajectory, $\hat{y}(k)$, along some future trajectory of the manipulated variable(s) defined over the control horizon
$$u^{\mathrm{traj}}(k) = \begin{bmatrix} u^{\mathrm{traj}}(k|k) \\ \vdots \\ u^{\mathrm{traj}}(k+N_u-1|k) \end{bmatrix} \qquad (3.188)$$
From the definition of the control horizon, we have u traj (k + p|k) = u traj (k + Nu − 1|k) for p = Nu , . . . , N . From the nonlinear model of the process, it is possible to calculate the predicted trajectory of the controlled variables (over the prediction horizon) corresponding to the assumed trajectory utraj (k) ⎤ yˆ traj (k + 1|k) ⎥ ⎢ .. ˆytraj (k) = ⎣ ⎦ . yˆ traj (k + N |k) ⎡
(3.189)
The trajectory u^traj(k) may be defined using the values of the manipulated variables applied to the process at the previous sampling instant, i.e.

u^traj(k) = u(k−1) = [u(k−1) ... u(k−1)]^T   (3.190)
It is also possible to use for linearisation the last n_u(N_u−1) elements of the optimal input trajectory calculated at the previous sampling instant

u^traj(k) = u(k|k−1) = [u(k|k−1) ... u(k+N_u−3|k−1) u(k+N_u−2|k−1) u(k+N_u−2|k−1)]^T   (3.191)
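The two initialisation choices (3.190) and (3.191) are easy to state in code. A minimal sketch in Python, assuming a SISO process (the function names are ours, not from the text):

```python
import numpy as np

def npl_t1_trajectory(u_prev, Nu):
    """MPC-NPLT1, Eq. (3.190): repeat the input applied at the previous instant."""
    return np.full(Nu, u_prev)

def npl_t2_trajectory(u_opt_prev):
    """MPC-NPLT2, Eq. (3.191): drop u(k-1|k-1), which has already been applied
    to the process, and repeat the last available element twice."""
    shifted = np.asarray(u_opt_prev, dtype=float)[1:]
    return np.append(shifted, shifted[-1])
```

Both trajectories have length N_u; for p ≥ N_u the value u^traj(k+N_u−1|k) is simply held, as stated after Eq. (3.188).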
Let us note that the first element of the sequence calculated at the previous sampling instant, i.e. u(k−1|k−1) (a vector of length n_u in the MIMO case), is applied to the process at the sampling instant k−1. Hence, the last element of the available sequence, u(k+N_u−2|k−1), has to be repeated twice. When the trajectory (3.190) is used for linearisation, the algorithm will be called MPC-NPLT1; when the trajectory (3.191) is used, it will be called MPC-NPLT2. The general formulation of the MPC-NPLT algorithm is presented in [8]. The following processes have been considered in simulations: a solid oxide fuel cell [12], a proton exchange membrane fuel cell [14], a neutralisation reactor [10], a polymerisation reactor [8] and a heat exchanger [11]. Implementation details of the MPC-NPLT algorithm for the SISO Wiener model are given in [7], where the nonlinear static block is a neural network of the MLP type; other model structures have not been considered. In this Chapter, the MPC-NPLT algorithm is derived for all Wiener structures discussed in Chap. 2.

Prediction Using SISO Wiener Model
Let us at first discuss the SISO case in which the Wiener model depicted in Fig. 2.1 is used. Using the general prediction equation (3.10) and the description of the nonlinear static block, i.e. Eq. (2.5), the nonlinear output trajectory corresponding to the input trajectory u^traj(k) is calculated from

ŷ^traj(k+p|k) = g(v^traj(k+p|k)) + d(k)   (3.192)
where p = 1, ..., N. Since the full nonlinear model is used for prediction, the unmeasured disturbance is estimated from Eq. (3.21), in the same way it is done in the MPC-NO scheme. Using Eq. (3.18), derived for the MPC-NO algorithm, we obtain

v^traj(k+p|k) = Σ_{i=1}^{I_uf(p)} b_i u^traj(k−i+p|k) + Σ_{i=I_uf(p)+1}^{n_B} b_i u(k−i+p) − Σ_{i=1}^{I_vf(p)} a_i v^traj(k−i+p|k) − Σ_{i=I_vf(p)+1}^{n_A} a_i v(k−i+p)   (3.193)
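Equation (3.193) is simply a forward simulation of the linear block along the assumed input trajectory; the index sets I_uf(p) and I_vf(p) only split each sum between future (trajectory) signals and past (known) ones. A sketch of this recursion in Python (names and calling convention are illustrative):

```python
import numpy as np

def v_trajectory(b, a, u_past, v_past, u_traj, N):
    """v^traj(k+p|k) for p = 1..N, Eq. (3.193). Keeping one merged history per
    signal makes the past/future split of the sums implicit.
    Convention: u_past[-1] = u(k-1) (at least nB-1 values),
    v_past[-1] = v(k) (at least nA values), u_traj = [u(k|k), ..., u(k+Nu-1|k)]."""
    nB, nA, Nu = len(b), len(a), len(u_traj)
    u, v = list(u_past), list(v_past)
    for p in range(1, N + 1):
        u.append(u_traj[min(p - 1, Nu - 1)])   # u(k+p-1|k); held beyond the horizon
        v.append(sum(b[i - 1] * u[-i] for i in range(1, nB + 1))
                 - sum(a[i - 1] * v[-i] for i in range(1, nA + 1)))
    return np.array(v[len(v_past):])
```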
Let us recall the Taylor series expansion formula. The linear approximation of a nonlinear function y(x): R → R calculated at the linearisation point x̄ is

y(x) = y(x̄) + [dy(x)/dx]|_{x=x̄} (x − x̄)   (3.194)

When the considered function has as many as n_x arguments, i.e. y: R^{n_x} → R and x = [x_1 ... x_{n_x}]^T, we have

y(x) = y(x̄) + Σ_{i=1}^{n_x} [∂y(x)/∂x_i]|_{x=x̄} (x_i − x̄_i)   (3.195)
where the linearisation point is defined by the vector x̄ = [x̄_1 ... x̄_{n_x}]^T. We have to find a linear approximation of the nonlinear predicted output trajectory, ŷ(k), which is a vector of length N (Eq. (1.22)), with respect to the vector of future values of the manipulated variable, u(k) (Eq. (1.27)), corresponding to the increments Δu(k) (Eq. (1.3)). In other words, we have to find a linear approximation of the function ŷ(u(k)): R^{N_u} → R^N. To make the calculations simpler, we find N approximations of the output predictions (1.22) independently. For this purpose we use Eq. (3.195) N times, separately for each sampling instant k+1, ..., k+N. We obtain

ŷ(k+1|k) = ŷ^traj(k+1|k) + Σ_{p=0}^{N_u−1} [∂ŷ(k+1|k)/∂u(k+p|k)]|_{ŷ(k)=ŷ^traj(k), u(k)=u^traj(k)} (u(k+p|k) − u^traj(k+p|k))   (3.196)
⋮
ŷ(k+N|k) = ŷ^traj(k+N|k) + Σ_{p=0}^{N_u−1} [∂ŷ(k+N|k)/∂u(k+p|k)]|_{ŷ(k)=ŷ^traj(k), u(k)=u^traj(k)} (u(k+p|k) − u^traj(k+p|k))   (3.197)

Let us notice that we may use the vector-matrix notation to rewrite the above linearised trajectory compactly

ŷ(k) = ŷ^traj(k) + H(k)(u(k) − u^traj(k))
(3.198)
where the matrix of derivatives of the predicted trajectory of the controlled variable with respect to the trajectory of the manipulated one is of dimensionality N × Nu
H(k) = [dŷ(k)/du(k)]|_{ŷ(k)=ŷ^traj(k), u(k)=u^traj(k)} = dŷ^traj(k)/du^traj(k)
= [∂ŷ^traj(k+1|k)/∂u^traj(k|k) ... ∂ŷ^traj(k+1|k)/∂u^traj(k+N_u−1|k); ⋮ ⋱ ⋮; ∂ŷ^traj(k+N|k)/∂u^traj(k|k) ... ∂ŷ^traj(k+N|k)/∂u^traj(k+N_u−1|k)]   (3.199)

and u(k) (Eq. (1.27)) is the vector of length N_u. The same result may be obtained when we use the rudimentary Taylor formula (3.194) in which both x and y are vectors (i.e. x = u(k), y = ŷ(k), x̄ = u^traj(k), ȳ = ŷ^traj(k)). Let us notice the important fact that the obtained prediction equation (3.198) is a linear function of the future sequence of the manipulated variables, u(k); the vectors ŷ^traj(k), u^traj(k) and the matrix H(k) consist of numbers, i.e. they are independent of the vector u(k). The entries of the matrix (3.199) may be obtained by differentiating Eq. (3.192), which gives

∂ŷ^traj(k+p|k)/∂u^traj(k+r|k) = [dg(v^traj(k+p|k))/dv^traj(k+p|k)] · ∂v^traj(k+p|k)/∂u^traj(k+r|k)   (3.200)
for all p = 1, ..., N, r = 0, ..., N_u−1. Differentiating Eq. (3.193) gives

∂v^traj(k+p|k)/∂u^traj(k+r|k) = Σ_{i=1}^{I_uf(p)} b_i ∂u^traj(k−i+p|k)/∂u^traj(k+r|k) − Σ_{i=1}^{I_vf(p)} a_i ∂v^traj(k−i+p|k)/∂u^traj(k+r|k)   (3.201)
We have to remember that u(k+p|k) = u(k+N_u−1|k) for p ≥ N_u. Hence, the first derivatives on the right-hand side of the above equation may take only two values

∂u^traj(k+p|k)/∂u^traj(k+r|k) = { 1 if p = r or (p > r and r = N_u−1); 0 otherwise }   (3.202)

whereas the second partial derivatives on the right-hand side of Eq. (3.201) must be calculated recurrently. The presented calculation scheme of the matrix H(k), based on Eqs. (3.200), (3.201) and (3.202), is discussed in [8, 14]. It is possible to simplify the calculations further. We will show that the second derivatives on the right side of Eq. (3.200) are independent of the process operating point (i.e. the sampling instant k) and independent of the trajectories u^traj(k) and v^traj(k), which means that they may be precalculated off-line. As proved in [20], when a linear model is used in MPC for prediction, the predicted vector is a linear function of the calculated decision vector (increments (1.3)). Taking into account only the linear dynamic part of the Wiener model, the vector of its predicted output trajectory is of length N and is defined by the following linear function

v^traj(k) = [v^traj(k+1|k) ... v^traj(k+N|k)]^T = G Δu^traj(k) + v⁰(k)   (3.203)
where G is the step-response matrix of the linear dynamic part of the model; it is of dimensionality N × N_u and is defined by Eq. (3.95). Its entries, i.e. the step-response coefficients, are found from Eq. (3.96). The free trajectory of the linear dynamic part is denoted by v⁰(k). We must define the relation between the trajectory u^traj(k) (Eq. (3.188)) and the corresponding increments Δu^traj(k). The relation is

Δu^traj(k) = J u^traj(k) + u(k−1)   (3.204)

where the matrix J of dimensionality N_u × N_u is

J = [1 0 0 ... 0; −1 1 0 ... 0; 0 −1 1 ... 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ... −1 1]   (3.205)

and the vector u(k−1) of length N_u is

u(k−1) = [−u(k−1); 0_{(N_u−1)×1}]   (3.206)
From Eqs. (3.203) and (3.204), we have

v^traj(k) = G(J u^traj(k) + u(k−1)) + v⁰(k)   (3.207)

Equation (3.207) is differentiated with respect to the vector u(k) = u^traj(k). Because the vectors u(k−1) and v⁰(k) refer to the past, they are independent of the future sequence of the manipulated variables u^traj(k). Thus, we obtain

[dv(k)/du(k)]|_{v(k)=v^traj(k), u(k)=u^traj(k)} = dv^traj(k)/du^traj(k) = G J   (3.208)

The matrix (3.208) is calculated off-line, once, in a simple way. It is independent of the process operating point as well as of the trajectories u^traj(k) and v^traj(k). It has
the following structure

dv(k)/du(k) = [∂v(k+1|k)/∂u(k|k) ... ∂v(k+1|k)/∂u(k+N_u−1|k); ⋮ ⋱ ⋮; ∂v(k+N|k)/∂u(k|k) ... ∂v(k+N|k)/∂u(k+N_u−1|k)]   (3.209)

Let us denote the entries of the matrix (3.209) in a simple way

∂v(k+p|k)/∂u(k+r|k) = h(p, r)   (3.210)

Hence, to find the entries of the matrix H(k), Eqs. (3.201) and (3.202) are not necessary. All things considered, from Eq. (3.200) and taking into account the structure of the matrix (3.209), the entries of the matrix of derivatives (3.199) are calculated from the following simple formula

∂ŷ^traj(k+p|k)/∂u^traj(k+r|k) = [dg(v^traj(k+p|k))/dv^traj(k+p|k)] h(p, r)   (3.211)
for all p = 1, ..., N, r = 0, ..., N_u−1. The derivatives on the right side of Eq. (3.211) must be calculated on-line, whereas the constant quantities h(p, r) are computed only once, off-line.

Prediction Using MIMO Wiener Model I
Next, we will discuss the MIMO case in which for prediction the MIMO Wiener model I shown in Fig. 2.2 is used. Using the general prediction equation (3.12) and from Eq. (2.14), we obtain the nonlinear output trajectory corresponding to the input trajectory u^traj(k)

ŷ_m^traj(k+p|k) = g_m(v_m^traj(k+p|k)) + d_m(k)   (3.212)
where m = 1, ..., n_y, p = 1, ..., N. The unmeasured disturbances are estimated from Eq. (3.24), in the same way it is done in the MPC-NO scheme. Using Eq. (3.23), derived for the MPC-NO algorithm, we have

v_m^traj(k+p|k) = Σ_{n=1}^{n_u} [Σ_{i=1}^{I_uf(p)} b_i^{m,n} u_n^traj(k−i+p|k) + Σ_{i=I_uf(p)+1}^{n_B} b_i^{m,n} u_n(k−i+p)] − Σ_{i=1}^{I_vf(p)} a_i^m v_m^traj(k−i+p|k) − Σ_{i=I_vf(p)+1}^{n_A} a_i^m v_m(k−i+p)   (3.213)
In the MIMO case, we have to find a linear approximation of the nonlinear predicted output trajectory, ŷ(k), which is a vector of length n_y N (Eq. (1.22)), with respect to the vector of future values of the manipulated variables, u(k) (Eq. (1.27)) (which is now of length n_u N_u), corresponding to the increments Δu(k) (Eq. (1.3)) (also of length n_u N_u). The matrix H(k) has the structure given by Eq. (3.199), but now it is of dimensionality n_y N × n_u N_u. Its submatrices are

∂ŷ^traj(k+p|k)/∂u^traj(k+r|k) = [∂ŷ_1^traj(k+p|k)/∂u_1^traj(k+r|k) ... ∂ŷ_1^traj(k+p|k)/∂u_{n_u}^traj(k+r|k); ⋮ ⋱ ⋮; ∂ŷ_{n_y}^traj(k+p|k)/∂u_1^traj(k+r|k) ... ∂ŷ_{n_y}^traj(k+p|k)/∂u_{n_u}^traj(k+r|k)]   (3.214)
for all p = 1, ..., N, r = 0, ..., N_u−1. If for prediction the MIMO Wiener model I shown in Fig. 2.2 is used, the entries of the submatrix (3.214) may be obtained by differentiating Eq. (3.212), which gives

∂ŷ_m^traj(k+p|k)/∂u_n^traj(k+r|k) = [dg_m(v_m^traj(k+p|k))/dv_m^traj(k+p|k)] · ∂v_m^traj(k+p|k)/∂u_n^traj(k+r|k)   (3.215)
for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1. Differentiating Eq. (3.213) gives

∂v_m^traj(k+p|k)/∂u_n^traj(k+r|k) = Σ_{i=1}^{I_uf(p)} b_i^{m,n} ∂u_n^traj(k−i+p|k)/∂u_n^traj(k+r|k) − Σ_{i=1}^{I_vf(p)} a_i^m ∂v_m^traj(k−i+p|k)/∂u_n^traj(k+r|k)   (3.216)

The first partial derivatives on the right-hand side of the above equation may take only two values

∂u_n^traj(k+p|k)/∂u_n^traj(k+r|k) = { 1 if p = r or (p > r and r = N_u−1); 0 otherwise }   (3.217)

whereas the second partial derivatives on the right-hand side of Eq. (3.216) must be calculated recurrently. Similarly to the SISO case, it is possible to prove that the second derivatives on the right side of Eq. (3.215) are independent of the process operating point as well as independent of the trajectories u^traj(k) and v^traj(k), which means that they may be precalculated off-line. We use Eq. (3.203), but now the step-response matrix G of the linear dynamic block is of dimensionality n_y N × n_u N_u. Its structure is defined by Eq. (3.144). The sub-matrices S_p are of dimensionality n_y × n_u and are defined by Eq. (3.116). Similarly, Eqs. (3.203) and (3.204) are true, but now the vector u^traj(k) is of length n_u N_u and the vectors v^traj(k) and v⁰(k) are of length n_y N. The matrix J, of dimensionality n_u N_u × n_u N_u, is

J = [I_{n_u×n_u} 0_{n_u×n_u} 0_{n_u×n_u} ... 0_{n_u×n_u}; −I_{n_u×n_u} I_{n_u×n_u} 0_{n_u×n_u} ... 0_{n_u×n_u}; 0_{n_u×n_u} −I_{n_u×n_u} I_{n_u×n_u} ... 0_{n_u×n_u}; ⋮ ⋮ ⋮ ⋱ ⋮; 0_{n_u×n_u} 0_{n_u×n_u} 0_{n_u×n_u} ... −I_{n_u×n_u} I_{n_u×n_u}]   (3.218)

and the vector u(k−1) of length n_u N_u is

u(k−1) = [−u(k−1); 0_{n_u(N_u−1)×1}]   (3.219)
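The block matrices (3.218) and (3.219) are the n_u-input versions of (3.205) and (3.206), which makes a Kronecker-product construction natural. An illustrative sketch (names are ours):

```python
import numpy as np

def block_J(Nu, nu):
    """Eq. (3.218): the scalar differencing pattern (3.205) with each entry
    replaced by an nu x nu identity (or zero) block."""
    J_scalar = np.eye(Nu) - np.eye(Nu, k=-1)    # 1 on the diagonal, -1 below
    return np.kron(J_scalar, np.eye(nu))

def stacked_u_prev(u_prev, Nu):
    """Eq. (3.219): -u(k-1) in the first nu-block, zeros elsewhere."""
    u_prev = np.asarray(u_prev, dtype=float)
    vec = np.zeros(len(u_prev) * Nu)
    vec[:len(u_prev)] = -u_prev
    return vec
```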
In the case of the MIMO Wiener model I, Eqs. (3.207) and (3.208) hold true. The matrix of derivatives dv(k)/du(k) is defined by Eq. (3.209), but now it is of dimensionality n_y N × n_u N_u. Its submatrices

dv(k+p|k)/du(k+r|k) = [∂v_1(k+p|k)/∂u_1(k+r|k) ... ∂v_1(k+p|k)/∂u_{n_u}(k+r|k); ⋮ ⋱ ⋮; ∂v_{n_y}(k+p|k)/∂u_1(k+r|k) ... ∂v_{n_y}(k+p|k)/∂u_{n_u}(k+r|k)]   (3.220)

are of dimensionality n_y × n_u. Let us denote the entries of the matrix dv(k)/du(k) in a simple way

∂v_m(k+p|k)/∂u_n(k+r|k) = h_{m,n}(p, r)   (3.221)
Hence, Eqs. (3.216) and (3.217) are not necessary to calculate the entries of the matrix H(k). All things considered, from Eq. (3.215) and taking into account the structure of the matrices (3.209) and (3.220), the entries of the submatrix (3.214) that comprise the matrix (3.199) are calculated from the following simple formula

∂ŷ_m^traj(k+p|k)/∂u_n^traj(k+r|k) = [dg_m(v_m^traj(k+p|k))/dv_m^traj(k+p|k)] h_{m,n}(p, r)   (3.222)

for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1.
Prediction Using MIMO Wiener Model II
If for prediction the MIMO Wiener model II shown in Fig. 2.3 is used, using the general prediction equation (3.12) and from Eq. (2.25), we obtain the nonlinear output trajectory corresponding to the input trajectory u^traj(k)

ŷ_m^traj(k+p|k) = g_m(v_1^traj(k+p|k), ..., v_{n_v}^traj(k+p|k)) + d_m(k)   (3.223)

where m = 1, ..., n_y, p = 1, ..., N. The signals v_m^traj(k+p|k) are calculated from Eq. (3.213) for p = 1, ..., N, but now m = 1, ..., n_v. The unmeasured disturbances are estimated from Eq. (3.26), in the same way it is done in the MPC-NO scheme. Two methods for finding the matrix H(k) defined by Eqs. (3.199) and (3.214) are discussed. In the first case, all derivatives are calculated on-line, at each sampling instant. The entries of the submatrix (3.214) are obtained by differentiating Eq. (3.223) directly, which gives

∂ŷ_m^traj(k+p|k)/∂u_n^traj(k+r|k) = Σ_{s=1}^{n_v} [∂g_m(v_1^traj(k+p|k), ..., v_{n_v}^traj(k+p|k))/∂v_s^traj(k+p|k)] · ∂v_s^traj(k+p|k)/∂u_n^traj(k+r|k)   (3.224)

for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1. The second partial derivatives are calculated by differentiating Eq. (3.213), which gives
∂v_s^traj(k+p|k)/∂u_n^traj(k+r|k) = Σ_{i=1}^{I_uf(p)} b_i^{s,n} ∂u_n^traj(k−i+p|k)/∂u_n^traj(k+r|k) − Σ_{i=1}^{I_vf(p)} a_i^s ∂v_s^traj(k−i+p|k)/∂u_n^traj(k+r|k)   (3.225)

The first derivatives on the right side of Eq. (3.225) are calculated from Eq. (3.217); the second ones are calculated recurrently. It is possible to prove that the second right-side derivatives in Eq. (3.224) may be precalculated off-line, so that we do not have to use Eqs. (3.217) and (3.225). The result is given by the general formula (3.208), but we have to take into account that in the case of the MIMO Wiener model II the linear dynamic part has n_v outputs, not n_y as it is the case in the MIMO Wiener model I. The step-response matrix G (Eq. (3.144)) of the linear dynamic block is of dimensionality n_v N × n_u N_u. Its sub-matrices S_p are of dimensionality n_v × n_u and have the structure defined by Eq. (3.150). The matrix of derivatives dv(k)/du(k) is defined by Eq. (3.209), but now it is of dimensionality n_v N × n_u N_u. Its submatrices are of dimensionality n_v × n_u and have the following structure
dv(k+p|k)/du(k+r|k) = [∂v_1(k+p|k)/∂u_1(k+r|k) ... ∂v_1(k+p|k)/∂u_{n_u}(k+r|k); ⋮ ⋱ ⋮; ∂v_{n_v}(k+p|k)/∂u_1(k+r|k) ... ∂v_{n_v}(k+p|k)/∂u_{n_u}(k+r|k)]   (3.226)

Let us denote the entries of the matrix dv(k)/du(k) in a simple way

∂v_s^traj(k+p|k)/∂u_n^traj(k+r|k) = h_{s,n}(p, r)   (3.227)
Hence, Eqs. (3.217) and (3.225) are not necessary to calculate the entries of the matrix H(k). All things considered, from Eq. (3.224) and taking into account the structure of the matrices (3.209) and (3.226), the entries of the submatrix (3.214) that comprise the matrix (3.199) are calculated from the following simple formula

∂ŷ_m^traj(k+p|k)/∂u_n^traj(k+r|k) = Σ_{s=1}^{n_v} [∂g_m(v_1^traj(k+p|k), ..., v_{n_v}^traj(k+p|k))/∂v_s^traj(k+p|k)] h_{s,n}(p, r)   (3.228)
for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1.

Prediction Using MIMO Wiener Model III
If for prediction the MIMO Wiener model III shown in Fig. 2.4 is used, from the general prediction equation (3.12) and from Eq. (2.53), we obtain the nonlinear output trajectory corresponding to the input trajectory u^traj(k)

ŷ_m^traj(k+p|k) = g_m(Σ_{n=1}^{n_u} v_{m,n}^traj(k+p|k)) + d_m(k)   (3.229)

where m = 1, ..., n_y, p = 1, ..., N. The predicted trajectory may also be expressed as

ŷ_m^traj(k+p|k) = g_m(v_m^traj(k+p|k)) + d_m(k)   (3.230)

where

v_m^traj(k+p|k) = Σ_{n=1}^{n_u} v_{m,n}^traj(k+p|k)   (3.231)

The unmeasured disturbances are estimated from Eq. (3.31), in the same way it is done in the MPC-NO scheme. Using Eq. (3.28) derived for the MPC-NO algorithm, we have
v_{m,n}^traj(k+p|k) = Σ_{i=1}^{I_uf(m,n,p)} b_i^{m,n} u_n^traj(k−i+p|k) + Σ_{i=I_uf(m,n,p)+1}^{n_B^{m,n}} b_i^{m,n} u_n(k−i+p) − Σ_{i=1}^{I_vf(m,n,p)} a_i^{m,n} v_{m,n}^traj(k−i+p|k) − Σ_{i=I_vf(m,n,p)+1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k−i+p)   (3.232)

The matrix H(k), of dimensionality n_y N × n_u N_u, is defined by Eq. (3.199), where its submatrices are defined by Eq. (3.214). The partial derivatives are calculated by differentiating Eq. (3.230), which gives Eq. (3.215), obtained for the MIMO Wiener model I. Differentiating Eq. (3.231), we obtain

∂v_m^traj(k+p|k)/∂u_n^traj(k+r|k) = ∂v_{m,n}^traj(k+p|k)/∂u_n^traj(k+r|k)   (3.233)
Differentiating Eq. (3.232), we have

∂v_{m,n}^traj(k+p|k)/∂u_n^traj(k+r|k) = Σ_{i=1}^{I_uf(m,n,p)} b_i^{m,n} ∂u_n^traj(k−i+p|k)/∂u_n^traj(k+r|k) − Σ_{i=1}^{I_vf(m,n,p)} a_i^{m,n} ∂v_{m,n}^traj(k−i+p|k)/∂u_n^traj(k+r|k)   (3.234)

for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1. It is possible to prove that the left-side derivatives in Eq. (3.233) may be precalculated off-line, so that we do not have to use Eq. (3.234). The result is given by the general formula (3.208), where the step-response matrix of the linear dynamic part of the model is defined by Eqs. (3.116) and (3.144). Because we assume that the transfer functions which describe the consecutive input-output channels of the linear part of the model may have different orders of dynamics, the scalar step-response coefficients are calculated from Eq. (3.145). It is important to stress the fact that the step-response coefficients are not calculated from a possibly high-order classical representation (2.1) with the matrices (2.7)–(2.8) but from the consecutive transfer functions (2.30), characterised by the polynomials (2.31)–(2.32), which are of a lower order than the classical MIMO representation, i.e. the MIMO Wiener model I. All things considered, in the case of the MIMO Wiener model III, the entries of the submatrix (3.214) that comprise the matrix H(k) are calculated from the formula obtained for the Wiener model I, i.e. Eq. (3.222). Two important differences are: the step-response coefficients are determined from Eq. (3.145) and we have to use Eq. (3.231) to calculate the signals v_m^traj(k+p|k).
Prediction Using MIMO Wiener Model IV
If for prediction the MIMO Wiener model IV shown in Fig. 2.5 is used, using the general prediction equation (3.12) and from Eq. (2.68), we obtain the nonlinear output trajectory corresponding to the input trajectory u^traj(k)

ŷ_m^traj(k+p|k) = g_m(Σ_{n=1}^{n_u} v_{1,n}^traj(k+p|k), ..., Σ_{n=1}^{n_u} v_{n_v,n}^traj(k+p|k)) + d_m(k)   (3.235)

where m = 1, ..., n_y, p = 1, ..., N. Using the definition

v_s^traj(k+p|k) = Σ_{n=1}^{n_u} v_{s,n}^traj(k+p|k)   (3.236)
where s = 1, ..., n_v, the predicted trajectory may also be expressed by Eq. (3.223), used in the case of the MIMO Wiener model II. The unmeasured disturbances are estimated from Eq. (3.33), in the same way it is done in the MPC-NO scheme. The signals v_{m,n}^traj(k+p|k) are calculated from Eq. (3.232), in the same way it is done in the case of the MIMO Wiener model III, for n = 1, ..., n_u and p = 1, ..., N, but now m = 1, ..., n_v. Because the MIMO Wiener models II and IV have n_v auxiliary signals and MISO static blocks, we may easily use the results obtained for the model II. Hence, the entries of the matrix H(k) are found from Eq. (3.224). Differentiating Eq. (3.236), we obtain

∂v_s^traj(k+p|k)/∂u_n^traj(k+r|k) = ∂v_{s,n}^traj(k+p|k)/∂u_n^traj(k+r|k)   (3.237)

Using Eq. (3.232), we obtain
∂v_{s,n}^traj(k+p|k)/∂u_n^traj(k+r|k) = Σ_{i=1}^{I_uf(s,n,p)} b_i^{s,n} ∂u_n^traj(k−i+p|k)/∂u_n^traj(k+r|k) − Σ_{i=1}^{I_vf(s,n,p)} a_i^{s,n} ∂v_{s,n}^traj(k−i+p|k)/∂u_n^traj(k+r|k)   (3.238)

for all n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1, s = 1, ..., n_v. The first derivatives on the right side of Eq. (3.238) are calculated from Eq. (3.217); the second ones are calculated recurrently. As explained for the MIMO Wiener model III, it is possible to precalculate the left-side derivatives in Eq. (3.237) off-line, so that we do not have to use Eqs. (3.217) and (3.238). The result is given by the general formula (3.208). The scalar step-response coefficients are calculated from Eq. (3.145) for all m = 1, ..., n_v, n = 1, ..., n_u. The step-response matrix G (Eq. (3.144)) of the linear dynamic block is of dimensionality n_v N × n_u N_u. Its sub-matrices S_p are of dimensionality n_v × n_u and are
calculated from Eq. (3.138) for all m = 1, ..., n_v, n = 1, ..., n_u. The step-response coefficients are not calculated from a possibly high-order classical representation (2.1) with the matrices (2.18)–(2.19) but from the consecutive transfer functions (2.30), characterised by the polynomials (2.31)–(2.32). All things considered, in the case of the MIMO Wiener model IV, the entries of the matrix H(k) are calculated from the formula obtained for the Wiener model II, i.e. Eq. (3.228). Two important differences are: the step-response coefficients are determined from Eq. (3.145) and we have to use Eq. (3.236) to calculate the signals v_s^traj(k+p|k).

Prediction Using MIMO Wiener Model V
If for prediction the fifth structure of the Wiener model shown in Fig. 2.6 is used, using the general prediction equation (3.12) and from Eq. (2.77), we have

ŷ_m^traj(k+p|k) = Σ_{n=1}^{n_u} g_{m,n}(v_{m,n}^traj(k+p|k)) + d_m(k)   (3.239)
where m = 1, ..., n_y, p = 1, ..., N. The unmeasured disturbances are estimated from Eq. (3.35), in the same way it is done in the MPC-NO scheme. The signals v_{m,n}^traj(k+p|k) are calculated from Eq. (3.28), in the same way it is done in the case of the MIMO Wiener model III, for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N. Differentiating Eq. (3.239), we have

∂ŷ_m^traj(k+p|k)/∂u_n^traj(k+r|k) = [dg_{m,n}(v_{m,n}^traj(k+p|k))/dv_{m,n}^traj(k+p|k)] · ∂v_{m,n}^traj(k+p|k)/∂u_n^traj(k+r|k)   (3.240)

Differentiating Eq. (3.232), we obtain Eq. (3.234), which defines the second derivatives on the right side of Eq. (3.240). Also for the MIMO Wiener model V we may precalculate the second partial derivatives on the right side of Eq. (3.240). The step-response matrix G of the linear dynamic block of the model (Eq. (3.144)) is of dimensionality n_y N × n_u N_u, the submatrices S_p (Eq. (3.116)) are of dimensionality n_y × n_u, and the scalar step-response coefficients are computed from Eq. (3.117) for m = 1, ..., n_y, n = 1, ..., n_u. The matrix dv(k)/du(k) is given by Eq. (3.208) and is of dimensionality n_y N × n_u N_u. Let us denote

∂v_{m,n}^traj(k+p|k)/∂u_n^traj(k+r|k) = h_{m,n}(p, r)   (3.241)

Hence, Eq. (3.240) simplifies to

∂ŷ_m^traj(k+p|k)/∂u_n^traj(k+r|k) = [dg_{m,n}(v_{m,n}^traj(k+p|k))/dv_{m,n}^traj(k+p|k)] h_{m,n}(p, r)   (3.242)

for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u−1.
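All the structures above share the same computational pattern as the SISO formulas (3.208)-(3.211): an off-line part (the constant table h(p, r)) and a cheap on-line part (one derivative of the static block per prediction instant). A sketch of the SISO version in Python, with illustrative names; the derivative of the static block g is assumed given:

```python
import numpy as np

def h_table(step_coeffs, N, Nu):
    """Off-line: h(p, r) of Eq. (3.210) as the product G*J of Eq. (3.208).
    G (N x Nu) holds the step-response coefficients s_j of the linear block
    (saturated at the last provided value); J is the differencing matrix (3.205)."""
    s = np.asarray(step_coeffs, dtype=float)
    G = np.zeros((N, Nu))
    for p in range(1, N + 1):
        for r in range(Nu):
            if p - r >= 1:
                G[p - 1, r] = s[min(p - r, len(s)) - 1]
    J = np.eye(Nu) - np.eye(Nu, k=-1)
    return G @ J

def H_matrix(dg, v_traj, h):
    """On-line, Eq. (3.211): H[p, r] = g'(v^traj(k+p|k)) * h(p, r)."""
    return np.diag([dg(v) for v in v_traj]) @ h

# e.g. a quadratic static block g(v) = v^2, so g'(v) = 2v:
h = h_table([1.0, 2.0, 3.0], N=3, Nu=2)
H = H_matrix(lambda v: 2.0 * v, [1.0, 2.0, 3.0], h)
```

The table h is computed once, off-line; only the diagonal of static-block derivatives changes at each sampling instant.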
Optimisation
Because in MPC we calculate the increments Δu(k) rather than the values u(k), using Eq. (1.26), from Eq. (3.198) we obtain the linear approximation of the nonlinear predicted trajectory of the controlled variables as a linear function of the decision variables of the MPC-NPLT algorithm

ŷ(k) = H(k) J Δu(k) + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k))   (3.243)

Using the prediction rule (3.243), from the general MPC optimisation task (1.35), we obtain the MPC-NPLT quadratic optimisation problem

min_{Δu(k)} J(k) = ‖y^sp(k) − H(k) J Δu(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))‖²_M + ‖Δu(k)‖²_Λ
subject to
u^min ≤ J Δu(k) + u(k−1) ≤ u^max
Δu^min ≤ Δu(k) ≤ Δu^max
y^min ≤ H(k) J Δu(k) + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)) ≤ y^max   (3.244)

When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPLT optimisation task (3.244), there are only two types of constraints: the linear inequality constraints defined by

A(k) = [−J; J; −H(k) J; H(k) J],
B(k) = [−u^min + u(k−1); u^max − u(k−1); −y^min + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)); y^max − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))]   (3.245)

and bounds defined by Eq. (3.39). Differentiating the cost-function, J(k), with respect to the decision variables, Δu(k), we have

dJ(k)/dΔu(k) = −2 J^T H^T(k) M (y^sp(k) − H(k) J Δu(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))) + 2 Λ Δu(k)
= 2(J^T H^T(k) M H(k) J + Λ) Δu(k) − 2 J^T H^T(k) M (y^sp(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)))   (3.246)

The second-order derivative matrix of the minimised cost-function is

H^QP(k) = d²J(k)/d(Δu(k))² = 2(J^T H^T(k) M H(k) J + Λ)   (3.247)
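The matrices (3.247)-(3.248) are straightforward to assemble numerically. A sketch in NumPy with made-up data; the constrained solve (quadprog in MATLAB) is replaced here by the unconstrained stationary point, just to show how the pieces fit together:

```python
import numpy as np

def npl_t_qp(H, J, M, Lam, ysp, y_traj, u_traj, u_prev):
    """H_QP and f_QP of Eqs. (3.247)-(3.248); u_prev is the stacked
    vector of u(k-1) values appearing in Eq. (3.243)."""
    HJ = H @ J
    d = ysp - y_traj - H @ (u_prev - u_traj)   # constant part of the set-point error
    return 2.0 * (HJ.T @ M @ HJ + Lam), -2.0 * HJ.T @ M @ d

N, Nu = 3, 2
H = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # trajectory linearisation (3.199)
J = np.tril(np.ones((Nu, Nu)))                        # u(k) = J*du(k) + u(k-1)
M, Lam = np.eye(N), 0.1 * np.eye(Nu)
H_qp, f_qp = npl_t_qp(H, J, M, Lam,
                      ysp=np.ones(N), y_traj=np.zeros(N),
                      u_traj=np.zeros(Nu), u_prev=np.zeros(Nu))
du = np.linalg.solve(H_qp, -f_qp)                     # unconstrained minimiser
```

With Λ positive definite, H_QP is positive definite, so the QP is convex and has a unique solution.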
and

f^QP(k) = −2 J^T H^T(k) M (y^sp(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)))   (3.248)
Next, let us consider soft output constraints. Using the prediction equation (3.243), from the general MPC optimisation problem (1.39) with soft output constraints, we obtain the following quadratic optimisation MPC-NPLT problem

min_{Δu(k), ε^min(k), ε^max(k)} J(k) = ‖y^sp(k) − H(k) J Δu(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))‖²_M + ‖Δu(k)‖²_Λ + ρ^min ‖ε^min(k)‖² + ρ^max ‖ε^max(k)‖²
subject to
u^min ≤ J Δu(k) + u(k−1) ≤ u^max
Δu^min ≤ Δu(k) ≤ Δu^max
y^min − ε^min(k) ≤ H(k) J Δu(k) + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)) ≤ y^max + ε^max(k)
ε^min(k) ≥ 0_{n_y×1}, ε^max(k) ≥ 0_{n_y×1}   (3.249)

Now, we take into account that the decision vector of the MPC-NPLT problem, x(k), is defined by Eq. (3.40), the auxiliary matrices N₁, N₂, N₃ are defined by Eqs. (3.41)–(3.43) and we use the relations (3.44)–(3.46), (3.49)–(3.50). We obtain the optimisation task

min_{x(k)} J(k) = ‖y^sp(k) − H(k) J N₁ x(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))‖²_M + ‖N₁ x(k)‖²_Λ + ρ^min ‖N₂ x(k)‖² + ρ^max ‖N₃ x(k)‖²
subject to
u^min ≤ J N₁ x(k) + u(k−1) ≤ u^max
Δu^min ≤ N₁ x(k) ≤ Δu^max
y^min − I_{N×1} ⊗ N₂ x(k) ≤ H(k) J N₁ x(k) + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)) ≤ y^max + I_{N×1} ⊗ N₃ x(k)
N₂ x(k) ≥ 0_{n_y×1}, N₃ x(k) ≥ 0_{n_y×1}   (3.250)

The above problem is solved in MATLAB by means of the quadprog function. When compared with the general quadratic optimisation problem (3.165), in our MPC-NPLT optimisation task (3.250), the linear inequality constraints are defined by
A(k) = [−J N₁; J N₁; −N₁; N₁; −H(k) J N₁ − I_{N×1} ⊗ N₂; H(k) J N₁ − I_{N×1} ⊗ N₃; −N₂; −N₃]   (3.251)

and

B(k) = [−u^min + u(k−1); u^max − u(k−1); −Δu^min; Δu^max; −y^min + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)); y^max − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)); 0_{n_y×1}; 0_{n_y×1}]   (3.252)
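The matrices N₁, N₂, N₃ used above merely select the blocks of the stacked decision vector x(k) = [Δu(k); ε^min(k); ε^max(k)]; the exact definitions are Eqs. (3.41)-(3.43). An illustrative sketch with made-up block sizes:

```python
import numpy as np

def selection_matrices(n_du, n_eps):
    """N1*x = du(k), N2*x = eps_min(k), N3*x = eps_max(k) for the stacked
    decision vector x = [du; eps_min; eps_max]."""
    Z = np.zeros
    N1 = np.hstack([np.eye(n_du), Z((n_du, 2 * n_eps))])
    N2 = np.hstack([Z((n_eps, n_du)), np.eye(n_eps), Z((n_eps, n_eps))])
    N3 = np.hstack([Z((n_eps, n_du + n_eps)), np.eye(n_eps)])
    return N1, N2, N3
```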
Differentiating the cost-function, J(k), with respect to the decision variables, x(k), we have

dJ(k)/dx(k) = −2 N₁^T J^T H^T(k) M (y^sp(k) − H(k) J N₁ x(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))) + 2 N₁^T Λ N₁ x(k) + 2 ρ^min N₂^T N₂ x(k) + 2 ρ^max N₃^T N₃ x(k)
= 2(N₁^T J^T H^T(k) M H(k) J N₁ + N₁^T Λ N₁ + ρ^min N₂^T N₂ + ρ^max N₃^T N₃) x(k) − 2 N₁^T J^T H^T(k) M (y^sp(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)))   (3.253)

The second-order derivative matrix of the minimised cost-function is

H^QP(k) = d²J(k)/d(x(k))² = 2(N₁^T J^T H^T(k) M H(k) J N₁ + N₁^T Λ N₁ + ρ^min N₂^T N₂ + ρ^max N₃^T N₃)   (3.254)

and

f^QP(k) = −2 N₁^T J^T H^T(k) M (y^sp(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)))   (3.255)
3.7 MPC-NPLT-P Algorithm

We will consider parameterisation using Laguerre functions in order to reduce the number of decision variables of the MPC-NPLT algorithm. The general formulation of the resulting MPC-NPLT algorithm with Parameterisation (MPC-NPLT-P) is presented in [13], but only for the SISO case. In this Chapter, the algorithm is discussed for the general MIMO case; all Wiener structures discussed in Chap. 2 may be used. Using the parameterisation defined by Eq. (1.56) and the prediction equation (3.243) used in the rudimentary version of the MPC-NPLT algorithm, we obtain the following prediction rule

ŷ(k) = H(k) J L c(k) + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k))   (3.256)

At first, let us consider MPC with hard constraints imposed on the controlled variables. Using the prediction equation (3.256), from the general MPC optimisation problem (1.35), we obtain the following MPC-NPLT-P optimisation task

min_{c(k)} J(k) = ‖y^sp(k) − H(k) J L c(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))‖²_M + ‖L c(k)‖²_Λ
subject to
u^min ≤ J L c(k) + u(k−1) ≤ u^max
Δu^min ≤ L c(k) ≤ Δu^max
y^min ≤ H(k) J L c(k) + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)) ≤ y^max   (3.257)

When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPLT-P optimisation task (3.257), the linear inequality constraints are defined by

A(k) = [−J L; J L; −L; L; −H(k) J L; H(k) J L],
B(k) = [−u^min + u(k−1); u^max − u(k−1); −Δu^min; Δu^max; −y^min + ŷ^traj(k) + H(k)(u(k−1) − u^traj(k)); y^max − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))]   (3.258)

Differentiating the cost-function, J(k), with respect to the decision variables, c(k), we have
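For reference, one common construction of the discrete Laguerre basis that generates such a matrix L (following L. Wang's state-space formulation; the pole a and the number of functions M are tuning parameters, and this particular code is our sketch, not taken from [13]):

```python
import numpy as np

def laguerre_matrix(a, M, Nu):
    """Rows are [l_1(i), ..., l_M(i)] for i = 0..Nu-1, so that du(k) = L c(k)
    is parameterised by only M << Nu decision variables."""
    beta = 1.0 - a ** 2
    A = np.zeros((M, M))
    for i in range(M):
        A[i, i] = a
        for j in range(i):
            A[i, j] = beta * (-a) ** (i - j - 1)
    l = np.sqrt(beta) * np.array([(-a) ** i for i in range(M)])
    L = np.zeros((Nu, M))
    for i in range(Nu):
        L[i, :] = l
        l = A @ l          # advance all M Laguerre functions one step
    return L
```

Over a long horizon the columns of L are orthonormal, which keeps the parameterised problem well conditioned.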
dJ(k)/dc(k) = −2 L^T J^T H^T(k) M (y^sp(k) − H(k) J L c(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k))) + 2 L^T Λ L c(k)
= 2(L^T J^T H^T(k) M H(k) J L + L^T Λ L) c(k) − 2 L^T J^T H^T(k) M (y^sp(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)))   (3.259)

The second-order derivative matrix of the minimised cost-function is

H^QP(k) = d²J(k)/d(c(k))² = 2(L^T J^T H^T(k) M H(k) J L + L^T Λ L)   (3.260)

and

f^QP(k) = −2 L^T J^T H^T(k) M (y^sp(k) − ŷ^traj(k) − H(k)(u(k−1) − u^traj(k)))   (3.261)

Next, let us consider soft output constraints. Using the prediction equation (3.256), from the general MPC optimisation problem (1.39) with soft output constraints, we obtain the following quadratic optimisation MPC-NPLT-P problem
c(k) εmin (k), εmax (k)
J (k) = ysp (k) − H(k) J Lc(k) − ˆytraj (k) 2 − H(k)(u(k − 1) − utraj (k)) M + Lc(k)2 2 2 + ρ min εmin (k) + ρ max εmax (k)
subject to u
min
u
(3.262)
≤ J Lc(k) + u(k − 1) ≤ u
min
max
≤ Lc(k) ≤ umax
ymin − ε min (k) ≤ H(k) J Lc(k) + ˆytraj (k) + H(k)(u(k − 1) − utraj (k)) ≤ ymax + ε max (k) εmin (k) ≥ 0n y ×1 , εmax (k) ≥ 0n y ×1 Now, we take into account that the decision vector of the MPC-NPSL problem, N 2, N 3 are defined by Eqs. x˜ (k), is defined by Eq. (3.57), the auxilliary matrices N 1, (3.58)–(3.60) and we use the definitions (3.61)–(3.63), (3.64)–(3.65). We obtain the optimisation task
3 MPC Algorithms Using Input-Output Wiener Models
\min_{\tilde{x}(k)} J(k) = \| y^{sp}(k) - H(k) J L \tilde{N}_1 \tilde{x}(k) - \hat{y}^{traj}(k) - H(k)(u(k-1) - u^{traj}(k)) \|^2_M + \| L \tilde{N}_1 \tilde{x}(k) \|^2_{\Lambda} + \rho^{min} \| \tilde{N}_2 \tilde{x}(k) \|^2 + \rho^{max} \| \tilde{N}_3 \tilde{x}(k) \|^2

subject to

u^{min} \le J L \tilde{N}_1 \tilde{x}(k) + u(k-1) \le u^{max}
\Delta u^{min} \le L \tilde{N}_1 \tilde{x}(k) \le \Delta u^{max}
y^{min} - I_{N \times 1} \otimes \tilde{N}_2 \tilde{x}(k) \le H(k) J L \tilde{N}_1 \tilde{x}(k) + \hat{y}^{traj}(k) + H(k)(u(k-1) - u^{traj}(k)) \le y^{max} + I_{N \times 1} \otimes \tilde{N}_3 \tilde{x}(k)
\tilde{N}_2 \tilde{x}(k) \ge 0_{n_y \times 1}, \quad \tilde{N}_3 \tilde{x}(k) \ge 0_{n_y \times 1}   (3.263)

The above problem is solved in MATLAB by means of the quadprog function. When compared with the general quadratic optimisation problem (3.165), in our MPC-NPLT-P optimisation task (3.263), the linear inequality constraints are defined by

A(k) = \begin{bmatrix} -J L \tilde{N}_1 \\ J L \tilde{N}_1 \\ -L \tilde{N}_1 \\ L \tilde{N}_1 \\ -H(k) J L \tilde{N}_1 - I_{N \times 1} \otimes \tilde{N}_2 \\ H(k) J L \tilde{N}_1 - I_{N \times 1} \otimes \tilde{N}_3 \\ -\tilde{N}_2 \\ -\tilde{N}_3 \end{bmatrix}   (3.264)

and

B(k) = \begin{bmatrix} -u^{min} + u(k-1) \\ u^{max} - u(k-1) \\ -\Delta u^{min} \\ \Delta u^{max} \\ -y^{min} + \hat{y}^{traj}(k) + H(k)(u(k-1) - u^{traj}(k)) \\ y^{max} - \hat{y}^{traj}(k) - H(k)(u(k-1) - u^{traj}(k)) \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix}   (3.265)
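In an implementation, assembling the stacked constraint matrix is a direct translation of the block rows. The following numpy sketch is illustrative only: the dimensions, the random stand-ins for H(k) and the Laguerre matrix L, and the layout of the selection matrices (here written N1, N2, N3) are assumptions, since Eqs. (3.57)–(3.60) are not reproduced in this chapter. The right-hand side B(k) stacks its entries in the same order.

```python
import numpy as np

# Illustrative dimensions (assumed, not taken from the book's examples)
n_u, n_y, N, Nu, n_L = 2, 2, 5, 3, 2
n_c = n_u * n_L                 # Laguerre coefficients in c(k)
n_x = n_c + 2 * n_y             # decision vector [c; eps_min; eps_max]

rng = np.random.default_rng(0)
H = rng.standard_normal((n_y * N, n_u * Nu))   # stand-in for H(k)
L = rng.standard_normal((n_u * Nu, n_c))       # stand-in for the Laguerre matrix L
# J maps increments to values (Eq. (1.26)): lower block-triangular identity blocks
J = np.kron(np.tril(np.ones((Nu, Nu))), np.eye(n_u))

# Assumed selection matrices picking c, eps_min, eps_max out of the decision vector
N1 = np.hstack([np.eye(n_c), np.zeros((n_c, 2 * n_y))])
N2 = np.hstack([np.zeros((n_y, n_c)), np.eye(n_y), np.zeros((n_y, n_y))])
N3 = np.hstack([np.zeros((n_y, n_c + n_y)), np.eye(n_y)])

JLN1 = J @ L @ N1
HJLN1 = H @ JLN1
A = np.vstack([
    -JLN1, JLN1,                               # input value constraints
    -L @ N1, L @ N1,                           # input increment constraints
    -HJLN1 - np.kron(np.ones((N, 1)), N2),     # soft lower output constraints
     HJLN1 - np.kron(np.ones((N, 1)), N3),     # soft upper output constraints
    -N2, -N3,                                  # slack non-negativity
])
print(A.shape)
```

The row count is 4 n_u N_u + 2 n_y N + 2 n_y, matching the six block rows of (3.264) plus the two slack rows.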
Differentiating the cost-function, J (k), with respect to the decision variables, x˜ (k), we have
\frac{dJ(k)}{d\tilde{x}(k)} = -2 \tilde{N}_1^T L^T J^T H^T(k) M ( y^{sp}(k) - H(k) J L \tilde{N}_1 \tilde{x}(k) - \hat{y}^{traj}(k) - H(k)(u(k-1) - u^{traj}(k)) ) + 2 \tilde{N}_1^T L^T \Lambda L \tilde{N}_1 \tilde{x}(k) + 2 \rho^{min} \tilde{N}_2^T \tilde{N}_2 \tilde{x}(k) + 2 \rho^{max} \tilde{N}_3^T \tilde{N}_3 \tilde{x}(k)
= 2 ( \tilde{N}_1^T L^T J^T H^T(k) M H(k) J L \tilde{N}_1 + \tilde{N}_1^T L^T \Lambda L \tilde{N}_1 + \rho^{min} \tilde{N}_2^T \tilde{N}_2 + \rho^{max} \tilde{N}_3^T \tilde{N}_3 ) \tilde{x}(k) - 2 \tilde{N}_1^T L^T J^T H^T(k) M ( y^{sp}(k) - \hat{y}^{traj}(k) - H(k)(u(k-1) - u^{traj}(k)) )   (3.266)

The second-order derivative matrix of the minimised cost-function is

H^{QP}(k) = \frac{d^2 J(k)}{d(\tilde{x}(k))^2} = 2 ( \tilde{N}_1^T L^T J^T H^T(k) M H(k) J L \tilde{N}_1 + \tilde{N}_1^T L^T \Lambda L \tilde{N}_1 + \rho^{min} \tilde{N}_2^T \tilde{N}_2 + \rho^{max} \tilde{N}_3^T \tilde{N}_3 )   (3.267)

and

f^{QP}(k) = -2 \tilde{N}_1^T L^T J^T H^T(k) M ( y^{sp}(k) - \hat{y}^{traj}(k) - H(k)(u(k-1) - u^{traj}(k)) )   (3.268)
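To make the parameterisation concrete, the sketch below generates discrete Laguerre functions with the standard recursion Γ₁(z) = √(1−a²)/(1−az⁻¹), Γᵢ(z) = Γᵢ₋₁(z)(z⁻¹−a)/(1−az⁻¹). This construction is an assumption about the form behind Eq. (1.56), which is not reproduced here; the key property checked is orthonormality, and the point of the method is that Δu(k) = Lc(k) replaces the n_u N_u increment variables by a handful of coefficients.

```python
import numpy as np

def _iir(b, pole, x):
    """First-order IIR filter: y[k] = b[0]*x[k] + b[1]*x[k-1] + pole*y[k-1]."""
    y = np.zeros_like(x)
    for k in range(len(x)):
        y[k] = b[0] * x[k]
        if k > 0:
            y[k] += b[1] * x[k - 1] + pole * y[k - 1]
    return y

def laguerre_basis(a, n_L, n):
    """Columns l_1..l_{n_L}: discrete Laguerre functions over n samples."""
    impulse = np.zeros(n)
    impulse[0] = 1.0
    L = np.empty((n, n_L))
    L[:, 0] = _iir([np.sqrt(1.0 - a * a), 0.0], a, impulse)   # Gamma_1
    for i in range(1, n_L):                                    # Gamma_i
        L[:, i] = _iir([-a, 1.0], a, L[:, i - 1])
    return L

L = laguerre_basis(a=0.5, n_L=3, n=400)
print(np.round(L.T @ L, 6))   # ~ 3x3 identity: the basis is orthonormal
```

With, say, n_u = 2 inputs, N_u = 30 and n_L = 3 per input, the quadratic programme shrinks from 60 decision variables to 6.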
3.8 MPC-NPLPT Algorithm

In the MPC-NPLT algorithm, linearisation is performed along some assumed future input trajectory u^{traj}(k) (Eq. (3.188)). Ideally, the trajectory u^{traj}(k) would be close to the one that corresponds to the optimal trajectory of increments, \Delta u^{opt}(k), calculated from the MPC optimisation problem, i.e. from Eq. (1.26) we wish to have u^{traj}(k) \approx J \Delta u^{opt}(k) + u(k-1). In such a case, from Eq. (3.198), because u^{opt}(k) \approx u^{traj}(k), the linearised output trajectory is very close to the true nonlinear trajectory, i.e.

\hat{y}(k) \approx \hat{y}^{traj}(k)   (3.269)

Of course, such a guess of the vector u^{traj}(k) is usually impossible before optimisation. That is why, for linearisation, we use the future trajectory defined by the values of the manipulated variables applied to the process at the previous sampling instant (Eq. (3.190)), or we use the last n_u(N_u - 1) elements of the optimal input trajectory calculated at the previous sampling instant (Eq. (3.191)). In the MPC algorithm with Nonlinear Prediction and Predicted Trajectory Linearisation (MPC-NPLPT), we repeat trajectory linearisation, nonlinear prediction and calculation of the future control policy a few times at each sampling instant, in consecutive internal iterations. Such repetitions may be necessary when the set-point is changed significantly or the process is affected by a strong external disturbance (in
both cases, the process is far from its desired set-point). In the first internal iteration, the trajectories (3.190) or (3.191) are used for linearisation, whereas in the consecutive ones, linearisation is performed along the trajectory of the manipulated variables calculated from the optimisation problem for the previous internal iteration. The general formulation of the MPC-NPLPT algorithm is presented in [8]. The following processes have been considered in simulations: a solid oxide fuel cell [12], a proton exchange membrane fuel cell [14], a neutralisation reactor [10], a polymerisation reactor [8], a heat exchanger [11]. Implementation details of the MPC-NPLPT algorithm for the SISO Wiener model are given in [7]; utilisation of the MIMO Wiener model II is discussed in [8]. In both cases, the nonlinear static block is a neural network of the MLP type. Other model structures have not been considered. In this Chapter, the MPC-NPLPT algorithm is derived for all Wiener structures discussed in Chap. 2.

Let t be the index of internal iterations, t = 1, \ldots, t_{max}. In the tth internal iteration, the predicted trajectory of the controlled variables

\hat{y}^t(k) = \begin{bmatrix} \hat{y}^t(k+1|k) \\ \vdots \\ \hat{y}^t(k+N|k) \end{bmatrix}   (3.270)
is linearised along the trajectory of the manipulated variables found at the previous internal iteration

u^{t-1}(k) = \begin{bmatrix} u^{t-1}(k|k) \\ \vdots \\ u^{t-1}(k+N_u-1|k) \end{bmatrix}   (3.271)

where u^{t-1}(k+p|k) = u^{t-1}(k+N_u-1|k) for p = N_u, \ldots, N. The predicted trajectory of the controlled variables corresponding to the trajectory of the manipulated variables u^{t-1}(k), i.e. the trajectory

\hat{y}^{t-1}(k) = \begin{bmatrix} \hat{y}^{t-1}(k+1|k) \\ \vdots \\ \hat{y}^{t-1}(k+N|k) \end{bmatrix}   (3.272)

is calculated from the model of the controlled process. We have to find a linear approximation of the function \hat{y}^t(u^t(k)) : \mathbb{R}^{n_u N_u} \rightarrow \mathbb{R}^{n_y N}. It is linearised along the trajectory of the manipulated variable found at the previous internal iteration, u^{t-1}(k). The independent variable vector, i.e. the vector of decision variables, is the trajectory of future values of the manipulated variable at the current internal iteration, i.e.

u^t(k) = \begin{bmatrix} u^t(k|k) \\ \vdots \\ u^t(k+N_u-1|k) \end{bmatrix}   (3.273)
We use the Taylor series expansion formula (Eq. (3.194)). In our case, x = u^t(k), the vector of function values is y = \hat{y}^t(k) and linearisation is performed along the vector \bar{x} = u^{t-1}(k), which means that y(\bar{x}) = \hat{y}^{t-1}(k). We obtain

\hat{y}^t(k) = \hat{y}^{t-1}(k) + H^t(k)(u^t(k) - u^{t-1}(k))   (3.274)

The obtained linearisation formula is similar to that used in the MPC-NPLT algorithm (Eq. (3.198)). In both cases, the linear approximation of the predicted output trajectory is a sum of the trajectory of the controlled variable (\hat{y}^{traj}(k) and \hat{y}^{t-1}(k), respectively) for the linearisation point (defined by the vectors u^{traj}(k) and u^{t-1}(k), respectively) and the derivative matrix (H(k) and H^t(k), respectively) multiplied by the difference between the independent variable vector and the linearisation point (u(k) - u^{traj}(k) and u^t(k) - u^{t-1}(k), respectively). The matrix of derivatives of the predicted output trajectory

H^t(k) = \left. \frac{d\hat{y}(k)}{du(k)} \right|_{\hat{y}(k)=\hat{y}^{t-1}(k),\; u(k)=u^{t-1}(k)} = \frac{d\hat{y}^{t-1}(k)}{du^{t-1}(k)} = \begin{bmatrix} \dfrac{\partial \hat{y}^{t-1}(k+1|k)}{\partial u^{t-1}(k|k)} & \cdots & \dfrac{\partial \hat{y}^{t-1}(k+1|k)}{\partial u^{t-1}(k+N_u-1|k)} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \hat{y}^{t-1}(k+N|k)}{\partial u^{t-1}(k|k)} & \cdots & \dfrac{\partial \hat{y}^{t-1}(k+N|k)}{\partial u^{t-1}(k+N_u-1|k)} \end{bmatrix}   (3.275)

is similar to that used in the MPC-NPLT algorithm (Eq. (3.199)). From Eq. (1.26), one finds the relation between the vector of values of the manipulated variables, u^t(k), and their increments

u^t(k) = J \Delta u^t(k) + u(k-1)   (3.276)

The vector of increments of the manipulated variable at the internal iteration t is

\Delta u^t(k) = \begin{bmatrix} \Delta u^t(k|k) \\ \vdots \\ \Delta u^t(k+N_u-1|k) \end{bmatrix}   (3.277)

Next, using Eq. (3.276), we express the linear approximation of the nonlinear predicted trajectory of the controlled variable (Eq. (3.274)) as a linear function of the decision variables of the MPC-NPLPT algorithm

\hat{y}^t(k) = H^t(k) J \Delta u^t(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k))   (3.278)
We should notice that in the MPC-NPLT and MPC-NPLPT schemes, we obtain structurally very similar prediction equations (3.243) and (3.278), respectively. It
means that we will easily derive all necessary implementation details for the MPC-NPLPT algorithm from previously presented considerations for the MPC-NPLT scheme. For the MPC-NPLPT algorithm, we have to find the predicted trajectory of the controlled variables for the sampling instant k and the internal iteration t-1, i.e. \hat{y}^{t-1}(k). Furthermore, the structure of the matrices of partial derivatives of the predicted trajectory is very similar in both algorithms (Eqs. (3.199) and (3.275), respectively). Hence, in the MPC-NPLPT algorithm, we have to find the entries of the matrix H^t(k) using the entries of the matrix H(k) derived for the MPC-NPLT control scheme. The initial trajectory for linearisation, u^0(k), may be defined using Eq. (3.190) or Eq. (3.191). It means that the MPC-NPLPT algorithm with only one internal iteration is in fact the MPC-NPLT scheme. The internal iterations are continued if the process is not close to the required steady-state, i.e. when the difference between the process outputs and the current set-points satisfies the condition

\sum_{p=0}^{N_0} \| y^{sp}(k-p) - y(k-p) \|^2 \ge \delta_y   (3.279)

where a positive integer number N_0 is a time horizon and \delta_y > 0 is a real number. If the difference between the future control increments calculated in two consecutive internal iterations is not significant, i.e. when

\| u^t(k) - u^{t-1}(k) \|^2 < \delta_u   (3.280)

where \delta_u > 0 is a real number, the internal iterations are terminated and the first n_u elements of the determined sequence, \Delta u^t(k), are applied to the process, i.e. u(k) = \Delta u^t(k|k) + u(k-1). In comparison with all MPC algorithms discussed so far, the MPC-NPLPT strategy has three additional parameters, N_0, \delta_y, \delta_u, which must be adjusted experimentally.

Prediction Using SISO Wiener Model

For the SISO Wiener model depicted in Fig. 2.1, using Eq. (3.192), we obtain the predicted trajectory

\hat{y}^{t-1}(k+p|k) = g(v^{t-1}(k+p|k)) + d(k)   (3.281)
for p = 1, \ldots, N. From Eq. (3.193) we have

v^{t-1}(k+p|k) = \sum_{i=1}^{I_{uf}(p)} b_i u^{t-1}(k-i+p|k) + \sum_{i=I_{uf}(p)+1}^{n_B} b_i u(k-i+p) - \sum_{i=1}^{I_{vf}(p)} a_i v^{t-1}(k-i+p|k) - \sum_{i=I_{vf}(p)+1}^{n_A} a_i v(k-i+p)   (3.282)

Using Eq. (3.211), the entries of the matrix of derivatives, H^t(k), are

\frac{\partial \hat{y}^{t-1}(k+p|k)}{\partial u^{t-1}(k+r|k)} = \frac{dg(v^{t-1}(k+p|k))}{dv^{t-1}(k+p|k)} \, h(p,r)   (3.283)

for all p = 1, \ldots, N, r = 0, \ldots, N_u-1.

Prediction Using MIMO Wiener Model I

For the MIMO Wiener model I shown in Fig. 2.2, using Eq. (3.212), the predicted trajectory is

\hat{y}_m^{t-1}(k+p|k) = g_m(v_m^{t-1}(k+p|k)) + d_m(k)   (3.284)

for m = 1, \ldots, n_y, p = 1, \ldots, N. From Eq. (3.213) we have

v_m^{t-1}(k+p|k) = \sum_{n=1}^{n_u} \left( \sum_{i=1}^{I_{uf}(p)} b_i^{m,n} u_n^{t-1}(k-i+p|k) + \sum_{i=I_{uf}(p)+1}^{n_B} b_i^{m,n} u_n(k-i+p) \right) - \sum_{i=1}^{I_{vf}(p)} a_i^m v_m^{t-1}(k-i+p|k) - \sum_{i=I_{vf}(p)+1}^{n_A} a_i^m v_m(k-i+p)   (3.285)

Using Eq. (3.222), the entries of the matrix of derivatives, H^t(k), are

\frac{\partial \hat{y}_m^{t-1}(k+p|k)}{\partial u_n^{t-1}(k+r|k)} = \frac{dg_m(v_m^{t-1}(k+p|k))}{dv_m^{t-1}(k+p|k)} \, h_{m,n}(p,r)   (3.286)
for all m = 1, \ldots, n_y, n = 1, \ldots, n_u, p = 1, \ldots, N, r = 0, \ldots, N_u-1.

Prediction Using MIMO Wiener Model II

For the MIMO Wiener model II shown in Fig. 2.3, using Eq. (3.223), the predicted trajectory is

\hat{y}_m^{t-1}(k+p|k) = g_m(v_1^{t-1}(k+p|k), \ldots, v_{n_v}^{t-1}(k+p|k)) + d_m(k)   (3.287)

for m = 1, \ldots, n_y, p = 1, \ldots, N. The signals v_m^{t-1}(k+p|k) are calculated from Eq. (3.285) for p = 1, \ldots, N, but now m = 1, \ldots, n_v. Using Eq. (3.228), the entries of the matrix of derivatives, H^t(k), are

\frac{\partial \hat{y}_m^{t-1}(k+p|k)}{\partial u_n^{t-1}(k+r|k)} = \sum_{s=1}^{n_v} \frac{dg_m(v_1^{t-1}(k+p|k), \ldots, v_{n_v}^{t-1}(k+p|k))}{dv_s^{t-1}(k+p|k)} \, h_{s,n}(p,r)   (3.288)

for all m = 1, \ldots, n_y, n = 1, \ldots, n_u, p = 1, \ldots, N, r = 0, \ldots, N_u-1.
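Equation (3.286) (and its SISO counterpart (3.283)) says that every entry of H^t(k) is a linear step-response coefficient scaled by the local slope of the static nonlinearity at the predicted auxiliary signal. A minimal numpy sketch for the SISO case; the unit step-response coefficients and the toy static block are illustrative assumptions:

```python
import numpy as np

def fill_H(v_pred, g_prime, h, N, Nu):
    """Entries of H^t(k) per Eq. (3.283): slope of the static block at the
    linearisation trajectory times the step-response coefficient h(p, r).
    v_pred[p-1] stores v^{t-1}(k+p|k); h is a callable h(p, r)."""
    H = np.zeros((N, Nu))
    for p in range(1, N + 1):
        slope = g_prime(v_pred[p - 1])      # dg(v)/dv at v^{t-1}(k+p|k)
        for r in range(Nu):
            H[p - 1, r] = slope * h(p, r)
    return H

# Toy data: static block y = -exp(-v) + 1, so dg/dv = exp(-v);
# the unit step-response coefficients are an assumption for illustration.
g_prime = lambda v: np.exp(-v)
h = lambda p, r: 1.0 if p > r else 0.0
H = fill_H(np.zeros(4), g_prime, h, N=4, Nu=2)
print(H)
```

At v = 0 the slope is 1, so the matrix reduces to the pure step-response pattern; away from v = 0 the whole row is rescaled, which is exactly why H^t(k) must be rebuilt at every internal iteration.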
Prediction Using MIMO Wiener Model III

For the MIMO Wiener model III shown in Fig. 2.4, using Eq. (3.229), the predicted trajectory is

\hat{y}_m^{t-1}(k+p|k) = g_m\!\left( \sum_{n=1}^{n_u} v_{m,n}^{t-1}(k+p|k) \right) + d_m(k)   (3.289)

for m = 1, \ldots, n_y, p = 1, \ldots, N. From Eq. (3.232) we have

v_{m,n}^{t-1}(k+p|k) = \sum_{i=1}^{I_{uf}(m,n,p)} b_i^{m,n} u_n^{t-1}(k-i+p|k) + \sum_{i=I_{uf}(m,n,p)+1}^{n_B^{m,n}} b_i^{m,n} u_n(k-i+p) - \sum_{i=1}^{I_{vf}(m,n,p)} a_i^{m,n} v_{m,n}^{t-1}(k-i+p|k) - \sum_{i=I_{vf}(m,n,p)+1}^{n_A^{m,n}} a_i^{m,n} v_{m,n}(k-i+p)   (3.290)

for all m = 1, \ldots, n_y, n = 1, \ldots, n_u, p = 1, \ldots, N. The entries of the matrix of derivatives, H^t(k), are calculated from the formula obtained for the Wiener model I, i.e. from Eq. (3.286). From Eq. (3.231), we have

v_m^{t-1}(k+p|k) = \sum_{n=1}^{n_u} v_{m,n}^{t-1}(k+p|k)   (3.291)

for m = 1, \ldots, n_y, p = 1, \ldots, N. From Eq. (3.222), we have

\frac{\partial \hat{y}_m^{t-1}(k+p|k)}{\partial u_n^{t-1}(k+r|k)} = \frac{dg_m(v_m^{t-1}(k+p|k))}{dv_m^{t-1}(k+p|k)} \, h_{m,n}(p,r)   (3.292)
for all m = 1, \ldots, n_y, n = 1, \ldots, n_u, p = 1, \ldots, N, r = 0, \ldots, N_u-1.

Prediction Using MIMO Wiener Model IV

For the MIMO Wiener model IV shown in Fig. 2.5, using Eq. (3.235), the predicted trajectory is

\hat{y}_m^{t-1}(k+p|k) = g_m\!\left( \sum_{n=1}^{n_u} v_{1,n}^{t-1}(k+p|k), \ldots, \sum_{n=1}^{n_u} v_{n_v,n}^{t-1}(k+p|k) \right) + d_m(k)   (3.293)

for m = 1, \ldots, n_y, p = 1, \ldots, N. From Eq. (3.236)

v_s^{t-1}(k+p|k) = \sum_{n=1}^{n_u} v_{s,n}^{t-1}(k+p|k)   (3.294)
for p = 1, \ldots, N, s = 1, \ldots, n_v. The predicted trajectory may also be expressed by Eq. (3.287) used in the case of the MIMO Wiener model II. The entries of the matrix of derivatives, H^t(k), are calculated in the same way as in the case of the MIMO Wiener model II, i.e. from Eq. (3.288).

Prediction Using MIMO Wiener Model V

For the MIMO Wiener model V shown in Fig. 2.6, using Eq. (3.239), the predicted trajectory is

\hat{y}_m^{t-1}(k+p|k) = \sum_{n=1}^{n_u} g_{m,n}(v_{m,n}^{t-1}(k+p|k)) + d_m(k)   (3.295)

for m = 1, \ldots, n_y, p = 1, \ldots, N. The signals v_{m,n}^{t-1}(k+p|k) are calculated from Eq. (3.290), in the same way as in the case of the MIMO model III, for all m = 1, \ldots, n_y, n = 1, \ldots, n_u, p = 1, \ldots, N. The entries of the matrix of derivatives, H^t(k), are calculated using Eq. (3.242), which gives

\frac{\partial \hat{y}_m^{t-1}(k+p|k)}{\partial u_n^{t-1}(k+r|k)} = \frac{dg_{m,n}(v_{m,n}^{t-1}(k+p|k))}{dv_{m,n}^{t-1}(k+p|k)} \, h_{m,n}(p,r)   (3.296)
for all m = 1, \ldots, n_y, n = 1, \ldots, n_u, p = 1, \ldots, N, r = 0, \ldots, N_u-1. We have to remember that in the case of the MIMO Wiener models III, IV and V, the step-response coefficients of the linear part of the model, necessary to calculate the coefficients h_{m,n}(p,r), must be computed from the transfer functions (2.30), by means of Eq. (3.145).

Optimisation

At first, let us consider hard constraints imposed on the controlled variables. Using the prediction rule (3.278), from the general MPC optimisation problem (1.35), we obtain the MPC-NPLPT quadratic optimisation problem

\min_{\Delta u^t(k)} J(k) = \| y^{sp}(k) - H^t(k) J \Delta u^t(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \|^2_M + \| \Delta u^t(k) \|^2_{\Lambda}

subject to

u^{min} \le J \Delta u^t(k) + u(k-1) \le u^{max}
-\Delta u^{max} \le \Delta u^t(k) \le \Delta u^{max}
y^{min} \le H^t(k) J \Delta u^t(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \le y^{max}   (3.297)

The above problem is solved in MATLAB by means of the quadprog function. The decision variable vector is x(k) = \Delta u^t(k). The linear inequality constraints are defined by
A(k) = \begin{bmatrix} -J \\ J \\ -H^t(k) J \\ H^t(k) J \end{bmatrix}, \quad
B(k) = \begin{bmatrix} -u^{min} + u(k-1) \\ u^{max} - u(k-1) \\ -y^{min} + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \\ y^{max} - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \end{bmatrix}   (3.298)

and bounds defined by Eq. (3.39). Differentiating the cost-function, J(k), with respect to the decision variables, \Delta u^t(k), we have

\frac{dJ(k)}{d\Delta u^t(k)} = -2 J^T (H^t(k))^T M ( y^{sp}(k) - H^t(k) J \Delta u^t(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) ) + 2 \Lambda \Delta u^t(k)
= 2 ( J^T (H^t(k))^T M H^t(k) J + \Lambda ) \Delta u^t(k) - 2 J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.299)

The second-order derivative matrix of the minimised cost-function is

H^{QP}(k) = \frac{d^2 J(k)}{d(\Delta u^t(k))^2} = 2 ( J^T (H^t(k))^T M H^t(k) J + \Lambda )   (3.300)

and

f^{QP}(k) = -2 J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.301)
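Dropping the inequality constraints for a moment (quadprog enforces them in the book's MATLAB implementation), the unconstrained minimiser follows directly from (3.300)–(3.301) as Δu^t(k) = −(H^QP(k))⁻¹ f^QP(k). A numpy sketch with random stand-in data; all names, dimensions and values are assumptions for illustration:

```python
import numpy as np

def nplpt_step(Ht, J, M, Lam, ysp, yhat_prev, u_prev, u_traj_prev):
    """Build H_QP and f_QP per Eqs. (3.300)-(3.301) and return the
    unconstrained minimiser of the quadratic cost."""
    G = J.T @ Ht.T @ M
    H_QP = 2.0 * (G @ Ht @ J + Lam)
    f_QP = -2.0 * G @ (ysp - yhat_prev - Ht @ (u_prev - u_traj_prev))
    return np.linalg.solve(H_QP, -f_QP)

rng = np.random.default_rng(1)
Ht = rng.standard_normal((4, 2))      # stand-in H^t(k): n_y*N = 4, n_u*Nu = 2
J = np.tril(np.ones((2, 2)))          # increments-to-values map (Eq. (1.26))
M, Lam = np.eye(4), 0.1 * np.eye(2)   # weighting matrices M and Lambda
ysp = np.ones(4)
du = nplpt_step(Ht, J, M, Lam, ysp, np.zeros(4), np.zeros(2), np.zeros(2))
# Sanity check: the gradient (3.299) vanishes at the returned increments
grad = 2 * (J.T @ Ht.T @ M @ Ht @ J + Lam) @ du - 2 * J.T @ Ht.T @ M @ ysp
print(np.allclose(grad, 0))
```

Because Λ adds a positive-definite term, H^QP(k) is invertible even when H^t(k)J is rank-deficient.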
When the soft output constraints are considered, using the prediction equation (3.278), from the general MPC optimisation problem (1.39) with soft output constraints, we obtain the following quadratic optimisation MPC-NPLPT problem

\min_{\Delta u^t(k), \varepsilon^{min}(k), \varepsilon^{max}(k)} J(k) = \| y^{sp}(k) - H^t(k) J \Delta u^t(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \|^2_M + \| \Delta u^t(k) \|^2_{\Lambda} + \rho^{min} \| \varepsilon^{min}(k) \|^2 + \rho^{max} \| \varepsilon^{max}(k) \|^2

subject to

u^{min} \le J \Delta u^t(k) + u(k-1) \le u^{max}
\Delta u^{min} \le \Delta u^t(k) \le \Delta u^{max}
y^{min} - \varepsilon^{min}(k) \le H^t(k) J \Delta u^t(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \le y^{max} + \varepsilon^{max}(k)
\varepsilon^{min}(k) \ge 0_{n_y \times 1}, \quad \varepsilon^{max}(k) \ge 0_{n_y \times 1}   (3.302)

We easily notice that the obtained MPC-NPLPT optimisation tasks (3.297) and (3.302) are very similar to the MPC-NPLT ones, defined by Eqs. (3.244) and (3.249), respectively. It means that all formulae presented in Sect. 3.6 for the MPC-NPLT
algorithm may also be used for implementation of the MPC-NPLPT scheme. The only difference is the fact that the vectors u^{traj}(k) and \hat{y}^{traj}(k) must be replaced by u^{t-1}(k) and \hat{y}^{t-1}(k), respectively, and the matrix H(k) must be replaced by H^t(k). The MPC-NPLPT optimisation problem with soft constraints (3.302) is solved in MATLAB by means of the quadprog function. The vector of decision variables is

x(k) = \begin{bmatrix} \Delta u^t(k) \\ \varepsilon^{min}(k) \\ \varepsilon^{max}(k) \end{bmatrix}   (3.303)

Let us note that the decision vector (3.40) is very similar to that defined by Eq. (3.303); we only have to take into account the additional index t that indicates the current internal iteration. Using Eq. (3.44), we have

\Delta u^t(k) = N_1 x(k)   (3.304)

and the relations (3.45)–(3.46), (3.47)–(3.48) and (3.49)–(3.50) remain true. Hence, we obtain the following MPC-NPLPT optimisation task

\min_{x(k)} J(k) = \| y^{sp}(k) - H^t(k) J N_1 x(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \|^2_M + \| N_1 x(k) \|^2_{\Lambda} + \rho^{min} \| N_2 x(k) \|^2 + \rho^{max} \| N_3 x(k) \|^2

subject to

u^{min} \le J N_1 x(k) + u(k-1) \le u^{max}
\Delta u^{min} \le N_1 x(k) \le \Delta u^{max}
y^{min} - I_{N \times 1} \otimes N_2 x(k) \le H^t(k) J N_1 x(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \le y^{max} + I_{N \times 1} \otimes N_3 x(k)
N_2 x(k) \ge 0_{n_y \times 1}, \quad N_3 x(k) \ge 0_{n_y \times 1}   (3.305)

When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPLPT optimisation task (3.305), the linear inequality constraints are defined by

A(k) = \begin{bmatrix} -J N_1 \\ J N_1 \\ -N_1 \\ N_1 \\ -H^t(k) J N_1 - I_{N \times 1} \otimes N_2 \\ H^t(k) J N_1 - I_{N \times 1} \otimes N_3 \\ -N_2 \\ -N_3 \end{bmatrix}   (3.306)
and
B(k) = \begin{bmatrix} -u^{min} + u(k-1) \\ u^{max} - u(k-1) \\ -\Delta u^{min} \\ \Delta u^{max} \\ -y^{min} + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \\ y^{max} - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix}   (3.307)

Differentiating the cost-function, J(k), with respect to the decision variables, x(k), we have

\frac{dJ(k)}{dx(k)} = -2 N_1^T J^T (H^t(k))^T M ( y^{sp}(k) - H^t(k) J N_1 x(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) ) + 2 N_1^T \Lambda N_1 x(k) + 2 \rho^{min} N_2^T N_2 x(k) + 2 \rho^{max} N_3^T N_3 x(k)
= 2 ( N_1^T J^T (H^t(k))^T M H^t(k) J N_1 + N_1^T \Lambda N_1 + \rho^{min} N_2^T N_2 + \rho^{max} N_3^T N_3 ) x(k) - 2 N_1^T J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.308)

The second-order derivative matrix of the minimised cost-function is

H^{QP}(k) = \frac{d^2 J(k)}{d(x(k))^2} = 2 ( N_1^T J^T (H^t(k))^T M H^t(k) J N_1 + N_1^T \Lambda N_1 + \rho^{min} N_2^T N_2 + \rho^{max} N_3^T N_3 )   (3.309)

and

f^{QP}(k) = -2 N_1^T J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.310)
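The internal-iteration logic of this section can be sketched end to end. Here `predict`, `linearise` and `solve_qp` are assumed callbacks standing in for the nonlinear prediction (3.272), the derivative matrix (3.275) and the quadprog call; the toy example uses an exactly linear "process", so the second iteration reproduces the first and the loop stops via the termination test (3.280). This is an illustration of the control flow only, not the book's implementation:

```python
import numpy as np

def mpc_nplpt(u_init, predict, linearise, solve_qp, J, u_last,
              t_max=10, delta_u=1e-6):
    """MPC-NPLPT internal iterations: relinearise along the previous
    iterate's input trajectory until the increments stop changing."""
    u_traj = u_init                    # u^0(k): Eq. (3.190) or (3.191)
    du = np.zeros(J.shape[1])
    for t in range(1, t_max + 1):
        y_prev = predict(u_traj)       # yhat^{t-1}(k), nonlinear prediction
        Ht = linearise(u_traj)         # H^t(k), Eq. (3.275)
        du_new = solve_qp(Ht, y_prev, u_traj)
        converged = np.sum((du_new - du) ** 2) < delta_u   # cf. Eq. (3.280)
        du = du_new
        if converged:
            break
        u_traj = J @ du + u_last       # next linearisation trajectory
    return du

# Toy linear "process" with N = 2, Nu = 1 (assumed data, illustration only)
Ht0 = np.array([[1.0], [1.0]])
J = np.array([[1.0]])
u_last = np.zeros(1)
solve = lambda Ht, y_prev, u_traj: np.linalg.solve(
    Ht.T @ Ht + 0.1 * np.eye(1),
    Ht.T @ (np.ones(2) - y_prev + Ht @ (u_traj - u_last)))
du = mpc_nplpt(np.zeros(1), lambda u: Ht0 @ u, lambda u: Ht0, solve, J, u_last)
print(du)
```

For a genuinely nonlinear Wiener model, each pass would rebuild H^t(k) via Eqs. (3.283)–(3.296) and call quadprog with the constraint matrices (3.306)–(3.307).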
3.9 MPC-NPLPT-P Algorithm

We will consider parameterisation using Laguerre functions in order to reduce the number of decision variables of the MPC-NPLPT algorithm. The resulting MPC-NPLPT algorithm with Parameterisation (MPC-NPLPT-P) is mentioned in [13], but only for the SISO case. In this Chapter, the algorithm is discussed for the general MIMO case. All Wiener structures discussed in Chap. 2 may be used.
Using the parameterisation defined by Eq. (1.56), for the internal iteration t, we obtain

\Delta u^t(k) = L c^t(k)   (3.311)

Hence, the MPC-NPLPT-P prediction equation (3.278) becomes

\hat{y}^t(k) = H^t(k) J L c^t(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k))   (3.312)

The symbol c^t(k) denotes the vector of decision variables for the current internal iteration t and the sampling instant k. In the SISO case, using Eq. (1.55), it has n_L entries and the structure

c^t(k) = \begin{bmatrix} c_1^t(k) & \ldots & c_{n_L}^t(k) \end{bmatrix}^T   (3.313)

In the MIMO case, using Eq. (1.66), the vector of decision variables is of length n_L^1 + \cdots + n_L^{n_u} and has the structure

c^t(k) = \begin{bmatrix} c_1^t(k) \\ \vdots \\ c_{n_u}^t(k) \end{bmatrix}   (3.314)

Using Eq. (1.63), the vectors of length n_L^1, \ldots, n_L^{n_u}, respectively, are

c_1^t(k) = \begin{bmatrix} c_{1,1}^t(k) \\ \vdots \\ c_{1,n_L^1}^t(k) \end{bmatrix}, \quad \ldots, \quad c_{n_u}^t(k) = \begin{bmatrix} c_{n_u,1}^t(k) \\ \vdots \\ c_{n_u,n_L^{n_u}}^t(k) \end{bmatrix}   (3.315)
At first, let us consider hard constraints imposed on the controlled variables. Using the prediction rule (3.312), from the general MPC optimisation problem (1.35), we obtain the MPC-NPLPT-P quadratic optimisation problem

\min_{c^t(k)} J(k) = \| y^{sp}(k) - H^t(k) J L c^t(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \|^2_M + \| L c^t(k) \|^2_{\Lambda}

subject to

u^{min} \le J L c^t(k) + u(k-1) \le u^{max}
\Delta u^{min} \le L c^t(k) \le \Delta u^{max}
y^{min} \le H^t(k) J L c^t(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \le y^{max}   (3.316)

When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPLPT-P optimisation task (3.316), the decision vector is x(k) = c^t(k). The linear inequality constraints are defined by
A(k) = \begin{bmatrix} -J L \\ J L \\ -L \\ L \\ -H^t(k) J L \\ H^t(k) J L \end{bmatrix}, \quad
B(k) = \begin{bmatrix} -u^{min} + u(k-1) \\ u^{max} - u(k-1) \\ -\Delta u^{min} \\ \Delta u^{max} \\ -y^{min} + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \\ y^{max} - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \end{bmatrix}   (3.317)

Differentiating the cost-function, J(k), with respect to the decision variables, c^t(k), we have

\frac{dJ(k)}{dc^t(k)} = -2 L^T J^T (H^t(k))^T M ( y^{sp}(k) - H^t(k) J L c^t(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) ) + 2 L^T \Lambda L c^t(k)
= 2 ( L^T J^T (H^t(k))^T M H^t(k) J L + L^T \Lambda L ) c^t(k) - 2 L^T J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.318)

The second-order derivative matrix of the minimised cost-function is

H^{QP}(k) = \frac{d^2 J(k)}{d(c^t(k))^2} = 2 ( L^T J^T (H^t(k))^T M H^t(k) J L + L^T \Lambda L )   (3.319)

and

f^{QP}(k) = -2 L^T J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.320)
When soft output constraints are considered, using the prediction equation (3.312), from the general MPC optimisation problem (1.39) with soft output constraints, we obtain the following quadratic optimisation MPC-NPLPT-P problem

\min_{c^t(k), \varepsilon^{min}(k), \varepsilon^{max}(k)} J(k) = \| y^{sp}(k) - H^t(k) J L c^t(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \|^2_M + \| L c^t(k) \|^2_{\Lambda} + \rho^{min} \| \varepsilon^{min}(k) \|^2 + \rho^{max} \| \varepsilon^{max}(k) \|^2

subject to

u^{min} \le J L c^t(k) + u(k-1) \le u^{max}
\Delta u^{min} \le L c^t(k) \le \Delta u^{max}
y^{min} - \varepsilon^{min}(k) \le H^t(k) J L c^t(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \le y^{max} + \varepsilon^{max}(k)
\varepsilon^{min}(k) \ge 0_{n_y \times 1}, \quad \varepsilon^{max}(k) \ge 0_{n_y \times 1}   (3.321)
We easily notice that the obtained MPC-NPLPT-P optimisation tasks (3.316) and (3.321) are very similar to the MPC-NPLT-P ones (3.257) and (3.262), respectively. It means that all formulae presented in Sect. 3.7 for the MPC-NPLT-P algorithm may also be used for implementation of the MPC-NPLPT-P scheme. The only difference is the fact that the vectors u^{traj}(k) and \hat{y}^{traj}(k) must be replaced by u^{t-1}(k) and \hat{y}^{t-1}(k), respectively, and the matrix H(k) must be replaced by H^t(k). The vector of decision variables is

\tilde{x}(k) = \begin{bmatrix} c^t(k) \\ \varepsilon^{min}(k) \\ \varepsilon^{max}(k) \end{bmatrix}   (3.322)

Let us note that the decision vector (3.322) is very similar to that defined by Eq. (3.57); we only have to take into account the additional index t that indicates the current internal iteration. Using Eq. (3.61), we have

c^t(k) = \tilde{N}_1 \tilde{x}(k)   (3.323)

and the relations (3.62)–(3.63), (3.64)–(3.65) remain true. Hence, we obtain the following MPC-NPLPT-P optimisation task

\min_{\tilde{x}(k)} J(k) = \| y^{sp}(k) - H^t(k) J L \tilde{N}_1 \tilde{x}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \|^2_M + \| L \tilde{N}_1 \tilde{x}(k) \|^2_{\Lambda} + \rho^{min} \| \tilde{N}_2 \tilde{x}(k) \|^2 + \rho^{max} \| \tilde{N}_3 \tilde{x}(k) \|^2

subject to

u^{min} \le J L \tilde{N}_1 \tilde{x}(k) + u(k-1) \le u^{max}
\Delta u^{min} \le L \tilde{N}_1 \tilde{x}(k) \le \Delta u^{max}
y^{min} - I_{N \times 1} \otimes \tilde{N}_2 \tilde{x}(k) \le H^t(k) J L \tilde{N}_1 \tilde{x}(k) + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \le y^{max} + I_{N \times 1} \otimes \tilde{N}_3 \tilde{x}(k)
\tilde{N}_2 \tilde{x}(k) \ge 0_{n_y \times 1}, \quad \tilde{N}_3 \tilde{x}(k) \ge 0_{n_y \times 1}   (3.324)

When compared with the general quadratic optimisation problem (3.165) solved by the quadprog function, in our MPC-NPLPT-P optimisation task (3.324), the linear inequality constraints are defined by
A(k) = \begin{bmatrix} -J L \tilde{N}_1 \\ J L \tilde{N}_1 \\ -L \tilde{N}_1 \\ L \tilde{N}_1 \\ -H^t(k) J L \tilde{N}_1 - I_{N \times 1} \otimes \tilde{N}_2 \\ H^t(k) J L \tilde{N}_1 - I_{N \times 1} \otimes \tilde{N}_3 \\ -\tilde{N}_2 \\ -\tilde{N}_3 \end{bmatrix}   (3.325)

and

B(k) = \begin{bmatrix} -u^{min} + u(k-1) \\ u^{max} - u(k-1) \\ -\Delta u^{min} \\ \Delta u^{max} \\ -y^{min} + \hat{y}^{t-1}(k) + H^t(k)(u(k-1) - u^{t-1}(k)) \\ y^{max} - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) \\ 0_{n_y \times 1} \\ 0_{n_y \times 1} \end{bmatrix}   (3.326)

Differentiating the cost-function, J(k), with respect to the decision variables, \tilde{x}(k), we have

\frac{dJ(k)}{d\tilde{x}(k)} = -2 \tilde{N}_1^T L^T J^T (H^t(k))^T M ( y^{sp}(k) - H^t(k) J L \tilde{N}_1 \tilde{x}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) ) + 2 \tilde{N}_1^T L^T \Lambda L \tilde{N}_1 \tilde{x}(k) + 2 \rho^{min} \tilde{N}_2^T \tilde{N}_2 \tilde{x}(k) + 2 \rho^{max} \tilde{N}_3^T \tilde{N}_3 \tilde{x}(k)
= 2 ( \tilde{N}_1^T L^T J^T (H^t(k))^T M H^t(k) J L \tilde{N}_1 + \tilde{N}_1^T L^T \Lambda L \tilde{N}_1 + \rho^{min} \tilde{N}_2^T \tilde{N}_2 + \rho^{max} \tilde{N}_3^T \tilde{N}_3 ) \tilde{x}(k) - 2 \tilde{N}_1^T L^T J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.327)

The second-order derivative matrix of the minimised cost-function is

H^{QP}(k) = \frac{d^2 J(k)}{d(\tilde{x}(k))^2} = 2 ( \tilde{N}_1^T L^T J^T (H^t(k))^T M H^t(k) J L \tilde{N}_1 + \tilde{N}_1^T L^T \Lambda L \tilde{N}_1 + \rho^{min} \tilde{N}_2^T \tilde{N}_2 + \rho^{max} \tilde{N}_3^T \tilde{N}_3 )   (3.328)

and

f^{QP}(k) = -2 \tilde{N}_1^T L^T J^T (H^t(k))^T M ( y^{sp}(k) - \hat{y}^{t-1}(k) - H^t(k)(u(k-1) - u^{t-1}(k)) )   (3.329)
References
1. Al-Duwaish, H., Karim, M., Chandrasekar, V.: Use of multilayer feedforward neural networks in identification and control of Wiener model. IEE Proc. Control Theory Appl. 143, 255–258 (1996)
2. Cervantes, A.L., Agamennoni, O.E., Figueroa, J.L.: A nonlinear model predictive control system based on Wiener piecewise linear models. J. Process Control 13, 655–666 (2003)
3. Deshpande, S., Kalpana, N., Bedi, P.S., Patwardhan, S.: Peak seeking control using OBF-Wiener model based nonlinear IMC scheme. In: Proceedings of the International Conference on Control, Automation and Systems 2007, pp. 1561–1566. Seoul, Korea (2007)
4. Janczak, A.: Identification of Nonlinear Systems Using Neural Networks and Polynomial Models: A Block-Oriented Approach. Lecture Notes in Control and Information Sciences, vol. 310. Springer, Berlin (2004)
5. Jia, L., Li, Y., Li, F.: Correlation analysis algorithm-based multiple-input single-output Wiener model with output noise. Complexity 9650254 (2019)
6. Kim, K.K.K., Ríos-Patrón, E., Braatz, R.D.: Robust nonlinear internal model control of stable Wiener systems. J. Process Control 22, 1146–1477 (2012)
7. Ławryńczuk, M.: Practical nonlinear predictive control algorithms for neural Wiener models. J. Process Control 23, 696–714 (2013)
8. Ławryńczuk, M.: Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach. Studies in Systems, Decision and Control, vol. 3. Springer, Cham (2014)
9. Ławryńczuk, M.: Nonlinear predictive control for Hammerstein–Wiener systems. ISA Trans. 55, 49–62 (2015)
10. Ławryńczuk, M.: Modelling and predictive control of a neutralisation reactor using sparse support vector machine Wiener models. Neurocomputing 205, 311–328 (2016)
11. Ławryńczuk, M.: Nonlinear predictive control of dynamic systems represented by Wiener–Hammerstein models. Nonlinear Dyn. 86, 1193–1214 (2016)
12. Ławryńczuk, M.: Constrained computationally efficient nonlinear predictive control of solid oxide fuel cell: tuning, feasibility and performance. ISA Trans. 99, 270–289 (2020)
13. Ławryńczuk, M.: Nonlinear model predictive control for processes with complex dynamics: a parameterisation approach using Laguerre functions. Int. J. Appl. Math. Comput. Sci. 30, 35–46 (2020)
14. Ławryńczuk, M., Söffker, D.: Wiener structures for modeling and nonlinear predictive control of proton exchange membrane fuel cell. Nonlinear Dyn. 95, 1639–1660 (2019)
15. Maciejowski, J.: Predictive Control with Constraints. Prentice Hall, Harlow (2002)
16. Norquay, S.J., Palazoğlu, A., Romagnoli, J.A.: Model predictive control based on Wiener models. Chem. Eng. Sci. 53, 75–84 (1998)
17. Norquay, S.J., Palazoğlu, A., Romagnoli, J.A.: Application of Wiener model predictive control (WMPC) to an industrial C2 splitter. J. Process Control 9, 461–473 (1999)
18. Shafiee, G., Arefi, M.M., Jahed-Motlagh, M.R., Jalali, A.A.: Nonlinear predictive control of a polymerization reactor based on piecewise linear Wiener model. Chem. Eng. J. 143, 282–292 (2008)
19. Szabo, Z., Gaspar, P., Bokor, J.: Reference tracking for Wiener systems using dynamic inversion. In: Proceedings of the 13th Mediterranean Conference on Control and Automation (MED 2005), pp. 1190–1194. Limassol, Cyprus (2005)
20. Tatjewski, P.: Advanced Control of Industrial Processes: Structures and Algorithms. Springer, London (2007)
Chapter 4
MPC of Input-Output Benchmark Wiener Processes
Abstract This Chapter thoroughly discusses implementation details and simulation results of various MPC algorithms introduced in the previous Chapter applied to input-output benchmark processes. Two SISO processes are considered; the second one has complex dynamics, and Laguerre parameterisation turns out to be beneficial. Next, three MIMO benchmarks are considered: two with two inputs and two outputs (without and with cross-couplings) and one with as many as ten inputs and two outputs. Implementation details of all algorithms are given briefly. All algorithms are compared in terms of control quality and computational time.
4.1 Simulation Set-Up and Comparison Methodology

All simulations discussed in this book are carried out in MATLAB 2020a. The function fmincon is used for nonlinear optimisation, whereas the function quadprog is used for quadratic programming. In both cases, the default parameters are used, including stopping criteria. All MPC algorithms are compared using two performance indices. The first of them,

E_2 = \sum_{m=1}^{n_y} \sum_{k=1}^{k_{max}} ( y_m^{sp}(k) - y_m(k) )^2   (4.1)

measures the sum of squared differences between the required set-points, y_m^{sp}(k), and the actual process outputs, y_m(k), for the whole simulation horizon (k = 1, \ldots, k_{max}) and for all outputs (m = 1, \ldots, n_y). The second one,

E_{MPC-NO} = \sum_{m=1}^{n_y} \sum_{k=1}^{k_{max}} ( y_m^{MPC-NO}(k) - y_m(k) )^2   (4.2)

measures the sum of squared differences between the process outputs controlled by the “ideal” MPC-NO algorithm, y_m^{MPC-NO}(k), and the process outputs controlled by a compared MPC algorithm, y_m(k). Additionally, the relative calculation time of all MPC algorithms is given. Computation time is measured by means of the functions tic and toc. As many as 5 repetitions of each simulation scenario are performed; the specified time is calculated as an average of all experiments.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_4
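The two indices translate directly into code; the trajectories below are toy data, purely for illustration:

```python
import numpy as np

def e2(y_sp, y):
    """E2 (Eq. (4.1)): summed squared set-point tracking error over all
    outputs m = 1..n_y and sampling instants k = 1..k_max."""
    return float(np.sum((np.asarray(y_sp) - np.asarray(y)) ** 2))

def e_mpc_no(y_no, y):
    """E_MPC-NO (Eq. (4.2)): summed squared deviation from the trajectory
    produced by the 'ideal' MPC-NO algorithm."""
    return float(np.sum((np.asarray(y_no) - np.asarray(y)) ** 2))

# Toy trajectories: rows are outputs, columns are sampling instants
y_sp = np.array([[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]])
y = np.array([[0.5, 0.9, 1.0], [0.1, 0.0, 0.0]])
print(e2(y_sp, y))        # 0.25 + 0.01 + 0.01, i.e. approximately 0.27
```

E_MPC-NO measures closeness to the benchmark controller rather than to the set-point, so it is zero for MPC-NO itself by construction.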
4.2 The SISO Process

4.2.1 Description of the SISO Process

The first considered process is a SISO Wiener system. The linear part of the process (Eq. (2.1)) is of the second order of dynamics (n_A = n_B = 2); the coefficients of the model (2.2)–(2.3) are

a_1 = -1.4138, \quad a_2 = 6.0650 \times 10^{-1}, \quad b_1 = 1.0440 \times 10^{-1}, \quad b_2 = 8.8300 \times 10^{-2}   (4.3)

The nonlinear static block (Eq. (2.5)) is

y(k) = g(v(k)) = -\exp(-v(k)) + 1   (4.4)
The steady-state characteristic y(u) of the whole Wiener system is depicted in Fig. 4.1.
Fig. 4.1 The SISO process: the steady-state characteristic y(u)
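The steady-state characteristic in Fig. 4.1 follows directly from the model: the gain of the linear part (4.3) is (b1 + b2)/(1 + a1 + a2) = 1, so y(u) = 1 − exp(−u) in the steady state. A minimal simulation sketch of this benchmark (zero initial conditions assumed; illustration only, not the book's code):

```python
import math

# Linear dynamic part, Eq. (4.3)
A1, A2 = -1.4138, 0.60650
B1, B2 = 0.10440, 0.088300

def simulate_wiener(u_seq):
    """Simulate the SISO Wiener benchmark: second-order linear dynamics
    followed by the static nonlinearity y = 1 - exp(-v), Eq. (4.4)."""
    v1 = v2 = 0.0   # v(k-1), v(k-2)
    u1 = u2 = 0.0   # u(k-1), u(k-2)
    y_seq = []
    for uk in u_seq:
        vk = B1 * u1 + B2 * u2 - A1 * v1 - A2 * v2
        y_seq.append(1.0 - math.exp(-vk))
        v1, v2 = vk, v1
        u1, u2 = uk, u1
    return y_seq
```

For a unit input step, v settles at 1 and y at 1 − e⁻¹ ≈ 0.632, consistent with the unit gain of the linear block.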
4.2.2 Implementation of MPC Algorithms for the SISO Process

The following MPC algorithms are compared:
1. The classical LMPC algorithm, in which a parameter-constant linear model is used (three example models, obtained for different operating points, are considered). The classical Generalized Predictive Control (GPC) algorithm with the DMC disturbance model is used as the LMPC algorithm [15].
2. The classical MPC-inv algorithm (Sect. 3.1). In this approach, the inverse model of the static nonlinear part of the model is used to cancel the nonlinearity of the process.
3. Two MPC algorithms with on-line simplified model linearisation and quadratic optimisation: MPC-SSL and MPC-NPSL (Sect. 3.4). The first one uses a linear approximation of the model, obtained on-line, for free trajectory calculation; in the second one, the full nonlinear Wiener model is used for this purpose.
4. Two MPC algorithms with on-line trajectory linearisation performed once at each sampling instant: MPC-NPLT1 and MPC-NPLT2 (Sect. 3.6). In the first of them, linearisation is carried out along the trajectory of the future sequence of the manipulated variable defined by the value applied to the process at the previous sampling instant (u(k − 1)), as defined by Eq. (3.190). In the second one, the trajectory used for linearisation is defined by the last (Nu − 1) elements of the optimal input trajectory calculated at the previous sampling instant, as defined by Eq. (3.191).
5. The MPC-NPLPT algorithm, in which at each sampling instant a few repetitions of trajectory linearisation and optimisation may be necessary, in particular when the process is not close to the required set-point (Sect. 3.8).
6. The best possible MPC-NO algorithm, in which the full Wiener model is used for prediction without any simplifications (Sect. 3.2).

The LMPC, MPC-inv, MPC-SSL, MPC-NPSL, MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms all use quadratic optimisation.
The MPC-NO algorithm needs on-line nonlinear optimisation at each sampling instant. The LMPC algorithm uses a linear model for prediction; all other MPC algorithms use the same Wiener model, although in different ways. Next, we briefly give implementation details of all considered algorithms. In general, all universal equations presented in Chap. 3 are used; here, we only describe the specific relations that depend on the static part of the model used.

The parameter-constant linear models used for prediction in the LMPC scheme are obtained for three different operating points: model 1 for y = −0.5, model 2 for y = 0 and model 3 for y = 1. It means that the model actually used in LMPC is the linear dynamic part of the Wiener process multiplied by the gain of the nonlinear static block at the considered operating point. In general, from Eqs. (3.70) and (4.4), we have

$$K = \frac{\mathrm{d}g(v)}{\mathrm{d}v} = \exp(-v) \qquad (4.5)$$
We obtain v = −4.0546 × 10⁻¹, v = 0 and v = 6.9315 × 10⁻¹ for the operating points 1, 2 and 3, respectively. Hence, the following gains of the nonlinear static block are calculated off-line: K = 1.5, K = 1 and K = 0.5. The gain of the nonlinear static block is calculated in a similar way in the MPC-SSL and MPC-NPSL algorithms, but the calculations are performed successively on-line, at each sampling instant. The time-varying gain is

$$K(k) = \frac{\mathrm{d}g(v(k))}{\mathrm{d}v(k)} = \exp(-v(k)) \qquad (4.6)$$
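The off-line gains quoted above can be checked directly from Eq. (4.6); the short sketch below reproduces K = 1.5, 1 and 0.5 at the three operating points (for illustration only):

```python
import math

def static_block_gain(v: float) -> float:
    """Time-varying gain of the static block, Eq. (4.6): K(k) = exp(-v(k))."""
    return math.exp(-v)

# Operating points 1, 2 and 3 from the text
gains = [static_block_gain(v) for v in (-0.40546, 0.0, 0.69315)]
```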
where v(k) is the model signal. In the MPC-NPLT1 and MPC-NPLT2 algorithms, the entries of the derivative matrix H(k) are computed from Eq. (3.211). For the nonlinear block (4.4), we have

$$\frac{\mathrm{d}g(v^{\text{traj}}(k+p|k))}{\mathrm{d}v^{\text{traj}}(k+p|k)} = \exp(-v^{\text{traj}}(k+p|k)) \qquad (4.7)$$

Similarly, for calculation of the matrix H_t(k) in the MPC-NPLPT scheme, we use Eq. (3.283). We obtain

$$\frac{\mathrm{d}g(v^{t-1}(k+p|k))}{\mathrm{d}v^{t-1}(k+p|k)} = \exp(-v^{t-1}(k+p|k)) \qquad (4.8)$$
A neural network of the MLP type [3, 10–12, 14], with one hidden layer containing five nonlinear units and a linear output layer, is used as the inverse model of the nonlinear static block in the MPC-inv algorithm. The nonlinear units use the tanh activation function. MLP neural networks are used throughout this book because they have the following essential advantages: excellent approximation ability [4], a simple structure and, in practice, a low number of parameters (weights) sufficient to obtain good models.
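For this particular static block, the inverse is also available in closed form, which makes it easy to sanity-check the trained MLP inverse model; the book nevertheless uses the neural approximation, as it generalises to static blocks without an analytic inverse. A sketch of the exact inverse (shown here only as a reference for checking, an addition to the text):

```python
import math

def g(v: float) -> float:
    """Static nonlinear block, Eq. (4.4): y = -exp(-v) + 1."""
    return 1.0 - math.exp(-v)

def g_inv(y: float) -> float:
    """Exact inverse of the static block, defined for y < 1."""
    return -math.log(1.0 - y)
```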
4.2.3 MPC of the SISO Process

The parameters of all compared MPC algorithms are the same: N = 10, Nu = 3, λ = 0.25; the constraints imposed on the manipulated variable are u_min = −2.5, u_max = 2.5. The horizons are sufficiently long and the coefficient λ is adequate for the nonlinear MPC algorithms. In this book, we do not consider tuning of MPC algorithms, i.e. selection of appropriate horizons and tuning coefficients. These issues are thoroughly discussed in classical textbooks [7, 15]; a review of possible approaches is presented in [2]. A practical example of finding the parameters of MPC for a solid oxide fuel cell is described in [6]; a similar study concerned with a boiler-turbine unit is reported in [5]. A simple but efficient procedure for selecting the coefficients of the MPC cost function is described in [8, 9]. An optimisation-based approach to tuning MPC is discussed in [13]; the effectiveness of four global optimisation methods (the Particle Swarm Optimisation (PSO) method, the firefly algorithm, the grey wolf optimiser and the Jaya algorithm) is compared there.

In the first part of simulations, the model is perfect (no modelling errors) and the process is not affected by any disturbances. Let us verify the performance of the simplest approach to MPC, i.e. the LMPC algorithm, in which a parameter-constant linear model is used for prediction (three example models are used, obtained for different operating points). Figure 4.2 depicts simulation results for a few set-point changes. Unfortunately, the process is nonlinear and the LMPC algorithm does not lead to good control quality. In particular, some oscillations appear after the third set-point change.

Fig. 4.2 The SISO process: simulation results of the linear LMPC algorithm based on different models, obtained for different operating points

Figure 4.3 compares the performance of two simple MPC algorithms with on-line model linearisation, i.e. the MPC-SSL and MPC-NPSL strategies, vs. the best possible MPC-NO scheme, in which the nonlinear Wiener model is used without any simplifications. Both algorithms with model linearisation work much better than the LMPC scheme; there are no oscillations when the set-point changes over a broad range. Using the nonlinear model for calculation of the free trajectory in the MPC-NPSL algorithm reduces the overshoot for negative set-point changes and increases the speed for positive ones, compared with the trajectories obtained in the MPC-SSL scheme. Figure 4.4 depicts the changes of the time-varying gain of the nonlinear static block calculated in the MPC-NPSL and MPC-SSL algorithms. The changes in the MPC-NPSL scheme are slower than those observed in the MPC-SSL one.
Fig. 4.3 The SISO process: simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms
Fig. 4.4 The SISO process: the time-varying gain of the nonlinear static block calculated in the MPC-NPSL and MPC-SSL algorithms
Figure 4.5 compares the performance of three MPC algorithms with on-line trajectory linearisation, i.e. the MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT strategies, vs. the reference MPC-NO scheme. The algorithms with one linearisation at each sampling instant, i.e. MPC-NPLT1 and MPC-NPLT2, are better than the MPC-NPSL and MPC-SSL schemes with model linearisation, but they still give slightly different trajectories than those possible in the MPC-NO algorithm. The MPC-NPLPT algorithm gives practically the same trajectory as the MPC-NO one. The additional parameters of the MPC-NPLPT algorithm are: δ = δu = δy = 0.1, N0 = 2, the maximal number of internal iterations is 5.

Fig. 4.5 The SISO process: simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms

Figure 4.6 shows simulation results of the MPC-NPLPT algorithm for two different values of the parameter δ = δu = δy: 1 and 10. It is clear that the greater these parameters, the worse the resulting control quality and the bigger the differences from the trajectories obtained in the best possible MPC-NO strategy.

Fig. 4.6 The SISO process: simulation results of the MPC-NO and MPC-NPLPT algorithms for different values of the parameter δ = δu = δy

Figure 4.7 presents the number of internal iterations of the MPC-NPLPT algorithm for different values of the parameter δ = δu = δy. When the process output is close to the required set-point (i.e. the process is close to the steady state), one internal iteration is sufficient. When a step change of the set-point occurs, the process is far from the steady state and more than one internal iteration is necessary. The actual number of internal iterations depends on the parameter δ = δu = δy: the lower that parameter, the more internal iterations are necessary and the longer they persist after each set-point step.

Fig. 4.7 The SISO process: the number of internal iterations (NII) of the MPC-NPLPT algorithm for different values of the parameter δ = δu = δy

Finally, we consider the classical MPC-inv approach to the control of dynamical systems described by Wiener models, based on the inverse model of the static nonlinear part. Simulation results are depicted in Fig. 4.8; the results obtained for the MPC-NPLPT algorithm are given for comparison. For the perfect model and no disturbances, the MPC-inv scheme gives very good results; for three set-point changes, they are even slightly faster than in the case of the MPC-NPLPT algorithm. In one case, the changes are slower. In two cases, overshoot is lower.

All considered MPC algorithms are compared in Table 4.1 in terms of the performance criteria E2 (Eq. (4.1)) and E_MPC-NO (Eq. (4.2)). The number of internal iterations necessary in the MPC-NPLPT scheme is specified (summed over the whole simulation horizon). Additionally, the scaled calculation time is given; the result for the most computationally demanding solution, i.e. the MPC-NO strategy,
Fig. 4.8 The SISO process: simulation results of the MPC-NPLPT and MPC-inv algorithms

Table 4.1 The SISO process: comparison of all considered MPC algorithms in terms of the control performance criteria (E2 and E_MPC-NO), the sum of internal iterations (SII) and the calculation time
corresponds to 100%. In general, the simple MPC-SSL and MPC-NPSL algorithms with on-line model linearisation give quite good results, but much better accuracy is possible when the algorithms with on-line trajectory linearisation are used. In particular, the MPC-NPLPT algorithm is able to give practically the same control accuracy as the reference MPC-NO strategy because the obtained index E_MPC-NO is very close to 0. Of course, the lower the parameters δ = δu = δy, the higher the number of internal iterations and the better the control accuracy. It is interesting to consider the computational time of the compared algorithms. The simple MPC-SSL and MPC-NPSL algorithms with on-line model linearisation need practically the same calculation time as the LMPC strategy (lower than 40% of that necessary in the MPC-NO scheme); the MPC algorithms with trajectory linearisation are more demanding. Of course, the lower the parameters δ = δu = δy, the longer the calculation time. It is important to stress that the MPC-NPLPT scheme for δ = δu = δy = 0.1, which gives practically the same trajectories as the MPC-NO one (Fig. 4.5), requires less than 50% of the calculation time necessary in the MPC-NO scheme.

It is interesting to study how the computational time is influenced by the lengths of the prediction and control horizons for all considered MPC algorithms. Table 4.2 presents the comparison in such a way that the calculation time for the horizons N = 10, Nu = 3 corresponds to 100%. The first important observation is that the control horizon has a major impact on the calculation time; the prediction horizon has a much lower influence. This is natural since the control horizon defines the number of decision variables in MPC optimisation. The second observation is that all MPC algorithms with on-line linearisation are significantly less computationally demanding than the MPC-NO one for different combinations of horizons.
The best results are obtained for long control horizons, e.g. Nu = 10; in such cases, the MPC-NO scheme requires a 2–3 times longer calculation time.

Table 4.2 The SISO process: comparison of all considered MPC algorithms in terms of the calculation time (%) for different prediction and control horizons

In the second part of simulations, we assume that the model is not perfect and that the process is affected by a disturbance. To introduce a process-model mismatch, the steady-state gain of the process is increased by 50%. Such a situation is typical in practice: the model used in MPC is identified once, but the properties of the process change over time. In particular, an increase in the process gain (and/or delay) may be dangerous since it is likely to result in oscillations or even instability. Additionally, from the sampling instant k = 10, an additive unmeasured step disturbance of value 1 is added to the process output.

The trajectories for the LMPC algorithm are shown in Fig. 4.9. The algorithm practically does not work due to modelling errors, the disturbance and the nonlinearity of the process. Simulation results of the simple MPC-SSL and MPC-NPSL algorithms with on-line model linearisation are depicted in Fig. 4.10; for reference, the trajectories obtained for the MPC-NO scheme are also given. In comparison with the perfect-model, disturbance-free case shown in Fig. 4.3, it is now clear that the simplest MPC-SSL algorithm results in unwanted oscillations for the third set-point step; the MPC-NPSL scheme, in which the nonlinear model is used for free trajectory calculation, gives better results. Figure 4.11 shows the changes of the time-varying gain of the nonlinear static block calculated in the MPC-NPSL and MPC-SSL algorithms. The changes in the gain at the third set-point change are responsible for the worse performance of the MPC-SSL strategy.

Figure 4.12 presents simulation results of the MPC algorithms with trajectory linearisation, i.e. MPC-NPLPT (for δ = δu = δy = 0.1), MPC-NPLT1 and MPC-NPLT2; the trajectories for the MPC-NO strategy are also given for reference. Compared with the perfect-model, disturbance-free case shown in Fig. 4.5, overshoot is now greater and there are some fast-decaying oscillations. On the other hand, the MPC-NPLPT algorithm is the best one: it gives practically the same trajectories as the MPC-NO one; the MPC-NPLT1 and MPC-NPLT2 schemes with one trajectory linearisation at each sampling instant are slightly worse but still very good. The influence of the parameter δ = δu = δy on the trajectories of the MPC-NPLPT algorithm is shown in Fig. 4.13.
Fig. 4.9 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
Fig. 4.10 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms
Fig. 4.11 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): the time-varying gain of the nonlinear static block calculated in the MPC-NPSL and MPC-SSL algorithms
Fig. 4.12 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms
It is an interesting question whether the classical MPC-inv control scheme, based on the inverse model of the nonlinear static part of the Wiener model, works when the unmeasured disturbance acts on the process and the model is not perfect. Let us recall that in the perfect-model, disturbance-free case shown in Fig. 4.8, it gives very good results; for some set-point changes, it is even slightly faster than the MPC-NPLPT scheme. For the imperfect-model, disturbance case, the comparison between the MPC-NPLPT and MPC-inv algorithms is given in Fig. 4.14. Unfortunately, model errors and the disturbance lead to very bad control quality when the classical MPC-inv algorithm is used: it gives strong oscillations. To eliminate the oscillations, the parameter λ has to be increased to 50 (the lowest value for which the oscillations disappear). Unfortunately, in such a case, the whole algorithm is very slow; all process input and output trajectories are much slower than those obtained in the MPC-NPLPT scheme.

Fig. 4.13 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): simulation results of the MPC-NO and MPC-NPLPT algorithms for different values of the parameter δ = δu = δy

Finally, also in the case of model errors and disturbances, all considered MPC algorithms are compared in Table 4.3 in terms of the performance criteria E2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. The obtained numerical results confirm the observations made when analysing the trajectories shown in the previously discussed figures: the simple MPC-SSL and MPC-NPSL algorithms with on-line model linearisation give slightly better results than the ineffective LMPC and MPC-inv schemes, and much better control accuracy is possible in MPC schemes with on-line trajectory linearisation. The relation between the computational times of the considered algorithms is similar to the perfect-model, disturbance-free case (Table 4.1).
Fig. 4.14 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): simulation results of the MPC-NPLPT and MPC-inv algorithms

Table 4.3 The SISO process (the unmeasured disturbance acts on the process and the model is not perfect): comparison of all considered MPC algorithms in terms of the control performance criteria (E2 and E_MPC-NO), the sum of internal iterations (SII) and the calculation time
4.3 The SISO Process with Complex Dynamics: Classical MPC Versus MPC with Parameterisation

4.3.1 Description of the SISO Process with Complex Dynamics

To demonstrate the advantages of parameterisation using Laguerre functions in computationally efficient nonlinear MPC, control of a SISO Wiener system is considered. Its dynamic part is described by the following discrete-time linear equation [1]

$$v(k) = b_1 u(k-1) + b_2 u(k-2) + b_3 u(k-3) + b_4 u(k-4) - a_1 v(k-1) - a_2 v(k-2) - a_3 v(k-3) - a_4 v(k-4) \qquad (4.9)$$

The parameters are

$$a_1 = -3.0228, \quad a_2 = 3.8630, \quad a_3 = -2.6426, \quad a_4 = 0.8084$$
$$b_1 = -1.4316, \quad b_2 = 4.8180, \quad b_3 = -5.3445, \quad b_4 = 1.9641 \qquad (4.10)$$
The dynamic part of the process is followed by the same static block as used in the SISO process discussed in Sect. 4.2 (Eq. (4.4)). The steady-state characteristic y(u) of the whole Wiener system is depicted in Fig. 4.1. The steady-state properties of the SISO process and of the currently considered one are the same because, in both cases, the gain of the linear dynamic block is equal to 1.

The considered process has very complex dynamics. Figure 4.15 shows the step response of its linear part (4.9). It suggests that very long horizons should be used in MPC. In [16], where only control of the linear process (4.9) (with a changed steady-state gain) is considered, the horizons are as long as N = Nu = 100. The constraints imposed on the manipulated variable are u_min = −2.5, u_max = 2.5. The default value of the parameter is λ = 0.25.
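Note that the steady-state gain of the linear part (4.9) is Σb_i/(1 + Σa_i) = 1, while the negative b1 produces the initial inverse response visible in Fig. 4.15. A minimal step-response sketch (zero initial conditions assumed; illustration only):

```python
# Linear dynamic part of the complex-dynamics benchmark, Eqs. (4.9)-(4.10)
A = [-3.0228, 3.8630, -2.6426, 0.8084]
B = [-1.4316, 4.8180, -5.3445, 1.9641]

def step_response(n_steps: int):
    """Unit-step response v(k) of Eq. (4.9), zero initial conditions."""
    u = [0.0] * 4   # u(k-1)..u(k-4)
    v = [0.0] * 4   # v(k-1)..v(k-4)
    out = []
    for _ in range(n_steps):
        vk = sum(b * ui for b, ui in zip(B, u)) - sum(a * vi for a, vi in zip(A, v))
        out.append(vk)
        v = [vk] + v[:3]
        u = [1.0] + u[:3]
    return out
```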
4.3.2 Implementation of MPC Algorithms for the SISO Process with Complex Dynamics

The following MPC algorithms without parameterisation are compared:
1. The classical LMPC algorithm.
2. The classical MPC-inv algorithm.
3. The MPC-NPSL algorithm.
4. The MPC-NPLPT algorithm.
5. The MPC-NO algorithm.
Fig. 4.15 The SISO process with complex dynamics: the step response of the linear part (4.9) of the Wiener system
The following MPC algorithms with parameterisation using Laguerre functions are compared:
1. The MPC-NPLT1-P algorithm.
2. The MPC-NPLT2-P algorithm.
3. The MPC-NPLPT-P algorithm.
4. The MPC-NO-P algorithm.
The MPC-NO and MPC-NO-P algorithms need to solve a nonlinear optimisation problem on-line; all other algorithms use quadratic optimisation tasks. Next, we briefly give implementation details of all considered algorithms. The parameter-constant linear model used for prediction in the LMPC scheme is obtained for the operating point u = v = y = 0. Because the nonlinear static part of the model is the same as that used in the first SISO benchmark, Eq. (4.5) shows that the gain of that block at the considered operating point is 1. Hence, in the LMPC algorithm, we use a model that is simply the linear dynamic part of the Wiener process (4.9). Similarly, the time-varying gain of the nonlinear static block calculated in the MPC-SSL and MPC-NPSL algorithms is computed from Eq. (4.6). Furthermore, the entries of the derivative matrix H(k) used in the MPC-NPLT1 and MPC-NPLT2 algorithms are computed from Eq. (4.7). In the case of the MPC-NPLPT algorithm, Eq. (4.8) is used for calculation of the matrix H_t(k). The inverse static model used in the SISO benchmark (a neural network with five hidden nodes of the tanh type) is also used for the currently considered process in the MPC-inv control scheme.
4.3.3 MPC of the SISO Process with Complex Dynamics

Figure 4.16 presents simulation results for the classical linear LMPC algorithm with different values of the parameter λ. Long prediction and control horizons are used, i.e. N = Nu = 100. The LMPC algorithm practically does not work because the process is nonlinear, while the model used for prediction in LMPC is linear.
Fig. 4.16 The SISO process with complex dynamics: simulation results of the LMPC algorithm with different values of the parameter λ, N = Nu = 100
Simulation results of the MPC-NPSL algorithm for λ = 0.25 and N = Nu = 100 are shown in Fig. 4.17. When the parameter λ has its default value, quite good control is obtained for the first, second and fourth step changes of the set-point, but for the third one, the MPC-NPSL algorithm gives a very bad trajectory. The problem may be reduced by increasing the weight λ to 0.5; the obtained trajectories are shown in Fig. 4.18. Unfortunately, the results are still not satisfactory.

Because the classical LMPC algorithm and the simple MPC-NPSL approach with model linearisation give poor results despite using long horizons, the MPC-NPLPT algorithm with advanced trajectory linearisation is considered. Figure 4.19 shows the obtained trajectories for three different values of the control horizon Nu; in all cases, the prediction horizon is N = 100. It is evident that the MPC-NPLPT algorithm makes it possible to obtain good control quality, but very long horizons must be used (N = Nu = 100).

Let us verify whether the classical MPC-inv approach based on the inverse model of the static block gives good results. The obtained results are given in Fig. 4.20; the trajectories of the MPC-NPLPT scheme are given for comparison. In all cases, N = Nu = 100. For the default value of the parameter λ = 0.25, the MPC-inv algorithm gives some oscillations, huge overshoot and a long settling time. For the increased parameter λ = 2.5, the oscillations are eliminated, but the obtained control quality is still below expectations, i.e. the trajectories possible in the MPC-inv algorithm are much worse than those obtained in the MPC-NPLPT scheme.
Fig. 4.17 The SISO process with complex dynamics: simulation results of the MPC-NPSL algorithm with λ = 0.25, N = Nu = 100
Fig. 4.18 The SISO process with complex dynamics: simulation results of the MPC-NPSL algorithm with λ = 0.5, N = Nu = 100
Fig. 4.19 The SISO process with complex dynamics: simulation results of the MPC-NPLPT algorithm with different values of the control horizon Nu, N = 100
Fig. 4.20 The SISO process with complex dynamics: simulation results of the MPC-NPLPT and MPC-inv algorithms for different parameters λ, N = Nu = 100
Next, the classical MPC-NPLPT algorithm and the MPC-NPLPT-P strategy with parameterisation are compared. In both cases, the horizons are N = Nu = 100. It is interesting to study the influence of the parameters a and n_L on the control quality of the MPC-NPLPT-P algorithm. In order to do so, the following performance index is used

$$E_{\text{MPC-NPLPT-P}}^{\text{MPC-NPLPT}} = \sum_{k=1}^{k_{\max}} \left( y^{\text{MPC-NPLPT}}(k) - y^{\text{MPC-NPLPT-P}}(k) \right)^2 \qquad (4.11)$$
It measures the sum of squared differences between the process outputs controlled by these two algorithms, the first of them treated as the reference. The obtained values of the performance index E^MPC-NPLPT_MPC-NPLPT-P are given in Table 4.4; the emphasised results refer to very good performance of the MPC-NPLPT-P approach, i.e. when it gives practically the same trajectories as the MPC-NPLPT one. Since the objective of parameterisation is to reduce the number of decision variables in MPC, the value a = 0.7 is chosen because it makes it possible to obtain very good control quality for the lowest value of the parameter n_L, which defines the number of decision variables of MPC. Table 4.4 suggests that for a = 0.7 it is sufficient to use only n_L = 20 or n_L = 25 decision variables. Figure 4.21 shows the process trajectories for n_L = 5, 10, 20, in all cases with a = 0.7. Of course, for n_L = 5, 10, the control quality is not satisfactory. The influence of the parameter a is depicted in Fig. 4.22. The following values are considered: a = 0.1, 0.7, 0.9, in all cases with n_L = 20. We observe that both too low and too large values of the parameter a lead to bad control quality. That is why the value a = 0.7 is used in the subsequent simulations.

Table 4.5 compares the classical MPC algorithms versus the MPC strategies with parameterisation in terms of two performance indices: E2 and E_MPC-NO. Additionally, the number of optimised variables n_var, the optimisation type (nonlinear or quadratic) and the relative optimisation time are specified. The following observations may be made:
1. The MPC-NPLT1-P and MPC-NPLT2-P algorithms, with one trajectory linearisation and optimisation at every sampling instant, are the worst ones (the performance indices E2 and E_MPC-NO are the highest). Multiple linearisations and optimisations, possible in the MPC-NPLPT-P and MPC-NPLPT algorithms, give much better results.
Figure 4.23 compares trajectories obtained when the MPC-NPLPT-P, MPC-NPLT1-P and MPC-NPLT2-P algorithms with parameterisation are used; in all cases, a = 0.7, n_L = 20, N = Nu = 100. It is clear that, for the considered benchmark process, only one trajectory linearisation and quadratic optimisation at every sampling instant is not satisfactory, i.e. the resulting trajectories of the MPC-NPLT1-P and MPC-NPLT2-P algorithms differ from those obtained when the MPC-NPLPT-P algorithm is used, in which maximally five repetitions of linearisation and optimisation at every sampling instant are possible.
2. The MPC-NPLPT-P algorithm with quadratic optimisation gives very similar results to the MPC-NO-P one with nonlinear optimisation. Figure 4.24 compares the trajectories of these algorithms with parameterisation, a = 0.7, n_L = 20.
Table 4.4 The SISO process with complex dynamics: the classical MPC-NPLPT algorithm versus the MPC-NPLPT-P strategy with parameterisation, measured by the values of the performance index E^MPC-NPLPT_MPC-NPLPT-P (Eq. (4.11)); the emphasised results refer to very good performance of the MPC-NPLPT-P approach
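The parameterisation replaces the Nu = 100 future control increments with n_L coefficients of discrete Laguerre functions with pole a. A sketch of generating the orthonormal discrete Laguerre basis (the standard state-space recurrence is assumed here; see Chap. 3 for the exact formulation used in the book):

```python
import numpy as np

def laguerre_basis(a: float, n_l: int, n_steps: int) -> np.ndarray:
    """Discrete Laguerre functions l_1..l_{n_l} sampled at k = 0..n_steps-1.

    Standard recurrence: L(k+1) = A_l @ L(k), with A_l a lower-triangular
    Toeplitz matrix and L(0) = sqrt(beta) * [1, -a, a^2, ...], beta = 1 - a^2.
    """
    beta = 1.0 - a**2
    first_col = np.empty(n_l)
    first_col[0] = a
    for j in range(1, n_l):
        first_col[j] = beta * (-a)**(j - 1)
    A = np.zeros((n_l, n_l))
    for i in range(n_l):
        A[i:, i] = first_col[:n_l - i]       # Toeplitz lower triangle
    L = np.sqrt(beta) * np.array([(-a)**i for i in range(n_l)])
    out = np.empty((n_steps, n_l))
    for k in range(n_steps):
        out[k] = L
        L = A @ L
    return out
```

Each future control increment over the horizon is then expressed as a combination of these functions, so the optimiser works with n_L coefficients instead of Nu individual increments.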
Fig. 4.21 The SISO process with complex dynamics: simulation results of the MPC-NPLPT-P algorithm with parameterisation for different values of the parameter n_L, a = 0.7, N = Nu = 100
Fig. 4.22 The SISO process with complex dynamics: simulation results of the MPC-NPLPT-P algorithm with parameterisation with different values of the parameter a, n L = 20, N = Nu = 100
Table 4.5 The SISO process with complex dynamics: the MPC strategies with parameterisation versus the classical MPC algorithms, measured by the values of the control performance indices (E2 and E_MPC-NO) and the optimisation time; n_var is the number of decision variables
Fig. 4.23 The SISO process with complex dynamics: simulation results of the MPC-NPLPT-P, MPC-NPLT1-P and MPC-NPLT2-P algorithms with parameterisation, a = 0.7, n_L = 20, N = Nu = 100
Fig. 4.24 The SISO process with complex dynamics: simulation results of the MPC-NPLPT-P and MPC-NO-P algorithms with parameterisation, a = 0.7, n L = 20, versus the MPC-NPLPT algorithm, N = Nu = 100
3. The MPC-NPLPT-P algorithm with parameterisation (the optimisation problem has only n_L = 20 or n_L = 25 decision variables) gives practically the same results as the classical MPC-NPLPT algorithm without parameterisation (the optimisation problem has as many as Nu = 100 decision variables). Figure 4.24 compares the trajectories of the MPC-NPLPT-P (n_L = 20) and MPC-NPLPT algorithms. The results for n_L = 25 are even better.
4. In the MPC-NPLPT-P algorithm (and also the MPC-NO-P one), increasing the number of decision variables (n_L) leads to better results. This is also clear from Table 4.4.
5. The MPC-NPLPT-P algorithm with parameterisation and quadratic optimisation gives very similar results to the computationally demanding, best possible MPC-NO algorithm with nonlinear optimisation and Nu = 100 decision variables.
6. As far as the optimisation time is concerned, the comparison between the MPC-NPLPT-P algorithm with parameterisation and the classical MPC-NPLPT algorithm, with as many as Nu = 100 decision variables, is the most important. Firstly, when n_L = 20, the classical MPC-NPLPT algorithm requires more than 30% longer optimisation time than the MPC-NPLPT-P scheme. Secondly, the MPC-NPLT1-P and MPC-NPLT2-P algorithms, with only one on-line linearisation and optimisation at every sampling instant, require approximately 50% shorter optimisation time than the MPC-NPLPT-P strategy with multiple linearisations and optimisations. Thirdly, the MPC-NO-P algorithm is very computationally demanding, but parameterisation makes it possible to significantly reduce the optimisation time in comparison with that of the MPC-NO scheme (approximately six times).

Finally, the performance of all discussed nonlinear MPC algorithms is compared when the process is affected by unmeasured additive output disturbances.
Four disturbance steps are considered: the first step, −0.25, starts at the sampling instant k = 5; the second step, 0.25, starts at k = 30; the third step, −0.5, starts at k = 60; the fourth step, 0.5, starts at k = 90; the set-point is 0 for the whole simulation horizon. The algorithms are compared in terms of the indices E_2 and E_MPC-NO as well as the optimisation time in Table 4.6. The observations made for the set-point tracking case remain true. In particular, the MPC-NPLPT-P algorithm with parameterisation gives practically the same results as the classical MPC-NPLPT algorithm without parameterisation. Moreover, the trajectories of the MPC-NPLPT-P algorithm are very similar to those possible in the MPC-NO-P and MPC-NO schemes with nonlinear optimisation. For the considered process, in terms of the optimisation time, the classical MPC-NPLPT algorithm is more than 30% more time-consuming than the MPC-NPLPT-P scheme. Figure 4.25 compares the trajectories of the MPC-NPLPT-P and MPC-NO-P algorithms with parameterisation, a = 0.7, n_L = 20, versus the classical MPC-NPLPT scheme.
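The parameterisation above replaces the Nu = 100 individual future control increments by n_L coefficients of a set of basis functions, with the scalar a = 0.7 shaping the basis. As a sketch, assuming a discrete Laguerre basis (the precise parameterisation of the -P algorithms is defined earlier in the book), the basis matrix can be generated by filtering an impulse through the Laguerre network:

```python
import numpy as np

def laguerre_basis(a, n_L, horizon):
    """Discrete Laguerre functions l_1..l_{n_L} over a finite horizon,
    obtained by filtering an impulse through the Laguerre filter cascade."""
    beta = np.sqrt(1.0 - a * a)
    L = np.zeros((horizon, n_L))
    x = np.zeros(horizon)
    x[0] = 1.0
    # l_1: low-pass sqrt(1 - a^2) / (1 - a q^-1)
    y = np.zeros(horizon)
    for k in range(horizon):
        y[k] = (a * y[k - 1] if k > 0 else 0.0) + beta * x[k]
    L[:, 0] = y
    # l_{i+1}: all-pass (q^-1 - a) / (1 - a q^-1) applied to l_i
    for i in range(1, n_L):
        p = L[:, i - 1]
        w = np.zeros(horizon)
        for k in range(horizon):
            w[k] = ((a * w[k - 1] + p[k - 1]) if k > 0 else 0.0) - a * p[k]
        L[:, i] = w
    return L

# parameterised control increments: Nu values generated from n_L coefficients
Nu, n_L, a = 100, 20, 0.7
L = laguerre_basis(a, n_L, Nu)
c = np.zeros(n_L)      # decision variables of the parameterised problem
du = L @ c             # full control-increment sequence over the control horizon
```

The optimiser then searches the n_L-dimensional coefficient space of c rather than the Nu-dimensional space of du, which is where the reported reduction of the optimisation time comes from.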
Table 4.6 The SISO process with complex dynamics (the process is affected by disturbances): the MPC strategies with parameterisation versus the classical MPC algorithms, compared in terms of the control performance indices (E_2 and E_MPC-NO) and the optimisation time; n_var is the number of decision variables
Fig. 4.25 The SISO process with complex dynamics (the process is affected by disturbances): simulation results of the MPC-NPLPT-P and MPC-NO-P algorithms with parameterisation, a = 0.7, n_L = 20, versus the MPC-NPLPT algorithm, N = Nu = 100
4.4 The MIMO Process A with Two Inputs and Two Outputs: Model I

4.4.1 Description of the MIMO Process A

The next considered process is defined by a MIMO Wiener model I with two inputs and two outputs. The linear part of the process (Eqs. (2.1)) is of the fourth order of dynamics ($n_A = n_B = 4$); the coefficients of the model (2.7)–(2.8) are

$a_1^1 = -2.4261$, $a_2^1 = 2.2073$, $a_3^1 = -8.9252 \times 10^{-1}$, $a_4^1 = 1.3534 \times 10^{-1}$,
$a_1^2 = -2.6461$, $a_2^2 = 2.6197$, $a_3^2 = -1.1500$, $a_4^2 = 1.8888 \times 10^{-1}$ (4.12)
and

$b_1^{1,1} = 1.8041 \times 10^{-1}$, $b_2^{1,1} = -8.9618 \times 10^{-2}$, $b_3^{1,1} = -9.0393 \times 10^{-2}$, $b_4^{1,1} = 4.7540 \times 10^{-2}$,
$b_1^{1,2} = 3.6082 \times 10^{-2}$, $b_2^{1,2} = -1.7924 \times 10^{-2}$, $b_3^{1,2} = -1.8079 \times 10^{-2}$, $b_4^{1,2} = 9.5081 \times 10^{-3}$,
$b_1^{2,1} = 2.5387 \times 10^{-2}$, $b_2^{2,1} = -1.4361 \times 10^{-2}$, $b_3^{2,1} = -1.4406 \times 10^{-2}$, $b_4^{2,1} = 8.3563 \times 10^{-3}$,
$b_1^{2,2} = 3.1734 \times 10^{-1}$, $b_2^{2,2} = -1.7951 \times 10^{-1}$, $b_3^{2,2} = -1.8008 \times 10^{-1}$, $b_4^{2,2} = 1.0445 \times 10^{-1}$ (4.13)

The nonlinear static blocks (Eq. (2.14)) are

$y_1(k) = g_1(v_1(k)) = -\exp(-v_1(k)) + 1$ (4.14)
$y_2(k) = g_2(v_2(k)) = \frac{1}{30} v_2(k) + \frac{1}{150} v_2^3(k)$ (4.15)

The steady-state characteristics $y_1(u_1, u_2)$ and $y_2(u_1, u_2)$ of the whole MIMO Wiener system are depicted in Fig. 4.26.

Fig. 4.26 The MIMO process A: the steady-state characteristics $y_1(u_1, u_2)$ and $y_2(u_1, u_2)$
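The characteristics of Fig. 4.26 can be reproduced numerically: the steady-state value of each linear-block signal follows from the DC gains of the channels (the sum of the b coefficients divided by one plus the sum of the corresponding a coefficients), after which the static blocks (4.14)–(4.15) are applied. A sketch:

```python
import numpy as np

# model (4.12)-(4.13): denominator coefficients per output, numerators per channel
a = {1: [-2.4261, 2.2073, -8.9252e-1, 1.3534e-1],
     2: [-2.6461, 2.6197, -1.1500, 1.8888e-1]}
b = {(1, 1): [1.8041e-1, -8.9618e-2, -9.0393e-2, 4.7540e-2],
     (1, 2): [3.6082e-2, -1.7924e-2, -1.8079e-2, 9.5081e-3],
     (2, 1): [2.5387e-2, -1.4361e-2, -1.4406e-2, 8.3563e-3],
     (2, 2): [3.1734e-1, -1.7951e-1, -1.8008e-1, 1.0445e-1]}

def dc_gain(m, n):
    # steady-state gain of one linear channel: sum(b) / (1 + sum(a))
    return sum(b[(m, n)]) / (1.0 + sum(a[m]))

K = np.array([[dc_gain(m, n) for n in (1, 2)] for m in (1, 2)])

def g1(v):  # Eq. (4.14)
    return -np.exp(-v) + 1.0

def g2(v):  # Eq. (4.15)
    return v / 30.0 + v ** 3 / 150.0

def steady_state(u1, u2):
    v1, v2 = K @ np.array([u1, u2])
    return g1(v1), g2(v2)
```

The gain matrix of the linear part comes out close to [[2, 0.4], [0.4, 5]]; sweeping u1 and u2 over [−1, 1] then reproduces the surfaces shown in Fig. 4.26.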
4.4.2 Implementation of MPC Algorithms for the MIMO Process A

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The classical MPC-inv algorithm.
3. The MPC-SSL and MPC-NPSL algorithms.
4. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
5. The MPC-NO algorithm.

Next, we detail the implementation of all considered algorithms. The constant-parameter linear models used for prediction in the LMPC scheme are obtained for three different operating points: the model 1 for y1 = y2 = 0, the model 2 for y1 = −1.5 and y2 = −0.5, the model 3 for y1 = 0.75 and y2 = 0. The model actually used in LMPC is the linear dynamic part of the Wiener process multiplied by the gains of the nonlinear static block at the considered operating point. In general, from Eqs. (3.104)–(3.105) and (4.14)–(4.15), we have

$K_1 = \frac{dg_1(v_1)}{dv_1} = \exp(-v_1)$ (4.16)
$K_2 = \frac{dg_2(v_2)}{dv_2} = \frac{2}{3}\left(0.05 + 0.03 v_2^2\right)$ (4.17)
For the operating point 1, we obtain $v_1 = v_2 = 0$ and $K_1 = 1$, $K_2 = 3.3333 \times 10^{-2}$. For the operating point 2, we obtain $v_1 = -9.1629 \times 10^{-1}$, $v_2 = -3.8232$ and $K_1 = 2.5$, $K_2 = 3.2567 \times 10^{-1}$. For the operating point 3, we obtain $v_1 = 1.3863$, $v_2 = 0$ and $K_1 = 2.5 \times 10^{-1}$, $K_2 = 3.3333 \times 10^{-2}$. The time-varying gains of the nonlinear static blocks are calculated in the MPC-SSL and MPC-NPSL algorithms as

$K_1(k) = \frac{dg_1(v_1(k))}{dv_1(k)} = \exp(-v_1(k))$ (4.18)
$K_2(k) = \frac{dg_2(v_2(k))}{dv_2(k)} = \frac{2}{3}\left(0.05 + 0.03 v_2^2(k)\right)$ (4.19)
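The operating-point gains quoted above can be verified directly from Eqs. (4.18)–(4.19):

```python
import numpy as np

def K1(v1):
    # Eq. (4.18): gain of the first static block at the current model signal
    return np.exp(-v1)

def K2(v2):
    # Eq. (4.19): gain of the second static block
    return (2.0 / 3.0) * (0.05 + 0.03 * v2 ** 2)

# the three operating points from the text
print(K1(0.0), K2(0.0))              # ≈ 1 and 3.3333e-2 (operating point 1)
print(K1(-9.1629e-1), K2(-3.8232))   # ≈ 2.5 and 3.2567e-1 (operating point 2)
print(K1(1.3863), K2(0.0))           # ≈ 2.5e-1 and 3.3333e-2 (operating point 3)
```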
where $v_1(k)$ and $v_2(k)$ are the model signals. In the MPC-NPLT1 and MPC-NPLT2 algorithms, the entries of the derivative matrix $H(k)$ are computed from Eq. (3.222). For the nonlinear block (4.14)–(4.15), we have

$\frac{dg_1(v_1^{\mathrm{traj}}(k+p|k))}{dv_1^{\mathrm{traj}}(k+p|k)} = \exp(-v_1^{\mathrm{traj}}(k+p|k))$ (4.20)
$\frac{dg_2(v_2^{\mathrm{traj}}(k+p|k))}{dv_2^{\mathrm{traj}}(k+p|k)} = \frac{2}{3}\left(0.05 + 0.03 \left(v_2^{\mathrm{traj}}(k+p|k)\right)^2\right)$ (4.21)
In the MPC-NPLPT algorithm, we use Eq. (3.286) to calculate the matrix $H^t(k)$. We obtain

$\frac{dg_1(v_1^{t-1}(k+p|k))}{dv_1^{t-1}(k+p|k)} = \exp(-v_1^{t-1}(k+p|k))$ (4.22)
$\frac{dg_2(v_2^{t-1}(k+p|k))}{dv_2^{t-1}(k+p|k)} = \frac{2}{3}\left(0.05 + 0.03 \left(v_2^{t-1}(k+p|k)\right)^2\right)$ (4.23)
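The internal-iteration mechanism of the MPC-NPLPT algorithm can be illustrated on a toy example: at iteration t the static block is linearised along the trajectory predicted at iteration t−1 (as in Eqs. (4.22)–(4.23)), an optimisation problem that is quadratic in the control increments is solved, and the loop terminates when the solution stops changing or the iteration limit is reached. The sketch below is a simplification under stated assumptions: a hypothetical first-order SISO linear block, the static block g1 of Eq. (4.14), and an unconstrained least-squares step standing in for the book's constrained quadratic optimisation.

```python
import numpy as np

# toy SISO Wiener model: linear part v(k+1) = 0.6 v(k) + 0.4 u(k) (hypothetical)
a_lin, b_lin = 0.6, 0.4
g = lambda v: -np.exp(-v) + 1.0   # static block, as in Eq. (4.14)
dg = lambda v: np.exp(-v)         # its derivative, as in Eqs. (4.22)-(4.23)

N, Nu, lam, delta = 10, 3, 1e-2, 1e-4
v_now, u_prev = 0.0, 0.0
ysp = np.full(N, 0.5)             # constant set-point trajectory

def predict_v(du):
    # linear-block trajectory for increments du (du = 0 beyond the control horizon)
    u = u_prev + np.concatenate([np.cumsum(du), np.full(N - Nu, du.sum())])
    v, vk = np.zeros(N), v_now
    for p in range(N):
        vk = a_lin * vk + b_lin * u[p]
        v[p] = vk
    return v

# exact sensitivity of v with respect to du (the dynamic block is linear)
S = np.column_stack([predict_v(np.eye(Nu)[j]) - predict_v(np.zeros(Nu))
                     for j in range(Nu)])

du = np.zeros(Nu)
for t in range(5):                # at most 5 internal iterations
    v_prev = predict_v(du)
    G = dg(v_prev)[:, None] * S   # trajectory linearisation of the output
    r = ysp - g(v_prev)
    # min ||r - G (du_new - du)||^2 + lam ||du_new||^2, a least-squares stand-in
    # for the constrained quadratic optimisation of the book
    du_new = np.linalg.solve(G.T @ G + lam * np.eye(Nu),
                             G.T @ r + G.T @ G @ du)
    converged = np.linalg.norm(du_new - du) < delta
    du = du_new
    if converged:
        break

yhat = g(predict_v(du))           # predicted outputs track the set-point
```

With the set-point 0.5, the loop converges in a few internal iterations and the predicted output trajectory approaches the set-point.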
Two neural networks of the MLP type with one hidden layer containing five nonlinear units and a linear output layer are used as the inverse models of the nonlinear static blocks in the MPC-inv algorithm. The nonlinear units use the tanh activation function.
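For this particular benchmark the inverse static models can also be written in closed form, which is convenient for validating the neural approximators: g1 is inverted by a logarithm and g2, a strictly increasing cubic, has a unique real root. A sketch of the exact inverses (the MPC-inv algorithm itself uses the MLP approximators described above):

```python
import numpy as np

def g1(v):  # Eq. (4.14)
    return -np.exp(-v) + 1.0

def g2(v):  # Eq. (4.15)
    return v / 30.0 + v ** 3 / 150.0

def g1_inv(y):
    # defined for y < 1, the range of g1
    return -np.log(1.0 - y)

def g2_inv(y):
    # unique real root of v^3/150 + v/30 - y = 0 (g2 is strictly increasing)
    roots = np.roots([1.0 / 150.0, 0.0, 1.0 / 30.0, -y])
    return roots[np.abs(roots.imag) < 1e-8].real[0]
```

A round trip g_inv(g(v)) recovers v, which is exactly the property the trained networks approximate.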
4.4.3 MPC of the MIMO Process A

Parameters of all compared MPC algorithms are the same: N = 10, Nu = 3, $\mu_{p,1} = 1$ and $\mu_{p,2} = 5$ for p = 1, . . . , N, $\lambda_{p,1} = \lambda_{p,2} = 0.5$ for p = 0, . . . , Nu − 1; the constraints imposed on the manipulated variables are $u_1^{\min} = u_2^{\min} = -1.5$, $u_1^{\max} = u_2^{\max} = 1.5$.

At first, let us consider no modelling errors and no disturbances. Simulation results of the LMPC algorithm are depicted in Fig. 4.27 (for prediction, three example models are used, obtained for different operating points). The control quality is best when the model 2 is used but, generally, due to the nonlinearity of the process, the LMPC algorithm is unable to provide good control. In particular, if the models 1 or 3 are used in LMPC, the obtained results are much worse than in the SISO case shown in Fig. 4.2.

Simulation results of the MPC-SSL and MPC-NPSL algorithms with on-line model linearisation are depicted in Fig. 4.28; the trajectories obtained in the MPC-NO one are given for reference. Let us recall that in the case of the SISO process (Fig. 4.3), the MPC-NPSL strategy gives quite good control, while the MPC-SSL one is slower and gives greater overshoot. The same phenomena may be observed for the MIMO process. Since the MPC-SSL algorithm with the default parameters leads to oscillations, the weighting coefficients $\lambda_{p,1} = \lambda_{p,2}$ must be increased to the value of 5, but in such a case, the process output trajectories follow their set-points quite slowly. Figure 4.29 depicts changes of the time-varying gains of the nonlinear static blocks calculated in the MPC-NPSL and MPC-SSL algorithms.

Simulation results of the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms with on-line trajectory linearisation are depicted in Fig. 4.30. The additional parameters of the MPC-NPLPT algorithm are: δ = δu = δy = 0.1, N0 = 2, the maximal number of internal iterations is 5. Similarly to the SISO case shown in Fig. 4.5, all algorithms work very well, although the trajectories of the MPC-NPLPT one are the best, i.e. practically the same as those of the MPC-NO scheme. Although in the SISO case the classical MPC-inv approach based on the inverse model of the static nonlinear part of the model gives very good results, as shown in Fig. 4.8 (provided that the model is perfect and there are no disturbances), for the considered MIMO process the MPC-inv algorithm is unable to provide good control
Fig. 4.27 The MIMO process A: simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
Fig. 4.28 The MIMO process A: simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
Fig. 4.29 The MIMO process A: the time-varying gains of the nonlinear static blocks calculated in the MPC-NPSL and MPC-SSL algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
of the first process output, as shown in Fig. 4.31; the second output is controlled well. The significant overshoot may be eliminated by increasing the weighting coefficients $\lambda_{p,1} = \lambda_{p,2}$, but this results in a slow response of the first process output.

All considered MPC algorithms are compared in Table 4.7 in terms of the performance criteria E_2 and E_MPC-NO, the sum of internal iterations necessary in the MPC-NPLPT algorithm and the computational time. Similarly to the SISO case (Table 4.1), the more advanced the linearisation method, the better the control quality. Let us recall that in the SISO case the calculation time of the simple MPC-SSL and MPC-NPSL algorithms is equal to some 40% of that necessary in the MPC-NO scheme, and it grows to some 50% in the MPC-NPLPT scheme (for the chosen tuning parameters δ = δu = δy). For the MIMO process A, this relation is even better: the calculation time of the simple MPC-SSL and MPC-NPSL algorithms is lower than 20% of the time necessary in the MPC-NO scheme, and it grows to only 23% for the MPC-NPLPT approach.

Table 4.8 shows the influence of the length of the prediction and control horizons on the computational time for all considered MPC algorithms. The obtained results are similar to those in the SISO case (Table 4.2), i.e. the control horizon has a major impact on the calculation time, while the prediction horizon has a much lower influence. Furthermore, all MPC algorithms with on-line linearisation are significantly less computationally demanding than the MPC-NO one. Additionally, the MPC-NO scheme becomes very time-consuming for long control horizons.

Now, we assume that the model is not perfect and the process is affected by disturbances. The steady-state gains of all input-output channels of the process are increased by 25%. Additionally, from the sampling instant k = 30, the additive
Fig. 4.30 The MIMO process A: simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms
Fig. 4.31 The MIMO process A: simulation results of the MPC-NPLPT and MPC-inv algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
Table 4.7 The MIMO process A: comparison of all considered MPC algorithms in terms of the control performance criteria (E_2 and E_MPC-NO), the sum of internal iterations (SII) and the calculation time
unmeasured step disturbance of the value 0.5 added to the first process output is taken into account; from the sampling instant k = 50, the second disturbance of the value −0.25 added to the second process output is also considered. The trajectories for the LMPC algorithm are shown in Fig. 4.32. Due to the process nonlinearity, modelling errors and disturbances, the LMPC algorithm gives very bad control quality.

Simulation results of the MPC-SSL algorithm are depicted in Fig. 4.33; the trajectories obtained in the MPC-NO one are given for reference. Let us recall that in the case of the perfect model and no disturbances (Fig. 4.28), the MPC-SSL approach gives quite a low quality of control, but it works. In the case of model errors and disturbances, the achievable control accuracy is even worse. Increasing the weighting coefficients $\lambda_{p,1} = \lambda_{p,2}$ does not give good results because the process trajectories become very slow.

Simulation results of the MPC-NPSL algorithm are depicted in Fig. 4.34; the trajectories obtained in the MPC-NO one are given for reference. Let us recall that in the case of the perfect model and no disturbances (Fig. 4.28), the MPC-NPSL approach gives quite good results. Unfortunately, in the case of model errors and disturbances, the obtained control accuracy is very bad; the MPC-NPSL algorithm practically does not work.
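The disturbance rejection examined in these simulations relies on the standard output-disturbance model: the difference between the measured output and the model output is assumed constant over the prediction horizon and shifts the whole predicted trajectory. A minimal closed-loop sketch with a hypothetical static plant (not the benchmark process):

```python
# constant output-disturbance compensation, DMC-style
K_PLANT = 0.5        # hypothetical static plant/model gain
ysp = 1.0            # set-point

u, y_hist = 0.0, []
for k in range(10):
    d_true = 0.5 if k >= 3 else 0.0   # unmeasured step disturbance
    y_meas = K_PLANT * u + d_true     # plant output
    y_hist.append(y_meas)
    d_hat = y_meas - K_PLANT * u      # estimate: measurement minus model output
    u = (ysp - d_hat) / K_PLANT       # control for the shifted prediction
```

In the MPC algorithms discussed here, the same estimate is added to every element of the predicted output trajectory, which is why the step disturbances are removed without steady-state error.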
Table 4.8 The MIMO process A: comparison of all considered MPC algorithms in terms of the calculation time (%) for different prediction and control horizons
Unlike the MPC-SSL and MPC-NPSL algorithms, all more advanced schemes with trajectory linearisation, i.e. MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2, give good control quality, as depicted in Fig. 4.35. Of course, when compared with the perfect-model and disturbance-free case shown in Fig. 4.30, overshoot is greater and there are some decaying oscillations, but the trajectories of the MPC-NPLT1 and MPC-NPLT2 algorithms are very similar to those of the MPC-NO scheme. Furthermore, the MPC-NPLPT approach gives practically the same results as the MPC-NO one.

Let us compare the MPC-NPLPT algorithm with the classical MPC-inv scheme based on the inverse static model. The results are shown in Fig. 4.36. Let us recall that in the case of no model errors and no disturbances (Fig. 4.31), the MPC-inv scheme works; the only disadvantage is significant overshoot of the first process output. Unfortunately, if model errors and disturbances are present, the control quality of the MPC-inv algorithm is very bad. For the default parameters $\lambda = \lambda_{p,1} = \lambda_{p,2}$, some oscillations appear. They may be eliminated when the weighting factors are
Fig. 4.32 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 25%): simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
Fig. 4.33 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 25%): simulation results of the MPC-NO and MPC-SSL algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
Fig. 4.34 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 25%): simulation results of the MPC-NO and MPC-NPSL algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
Fig. 4.35 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 25%): simulation results of the MPC-NO, MPCNPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms
Fig. 4.36 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 25%): simulation results of the MPC-NPLPT and MPC-inv algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
Table 4.9 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 25%): comparison of all considered MPC algorithms in terms of the control performance criteria (E_2 and E_MPC-NO), the sum of internal iterations (SII) and the calculation time
increased, but this does not improve the control quality. The same phenomenon is observed for the SISO process in Fig. 4.14. Also in the case of modelling errors and disturbances, all considered MPC algorithms are compared in Table 4.9 in terms of the performance criteria E_2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. As in the case of the SISO process (Tables 4.1 and 4.3) and of the discussed MIMO process A with no model errors and disturbances (Table 4.7), the more advanced the linearisation method, the better the control accuracy.

Finally, let us verify the effectiveness of the MPC-NPLPT and MPC-inv algorithms when the unmeasured disturbances act on the process, the model is not perfect and the process gains of all input-output channels are increased by 50%. Simulation results are shown in Fig. 4.37. When compared with the results obtained when the process gains are increased only by 25% (Fig. 4.36), the MPC-NPLPT approach gives greater overshoot and longer oscillations, but it is stable. Unfortunately, the MPC-inv scheme does not work.
Fig. 4.37 The MIMO process A (the unmeasured disturbances act on the process and the model is not perfect, the process gains are increased by 50%): simulation results of the MPC-NPLPT and MPC-inv algorithms, λ = λ_{p,1} = λ_{p,2} for p = 0, . . . , Nu − 1
4.5 The MIMO Process B with Ten Inputs and Two Outputs: Model I Versus Model III

4.5.1 Description of the MIMO Process B

In order to discuss the differences between the MIMO Wiener models I and III, the next benchmark process is considered. It has as many as ten inputs and two outputs. The MIMO Wiener model III depicted in Fig. 2.4 is used as the simulated process. The consecutive discrete-time transfer functions (Eq. (2.30)) are of the second order of dynamics

$G^{m,n}(q^{-1}) = \frac{b_1^{m,n} q^{-1} + b_2^{m,n} q^{-2}}{1 + a_1^{m,n} q^{-1} + a_2^{m,n} q^{-2}}$ (4.24)

for all m = 1, 2 and n = 1, . . . , 10. The parameters of the transfer functions are given in Table 4.10. The nonlinear static blocks (Eq. (2.14)) are the same as in the case of the MIMO process A; they are defined by Eqs. (4.14)–(4.15).
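Each entry of Table 4.10 defines one second-order channel of the form (4.24), i.e. a short difference equation. A sketch with hypothetical coefficients (the table itself is not reproduced here) showing the recursion and its steady-state gain:

```python
# one input-output channel of Eq. (4.24):
# v(k) = b1 u(k-1) + b2 u(k-2) - a1 v(k-1) - a2 v(k-2)
a1, a2 = -1.4, 0.45     # hypothetical stable denominator (poles 0.9 and 0.5)
b1, b2 = 0.10, 0.05

def simulate(u):
    v = [0.0, 0.0]
    for k in range(2, len(u)):
        v.append(b1 * u[k - 1] + b2 * u[k - 2] - a1 * v[k - 1] - a2 * v[k - 2])
    return v

v = simulate([1.0] * 300)                  # unit-step response
dc_gain = (b1 + b2) / (1.0 + a1 + a2)      # steady-state gain of the channel
```

For a stable channel the step response settles at the DC gain, here (0.10 + 0.05)/(1 − 1.4 + 0.45) = 3.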
Table 4.10 The MIMO process B: the parameters of the discrete-time transfer functions which comprise the linear dynamic part of the Wiener system (the Wiener model III)
4.5.2 Implementation of MPC Algorithms for the MIMO Process B

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The classical MPC-inv algorithm.
3. The MPC-SSL and MPC-NPSL algorithms.
4. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
5. The MPC-NO algorithm.

In the MPC-NO algorithm, two model structures are used: the MIMO Wiener model I and the MIMO Wiener model III. In all other nonlinear MPC algorithms, only the MIMO Wiener model III is used (the explanation for such a choice will be given shortly). A linear approximation of the MIMO Wiener model III is used in the LMPC algorithm.

Let us note that all transfer functions (4.24), the parameters of which are given in Table 4.10, are of the second order of dynamics, i.e. $n_A^{m,n} = n_B^{m,n} = 2$ for all input-output channels, i.e. m = 1, . . . , n_y, n = 1, . . . , n_u. From Eqs. (2.39)–(2.40) and (2.41)–(2.42), it follows that the resulting MIMO Wiener model I has the order of dynamics equal to 20, i.e. $n_A = n_B = 20$. Hence, the number of parameters of the linear model part depends on the model configuration: the MIMO Wiener model III has 80 parameters in the first block, while the classical MIMO Wiener model I has as many as 440 coefficients. Because the currently discussed benchmark uses the same static blocks as the MIMO process A, the implementation details related to the static blocks are the same, i.e. the gains of the nonlinear blocks and the derivative matrices $H(k)$ and $H^t(k)$ are computed in the same way (Eqs. (4.16)–(4.23)). Similarly, the same neural inverse static models are used.
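The relation between the two structures can be checked numerically: bringing parallel channels to a common denominator, as in Eqs. (2.39)–(2.42), convolves the polynomials, so model I and model III describe exactly the same signal while the order and the parameter count grow sharply (here 80 versus 440 coefficients). A small sketch with two hypothetical second-order channels feeding one linear-block output:

```python
import numpy as np

# two hypothetical second-order channels G1, G2 driving one output v
A1, B1 = np.array([1.0, -1.2, 0.35]), np.array([0.0, 0.10, 0.05])
A2, B2 = np.array([1.0, -0.9, 0.20]), np.array([0.0, 0.08, 0.04])

# model I form: common denominator A = A1*A2, numerators multiplied across
A = np.convolve(A1, A2)
B1c, B2c = np.convolve(B1, A2), np.convolve(B2, A1)

def simulate(nums, den, inputs):
    # v(k) = sum_n sum_j b_j u_n(k-j) - sum_j a_j v(k-j)
    T = len(inputs[0])
    v = np.zeros(T)
    for k in range(T):
        for bb, u in zip(nums, inputs):
            for j in range(1, len(bb)):
                if k >= j:
                    v[k] += bb[j] * u[k - j]
        for j in range(1, len(den)):
            if k >= j:
                v[k] -= den[j] * v[k - j]
    return v

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(50), rng.standard_normal(50)
v_iii = simulate([B1], A1, [u1]) + simulate([B2], A2, [u2])   # model III
v_i = simulate([B1c, B2c], A, [u1, u2])                        # model I
```

Both recursions produce identical trajectories in exact arithmetic; it is the long, high-order recursion of the model I form that becomes numerically ill-conditioned when ten channels per output are combined.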
4.5.3 MPC of the MIMO Process B

All MPC algorithms have the same horizons: N = 20, Nu = 3, the weighting matrix $M_p = \mathrm{diag}(50, 25)$ for p = 1, . . . , N, and the default parameter of the weighting matrix $\Lambda_p = \lambda\,\mathrm{diag}(1, \ldots, 1)$ is λ = 0.05 for p = 1, . . . , Nu (in some cases, different parameters λ are used). The constraints imposed on the magnitude of the manipulated variables are $u_n^{\min} = -20$, $u_n^{\max} = 20$ for all n = 1, . . . , 10.

At first, the potentially best MPC-NO algorithms based on the MIMO Wiener models I and III are compared. The obtained simulation results are shown in Fig. 4.38. When the MIMO Wiener model III is used, the process output variables quickly follow changes of the set-points, there is no steady-state error and overshoot is insignificant. Unfortunately, when, for prediction in the same algorithm, the classical MIMO
Fig. 4.38 The MIMO process B: the MPC-NO algorithm based on the MIMO Wiener models I and III
Fig. 4.39 The MIMO process B: comparison of example predicted trajectories of the controlled variables obtained in the MPC-NO algorithms based on the MIMO Wiener models I and III (the scenario 1)
Fig. 4.40 The MIMO process B: comparison of example predicted trajectories of the controlled variables obtained in the MPC-NO algorithms based on the MIMO Wiener models I and III (the scenario 2)
Wiener model I is used, the controller does not work. The optimisation solver is the same (the SQP algorithm). To explain this, it is useful to consider the predicted trajectories of the controlled variables calculated by means of the two considered Wiener models. Comparisons of the trajectories for two different random scenarios of the manipulated variables (over the control horizon) are given in Figs. 4.39 and 4.40. Because the MIMO Wiener model I has a very high order of dynamics, equal to 20, numerical problems (ill-conditioning) occur and the predicted trajectories are very different from those calculated from the MIMO Wiener model III. As a result, the MPC-NO algorithm based on the classical MIMO model I does not work. Hence, in the next simulations, only the MIMO Wiener model III is used.

At first, let us consider no modelling errors and no disturbances. Simulation results of the LMPC algorithm are depicted in Fig. 4.41; for prediction, three different models are used, obtained for different operating points. We may compare the obtained results with those for the MIMO process A, in which the same nonlinear static blocks are used but which has only two inputs and only the fourth order of dynamics (Fig. 4.27). In the currently considered case, the control quality of the LMPC algorithm is much worse; the algorithm does not work, as it is unable to make the process
Fig. 4.41 The MIMO process B: simulation results of the linear LMPC algorithm based on different linear approximations of the MIMO Wiener model III, obtained for different operating points
outputs follow changes of their set-points without steady-state error and oscillations. The conclusion is that the considered MIMO Wiener process B really requires a nonlinear MPC algorithm.

Figure 4.42 presents simulation results of the simple MPC algorithms with on-line model linearisation, i.e. the MPC-NPSL and MPC-SSL ones; the trajectories of the MPC-NO strategy are given for reference. Similarly to the results obtained for the MIMO process A (Fig. 4.28), the control quality possible when the MPC algorithms with on-line model linearisation are used is not satisfying. The simplest algorithm, i.e. the MPC-SSL one, works fast, but it gives huge overshoot, particularly for the second process output. For the default value of the weighting parameter λ, the MPC-NPSL algorithm does not work: the first process output is generally slow and the second output does not reach the desired set-point. To solve the problem, the weighting parameter λ is increased. In such a case, the process outputs reach their set-points, but the resulting trajectories are very slow.

Next, let us consider the more advanced MPC algorithms with on-line trajectory linearisation. Figure 4.43 depicts the trajectories obtained when the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms are used; the trajectories of the MPC-NO strategy are given for reference. The additional parameters of the MPC-NPLPT algorithm are: δ = δu = δy = 0.1, N0 = 2, the maximal number of internal iterations is 5. In general, similarly to the MIMO benchmark process A (Fig. 4.30), due to the more advanced linearisation method, the obtained control quality is good. Nevertheless, it turns out that one trajectory linearisation is insufficient: the MPC-NPLT1 algorithm gives insufficient control quality for the last set-point change and the MPC-NPLT2 scheme is characterised by significant overshoot for the second process output. Repetitions of trajectory linearisation and quadratic optimisation in the MPC-NPLPT algorithm give very good results; the obtained trajectories are practically the same as in the MPC-NO method.

The classical MPC-inv approach based on the inverse model of the static nonlinear part of the model gives quite good results, as shown in Fig. 4.44. Let us note that the MPC-inv method works differently than the MPC-NPLPT scheme, particularly for the second process output: it is significantly faster but, at the same time, it gives greater overshoot.

All considered MPC algorithms are compared in Table 4.11 in terms of the performance criteria E_2 and E_MPC-NO, the sum of internal iterations necessary in the MPC-NPLPT algorithm and the computational time. Similarly to the SISO and the MIMO process A cases (Tables 4.1 and 4.7, respectively), the more advanced the linearisation method, the better the control quality. However, there is one important difference in the case of the currently discussed MIMO process B. Let us recall that the calculation time of the simple MPC algorithms with model linearisation is approximately 40% and 20% of that necessary in the MPC-NO scheme in the SISO and in the MIMO process A cases, respectively. For the MIMO process B, this relation drops to approximately 1–1.5%. As far as the more advanced MPC algorithms with trajectory linearisation are concerned, this relation is 50% and 23% in the SISO and in the MIMO process A cases, respectively. For the MIMO process B, this relation drops
Fig. 4.42 The MIMO process B: simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms based on the MIMO Wiener model III
Fig. 4.43 The MIMO process B: simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms based on the MIMO Wiener model III
Fig. 4.44 The MIMO process B: simulation results of the MPC-NPLPT and MPC-inv algorithms based on the MIMO Wiener model III
Table 4.11 The MIMO process B: comparison of all considered MPC algorithms in terms of the control performance criteria (E_2 and E_MPC-NO), the sum of internal iterations (SII) and the calculation time
to less than 2% (for the chosen tuning parameters δ = δu = δy). It turns out that the discussed MPC schemes with on-line linearisation are very computationally efficient in the case of processes with many inputs and outputs.

Table 4.12 shows the influence of the length of the prediction and control horizons on the computational time for all considered MPC algorithms. The obtained results are much better than those in the SISO case (Table 4.2) and in the MIMO process A case (Table 4.8), i.e. the calculation time of the MPC algorithms with on-line linearisation is only a small fraction (much lower than 10%) of that necessary in the MPC-NO scheme, whereas in the previously discussed benchmarks those figures are equal to several dozen percent. Moreover, similarly to the previously discussed examples, the control horizon has a major impact on the calculation time, while the prediction horizon has a much lower influence.

In the second part of the simulations, we assume that the model is not perfect and the process is affected by disturbances. The steady-state gains of all input-output channels of the process are increased by 20%; from the sampling instant k = 30, the additive unmeasured step disturbance of the value 0.5 added to the first process output is taken into account, and the second disturbance of the value −0.25 added to the second process output is considered from the sampling instant k = 50. As the simple MPC algorithms with on-line model linearisation give poor results
4 MPC of Input-Output Benchmark Wiener Processes
Table 4.12 The MIMO process B: comparison of all considered MPC algorithms in terms of the calculation time (%) for different prediction and control horizons
for the perfect model and disturbance-free case, as shown in Fig. 4.42, we consider only the more advanced MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms with trajectory linearisation. Simulation results are given in Fig. 4.45. Similarly to the perfect model and disturbance-free case shown in Fig. 4.43, even the MPC-NPLT1 and MPC-NPLT2 algorithms, with only one linearisation at each sampling instant, work, although the second one gives a greater overshoot for the second process output. The MPC-NPLPT scheme gives very good control quality, practically the same as in the MPC-NO case, and the influence of disturbances is compensated quickly. Of course, every set-point change results in some oscillations of the process output variables (greater than in the perfect model and disturbance-free case shown in Fig. 4.43), but they are also compensated quickly.
Fig. 4.45 The MIMO process B (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms based on the MIMO Wiener model III
Table 4.13 The MIMO process B (the unmeasured disturbances act on the process and the model is not perfect): comparison of all considered MPC algorithms in terms of the control performance criteria (E2 and EMPC-NO), the sum of internal iterations (SII) and the calculation time
Finally, the MPC-inv algorithm with the inverse model of the static part of the Wiener model is compared with the discussed MPC-NPLPT scheme. Figure 4.46 depicts simulation results obtained for these two MPC methods. Although the MPC-inv scheme works quite well in the perfect model and disturbance-free case, as shown in Fig. 4.44, the process-model mismatch and disturbances lead to quite poor performance of that algorithm. For the second process output, only the overshoot is greater, but for the first one, unwanted oscillations also appear. For the same disturbances and model errors, the discussed MPC-NPLPT algorithm gives much better control. Also in the case of model errors and disturbances, all considered MPC algorithms are compared in Table 4.13 in terms of the performance criteria E2 and EMPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. As in the perfect model and disturbance-free case (Table 4.11), the more advanced the linearisation method, the better the control accuracy.
Fig. 4.46 The MIMO process B (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the MPC-NPLPT and MPC-inv algorithms based on the MIMO Wiener model III
Fig. 4.47 The MIMO process C: the steady-state characteristics y1 (u 1 , u 2 ) and y2 (u 1 , u 2 )
4.6 The MIMO Process C with Two Inputs, Two Outputs and Cross Couplings: Model II

4.6.1 Description of the MIMO Process C

The next considered process is defined by a MIMO Wiener model II with two inputs and two outputs; the number of outputs of the linear dynamic part is nv = 2. The linear part of the process (Eq. (2.1)) is of the fourth order of dynamics (nA = nB = 4) and the coefficients of the model (Eqs. (2.18)–(2.19)) are the same as in the MIMO process A described in Sect. 4.4 (Eqs. (4.12) and (4.13)). The main difference is the fact that now the nonlinear static blocks take into account the cross-couplings (Eq. (2.25)). They are defined by the equations

y1(k) = g1(v1(k), v2(k)) = −exp(−v1(k)) + 1 + 0.075 v2^3(k)   (4.25)

y2(k) = g2(v1(k), v2(k)) = (1/30) v2(k) + (1/150) v2^3(k) − 0.05 v1^3(k) + 0.05 exp(−v1(k))   (4.26)
The steady-state characteristics y1 (u 1 , u 2 ) and y2 (u 1 , u 2 ) of the whole Wiener system are depicted in Fig. 4.47.
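The static nonlinearities (4.25)–(4.26) are straightforward to evaluate numerically. A short Python sketch (Python is used here purely for illustration; the simulations in this book are carried out in MATLAB):

```python
import math

def g1(v1, v2):
    # Eq. (4.25): first static nonlinear block, cross-coupled through v2
    return -math.exp(-v1) + 1.0 + 0.075 * v2**3

def g2(v1, v2):
    # Eq. (4.26): second static nonlinear block, cross-coupled through v1
    return v2 / 30.0 + v2**3 / 150.0 - 0.05 * v1**3 + 0.05 * math.exp(-v1)

# At the origin of the (v1, v2) plane:
print(g1(0.0, 0.0))  # 0.0
print(g2(0.0, 0.0))  # 0.05
```

Because both outputs depend on both v1 and v2, neither channel of the static part can be inverted independently of the other, which is what makes the inverse-model approach demanding for this process.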
4.6.2 Implementation of MPC Algorithms for the MIMO Process C The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The classical MPC-inv algorithm.
3. The MPC-SSL and MPC-NPSL algorithms.
4. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
5. The MPC-NO algorithm.

Next, we briefly give implementation details of all considered algorithms. The parameter-constant linear models used for prediction in the LMPC scheme are obtained for three different operating points (the same as those used for the MIMO processes A and B). The model actually used in LMPC is the linear dynamic part of the Wiener process multiplied by the gains of the nonlinear static blocks at the considered operating points. In general, from Eqs. (3.127)–(3.128) and (4.25)–(4.26), we have

K1,1 = dg1(v1, v2)/dv1 = exp(−v1)   (4.27)

K1,2 = dg1(v1, v2)/dv2 = 0.225 v2^2   (4.28)

K2,1 = dg2(v1, v2)/dv1 = −0.15 v1^2 − 0.05 exp(−v1)   (4.29)

K2,2 = dg2(v1, v2)/dv2 = 1/30 + (1/50) v2^2   (4.30)
For the operating point 1, we obtain

v1 = 1.0269 × 10^−1, K1,1 = 9.0241 × 10^−1, K1,2 = 2.6818 × 10^−1   (4.31)
v2 = −1.0917, K2,1 = −4.6702 × 10^−2, K2,2 = 5.7171 × 10^−2   (4.32)

For the operating point 2, we obtain

v1 = 1.5920, K1,1 = 2.0351 × 10^−1, K1,2 = 2.2022   (4.33)
v2 = −3.1285, K2,1 = −3.9037 × 10^−1, K2,2 = 2.2908 × 10^−1   (4.34)

For the operating point 3, we obtain

v1 = 1.0699, K1,1 = 3.4304 × 10^−1, K1,2 = 2.5976 × 10^−1   (4.35)
v2 = 1.0745, K2,1 = −1.8886 × 10^−1, K2,2 = 5.6423 × 10^−2   (4.36)
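The analytic gain formulas (4.27)–(4.30) can be cross-checked against the numerical values (4.31)–(4.36). A Python sketch (an illustrative aside, not part of the book's MATLAB implementation):

```python
import math

def gains(v1, v2):
    """Gain entries (4.27)-(4.30) of the static blocks, evaluated at (v1, v2)."""
    K11 = math.exp(-v1)                         # dg1/dv1, Eq. (4.27)
    K12 = 0.225 * v2**2                         # dg1/dv2, Eq. (4.28)
    K21 = -0.15 * v1**2 - 0.05 * math.exp(-v1)  # dg2/dv1, Eq. (4.29)
    K22 = 1.0 / 30.0 + v2**2 / 50.0             # dg2/dv2, Eq. (4.30)
    return K11, K12, K21, K22

# Operating point 1, Eqs. (4.31)-(4.32): v1 = 0.10269, v2 = -1.0917
K11, K12, K21, K22 = gains(0.10269, -1.0917)
print(abs(K11 - 9.0241e-1) < 1e-4)  # True
print(abs(K22 - 5.7171e-2) < 1e-4)  # True
```

The same four formulas are reused, with the signals replaced by their time-varying or trajectory counterparts, in Eqs. (4.37)–(4.48).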
The time-varying gains of the nonlinear static blocks are calculated in the MPC-SSL and MPC-NPSL algorithms as

K1,1(k) = dg1(v1(k), v2(k))/dv1(k) = exp(−v1(k))   (4.37)

K1,2(k) = dg1(v1(k), v2(k))/dv2(k) = 0.225 v2^2(k)   (4.38)

K2,1(k) = dg2(v1(k), v2(k))/dv1(k) = −0.15 v1^2(k) − 0.05 exp(−v1(k))   (4.39)

K2,2(k) = dg2(v1(k), v2(k))/dv2(k) = 1/30 + (1/50) v2^2(k)   (4.40)
where v1(k) and v2(k) are the model signals. In the MPC-NPLT1 and MPC-NPLT2 algorithms, the entries of the derivative matrix H(k) are computed from Eq. (3.228). For the nonlinear block (4.25)–(4.26), we have

dg1(v1^traj(k+p|k), v2^traj(k+p|k))/dv1^traj(k+p|k) = exp(−v1^traj(k+p|k))   (4.41)

dg1(v1^traj(k+p|k), v2^traj(k+p|k))/dv2^traj(k+p|k) = 0.225 (v2^traj(k+p|k))^2   (4.42)

dg2(v1^traj(k+p|k), v2^traj(k+p|k))/dv1^traj(k+p|k) = −0.15 (v1^traj(k+p|k))^2 − 0.05 exp(−v1^traj(k+p|k))   (4.43)

dg2(v1^traj(k+p|k), v2^traj(k+p|k))/dv2^traj(k+p|k) = 1/30 + (1/50) (v2^traj(k+p|k))^2   (4.44)
For the MPC-NPLPT algorithm, to calculate the matrix H^t(k), we use Eq. (3.288), which gives

dg1(v1^{t−1}(k+p|k), v2^{t−1}(k+p|k))/dv1^{t−1}(k+p|k) = exp(−v1^{t−1}(k+p|k))   (4.45)

dg1(v1^{t−1}(k+p|k), v2^{t−1}(k+p|k))/dv2^{t−1}(k+p|k) = 0.225 (v2^{t−1}(k+p|k))^2   (4.46)

dg2(v1^{t−1}(k+p|k), v2^{t−1}(k+p|k))/dv1^{t−1}(k+p|k) = −0.15 (v1^{t−1}(k+p|k))^2 − 0.05 exp(−v1^{t−1}(k+p|k))   (4.47)

dg2(v1^{t−1}(k+p|k), v2^{t−1}(k+p|k))/dv2^{t−1}(k+p|k) = 1/30 + (1/50) (v2^{t−1}(k+p|k))^2   (4.48)
Two neural networks of the MLP type are used as inverse models of the static part of the model in the MPC-inv algorithm. They approximate the functions v1 = g˜ 1 (y1 , y2 ) and v2 = g˜ 2 (y1 , y2 ), respectively. Neural networks with one hidden layer containing 100 and 120 hidden nodes, respectively, and linear output layers are used.
The nonlinear units use the tanh activation function. Let us stress that many hidden nodes must be used to approximate precisely the inverse functions. For lower numbers of hidden nodes, the inverse models have insufficient quality.
4.6.3 MPC of the MIMO Process C

Parameters of all compared MPC algorithms are the same: N = 10, Nu = 3, μp,1 = 1 and μp,2 = 5 for p = 1, . . . , N, λ = λp,1 = λp,2 = 0.5 for p = 0, . . . , Nu − 1; the constraints imposed on the manipulated variables are u1^min = u2^min = −1.5, u1^max = u2^max = 1.5. We only consider the case of no modelling errors and no disturbances. Simulation results of the LMPC algorithm are depicted in Fig. 4.48. For prediction, three different models, obtained for different operating points, are used. The obtained control quality is the best when model 2 is used but, generally, due to the nonlinearity of the process, the LMPC algorithm is unable to provide good control. Figure 4.49 presents simulation results of the simple MPC algorithms with on-line model linearisation, i.e. the MPC-NPSL and MPC-SSL ones; the trajectories of the MPC-NO strategy are given for reference. In general, similarly to the MIMO process B case (Fig. 4.42), the simple MPC methods with model linearisation do not give very good results. The MPC-SSL strategy gives significant overshoot and the MPC-NPSL algorithm, for the default values of the coefficients λp,1 and λp,2, does not work starting from the third set-point change, i.e. the process outputs do not reach the desired set-points. Increased coefficients λp,1 and λp,2 solve the problem, but the trajectories are relatively slow. Next, let us consider the more advanced MPC algorithms with on-line trajectory linearisation. Figure 4.50 depicts the trajectories obtained when the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms are used; the trajectories of the MPC-NO strategy are given for reference. The additional parameters of the MPC-NPLPT algorithm are: δ = δu = δy = 0.1, N0 = 2, and the maximal number of internal iterations is 5.
Similarly to all previously discussed benchmark systems, in particular the MIMO ones, the single linearisation used in the MPC-NPLT1 and MPC-NPLT2 algorithms is insufficient: in such cases, we obtain trajectories slightly slower than those possible in the MPC-NO method. The MPC-NPLPT algorithm, with up to five repetitions of trajectory linearisation and quadratic optimisation at every sampling instant, gives very good results; the obtained trajectories are practically the same as in the MPC-NO method. Finally, let us consider simulation results of the MPC-inv algorithm shown in Fig. 4.51. Three different values of the parameter λ are considered: 0.5 (the default value), 5 and 50. Although for the first, the second and the fourth set-point changes the algorithm works, i.e. the process outputs reach the required values, for the third set-point the process does not converge to any steady state. Increasing the value of the parameter λ slows down the trajectories, but still no steady state is reached after the third set-point step. Let us stress the fact that we consider the simplest ideal
Fig. 4.48 The MIMO process C: simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
Fig. 4.49 The MIMO process C: simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms, λ = λ p,1 = λ p,2 for p = 0, . . . , Nu − 1
Fig. 4.50 The MIMO process C: simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms
Fig. 4.51 The MIMO process C: simulation results of the MPC-inv algorithm for different values of the parameter λ
Table 4.14 The MIMO process C: comparison of all considered MPC algorithms in terms of the control performance criteria (E2 and EMPC-NO), the sum of internal iterations (SII) and the calculation time
model and disturbance-free case. In all previously discussed simulation studies, the MPC-inv algorithm works in such a case, although it gives much worse control when the model is not perfect and some disturbances are present. In the considered example, unlike the previous ones, the Wiener process has strong interactions. It turns out that full cancellation of the process nonlinearity by means of the inverse model of the static model block is not possible. All considered MPC algorithms are compared in Table 4.14 in terms of the performance criteria E2 and EMPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. As always, the more advanced the linearisation method, the better the control accuracy. Moreover, the calculation time of the simple MPC algorithms with model linearisation is below 20% of that necessary in the MPC-NO scheme, and the calculation time of the more advanced MPC algorithms with trajectory linearisation is below 22% of that necessary in the MPC-NO scheme (for the chosen parameters δ = δu = δy = 0.1).
Table 4.15 The SISO process, the MIMO processes A and B: comparison of all considered MPC algorithms in terms of the calculation time (%) for different prediction and control horizons
4.7 The Influence of Process Dimensionality on the Calculation Time

Up till now, we have studied the influence of the prediction and control horizons on the relative calculation time for all considered MPC algorithms. The results are given in Tables 4.2, 4.8 and 4.12 for the SISO case and the MIMO cases A and B, respectively. We have analysed the results separately; for each process, the computational time of the most demanding MPC-NO scheme with the default parameters is scaled to 100%. Let us consider Table 4.15, in which we compare the discussed MPC algorithms in terms of the calculation time for different prediction and control horizons. We consider the results for three benchmarks: the SISO process and the MIMO processes A and B. All results are scaled in such a way that the computational time for the SISO process controlled by the MPC-NO algorithm with the default values of the horizons (N = 10, Nu = 3) corresponds to 100%. We may note that the MPC-NO algorithm is characterised by a huge calculation time, but let us concentrate on the MPC schemes with linearisation. For the MIMO process A, we observe a very moderate increase of the computational time when compared with the SISO case, although the decision vector of MPC is two times longer. The most interesting results are obtained for the MIMO process B, which has ten inputs, which means that the decision vector is ten times longer than in the SISO case, or five times longer than in the MIMO case A. Nevertheless, despite this fact, we obtain a very moderate increase of the computational time in the MIMO case B for the default values of the prediction and control horizons. That ratio is less favourable for longer control horizons, which is straightforward. All things considered, we may conclude that the MPC algorithms with on-line model or trajectory linearisation scale very well as the dimensionality of the process grows.
Acknowledgements Figures 4.1, 4.3, 4.5, 4.26, 4.28, and 4.30 reprinted from: Ławryńczuk, M., Tatjewski, P.: Offset-free state-space nonlinear predictive control for Wiener systems. Information Sciences, vol. 511, pp. 127–151, Copyright (2020), with permission from Elsevier.
References 1. van Donkelaar, E.T., Bosgra, O.H., Van den Hof, P.M.J.: Model predictive control with generalized input parametrization. In: Proceedings of the European Control Conference, ECC 1999, pp. 443–454. Karlsruhe, Germany (1999). CD-ROM, paper F0599 2. Garriga, J.L., Soroush, M.: Model predictive control tuning methods: a review. Ind. Eng. Chem. Res. 49, 3505–3515 (2010) 3. Haykin, S.: Neural Networks and Learning Machines. Pearson Education, Upper Saddle River, New Jersey (2009) 4. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366 (1989)
5. Ławryńczuk, M.: Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation. ISA Trans. 67, 476–495 (2017) 6. Ławryńczuk, M.: Constrained computationally efficient nonlinear predictive control of Solid Oxide Fuel Cell: Tuning, feasibility and performance. ISA Trans. 99, 270–289 (2020) 7. Maciejowski, J.: Predictive Control with Constraints. Prentice Hall, Harlow (2002) 8. Nebeluk, R., Ławryńczuk, M.: Tuning of multivariable model predictive control for industrial tasks. Algorithms 14, 10 (2021) 9. Nebeluk, R., Marusak, P.M.: Efficient MPC algorithms with variable trajectories of parameters weighting predicted control errors. Arch. Control Sci. 30, 325–363 (2020) 10. Osowski, S.: Neural Networks for Information Processing (in Polish). Warsaw University of Technology Press, Warsaw (2006) 11. Ripley, B.D.: Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge (1996) 12. Rutkowski, L.: Methods and Techniques of Artificial Intelligence (in Polish). PWN, Warsaw (2005) 13. Sawulski, J., Ławryńczuk, M.: Optimisation-based tuning of Dynamic Matrix Control algorithm for multiple-input multiple-output processes, pp. 160–165. Międzyzdroje, Poland (2018) 14. Tadeusiewicz, R.: Neural Networks (in Polish). Academic Publishing House, Warsaw (1993) 15. Tatjewski, P.: Advanced Control of Industrial Processes, Structures and Algorithms. Springer, London (2007) 16. Wang, L.: Discrete model predictive controller design using Laguerre functions. J. Process Control 14, 131–142 (2004)
Chapter 5
Modelling and MPC of the Neutralisation Reactor Using Wiener Models
Abstract This chapter discusses simulation results of MPC algorithms based on Wiener models applied to a neutralisation reactor. First, the process is briefly described and identification of the Wiener model is discussed. Polynomials and neural networks are used in the nonlinear static block of the model, and the effectiveness of both model classes is compared. Implementation details of different MPC algorithms are given. Next, the MPC algorithms are compared in terms of control quality and computational time in the classical set-point following task; additionally, some constraints are imposed on the predicted value of the controlled variable.
5.1 Description of the Neutralisation Reactor

Let us consider a neutralisation (pH) reactor [8]. The reactor is schematically shown in Fig. 5.1. A base (NaOH) stream q1, a buffer (NaHCO3) stream q2 and an acid (HNO3) stream q3 are mixed in a constant-volume tank. The process has one input variable, the base flow rate q1 (ml/s), and one output variable, the value of pH. Changes of the buffer and acid streams may be treated as disturbances of the process, but in this chapter they are assumed to be constant. The continuous-time fundamental model of the process comprises two ordinary differential equations

dWa(t)/dt = q1(t)(Wa1 − Wa(t))/V + q2(Wa2 − Wa(t))/V + q3(Wa3 − Wa(t))/V   (5.1)

dWb(t)/dt = q1(t)(Wb1 − Wb(t))/V + q2(Wb2 − Wb(t))/V + q3(Wb3 − Wb(t))/V   (5.2)

and one algebraic output equation

Wa(t) + 10^{pH(t)−14} − 10^{−pH(t)} + Wb(t) (1 + 2 × 10^{pH(t)−K2})/(1 + 10^{K1−pH(t)} + 10^{pH(t)−K2}) = 0   (5.3)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_5
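Given Wa(t) and Wb(t), Eq. (5.3) defines pH(t) only implicitly, so it must be solved numerically, e.g. by bisection. A Python sketch; the values of Wa, Wb, K1 and K2 used below are commonly quoted literature values for this benchmark, assumed here for illustration rather than copied from Tables 5.1–5.2:

```python
def residual(pH, Wa, Wb, K1, K2):
    # Left-hand side of Eq. (5.3); its root is the pH value
    num = 1.0 + 2.0 * 10.0**(pH - K2)
    den = 1.0 + 10.0**(K1 - pH) + 10.0**(pH - K2)
    return Wa + 10.0**(pH - 14.0) - 10.0**(-pH) + Wb * num / den

def solve_pH(Wa, Wb, K1, K2, lo=1.0, hi=13.0, iters=60):
    # Bisection: assumes the residual changes sign on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(lo, Wa, Wb, K1, K2) * residual(mid, Wa, Wb, K1, K2) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative reaction invariants (placeholders, not the book's Table 5.2 values)
pH = solve_pH(Wa=-4.32e-4, Wb=5.28e-4, K1=6.35, K2=10.25)
print(abs(residual(pH, -4.32e-4, 5.28e-4, 6.35, 10.25)) < 1e-8)  # True
```

In MPC, this root-finding step must be repeated at every prediction step if the fundamental model is used, which motivates the empirical input-output models discussed in Sect. 5.2.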
Fig. 5.1 The neutralisation reactor control system structure

Table 5.1 The neutralisation reactor: the parameters of the first-principle model
Table 5.2 The neutralisation reactor: the nominal operating point
State variables Wa and Wb are reaction invariants. The parameters of the above first-principle model are given in Table 5.1. The values of the process variables at the nominal operating point are given in Table 5.2. Figure 5.2 depicts the structure of the continuous-time fundamental model of the neutralisation reactor in Simulink. It may be used to act as the simulated process. Of course, for this purpose the differential equations (5.1), (5.2) and the nonlinear relation (5.3) may also be solved directly in MATLAB, without the necessity of using Simulink. Example step responses of the process are depicted in Fig. 5.3. The excitation signal is

q1(t) = q̄1 if t < 50 s, q1(t) = q̄1 + δq1 if t ≥ 50 s   (5.4)

where q̄1 denotes the value of the variable q1 at the nominal operating point. It is clear that both the steady-state and dynamic properties of the pH reactor are nonlinear.
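The remark that Simulink is not strictly necessary can be illustrated by a plain explicit-Euler integration of (5.1)–(5.2), with (5.3) solved by bisection to obtain the pH after the input step (5.4). The sketch below is in Python rather than MATLAB, and all parameter values are assumed literature values for this benchmark, not entries copied from Tables 5.1–5.2:

```python
# Explicit-Euler simulation of the reaction invariants Wa, Wb (Eqs. (5.1)-(5.2))
# with the pH computed from Eq. (5.3) by bisection.
# All parameter values below are assumed literature values for this benchmark.
V = 2900.0                                   # tank volume (ml)
Wa1, Wa2, Wa3 = -3.05e-3, -3.0e-2, 3.0e-3    # acid invariants of the inlet streams
Wb1, Wb2, Wb3 = 5.0e-5, 3.0e-2, 0.0          # base invariants of the inlet streams
K1, K2 = 6.35, 10.25                         # equilibrium constants (pK values)
q2, q3 = 0.55, 16.60                         # buffer and acid flow rates (ml/s)

def pH_of(Wa, Wb):
    # Solve Eq. (5.3) for pH by bisection on [1, 13]
    def F(pH):
        den = 1.0 + 10.0**(K1 - pH) + 10.0**(pH - K2)
        return (Wa + 10.0**(pH - 14.0) - 10.0**(-pH)
                + Wb * (1.0 + 2.0 * 10.0**(pH - K2)) / den)
    lo, hi = 1.0, 13.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def simulate(q1_fun, Wa0, Wb0, t_end=500.0, dt=0.1):
    # Integrate Eqs. (5.1)-(5.2) with the explicit Euler method
    Wa, Wb, t = Wa0, Wb0, 0.0
    while t < t_end:
        q1 = q1_fun(t)
        dWa = (q1 * (Wa1 - Wa) + q2 * (Wa2 - Wa) + q3 * (Wa3 - Wa)) / V
        dWb = (q1 * (Wb1 - Wb) + q2 * (Wb2 - Wb) + q3 * (Wb3 - Wb)) / V
        Wa, Wb, t = Wa + dt * dWa, Wb + dt * dWb, t + dt
    return pH_of(Wa, Wb)

print(round(pH_of(-4.32e-4, 5.28e-4), 1))  # 7.0 (assumed nominal invariants)

# Step excitation (5.4): q1 jumps from 15.55 to 20.55 ml/s at t = 50 s
pH_final = simulate(lambda t: 15.55 if t < 50.0 else 20.55,
                    Wa0=-4.32e-4, Wb0=5.28e-4)
print(2.0 < pH_final < 12.0)  # True: the pH settles at a new, higher steady state
```

A production implementation would use a higher-order solver (the book uses a Runge–Kutta method, see Sect. 5.2), but even this crude integration reproduces the qualitative behaviour of the step responses.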
Fig. 5.2 The neutralisation reactor: the structure of the continuous-time fundamental model in Simulink
Firstly, for the positive and negative steps, the process gains are different and the gains depend on the amplitude of the input step. Secondly, time-constants of all the steps are different. The steady-state characteristic of the neutralisation reactor is depicted in Fig. 5.4. In general, good control of the neutralisation process is necessary in chemical engineering, biotechnology and waste-water treatment industries [11]. Since both steady-state and dynamic properties of the neutralisation process are nonlinear, it is difficult to control by the classical linear control methods (e.g. PID), in particular when the set-point or other operating conditions change significantly and fast. In addition to its industrial importance, the neutralisation process is a classical benchmark used to evaluate different nonlinear model structures and control methods. Due to nonlinearity of the process, adaptive control techniques may be used, in particular, a model reference adaptive neural network control strategy [20], an adaptive nonlinear output feedback control scheme containing an input-output linearising controller and a nonlinear observer [10], an adaptive nonlinear Internal Model Controller (IMC) [14] and an adaptive backstepping state feedback controller [25]. An alternative is to use multi-model controllers, e.g. a multi-model PID controller based on a set of simple linear dynamical models [2], a multi-model robust H∞ controller [7], or fuzzy
Fig. 5.3 The neutralisation reactor: example step responses
Fig. 5.4 The neutralisation reactor: the steady-state characteristic
structures, e.g. a fuzzy PI controller [6], a fuzzy PID controller [12] and a fuzzy IMC structure [13]. An adaptive fuzzy sliding mode controller is presented in [3] and a nonlinear IMC structure is discussed in [22]. Other options are: a neural network linearising scheme cooperating with a PID controller [20], a model-free learning controller using reinforcement learning [24] and an approximate multi-parametric nonlinear MPC controller [9]. Of course, the neutralisation process may also be controlled by MPC algorithms. A multiple-model control strategy based on a set of classical linear MPC controllers is described in [4, 7]. A neural network trained off-line to mimic the nonlinear MPC algorithm may also be used [1]. A continuous-time MPC algorithm using a piecewise-linear approximation, which simplifies implementation, is discussed in [23]. When a nonlinear model is used directly in MPC for prediction, we obtain the
nonlinear MPC-NO optimisation problem, solved on-line at each sampling instant. Applications of the MPC-NO algorithm to the neutralisation process are reported in [19, 21]. An application of the neural Wiener model (a network of the MLP type is used as the static nonlinear part of the model) in the MPC-NPSL, MPC-NPLT and MPC-NPLPT algorithms is described in [16]. An interesting alternative to the neural network is the LS-SVM nonlinear approximator discussed in [17]. An excellent review of possible MPC approaches to the neutralisation process is given in [11]. Although the neutralisation reactor is typically considered in the SISO configuration, in some studies the MIMO version of the process is used. A version of the MPC-NPSL algorithm in which the model is not linearised in the simplified way, but the full linear approximation is calculated from the Taylor expansion, is described in [15, 18]. Unlike in numerous works, not the Wiener but the Hammerstein model structure is used there. Although the model accuracy is worse in comparison with that of the Wiener one, the resulting MPC algorithm works very well; all inaccuracies are compensated by the negative feedback mechanism present in MPC. Finally, a multilayer control system structure may be used in which the optimal set-points for the MPC algorithm are calculated on-line from an additional set-point optimisation problem [15].
5.2 Modelling of the Neutralisation Reactor for MPC

In the case of the neutralisation reactor, we will use for prediction in MPC some empirical input-output models, not the fundamental state-space model. If the fundamental model were used in MPC, it would be necessary to repeatedly solve on-line the state differential equations (5.1)–(5.2) and the nonlinear algebraic output equation (5.3). In order to find Wiener models, the fundamental state-space model is used to generate 2000 samples of the process output variable when a series of steps of random amplitude is applied as the input signal. The Runge–Kutta algorithm of order 45 is used to solve the differential equations. The sampling time is Ts = 10 s. Two sets of data are generated: the training data set and the validation one. Figure 5.5 depicts the manipulated and controlled variables from the first and the second sets, respectively. For model identification, the process variables are scaled in the following way

u = (q1 − q̄1)/15, y = 0.2(pH − p̄H)   (5.5)

where q̄1 and p̄H denote the values of the process variables at the nominal operating point (Table 5.2). We consider the following models of the pH reactor: (a) the linear model, (b) the Wiener model with a polynomial static nonlinear block, (c) the Wiener model with a neural static nonlinear block. All dynamical models have the second order of dynamics [16], which means that the linear model is

y(k) = b1 u(k − 1) + b2 u(k − 2) − a1 y(k − 1) − a2 y(k − 2)
(5.6)
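The scaling (5.5) and its inverse reduce to two small helper functions; a Python sketch in which the nominal values q̄1 = 15.55 ml/s and p̄H = 7.0 are assumptions made for illustration (Table 5.2 is the authoritative source):

```python
Q1_NOM, PH_NOM = 15.55, 7.0   # assumed nominal operating point (cf. Table 5.2)

def scale(q1, pH):
    """Eq. (5.5): map the process variables to the model's (u, y) coordinates."""
    return (q1 - Q1_NOM) / 15.0, 0.2 * (pH - PH_NOM)

def unscale(u, y):
    """Inverse of Eq. (5.5): map model coordinates back to process variables."""
    return 15.0 * u + Q1_NOM, 5.0 * y + PH_NOM

u, y = scale(30.55, 9.5)          # u = 1, y = 0.5 (up to floating-point rounding)
q1_back, pH_back = unscale(u, y)  # recovers (30.55, 9.5)
```

Such scaling keeps both model signals of comparable magnitude, which improves the numerical conditioning of the identification.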
Fig. 5.5 The neutralisation reactor: open-loop simulations (the training and validation data sets)
and the dynamic blocks of both types of the Wiener model are v(k) = b1 u(k − 1) + b2 u(k − 2) − a1 v(k − 1) − a2 v(k − 2)
(5.7)
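The dynamic block (5.7) followed by a static nonlinear block can be simulated by a simple recursion. A minimal Python sketch with made-up coefficients and a quadratic polynomial static part (the identified parameter values are not reproduced here):

```python
def simulate_wiener(u, b1, b2, a1, a2, c):
    """Simulate v(k) = b1 u(k-1) + b2 u(k-2) - a1 v(k-1) - a2 v(k-2)  (Eq. (5.7))
    followed by a polynomial static block y(k) = sum_i c[i] * v(k)**i."""
    u_prev1 = u_prev2 = 0.0   # zero initial conditions
    v_prev1 = v_prev2 = 0.0
    y = []
    for uk in u:
        vk = b1 * u_prev1 + b2 * u_prev2 - a1 * v_prev1 - a2 * v_prev2
        y.append(sum(ci * vk**i for i, ci in enumerate(c)))
        u_prev2, u_prev1 = u_prev1, uk
        v_prev2, v_prev1 = v_prev1, vk
    return y

# Unit step through an example first-order-like block and a quadratic static part
y = simulate_wiener([1.0] * 5, b1=0.5, b2=0.0, a1=-0.5, a2=0.0, c=[0.0, 0.0, 1.0])
print(y)  # [0.0, 0.25, 0.5625, 0.765625, 0.87890625]
```

The key point of the Wiener structure is visible in the code: the recursion in v is linear, and the nonlinearity acts only on the scalar output of the dynamic part.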
In the first structure of the Wiener model, we use polynomials in the nonlinear static part of the model. Such polynomial Wiener models may be determined in MATLAB from the input-output data shown in Fig. 5.5 by means of the function nlhw, which may be used to find general Hammerstein-Wiener models (a linear dynamic part sandwiched between two nonlinear static blocks), the structure of which is shown in Fig. 2.8. The syntax of the nlhw function is

sys = nlhw(Data,Orders,InputNL,OutputNL)

The estimated model is returned as the structure sys. Data is the data set used for model identification, Orders specifies the delay and the order of dynamics of the linear dynamic block, InputNL and OutputNL determine the types of static nonlinear approximators used in the input and output nonlinear static blocks. A few variants of nonlinear blocks are possible: piecewise linear functions, sigmoid or custom networks defined by the user, wavelet networks, saturations, dead zones, polynomials and constant unit gains. To obtain a Hammerstein model, a unit gain must
be chosen as OutputNL. Conversely, to obtain a Wiener model, a unit gain must be chosen as InputNL. The nonlinear part of the polynomial Wiener model is

y(k) = g(v(k)) = Σ_{i=0}^{K} ci v^i(k)   (5.8)
where K denotes the degree of the polynomial and ci are its coefficients. In the second structure of the Wiener model, we use a sigmoid neural network as the nonlinear static part of the model. Such neural Wiener models may also be determined in MATLAB from input-output data by means of the function nlhw. The nonlinear part of the neural Wiener model is

y(k) = g(v(k)) = d + PL(v(k) − r) + Σ_{i=1}^{K} ai φ((v(k) − r)Q bi + ci)   (5.9)

where the transfer function is the sigmoid one

φ(v(k)) = 1/(1 + exp(−v(k)))   (5.10)
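The nonlinear block (5.9) with the sigmoid (5.10) is easy to evaluate directly; a Python sketch with arbitrary illustrative parameters (not the identified ones):

```python
import math

def sigmoid(x):
    # Eq. (5.10)
    return 1.0 / (1.0 + math.exp(-x))

def neural_static(v, d, P, L, Q, r, a, b, c):
    """Eq. (5.9): y = d + P*L*(v - r) + sum_i a[i]*sigmoid((v - r)*Q*b[i] + c[i])."""
    return d + P * L * (v - r) + sum(
        ai * sigmoid((v - r) * Q * bi + ci) for ai, bi, ci in zip(a, b, c))

# With all output weights a[i] = 0, only the linear part d + P*L*(v - r) remains
y = neural_static(2.0, d=1.0, P=1.0, L=0.5, Q=1.0, r=0.0,
                  a=[0.0], b=[1.0], c=[0.0])
print(y)             # 1.0 + 0.5*2.0 = 2.0
print(sigmoid(0.0))  # 0.5
```

The linear term in (5.9) guarantees that even a network with few nonlinear nodes can represent an approximately linear characteristic exactly, which is one reason this structure extrapolates better than a pure polynomial.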
For the SISO process, the scalar parameters are: a linear coefficient L, the linear subspace P, the nonlinear subspace Q, the offset d and a mean value of the data r; K is the number of nonlinear nodes. The remaining parameters are ai, bi and ci. We may simplify the notation by using the following representation of the nonlinear block

y(k) = g(v(k)) = d^nn + l^nn(v(k) − r^nn) + Σ_{i=1}^{K} ai^nn φ((v(k) − r^nn) bi^nn + ci^nn)   (5.11)

where l^nn = PL and bi^nn = Q bi for i = 1, . . . , K are scalars; auxiliary superscripts "nn" are added to all parameters of the nonlinear block. Table 5.3 gives the values of the model errors for different model configurations. All errors are defined as the sum of squared differences between the data samples and the outputs of the model over the whole data sets [5]. To select the model finally used in MPC, we take into account the validation errors for dynamic data (Eval) and the validation errors for steady-state data (Evss). The steady-state error is calculated for 200 equidistant points in the domain of q1. The linear model has large errors. As far as the polynomial Wiener model is concerned, structures of degree K = 2, 3, . . . , 9, 10, 15, 20 are compared. The model with the polynomial degree K = 5 is chosen because it gives a good compromise between accuracy and complexity; moreover, increasing the degree of the polynomial does not give better results. As far as the neural Wiener model is concerned, structures with K = 1, 2, . . . , 10, 15, 20 hidden nodes are compared. In general, the neural Wiener models are more precise than the polynomial ones. Furthermore, for the polynomials
Table 5.3 The neutralisation reactor: comparison of the linear model, polynomial Wiener models of degree K and neural Wiener models containing K hidden nodes in terms of the number of parameters (npar), errors for dynamic data (Etrain and Eval denote the errors for the training and validation data sets, respectively) and errors for the validation steady-state data (Evss); for the Wiener models the numbers of training epochs (ntrain) are given
of high degree, in particular for K = 10, 15, 20, large errors are obtained, whereas the neural Wiener models are much more precise. It is important that, in the case of the neural Wiener models, the errors do not grow significantly when the number of hidden nodes is increased. The neural Wiener model with five hidden units is chosen since it gives very low values of errors and has a moderate number of parameters (22). Figure 5.6 depicts the dynamic validation data set versus the outputs of four models (the linear model, the polynomial Wiener model of the degree K = 5, the polynomial Wiener model of the degree K = 15 and the neural Wiener model containing K = 5 hidden nodes). Figure 5.7 depicts the relation between the validation data and the
5.2 Modelling of the Neutralisation Reactor for MPC
Fig. 5.6 The neutralisation reactor: the validation data set versus the outputs of four models (the linear model, the polynomial Wiener model of the degree K = 5, the polynomial Wiener model of the degree K = 15 and the neural Wiener model containing K = 5 hidden nodes)
Fig. 5.7 The neutralisation reactor: the relation between the validation data and the outputs of four models (the linear model, the polynomial Wiener model of the degree K = 5, the polynomial Wiener model of the degree K = 15 and the neural Wiener model containing K = 5 hidden nodes)
outputs of the compared models. As the numerical data indicate, the linear model is very imprecise, the polynomial Wiener model of the degree K = 5 is good, the polynomial Wiener model of the degree K = 15 is very bad and the neural Wiener model containing K = 5 hidden nodes is excellent. Finally, it is interesting to compare the real steady-state characteristic of the neutralisation reactor versus the characteristics of its empirical models. Such a comparison is shown in Fig. 5.8 for the linear model, the polynomial Wiener models with different degrees of the polynomial and the neural Wiener models with different numbers of hidden nodes. The obtained results correspond to the values of the steady-state error E_val^ss given in Table 5.3. We can see that the neural Wiener models make it possible to achieve very good steady-state modelling. This is practically impossible for the polynomial Wiener models: when the degree of the polynomial is low, the steady-state characteristic of the model does not have enough degrees of freedom; when the degree of the polynomial is high, numerical problems occur (ill-conditioning).
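The static nonlinear block of the neural Wiener model, Eq. (5.11), is straightforward to evaluate directly. The following minimal Python sketch illustrates the structure; the parameter values are purely illustrative placeholders, not the coefficients identified for the neutralisation reactor.

```python
import math

# Hypothetical parameters of the nonlinear block (5.11); illustrative only.
K = 5                                   # number of hidden nodes
d_nn, l_nn, r_nn = 7.0, 0.5, 0.0        # offset, linear coefficient, data mean
a_nn = [1.2, -0.8, 0.5, 0.3, -0.4]      # output weights a_i^nn
b_nn = [0.9, 1.1, -0.7, 0.6, 0.8]       # input weights b_i^nn
c_nn = [0.1, -0.2, 0.3, 0.0, -0.1]      # biases c_i^nn

def phi(z):
    """Sigmoid transfer function (5.10)."""
    return 1.0 / (1.0 + math.exp(-z))

def g(v):
    """Static nonlinear block of the neural Wiener model, Eq. (5.11)."""
    s = d_nn + l_nn * (v - r_nn)
    for i in range(K):
        s += a_nn[i] * phi((v - r_nn) * b_nn[i] + c_nn[i])
    return s
```

In the identified model, v(k) is the output of the linear dynamic block and g(v(k)) yields the predicted pH.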
Fig. 5.8 The neutralisation reactor: the steady-state characteristic of the process versus the characteristics of different models (the linear model, the polynomial Wiener models with different degrees of the polynomial (K) and the neural Wiener models with different numbers of hidden nodes (K))
5.3 Implementation of MPC Algorithms for the Neutralisation Reactor

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The classical MPC-inv algorithm.
3. The MPC-SSL and MPC-NPSL algorithms.
4. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
5. The MPC-NO algorithm.

Next, we briefly present the implementation details of all considered algorithms. The LMPC algorithm uses for prediction the constant-parameter linear model (5.6), whereas all other algorithms use the polynomial and neural Wiener structures. In the MPC-SSL and MPC-NPSL algorithms, the gain of the nonlinear static block is calculated successively on-line, at each sampling instant. The time-varying gain is computed from the general formula (3.70), taking into account the structure of the Wiener model. For the polynomial Wiener structure defined by Eq. (5.8), we have

$$K(k) = \frac{\mathrm{d}g(v(k))}{\mathrm{d}v(k)} = \sum_{i=1}^{K} i c_i v^{i-1}(k) \quad (5.12)$$

When the neural Wiener model given by Eq. (5.11) is used, we obtain

$$K(k) = \frac{\mathrm{d}g(v(k))}{\mathrm{d}v(k)} = l^{\mathrm{nn}} + \sum_{i=1}^{K} a_i^{\mathrm{nn}} \frac{\mathrm{d}\varphi(z_i(k))}{\mathrm{d}z_i(k)} b_i^{\mathrm{nn}} \quad (5.13)$$

where $z_i(k) = (v(k) - r^{\mathrm{nn}}) b_i^{\mathrm{nn}} + c_i^{\mathrm{nn}}$ for $i = 1, \ldots, K$. For the sigmoid transfer function (5.10), the derivative is

$$\frac{\mathrm{d}\varphi(z_i(k))}{\mathrm{d}z_i(k)} = \frac{\exp(-z_i(k))}{\big(1 + \exp(-z_i(k))\big)^2} = \varphi(z_i(k))\big(1 - \varphi(z_i(k))\big) \quad (5.14)$$

Hence, the time-varying gain of the nonlinear static block is

$$K(k) = l^{\mathrm{nn}} + \sum_{i=1}^{K} a_i^{\mathrm{nn}} \frac{\exp(-z_i(k))}{\big(1 + \exp(-z_i(k))\big)^2} b_i^{\mathrm{nn}} \quad (5.15)$$
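The on-line gain computation of Eqs. (5.12)–(5.15) can be sketched as follows; the identity in Eq. (5.14) makes the sigmoid derivative cheap to evaluate. The parameter values passed to these functions are placeholders, to be replaced by the identified model coefficients.

```python
import math

def phi(z):
    """Sigmoid transfer function (5.10)."""
    return 1.0 / (1.0 + math.exp(-z))

def dphi(z):
    """Derivative (5.14): exp(-z)/(1+exp(-z))^2 = phi(z)(1 - phi(z))."""
    return phi(z) * (1.0 - phi(z))

def gain(v, l_nn, r_nn, a_nn, b_nn, c_nn):
    """Time-varying gain K(k) of the neural static block, Eq. (5.15)."""
    s = l_nn
    for a, b, c in zip(a_nn, b_nn, c_nn):
        z = (v - r_nn) * b + c
        s += a * dphi(z) * b
    return s

def gain_poly(v, c_poly):
    """Time-varying gain of the polynomial static block, Eq. (5.12);
    c_poly[i-1] holds the coefficient c_i."""
    return sum(i * c * v ** (i - 1) for i, c in enumerate(c_poly, start=1))
```

At each sampling instant, the MPC-SSL and MPC-NPSL algorithms call `gain` (or `gain_poly`) with the current value of v(k) to update the linearised model.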
In the MPC-NPLT1 and MPC-NPLT2 algorithms, the entries of the derivative matrix H(k) are computed from Eq. (3.211). When the polynomial (5.8) is used in the nonlinear block, we have

$$\frac{\mathrm{d}g(v^{\mathrm{traj}}(k+p|k))}{\mathrm{d}v^{\mathrm{traj}}(k+p|k)} = \sum_{i=1}^{K} i c_i \big(v^{\mathrm{traj}}(k+p|k)\big)^{i-1} \quad (5.16)$$

When the neural network (5.11) is used in the nonlinear block, we have

$$\frac{\mathrm{d}g(v^{\mathrm{traj}}(k+p|k))}{\mathrm{d}v^{\mathrm{traj}}(k+p|k)} = l^{\mathrm{nn}} + \sum_{i=1}^{K} a_i^{\mathrm{nn}} \frac{\mathrm{d}\varphi(z_i^{\mathrm{traj}}(k+p|k))}{\mathrm{d}z_i^{\mathrm{traj}}(k+p|k)} b_i^{\mathrm{nn}} \quad (5.17)$$

where $z_i^{\mathrm{traj}}(k+p|k) = (v^{\mathrm{traj}}(k+p|k) - r^{\mathrm{nn}}) b_i^{\mathrm{nn}} + c_i^{\mathrm{nn}}$. The partial derivative, similarly to Eq. (5.14), is

$$\frac{\mathrm{d}\varphi(z_i^{\mathrm{traj}}(k+p|k))}{\mathrm{d}z_i^{\mathrm{traj}}(k+p|k)} = \frac{\exp(-z_i^{\mathrm{traj}}(k+p|k))}{\big(1 + \exp(-z_i^{\mathrm{traj}}(k+p|k))\big)^2} = \varphi\big(z_i^{\mathrm{traj}}(k+p|k)\big)\Big(1 - \varphi\big(z_i^{\mathrm{traj}}(k+p|k)\big)\Big) \quad (5.18)$$
For the MPC-NPLPT scheme, we use Eq. (3.283). When the polynomial Wiener model is used, we have

$$\frac{\mathrm{d}g(v^{t-1}(k+p|k))}{\mathrm{d}v^{t-1}(k+p|k)} = \sum_{i=1}^{K} i c_i \big(v^{t-1}(k+p|k)\big)^{i-1} \quad (5.19)$$

When the neural structure is used, we have

$$\frac{\mathrm{d}g(v^{t-1}(k+p|k))}{\mathrm{d}v^{t-1}(k+p|k)} = l^{\mathrm{nn}} + \sum_{i=1}^{K} a_i^{\mathrm{nn}} \frac{\mathrm{d}\varphi(z_i^{t-1}(k+p|k))}{\mathrm{d}z_i^{t-1}(k+p|k)} b_i^{\mathrm{nn}} \quad (5.20)$$

where $z_i^{t-1}(k+p|k) = (v^{t-1}(k+p|k) - r^{\mathrm{nn}}) b_i^{\mathrm{nn}} + c_i^{\mathrm{nn}}$ and

$$\frac{\mathrm{d}\varphi(z_i^{t-1}(k+p|k))}{\mathrm{d}z_i^{t-1}(k+p|k)} = \frac{\exp(-z_i^{t-1}(k+p|k))}{\big(1 + \exp(-z_i^{t-1}(k+p|k))\big)^2} = \varphi\big(z_i^{t-1}(k+p|k)\big)\Big(1 - \varphi\big(z_i^{t-1}(k+p|k)\big)\Big) \quad (5.21)$$
A neural network of the MLP type with one hidden layer containing five nonlinear units and a linear output layer is used as the inverse model of the nonlinear static block in the MPC-inv algorithm. The nonlinear units use the tanh activation function.
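For illustration, a forward pass of such an inverse MLP might look as follows. The weights below are hypothetical placeholders; in practice they are obtained by training the network on pairs of process output and intermediate-variable data, so that v = g^{-1}(y) approximately cancels the static nonlinearity.

```python
import math

# Hypothetical weights of the inverse static model (one hidden layer,
# five tanh units, linear output); obtained by training in practice.
W1 = [0.8, -0.5, 0.3, 1.1, -0.9]   # input-to-hidden weights
B1 = [0.1, 0.0, -0.2, 0.3, 0.05]   # hidden biases
W2 = [0.6, -0.4, 0.7, 0.2, -0.3]   # hidden-to-output weights
B2 = 0.0                            # output bias

def g_inv(y):
    """MLP inverse of the nonlinear static block: five tanh hidden
    units and a linear output, returning v = g^{-1}(y)."""
    hidden = [math.tanh(w * y + b) for w, b in zip(W1, B1)]
    return sum(w * h for w, h in zip(W2, hidden)) + B2
```

In the MPC-inv scheme, the set-point (and the measured output) is passed through `g_inv`, so that a linear MPC algorithm can operate on the intermediate variable v.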
Fig. 5.9 The neutralisation reactor: simulation results of the linear LMPC algorithm for different values of the penalty factor λ
5.4 MPC of the Neutralisation Reactor

The default parameters of all compared MPC algorithms are: N = 10, Nu = 3, λ = 0.1; the constraints imposed on the manipulated variable are: q1min = 0, q1max = 30. At first, let us assume that a perfect model is used in MPC and that no disturbances act on the process. Simulation results of the LMPC algorithm for different values of the penalty factor λ are depicted in Fig. 5.9. Unfortunately, it is impossible to choose a value of the parameter λ for which the algorithm works satisfactorily. For λ = 0.1, λ = 1 or λ = 10, there are strong oscillations; they are not present when λ = 20, but in that case the algorithm is very slow and the set-points are not achieved in reasonable time. Next, we consider nonlinear MPC algorithms in which the neural Wiener model is used. Simulation results of the MPC-SSL and MPC-NPSL algorithms with on-line model linearisation are depicted in Figs. 5.10 and 5.11, respectively; the trajectories of the MPC-NO scheme are given for comparison. For the default value of the parameter λ, the simpler MPC-SSL algorithm gives very low quality of control as there are very fast and large changes of the manipulated and controlled variables. Increasing the value of λ makes the process output follow the changes of the set-point, but the trajectories are rather slow. The more advanced MPC-NPSL algorithm gives quite good results. Differences from the ideal trajectories possible in the MPC-NO algorithm are present, but they are not significant.
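The role of the penalty factor λ can be made explicit with a minimal sketch of the quadratic cost minimised at each sampling instant; the trajectories passed in are illustrative placeholders, and the exact cost-function of each compared algorithm is given earlier in the book.

```python
# Default horizons used in the simulations.
N, Nu = 10, 3
Q1_MIN, Q1_MAX = 0.0, 30.0   # constraints on the manipulated variable q1

def mpc_cost(y_sp, y_pred, du, lam):
    """J = sum of squared predicted set-point errors over the prediction
    horizon N plus lambda times the sum of squared control increments
    over the control horizon Nu."""
    assert len(y_sp) == len(y_pred) == N and len(du) == Nu
    tracking = sum((sp - yp) ** 2 for sp, yp in zip(y_sp, y_pred))
    effort = sum(d ** 2 for d in du)
    return tracking + lam * effort

def clip_input(q1):
    """Enforce the hard input constraints q1min <= q1 <= q1max."""
    return min(max(q1, Q1_MIN), Q1_MAX)
```

A large λ (e.g. λ = 20) penalises control moves heavily, which suppresses oscillations but slows down the response, exactly the trade-off observed in Fig. 5.9.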
Fig. 5.10 The neutralisation reactor: simulation results of the MPC-NO and MPC-SSL algorithms based on the neural Wiener model containing K = 5 hidden nodes
Fig. 5.11 The neutralisation reactor: simulation results of the MPC-NO and MPC-NPSL algorithms based on the neural Wiener model containing K = 5 hidden nodes
Fig. 5.12 The neutralisation reactor: simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms based on the neural Wiener model containing K = 5 hidden nodes
Simulation results of the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms with on-line trajectory linearisation are presented in Fig. 5.12; the neural Wiener model is used for prediction. The additional parameters of the MPC-NPLPT algorithm are: δ = δu = δy = 0.1, N0 = 2, and the maximal number of internal iterations is 5. In general, all these algorithms perform better than the best algorithm with on-line model linearisation, i.e. the MPC-NPSL scheme. The MPC-NPLPT method gives practically the same trajectories as the MPC-NO one. Figure 5.13 compares the trajectories obtained in the MPC-NPLPT algorithm with those possible in the classical MPC-inv approach, in which the inverse model is used. Unfortunately, for the default value of the parameter λ, the MPC-inv algorithm gives some oscillations for the second set-point change. It is necessary to increase the weighting parameter to λ = 20 to reduce the oscillations, but this slightly slows down control. Table 5.4 compares all considered MPC algorithms (the LMPC one and the nonlinear MPC schemes based on the neural Wiener model with K = 5 hidden nodes) in terms of the indices E_2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. The calculated values confirm our observations from Figs. 5.9, 5.10, 5.11, 5.12 and 5.13: the LMPC scheme does not work, the simple MPC-SSL algorithm is the worst one, the
Fig. 5.13 The neutralisation reactor: simulation results of the MPC-NPLPT algorithm based on the neural Wiener model containing K = 5 hidden nodes and the MPC-inv algorithm

Table 5.4 The neutralisation reactor: comparison of all considered MPC algorithms (the LMPC scheme and the nonlinear MPC ones based on the neural Wiener model containing K = 5 hidden nodes) in terms of the control performance criteria (E_2 and E_MPC-NO), the sum of internal iterations (SII) and the calculation time
Fig. 5.14 The neutralisation reactor: simulation results of the MPC-NO algorithm based on Wiener models with different variants of the static block (the neural network containing K = 5 hidden nodes and the polynomials of the degree K = 5 and K = 15)
MPC-NPSL scheme is better, the MPC-NPLT2 algorithm is quite good and, finally, the MPC-NPLPT algorithm makes it possible to obtain practically the same trajectories as the truly nonlinear MPC-NO one. Next, we consider nonlinear MPC algorithms in which the polynomial Wiener model is used. Let us analyse the influence of the structure of the nonlinear part of the model on control quality. Figures 5.14, 5.15 and 5.16 depict simulation results of the MPC-NO, MPC-NPSL and MPC-NPLPT algorithms, respectively, in which the Wiener models with the neural network with K = 5 hidden nodes as well as with the polynomials of the degree K = 5 and K = 15 are used. Because the polynomial Wiener model of the degree K = 5 is worse than the chosen neural network, the obtained trajectories are characterised by slightly greater overshoot. Unfortunately, the polynomial Wiener model of the degree K = 15 has very poor quality and the resulting MPC algorithms do not work for some operating points. Table 5.5 compares all considered nonlinear MPC algorithms based on the polynomial Wiener model of the degree K = 5 in terms of the indices E_2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. In general, the obtained results are similar to those recorded for the MPC algorithms based on the neural Wiener model (Table 5.4). Since the neural Wiener model containing K = 5 hidden nodes outperforms all polynomial Wiener structures, it is used in all simulations presented next.
Fig. 5.15 The neutralisation reactor: simulation results of the MPC-NPSL algorithm based on Wiener models with different variants of the static block (the neural network containing K = 5 hidden nodes and the polynomials of the degree K = 5 and K = 15)
Fig. 5.16 The neutralisation reactor: simulation results of the MPC-NPLPT algorithm based on Wiener models with different variants of the static block (the neural network containing K = 5 hidden nodes and the polynomials of the degree K = 5 and K = 15)
Table 5.5 The neutralisation reactor: comparison of all considered MPC algorithms based on the polynomial Wiener model of the degree K = 5 in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
In the second part of simulations, we assume that the model is not perfect and the process is affected by disturbances. The steady-state gain of the model is decreased by 25%. From the sampling instant k = 30, an additive unmeasured step disturbance of the value 0.5 acts on the process output; from k = 70, the value of the disturbance changes to −0.5. Simulation results of the LMPC algorithm for different values of the penalty factor λ are depicted in Fig. 5.17. Similarly to the perfect model and disturbance-free case (Fig. 5.9), it is clear that, due to huge differences between the process and the linear model as well as the disturbances, the LMPC algorithm practically does not work. Figure 5.18 shows the trajectories of the MPC-SSL algorithm with on-line model linearisation, while Fig. 5.19 depicts the trajectories of the MPC-NPSL algorithm. Similarly to the perfect model and disturbance-free cases (Figs. 5.10 and 5.11, respectively), the MPC-SSL algorithm, even with the increased coefficient λ, does not lead to good control. Conversely, the MPC-NPSL algorithm with the nonlinear free trajectory works well; its trajectories are quite close to those possible when the ideal MPC-NO control scheme is used. Of course, model inaccuracy and disturbances have a negative effect on the resulting control quality, but there are no oscillations and the required set-points are achieved quickly. Figure 5.20 depicts the trajectories of the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms with on-line trajectory linearisation. Similarly to the perfect model and disturbance-free case (Fig. 5.12), the trajectories of the MPC-NPLT1 and MPC-NPLT2 algorithms, with one repetition of linearisation and quadratic optimisation at every sampling instant, are slightly different from those possible in the MPC-NO scheme, but the most advanced MPC-NPLPT approach gives practically the same control quality as the MPC-NO method.
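The disturbance scenario described above can be sketched compactly; the two helper functions below simply encode the step disturbance and the 25% gain mismatch used in the simulations.

```python
def output_disturbance(k):
    """Additive unmeasured step disturbance acting on the process output:
    0.5 from sampling instant k = 30, changed to -0.5 from k = 70."""
    if k < 30:
        return 0.0
    return 0.5 if k < 70 else -0.5

def model_gain(process_gain):
    """Steady-state gain of the (imperfect) model: decreased by 25%
    with respect to the true process gain."""
    return 0.75 * process_gain
```

At each sampling instant k, the measured output is the process output plus `output_disturbance(k)`, while the controller predicts with the mismatched gain.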
Fig. 5.17 The neutralisation reactor (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the linear LMPC algorithm for different values of the penalty factor λ
Fig. 5.18 The neutralisation reactor (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the MPC-NO and MPC-SSL algorithms based on the neural Wiener model containing K = 5 hidden nodes
Fig. 5.19 The neutralisation reactor (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the MPC-NO and MPC-NPSL algorithms based on the neural Wiener model containing K = 5 hidden nodes
Fig. 5.20 The neutralisation reactor (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms based on the neural Wiener model containing K = 5 hidden nodes
Fig. 5.21 The neutralisation reactor (the unmeasured disturbances act on the process and the model is not perfect): simulation results of the MPC-NPLPT algorithm based on the neural Wiener model containing K = 5 hidden nodes and the MPC-inv algorithm
Finally, the trajectories possible in the MPC-NPLPT algorithm are compared with those obtained in the classical MPC-inv approach based on the inverse model. Simulation results are given in Fig. 5.21. Recall that in the perfect model and disturbance-free case, as shown in Fig. 5.13, the MPC-inv strategy gives unwanted oscillations for the second set-point change, but for all other operating points it works quite well. This disadvantage may be easily eliminated by increasing the penalty coefficient λ. Unfortunately, when the unmeasured disturbances act on the process and the model is not perfect, the MPC-inv control scheme gives very bad control; increasing the penalty λ does not solve the problem. Table 5.6 compares all considered MPC algorithms in terms of the indices E_2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the scaled calculation time. As in the perfect model and disturbance-free case (Table 5.4), the more advanced the linearisation method, the better the quality of control.
Table 5.6 The neutralisation reactor (the unmeasured disturbances act on the process and the model is not perfect): comparison of all considered MPC algorithms based on the neural Wiener model containing K = 5 hidden nodes in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
5.5 MPC of the Neutralisation Reactor with Constraints Imposed on the Predicted Controlled Variable

Now let us consider the following constraints imposed on the controlled variable: pHmin = 3, pHmax = 9. In all simulations discussed next, the tuning parameters of MPC are: N = 10, Nu = 3, λ = 0.5, and the neural Wiener model containing K = 5 hidden nodes is used in MPC. In the second part of simulations, we also assume that from the sampling instant k = 28, an additive unmeasured step disturbance of the value 3 acts on the process output. At first, let us discuss simulation results when the process is not affected by the additive output disturbance. Since the introduction of the soft output constraints requires finding the additional coefficients ρmin and ρmax, a few values have been tested. Figures 5.22 and 5.23 depict simulation results of the MPC-NPLT1 algorithm when ρmin = ρmax = 0.1, 10, 1000. We consider two versions of the soft output constraints:
– The simplified soft constraints: the same scalars εmin(k) and εmax(k) are used over the whole prediction horizon, as in the optimisation task (1.39). The overall number of decision variables in MPC optimisation is Nu + 2 = 5.
Fig. 5.22 The neutralisation reactor: the MPC-NPLT1 algorithm based on the neural Wiener model containing K = 5 hidden nodes with the simplified soft output constraints for different coefficients ρ min = ρ max
– The full soft constraints (1.41): independent scalars εmin(k + p) and εmax(k + p) are used for the consecutive sampling instants of the prediction horizon, i.e. for p = 1, . . . , N. The number of decision variables grows to Nu + 2N = 23.
For both types of soft constraints, it is straightforward to notice that when the penalty coefficients ρmin = ρmax have low values, violation of the output constraints is not sufficiently taken into account in the minimised MPC cost-function and the actual value of the process output exceeds its upper limit. The problem is solved when the penalty coefficients are increased. It turns out that the simplified soft output constraints are sufficient for the considered process since they give practically the same results as the full ones. Having selected the penalty coefficients ρmin = ρmax = 1000, we may compare the performance of the considered MPC algorithms and the possible approaches to the output constraints. Figures 5.24, 5.25 and 5.26 depict simulation results of all compared MPC algorithms (i.e. MPC-NO, MPC-NPLT1 and MPC-NPSL) for four cases of the output constraints: none, hard, soft and simplified soft. Of course, when the output constraints are not taken into account, there is significant overshoot. In the case of soft and simplified soft constraints, the penalty coefficients related to the output constraints are ρmin = ρmax = 1000. The MPC-NO algorithm uses for prediction the full nonlinear model without any simplifications (linearisation), which results in very good control quality when the output constraints of any kind are
Fig. 5.23 The neutralisation reactor: the MPC-NPLT1 algorithm based on the neural Wiener model containing K = 5 hidden nodes with the soft output constraints for different values of the coefficients ρmin = ρmax
Fig. 5.24 The neutralisation reactor: the MPC-NO algorithm based on the neural Wiener model containing K = 5 hidden nodes with different types of the output constraints
Fig. 5.25 The neutralisation reactor: the MPC-NPLT1 algorithm based on the neural Wiener model containing K = 5 hidden nodes with different types of the output constraints
present, i.e. the overshoot is eliminated. The suboptimal MPC-NPLT1 and MPC-NPSL algorithms give slightly different trajectories when the hard output constraints are used. It is interesting to note that the simplified soft constraints give practically the same trajectories as the full soft ones. Figure 5.27 compares the effectiveness of the MPC-NO, MPC-NPSL and MPC-NPLT1 algorithms with the simplified soft output constraints. The obtained trajectories may be compared numerically, using the performance indices E_2 and E_MPC-NO defined by Eqs. (4.1) and (4.2). Additionally, we define an index which measures how accurately the output constraints are satisfied

$$E_{\mathrm{constr}} = \sum_{k=k_{\min}}^{k_{\max}} \Big[ \big(\min(\mathrm{pH}^{\min}, \mathrm{pH}(k)) - \mathrm{pH}^{\min}\big)^2 + \big(\max(\mathrm{pH}^{\max}, \mathrm{pH}(k)) - \mathrm{pH}^{\max}\big)^2 \Big] \quad (5.22)$$
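The constraint-satisfaction index (5.22) sums the squared violations of the output limits and is zero whenever the output stays within its bounds. A direct sketch of the computation:

```python
def e_constr(ph, ph_min=3.0, ph_max=9.0, k_min=0, k_max=None):
    """Constraint-satisfaction index (5.22): squared violations of the
    output limits accumulated over the sampling instants k_min..k_max."""
    if k_max is None:
        k_max = len(ph) - 1
    total = 0.0
    for k in range(k_min, k_max + 1):
        below = min(ph_min, ph[k]) - ph_min   # negative iff pH(k) < pH^min
        above = max(ph_max, ph[k]) - ph_max   # positive iff pH(k) > pH^max
        total += below ** 2 + above ** 2
    return total
```

The default limits pHmin = 3 and pHmax = 9 are the ones used in this section; the recorded pH trajectory is passed as a sequence.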
Table 5.7 compares the nonlinear MPC algorithms in terms of the performance indices E_2, E_MPC-NO and E_constr for different types of the output constraints; for the soft and simplified soft constraints, ρmin = ρmax = 1000. The obtained numerical values confirm our observations made so far. The MPC-NO algorithm is the best, the MPC-NPLT1 scheme is slightly worse and the MPC-NPSL approach is the worst one, but still acceptable. Both types of output constraints give good results, which means
Fig. 5.26 The neutralisation reactor: the MPC-NPSL algorithm based on the neural Wiener model containing K = 5 hidden nodes with different types of the output constraints
Fig. 5.27 The neutralisation reactor: the MPC-NO, MPC-NPSL and MPC-NPLT1 algorithms based on the neural Wiener model containing K = 5 hidden nodes with the simplified soft output constraints
Table 5.7 The neutralisation reactor: comparison of nonlinear MPC algorithms based on the neural Wiener model containing K = 5 hidden nodes in terms of the control performance indices (E_2, E_MPC-NO and E_constr) for different types of the output constraints; for the soft and simplified soft constraints, ρmin = ρmax = 1000
that the simplified ones may be used since, in such a case, we have a lower number of decision variables. It is necessary to remember that ordinary hard output constraints may lead to numerical problems. The presence of such constraints may result in an empty set of feasible solutions of the MPC optimisation task; in such a case, the optimisation procedure is unable to find a feasible solution. Fortunately, for the chosen parameters, the hard output constraints lead to no infeasibility problems in the MPC-NO and MPC-NPSL algorithms, but for other configurations of the parameters, the hard constraints are very likely to result in such problems. Unfortunately, in the MPC-NPLT1 algorithm, simulation results of which are presented in Fig. 5.25, the quadratic optimisation solver is unable to find a feasible solution at the sampling instant k = 23. For the solution returned by the quadprog function in MATLAB, the constraints are violated. In this case, the last feasible value of the manipulated variable, possibly from the previous sampling instant, is applied to the process. Of course, there are no infeasibility problems when the output constraints are implemented as soft or simplified soft. In order to enlarge the feasible region, the optimisation procedure finds positive values of the slack variables, which relax the original hard constraints. Figures 5.28 and 5.29 depict the additional decision variables that define the degree of the output constraints' violations in the MPC-NPLT1 algorithm with the soft output constraints and the simplified soft ones, respectively, for different values of the coefficients ρmin = ρmax. In general, the higher the penalty coefficients, the lower the allowed degree of the constraints' violation.
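The mechanics of the simplified soft constraints can be sketched as follows. This is only an illustration of how a single pair of non-negative slack variables relaxes the output bounds over the whole prediction horizon and how the penalty term enters the cost; the exact optimisation tasks are (1.39) and (1.41) in the book.

```python
def simplified_slacks(y_pred, y_min, y_max):
    """Smallest scalar slacks eps_min, eps_max >= 0 such that
    y_min - eps_min <= y_pred(k+p|k) <= y_max + eps_max for all p."""
    eps_min = max(0.0, y_min - min(y_pred))
    eps_max = max(0.0, max(y_pred) - y_max)
    return eps_min, eps_max

def constraint_penalty(eps_min, eps_max, rho_min, rho_max):
    """Quadratic penalty term added to the MPC cost for the relaxation
    of the output constraints."""
    return rho_min * eps_min ** 2 + rho_max * eps_max ** 2
```

With large penalties (ρmin = ρmax = 1000), the optimiser keeps the slacks close to zero, i.e. the constraints are relaxed only as much as strictly necessary.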
Fig. 5.28 The neutralisation reactor: the additional decision variables that define the degree of the output constraints’ violations in the MPC-NPLT1 algorithm based on the neural Wiener model containing K = 5 hidden nodes with the soft output constraints for different coefficients ρ min = ρ max
Fig. 5.29 The neutralisation reactor: the additional decision variables that define the degree of the output constraints’ violations in the MPC-NPLT1 algorithm based on the neural Wiener model containing K = 5 hidden nodes with the simplified soft output constraints for different coefficients ρ min = ρ max
In the second part of simulations, let us discuss the case when the process is affected by the additive output disturbance (from the sampling instant k = 28, the additive unmeasured step disturbance of the value 3 acts on the process output). The constraints imposed on the predicted values of the controlled variable are still present; in all simulations, the penalty coefficients ρmin = ρmax = 1000 are used. Figure 5.30 presents simulation results obtained in the MPC-NO algorithm with different types of output constraints. It is easy to notice that the additional output disturbance leads to significant overshoot which cannot be eliminated quickly when there are no output constraints. The introduction of the output constraints helps a lot. When the output constraints are implemented in the two soft versions, there are no feasibility problems. Unfortunately, when the output constraints are hard, for the sampling instants k = 28, 29, the nonlinear optimisation solver is unable to find a solution without violating the constraints (the same problem is present in the MPC-NPLT1 algorithm for k = 23 when the process is not affected by the additive disturbance). For the sampling instants k = 28, 29, the value of the manipulated variable calculated at the instant k = 27 is used. Figure 5.31 presents simulation results obtained in the MPC-NPLT1 algorithm with different types of the output constraints. Unfortunately, when the output constraints are implemented as hard, the infeasibility problems occur at the sampling
Fig. 5.30 The neutralisation reactor (the unmeasured disturbance acts on the process): the MPC-NO algorithm based on the neural Wiener model containing K = 5 hidden nodes with different types of the output constraints
instants k = 23, 28, . . . , 40. At all these sampling instants, the manipulated variable calculated at k = 22 is used (the last feasible one), but this results in very bad control quality since the process output stabilises at a value very different from the required set-point. Of course, the output constraints are not satisfied. Figure 5.32 presents simulation results obtained in the MPC-NPSL algorithm with different types of the output constraints. The trajectories are correct in all cases; the output constraints work, and the simplified output constraints lead to very similar trajectories as the full soft ones. When the hard constraints are used, the infeasibility problem occurs only once, for the sampling instant k = 29. The optimisation procedure is unable to find any solution, just as observed in the case of the MPC-NPLT1 algorithm for the sampling instants k = 28, . . . , 40. We can observe that in all simulation results presented in Figs. 5.30, 5.31 and 5.32, the soft and the simplified soft constraints make it possible to obtain good control quality and no feasibility problems occur. Figure 5.33 compares the effectiveness of the MPC-NO, MPC-NPSL and MPC-NPLT1 algorithms with the simplified soft output constraints. Unlike the previous experiment, when the process is not affected by any disturbance (Fig. 5.27), the MPC-NO algorithm gives the biggest overshoot, whereas the MPC-NPLT1 and MPC-NPSL algorithms lead to a much lower one. Table 5.8 compares the nonlinear MPC algorithms in terms of the performance indices E_2, E_MPC-NO and E_constr for different types of the output constraints.
5.5 MPC of the Neutralisation Reactor with Constraints Imposed …
247
Fig. 5.31 The neutralisation reactor (the unmeasured disturbance acts on the process): the MPC-NPLT1 algorithm based on the neural Wiener model containing K = 5 hidden nodes with different types of the output constraints
Fig. 5.32 The neutralisation reactor (the unmeasured disturbance acts on the process): the MPC-NPSL algorithm based on the neural Wiener model containing K = 5 hidden nodes with different types of the output constraints
Fig. 5.33 The neutralisation reactor (the unmeasured disturbance acts on the process): the MPC-NO, MPC-NPSL and MPC-NPLT1 algorithms based on the neural Wiener model containing K = 5 hidden nodes with the simplified soft output constraints

Table 5.8 The neutralisation reactor (the unmeasured disturbances act on the process): comparison of nonlinear MPC algorithms based on the neural Wiener model containing K = 5 hidden nodes in terms of the control performance indices (E_2, E_MPC-NO and E_constr) for different types of the output constraints; for soft and soft simplified constraints ρ^min = ρ^max = 1000
Table 5.9 The neutralisation reactor (the unmeasured disturbances act on the process): infeasible optimisation problems in MPC algorithms based on the neural Wiener model containing K = 5 hidden nodes when hard output constraints are present
Let us concentrate on the infeasibility problems likely to occur when the output constraints are implemented as hard. Table 5.9 gives the sampling instants for which such problems are present. Of course, when the additional disturbance is not present, the only reason for the infeasibility problems is that the model used in MPC is a rough approximation of the process. Consequently, there are differences between the predicted trajectory and the actual process output. In the disturbance-free case, this happens only in the MPC-NPLT1 algorithm. The very strong additional disturbance leads to infeasibility problems in all considered MPC algorithms with hard output constraints. In our simulations, for the chosen tuning parameters and the particular type of the additive disturbance, the worst situation occurs in the MPC-NPLT1 algorithm, which is unable to find a solution to the optimisation problem for the sampling instants k = 23, 28, . . . , 40; this results in a very bad process trajectory (Fig. 5.31). Of course, numerical problems are not present when the output constraints are implemented as soft ones.
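The feasibility contrast summarised in Table 5.9 can be reproduced in miniature: a hard output constraint that the model cannot satisfy makes the optimisation problem infeasible, whereas its softened counterpart always has a solution. The toy problem below is a sketch under assumed numbers (a static gain model solved as a linear programme), intended only to show the mechanism.

```python
from scipy.optimize import linprog

# Hypothetical static model y = 2*u with 0 <= u <= 1, so y <= 2.
# A hard requirement y >= 3 is then impossible to meet.

# Hard version: minimise u subject to -2*u <= -3 (i.e. y >= 3)
hard = linprog(c=[1.0], A_ub=[[-2.0]], b_ub=[-3.0], bounds=[(0, 1)])

# Soft version: variables [u, eps], minimise 1000*eps
# subject to -2*u - eps <= -3 (i.e. y + eps >= 3), eps >= 0
soft = linprog(c=[0.0, 1000.0], A_ub=[[-2.0, -1.0]], b_ub=[-3.0],
               bounds=[(0, 1), (0, None)])
print(hard.success, soft.success, soft.x)
```

The hard problem is reported infeasible by the solver, while the soft one returns u = 1 with the minimal slack eps = 1 needed to reconcile the bound with the model.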
References

1. Åkesson, B.M., Toivonen, H.T., Waller, J.B., Nyström, R.H.: Neural network approximation of a nonlinear model predictive controller applied to a pH neutralization process. Comput. Chem. Eng. 29, 323–335 (2005)
2. Böling, J.M., Seborg, D.E., Hespanha, J.P.: Multi-model adaptive control of a simulated pH neutralization process. Control Eng. Pract. 15, 663–672 (2007)
3. Chen, J., Peng, Y., Han, W., Guo, M.: Adaptive fuzzy sliding mode control in pH neutralization process. Procedia Eng. 15, 954–958 (2011)
4. Dougherty, D., Cooper, D.: A practical multiple model adaptive strategy for single-loop MPC. Control Eng. Pract. 11, 141–159 (2003)
5. Domański, P.D.: Control Performance Assessment: Theoretical Analyses and Industrial Practice. Studies in Systems, Decision and Control, vol. 245. Springer, Cham (2020)
6. Fuente, M.J., Robles, C., Casado, O., Syafiie, S., Tadeo, F.: Fuzzy control of a neutralization process. Eng. Appl. Artif. Intell. 19, 905–914 (2016)
7. Galán, O., Romagnoli, J.A., Palazoglu, A.: Real-time implementation of multi-linear model-based control strategies: an application to a bench-scale pH neutralization reactor. J. Process Control 14, 571–579 (2004)
8. Gómez, J.C., Jutan, A., Baeyens, E.: Wiener model identification and predictive control of a pH neutralisation process. Proc. IEE Part D Control Theory Appl. 151, 329–338 (2004)
9. Grancharova, A., Kocijan, J., Johansen, T.A.: Explicit output-feedback nonlinear predictive control based on black-box models. Eng. Appl. Artif. Intell. 24, 388–397 (2011)
10. Henson, M., Seborg, D.: Adaptive nonlinear control of a pH neutralization process. IEEE Trans. Control Syst. Technol. 2, 169–182 (1994)
11. Hermansson, A.W., Syafiie, S.: Model predictive control of pH neutralization processes: a review. Control Eng. Pract. 45, 98–109 (2016)
12. Karasakal, O., Guzelkaya, M., Eksin, I., Yesil, E., Kumbasar, T.: Online tuning of fuzzy PID controllers via rule weighing based on normalized acceleration. Eng. Appl. Artif. Intell. 26, 184–197 (2016)
13. Kumbasar, T., Eksin, I., Guzelkaya, M., Yesil, E.: Type-2 fuzzy model based controller design for neutralization processes. ISA Trans. 51, 277–287 (2014)
14. Lakshmi Narayanan, N.R., Krishnaswamy, P.R., Rangaiah, G.P.: An adaptive internal model control strategy for pH neutralization. Chem. Eng. Sci. 52, 3067–3074 (1997)
15. Ławryńczuk, M.: On improving accuracy of computationally efficient nonlinear predictive control based on neural models. Chem. Eng. Sci. 66, 5253–5267 (2011)
16. Ławryńczuk, M.: Practical nonlinear predictive control algorithms for neural Wiener models. J. Process Control 23, 696–714 (2013)
17. Ławryńczuk, M.: Modelling and predictive control of a neutralisation reactor using sparse support vector machine Wiener models. Neurocomputing 205, 311–328 (2016)
18. Ławryńczuk, M.: Suboptimal nonlinear predictive control based on multivariable neural Hammerstein models. Appl. Intell. 32, 173–192 (2010)
19. Ławryńczuk, M.: Wiener model identification and nonlinear model predictive control of a pH neutralization process based on Laguerre filters and least squares support vector machines. J. Zhejiang Univ. Sci. C (Comput. Electron.) 12, 25–35 (2016)
20. Loh, A.P., Looi, K.O., Fong, K.F.: Neural network modeling and control strategies for a pH process. J. Process Control 5, 355–362 (1995)
21. Mahmoodi, S., Poshtan, J., Jahed-Motlagh, M.R., Montazeri, A.: Nonlinear model predictive control of a pH neutralization process based on Wiener-Laguerre model. Chem. Eng. J. 146, 328–337 (2009)
22. Norquay, S.J., Palazoğlu, A., Romagnoli, J.A.: Model predictive control based on Wiener models. Chem. Eng. Sci. 53, 75–84 (1998)
23. Oblak, S., Škrjanc, I.: Continuous-time Wiener-model predictive control of a pH process based on a PWL approximation. Chem. Eng. Sci. 65, 1720–1728 (2010)
24. Syafiie, S., Tadeo, F., Martinez, E.: Model-free learning control of neutralization processes using reinforcement learning. Eng. Appl. Artif. Intell. 20, 767–782 (2007)
25. Yoon, S.S., Yoon, T.W., Yang, D.R., Kang, T.S.: Indirect adaptive nonlinear control of a pH process. Comput. Chem. Eng. 26, 1223–1230 (2002)
Chapter 6
Modelling and MPC of the Proton Exchange Membrane Fuel Cell Using Wiener Models
Abstract This chapter discusses simulation results of MPC algorithms based on Wiener models applied to the proton exchange membrane fuel cell. At first, the process is briefly described, and identification of three structures of neural Wiener models and model selection are discussed. The efficiency of the polynomial Wiener model is also evaluated. Implementation details of different MPC algorithms are given. Next, the efficiency of the MPC algorithms is compared in terms of control quality and computational time.
6.1 Control of Proton Exchange Membrane Fuel Cells

Currently, the transport sector relies on combustion engines which use fossil fuels. Alas, this results in the emission of greenhouse gases, which leads to serious environmental problems, e.g. air pollution, global warming, climate change and destruction of the ozone layer. Zero-emission electric vehicles are available and their popularity grows. Typically, electric vehicles use batteries for energy storage. Unfortunately, such cars have a few disadvantages. Although they are zero-emission, the energy may be produced in a non-zero-emission process. Moreover, recharging of batteries takes long hours and the range is limited. An interesting alternative is to use a fuel cell for energy generation because, in such a case, the energy is produced in a truly zero-emission, environmentally friendly process and the tank is filled in minutes. Fuel cells are electrochemical devices that convert the chemical energy of a fuel (often hydrogen) and an oxidising agent (often oxygen) directly into electrical energy [16]. They have many significant advantages: high electrical efficiency, very low emission and quiet operation. Moreover, since fuel cells do not have moving parts, their life cycle is very long. Fuel cells may be produced at different scales, from microwatts to megawatts, which makes them useful in numerous applications. Lastly, the hydrogen necessary for fuel cells may be quite easily produced, so dependence on imported oil may be significantly reduced. Among the existing types of fuel cells [16], Proton Exchange Membrane (PEM) fuel cells are preferred not only for mobile and vehicle applications, including cars, scooters, bicycles, boats and underwater vessels [1, 22] but also for stationary ones.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_6
This is because of the low operation temperature (usually between 60 and 80 °C), which gives a fast start-up, a simple and compact design as well as reliable operation. Since a solid electrolyte is used, no electrolyte leakage is possible. The PEM fuel cells are considered to be very promising power sources and they are expected to become sound alternatives to conventional power generation methods. It is necessary to point out that control of PEM fuel cells is a challenging task. Although there are examples of classical control methods applied to the PEM fuel cell, e.g. a linear state feedback controller [25] or a Sliding-Mode Controller (SMC) [15], the process is inherently nonlinear and linear controllers may give control accuracy below expectations. Hence, different nonlinear control strategies have been applied to the PEM fuel cell process: an adaptive Proportional-Integral-Derivative (PID) algorithm whose parameters are tuned on-line by a fuzzy logic system [2, 21] or by a neural network [7], an adaptive PID algorithm with a fuzzy logic feedforward compensator [5], a nonlinear state feedback controller [13], a fuzzy controller [19] and a look-up table [23]. Fractional complex-order controllers may also be used [28, 29]. Recently, MPC algorithms have been applied to the PEM fuel cell. In the literature, it is possible to find two categories of MPC:
1. The fully-fledged nonlinear MPC-NO algorithm [10, 26, 27, 35]. Such an approach may be very computationally demanding; its practical application may be impossible.
2. The classical LMPC algorithm based on a fixed (parameter-constant) linear model [3, 24]. In this approach, the on-line calculations in MPC are not demanding (quadratic optimisation is used), but the resulting control quality may not be satisfactory because the process is nonlinear and the linear model used in MPC is only a very rough approximation of the nonlinear process.
The contribution of this chapter is threefold:
1. The effectiveness of three Wiener structures is compared. It is necessary to point out that the Wiener model is a natural representation of the PEM process. A neural network of the MLP type is used in the static part of the Wiener models. In contrast to all Wiener models presented in Chap. 2 and discussed in other chapters of this book, in all described models of the PEM fuel cell not only the influence of the process input on the output is taken into account, but also the impact of the measured disturbance on the output is considered.
2. The effectiveness of the polynomial Wiener model is evaluated. The best structure chosen for the neural Wiener model is used; the only difference is the utilisation of polynomials in place of neural networks in the nonlinear static part of the model.
3. Two nonlinear MPC algorithms for the PEM fuel cell are described: the MPC-NPSL and MPC-NPLT approaches. In contrast to the algorithms presented in [10, 26, 27, 35], in both algorithms computationally simple quadratic optimisation is used; full nonlinear optimisation is not necessary. The discussed algorithms are compared to the MPC-NO scheme in terms of control accuracy and computational time, and the inefficiency of the classical LMPC scheme is shown.
Modelling of the PEM fuel cell by neural Wiener models and MPC based on such models are discussed in [18], distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). This chapter extends that publication since not only neural Wiener models are discussed but also polynomial ones.
6.2 Description of the Proton Exchange Membrane Fuel Cell

In the general case, the model of the PEM fuel cell is quite complicated [25]. Hence, the tendency is to use simpler models for development of the control system [4, 6, 11, 30, 31, 33]. In this work, the PEM fuel cell model introduced in [33] and further discussed in [9, 14, 34] is considered. The PEM process has one manipulated variable (the input of the process) which is the input methane flow rate q (mol s^{-1}), one disturbance (the uncontrolled input) which is the external current load I (A) and one controlled variable (the output of the process) which is the stack output voltage V (V). The partial pressures of hydrogen, oxygen and water are denoted by p_{H_2}, p_{O_2} and p_{H_2O}, respectively (atm). The input hydrogen flow, the hydrogen reacted flow and the oxygen input flow are denoted by q^{in}_{H_2}, q^{r}_{H_2} and q^{in}_{O_2}, respectively (mol s^{-1}). The fundamental continuous-time model of the PEM system is defined by a set of transfer functions. The pressure of hydrogen is

p_{H_2} = \frac{1/K_{H_2}}{\tau_{H_2} s + 1}\left(q^{in}_{H_2} - 2K_r I\right)   (6.1)

where K_{H_2} and \tau_{H_2} denote the valve molar constant for hydrogen and the response time of hydrogen flow, respectively. The input hydrogen flow obtained from the reformer is

q^{in}_{H_2} = \frac{C_V q}{(\tau_1 s + 1)(\tau_2 s + 1)}   (6.2)

where q is the methane flow rate and C_V, \tau_1 and \tau_2 are constants. Hence, from Eqs. (6.1) and (6.2), the pressure of hydrogen is

p_{H_2} = \frac{1/K_{H_2}}{\tau_{H_2} s + 1}\left(\frac{C_V q}{(\tau_1 s + 1)(\tau_2 s + 1)} - 2K_r I\right)   (6.3)

The pressure of oxygen is

p_{O_2} = \frac{1/K_{O_2}}{\tau_{O_2} s + 1}\left(q^{in}_{O_2} - K_r I\right)   (6.4)

where K_{O_2} and \tau_{O_2} denote the valve molar constant for oxygen and the response time of oxygen flow, respectively. The input flow rate of oxygen is

q^{in}_{O_2} = (1/\tau_{H-O}) q^{in}_{H_2}   (6.5)

where \tau_{H-O} is the ratio of hydrogen to oxygen. Using Eq. (6.2), the pressure of oxygen is

p_{O_2} = \frac{1/K_{O_2}}{\tau_{O_2} s + 1}\left(\frac{C_V/\tau_{H-O}}{(\tau_1 s + 1)(\tau_2 s + 1)} q - K_r I\right)   (6.6)

The pressure of water is

p_{H_2O} = \frac{1/K_{H_2O}}{\tau_{H_2O} s + 1} q^{r}_{H_2}   (6.7)

where K_{H_2O} and \tau_{H_2O} denote the valve molar constant for water and the response time of water flow, respectively. The hydrogen flow that reacts is

q^{r}_{H_2} = 2K_r I   (6.8)

Hence, from Eqs. (6.7) and (6.8), the pressure of water is

p_{H_2O} = \frac{2K_r/K_{H_2O}}{\tau_{H_2O} s + 1} I   (6.9)

Finally, the stack output voltage is

V = E - \eta_{act} - \eta_{ohmic}   (6.10)

From the Nernst equation

E = N_0\left(E_0 + \frac{RT}{2F}\ln\frac{p_{H_2}\sqrt{p_{O_2}}}{p_{H_2O}}\right)   (6.11)

where N_0, E_0, R, T and F denote the number of cells in series in the stack, the ideal standard potential, the universal gas constant, the absolute temperature and the Faraday constant, respectively. The activation losses are defined by

\eta_{act} = B \log(C I)   (6.12)

where B and C are constants. The ohmic losses are

\eta_{ohmic} = R^{int} I   (6.13)

where R^{int} is the internal resistance.
The continuous-time fundamental model consists of Eqs. (6.2), (6.3), (6.6), (6.8), (6.9), (6.10), (6.11), (6.12) and (6.13). The values of the parameters are given in Table 6.1. Table 6.2 gives the values of the process variables for the initial operating point. Figure 6.1 shows the structure of the continuous-time fundamental model of the process. The values of the process input and disturbance signals are constrained

0.1 mol s^{-1} \le q \le 2 mol s^{-1}   (6.14)

50 A \le I \le 150 A   (6.15)
Table 6.1 The fuel cell: the parameters of the fundamental continuous-time model
Table 6.2 The fuel cell: the values of process variables for the initial operating point
Fig. 6.1 The fuel cell: the structure of the continuous-time fundamental model
6.3 Modelling of the Proton Exchange Membrane Fuel Cell for MPC

It can be noted that the continuous-time fundamental model of the discussed PEM fuel cell is characterised by linear transfer functions (Eqs. (6.3), (6.6) and (6.9)), but the stack voltage is defined by the nonlinear steady-state Nernst equation (6.11) and the activation losses are defined by the nonlinear equation (6.12). It means that the outputs of the linear dynamic part of the model are inputs of the nonlinear static one. Hence, it is straightforward to use the Wiener structure as an empirical model of the considered PEM fuel cell. We will evaluate the performance of three structures of the neural Wiener model, for different numbers of hidden nodes in the nonlinear static block and different orders of dynamics of the linear model part. Furthermore, the chosen variant of the neural Wiener model will be compared with a corresponding polynomial Wiener structure. For model identification, the manipulated variable of the process, q, the disturbance, I, and the output, V, are scaled

u = q - \bar{q}, \quad h = 0.01(I - \bar{I}), \quad y = V - \bar{V}   (6.16)

where \bar{q}, \bar{I} and \bar{V} denote the values of the process variables for the initial operating point (Table 6.2). In the following part of this chapter, three structures of the Wiener model for the PEM process are discussed. Figure 6.2 depicts the first structure of the neural Wiener model (the structure A). It consists of a linear dynamic block connected in series with a nonlinear static one.
Fig. 6.2 The fuel cell: the structure A of the neural Wiener model
The linear block has two inputs (u, the controlled one, and h, the uncontrolled one) and one output, v, which is an auxiliary variable. The linear block is characterised by the equation

v(k) = \sum_{i=1}^{n_B^1} b_i^1 u(k-i) + \sum_{i=0}^{n_B^2} b_i^2 h(k-i) - \sum_{i=1}^{n_A} a_i v(k-i)   (6.17)
The integers n_A and n_B^j, for j = 1, 2, define the order of the model dynamics. The constant parameters of the linear dynamic block are denoted by the real numbers a_i (i = 1, ..., n_A), b_i^1 (i = 1, ..., n_B^1) and b_i^2 (i = 0, ..., n_B^2). It is important to note that the signal v(k) depends directly on the signal h(k) since the current, I, has an immediate impact on the voltage, V (it is clear from Eqs. (6.10) and (6.12)). The output signal of the linear dynamic block is taken as the input of the static one. The nonlinear static part of the model is described by the general equation used in the case of the SISO process, i.e. Eq. (2.5). A neural network of the MLP type with one input, one hidden layer containing K units and one output is used as the differentiable function g: R → R [12]. The model output is

y(k) = w_0^2 + \sum_{l=1}^{K} w_l^2 \varphi\left(w_{l,0}^1 + w_{l,1}^1 v(k)\right)   (6.18)
where \varphi: R → R is a nonlinear transfer function (e.g. the hyperbolic tangent). The weights of the network are denoted by w_{l,m}^1, l = 1, ..., K, m = 0, 1 and w_l^2, l = 0, ..., K, for the first and the second layers, respectively. The total number of weights is 3K + 1. Figure 6.3 depicts the second structure of the neural Wiener model (the structure B). It consists of two linear dynamic blocks and a nonlinear static one, but unlike the structure A, the latter one has two inputs. The outputs of the linear blocks are characterised by the equations

v_1(k) = \sum_{i=1}^{n_B^1} b_i^1 u(k-i) - \sum_{i=1}^{n_A^1} a_i^1 v_1(k-i)   (6.19)

v_2(k) = \sum_{i=0}^{n_B^2} b_i^2 h(k-i) - \sum_{i=1}^{n_A^2} a_i^2 v_2(k-i)   (6.20)
Fig. 6.3 The fuel cell: the structure B of the neural Wiener model
The integers n_A^j and n_B^j, j = 1, 2, define the order of the model dynamics. The constant parameters of the linear dynamic blocks are denoted by the real numbers a_i^1 (i = 1, ..., n_A^1), a_i^2 (i = 1, ..., n_A^2), b_i^1 (i = 1, ..., n_B^1) and b_i^2 (i = 0, ..., n_B^2). The signal v_2(k) depends on the signal h(k) since the current, I, has an immediate impact on the voltage, V. The nonlinear static block is described by the general equation

y(k) = g(v_1(k), v_2(k))   (6.21)

A neural network of the MLP type with two inputs, one hidden layer containing K units and one output is used. The model output is

y(k) = w_0^2 + \sum_{l=1}^{K} w_l^2 \varphi\left(w_{l,0}^1 + w_{l,1}^1 v_1(k) + w_{l,2}^1 v_2(k)\right)   (6.22)
The weights of the network are denoted by w_{l,j}^1, l = 1, ..., K, j = 0, 1, 2 and w_l^2, l = 0, ..., K, for the first and the second layers, respectively. The overall number of weights is 4K + 1. Figure 6.4 depicts the third structure of the neural Wiener model (the structure C). It has three linear dynamic blocks. They are characterised by the equations

v_1(k) = \sum_{i=1}^{n_B^{11}} b_i^{11} u(k-i) + \sum_{i=1}^{n_B^{12}} b_i^{12} h(k-i) - \sum_{i=1}^{n_A^1} a_i^1 v_1(k-i)   (6.23)

v_2(k) = \sum_{i=1}^{n_B^{21}} b_i^{21} u(k-i) + \sum_{i=1}^{n_B^{22}} b_i^{22} h(k-i) - \sum_{i=1}^{n_A^2} a_i^2 v_2(k-i)   (6.24)

v_3(k) = \sum_{i=1}^{n_B^{3}} b_i^{3} h(k-i) - \sum_{i=1}^{n_A^3} a_i^3 v_3(k-i)   (6.25)
The integers n_A^j for j = 1, 2, 3, n_B^{ij} for i = 1, 2, j = 1, 2 and n_B^3 define the order of the model dynamics. The constant parameters of the linear dynamic blocks are
Fig. 6.4 The fuel cell: the structure C of the neural Wiener model
denoted by the real numbers a_i^j (i = 1, ..., n_A^j, j = 1, 2, 3), b_i^{jl} (i = 1, ..., n_B^{jl}, j = 1, 2, l = 1, 2) and b_i^3 (i = 1, ..., n_B^3). The nonlinear static block is described by the general equation

y(k) = g(v_1(k), v_2(k), v_3(k), h(k))   (6.26)
Unlike the two previously discussed model structures, in the structure C the static block has an additional input which is the value of the disturbance signal, h, measured at the current sampling instant, k. A neural network of the MLP type with four inputs, one hidden layer containing K units and one output is used. The model output is

y(k) = w_0^2 + \sum_{l=1}^{K} w_l^2 \varphi\left(w_{l,0}^1 + \sum_{j=1}^{3} w_{l,j}^1 v_j(k) + w_{l,4}^1 h(k)\right)   (6.27)
Weights of the network are denoted by w_{l,j}^1, l = 1, ..., K, j = 0, ..., 4 and w_l^2, l = 0, ..., K, for the first and the second layers, respectively. The overall number of weights is 6K + 1. Next, we will discuss finding precise black-box models of the PEM fuel cell. A linear model and the three discussed neural Wiener structures (A, B and C) are considered. All models are assessed in terms of the model error and the number of model parameters. During model identification, two data sets are used: the training data set and the validation one. The first of them is used only to find the parameters of the models, whereas the second one is used only to assess the generalisation ability of the models, i.e. how a model behaves when it is excited by a different data set than that used for identification. To obtain these two data sets, the continuous-time fundamental model of the PEM process (defined by Eqs. (6.2), (6.3), (6.6), (6.8), (6.9), (6.10), (6.11), (6.12) and (6.13)) is simulated. The resulting system of differential equations is solved by the Runge–Kutta method of order 4(5). As the process input and disturbance signals,
random sequences from the ranges characterised by Eqs. (6.14) and (6.15) are used. The process signals (i.e. the manipulated variable, q, the disturbance, I, and the controlled variable, V) are sampled with the sampling period equal to 1 s. The training and validation data sets are shown in Fig. 6.5; both sets consist of 3000 samples. Since identification of nonlinear Wiener models is a nonlinear optimisation problem, training is repeated as many as ten times for each model configuration and the results presented next are the best obtained. All parameters of the Wiener model, i.e. the parameters of the dynamic part and the weights of the neural network, are determined from an identification procedure. During identification, the classical model error is minimised. The model error is defined as the sum of squared differences between the model output and the data for all available data samples [8]. Since the model is nonlinear, optimisation of the model parameters is a nonlinear optimisation task which is solved off-line. For this purpose, the SQP algorithm is used [20], which makes it possible to take constraints into account during optimisation. To enforce stability of the Wiener model, the poles of the linear dynamic block are optimised subject to stability constraints (in the discrete-time domain, all poles must belong to the unit circle). Next, from the
Fig. 6.5 The fuel cell: the training and validation data sets
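The excitation described above (random sequences within the input and disturbance ranges (6.14)–(6.15), sampled every 1 s, 3000 samples per data set) can be generated along the following lines. The hold time per random level is an assumption; the text only states the admissible ranges.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_staircase(n_samples, low, high, hold=20):
    """Piecewise-constant random excitation within [low, high].

    A new random level is drawn every `hold` samples (the hold time is an
    assumption; the chapter specifies only the admissible ranges).
    """
    n_levels = -(-n_samples // hold)          # ceiling division
    levels = rng.uniform(low, high, n_levels)
    return np.repeat(levels, hold)[:n_samples]

n = 3000                                      # samples per data set, Ts = 1 s
q = random_staircase(n, 0.1, 2.0)             # methane flow rate, Eq. (6.14)
I = random_staircase(n, 50.0, 150.0)          # current load, Eq. (6.15)
print(q.shape, I.shape)
```

Feeding such sequences to the simulated fundamental model and recording q, I and V yields training and validation records of the kind shown in Fig. 6.5.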
Table 6.3 The fuel cell: comparison of linear models of different orders of dynamics in terms of the number of parameters (n_par), the training error (E_train) and the validation error (E_val); the chosen model is emphasised
optimised poles, the model coefficients a_i^j are calculated. The values of b_i^j, w_{l,m}^1 and w_l^2 are directly calculated (optimised) with no constraints. Details of the optimisation procedure are described in [17]. At first, linear models of the process are considered. They have the following structure

y(k) = \sum_{i=1}^{n_B^1} b_i^1 u(k-i) + \sum_{i=0}^{n_B^2} b_i^2 h(k-i) - \sum_{i=1}^{n_A} a_i y(k-i)   (6.28)
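A difference equation of this form can be simulated directly by a recursion over past inputs, disturbances and outputs. The sketch below implements the general recursion and then runs a first-order case with hypothetical coefficients (n_B^1 = n_B^2 = n_A = 1), not an identified model of the fuel cell.

```python
def simulate_linear(u, h, b1, b2, a, n_steps):
    """Simulate y(k) = sum b1_i u(k-i) + sum b2_i h(k-i) - sum a_i y(k-i).

    b1: coefficients b_i^1 for i = 1..n_B^1
    b2: coefficients b_i^2 for i = 0..n_B^2
    a:  coefficients a_i  for i = 1..n_A
    All signals are assumed to be zero before k = 0.
    """
    y = [0.0] * n_steps
    for k in range(n_steps):
        acc = 0.0
        for i, bi in enumerate(b1, start=1):
            if k - i >= 0:
                acc += bi * u[k - i]
        for i, bi in enumerate(b2, start=0):
            if k - i >= 0:
                acc += bi * h[k - i]
        for i, ai in enumerate(a, start=1):
            if k - i >= 0:
                acc -= ai * y[k - i]
        y[k] = acc
    return y

# Hypothetical first-order example: y(k) = u(k-1) + 0.5*y(k-1), no disturbance
u = [1.0] * 5
h = [0.0] * 5
y = simulate_linear(u, h, b1=[1.0], b2=[0.0], a=[-0.5], n_steps=5)
print(y)  # step response converging towards the static gain b1/(1+a1) = 2
```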
Table 6.3 compares linear models of different orders of dynamics in terms of the number of parameters, the training error and the validation error. The first, the second, the third and the fourth order of dynamics are considered (the order is defined as an integer number n_B^1 = n_B^2 = n_A). As a compromise between model accuracy and complexity, the third-order model is chosen. The first part of Fig. 6.6 compares the validation data set with the output of the chosen linear model. The linear model is stable but not precise since there are significant differences between the model output and the data. Next, the neural Wiener structure A is considered. Table 6.4 presents the training and validation errors for models of different orders of dynamics of the linear dynamic block (the order is defined as an integer number n_B^1 = n_B^2 = n_A) and different numbers of hidden nodes, K, of the nonlinear static block. All compared models are of very low quality, only slightly better than the linear models (Table 6.3). Model complexity (defined by the order of dynamics and the number of hidden nodes) has practically no influence on model accuracy. For further comparison, the third-order model containing five hidden nodes is chosen. The second part of Fig. 6.6 compares the validation data set with the output of the neural Wiener model A. Slightly better results in comparison to the linear structure (the top part of this figure) can be obtained, but still, significant differences between the model output and the data are present. The training and validation errors of the neural Wiener structure B are given in Table 6.5 for models of different orders (the order is defined as an integer number n_B^1 = n_B^2 = n_A^1 = n_A^2) and different numbers of hidden nodes, K. In comparison with the linear model (Table 6.3) and the neural Wiener structure A (Table 6.4), the
Fig. 6.6 The fuel cell: the validation data set versus the output of the linear and neural Wiener models (all models have the third order of dynamics)
Table 6.4 The fuel cell: comparison of neural Wiener models A with different numbers of hidden nodes of the neural static block (K) and orders of dynamics of the dynamic block, in terms of the number of parameters of the neural network (n_par^nn), the training error (E_train) and the validation error (E_val); the chosen model is emphasised
Table 6.5 The fuel cell: comparison of neural Wiener models B with different numbers of hidden nodes of the neural static block (K) and orders of dynamics of the dynamic block, in terms of the number of parameters of the neural network (n_par^nn), the training error (E_train) and the validation error (E_val); the chosen model is emphasised
neural Wiener structure B has significantly lower errors. For further comparisons, the third-order model containing five hidden nodes is chosen. The third part of Fig. 6.6 compares the validation data set with the output of the neural Wiener model B. Unlike the structure A, the model output signal is very similar to the validation data; the differences are small. Finally, the neural Wiener structure C is considered. The training and validation errors of the model are given in Table 6.6 for models of different orders (where the order is defined as an integer number n_B^{ij} = n_A^i = n_B^3 = n_A^3 for i = 1, 2, j = 1, 2) and different numbers of hidden nodes, K. In comparison with the neural Wiener structures A and B (Tables 6.4 and 6.5, respectively), the structure C has significantly lower errors. Furthermore, there is a direct influence of the number of hidden nodes on model accuracy (the more hidden nodes, the lower the errors). It is interesting to notice that the third-order models are characterised by errors very similar to the fourth-order ones, whereas the first-order and second-order structures are significantly worse. As a compromise between accuracy and complexity, the third-order model containing five hidden nodes is chosen. The bottom part of Fig. 6.6 compares the validation data set with the output of the neural Wiener model C. In this case, it is practically impossible to see any differences between the validation data and the model output (which are present in the case of the neural Wiener structures A and B). The accuracy of the considered third-order linear model and all three third-order neural Wiener structures (with five hidden nodes) is compactly presented in Fig. 6.7, which depicts the relation between the validation data and the model outputs.
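To make the chosen structure concrete, the sketch below evaluates one step of the structure C model: the three linear filters (6.23)–(6.25) followed by the MLP static block (6.27) with tanh hidden units. The third-order coefficients and the network weights are random placeholders, not the identified model discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3   # third order of dynamics, as chosen in the text
K = 5   # hidden nodes, as chosen in the text

# Hypothetical (random) linear-block coefficients and MLP weights
b11, b12, a1 = rng.normal(size=n), rng.normal(size=n), 0.1 * rng.normal(size=n)
b21, b22, a2 = rng.normal(size=n), rng.normal(size=n), 0.1 * rng.normal(size=n)
b3, a3 = rng.normal(size=n), 0.1 * rng.normal(size=n)
W1 = rng.normal(size=(K, 5))   # hidden-layer rows: [bias, v1, v2, v3, h]
w2 = rng.normal(size=K + 1)    # output layer: [bias, hidden weights]

def wiener_c_step(u_past, h_past, h_now, v1_past, v2_past, v3_past):
    """One step of structure C: Eqs. (6.23)-(6.25), then Eq. (6.27)."""
    v1 = b11 @ u_past + b12 @ h_past - a1 @ v1_past
    v2 = b21 @ u_past + b22 @ h_past - a2 @ v2_past
    v3 = b3 @ h_past - a3 @ v3_past
    z = np.array([1.0, v1, v2, v3, h_now])
    hidden = np.tanh(W1 @ z)
    y = w2[0] + w2[1:] @ hidden
    return y, (v1, v2, v3)

# Past signals ordered as [x(k-1), x(k-2), x(k-3)]
y, _ = wiener_c_step(u_past=np.array([0.2, 0.1, 0.0]),
                     h_past=np.array([0.05, 0.0, 0.0]), h_now=0.05,
                     v1_past=np.zeros(n), v2_past=np.zeros(n),
                     v3_past=np.zeros(n))
print(y)
```

Repeating this step while shifting the past-signal buffers simulates the model over a whole data record, which is exactly what the validation comparison in Fig. 6.6 requires.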
The linear model and the neural Wiener structure A are imprecise; the neural Wiener structure B gives much better results and the neural Wiener structure C is excellent (the relation between the data set and the model output forms a line the slope of which is 45°). We have found that the structure C of the Wiener model gives the best results. The linear part of the model should be of the third order of dynamics. The neural network with K = 5 hidden nodes is used in the nonlinear static part of the chosen model, which means that the neural network has only 31 parameters. An interesting question is whether the neural network may be replaced by classical polynomials. The general structure of the considered polynomial Wiener model is the same as depicted in Fig. 6.4; the linear dynamic block is of the third order. The output of the polynomial Wiener model is not defined by Eq. (6.27) used in the case of the neural network but by the following relation

y(k) = g(v_1(k), v_2(k), v_3(k), h(k)) = \sum_{i=0}^{K}\sum_{j=0}^{K}\sum_{m=0}^{K}\sum_{n=0}^{K} c_{i,j,m,n} v_1^i(k) v_2^j(k) v_3^m(k) h^n(k)   (6.29)
where K denotes the degree of the polynomial and c_{i,j,m,n} are its coefficients. For example, let us consider the polynomial of degree K = 3
Table 6.6 The fuel cell: comparison of neural Wiener models C with different numbers of hidden nodes of the neural static block (K) and different orders of dynamics of the dynamic block, in terms of the number of parameters of the neural network (n_par^nn), the training error (E_train) and the validation error (E_val); the chosen model is emphasised
266 6 Modelling and MPC of the Proton Exchange Membrane Fuel Cell …
6.3 Modelling of the Proton Exchange Membrane Fuel Cell for MPC
Fig. 6.7 The fuel cell: the relation between the validation data and the outputs of the chosen third-order linear model and the third-order neural Wiener models A, B and C containing K = 5 hidden nodes
y(k) = c_{0,0,0,0} + c_{1,0,0,0} v_1(k) + c_{2,0,0,0} v_1^2(k) + c_{3,0,0,0} v_1^3(k)
+ c_{0,1,0,0} v_2(k) + c_{1,1,0,0} v_1(k)v_2(k) + c_{2,1,0,0} v_1^2(k)v_2(k) + c_{3,1,0,0} v_1^3(k)v_2(k)
+ c_{0,2,0,0} v_2^2(k) + c_{1,2,0,0} v_1(k)v_2^2(k) + c_{2,2,0,0} v_1^2(k)v_2^2(k) + c_{3,2,0,0} v_1^3(k)v_2^2(k)
+ c_{0,3,0,0} v_2^3(k) + c_{1,3,0,0} v_1(k)v_2^3(k) + c_{2,3,0,0} v_1^2(k)v_2^3(k) + c_{3,3,0,0} v_1^3(k)v_2^3(k)
+ c_{0,0,1,0} v_3(k) + c_{1,0,1,0} v_1(k)v_3(k) + c_{2,0,1,0} v_1^2(k)v_3(k) + c_{3,0,1,0} v_1^3(k)v_3(k)
+ ... + c_{0,3,3,3} v_2^3(k)v_3^3(k)h^3(k) + c_{1,3,3,3} v_1(k)v_2^3(k)v_3^3(k)h^3(k) + c_{2,3,3,3} v_1^2(k)v_2^3(k)v_3^3(k)h^3(k) + c_{3,3,3,3} v_1^3(k)v_2^3(k)v_3^3(k)h^3(k)   (6.30)
We can see that all possible combinations of the model arguments are present, which makes the real degree of the model as high as 12; the total number of parameters is (K+1)^4 = 256. Hence, let us consider the simplified polynomial in which the total degree of each component does not exceed K
y(k) = g(v_1(k), v_2(k), v_3(k), h(k)) = \sum_{\substack{i,j,m,n = 0 \\ i+j+m+n \le K}}^{K} c_{i,j,m,n} v_1^i(k) v_2^j(k) v_3^m(k) h^n(k)   (6.31)
The simplified model of degree K = 3 has only 35 parameters and the following form

y(k) = c_{0,0,0,0} + c_{1,0,0,0} v_1(k) + c_{2,0,0,0} v_1^2(k) + c_{3,0,0,0} v_1^3(k) + c_{0,1,0,0} v_2(k) + c_{1,1,0,0} v_1(k)v_2(k) + c_{2,1,0,0} v_1^2(k)v_2(k) + c_{0,2,0,0} v_2^2(k) + c_{1,2,0,0} v_1(k)v_2^2(k) + c_{0,3,0,0} v_2^3(k) + c_{0,0,1,0} v_3(k) + c_{1,0,1,0} v_1(k)v_3(k) + c_{2,0,1,0} v_1^2(k)v_3(k) + ... + c_{1,0,0,2} v_1(k)h^2(k) + c_{0,1,0,2} v_2(k)h^2(k) + c_{0,0,1,2} v_3(k)h^2(k) + c_{0,0,0,3} h^3(k)   (6.32)

Table 6.7 compares polynomial Wiener models (6.29) with different degrees of the static block in terms of the number of parameters of the polynomial, the training error and the validation error; all models have the third order of dynamics. All models generalise very badly: very low errors are obtained for the training set, but large errors are observed for the validation set. Let us also note that as the degree K is increased, the total number of parameters of the polynomial grows very rapidly. Since the training data set contains 3000 samples, the polynomials of degree 7 and higher have more parameters than there are data points. Table 6.8 compares simplified polynomial Wiener models (6.31) with different degrees of the static block in terms of the number of parameters of the polynomial, the training error and the validation error; all models have the third order of dynamics. In comparison with the rudimentary polynomial Wiener models, all simplified structures have a much lower number of parameters. Moreover, the models of degree K = 3, 4, 5, 6, 7, 8 have moderate values of the validation error. These values are much lower than in the case of the rudimentary polynomial structures (Table 6.7). The models of polynomial degree K = 9 or K = 10 have very low training errors but high validation ones.
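The parameter counts quoted above are easy to verify in a few lines (a hedged sketch: the function names are illustrative, not from the book; the number of monomials of total degree at most K in four variables is the binomial coefficient C(K+4, 4)):

```python
from math import comb

def full_poly_params(K: int, n_args: int = 4) -> int:
    # Full polynomial (6.29): indices i, j, m, n run independently from 0 to K
    return (K + 1) ** n_args

def simplified_poly_params(K: int, n_args: int = 4) -> int:
    # Simplified polynomial (6.31): only monomials with i + j + m + n <= K,
    # i.e. all monomials of total degree at most K in n_args variables
    return comb(K + n_args, n_args)

print(full_poly_params(3))        # 256, the count given for Eq. (6.30)
print(simplified_poly_params(3))  # 35, the count given for Eq. (6.32)
print(simplified_poly_params(4))  # 70
print(simplified_poly_params(7))  # 330
print(full_poly_params(7))        # 4096, more than the 3000 training samples
```

The rapid growth of the full polynomial, (K+1)^4, against the much slower growth of the simplified one is the source of the over-parameterisation discussed above.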
Unfortunately, when the simplified polynomial Wiener models are compared with the chosen neural one with K = 5 hidden nodes (Table 6.6), the disadvantages of the polynomial approach are clear: higher errors and a huge number of parameters. Let us consider Fig. 6.8, which shows the relation between the validation data and the outputs of the rudimentary and simplified polynomial Wiener models of degree K = 3, 4. We can see that the simplified models clearly outperform the rudimentary ones in terms of accuracy. On the other hand, the simplified polynomial Wiener models are noticeably worse than the chosen structure C of
Table 6.7 The fuel cell: comparison of polynomial Wiener models with different degrees of the static block (K) in terms of the number of parameters of the polynomial (n_par^pol), the training error (E_train) and the validation error (E_val); all models have the third order of dynamics
Table 6.8 The fuel cell: comparison of simplified polynomial Wiener models with different degrees of the static block (K) in terms of the number of parameters of the polynomial (n_par^pol), the training error (E_train) and the validation error (E_val); all models have the third order of dynamics
the neural Wiener one (Fig. 6.7). The chosen neural network has only 31 parameters; the simplified polynomials of the third and fourth degree have as many as 35 and 70 parameters, respectively. The best Wiener model with a simplified polynomial static block needs the degree K = 7 and has as many as 330 parameters. Hence, because of their high accuracy and low number of parameters, the neural Wiener models, not the polynomial ones, are used in the MPC algorithms.
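The 31-parameter count of the chosen static block is easy to check: each of the K = 5 hidden tanh nodes has one bias and four input weights (for v1, v2, v3 and h), and the output neuron adds K weights and one bias. A minimal sketch of the block (the weight layout and function name are illustrative assumptions; the weights are random here, only the structure matters):

```python
import numpy as np

def neural_static_block(v, h, W1, w2):
    # Static nonlinear block of the neural Wiener model C (cf. Eq. (6.27)):
    # one hidden tanh layer with inputs v1, v2, v3 and h.
    # W1 has shape (K, 5), columns: [bias, v1, v2, v3, h]; w2 has shape (K+1,)
    z = W1[:, 0] + W1[:, 1:4] @ np.asarray(v) + W1[:, 4] * h  # z_l(k)
    return w2[0] + w2[1:] @ np.tanh(z)

K = 5  # hidden nodes, as in the chosen model
rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(K, 5)), rng.normal(size=K + 1)
print(W1.size + w2.size)  # 31 parameters, as stated in the text
print(neural_static_block([0.1, -0.2, 0.3], 0.05, W1, w2))
```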
Fig. 6.8 The fuel cell: the relation between the validation data and the outputs of the rudimentary and simplified polynomial Wiener models of degree K = 3, 4 and the third order of dynamics
6.4 Implementation of MPC Algorithms for the Proton Exchange Membrane Fuel Cell

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model.
2. The MPC-NPSL algorithm.
3. The MPC-NPLT2 algorithm.
4. The MPC-NO algorithm.
In the LMPC algorithm, a linear model is used for prediction. All studied neural Wiener models (A, B and C) are used in the MPC-NO algorithm. In the MPC-NPSL, MPC-NPLT2 and MPC-NPLPT algorithms, the best neural Wiener model C is used. All models (linear and nonlinear) used in all MPC algorithms are of the third order of dynamics. Although all three neural Wiener model types may be used in MPC, the MPC-NPSL and MPC-NPLT2 algorithms are detailed for the most complex neural Wiener structure C shown in Fig. 6.4. Implementation details for the structures A and B may be easily derived from the given description. Of course, it is also possible to develop MPC algorithms for polynomial Wiener models, but since they are much less accurate than the neural ones and require many more parameters, we concentrate only on the latter structures. Because the considered neural Wiener models include not only
the input-output channel but also the measured disturbance-output one, we present a more thorough discussion of implementation details compared with the previous chapters, in which classical Wiener models without measured disturbances are used. At first, let us discuss implementation details of the MPC-NPSL algorithm. From Eq. (6.27), the time-varying gains of the v_i to y channels (i = 1, 2, 3) of the nonlinear block can be obtained as

K_i(k) = \frac{dy(k)}{dv_i(k)} = \frac{dg(v_1(k), v_2(k), v_3(k), h(k))}{dv_i(k)} = \sum_{l=1}^{K} w_l^2 \frac{d\varphi(z_l(k))}{dz_l(k)} w_{l,i}^1   (6.33)

where z_l(k) = w_{l,0}^1 + \sum_{j=1}^{3} w_{l,j}^1 v_j(k) + w_{l,4}^1 h(k). If the hyperbolic tangent is used as the activation function of the hidden layer of the neural network, i.e. \varphi = \tanh(\cdot), we have

\frac{d\varphi(z_l(k))}{dz_l(k)} = 1 - \tanh^2(z_l(k))   (6.34)

Taking into account the serial structure of the neural Wiener model C shown in Fig. 6.4, a linear approximation of the model output signal is

y(k) = K_1(k)v_1(k) + K_2(k)v_2(k) + K_3(k)v_3(k)   (6.35)
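The gain computation (6.33)–(6.34) can be sketched as follows (same illustrative weight layout as before: W1 of shape (K, 5) with columns [bias, v1, v2, v3, h], w2 of shape (K+1,)); a finite-difference check confirms the chain rule:

```python
import numpy as np

def npsl_gains(v, h, W1, w2):
    # Time-varying gains K_i(k) of Eq. (6.33): derivative of the tanh network
    # output with respect to each input v_i at the current operating point
    z = W1[:, 0] + W1[:, 1:4] @ np.asarray(v) + W1[:, 4] * h
    dphi = 1.0 - np.tanh(z) ** 2                       # Eq. (6.34)
    return np.array([w2[1:] @ (dphi * W1[:, 1 + i]) for i in range(3)])

def g(v, h, W1, w2):
    # The static block itself, used only for the finite-difference check
    z = W1[:, 0] + W1[:, 1:4] @ np.asarray(v) + W1[:, 4] * h
    return w2[0] + w2[1:] @ np.tanh(z)

rng = np.random.default_rng(1)
W1, w2 = rng.normal(size=(5, 5)), rng.normal(size=6)
v0, h0, eps = np.array([0.2, -0.1, 0.4]), 0.1, 1e-6
K_gains = npsl_gains(v0, h0, W1, w2)
fd = (g(v0 + eps * np.array([1.0, 0, 0]), h0, W1, w2) - g(v0, h0, W1, w2)) / eps
print(K_gains[0] - fd)  # close to zero
```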
Remembering that the dynamic blocks are linear with constant parameters (Eqs. (6.23)–(6.25)), the linearised model used in the MPC-NPSL algorithm for prediction (Eq. (6.35)) is linear but time-varying. In all MPC algorithms based on constant linear models, the predicted trajectory of the output is a linear combination of the decision variables of MPC [32]. Using the concept of linear MPC and taking into account the time-varying linear approximation of the neural Wiener structure C defined by Eq. (6.35), the prediction equation is obtained

\hat{y}(k) = (K_1(k)G_1 + K_2(k)G_2)\Delta u(k) + y^0(k)   (6.36)
It is straightforward to notice from Eqs. (6.23)–(6.25) that the manipulated variable, u, influences only the first two intermediate model variables, v_1 and v_2. Hence, only the channels u-v_1-y and u-v_2-y are considered in the forced trajectory. The constant matrices of dimensionality N × N_u, denoted by G_1 and G_2, are defined by Eq. (3.95) and are comprised of the step-response coefficients of the channels u-v_1 (n = 1) and u-v_2 (n = 2), respectively. They are calculated off-line in the classical way, i.e. in the same way as in the SISO case, from Eq. (3.92). For this purpose, the first two dynamic blocks (Eqs. (6.23)–(6.24)) are taken into account without any influence of the disturbance signal, h. In the MPC-NPSL algorithm, it is also necessary to find the free trajectory, y^0(k), defined by Eq. (3.88). It is calculated at each sampling instant not from the simplified linearised model (Eq. (6.35)) but from the full nonlinear Wiener model. From Eq. (6.27), we have
y^0(k+p|k) = w_0^2 + \sum_{l=1}^{K} w_l^2 \varphi\left( w_{l,0}^1 + \sum_{j=1}^{3} w_{l,j}^1 v_j^0(k+p|k) + w_{l,4}^1 h^{meas}(k+p|k) \right) + d(k)   (6.37)
The free trajectories of the variables v_1, v_2 and v_3 are denoted by v_1^0, v_2^0 and v_3^0, respectively. The free trajectory of the variable v_1 is obtained from Eq. (6.23)

v_1^0(k+p|k) = \sum_{i=1}^{I_{uf}^1(p)} b_i^{11} u(k-1) + \sum_{i=I_{uf}^1(p)+1}^{n_B^{11}} b_i^{11} u(k-i+p) + \sum_{i=1}^{n_B^{12}} b_i^{12} h^{meas}(k-i+p|k) - \sum_{i=1}^{I_{vf}^1(p)} a_i^1 v_1^0(k-i+p|k) - \sum_{i=I_{vf}^1(p)+1}^{n_A^1} a_i^1 v_1(k-i+p)   (6.38)

where I_{uf}^1(p) = \min(p, n_B^{11}), I_{vf}^1(p) = \min(p-1, n_A^1). The free trajectory of the variable v_2 is obtained from Eq. (6.24)
v_2^0(k+p|k) = \sum_{i=1}^{I_{uf}^2(p)} b_i^{21} u(k-1) + \sum_{i=I_{uf}^2(p)+1}^{n_B^{21}} b_i^{21} u(k-i+p) + \sum_{i=1}^{n_B^{22}} b_i^{22} h^{meas}(k-i+p|k) - \sum_{i=1}^{I_{vf}^2(p)} a_i^2 v_2^0(k-i+p|k) - \sum_{i=I_{vf}^2(p)+1}^{n_A^2} a_i^2 v_2(k-i+p)   (6.39)

where I_{uf}^2(p) = \min(p, n_B^{21}), I_{vf}^2(p) = \min(p-1, n_A^2). The free trajectory of the variable v_3 is obtained from Eq. (6.25)
v_3^0(k+p|k) = \sum_{i=1}^{n_B^3} b_i^3 h^{meas}(k-i+p|k) - \sum_{i=1}^{I_{vf}^3(p)} a_i^3 v_3^0(k-i+p|k) - \sum_{i=I_{vf}^3(p)+1}^{n_A^3} a_i^3 v_3(k-i+p)   (6.40)

where I_{vf}^3(p) = \min(p-1, n_A^3). The measured value of the disturbance is typically known only up to the current sampling instant. Using the scaled variables defined by Eq. (6.16), we have

h^{meas}(k+p|k) = 0.01(I(k+p) - \bar{I}) when p < 0, h^{meas}(k+p|k) = 0.01(I(k) - \bar{I}) when p ≥ 0   (6.41)
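The recursion (6.38) can be sketched in code. The two u-sums simply state that future inputs are frozen at u(k−1), and the two v_1-sums that predicted values v_1^0 replace past data once the time index passes the current instant; representing signals as dicts over absolute time makes this explicit (function and signal names are illustrative; the future measured disturbance is frozen at h(k), as in Eq. (6.41)):

```python
def free_trajectory_v1(b11, b12, a1, u, h, v1, k, N):
    # Free trajectory v1^0(k+p|k) of Eq. (6.38) for p = 1..N.
    # u, h, v1 are dicts indexed by sampling instant; coefficient lists are
    # indexed from 1 (index 0 unused), matching the sums in Eq. (6.38)
    u_free = lambda t: u[t] if t <= k - 1 else u[k - 1]   # no future moves
    h_free = lambda t: h[t] if t <= k else h[k]           # Eq. (6.41), p >= 0
    v0 = dict(v1)  # past data; predictions are appended below
    for p in range(1, N + 1):
        t = k + p
        v0[t] = (sum(b11[i] * u_free(t - i) for i in range(1, len(b11)))
                 + sum(b12[i] * h_free(t - i) for i in range(1, len(b12)))
                 - sum(a1[i] * v0[t - i] for i in range(1, len(a1))))
    return [v0[k + p] for p in range(1, N + 1)]

# Toy third-order block at a zero steady state; a unit step applied at k-1
b11 = [0.0, 0.2, 0.1, 0.05]; b12 = [0.0, 0.1, 0.0, 0.0]
a1 = [0.0, -0.9, 0.3, -0.05]
k, N = 0, 5
u = {t: 0.0 for t in range(-10, 0)}; u[-1] = 1.0
h = {t: 0.0 for t in range(-10, 1)}
v1 = {t: 0.0 for t in range(-10, 1)}
print(free_trajectory_v1(b11, b12, a1, u, h, v1, k, N))  # starts 0.3, 0.62, ...
```

The analogous recursions for v_2^0 (Eq. (6.39)) and v_3^0 (Eq. (6.40)) differ only in the coefficient sets and, for v_3^0, in the absence of the u-channel.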
The unmeasured disturbance acting on the process output, d(k), used in the free trajectory (Eq. (6.37)), is calculated as the difference between the value of the output signal measured at the current sampling instant, y(k), and the process output estimated from the model. Using Eq. (6.27), we obtain

d(k) = y(k) - w_0^2 - \sum_{l=1}^{K} w_l^2 \varphi\left( w_{l,0}^1 + \sum_{j=1}^{3} w_{l,j}^1 v_j(k) + w_{l,4}^1 h(k) \right)   (6.42)
where the model signals v_j(k) are calculated from Eqs. (6.23)–(6.25). Let us discuss implementation details of the MPC-NPLT2 algorithm for the most complex neural Wiener structure C shown in Fig. 6.4. The predicted trajectory \hat{y}^{traj}(k) and the matrix H(k) are calculated directly from the full nonlinear model of the process, without any simplification. From Eq. (6.27), the predicted trajectory is

y^{traj}(k+p|k) = w_0^2 + \sum_{l=1}^{K} w_l^2 \varphi\left( w_{l,0}^1 + \sum_{j=1}^{3} w_{l,j}^1 v_j^{traj}(k+p|k) + w_{l,4}^1 h^{meas}(k+p|k) \right) + d(k)   (6.43)
From Eq. (6.23), the predicted trajectory of the variable v_1 is

v_1^{traj}(k+p|k) = \sum_{i=1}^{I_{uf}^1(p)} b_i^{11} u^{traj}(k-i+p|k) + \sum_{i=I_{uf}^1(p)+1}^{n_B^{11}} b_i^{11} u(k-i+p) + \sum_{i=1}^{n_B^{12}} b_i^{12} h^{meas}(k-i+p|k) - \sum_{i=1}^{I_{vf}^1(p)} a_i^1 v_1^{traj}(k-i+p|k) - \sum_{i=I_{vf}^1(p)+1}^{n_A^1} a_i^1 v_1(k-i+p)   (6.44)

From Eq. (6.24), the predicted trajectory of the variable v_2 is
v_2^{traj}(k+p|k) = \sum_{i=1}^{I_{uf}^2(p)} b_i^{21} u^{traj}(k-i+p|k) + \sum_{i=I_{uf}^2(p)+1}^{n_B^{21}} b_i^{21} u(k-i+p) + \sum_{i=1}^{n_B^{22}} b_i^{22} h^{meas}(k-i+p|k) - \sum_{i=1}^{I_{vf}^2(p)} a_i^2 v_2^{traj}(k-i+p|k) - \sum_{i=I_{vf}^2(p)+1}^{n_A^2} a_i^2 v_2(k-i+p)   (6.45)
From Eq. (6.25), the predicted trajectory of the variable v_3 is

v_3^{traj}(k+p|k) = \sum_{i=1}^{n_B^3} b_i^3 h^{meas}(k-i+p|k) - \sum_{i=1}^{I_{vf}^3(p)} a_i^3 v_3^{traj}(k-i+p|k) - \sum_{i=I_{vf}^3(p)+1}^{n_A^3} a_i^3 v_3(k-i+p)   (6.46)
The unmeasured disturbance is assessed in the same way as in the MPC-NPSL algorithm (Eq. (6.42)). The entries of the matrix H(k) are determined by differentiating Eq. (6.43) and taking into account Eq. (3.211), which leads to

\frac{\partial \hat{y}^{traj}(k+p|k)}{\partial u^{traj}(k+r|k)} = \sum_{l=1}^{K} w_l^2 \frac{d\varphi(z_l^{traj}(k+p|k))}{dz_l^{traj}(k+p|k)} \sum_{j=1}^{2} w_{l,j}^1 h_j(p,r)   (6.47)

for all p = 1, ..., N, r = 0, ..., N_u - 1, where z_l^{traj}(k+p|k) = w_{l,0}^1 + \sum_{j=1}^{3} w_{l,j}^1 v_j^{traj}(k+p|k) + w_{l,4}^1 h^{meas}(k+p|k). The derivative on the right-hand side of Eq. (6.47) depends on the activation function used in the neural network. For the hyperbolic tangent, similarly to Eq. (6.34), we have

\frac{d\varphi(z_l^{traj}(k+p|k))}{dz_l^{traj}(k+p|k)} = 1 - \tanh^2\left(z_l^{traj}(k+p|k)\right)   (6.48)
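A sketch of Eq. (6.47): the only nonlinearity sits in the static block, so each entry of H(k) is a chain-rule product of the tanh-layer derivative and the sensitivities of the linear channels u→v_1 and u→v_2 (here S1 and S2 stand for the precomputed sensitivities of Eq. (3.211); their toy values, the function name and the weight layout are illustrative assumptions):

```python
import numpy as np

def nplt2_H(v_traj, h_traj, W1, w2, S1, S2):
    # Entries of H(k) from Eq. (6.47); S1[p, r] and S2[p, r] hold
    # d v_j^traj(k+p|k) / d u^traj(k+r|k) for channels j = 1, 2
    N, Nu = S1.shape
    H = np.zeros((N, Nu))
    for p in range(N):
        z = W1[:, 0] + W1[:, 1:4] @ v_traj[p] + W1[:, 4] * h_traj[p]
        dphi = 1.0 - np.tanh(z) ** 2                   # Eq. (6.48)
        g1 = w2[1:] @ (dphi * W1[:, 1])                # dy/dv1 at k+p
        g2 = w2[1:] @ (dphi * W1[:, 2])                # dy/dv2 at k+p
        H[p, :] = g1 * S1[p, :] + g2 * S2[p, :]
    return H

N, Nu, K = 10, 3, 5
rng = np.random.default_rng(2)
W1, w2 = rng.normal(size=(K, 5)), rng.normal(size=K + 1)
S1 = np.tril(np.ones((N, Nu)))           # toy lower-triangular sensitivities
S2 = np.tril(0.5 * np.ones((N, Nu)))
H = nplt2_H(rng.normal(size=(N, 3)), rng.normal(size=N), W1, w2, S1, S2)
print(H.shape)  # (10, 3)
```

Because the linear channels are causal, S1 and S2 are lower triangular, and so is H(k): future input moves cannot influence earlier predicted outputs.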
6.5 MPC of the Proton Exchange Membrane Fuel Cell

The continuous-time fundamental model that consists of Eqs. (6.2), (6.3), (6.6), (6.8), (6.9), (6.10), (6.11), (6.12) and (6.13) is used as the simulated process. In simulations, the horizons of all compared MPC algorithms are the same: N = 10 and N_u = 3; the default value of the parameter λ is 1. The objective of all MPC algorithms is
to control the process in such a way that the output, V, is close to the constant set-point V^{sp} despite the changes of the disturbance, I. The scenario of disturbance changes is

I(k) = \bar{I} for k < 5; \bar{I} + 25 for 5 ≤ k < 40; \bar{I} − 25 for 40 ≤ k < 80; \bar{I} + 50 for 80 ≤ k < 120; \bar{I} − 50 for 120 ≤ k < 160; \bar{I} for 160 ≤ k < 200   (6.49)

The magnitude of the manipulated variable is constrained: q^{min} = 0.1, q^{max} = 2. At first, the LMPC algorithm is considered. Simulation results for different values of the penalty factor λ are depicted in Fig. 6.9. Unfortunately, for the lowest value of that coefficient, i.e. for λ = 1, there are very strong oscillations of the process input and output variables. When the penalty coefficient is increased, for λ = 25 and λ = 50, the oscillations are damped, but the trajectory of the process output is slow. When λ = 100, no oscillations are observed, but the settling time is very long. It means that the LMPC algorithm is unable to quickly compensate for changes of the disturbance.
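The disturbance scenario (6.49) is straightforward to implement (a minimal sketch; the function name is an illustrative assumption):

```python
def disturbance_scenario(k: int, I_bar: float) -> float:
    # Disturbance I(k) of Eq. (6.49)
    if k < 5:
        return I_bar
    if k < 40:
        return I_bar + 25
    if k < 80:
        return I_bar - 25
    if k < 120:
        return I_bar + 50
    if k < 160:
        return I_bar - 50
    return I_bar  # 160 <= k < 200

print([disturbance_scenario(k, 100.0) for k in (0, 10, 50, 100, 150, 180)])
# [100.0, 125.0, 75.0, 150.0, 50.0, 100.0]
```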
Fig. 6.9 The fuel cell: simulation results of the LMPC algorithm for different values of the penalty factor λ
Fig. 6.10 The fuel cell: simulation results of the MPC-NO algorithm based on the neural Wiener model A for different values of the penalty factor λ
Due to the underlying linear model applied for prediction, the LMPC algorithm gives poor control results. It seems straightforward to consider a nonlinear model in MPC. At first, the neural Wiener structure A is used in the fully-fledged MPC-NO algorithm. Although the MPC-NO algorithm is computationally too demanding to be used in practice, in simulations it shows whether or not the model may be used for long-range prediction in MPC. Figure 6.10 depicts simulation results of the MPC-NO algorithm based on the neural Wiener model A for different values of the penalty factor λ (the same values are used as in the case of the LMPC algorithm). Since the neural Wiener model A is imprecise (Table 6.4, Figs. 6.6 and 6.7), the obtained control quality is poor. For the lowest value λ = 1, there are some damped oscillations which may be eliminated when the penalty coefficient is increased. Unfortunately, this results in very slow trajectories, like those in the case of the LMPC strategy. One may conclude that the neural Wiener model A is not precise enough to be used in MPC. Next, the neural Wiener models B and C are considered in MPC. Simulation results of the MPC-NO algorithm based on these models are shown in Fig. 6.11. It is possible to formulate two observations. First of all, unlike the LMPC algorithm and the MPC-NO strategy based on the neural Wiener model A, both neural Wiener models B and C, when applied for prediction in MPC, result in good control, i.e. it is possible to compensate quickly for changes of the disturbance. Secondly, it should be noticed that the MPC-NO algorithm gives slightly yet noticeably better results when the neural Wiener model C is used. In this case, overshoot is lower and the required
Fig. 6.11 The fuel cell: simulation results of the MPC-NO algorithm based on the neural Wiener models B and C, λ = 1
set-point is achieved faster. This results from the use of the more precise model C instead of the model B (Tables 6.5 and 6.6). Taking into account the simulation results presented in Figs. 6.10 and 6.11, it can be concluded that the neural Wiener model C results in strongly improved control quality when used in the MPC-NO algorithm. It should be noted that the MPC-NO algorithm requires solving a nonlinear optimisation problem on-line at each sampling instant. In order to reduce computational complexity, two alternatives are considered: the MPC-NPLT2 algorithm with trajectory linearisation and the MPC-NPSL algorithm with simplified model linearisation. Both algorithms result in quadratic optimisation problems; nonlinear optimisation is not necessary. Figure 6.12 compares trajectories of the MPC-NO algorithm with those obtained in the MPC-NPLT2 and MPC-NPSL strategies. Two observations may be made. Firstly, the MPC-NPLT2 algorithm with trajectory linearisation gives practically the same process trajectories as the "ideal" MPC-NO strategy; it is impossible to see any differences. It is a beneficial feature of the MPC-NPLT2 algorithm since it is significantly less computationally demanding but leads to the same control performance as the MPC-NO strategy. Secondly, the MPC-NPSL algorithm also works correctly; it only gives slightly greater overshoot than the MPC-NO and MPC-NPLT2 algorithms. It should be noted that the MPC-NPSL algorithm uses for prediction a linear approximation of the model obtained in a simple way; the quite complicated trajectory linearisation is not necessary.
Fig. 6.12 The fuel cell: simulation results of the MPC-NO, MPC-NPLT2 and MPC-NPSL algorithms based on the neural Wiener model C, λ = 1
Figure 6.13 depicts simulation results of the three compared nonlinear MPC algorithms based on the neural Wiener model C (MPC-NO, MPC-NPLT2 and MPC-NPSL), but now the increments of the manipulated variable are constrained, Δu^{max} = 0.1. Due to the additional constraints, the manipulated variable does not change as quickly as in Fig. 6.12 and, as a result, the trajectories of the process output are slower. In this case, the observations concerning the algorithms' performance are the same as before, i.e. the MPC-NPLT2 algorithm gives practically the same trajectories as the MPC-NO one and the MPC-NPSL algorithm gives only slightly greater overshoot. To further compare the MPC algorithms whose trajectories are depicted in Figs. 6.12 and 6.13, they are evaluated in Table 6.9 in terms of the performance criteria E_2 and E_{MPC-NO}. Additionally, the scaled calculation time is given; for the most computationally demanding solution, i.e. the MPC-NO strategy, it corresponds to 100%. In general, the values of E_2 for the MPC-NO and MPC-NPLT2 algorithms are the same, which indicates that the measure E_{MPC-NO} for the MPC-NPLT2 algorithm is low, close to 0. The MPC-NPSL scheme works very well, but when it is compared with the MPC-NO one, the differences are more noticeable than in the comparison between the MPC-NPLT2 and MPC-NO control methods. When the rate constraints are present, all trajectories (of the process input and output) are slower. As far as the calculation time is concerned, two general observations may be made. Firstly, the MPC-NPSL and MPC-NPLT2 algorithms are many times more computationally
Fig. 6.13 The fuel cell: simulation results of the MPC-NO, MPC-NPLT2 and MPC-NPSL algorithms based on the neural Wiener model C; the increments of the manipulated variable are constrained, Δu^{max} = 0.1, λ = 1

Table 6.9 The fuel cell: comparison of control performance criteria (E_2 and E_{MPC-NO}) as well as the calculation time for the MPC-NPSL, MPC-NPLT2 and MPC-NO algorithms based on the neural Wiener model C; λ = 1
efficient in comparison with the MPC-NO one. The MPC-NPSL scheme is somewhat less demanding than the MPC-NPLT2 one, but the difference is not big since the computational complexity is mostly influenced by the quadratic optimisation subroutine. Secondly, the introduction of the additional constraints imposed on the rate of change of the manipulated variable "helps" the optimisation routine to find the solution slightly faster.
References

1. Barbir, F.: PEM Fuel Cells: Theory and Practice. Academic, London (2013)
2. Baroud, Z., Benmiloud, M., Benalia, A., Ocampo-Martinez, C.: Novel hybrid fuzzy-PID control scheme for air supply in PEM fuel-cell-based systems. Int. J. Hydrog. Energy 42, 10435–10447 (2017)
3. Barzegari, M.M., Alizadeh, E., Pahnabi, A.H.: Grey-box modeling and model predictive control for cascade-type PEMFC. Energy 127, 611–622 (2017)
4. Barzegari, M.M., Dardel, M., Alizadeh, E., Ramiar, A.: Reduced-order model of cascade-type PEM fuel cell stack with integrated humidifiers and water separators. Energy 113, 683–692 (2016)
5. Beirami, H., Shabestari, A.Z., Zerafat, M.M.: Optimal PID plus fuzzy controller design for a PEM fuel cell air feed system using the self-adaptive differential evolution algorithm. Int. J. Hydrog. Energy 40, 9422–9434 (2015)
6. Benchouia, N.E., Derghal, A., Mahmah, B., Madi, B., Khochemane, L., Aoul, L.H.: An adaptive fuzzy logic controller (AFLC) for PEMFC fuel cell. Int. J. Hydrog. Energy 40, 13806–13819 (2015)
7. Damoura, C., Benne, M., Lebreton, C., Deseure, J., Grondin-Perez, B.: Real-time implementation of a neural model-based self-tuning PID strategy for oxygen stoichiometry control in PEM fuel cell. Int. J. Hydrog. Energy 39, 12819–12825 (2014)
8. Domański, P.D.: Control Performance Assessment: Theoretical Analyses and Industrial Practice. Studies in Systems, Decision and Control, vol. 245. Springer, Cham (2020)
9. Erdinc, O., Vural, B., Uzunoglu, M., Ates, Y.: Modeling and analysis of an FC/UC hybrid vehicular power system using a wavelet-fuzzy logic based load sharing and control algorithm. Int. J. Hydrog. Energy 34, 5223–5233 (2009)
10. Hähnel, C., Aul, V., Horn, J.: Power control for efficient operation of a PEM fuel cell system by nonlinear model predictive control. IFAC-PapersOnLine 48, 174–179 (2015)
11. Hatziadoniu, C.J., Lobo, A.A., Pourboghrat, F., Daneshdoost, M.: A simplified dynamic model of grid-connected fuel-cell generators. IEEE Trans. Power Deliv. 17, 467–473 (2002)
12. Haykin, S.: Neural Networks and Learning Machines. Pearson Education, Upper Saddle River (2009)
13. Hong, L., Chen, J., Liu, Z., Huang, L., Wu, Z.: A nonlinear control strategy for fuel delivery in PEM fuel cells considering nitrogen permeation. Int. J. Hydrog. Energy 42, 1565–1576 (2017)
14. Kisacikoglu, M.C., Uzunoglu, M., Alam, M.S.: Load sharing using fuzzy logic control in a fuel cell/ultracapacitor hybrid vehicle. Int. J. Hydrog. Energy 34, 1497–1507 (2009)
15. Kunusch, C., Puleston, P., Mayosky, M.: Sliding-Mode Control of PEM Fuel Cells. Springer, London (2012)
16. Larminie, J., Dicks, A.: Fuel Cell Systems Explained. Wiley, Chichester (2000)
17. Ławryńczuk, M.: Identification of Wiener models for dynamic and steady-state performance with application to solid oxide fuel cell. Asian J. Control 21, 1836–1846 (2019)
18. Ławryńczuk, M., Söffker, D.: Wiener structures for modeling and nonlinear predictive control of proton exchange membrane fuel cell. Nonlinear Dyn. 95, 1639–1660 (2019)
19. Meidanshahi, V., Karimi, G.: Dynamic modeling, optimization and control of power density in a PEM fuel cell. Appl. Energy 93, 98–105 (2012)
20. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, Berlin (2006)
21. Ou, K., Wang, Y.X., Li, Z.Z., Shen, Y.D., Xuan, D.J.: Feedforward fuzzy-PID control for air flow regulation of PEM fuel cell system. Int. J. Hydrog. Energy 40, 11686–11695 (2015)
22. Özbek, M.: Modeling, simulation, and concept studies of a fuel cell hybrid electric vehicle powertrain. Ph.D. thesis, University of Duisburg-Essen (2010)
23. Özbek, M., Wang, S., Marx, M., Söffker, D.: Modeling and control of a PEM fuel cell system: a practical study based on experimental defined component behavior. J. Process Control 23, 282–293 (2013)
24. Panos, C., Kouramas, K.I., Georgiadis, M.C., Pistikopoulos, E.N.: Modelling and explicit model predictive control for PEM fuel cell systems. Chem. Eng. Sci. 67, 15–25 (2012)
25. Pukrushpan, J.T., Stefanopoulou, A.G., Peng, H.: Control of Fuel Cell Power Systems: Principles, Modeling, Analysis and Feedback Design. Springer, London (2004)
26. Rosanas-Boeta, N., Ocampo-Martinez, C., Kunusch, C.: On the anode pressure and humidity regulation in PEM fuel cells: a nonlinear predictive control approach. IFAC-PapersOnLine 48, 434–439 (2015)
27. Schultze, M., Horn, J.: Modeling, state estimation and nonlinear model predictive control of cathode exhaust gas mass flow for PEM fuel cells. Control Eng. Pract. 43, 76–86 (2016)
28. Shahiri, M., Ranjbar, A., Karami, M.R., Ghaderi, R.: Robust control of nonlinear PEMFC against uncertainty using fractional complex order control. Nonlinear Dyn. 80, 1785–1800 (2015)
29. Shahiri, M., Ranjbar, A., Karami, M.R., Ghaderi, R.: New tuning design schemes of fractional complex-order PI controller. Nonlinear Dyn. 84, 1813–1835 (2016)
30. Suh, K.W.: Modeling, analysis and control of fuel cell hybrid power systems. Ph.D. thesis, University of Michigan, Ann Arbor (2016)
31. Talj, R.J., Hissel, D., Ortega, R., Becherif, M., Hilairet, M.: Experimental validation of a PEM fuel-cell reduced-order model and a moto-compressor higher order sliding-mode control. IEEE Trans. Ind. Electron. 57, 1906–1913 (2010)
32. Tatjewski, P.: Advanced Control of Industrial Processes, Structures and Algorithms. Springer, London (2007)
33. Uzunoglu, M., Alam, M.S.: Dynamic modeling, design and simulation of a combined PEM fuel cell and ultracapacitor system for stand-alone residential applications. IEEE Trans. Energy Convers. 21, 767–775 (2006)
34. Uzunoglu, M., Alam, M.S.: Dynamic modeling, design and simulation of a PEM fuel cell/ultracapacitor hybrid system for vehicular applications. Energy Convers. Manag. 48, 1544–1553 (2009)
35. Ziogou, C., Papadopoulou, S., Georgiadis, M.C., Voutetakis, S.: On-line nonlinear model predictive control of a PEM fuel cell system. Control Eng. Pract. 23, 483–492 (2013)
Part III
State-Space Approaches
Chapter 7
MPC Algorithms Using State-Space Wiener Models
Abstract This chapter details MPC algorithms for processes described by state-space Wiener models. At first, the simple MPC method based on the inverse static model is recalled and the rudimentary MPC-NO algorithm is described. Next, the computationally efficient MPC methods with on-line model linearisation are characterised: the MPC-SSL and MPC-NPSL ones, as well as two MPC schemes with on-line trajectory linearisation: the MPC-NPLT and MPC-NPLPT schemes. All MPC algorithms are presented without and with parameterisation using Laguerre functions. The classical prediction method and an original, very efficient one, both of which lead to offset-free control, are presented. Finally, state estimation methods for MPC are briefly discussed.
7.1 MPC-inv Algorithm in State-Space

In the simplest approach, we may use the MPC algorithm in which the inverse model of the static block is used. The general control system structure is presented in Fig. 3.1, the same as in the input-output process representation. In both input-output and state-space formulations, the limitations of this approach are the same as discussed in Sect. 3.1: the inverse model must exist, and it is best when the process is described by the SISO Wiener model or the MIMO Wiener models I or III since, in such cases, all inverse models are of the SISO type. For more complex Wiener models, the inverse models may be very complicated, which makes implementation difficult or even impossible. Example simulated processes for which the MPC-inv algorithm based on the state-space Wiener model is discussed in the literature are: a continuous stirred tank reactor [3], polymerisation reactors [3, 6, 17] and a neutralisation reactor [4].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_7
7.2 MPC-NO Algorithm in State-Space

Prediction in State-Space for Offset-Free Control

A conventional method for providing offset-free MPC with state estimation is to augment the state equations by taking into account additional disturbance states in the model. For linear state-space systems, such a method was discussed in [5, 12, 13, 15, 16]. Next, this approach was extended in [14] to deal with nonlinear processes of the following general form (the subscript "p" refers to the process)

x(k+1) = f_p(x(k), u(k), d_p(k))   (7.1)
y(k) = g_p(x(k), d_p(k))   (7.2)

where d_p(k) represents all, generally unknown, true disturbances affecting the controlled process. For prediction in MPC, the following augmented model is used

x(k+1) = f_aug(x(k), u(k), d(k))   (7.3)
d(k+1) = d(k)   (7.4)
y(k) = g_aug(x(k), d(k))   (7.5)

where d is the vector of disturbances used in the model, of length n_d, i.e. d = [d_1 ... d_{n_d}]^T. It is required that the number of disturbances included in the model does not exceed the number of measured outputs, i.e. n_d ≤ n_y, which is an obvious disadvantage of the augmented state method. For example, when n_y = 1, as many as n_x + 1 possibilities exist: the single disturbance d may be located in any of the n_x state equations or in the output equation. The actual location of the disturbance(s) is an issue; usually many possibilities must be verified and the best one chosen. In this work, an original prediction calculation method is used to determine the predicted values of the state and output variables by means of the state-space Wiener model of the process. The prediction method detailed below is used in all MPC algorithms. In place of the augmented model (7.3)–(7.5), based on the state-space Wiener model defined by Eqs. (2.84)–(2.85), the following model is used for prediction

x(k+1) = Ax(k) + Bu(k) + ν(k)   (7.6)
y(k) = g(Cx(k)) + d(k)   (7.7)

Unlike the augmented model, disturbances are taken into account in all state and output equations. The state disturbance vector ν = [ν_1 ... ν_{n_x}]^T is determined as the difference between the estimated state, x̃(k), and the state calculated from the state equation (2.84)

ν(k) = x̃(k) − (Ax̃(k−1) + Bu(k−1))   (7.8)
where x̃ = [x̃_1 ... x̃_{n_x}]^T denotes the vector of estimated state variables. Of course, if the state vector can be measured, measurements are used in place of estimates, which gives

ν(k) = x(k) − (Ax(k − 1) + Bu(k − 1))   (7.9)

Unfortunately, in practice this is possible only in very few cases. The output disturbance vector d = [d_1 ... d_{n_y}]^T is calculated as the difference between the measured output vector, y(k), and the output calculated from the output equation (2.85)

d(k) = y(k) − g(ṽ(k)) = y(k) − g(Cx̃(k))   (7.10)

When the state vector can be measured, we have

d(k) = y(k) − g(v(k)) = y(k) − g(Cx(k))   (7.11)
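The disturbance estimates of Eqs. (7.8) and (7.10) are straightforward to compute. As an illustration, a minimal NumPy sketch (not the book's MATLAB code; the static block g and all data are hypothetical placeholders):

```python
import numpy as np

def estimate_disturbances(A, B, C, g, x_est, x_prev, u_prev, y_meas):
    """Estimate the state disturbance nu(k) (Eq. (7.8)) and the output
    disturbance d(k) (Eq. (7.10)) of the state-space Wiener model.

    g      : static nonlinear block, mapping v = C x to the model output
    x_est  : estimated (or measured) state at instant k
    x_prev : state at instant k - 1
    u_prev : input applied at instant k - 1
    y_meas : measured output at instant k
    """
    nu = x_est - (A @ x_prev + B @ u_prev)   # Eq. (7.8)
    d = y_meas - g(C @ x_est)                # Eq. (7.10)
    return nu, d
```

When the model matches the process exactly and no disturbances act, both estimates are zero, which is the consistency property used in the offset-free argument.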
In the MPC-NO strategy, the state variables predicted for the sampling instant k + 1 at the current instant k are obtained from Eq. (7.6)

x̂(k + 1|k) = Ax̃(k) + Bu(k|k) + ν(k)   (7.12)

When the state is measured, in place of Eq. (7.12), we have

x̂(k + 1|k) = Ax(k) + Bu(k|k) + ν(k)   (7.13)

Similarly, using Eq. (7.6) recurrently, the predictions calculated at the sampling instant k for the sampling instants k + p are

x̂(k + p|k) = Ax̂(k + p − 1|k) + Bu(k + p − 1|k) + ν(k)   (7.14)

where p = 2, ..., N. Using Eq. (7.7), the output predictions for the sampling instant k + p, calculated at the current sampling instant k, are

ŷ(k + p|k) = g(Cx̂(k + p|k)) + d(k)   (7.15)
for p = 1, ..., N. In Eqs. (7.12), (7.13) and (7.14), the same state disturbance vector, ν(k), is used over the whole prediction horizon. Similarly, in Eq. (7.15) the same output disturbance vector, d(k), is used. Because the variability of future disturbances is typically not known, they are assumed to be constant over the whole prediction horizon [19]. The presented disturbance modelling was introduced in [20] for linear state-space systems and further extended to nonlinear ones in [21, 22]. A computationally efficient MPC scheme using the considered disturbance modelling was introduced in [7, 8]. The discussed approach to offset-free control has the following advantages:
(a) simplicity of development: there is no need to check all possible disturbance locations, as is necessary in the augmented state approach,
(b) ability to compensate for deterministic constant-type disturbances affecting the process, which are crucial in process control because they include unavoidable modelling errors and piecewise-constant disturbances,
(c) only the process state must be estimated, not accompanied by the disturbance vector, as is necessary in the case of the conventional augmented state method.

A unique feature of the proposed prediction method is that the resulting MPC controllers assure offset-free control without the necessity to use an additional observer of the deterministic disturbances. The key factor is the use of properly defined and updated state and output disturbances, ν(k) and d(k), used in the state and output prediction equations, respectively. Let us derive scalar prediction equations which will be convenient for future transformations. We assume that the estimated state vector, x̃, is used. When the state is measured, it must be replaced by the measured vector, x.

Prediction Using State-Space SISO Wiener Model

At first, let us discuss the state-space SISO case in which the Wiener model depicted in Fig. 2.1 is used. Model matrices A, B and C are given by Eq. (2.83). From Eqs. (2.86) and (7.12), the state predictions for the sampling instant k + 1 are

x̂_i(k + 1|k) = Σ_{j=1}^{n_x} a_{i,j} x̃_j(k) + b_{i,1} u(k|k) + ν_i(k)   (7.16)

for i = 1, ..., n_x. From Eqs. (2.86) and (7.14), the state predictions for the sampling instant k + p are

x̂_i(k + p|k) = Σ_{j=1}^{n_x} a_{i,j} x̂_j(k + p − 1|k) + b_{i,1} u(k + p − 1|k) + ν_i(k)   (7.17)

for i = 1, ..., n_x, p = 2, ..., N. From Eqs. (2.87) and (7.15), the predictions of the controlled variable are

ŷ(k + p|k) = g(v(k + p|k)) + d(k)
           = g(Σ_{i=1}^{n_x} c_{1,i} x̂_i(k + p|k)) + d(k)   (7.18)

From Eqs. (2.86) and (7.8), the state disturbances are estimated from

ν_i(k) = x̃_i(k) − (Σ_{j=1}^{n_x} a_{i,j} x̃_j(k − 1) + b_{i,1} u(k − 1))   (7.19)

for i = 1, ..., n_x. From Eqs. (2.87) and (7.10), the output disturbance is estimated from

d(k) = y(k) − g(Σ_{i=1}^{n_x} c_{1,i} x̃_i(k))   (7.20)
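In vector form, the prediction recursion of Eqs. (7.12), (7.14) and (7.15) may be sketched as follows (a hedged NumPy illustration, not the book's MATLAB implementation; g is a placeholder static block):

```python
import numpy as np

def predict_trajectory(A, B, C, g, x_est, u_seq, nu, d):
    """State predictions from Eqs. (7.12) and (7.14) and output
    predictions from Eq. (7.15); the disturbances nu(k) and d(k)
    are held constant over the whole prediction horizon N = len(u_seq)."""
    x_pred, y_pred = [], []
    x = x_est
    for u in u_seq:                      # p = 1, ..., N
        x = A @ x + B @ u + nu           # Eqs. (7.12)/(7.14)
        x_pred.append(x)
        y_pred.append(g(C @ x) + d)      # Eq. (7.15)
    return np.array(x_pred), np.array(y_pred)
```

The scalar equations (7.16)–(7.18) are simply the component-wise form of this recursion.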
Prediction Using State-Space MIMO Wiener Model I

Next, we will discuss the case when the state-space MIMO Wiener model I depicted in Fig. 2.2 is used for prediction. Model matrices A, B and C are given by Eq. (2.89). From Eqs. (2.91) and (7.12), the state predictions for the sampling instant k + 1 are

x̂_i(k + 1|k) = Σ_{j=1}^{n_x} a_{i,j} x̃_j(k) + Σ_{j=1}^{n_u} b_{i,j} u_j(k|k) + ν_i(k)   (7.21)

for i = 1, ..., n_x. From Eqs. (2.91) and (7.14), the state predictions for the sampling instant k + p are

x̂_i(k + p|k) = Σ_{j=1}^{n_x} a_{i,j} x̂_j(k + p − 1|k) + Σ_{j=1}^{n_u} b_{i,j} u_j(k + p − 1|k) + ν_i(k)   (7.22)

for i = 1, ..., n_x, p = 2, ..., N. From Eqs. (2.92) and (7.15), the predictions of the controlled variables are

ŷ_m(k + p|k) = g_m(v_m(k + p|k)) + d_m(k)
             = g_m(Σ_{i=1}^{n_x} c_{m,i} x̂_i(k + p|k)) + d_m(k)   (7.23)

for m = 1, ..., n_y, p = 1, ..., N. From Eqs. (2.91) and (7.8), the state disturbances are estimated from

ν_i(k) = x̃_i(k) − (Σ_{j=1}^{n_x} a_{i,j} x̃_j(k − 1) + Σ_{j=1}^{n_u} b_{i,j} u_j(k − 1))   (7.24)

for i = 1, ..., n_x. From Eqs. (2.92) and (7.10), the output disturbances are estimated from

d_m(k) = y_m(k) − g_m(Σ_{i=1}^{n_x} c_{m,i} x̃_i(k))   (7.25)

for m = 1, ..., n_y.
Prediction Using State-Space MIMO Wiener Model II

Finally, we will discuss the case when the state-space MIMO Wiener model II depicted in Fig. 2.3 is used for prediction. Model matrices A, B and C are given by Eq. (2.94). Because the state equation is the same in both types of the state-space MIMO Wiener model, the state predictions given by Eqs. (7.21) and (7.22) hold true in the second model structure. Analogously, the state disturbances are estimated from Eq. (7.24) in both cases. From Eqs. (2.97) and (7.15), the output predictions for the sampling instant k + p are

ŷ_m(k + p|k) = g_m(v_1(k + p|k), ..., v_{n_v}(k + p|k)) + d_m(k)
             = g_m(Σ_{i=1}^{n_x} c_{1,i} x̂_i(k + p|k), ..., Σ_{i=1}^{n_x} c_{n_v,i} x̂_i(k + p|k)) + d_m(k)   (7.26)

for m = 1, ..., n_y, p = 1, ..., N. From Eqs. (2.97) and (7.10), the output disturbances are estimated from

d_m(k) = y_m(k) − g_m(Σ_{i=1}^{n_x} c_{1,i} x̃_i(k), ..., Σ_{i=1}^{n_x} c_{n_v,i} x̃_i(k))   (7.27)
for m = 1, ..., n_y.

Optimisation

Taking into account the obtained state and output prediction equations, i.e. Eqs. (7.16), (7.17), (7.18) (the state-space SISO Wiener model), (7.21), (7.22), (7.23) (the state-space MIMO Wiener model I) and (7.21), (7.22), (7.26) (the state-space MIMO Wiener model II), it is clear that the predicted controlled variables are nonlinear functions of the calculated future increments (1.3). This means that the resulting MPC-NO optimisation problem is also nonlinear. The general formulations of these MPC-NO optimisation problems are the same when input-output and state-space process descriptions are used. If hard constraints are imposed on the controlled variables, we obtain the nonlinear optimisation task (1.35). If soft constraints are used, the nonlinear task is defined by Eq. (1.39). As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC optimisation task is solved in MATLAB by means of the fmincon function. All details are given in Sect. 3.2. Of course, the main difference is the way the predicted vector of the controlled variables, ŷ(k), is calculated: in the state-space description the state equations must be used. Naturally, a state estimator must be used when the state cannot be measured. Applications of the MPC-NO algorithm to Wiener systems are rather rare. An example application of the MPC-NO algorithm based on a state-space Wiener model to a plug-flow tubular reactor is presented in [2]. The MPC-NO algorithm is typically treated as a reference to which alternative, more computationally efficient control schemes are compared.
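By analogy with the MATLAB fmincon formulation, the nonlinear MPC-NO optimisation may be sketched in Python with scipy.optimize.minimize. This is a deliberately simplified SISO illustration with rate constraints only, not the book's implementation; the cost weights and bounds are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def mpc_no_step(A, B, C, g, x_est, u_prev, nu, d, y_sp, N, Nu, lam=0.1):
    """One MPC-NO iteration: nonlinear optimisation over the future control
    increments; predictions follow Eqs. (7.14)-(7.15). Returns the first
    optimal increment (receding-horizon principle)."""
    def cost(du):
        # Control values over the horizon; the input is held after Nu moves
        u = u_prev + np.cumsum(np.concatenate([du, np.zeros(N - Nu)]))
        x, J = x_est, 0.0
        for p in range(N):
            x = A @ x + B @ np.atleast_1d(u[p]) + nu   # Eq. (7.14)
            y_hat = g(C @ x) + d                       # Eq. (7.15)
            J += ((y_sp - y_hat) ** 2).sum()
        return J + lam * float(du @ du)
    res = minimize(cost, np.zeros(Nu), method="SLSQP",
                   bounds=[(-1.0, 1.0)] * Nu)          # rate constraints
    return res.x[0]
```

For a setpoint above the current output, the optimiser naturally returns a positive first increment.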
7.3 MPC-NO-P Algorithm in State-Space

In the state-space MPC-NO-P algorithm, all prediction equations derived in Sect. 7.2 for the MPC-NO algorithm can be used; it is only necessary to find from Eq. (1.56) the control increments, Δu(k), for the currently calculated vector of decision variables, c(k). Although parameterisation using Laguerre functions makes it possible to reduce the number of decision variables, we still have to solve a nonlinear optimisation task, because the predicted vector of the controlled variables, ŷ(k), is a nonlinear function of the calculated decision vector, c(k). The general formulations of these optimisation problems are the same when input-output and state-space process descriptions are used. If hard constraints are imposed on the controlled variables, we obtain the nonlinear optimisation task (3.54). If soft constraints are used, the nonlinear task is defined by Eq. (3.66). As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC optimisation task is solved in MATLAB by means of the fmincon function. All details are given in Sect. 3.3.
7.4 MPC-NPSL and MPC-SSL Algorithms in State-Space

The MPC-SSL algorithm based on the state-space Wiener model with a neural static block is described in [1, 10]. Effectiveness of the algorithm is shown for two simulated processes: a gasifier in the first case and an intensified continuous chemical reactor in the second one. The authors of these works show that the classical LMPC algorithm based on a linear process description results in unsatisfactory control quality, whereas the MPC-SSL scheme gives much better results. Unfortunately, the MPC-NPSL strategy is not discussed, yet it is likely to improve the quality of control. Both MPC-SSL and MPC-NPSL schemes for the state-space MIMO Wiener model I are discussed and compared in [9]. The description presented in this chapter extends that publication.

Prediction Using State-Space SISO Wiener Model

At first, let us discuss the state-space SISO case in which the Wiener model depicted in Fig. 2.1 is used. The time-varying gain of the nonlinear static part of the model for the current operating point is defined by the general equations (3.69) and (3.70), i.e.

K(k) = dy(k)/dv(k) = dg(v(k))/dv(k)   (7.28)

where the model signal v(k) is calculated using Eq. (2.88), which gives

v(k) = Σ_{i=1}^{n_x} c_{1,i} x̃_i(k)   (7.29)
Prediction Using State-Space MIMO Wiener Model I

If the state-space MIMO Wiener model I depicted in Fig. 2.2 is used, the time-varying gains of the nonlinear static blocks of the model for the current operating point are defined by the general equations (3.102)–(3.103) and (3.104)–(3.105), i.e.

K_m(k) = dy_m(k)/dv_m(k) = dg_m(v_m(k))/dv_m(k)   (7.30)

for m = 1, ..., n_y, where the model signals v_m(k) are calculated using Eq. (2.93), which gives

v_m(k) = Σ_{i=1}^{n_x} c_{m,i} x̃_i(k)   (7.31)

Prediction Using State-Space MIMO Wiener Model II

If the state-space MIMO Wiener model II depicted in Fig. 2.3 is used, the time-varying gains of the nonlinear static blocks of the model for the current operating point are defined by the general equations (3.125)–(3.126) and (3.127)–(3.128), i.e.

K_{m,n}(k) = dy_m(k)/dv_n(k) = dg_m(v_1(k), ..., v_{n_v}(k))/dv_n(k)   (7.32)

for m = 1, ..., n_y, n = 1, ..., n_v. The model signals v_n(k) are calculated using Eq. (2.98), which gives

v_n(k) = Σ_{i=1}^{n_x} c_{n,i} x̃_i(k)   (7.33)
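When the static block g is available only as a function (e.g. a neural network), the gain of Eq. (7.28) at the current operating point can be approximated numerically. A hedged SISO sketch (the central-difference step eps is an illustrative choice):

```python
import numpy as np

def linearised_gain(C, g, x_est, eps=1e-6):
    """Time-varying gain K(k) = dg(v)/dv of Eq. (7.28), evaluated at the
    current operating point v(k) = C x(k) (Eq. (7.29)), via a central
    difference; g is the scalar static nonlinearity (SISO case)."""
    v = (C @ x_est).item()                   # Eq. (7.29)
    return (g(v + eps) - g(v - eps)) / (2.0 * eps)
```

If g is given analytically (e.g. a polynomial), its exact derivative should of course be used instead.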
As a result of linearisation, taking into account the serial structure of the Wiener model, we may easily conclude that in the SISO case the model output may be expressed as the multiplication of the time-varying gain K(k) and the auxiliary signal v(k), as defined by Eq. (3.71). In the case of the MIMO Wiener model I, the signals y_1(k), ..., y_{n_y}(k) may likewise be found as multiplications of the corresponding time-varying gains K_1(k), ..., K_{n_y}(k) and the auxiliary signals v_1(k), ..., v_{n_y}(k), respectively, as defined by Eqs. (3.106)–(3.107). The diagonal gain matrix K(k), of dimensionality n_y × n_y, is defined by Eq. (3.108). Hence, the linear approximation of the state-space Wiener model (2.84)–(2.85) is

x(k + 1) = Ax(k) + Bu(k)   (7.34)

y(k) = K(k)v(k) = K(k)Cx(k)   (7.35)
When the MIMO Wiener model II is used, the signals y_1(k), ..., y_{n_y}(k) may be found as multiplications of the corresponding time-varying gains K_{1,1}(k), ..., K_{n_y,n_v}(k) and the auxiliary signals v_1(k), ..., v_{n_v}(k). In contrast to the MIMO Wiener model I, all input-output channels must be taken into consideration, as defined by Eqs. (3.129)–(3.130). The resulting linear approximation of the MIMO Wiener model II is also defined by Eqs. (7.34)–(7.35), but now v(k) is the vector of length n_v and the matrix K(k), of dimensionality n_y × n_v, is defined by Eq. (3.131). All things considered, Eqs. (7.34)–(7.35) are used in all three cases of the state-space Wiener models, i.e. for the SISO structure as well as the MIMO representations I and II. One may easily notice that the structure of the obtained linearised model (7.34)–(7.35) is similar to that of the classical linear state-space models, but a time-varying matrix K(k) is used in the output equation. Using recurrently the general state prediction formula (7.6) and the state equation (7.34), it is possible to calculate the predicted state vector for the whole prediction horizon (p = 1, ..., N)

x̂(k + 1|k) = Ax(k) + Bu(k|k) + ν(k)   (7.36)

x̂(k + 2|k) = Ax̂(k + 1|k) + Bu(k + 1|k) + ν(k)   (7.37)

x̂(k + 3|k) = Ax̂(k + 2|k) + Bu(k + 2|k) + ν(k)   (7.38)

⋮

The state predictions can be expressed as functions of the future control increments (similarly to Eqs. (3.89)–(3.91); the influence of the past is not shown)

x̂(k + 1|k) = BΔu(k|k) + ...   (7.39)

x̂(k + 2|k) = (A + I)BΔu(k|k) + BΔu(k + 1|k) + ...   (7.40)

x̂(k + 3|k) = (A² + A + I)BΔu(k|k) + (A + I)BΔu(k + 1|k) + BΔu(k + 2|k) + ...   (7.41)

⋮

Let us define the predicted state trajectory over the whole prediction horizon, the vector of length n_x N

x̂(k) = [x̂(k + 1|k)^T ... x̂(k + N|k)^T]^T   (7.42)

From Eqs. (7.39)–(7.41), the predicted state vector can be expressed in the following way

x̂(k) = PΔu(k) + x⁰(k)   (7.43)
where the matrix

      ⎡ B                            0_{n_x×n_u}                  ...  0_{n_x×n_u}                 ⎤
      ⎢ (A + I)B                     B                            ...  0_{n_x×n_u}                 ⎥
      ⎢ ⋮                            ⋮                            ⋱    ⋮                           ⎥
P =   ⎢ (Σ_{i=1}^{N_u−1} A^i + I)B   (Σ_{i=1}^{N_u−2} A^i + I)B   ...  B                           ⎥   (7.44)
      ⎢ (Σ_{i=1}^{N_u} A^i + I)B     (Σ_{i=1}^{N_u−1} A^i + I)B   ...  (A + I)B                    ⎥
      ⎢ ⋮                            ⋮                            ⋱    ⋮                           ⎥
      ⎣ (Σ_{i=1}^{N−1} A^i + I)B     (Σ_{i=1}^{N−2} A^i + I)B     ...  (Σ_{i=1}^{N−N_u} A^i + I)B  ⎦

is of dimensionality n_x N × n_u N_u and the free state trajectory vector

x⁰(k) = [x⁰(k + 1|k)^T ... x⁰(k + N|k)^T]^T   (7.45)
is of length n_x N. Using the obtained linearised output equation (7.35) and the state prediction equation (7.43), we derive the predicted trajectory of the controlled variables, defined by Eq. (1.22), as

ŷ(k) = K(k)PΔu(k) + y⁰(k)   (7.46)

where the matrix of dimensionality n_y N × n_x N is

K(k) = diag(K(k)C, ..., K(k)C)   (7.47)
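The matrix P of Eq. (7.44) can be assembled numerically off-line; a hedged NumPy sketch:

```python
import numpy as np

def build_P(A, B, N, Nu):
    """Prediction matrix P of Eq. (7.44): block (p, r) equals
    (A^0 + ... + A^(p-r)) B for p >= r and is zero otherwise;
    dimensionality (nx N) x (nu Nu)."""
    nx, nu = B.shape
    blocks, acc, Apow = [], np.zeros((nx, nx)), np.eye(nx)
    for _ in range(N):                  # blocks[q] = (A^0 + ... + A^q) B
        acc = acc + Apow
        blocks.append(acc @ B)
        Apow = Apow @ A
    P = np.zeros((nx * N, nu * Nu))
    for p in range(N):                  # block row p (instant k + p + 1)
        for r in range(min(p + 1, Nu)):
            P[p*nx:(p+1)*nx, r*nu:(r+1)*nu] = blocks[p - r]
    return P
```

Since A and B are constant, P is computed once, before the control loop starts.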
Let us recall that in the input-output process description we use the prediction equation (3.93), i.e. ŷ(k) = G(k)Δu(k) + y⁰(k). The relation obtained for the state-space approach, i.e. Eq. (7.46), may be easily transformed to Eq. (3.93) by equating G(k) = K(k)P. This means that in the state-space process description we may use the same prediction equations derived for the input-output case. The output free trajectory vector is defined by Eq. (3.88). In the MPC-NPSL algorithm, the consecutive elements of the free trajectory are calculated from Eqs. (7.12), (7.14) and (7.15), assuming that the increments of the manipulated variables, starting from the sampling instant k − 1, are all 0. Let x⁰(k + p|k) denote the element of the state free trajectory for the sampling instant k + p calculated at the instant k (a vector of length n_x). For the sampling instant k + 1, we obtain

x⁰(k + 1|k) = Ax̃(k) + Bu(k − 1) + ν(k)   (7.48)

and for the sampling instant k + p, we have

x⁰(k + p|k) = Ax⁰(k + p − 1|k) + Bu(k − 1) + ν(k)   (7.49)

where p = 2, ..., N. The output free trajectory for the sampling instant k + p calculated at the instant k (a vector of length n_y) is calculated recurrently from

y⁰(k + p|k) = g(Cx⁰(k + p|k)) + d(k)   (7.50)

where p = 1, ..., N. When the free output trajectory is calculated not from the full nonlinear Wiener model (2.84)–(2.85) but from its current linear approximation (7.34)–(7.35), we obtain the MPC-SSL scheme. In such a case, the state free trajectory is calculated from Eqs. (7.48) and (7.49), exactly in the same way as in the MPC-NPSL scheme (because the state equation that describes the Wiener model is linear). Conversely, in the MPC-SSL algorithm, the output free trajectory is calculated using the linearised static block of the model. In place of Eq. (7.50), we have

y⁰(k + p|k) = K(k)Cx⁰(k + p|k) + d(k)   (7.51)

where p = 1, ..., N. In order to estimate the unmeasured output disturbances, in place of Eq. (7.10), we use

d(k) = y(k) − K(k)Cx̃(k)   (7.52)
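The two free-trajectory variants can be sketched side by side (a hedged NumPy illustration; g and K are placeholders for the static block and its current linearisation):

```python
import numpy as np

def free_trajectories(A, B, C, g, K, x_est, u_prev, nu, d, N):
    """State free trajectory from Eqs. (7.48)-(7.49) (input frozen at
    u(k-1)); output free trajectory from the nonlinear Eq. (7.50)
    (MPC-NPSL) and from the linearised Eq. (7.51) (MPC-SSL)."""
    y0_npsl, y0_ssl = [], []
    x = x_est
    for _ in range(N):
        x = A @ x + B @ u_prev + nu          # Eqs. (7.48)/(7.49)
        y0_npsl.append(g(C @ x) + d)         # Eq. (7.50), MPC-NPSL
        y0_ssl.append(K @ (C @ x) + d)       # Eq. (7.51), MPC-SSL
    return np.array(y0_npsl), np.array(y0_ssl)
```

When the static block happens to be linear, both trajectories coincide; for a truly nonlinear g they differ, which is the source of the accuracy advantage of MPC-NPSL.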
As pointed out in Sect. 3.4, the computational complexity of calculating the nonlinear free trajectory is comparable with that of using a linearised model for this purpose, but the use of a nonlinear model is likely to result in better control quality. Hence, we should always prefer the MPC-NPSL scheme to the MPC-SSL one.

Optimisation

Taking into account that in both input-output and state-space process descriptions we use the same prediction formula (3.93), although its components are calculated differently, the resulting MPC-NPSL and MPC-SSL optimisation problems are of the quadratic optimisation type. The general formulations of these optimisation problems are the same when input-output and state-space process descriptions are used. If hard constraints are imposed on the controlled variables, we obtain the quadratic optimisation task (3.164). If soft constraints are used, the task is defined by Eq. (3.170). As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC optimisation task is solved in MATLAB by means of the quadprog function. All details are given in Sect. 3.4. Comparing the input-output and the state-space formulations, there are two main differences:

(a) the way a linear approximation of the model is successively calculated on-line,
(b) the way the free trajectory is calculated (in the state-space domain, both state and output free trajectories must be found; in the input-output approach, only the latter).
7.5 MPC-NPSL-P and MPC-SSL-P Algorithms in State-Space

In the state-space MPC-NPSL-P and MPC-SSL-P algorithms, all prediction equations derived in Sect. 7.4 for the MPC-NPSL and MPC-SSL algorithms can be used; it is only necessary to find the control increments Δu(k) for the currently calculated vector of decision variables, c(k). Using Eq. (1.56), Eq. (7.46) becomes

ŷ(k) = K(k)P Lc(k) + y⁰(k)   (7.53)

Equating G(k) = K(k)P, we easily obtain Eq. (3.176), i.e. ŷ(k) = G(k)Lc(k) + y⁰(k), that is used in the MPC-NPSL-P and MPC-SSL-P algorithms for input-output Wiener models. Let us note that in Eqs. (3.176) and (7.53) the predicted vector of the controlled variables, ŷ(k), is a linear function of the calculated decision vector, c(k). Because quadratic norms are used in the minimised MPC cost-function, we obtain quadratic optimisation problems. The general formulations of these optimisation problems are the same when input-output and state-space process descriptions are used. If hard constraints are imposed on the controlled variables, we obtain the quadratic optimisation task (3.177). If soft constraints are used, the task is defined by Eq. (3.182). As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC optimisation task is solved in MATLAB by means of the quadprog function. All details are given in Sect. 3.5.
7.6 MPC-NPLT Algorithm in State-Space

The general formulation of the MPC-NPLT algorithm presented in Sect. 3.6 for input-output Wiener models is the same in the state-space context. As previously, we use the prediction equation (3.243), according to which the predicted trajectory of the controlled variables is a linear function of the calculated future increments (1.3). This makes it possible to formulate the MPC-NPLT quadratic optimisation problems defined by Eqs. (3.244) and (3.249), for hard and soft output constraints, respectively. As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC-NPLT optimisation task is solved in MATLAB by means of the quadprog function. All details are given in Sect. 3.6. Because different Wiener model structures are used in the input-output and state-space approaches (more precisely: their linear dynamic parts are different), we must now discuss how the state-space models are used in the MPC-NPLT algorithm. The MPC-NPLT algorithm for the state-space MIMO Wiener model I is discussed in [9]. The description presented in this chapter extends that publication.
Firstly, for the assumed trajectory of the manipulated variables, u^traj(k), defined by Eq. (3.188), it is necessary to calculate the corresponding trajectory of the controlled variables, ŷ^traj(k), defined by Eq. (3.189). For this purpose we will use the derivations made for the MPC-NO algorithm and described in Sect. 7.2. From Eq. (7.12), the state variables predicted for the sampling instant k + 1 at the current instant k are

x̂^traj(k + 1|k) = Ax̃(k) + Bu^traj(k|k) + ν(k)   (7.54)

Using Eq. (7.14), the predictions for the sampling instant k + p are

x̂^traj(k + p|k) = Ax̂^traj(k + p − 1|k) + Bu^traj(k + p − 1|k) + ν(k)   (7.55)

where p = 2, ..., N. Using Eq. (7.15), the output predictions for the sampling instants k + p are

ŷ^traj(k + p|k) = g(Cx̂^traj(k + p|k)) + d(k)   (7.56)

where p = 1, ..., N. The state and output disturbance estimates, ν(k) and d(k), are calculated in the same way as in the MPC-NO algorithm, from Eqs. (7.8) and (7.10). In the following part of this chapter we will detail how the predicted trajectories are calculated for the three considered classes of the state-space Wiener model. Secondly, we will discuss how the entries of the matrix H(k) of derivatives of the predicted trajectory of the controlled variables with respect to the manipulated variables, defined by Eq. (3.199), are calculated.

Prediction Using State-Space SISO Wiener Model

At first, let us discuss the prediction details when the state-space SISO Wiener model depicted in Fig. 2.1 is used. From the state prediction equations (7.16) and (7.54), for the trajectory u^traj(k), the state predictions for k + 1 are calculated from
x̂_i^traj(k + 1|k) = Σ_{j=1}^{n_x} a_{i,j} x̃_j(k) + b_{i,1} u^traj(k|k) + ν_i(k)   (7.57)

where i = 1, ..., n_x. For the sampling instant k + p, using Eqs. (7.17) and (7.55), we have

x̂_i^traj(k + p|k) = Σ_{j=1}^{n_x} a_{i,j} x̂_j^traj(k + p − 1|k) + b_{i,1} u^traj(k + p − 1|k) + ν_i(k)   (7.58)

where i = 1, ..., n_x, p = 2, ..., N. The state disturbances, ν_i(k), are estimated from Eq. (7.19). From Eqs. (7.18) and (7.56), the output predictions for the trajectory u^traj(k) are obtained from

ŷ^traj(k + p|k) = g(v^traj(k + p|k)) + d(k)
               = g(Σ_{i=1}^{n_x} c_{1,i} x̂_i^traj(k + p|k)) + d(k)   (7.59)

where p = 1, ..., N. The output disturbance, d(k), is estimated from Eq. (7.20). In order to find the entries of the matrix H(k), we differentiate Eq. (7.59), which gives

∂ŷ^traj(k + p|k)/∂u^traj(k + r|k) = (dg(v^traj(k + p|k))/dv^traj(k + p|k)) (∂v^traj(k + p|k)/∂u^traj(k + r|k))   (7.60)

for all p = 1, ..., N, r = 0, ..., N_u − 1. From Eq. (2.88), we have

v^traj(k + p|k) = Σ_{i=1}^{n_x} c_{1,i} x̂_i^traj(k + p|k)   (7.61)

From Eq. (7.61), we also have

∂v^traj(k + p|k)/∂u^traj(k + r|k) = Σ_{i=1}^{n_x} c_{1,i} ∂x̂_i^traj(k + p|k)/∂u^traj(k + r|k)   (7.62)

From Eq. (7.57), we have

∂x̂_i^traj(k + 1|k)/∂u^traj(k + r|k) = b_{i,1} if r = 0, 0 otherwise   (7.63)

and from Eq. (7.58), we obtain

∂x̂_i^traj(k + p|k)/∂u^traj(k + r|k) = Σ_{j=1}^{n_x} a_{i,j} ∂x̂_j^traj(k + p − 1|k)/∂u^traj(k + r|k) + b_{i,1} ∂u^traj(k + p − 1|k)/∂u^traj(k + r|k)   (7.64)

The first partial derivatives on the right side of Eq. (7.64) are calculated recurrently, whereas the second ones may take only two values: 0 or 1, as defined by Eq. (3.202). In Chap. 3 we have shown that for input-output Wiener models the second derivatives on the right side of Eq. (7.60) are independent of the process operating point (i.e. the sampling instant k) and of the trajectories u^traj(k) and v^traj(k), which means that they may be precalculated off-line. We also use that approach in the case of state-space models. For the considered state-space SISO model, the derivatives ∂v(k + p|k)/∂u(k + r|k) are denoted by h(p, r) and Eq. (7.60) simplifies to Eq. (3.211) that is used in the input-output process description.
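The recursion of Eqs. (7.62)–(7.64) that yields the coefficients h(p, r) may be sketched as follows. This is a hedged SISO illustration; it assumes the derivatives are taken with respect to the future control values, which are held constant after the control horizon (our reading of Eq. (3.202), which is not reproduced in this chapter):

```python
import numpy as np

def h_coefficients(A, b, c, N, Nu):
    """h(p, r) = dv(k+p|k)/du(k+r|k) for the SISO linear part, computed
    recurrently from Eqs. (7.62)-(7.64); the input is held constant after
    the control horizon Nu, so du(k+p-1|k)/du(k+r|k) is 1 only when
    r = min(p - 1, Nu - 1) (cf. Eq. (3.202))."""
    nx = A.shape[0]
    h = np.zeros((N, Nu))
    dxdu = np.zeros((Nu, nx))   # dxdu[r] = d x(k+p|k) / d u(k+r|k)
    for p in range(1, N + 1):
        for r in range(Nu):
            du = 1.0 if r == min(p - 1, Nu - 1) else 0.0
            dxdu[r] = A @ dxdu[r] + b * du      # Eqs. (7.63)-(7.64)
            h[p - 1, r] = c @ dxdu[r]           # Eq. (7.62)
    return h
```

With Nu = 1 the column h(p, 0) reduces to the step response of the linear dynamic part, which is the connection to the step-response matrix discussed at the end of this section.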
Prediction Using State-Space MIMO Wiener Model I

Next, we will discuss prediction when the state-space MIMO Wiener model I depicted in Fig. 2.2 is used. From the state prediction equations (7.21) and (7.54), when the trajectory u^traj(k) is used, the state predictions for k + 1 are calculated from

x̂_i^traj(k + 1|k) = Σ_{j=1}^{n_x} a_{i,j} x̃_j(k) + Σ_{j=1}^{n_u} b_{i,j} u_j^traj(k|k) + ν_i(k)   (7.65)

where i = 1, ..., n_x. For the sampling instant k + p, using Eqs. (7.22) and (7.55), we have

x̂_i^traj(k + p|k) = Σ_{j=1}^{n_x} a_{i,j} x̂_j^traj(k + p − 1|k) + Σ_{j=1}^{n_u} b_{i,j} u_j^traj(k + p − 1|k) + ν_i(k)   (7.66)

where i = 1, ..., n_x, p = 2, ..., N. The state disturbances, ν_i(k), are estimated from Eq. (7.24). From Eqs. (7.23) and (7.56), the output predictions for the trajectory u^traj(k) are obtained from

ŷ_m^traj(k + p|k) = g_m(v_m^traj(k + p|k)) + d_m(k)
                 = g_m(Σ_{i=1}^{n_x} c_{m,i} x̂_i^traj(k + p|k)) + d_m(k)   (7.67)

where m = 1, ..., n_y, p = 1, ..., N. The output disturbance vector, d(k), is estimated from Eq. (7.25). Differentiating Eq. (7.67), we have

∂ŷ_m^traj(k + p|k)/∂u_n^traj(k + r|k) = (dg_m(v_m^traj(k + p|k))/dv_m^traj(k + p|k)) (∂v_m^traj(k + p|k)/∂u_n^traj(k + r|k))   (7.68)

for all m = 1, ..., n_y, n = 1, ..., n_u, p = 1, ..., N, r = 0, ..., N_u − 1. From Eq. (2.93)

v_m^traj(k + p|k) = Σ_{i=1}^{n_x} c_{m,i} x̂_i^traj(k + p|k)   (7.69)

for all m = 1, ..., n_y. From Eq. (7.69), we also have

∂v_m^traj(k + p|k)/∂u_n^traj(k + r|k) = Σ_{i=1}^{n_x} c_{m,i} ∂x̂_i^traj(k + p|k)/∂u_n^traj(k + r|k)   (7.70)

From Eq. (7.65), we obtain

∂x̂_i^traj(k + 1|k)/∂u_n^traj(k + r|k) = b_{i,n} if r = 0, 0 otherwise   (7.71)

and from Eq. (7.66)

∂x̂_i^traj(k + p|k)/∂u_n^traj(k + r|k) = Σ_{j=1}^{n_x} a_{i,j} ∂x̂_j^traj(k + p − 1|k)/∂u_n^traj(k + r|k) + b_{i,n} ∂u_n^traj(k + p − 1|k)/∂u_n^traj(k + r|k)   (7.72)

The first partial derivatives on the right side of Eq. (7.72) are calculated recurrently, whereas the second ones may take only two values: 0 or 1, as defined by Eq. (3.217). The second derivatives on the right side of Eq. (7.68) may be precalculated off-line; we also use this approach in the case of state-space models. For the considered state-space MIMO model I, the derivatives ∂v_m(k + p|k)/∂u_n(k + r|k) are denoted by h_{m,n}(p, r) and Eq. (7.68) simplifies to Eq. (3.222) that is used in the input-output process description.

Prediction Using State-Space MIMO Wiener Model II

Finally, we discuss prediction when the state-space MIMO Wiener model II depicted in Fig. 2.3 is used. Because the state equation of the linear dynamic part of the model is the same in the case of the MIMO Wiener models I and II, Eqs. (7.65) and (7.66) are used to calculate the predicted values of the state variables, x̂_i^traj(k + p|k), for all i = 1, ..., n_x and p = 1, ..., N. The state disturbances ν_i(k) are estimated from Eq. (7.24). From Eqs. (7.26) and (7.56), the output predictions for the trajectory u^traj(k) are obtained from
ŷ_m^traj(k + p|k) = g_m(v_1^traj(k + p|k), ..., v_{n_v}^traj(k + p|k)) + d_m(k)
                 = g_m(Σ_{i=1}^{n_x} c_{1,i} x̂_i^traj(k + p|k), ..., Σ_{i=1}^{n_x} c_{n_v,i} x̂_i^traj(k + p|k)) + d_m(k)   (7.73)

where m = 1, ..., n_y, p = 1, ..., N. The output disturbance vector, d(k), is estimated from Eq. (7.27). Differentiating Eq. (7.73), we have

∂ŷ_m^traj(k + p|k)/∂u_n^traj(k + r|k) = Σ_{s=1}^{n_v} (∂g_m(v_1^traj(k + p|k), ..., v_{n_v}^traj(k + p|k))/∂v_s^traj(k + p|k)) (∂v_s^traj(k + p|k)/∂u_n^traj(k + r|k))   (7.74)

where, from Eq. (2.98),

∂v_s^traj(k + p|k)/∂u_n^traj(k + r|k) = Σ_{i=1}^{n_x} c_{s,i} ∂x̂_i^traj(k + p|k)/∂u_n^traj(k + r|k)   (7.75)
Because the state equation is the same in the case of the state-space MIMO Wiener models I and II, the partial derivatives on the right side of Eq. (7.75) are calculated using Eqs. (7.71) and (7.72). The second derivatives on the right side of Eq. (7.74) may be precalculated offline. We also use that approach in the case of state-space models. For the considered ∂vs (k+ p|k) are denoted by h s,n ( p, r ) and state-space MIMO model II, the derivatives ∂u n (k+r |k) Eq. (7.74) simplifies to Eq. (3.228) that is used in the input-output process description. In order to determine the step-response coefficients for the state-space model formulations, let us consider the prediction equations obtained for the MPC-NPSL and MPC-SSL algorithms. In the case of the input-output models, we have shown that the predicted trajectory of the auxiliary variable over the prediction horizon, i.e. the vector v(k), with respect to the vector of the manipulated variables over the dv(k) = G J where control horizon, i.e. the vector u(k), is given by Eq. (3.208), i.e. du(k) the matrix J is defined by Eqs. (3.205) and (3.218) in the SISO and MIMO cases, respectively. The constant matrix G is comprised of step-response coefficients of the first part of the Wiener model, it is defined by Eqs. (3.95) and (3.144) in the SISO and MIMO cases, respectively. In the MIMO case, the submatrices S p that comprise the matrix G are defined by Eqs. (3.116) and (3.150) for the MIMO models I and II, respectively. 
Considering prediction methods used in MPC algorithms based on linear models [19], the predicted trajectory of the variables v is a multiplication of the constant step-response matrix of the linear dynamic part of the model and the vector of increments of the manipulated variables supplemented by a free trajectory of the variable v (7.76) v(k) = Gu(k) + v0 (k) We use a similar formula in the MPC-NPSL and MPC-SSL algorithms based on the input-output models for the output prediction of the whole Wiener model (Eq. (3.93)), in such a case, the time-varying matrix G(k) is necessary. Now, let us formulate similar prediction equations for the state-space models. Similarly to Eq. (7.46) used for output prediction of the whole Wiener model, we have v(k) = C Pu(k) + v0 (k)
(7.77)
The matrix P of dimensionality n x N × n u Nu is given by Eq. (7.44). The matrix C= diag(C, . . . , C) is of dimensionality N × n x N , n y N × n x N and n v N × n x N for the SISO, MIMO I and MIMO II Wiener models, respectively. Comparing Eqs. (7.76) and (7.77), we conclude that the step-response matrix in the state-space process description is calculated from G= CP (7.78) Hence, using Eq. (3.208) dv(k) J= C P J = G du(k)
(7.79)
7 MPC Algorithms Using State-Space Wiener Models
From Eq. (7.79), the coefficients h(p, r) or h_{m,n}(p, r) that comprise the elements of the matrix dv(k)/du(k) may easily be calculated offline for the state-space process description. The presented method may be used to calculate the step-response matrix (and coefficients) in the case of all three state-space Wiener models.
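As an illustration, the offline computation of Eq. (7.78) can be sketched as follows. Since Eq. (7.44) is not restated in this section, the block structure of P used below (its (p, r) block being Σ_{i=0}^{p−r} AⁱB) is the standard state-space prediction form and should be read as an assumption; the function name is likewise illustrative.

```python
import numpy as np

def step_response_matrix(A, B, C, N, Nu):
    """Offline computation of the step-response matrix G = C_bar @ P (Eq. (7.78)).

    Assumes the standard state-space prediction structure, in which the
    (p, r) block of P equals sum_{i=0}^{p-r} A^i B for p >= r (cf. Eq. (7.44));
    C_bar = diag(C, ..., C) repeats the output matrix of the linear dynamic
    part over the prediction horizon N.
    """
    nx, nu = B.shape
    # Cumulative sums S[j] = B + A B + ... + A^j B, j = 0..N-1
    S = np.zeros((N, nx, nu))
    Apow = np.eye(nx)
    S[0] = B
    for j in range(1, N):
        Apow = Apow @ A
        S[j] = S[j - 1] + Apow @ B
    # Assemble P (nx*N x nu*Nu) and C_bar (nv*N x nx*N)
    P = np.zeros((nx * N, nu * Nu))
    Cbar = np.kron(np.eye(N), C)
    for p in range(1, N + 1):
        for r in range(1, min(p, Nu) + 1):
            P[(p - 1) * nx:p * nx, (r - 1) * nu:r * nu] = S[p - r]
    return Cbar @ P  # blocks of G are the step-response coefficients h(p, r)
```

The blocks of the returned matrix G are the step-response coefficients h(p, r) (or h_{m,n}(p, r) in the MIMO case) used throughout this chapter.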
7.7 MPC-NPLT-P Algorithm in State-Space

In the state-space MPC-NPLT-P algorithm, we use the general prediction equation (3.256), derived for the input-output models. Of course, for the state-space models, the trajectory ŷ^traj(k) and the matrix H(k) are calculated taking into account the specificity of such models. All details are given in Sect. 7.6. Let us note that in Eq. (3.256) the predicted vector of the controlled variables, ŷ(k), is a linear function of the calculated decision vector, c(k). Because quadratic norms are used in the minimised MPC cost-function, we obtain quadratic optimisation problems. The general formulations of these optimisation problems are the same when input-output and state-space process descriptions are used. If hard constraints are imposed on the controlled variables, we obtain the quadratic optimisation task (3.257). If soft constraints are used, the optimisation task is defined by Eq. (3.262). As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC optimisation task is solved in MATLAB by means of the quadprog function. All details are given in Sect. 3.7.
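To make the structure of the resulting quadratic task concrete, the following sketch solves its unconstrained core; in MATLAB the constrained tasks (3.257) and (3.262) are handled by quadprog, and the weighting-matrix names used here (M, Lam), the direct quadratic penalty on c(k) and the function name are illustrative assumptions, not the book's exact formulation.

```python
import numpy as np

def npltp_qp_solution(Hc, y_traj, y_sp, M, Lam):
    """Unconstrained core of the MPC-NPLT-P quadratic optimisation (sketch).

    With the linearised prediction y_hat(k) = y_traj(k) + Hc c(k) of
    Eq. (3.256), a quadratic cost ||y_sp - y_hat||^2_M + ||c||^2_Lam
    yields the normal equations solved below; inequality constraints,
    handled by quadprog in MATLAB, are omitted here.
    """
    Hess = Hc.T @ M @ Hc + Lam          # Hessian of the quadratic task
    grad = Hc.T @ M @ (y_sp - y_traj)   # linear term
    return np.linalg.solve(Hess, grad)  # optimal decision vector c(k)
```

Because the prediction is linear in c(k) and the norms are quadratic, the Hessian is constant for a given linearisation, which is exactly why the task remains a quadratic programme.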
7.8 MPC-NPLPT Algorithm in State-Space

The general formulation of the MPC-NPLPT algorithm presented in Sect. 3.8 for input-output Wiener models is the same in the state-space context, i.e. a linear approximation of the predicted trajectory of the controlled variables is found repeatedly, once or a few times, at every sampling instant. The condition for continuation of the internal iterations is given by Eq. (3.279) and Eq. (3.280) defines the condition for their termination. The MPC-NPLPT algorithm for the state-space MIMO Wiener model I is shortly discussed in [9]; the description presented in this chapter extends that publication. The state-space MPC-NPLPT algorithm can be developed for alternative model configurations. For example, the state-space Hammerstein model with piecewise linear functions used in the nonlinear static block is considered in [23]. For presentation of the MPC-NPLPT algorithm, we use the derivations from Sect. 7.6 obtained for the MPC-NPLT control method. We will detail next how to calculate the nonlinear predicted trajectory and the matrix of derivatives of the predicted trajectory of the controlled variables with respect to the trajectory of the manipulated variables. The only difference is the fact that at each sampling instant
trajectory linearisation and quadratic optimisation may be repeated a few times. It means that for the state-space MPC-NPLPT algorithm all equations derived for the state-space NPLT scheme may be used, but we have to replace x̂_i^traj(k + p|k) by x̂_i^{t−1}(k + p|k), v_m^traj(k + p|k) by v_m^{t−1}(k + p|k), ŷ_m^traj(k + p|k) by ŷ_m^{t−1}(k + p|k), ŷ_m(k + p|k) by ŷ_m^t(k + p|k) and H(k) by H^t(k). In order to determine the general state prediction equations, we use Eqs. (7.54)–(7.55), which give

x̂^{t−1}(k + 1|k) = A x̃(k) + B u^{t−1}(k|k) + ν(k)   (7.80)
Using Eq. (7.14), the predictions for the sampling instant k + p are

x̂^{t−1}(k + p|k) = A x̂^{t−1}(k + p − 1|k) + B u^{t−1}(k + p − 1|k) + ν(k)   (7.81)

where p = 2, …, N. From Eq. (7.56), the output predictions are

ŷ^{t−1}(k + p|k) = g(C x̂^{t−1}(k + p|k)) + d(k)   (7.82)
where p = 1, …, N. The state and disturbance estimations, ν(k) and d(k), respectively, are calculated in the same way as in the MPC-NO algorithm, from Eqs. (7.8) and (7.10), respectively. Let us shortly describe how the predicted trajectories and the entries of the matrix H^t(k) are calculated for the three considered classes of the state-space Wiener model.

Prediction Using State-Space SISO Wiener Model

From the state prediction equations (7.16) and (7.80), for the trajectory u^{t−1}(k), the state predictions are

x̂_i^{t−1}(k + 1|k) = Σ_{j=1}^{n_x} a_{i,j} x̃_j(k) + b_{i,1} u^{t−1}(k|k) + ν_i(k)   (7.83)
for i = 1, …, n_x. From Eqs. (7.17) and (7.81), we have

x̂_i^{t−1}(k + p|k) = Σ_{j=1}^{n_x} a_{i,j} x̂_j^{t−1}(k + p − 1|k) + b_{i,1} u^{t−1}(k + p − 1|k) + ν_i(k)   (7.84)

for i = 1, …, n_x, p = 2, …, N. The state disturbances ν_i(k) are estimated from Eq. (7.19). From Eqs. (7.18) and (7.82), the output predictions for the trajectory u^{t−1}(k) are
ŷ^{t−1}(k + p|k) = g(v^{t−1}(k + p|k)) + d(k)
              = g(Σ_{i=1}^{n_x} c_{1,i} x̂_i^{t−1}(k + p|k)) + d(k)   (7.85)
where p = 1, …, N. The output disturbance is estimated from Eq. (7.20). In order to find the entries of the matrix H^t(k), it is best to use Eq. (3.283). The coefficients h(p, r) are calculated using Eq. (7.79).

Prediction Using State-Space MIMO Wiener Model I

From the state prediction equations (7.21) and (7.80), when the trajectory u^{t−1}(k) is used, the state predictions are

x̂_i^{t−1}(k + 1|k) = Σ_{j=1}^{n_x} a_{i,j} x̃_j(k) + Σ_{j=1}^{n_u} b_{i,j} u_j^{t−1}(k|k) + ν_i(k)   (7.86)
for i = 1, …, n_x. From Eqs. (7.22) and (7.81), we have

x̂_i^{t−1}(k + p|k) = Σ_{j=1}^{n_x} a_{i,j} x̂_j^{t−1}(k + p − 1|k) + Σ_{j=1}^{n_u} b_{i,j} u_j^{t−1}(k + p − 1|k) + ν_i(k)   (7.87)

for i = 1, …, n_x, p = 2, …, N. The state disturbances ν_i(k) are estimated from Eq. (7.24). From Eqs. (7.23) and (7.82), the output predictions for the trajectory u^{t−1}(k) are

ŷ_m^{t−1}(k + p|k) = g_m(v_m^{t−1}(k + p|k)) + d_m(k)
               = g_m(Σ_{i=1}^{n_x} c_{m,i} x̂_i^{t−1}(k + p|k)) + d_m(k)   (7.88)

where m = 1, …, n_y, p = 1, …, N. The output disturbance vector is estimated from Eq. (7.25). In order to find the entries of the matrix H^t(k), it is best to use Eq. (3.286). The coefficients h_{m,n}(p, r) are calculated using Eq. (7.79).

Prediction Using State-Space MIMO Wiener Model II

Because the state equation of the linear dynamic part of the model is the same in the case of the MIMO Wiener models I and II, Eqs. (7.86) and (7.87) are used to calculate the predicted values of the state variables, x̂_i^{t−1}(k + p|k), for all i = 1, …, n_x and p = 1, …, N. From Eqs. (7.26) and (7.82), the output predictions for the trajectory u^{t−1}(k) are obtained from
ŷ_m^{t−1}(k + p|k) = g_m(v_1^{t−1}(k + p|k), …, v_{n_v}^{t−1}(k + p|k)) + d_m(k)
               = g_m(Σ_{i=1}^{n_x} c_{1,i} x̂_i^{t−1}(k + p|k), …, Σ_{i=1}^{n_x} c_{n_v,i} x̂_i^{t−1}(k + p|k)) + d_m(k)   (7.89)

where m = 1, …, n_y, p = 1, …, N. The output disturbance vector is estimated from Eq. (7.27). In order to find the entries of the matrix H^t(k), it is best to use Eq. (3.288). The coefficients h_{s,n}(p, r) are calculated using Eq. (7.79).

Optimisation

Taking into account the prediction equation (3.278), the predicted trajectory of the controlled variables is a linear function of the calculated future increments (1.3). It makes it possible to formulate the MPC-NPLPT quadratic optimisation problems defined by Eqs. (3.297) and (3.302), for hard and soft output constraints, respectively. As far as the MATLAB implementation is concerned, the general structures of the vectors and matrices which define the constraints are the same in both input-output and state-space formulations. The MPC-NPLPT optimisation task is solved in MATLAB by means of the quadprog function. All details are given in Sect. 3.8.
7.9 MPC-NPLPT-P Algorithm in State-Space

In the state-space MPC-NPLPT-P algorithm, we use the general prediction equation (3.312), derived for the input-output models. Of course, for the state-space models, the trajectory ŷ^t(k) and the matrix H^t(k) are calculated taking into account the specificity of such models. All details are given in Sect. 7.8. The general formulations of the resulting MPC-NPLPT-P optimisation problems are the same when input-output and state-space process descriptions are used. If hard constraints are imposed on the controlled variables, we obtain the quadratic optimisation task (3.316). If soft constraints are used, the optimisation task is defined by Eq. (3.321). The MPC optimisation task is solved in MATLAB by means of the quadprog function. All details are given in Sect. 3.9.
7.10 State Estimation

Because, typically, we assume that the real state vector, x(k), cannot be measured, it is necessary to calculate the estimated state vector, x̃(k), at each sampling instant. Different techniques may be used for this purpose [11]. Let us shortly describe two approaches: the Luenberger observer and the Extended Kalman filter.
Taking into account the specific nature of the state-space Wiener model, described by Eqs. (2.84)–(2.85), the Luenberger observer [11] is characterised by the following equation

x̃(k) = A x̃(k − 1) + B u(k − 1) + L(y(k) − g(C x̃(k − 1)))   (7.90)

where L = [l₁ … l_{n_x}]ᵀ is the vector of parameters, easily calculated for given roots of the observer from the model matrices A and C. As an alternative, the Extended Kalman filter [18] can also be used. For the considered Wiener system (2.84)–(2.85), it is described by the equations
(7.91)
P(k|k − 1) = A P(k − 1|k − 1) A + Q(k)
(7.92)
y˜ (k) = y(k) − g(C x(k|k ˜ − 1))
(7.93)
T
S(k) = H EKF (k) P(k|k − K EKF (k) = P(k|k −
1)H TEKF (k) −1
1)H TEKF (k)S
+ R(k)
(k)
x(k|k) ˜ = x(k|k ˜ − 1) + K EKF (k) y˜ (k) P(k|k) = (I − K EKF (k)H EKF (k)) P(k|k − 1)
(7.94) (7.95) (7.96) (7.97)
The covariance matrices Q(k) = E[w(k)wᵀ(k)] and R(k) = E[v(k)vᵀ(k)] are of dimensionality n_x × n_x and n_y × n_y, respectively, where w(k) and v(k) are the process and observation (measurement) noises, respectively. They are assumed to be zero-mean, Gaussian and uncorrelated. The observation matrix H_EKF(k) depends on the chosen model type. For the SISO model depicted in Fig. 2.1, using Eq. (2.87), we have the matrix of dimensionality 1 × n_x

H_EKF(k) = ∂g(v(x(k)))/∂x(k)|_{x(k)=x̃(k|k−1)} = [ (dg₁(v₁(k))/dv₁(k)) c_{1,1}  ⋯  (dg₁(v₁(k))/dv₁(k)) c_{1,n_x} ]   (7.98)
where, from Eq. (2.88),

v₁(k) = Σ_{i=1}^{n_x} c_{1,i} x̃_i(k − 1)   (7.99)

For the MIMO Wiener model I depicted in Fig. 2.2, using Eq. (2.92), we have the matrix of dimensionality n_y × n_x
H_EKF(k) = ∂g(v(x(k)))/∂x(k)|_{x(k)=x̃(k|k−1)}, whose (m, i) entry is (dg_m(v_m(k))/dv_m(k)) c_{m,i} for m = 1, …, n_y, i = 1, …, n_x   (7.100)

where, from Eq. (2.93),

v_m(k) = Σ_{i=1}^{n_x} c_{m,i} x̃_i(k − 1)   (7.101)
where m = 1, …, n_y. For the MIMO Wiener model II depicted in Fig. 2.3, using Eq. (2.97), we have the matrix of dimensionality n_v × n_x

H_EKF(k) = ∂g(v(x(k)))/∂x(k)|_{x(k)=x̃(k|k−1)}, whose (m, i) entry is (dg_m(v_m(k))/dv_m(k)) c_{m,i} for m = 1, …, n_v, i = 1, …, n_x   (7.102)

and the values of v_m(k) for m = 1, …, n_v are given by Eq. (7.101).
References

1. Al Seyab, R.K., Cao, Y.: Nonlinear model predictive control for the ALSTOM gasifier. J. Process Control 16, 795–808 (2006)
2. Arefi, M.M., Montazeri, A., Poshtan, J., Jahed-Motlagh, M.R.: Wiener-neural identification and predictive control of a more realistic plug-flow tubular reactor. Chem. Eng. J. 138, 274–282 (2008)
3. Cervantes, A.L., Agamennoni, O.E., Figueroa, J.L.: A nonlinear model predictive control system based on Wiener piecewise linear models. J. Process Control 13, 655–666 (2003)
4. Gómez, J.C., Jutan, A., Baeyens, E.: Wiener model identification and predictive control of a pH neutralisation process. Proc. IEE Part D Control Theory Appl. 151, 329–338 (2004)
5. Gonzalez, A.H., Adam, E.J., Marchetti, J.L.: Conditions for offset elimination in state space receding horizon controllers: a tutorial analysis. Automatica 47, 2184–2194 (2008)
6. Jeong, B.G., Yoo, K.Y., Rhee, H.K.: Nonlinear model predictive control using a Wiener model of a continuous methyl methacrylate polymerization reactor. Ind. Eng. Chem. Res. 40, 5968–5977 (2001)
7. Ławryńczuk, M.: Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach. Studies in Systems, Decision and Control, vol. 3. Springer, Cham (2014)
8. Ławryńczuk, M.: Nonlinear state-space predictive control with on-line linearisation and state estimation. Int. J. Appl. Math. Comput. Sci. 25, 833–847 (2015)
9. Ławryńczuk, M., Tatjewski, P.: Offset-free state-space nonlinear predictive control for Wiener systems. Inf. Sci. 511, 127–151 (2020)
10. Li, S., Li, Y.: Model predictive control of an intensified continuous reactor using a neural network Wiener model. Neurocomputing 185, 93–104 (2016)
11. Luenberger, D.G.: Observers for multivariable systems. IEEE Trans. Autom. Control 11, 190–197 (1966)
12. Maeder, U., Borrelli, F., Morari, M.: Linear offset-free model predictive control. Automatica 45, 2214–2222 (2009)
13. Maeder, U., Morari, M.: Offset-free reference tracking with model predictive control. Automatica 46, 1469–1476 (2010)
14. Morari, M., Maeder, U.: Nonlinear offset-free model predictive control. Automatica 48, 2059–2067 (2012)
15. Muske, K.R., Badgwell, T.A.: Disturbance modeling for offset-free linear model predictive control. J. Process Control 12, 617–632 (2002)
16. Pannocchia, G., Rawlings, J.B.: Disturbance models for offset-free model predictive control. AIChE J. 49, 426–437 (2003)
17. Shafiee, G., Arefi, M.M., Jahed-Motlagh, M.R., Jalali, A.A.: Nonlinear predictive control of a polymerization reactor based on piecewise linear Wiener model. Chem. Eng. J. 143, 282–292 (2008)
18. Simon, D.: Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches. Wiley, Hoboken (2006)
19. Tatjewski, P.: Advanced Control of Industrial Processes, Structures and Algorithms. Springer, London (2007)
20. Tatjewski, P.: Disturbance modeling and state estimation for offset-free predictive control with state-space process models. Int. J. Appl. Math. Comput. Sci. 24, 313–323 (2014)
21. Tatjewski, P.: Offset-free nonlinear model predictive control with state-space process models. Arch. Control Sci. 27, 595–615 (2017)
22. Tatjewski, P., Ławryńczuk, M.: Algorithms with state estimation in linear and nonlinear model predictive control. Comput. Chem. Eng. 143, 107065 (2020)
23. Zhang, J., Chin, K.S., Ławryńczuk, M.: Nonlinear model predictive control based on piecewise linear Hammerstein models. Nonlinear Dyn. 92, 1001–1021 (2018)
Chapter 8
MPC of State-Space Benchmark Wiener Processes
Abstract This chapter thoroughly discusses implementation details and simulation results of various MPC algorithms introduced in the previous chapter applied to state-space benchmark processes. One SISO process and two MIMO ones are considered: the first MIMO benchmark has two inputs and two outputs, the second one has as many as ten inputs and two outputs. The efficiency of different methods allowing for offset-free control is considered. All algorithms are compared in terms of control quality and computational time.
8.1 The State-Space SISO Process

8.1.1 Description of the State-Space SISO Process

Let us consider the SISO process which is a state-space representation of the Wiener system introduced in Sect. 4.2. The corresponding matrices of the model (2.84)–(2.85) are

A = [ 1.4138  −6.0650 × 10⁻¹ ; 1  0 ],  B = [ 1 ; 0 ],  C = [ 1.0440 × 10⁻¹  8.8300 × 10⁻² ]   (8.1)

The nonlinear static block is the same as in the input-output description (Eq. (4.4)). The steady-state characteristic y(u) of the whole Wiener system is depicted in Fig. 4.1.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_8
8.1.2 Implementation of MPC Algorithms for the State-Space SISO Process

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The MPC-SSL and MPC-NPSL algorithms.
3. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
4. The MPC-NO algorithm.
In all listed algorithms, the discussed offset-free prediction method is used. Additionally, two MPC-NO algorithms with the classical augmented state disturbance model are considered:
1. The MPC-NOaug1 algorithm: the disturbance estimation is placed in the first state equation.
2. The MPC-NOaug2 algorithm: the disturbance estimation is placed in the second state equation.
In general, all universal equations presented in Chap. 7 are used. Additional specific relations that depend on the static parts of the model are the same as for the input-output version of the process (Eqs. (4.5)–(4.8)).
8.1.3 MPC of the State-Space SISO Process

The parameters of all compared MPC algorithms are the same as in the input-output SISO process case (Sect. 4.2): N = 10, N_u = 3, λ = 0.25; the constraints imposed on the manipulated variable are u^min = −2.5, u^max = 2.5. Similarly to the input-output SISO benchmark, for the state-space one we also consider two cases: in the first part of simulations, there are no modelling errors and no disturbances, whereas in the second part, robustness to external disturbances is evaluated. In order to demonstrate the advantages of the prediction model used for offset-free control, a comparison with the method using the classical augmented state disturbance model is made. In the first part of simulations, the model is perfect (no modelling errors) and the process is not affected by any disturbances. Simulation results of the LMPC algorithm are given in Fig. 8.1; three versions of the algorithm are verified for linear models from three different operating points (the same as in the input-output SISO process case). Unfortunately, due to process nonlinearity, the LMPC algorithm does not lead to good control quality. It is interesting to note that the algorithm results in different trajectories in comparison with those obtained for the input-output SISO case depicted in Fig. 4.2. Next, we consider simulation results of the MPC-NPSL and MPC-SSL algorithms. They are shown in Fig. 8.2; the trajectories obtained when the reference MPC-NO
Fig. 8.1 The state-space SISO process: simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
0
-1
-2 0
10
20
30
40
50
60
70
80
10
20
30
40
50
60
70
80
2 1 0 -1 -2 0
Fig. 8.2 The state-space SISO process: simulation results of the MPC-NO, MPC-NPSL and MPCSSL algorithms
Table 8.1 The state-space SISO process: comparison of all considered MPC algorithms in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
scheme is used are also given for comparison. The MPC-NPSL scheme gives good control. Depending on the operating point, the MPC-SSL algorithm is slightly slower or gives a larger overshoot. When compared with the results obtained for the input-output SISO case, depicted in Fig. 4.3, it is interesting to note that the MPC-SSL scheme gives similar trajectories in both cases; the differences are not significant. The MPC-NPSL and MPC-NO schemes give the same results for both model representations. Simulation results of the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms are not shown since the results for the state-space SISO process are the same as for their versions in the input-output case shown in Fig. 4.5. Let us recall that the algorithms with one on-line trajectory linearisation at each sampling instant are better than the MPC schemes with model linearisation and that the MPC-NPLPT method gives practically the same results as the MPC-NO approach. All considered MPC algorithms are compared in Table 8.1 in terms of the performance criteria E₂ and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT algorithm and the calculation time. The control accuracy of the LMPC and MPC-SSL approaches is different in comparison with the input-output SISO case (Table 4.1); all other algorithms give the same results. As far as the computational time is concerned, as always, the more advanced the algorithm, the longer the time required. It is important that the MPC algorithms with model or trajectory linearisation require only a fraction of the calculation time necessary in the MPC-NO scheme. Furthermore, it is interesting to analyse the relative computational time of all tested MPC algorithms for both input-output and state-space representations of the SISO process, for different values of the control horizon. The results are given in
Table 8.2 The input-output SISO process versus the state-space SISO process: comparison of all considered MPC algorithms in terms of the calculation time (%) for different control horizons, N = 10
Table 8.2. All results are scaled in such a way that the computational time of the MPC-NO algorithm based on the state-space Wiener model, for the default horizons (N = 10, N_u = 3), corresponds to 100%. Of course, the results depend on the software implementation of the algorithms, but, in our case, it turns out that the MPC-NO algorithm for the state-space version of the process needs more calculation time than its version for the input-output domain. This explains why the computation times of MPC algorithms with linearisation (in relation to the computation time of the MPC-NO scheme) are longer in Table 4.1 than in Table 8.1. On the other hand, the relative relations of the calculation times of the consecutive MPC algorithms are very similar in the case of both model representations, i.e. the simple MPC-SSL and MPC-NPSL schemes need less time than the advanced MPC-NPLT and MPC-NPLPT methods. In the second part of simulations, we assume that the set-point is constant (y^sp(k) = 0 ∀ k), but the process is affected by disturbances. In all MPC algorithms, the ideal Wiener model is used, whereas the simulated process is
x(k + 1) = A x(k) + dˣ(k) + B(u(k) + dᵘ(k))   (8.2)

y(k) = g(C x(k)) + d^y(k)   (8.3)

where x(k) = [x₁(k); x₂(k)] and dˣ(k) = [d₁ˣ(k); d₂ˣ(k)]. The disturbances are

d₁ˣ(k) = −0.2H(k − 3),  d₂ˣ(k) = 0.3H(k − 20)
dᵘ(k) = 0.5H(k − 60),  d^y(k) = 0.4H(k − 40)   (8.4)

where the Heaviside step function is

H(k) = 0 if k < 0,  H(k) = 1 if k ≥ 0   (8.5)
In the state-space domain, we not only consider the unmeasured disturbance that acts on the process output, as is the case in the input-output approach, but we also take into account state and input disturbances. Simulation results of two simple MPC algorithms with on-line model linearisation, i.e. the MPC-SSL and MPC-NPSL strategies, are presented in Fig. 8.3; the trajectories obtained in the MPC-NO one are given for reference. Although the MPC-NPSL
Fig. 8.3 The state-space SISO process (the set-point is constant, the unmeasured disturbances act on the process): simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms
Fig. 8.4 The state-space SISO process (the set-point is constant, the unmeasured disturbances act on the process): simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms
algorithm works correctly in set-point tracking (and in the disturbance-free case), as depicted in Fig. 8.2, for disturbance compensation the default parameter λ = 0.25 leads to very poor performance and must be increased to λ = 0.5. The MPC-SSL scheme works correctly, but there are some differences from the ideal trajectory obtained when the MPC-NO scheme is used. Figure 8.4 compares the results obtained for three MPC algorithms with on-line trajectory linearisation, i.e. the MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT strategies; the trajectories of the MPC-NO approach are also given for reference. Two observations may be made. Firstly, the MPC-NPLPT algorithm gives the same trajectory as the MPC-NO one, which is also observed in the set-point tracking task without disturbances (Table 8.1). Secondly, in the case of the disturbances, the algorithms with one linearisation at each sampling instant, i.e. the MPC-NPLT1 and MPC-NPLT2 ones, give different trajectories than those of the MPC-NO algorithm, whereas very small differences are observed in the set-point tracking task. Figure 8.5 compares the real and estimated state trajectories in the MPC-NPLPT algorithm. It is interesting to notice significant differences between them. Nevertheless, thanks to using the discussed prediction and disturbance model, offset-free control is assured. Figure 8.6 compares the performance of the MPC-NPLPT scheme and two versions of the MPC-NO algorithm with the augmented state disturbance model, i.e. MPC-NOaug1 and MPC-NOaug2. Although the MPC-NOaug1 scheme is faster than the
Fig. 8.5 The state-space SISO process (the set-point is constant, the unmeasured disturbances act on the process): the real versus estimated state trajectories in the MPC-NPLPT algorithm
Fig. 8.6 The state-space SISO process (the set-point is constant, the unmeasured disturbances act on the process): simulation results of the MPC-NPLPT and two versions of the MPC-NOaug algorithms
Table 8.3 The state-space SISO process (the unmeasured disturbances act on the process): comparison of all considered MPC algorithms in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
MPC-NOaug2 one, the discussed MPC-NPLPT algorithm is the fastest one for each disturbance step. The obtained result corresponds with the general recommendations for the augmented state approach, i.e. the disturbance should be placed in the same equation in which the manipulated variable is present. The process matrix B (Eq. (8.1)) indicates that the disturbance should be placed in the first state equation. All the above observations are confirmed by the numerical values of the performance criteria E₂ and E_MPC-NO detailed in Table 8.3. Similarly to the set-point tracking case (Table 8.1), the calculation time of the discussed MPC algorithms with linearisation is only a fraction of that necessary in the MPC-NO one.
8.2 The State-Space MIMO Process A with Two Inputs and Two Outputs: Model I

8.2.1 Description of the State-Space MIMO Process A

Let us consider the process which is a state-space representation of the MIMO Wiener system A introduced in Sect. 4.4. The corresponding matrices of the model (2.84)–(2.85) are
A = [ 1.2131  −7.3576 × 10⁻¹  0  0 ; 5 × 10⁻¹  0  0  0 ; 0  0  1.3231  −4.3460 × 10⁻¹ ; 0  0  1  0 ]   (8.6)

B = [ 5 × 10⁻¹  0 ; 0  0 ; 0  5 × 10⁻¹ ; 0  0 ],
C = [ 5.1691 × 10⁻¹  1.0338 × 10⁻¹  3.6082 × 10⁻¹  7.2163 × 10⁻² ; 3.8455 × 10⁻²  4.8069 × 10⁻¹  5.0774 × 10⁻²  6.3467 × 10⁻¹ ]   (8.7)
The nonlinear static blocks (Eq. 2.14) are the same as in the input-output description (Eqs. (4.14)–(4.15)). The steady-state characteristics y1 (u 1 , u 2 ) and y2 (u 1 , u 2 ) of the whole Wiener system are depicted in Fig. 4.26.
8.2.2 Implementation of MPC Algorithms for the State-Space MIMO Process A

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The MPC-SSL and MPC-NPSL algorithms.
3. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
4. The MPC-NO algorithm.
In all listed algorithms, the discussed offset-free prediction method is used. Additionally, the MPC-NOaug algorithm with the classical augmented state disturbance model is considered. The additional disturbances are added to the manipulated variables (the best disturbance location in the model used). In general, all universal equations presented in Chap. 7 are used. Additional specific relations that depend on the static parts of the model are the same as for the input-output version of the process (Eqs. (4.16)–(4.23)).
8.2.3 MPC of the State-Space MIMO Process A

The parameters of all compared MPC algorithms are the same as in the input-output MIMO process A case (Sect. 4.4): N = 10, N_u = 3, μ_{p,1} = 1 and μ_{p,2} = 5 for p = 1, …, N, λ_{p,1} = λ_{p,2} = 0.5 for p = 0, …, N_u − 1; the constraints imposed on the manipulated variables are u₁^min = u₂^min = −1.5, u₁^max = u₂^max = 1.5.
In the first part of simulations, the model is perfect and the process is not affected by any disturbances. Simulation results of the LMPC algorithm are given in Fig. 8.7; three versions of the algorithm are verified for linear models from three different operating points (the same as in the input-output MIMO process A). The LMPC algorithm does not give satisfying control quality. Similarly to the SISO benchmark, the algorithm gives different results for the input-output (Fig. 4.27) and state-space process descriptions. Simulation results of the MPC-NPSL and MPC-SSL algorithms are depicted in Fig. 8.8; the trajectories obtained for the MPC-NO scheme are also given for comparison. The MPC-NPSL scheme gives slightly worse control than the MPC-SSL one. Let us recall that similar, quite good performance of the discussed MPC algorithms with model linearisation is observed for the input-output MIMO process A, but the MPC-SSL scheme requires an increase of the weights λ_{p,1} and λ_{p,2}. Furthermore, taking into account Fig. 4.28, we may easily note that the MPC-NPSL and MPC-NO schemes give the same results for the input-output MIMO process A and the discussed state-space MIMO process A. Simulation results of the MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms are not shown since the results for the state-space MIMO process A are the same as for their versions for the input-output process description shown in Fig. 4.30. In general, all three algorithms work correctly, and the results of the MPC-NPLPT scheme are practically the same as those obtained for the MPC-NO method. All considered MPC algorithms are compared in Table 8.4 in terms of the performance criteria E₂ and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme as well as the calculation time.
When compared with the results obtained for the input-output MIMO case A (Table 4.7), only the LMPC, MPC-SSL and MPC-NPSL algorithms give slightly different results; all other algorithms give the same results. In general, the more advanced the way the model is used for prediction, the longer the computational time. Of course, the MPC algorithms with model or trajectory linearisation require only a fraction of the calculation time necessary in the MPC-NO scheme. Table 8.5 compares the relative computational time of all tested MPC algorithms for both input-output and state-space representations of the MIMO process A, for different values of the control horizon. Similarly to the state-space SISO process (Table 8.2), the MPC-NO algorithm for the state-space version of the MIMO process A is the most computationally demanding. Moreover, the relative relations of the calculation times of the consecutive MPC algorithms are very similar in the case of both model representations, i.e. the more advanced the MPC scheme, the longer the required time.
8 MPC of State-Space Benchmark Wiener Processes
Fig. 8.7 The state-space MIMO process A: simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
8.2 The State-Space MIMO Process A with Two Inputs and Two Outputs: Model I
Fig. 8.8 The state-space MIMO process A: simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms
Table 8.4 The state-space MIMO process A: comparison of all considered MPC algorithms in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
In the second part of simulations, we assume that the set-points are constant (y_1^{sp}(k) = y_2^{sp}(k) = 0 ∀ k), but the process is affected by disturbances. In all MPC algorithms the ideal Wiener model is used, whereas the simulated process is

$$\begin{bmatrix} x_1(k+1)\\ x_2(k+1)\\ x_3(k+1)\\ x_4(k+1)\end{bmatrix} = A\begin{bmatrix} x_1(k)\\ x_2(k)\\ x_3(k)\\ x_4(k)\end{bmatrix} + \begin{bmatrix} d_1^{x}(k)\\ d_2^{x}(k)\\ d_3^{x}(k)\\ d_4^{x}(k)\end{bmatrix} + B\left(\begin{bmatrix} u_1(k)\\ u_2(k)\end{bmatrix} + \begin{bmatrix} d_1^{u}(k)\\ d_2^{u}(k)\end{bmatrix}\right) \quad (8.8)$$

$$\begin{bmatrix} y_1(k)\\ y_2(k)\end{bmatrix} = g(C x(k)) + \begin{bmatrix} d_1^{y}(k)\\ d_2^{y}(k)\end{bmatrix} \quad (8.9)$$

The disturbances are

d_1^x(k) = −0.4H(k − 5),   d_2^x(k) = 1H(k − 25)
d_3^x(k) = −0.6H(k − 45),  d_4^x(k) = 0.2H(k − 55)
d_1^u(k) = −0.3H(k − 105), d_2^u(k) = −0.4H(k − 125)
d_1^y(k) = −0.3H(k − 75),  d_2^y(k) = 0.1H(k − 95)   (8.10)
where the Heaviside step function H(k) is given by Eq. (8.5). Similarly to the state-space SISO process, also in the MIMO case A we consider unmeasured disturbances that act not only on the process output but also on the state and input variables.
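The disturbed process (8.8)–(8.10) can be sketched in a few lines. The matrices A, B, C and the static nonlinearity g of the state-space MIMO process A are defined earlier in the book and are not repeated here, so small placeholder matrices and tanh are used below; only the Heaviside-step disturbance signals follow Eq. (8.10) exactly.

```python
import numpy as np

def H(k):
    """Discrete Heaviside step: H(k) = 1 for k >= 0, else 0."""
    return 1.0 if k >= 0 else 0.0

def disturbances(k):
    """Step disturbances of Eq. (8.10) acting on states, inputs and outputs."""
    dx = np.array([-0.4 * H(k - 5),
                    1.0 * H(k - 25),
                   -0.6 * H(k - 45),
                    0.2 * H(k - 55)])
    du = np.array([-0.3 * H(k - 105),
                   -0.4 * H(k - 125)])
    dy = np.array([-0.3 * H(k - 75),
                    0.1 * H(k - 95)])
    return dx, du, dy

# Illustrative placeholders -- NOT the process A matrices from the book.
A = 0.5 * np.eye(4)
B = 0.1 * np.ones((4, 2))
C = 0.25 * np.ones((2, 4))
g = np.tanh  # placeholder static nonlinearity

def step(x, u, k):
    """One step of the disturbed process, Eqs. (8.8)-(8.9)."""
    dx, du, dy = disturbances(k)
    x_next = A @ x + dx + B @ (u + du)
    y = g(C @ x) + dy
    return x_next, y

x = np.zeros(4)
for k in range(150):
    x, y = step(x, np.zeros(2), k)
```

Running the loop with the controller in place of the zero input reproduces the disturbance scenario simulated in this section.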
Table 8.5 The input-output MIMO process A versus the state-space MIMO process A: comparison of all considered MPC algorithms in terms of the calculation time (%) for different control horizons, N = 10
The trajectories obtained in the MPC-NPSL and MPC-SSL algorithms are depicted in Fig. 8.9; the results for the MPC-NO one are given for reference. For the simple MPC algorithms with on-line model linearisation, there are some differences when compared with the MPC-NO one. The trajectories for the MPC algorithms with on-line trajectory linearisation, i.e. the MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT strategies, are given in Fig. 8.10. The results are very similar to those of the reference MPC-NO approach. Figure 8.11 compares the real and estimated state trajectories in the MPC-NPLPT algorithm. Since the process is affected by disturbances, significant differences are observed. Nevertheless, in all compared MPC algorithms, no steady-state error is present due to the use of the offset-free prediction. Figure 8.12 compares the performance of the MPC-NPLPT scheme and the MPC-NOaug algorithm with the classical augmented state disturbance model. Similarly to the state-space SISO process (Fig. 8.6), for each disturbance step, the discussed MPC-NPLPT algorithm is faster.
Fig. 8.9 The state-space MIMO process A (the set-point is constant, the unmeasured disturbances act on the process): simulation results of the MPC-NO, MPC-NPSL and MPC-SSL algorithms
Fig. 8.10 The state-space MIMO process A (the set-point is constant, the unmeasured disturbances act on the process): simulation results of the MPC-NO, MPC-NPLPT, MPC-NPLT1 and MPC-NPLT2 algorithms
Fig. 8.11 The state-space MIMO process A (the set-point is constant, the unmeasured disturbances act on the process): the real versus estimated state trajectories in the MPC-NPLPT algorithm
All algorithms are compared in Table 8.6 in terms of the performance indices E_1 and E_2, the number of internal iterations necessary in the MPC-NPLPT algorithm and the calculation time. It is evident that the MPC-NPSL algorithm is the worst one and the MPC-SSL is better, but the performance of the MPC-NPLPT scheme is practically the same as that of the MPC-NO one, whereas its calculation time is a few times shorter than that of the MPC-NO algorithm.
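The internal loop behind the MPC-NPLPT results quoted here — repeat trajectory linearisation and a quadratic correction until the computed increments stop changing — can be caricatured in a few lines. This is an unconstrained sketch with a toy SISO Wiener model and illustrative penalty and tolerance values, not the book's algorithm or benchmark.

```python
import numpy as np

# Toy SISO Wiener model: first-order linear dynamics, then a static nonlinearity.
a, b = 0.8, 0.5
g = np.tanh

def predict(v0, du_seq):
    """Predicted outputs over the horizon for a sequence of input increments."""
    v, u, y = v0, 0.0, []
    for du in du_seq:
        u += du
        v = a * v + b * u
        y.append(g(v))
    return np.array(y)

def npl_pt_step(v0, y_sp, Nu, lam=0.1, tol=1e-6, max_iter=10):
    """Repeated trajectory linearisation + least-squares correction
    (an unconstrained caricature of the MPC-NPLPT internal loop)."""
    du = np.zeros(Nu)
    for _ in range(max_iter):
        y0 = predict(v0, du)
        # Numerical Jacobian of the predicted trajectory w.r.t. the increments.
        J = np.zeros((Nu, Nu))
        eps = 1e-6
        for j in range(Nu):
            dup = du.copy(); dup[j] += eps
            J[:, j] = (predict(v0, dup) - y0) / eps
        # Quadratic subproblem: min ||y_sp - y0 - J d||^2 + lam ||du + d||^2
        H = J.T @ J + lam * np.eye(Nu)
        f = J.T @ (y_sp - y0) - lam * du
        d = np.linalg.solve(H, f)
        du = du + d
        if np.linalg.norm(d) < tol:
            break
    return du

du = npl_pt_step(v0=0.0, y_sp=np.full(5, 0.4), Nu=5)
```

In the real algorithm, each subproblem is a constrained QP and the linearisation is performed around the trajectory generated by the full Wiener model.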
8.3 The State-Space MIMO Process B with Ten Inputs and Two Outputs: Model I

8.3.1 Description of the State-Space MIMO Process B

Let us consider the process which is a state-space representation of the MIMO Wiener system B introduced in Sect. 4.5. The corresponding matrices of the model (2.84)–(2.85) are defined by giving only their non-zero entries. The non-zero entries of the matrix A are
Fig. 8.12 The state-space MIMO process A (the set-point is constant, the unmeasured disturbances act on the process): simulation results of the MPC-NPLPT and the MPC-NOaug algorithms
Table 8.6 The state-space MIMO process A (the unmeasured disturbances act on the process): comparison of all considered MPC algorithms in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
a_{1,1} = 1.3853,   a_{1,2} = −4.7237 × 10^{−1},   a_{2,1} = 1
a_{3,3} = 1.9493,   a_{3,4} = −9.4998 × 10^{−1},   a_{4,3} = 1
a_{5,5} = 1.7290,   a_{5,6} = −7.4702 × 10^{−1},   a_{6,5} = 1
a_{7,7} = 1.9436,   a_{7,8} = −9.4442 × 10^{−1},   a_{8,7} = 1
a_{9,9} = 1.8249,   a_{9,10} = −8.3249 × 10^{−1},  a_{10,9} = 1
a_{11,11} = 1.9364, a_{11,12} = −9.3746 × 10^{−1}, a_{12,11} = 1
a_{13,13} = 1.8705, a_{13,14} = −8.7465 × 10^{−1}, a_{14,13} = 1
a_{15,15} = 1.9272, a_{15,16} = −9.2851 × 10^{−1}, a_{16,15} = 1
a_{17,17} = 1.8972, a_{17,18} = −8.9982 × 10^{−1}, a_{18,17} = 1
a_{19,19} = 1.9148, a_{19,20} = −9.1657 × 10^{−1}, a_{20,19} = 1
a_{21,21} = 1.9148, a_{21,22} = −9.1657 × 10^{−1}, a_{22,21} = 1
a_{23,23} = 1.8972, a_{23,24} = −8.9982 × 10^{−1}, a_{24,23} = 1
a_{25,25} = 1.9272, a_{25,26} = −9.2851 × 10^{−1}, a_{26,25} = 1
a_{27,27} = 1.8705, a_{27,28} = −8.7465 × 10^{−1}, a_{28,27} = 1
a_{29,29} = 1.9364, a_{29,30} = −9.3746 × 10^{−1}, a_{30,29} = 1
a_{31,31} = 1.8249, a_{31,32} = −8.3249 × 10^{−1}, a_{32,31} = 1
a_{33,33} = 1.9436, a_{33,34} = −9.4442 × 10^{−1}, a_{34,33} = 1
a_{35,35} = 1.7290, a_{35,36} = −7.4702 × 10^{−1}, a_{36,35} = 1
a_{37,37} = 1.9493, a_{37,38} = −9.4998 × 10^{−1}, a_{38,37} = 1
a_{39,39} = 1.3853, a_{39,40} = −4.7237 × 10^{−1}, a_{40,39} = 1   (8.11)
the non-zero entries of the matrix B are b1,1 = 1.2500 × 10−1 ,
b3,1 = 1.5625 × 10−2 ,
b5,2 = 6.2500 × 10−2
b5,2 = 6.2500 × 10−2 ,
b7,2 = 1.5625 × 10−2 ,
b9,3 = 6.2500 × 10−2
b11,3 = 3.1250 × 10−2 ,
b13,4 = 6.2500 × 10−2 ,
b15,4 = 3.1250 × 10−2
b17,5 = 6.2500 × 10−2 ,
b19,5 = 3.1250 × 10−2 ,
b21,6 = 6.2500 × 10−2
b23,6 = 6.2500 × 10−2 ,
b25,7 = 3.1250 × 10−2 ,
b27,7 = 6.2500 × 10−2
b29,8 = 3.1250 × 10−2 ,
b31,8 = 1.2500 × 10−1 ,
b33,9 = 3.1250 × 10−2
b35,9 = 2.5000 × 10−1 , b37,10 = 3.1250 × 10−2 ,
b39,10 = 5.0 × 10−1 (8.12)
and the non-zero entries of the matrix C are

c_{1,1} = 3.9143 × 10^{−2},  c_{1,2} = 3.0485 × 10^{−2},  c_{1,5} = 4.5396 × 10^{−2}
c_{1,6} = 4.1190 × 10^{−2},  c_{1,9} = 3.1365 × 10^{−2},  c_{1,10} = 2.9505 × 10^{−2}
c_{1,13} = 2.3912 × 10^{−2}, c_{1,14} = 2.2867 × 10^{−2}, c_{1,17} = 1.9310 × 10^{−2}
c_{1,18} = 1.8642 × 10^{−2}, c_{1,21} = 1.6190 × 10^{−2}, c_{1,22} = 1.5727 × 10^{−2}
c_{1,25} = 2.7875 × 10^{−2}, c_{1,26} = 2.7194 × 10^{−2}, c_{1,29} = 2.4468 × 10^{−2}
c_{1,30} = 2.3947 × 10^{−2}, c_{1,33} = 2.1803 × 10^{−2}, c_{1,34} = 2.1391 × 10^{−2}
c_{1,37} = 1.9661 × 10^{−2}, c_{1,38} = 1.9328 × 10^{−2}, c_{2,3} = 4.1392 × 10^{−3}
c_{2,4} = 4.0690 × 10^{−3},  c_{2,7} = 1.0260 × 10^{−2},  c_{2,8} = 1.0067 × 10^{−2}
c_{2,11} = 9.7873 × 10^{−3}, c_{2,12} = 9.5789 × 10^{−3}, c_{2,15} = 1.7154 × 10^{−2}
c_{2,16} = 1.6735 × 10^{−2}, c_{2,19} = 2.9437 × 10^{−2}, c_{2,20} = 2.8595 × 10^{−2}
c_{2,23} = 2.5747 × 10^{−2}, c_{2,24} = 2.4857 × 10^{−2}, c_{2,27} = 4.7823 × 10^{−2}
c_{2,28} = 4.5735 × 10^{−2}, c_{2,31} = 5.0184 × 10^{−2}, c_{2,32} = 4.7209 × 10^{−2}
c_{2,35} = 6.8094 × 10^{−2}, c_{2,36} = 6.1785 × 10^{−2}, c_{2,39} = 1.9572 × 10^{−1}
c_{2,40} = 1.5242 × 10^{−1}   (8.13)
The nonlinear static blocks (Eq. 2.14) are the same as in the input-output description (Eqs. (4.14)–(4.15)).
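Because the model is specified only through the non-zero entries of its matrices, an implementation can store them as index–value maps and expand them on demand. The sketch below fills in just a few of the entries from Eqs. (8.11)–(8.12) for illustration (the complete model would list all entries given above) and checks the stability of one second-order block.

```python
import numpy as np

def dense_from_entries(entries, shape):
    """Expand a {(row, col): value} dictionary (1-based indices,
    as in Eq. (8.11)) into a dense NumPy array."""
    M = np.zeros(shape)
    for (i, j), v in entries.items():
        M[i - 1, j - 1] = v
    return M

# A few of the non-zero entries of A from Eq. (8.11); the complete
# model would include all of the 40x40 entries listed in the text.
a_entries = {(1, 1): 1.3853, (1, 2): -4.7237e-1, (2, 1): 1.0,
             (3, 3): 1.9493, (3, 4): -9.4998e-1, (4, 3): 1.0}
b_entries = {(1, 1): 1.2500e-1, (3, 1): 1.5625e-2}   # from Eq. (8.12)

A = dense_from_entries(a_entries, (40, 40))
B = dense_from_entries(b_entries, (40, 10))

# Each 2x2 diagonal block [[a, b], [1, 0]] is a second-order mode;
# its eigenvalues must lie inside the unit circle for stability.
block = A[0:2, 0:2]
eigs = np.linalg.eigvals(block)
print(np.abs(eigs))
```

Storing the matrices sparsely also mirrors their block-diagonal structure: each input excites a chain of second-order modes that the C matrix mixes into the two outputs.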
8.3.2 Implementation of MPC Algorithms for the State-Space MIMO Process B

The following MPC algorithms are compared:
1. The classical LMPC algorithm based on a linear model (three example models, obtained for different operating points, are considered).
2. The MPC-SSL and MPC-NPSL algorithms.
3. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms.
4. The MPC-NO algorithm.
In all listed algorithms, the discussed offset-free prediction method is used. In general, all universal equations presented in Chap. 7 are used. Additional specific relations that depend on the static parts of the model are the same as for the input-output version of the MIMO process A because the nonlinear blocks are the same in the MIMO systems A and B (Eqs. (4.16)–(4.23)).
8.3.3 MPC of the State-Space MIMO Process B

Parameters of all compared MPC algorithms are the same as in the input-output MIMO process B case (Sect. 4.5): N = 20, N_u = 3, the weighting matrix M_p = diag(50, 25) for p = 1, …, N and the weighting matrix Λ_p = λ diag(1, …, 1) with the default parameter λ = 0.05 for p = 1, …, N_u (in some cases, different parameters λ are used). The constraints imposed on the magnitude of the manipulated variables are defined by u_n^min = −20, u_n^max = 20, n = 1, …, 10.

In the first part of simulations, the model is perfect (no modelling errors) and the process is not affected by any disturbances. Simulation results of the LMPC algorithm are given in Fig. 8.13; three versions of the algorithm are verified for linear models obtained at three different operating points (the same as in the input-output MIMO process B case). The LMPC algorithm does not give satisfying control quality. Let us note that the obtained trajectories are different from those for the input-output MIMO process B depicted in Fig. 4.41. Simulation results of the MPC algorithms with on-line model linearisation, i.e. the MPC-SSL and MPC-NPSL schemes, are not shown because they are the same as those obtained for the input-output process representation shown in Fig. 4.42. The MPC-SSL algorithm is fast, but it gives huge overshoot. The MPC-NPSL algorithm does not work for the default parameter λ; an increase of that parameter is necessary, but the resulting trajectories are very slow. The MPC-NPLT1, MPC-NPLT2 and MPC-NPLPT algorithms with on-line trajectory linearisation and the MPC-NO scheme give the same results in the input-output and state-space domains (Fig. 4.43). In general, one trajectory linearisation at each sampling instant turns out to be insufficient; the MPC-NPLPT scheme with multiple repetitions of linearisation and
Fig. 8.13 The state-space MIMO process B: simulation results of the linear LMPC algorithm based on different models, obtained for different operating points
Table 8.7 The state-space MIMO process B: comparison of all considered MPC algorithms in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time
quadratic optimisation gives excellent control quality, the same as that obtained in the MPC-NO algorithm. All considered MPC algorithms are compared in Table 8.7 in terms of the performance criteria E_2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT scheme and the calculation time. When compared with the results obtained for the input-output MIMO process B (Table 4.11), only the LMPC and MPC-NPLT1 algorithms give slightly different results; all other algorithms give the same trajectories. As always, the MPC algorithms with model or trajectory linearisation require only a fraction of the calculation time necessary in the MPC-NO scheme. It is interesting to note that this fraction decreases significantly for the state-space MIMO process B when compared with the state-space SISO process (Table 8.1) and the state-space MIMO process A (Table 8.4). The same phenomenon is observed in the case of input-output models, as shown in Tables 4.1, 4.7 and 4.11, respectively. Table 8.8 compares the relative computational time of all tested MPC algorithms for both input-output and state-space representations of the MIMO process B, for different values of the control horizon. When confronted with similar comparisons for the SISO process (Table 8.2) and the MIMO process A (Table 8.5), we may make the following two observations. Firstly, the relative relations of the calculation time are quite similar, i.e. the more advanced the algorithm, the longer the required time. Secondly, the higher the dimensionality of the process, the more computationally efficient the discussed MPC algorithms with linearisation. For the default control horizon N_u = 3, the ratio of the calculation time of the most complex algorithm with linearisation, i.e. MPC-NPLPT, to that of the MPC-NO scheme is some 30%, 15% and 1% for the state-space SISO, MIMO A (with n_u = n_y = 2) and MIMO B (with n_u = 10, n_y = 2) processes, respectively.

Table 8.8 The input-output MIMO process B versus the state-space MIMO process B: comparison of all considered MPC algorithms in terms of the calculation time (%) for different control horizons, N = 20

Finally, we assume that the model is not perfect and the process is affected by disturbances. The same scenario is considered as in the input-output MIMO process B case described in Sect. 4.5, i.e. the steady-state gains of all input-output channels of the process are increased by 20% and, from the sampling instant k = 30, an additive unmeasured step disturbance of value 0.5 added to the first process output is taken into account; a second disturbance of value −0.25 added to the second process output is considered from the sampling instant k = 50. The obtained performance criteria E_2 and E_MPC-NO, the number of internal iterations necessary in the MPC-NPLPT algorithm and the calculation time are given in Table 8.9. Similarly to the perfect model and disturbance-free case described earlier in this chapter, when compared with the corresponding results obtained for the input-output MIMO process B given in Table 4.13, we may see that only the MPC-NPLT1 algorithm gives slightly different results, but the difference is insignificant. All other MPC algorithms with model or trajectory linearisation, and the MPC-NO one, give the same results for the input-output and state-space process configurations. Hence, the trajectories obtained for the state-space MIMO process B are not shown since they are the same as those for the input-output MIMO process B shown in Fig. 4.45.

Table 8.9 The state-space MIMO process B (the unmeasured disturbances act on the process and the model is not perfect): comparison of all considered MPC algorithms in terms of the control performance criteria (E 2 and E MPC-NO ), the sum of internal iterations (SII) and the calculation time

Finally, we must stress a very important advantage of state-space Wiener models. The state-space representation of the MIMO process B does not lead to any numerical problems, whereas such problems are observed for the corresponding classical input-output MIMO Wiener model I, as shown in Sect. 4.5. For truly multivariable processes, the input-output MIMO Wiener models III and IV are recommended rather than the MIMO structures I and II, respectively.
8.4 The Influence of Process Dimensionality on the Calculation Time

In Tables 8.2, 8.5 and 8.8 we have separately studied the influence of the control horizon on the relative calculation time of all considered state-space MPC algorithms for the SISO process and the MIMO processes A and B, respectively. Now let us consider Table 8.10, which shows the influence of both the prediction and control horizons on the computational time. The results are scaled in such a way that the computational time for the state-space SISO process controlled by the MPC-NO scheme with the default values of the horizons (N = 10, N_u = 3) corresponds to 100%.
Table 8.10 The state-space SISO process, the state-space MIMO processes A and B: comparison of all considered MPC algorithms in terms of the calculation time (%) for different prediction and control horizons
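The dimensionality effect summarised in Tables 8.2, 8.5, 8.8 and 8.10 is easy to relate to problem sizes: the QP solved by the linearisation-based algorithms has n_u·N_u decision variables, while the stacked predicted trajectory has n_y·N components. A small sketch with the default horizons used for the three benchmarks:

```python
# MPC with linearisation solves a QP with n_u*Nu decision variables
# (input increments over the control horizon); the predicted output
# trajectory stacked over the prediction horizon has n_y*N components.
def qp_dimensions(n_u, n_y, N, Nu):
    n_dec = n_u * Nu              # decision variables
    n_pred = n_y * N              # stacked predicted outputs
    hessian_shape = (n_dec, n_dec)
    return n_dec, n_pred, hessian_shape

# Default horizons quoted in the text: N = 10 for the SISO process and
# MIMO process A, N = 20 for MIMO process B, with Nu = 3 throughout.
for name, n_u, n_y, N, Nu in [("SISO", 1, 1, 10, 3),
                              ("MIMO A", 2, 2, 10, 3),
                              ("MIMO B", 10, 2, 20, 3)]:
    print(name, qp_dimensions(n_u, n_y, N, Nu))
```

Even for MIMO process B the QP has only 30 decision variables, which is why the calculation time of the linearisation-based algorithms grows so gently with process dimensionality.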
In general, the presented results have the same character as those obtained for the input-output representations of the same processes, as shown in Table 4.15. When the MPC algorithms with on-line model or trajectory linearisation are taken into account, the computational time for the state-space MIMO process A grows moderately when compared with that of the state-space SISO one. Moreover, the computational time required by the MPC algorithms applied to the state-space MIMO process B with ten inputs and two outputs does not grow dramatically when compared with the state-space MIMO process A with two inputs and two outputs or the state-space SISO process. These relations obtained for the state-space process representation are even better than those for the input-output process description, probably because the MPC-NO algorithm is more time-consuming in the state-space case than in the input-output one (this may also be inferred from Tables 8.2, 8.5 and 8.8).

Acknowledgements Figures 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 8.10, 8.11 and 8.12 reprinted from: Ławryńczuk, M., Tatjewski, P.: Offset-free state-space nonlinear predictive control for Wiener systems. Information Sciences, vol. 511, pp. 127–151, Copyright (2020), with permission from Elsevier.
Chapter 9
Conclusions
Abstract This chapter summarises the whole book. Properties of the discussed computationally efficient MPC approaches are pointed out. Additionally, the benefits of using Wiener models in MPC, in particular of a neural structure, are stressed.
In this book, we have thoroughly studied simulation results of MPC algorithms in the input-output configuration for as many as seven benchmark processes, including the neutralisation reactor and the fuel cell. We have also considered simulation results of MPC algorithms in the state-space configuration for three benchmark processes. Based on the obtained results, we may formulate the following observations:
1. Because of the nonlinear nature of the considered processes and the significant, fast set-point changes, the LMPC algorithm based on the parameter-constant linear model gives insufficient control quality.
2. The simple MPC algorithms with on-line model linearisation and quadratic optimisation, i.e. the MPC-SSL and MPC-NPSL approaches, make it possible to control the nonlinear processes, although the resulting trajectories are far from those obtained in the MPC-NO scheme with nonlinear optimisation repeated at each sampling instant. In general, the MPC-NPSL approach, in which the full nonlinear model is used for free trajectory calculation, gives better results than the MPC-SSL scheme, in which a linear approximation of the nonlinear model is used for this purpose.
3. The more advanced MPC algorithms with on-line trajectory linearisation and quadratic optimisation make it possible to obtain much better control quality than the simple MPC schemes with on-line model linearisation. The MPC-NPLT1 and MPC-NPLT2 schemes, in which one linearisation is performed at each sampling instant, give slightly worse control quality than the reference MPC-NO algorithm, but the differences are really small. The MPC-NPLPT algorithm, in which a few repetitions of trajectory linearisation and quadratic optimisation are possible at each sampling instant, gives process trajectories practically the same as those obtained in the computationally demanding MPC-NO approach in which nonlinear optimisation is used.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Ławryńczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7_9
4. The classical MPC-inv algorithm, in which an inverse model of the nonlinear static part of the Wiener model is used to cancel the influence of process nonlinearity, works quite well in the case of a perfect model and no disturbances.
5. In the case of model errors and disturbances, the considered MPC algorithms with on-line model or trajectory linearisation work well; some problems are observed only for the simplest MPC-SSL scheme. Of course, due to model imperfections and disturbances, control quality is lower when compared to the ideal case.
6. Unfortunately, the MPC-inv algorithm is not robust. For the considered benchmark processes, it gives very bad control quality in the case of an imperfect model and disturbances. The unwanted strong oscillations may be reduced by increasing the parameter λ, but in such a case the whole algorithm is very slow; all process input and output trajectories are much slower than those obtained with the MPC-NPLPT scheme.
7. Of course, the MPC-inv approach is possible only when the inverse model exists. Moreover, it may be practically used only when the nonlinear static blocks have one input and one output each. For more complex Wiener structures, the inverse models may be very complicated, which makes implementation difficult or impossible.
As far as the computational effort of the discussed MPC algorithms is concerned, we may observe the following issues:
1. Computational efficiency of all MPC algorithms with model and trajectory linearisation is twofold. Firstly, they require solving only quadratic optimisation problems; the complicated nonlinear optimisation used in the MPC-NO scheme is unnecessary. For correctly selected tuning coefficients, quadratic optimisation MPC problems have only one (global) solution; the multiple-minima problem of nonlinear optimisation does not exist. Secondly, the computational time is shorter than in the case of the MPC-NO algorithm.
For the analysed benchmarks, the computational time required by the discussed algorithms is only a fraction of that necessary in the MPC-NO scheme. For example, considering the input-output approach and the default horizons, that fraction is approximately 40–60%, 20–30% and 1–2% for the SISO process, the MIMO process with two inputs and two outputs and the MIMO system with ten inputs and two outputs, respectively. For the state-space configuration, these numbers are even better: some 20–30%, 10–15% and 0.5–2%.
2. The control horizon has a major impact on the calculation time; the prediction horizon has a much lower influence. All discussed MPC algorithms work well for long horizons.
3. The computational time of the discussed MPC algorithms does not grow significantly when the number of process manipulated variables increases, which is not true in the case of the MPC-NO approach.
Computational efficiency may be improved using parameterisation of the calculated sequence of the decision variables with Laguerre functions:
1. The parameterisation approach may be used in all discussed MPC algorithms, including the MPC-NO scheme with nonlinear optimisation and the computationally efficient methods with on-line model or trajectory linearisation.
2. The MPC algorithms with parameterisation are particularly useful for processes with complex dynamics which require long control horizons.
3. As a result of parameterisation, the number of decision variables of the MPC optimisation task is reduced.
4. The higher the number of Laguerre functions, the better the control quality.
5. For the discussed benchmark process with complex dynamics, the MPC-NPLPT-P algorithm with parameterisation (the optimisation problem has only n_L = 20 decision variables) gives practically the same results as the classical MPC-NPLPT algorithm without parameterisation (the optimisation problem has as many as N_u = 100 decision variables).
6. For the discussed benchmark process, the rudimentary MPC-NPLPT algorithm requires more than 30% longer optimisation time than the MPC-NPLPT-P algorithm.
For prediction in MPC, Wiener models are recommended. This is because such models can approximate very well the properties of many processes. Furthermore, due to the model's specialised structure, implementation of the discussed MPC algorithms is relatively simple. Implementation details for as many as six input-output Wiener structures and three state-space ones are discussed. In the state-space configuration, a very efficient prediction model is used to guarantee offset-free control, much easier and better than the classical approach in which the augmented state disturbance model is used. For two technological processes, i.e. a neutralisation reactor and a fuel cell, the effectiveness of polynomials and neural networks used in the nonlinear static part of the Wiener model is thoroughly compared.
In general, it turns out that the neural Wiener model outperforms the polynomial one in terms of the number of parameters and accuracy. Polynomials cannot give modelling accuracy comparable to that of the neural approach and lead to numerical problems when their degree is high. Hence, the use of neural Wiener models is recommended.
As far as future research is concerned, the following issues are worth considering:
1. Although in this book MLP neural networks with one hidden layer are successfully used in all considered neural Wiener models, it is possible to use other types of neural networks.
2. Alternative, more complex cascade models may be used in MPC algorithms, e.g. parallel ones.
3. In this book, we consider classical process control benchmarks described by ordinary differential equations (or their discrete-time versions). It may be an interesting idea to develop computationally efficient MPC algorithms for fractional-order Wiener systems.
4. In this book, model or trajectory linearisation is used to formulate quadratic optimisation tasks in place of computationally demanding nonlinear ones. An interesting alternative is to use the Koopman operator rather than on-line linearisation.
5. In this book, Laguerre functions are used to parameterise the calculated decision vector, which makes it possible to reduce the number of decision variables. Alternative approaches to parameterisation may be considered.
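The Laguerre parameterisation mentioned in point 5 can be sketched with a standard discrete-time state-space recursion for Laguerre functions (a common construction in the MPC literature, not necessarily the exact one used in the book; the pole a, the number of functions and the coefficients below are illustrative):

```python
import numpy as np

def laguerre_basis(a, n_funcs, horizon):
    """Discrete Laguerre functions l_1..l_n over k = 0..horizon-1,
    generated by the state-space recursion L(k+1) = A_l L(k)."""
    beta = 1.0 - a * a
    A_l = np.zeros((n_funcs, n_funcs))
    for i in range(n_funcs):
        A_l[i, i] = a
        for j in range(i):
            A_l[i, j] = ((-a) ** (i - j - 1)) * beta
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(n_funcs)])
    basis = np.zeros((horizon, n_funcs))
    for k in range(horizon):
        basis[k] = L        # row k holds l_1(k), ..., l_n(k)
        L = A_l @ L
    return basis

# Parameterise a long sequence of input increments with few coefficients:
# du(k) = basis[k] @ eta, so the optimiser works with n_funcs << Nu variables.
basis = laguerre_basis(a=0.5, n_funcs=4, horizon=400)
eta = np.array([1.0, -0.5, 0.2, 0.0])   # illustrative coefficients
du = basis @ eta
```

The functions are orthonormal over an infinite horizon, which is what makes the low-dimensional coefficient vector η a well-conditioned replacement for the full increment sequence.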
Index
C Constraints imposed on the magnitude of input variables, 9, 11 imposed on the magnitude of predicted output variables, 9–12 imposed on the rate of change of input variables, 9, 11
D DMC algorithm, 23
F Fuel cell first-principle model, 253–256 model identification for MPC, 256–270 MPC, 270–279
G GPC algorithm, 23
H Hammerstein model, 60–61 example applications, 60–61 identification methods, 61 parallel, 62 Hammerstein–Wiener model, 61 example applications, 61 identification methods, 61 Horizon
control, 6 prediction, 8
M MPC algorithms advantages, 15–16 compression of the constraint set, 26 computational complexity, 23–29 constrained linear, 24 constrained linear explicit, 24 constrained nonlinear explicit, 27 control quality assessment, 16 cost-function, 8, 10 decision variables, 6–8 example applications, 30–31 extensions, 16 fast, 26 fuzzy, 25–26 infeasibility problem, 16–19 move blocking, 26 optimisation problem, 10, 12, 15 optimisation problem with soft constraints, 18 optimisation solvers, 25 parameterisation using Laguerre functions, 19–24 predicted output trajectory, 13 principle, 5–9 set-point trajectory, 13 unconstrained linear explicit, 24 using linear models, 23–24 using nonlinear models, 24–29 with neural optimiser, 26
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Ławry´nczuk, Nonlinear Predictive Control Using Wiener Models, Studies in Systems, Decision and Control 389, https://doi.org/10.1007/978-3-030-83815-7
with on-line model linearisation, 28 with on-line trajectory and approximation, 29 with on-line trajectory linearisation, 28–29 MPC algorithms for input-output Wiener models MPC-inv, 71–73 MPC-NO, 73–81 optimisation, 78–81 prediction, 73–78 MPC-NO-P, 81–83 optimisation, 81–83 prediction, 81 MPC-NPLPT, 127–136 optimisation, 133–136 prediction, 127–133 trajectory linearisation, 127–133 MPC-NPLPT-P, 136–140 optimisation, 137–140 prediction, 136–137 MPC-NPLT, 107–123 optimisation, 121–123 prediction, 109–120 trajectory linearisation, 108–120 MPC-NPLT-P, 124–127 optimisation, 124–127 prediction, 124 MPC-NPSL, 84–104 free trajectory calculation, 88–89, 92, 96, 98–99, 101 model linearisation, 85–87, 90–91, 93–95, 97, 99–101 optimisation, 101–104 prediction, 85–101 MPC-NPSL-P, 105–107 optimisation, 105–107 prediction, 105 MPC-SSL, 84–104 free trajectory calculation, 89, 92–93, 96–101 model linearisation, 85–87, 90–91, 93–95, 97, 99–101 optimisation, 101–104 prediction, 85–101 MPC-SSL-P, 105–107 optimisation, 105–107 prediction, 105 MPC algorithms for state-space Wiener models MPC-inv, 285 MPC-NO, 286–290 optimisation, 290
prediction, 286–290 MPC-NO-P, 291 optimisation, 291 prediction, 291 MPC-NPLPT, 302–305 optimisation, 305 prediction, 302–305 trajectory linearisation, 302 MPC-NPLPT-P, 305 optimisation, 305 prediction, 305 MPC-NPLT, 296–302 optimisation, 296 prediction, 297–302 trajectory linearisation, 296 MPC-NPLT-P, 302 optimisation, 302 prediction, 302 MPC-NPSL, 291–295 free trajectory calculation, 294–295 model linearisation, 292–293 optimisation, 295 prediction, 291–295 MPC-NPSL-P, 296 optimisation, 296 prediction, 296 MPC-SSL, 291–295 free trajectory calculation, 295 model linearisation, 292–293 optimisation, 295 prediction, 291–295 MPC-SSL-P, 296 optimisation, 296 prediction, 296 MPCS algorithm, 23
N Neutralisation reactor first-principle model, 215–216 model identification for MPC, 219–224 MPC, 226–249 MPC with constraints imposed on the controlled variable, 238–249
P PFC algorithm, 23 PID controller, 4–5
U Uryson model, 62
W Wiener–Hammerstein model, 62 example applications, 62 identification methods, 62 parallel, 62 Wiener model example applications, 60 identification methods, 59 input-output MIMO I, 43–44 MIMO II, 44–46
MIMO III, 47–51 MIMO IV, 51–54 MIMO V, 54–56 SISO, 41–42 parallel, 62 state-space MIMO I, 57–58 MIMO II, 58–59 SISO, 56–57 structures of linear dynamic part, 59–60 structures of nonlinear static part, 60