Reconfigurable Distributed Control
Héctor Benítez-Pérez and Fabián García-Nocetti
Reconfigurable Distributed Control With 108 Figures
Héctor Benítez-Pérez, PhD
Fabián García-Nocetti, PhD
DISCA-IIMAS, Universidad Nacional Autónoma de México, Circuito Escolar, Ciudad Universitaria, Del. Coyoacán, 04510 México D.F., México

Cover illustration: General strategy of control reconfiguration and time graph strategy.
British Library Cataloguing in Publication Data
Benítez-Pérez, Héctor
Reconfigurable distributed control
1. Adaptive control systems
I. Title II. García-Nocetti, D. Fabián, 1959–
629.8′36
ISBN 1852339543

Library of Congress Cataloging-in-Publication Data
Benítez-Pérez, Héctor.
Reconfigurable distributed control/Héctor Benítez-Pérez and Fabián García-Nocetti.
p. cm.
Includes bibliographical references and index.
ISBN 1-85233-954-3 (alk. paper)
1. Process control—Data processing. I. García-Nocetti, D. Fabián, 1959– II. Title.
TS156.8.G37 2005
670.42′75—dc22
2005040270

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

ISBN-10: 1-85233-954-3
ISBN-13: 978-1-85233-954-8

Springer Science+Business Media
springeronline.com

© Springer-Verlag London Limited 2005

MATLAB® and Simulink® are the registered trademarks of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, USA. http://www.mathworks.com

The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Typesetting: Electronic camera-ready by authors
Printed in the United States of America (SBA) 69/3830-543210
Printed on acid-free paper
Preface
This work is the result of developing reconfigurable control for unknown scenarios, known as fault scenarios, that affect system performance in the presence of distributed communication. In that respect, the control law is modified to overcome these unknown scenarios. To perform this task, several issues need to be considered, such as fault diagnosis and the related time delays. Several strategies have been reviewed to arrive at the implementation and integration of three main stages: fault diagnosis and the related heuristic confidence value; the fault tolerance strategy and the related real-time scheduling approach; and the control strategies suitable for different scenarios that consider fault presence and time delays. To accomplish this integration, reconfigurable control is proposed, in which the main issue is to keep the system under safe conditions even in the case of self-degradation. Reconfigurable control is therefore neither only a gain-scheduling algorithm nor only a decision-maker algorithm; it is a procedure in which safety degradation and its effects need to be taken into account to overcome hazardous situations. This approach is based on the use of a finite state machine to pursue reconfigurable control. This strategy is not the only feasible one; however, it is the most suitable for providing coherence under hazardous situations and when various sources of information are present.

The aim of this book is to describe a complex problem in which two areas, computer and control engineering, are combined. From the description of this problem, several algorithms are presented to obtain as much information as possible from the available sources, considering two main scenarios: fault and time delay presence. The main issue of this work is therefore how to produce suitable control and process monitoring strategies under hazardous conditions following a feasible and understandable approach.
To accomplish such a task, a well-known computer engineering strategy is pursued: the finite state machine, in which different healthy and unhealthy conditions are monitored to define the system status. The objective of this work is to present to the reader a way to perform reconfigurable control on-line without jeopardizing the safety and stability of the system. This book is written for undergraduate and postgraduate students interested in reconfigurable control as a strategy to overcome local fault conditions and
performance degradation during still-manageable fault situations. An exhaustive review of model-based and model-free strategies for fault detection is provided to introduce the reader to the area of process monitoring, aiming at a practical approximation to it. Furthermore, an extensive review of network control is given to define how the time delays caused by fault tolerance strategies are accommodated. In that respect, a guide to approaching this interesting open field is presented.

Special mention should be made of the software used in this work; three major packages are used. The first is MATLAB 6.5, especially the Simulink and Stateflow toolboxes. The second is TrueTime, which is used to simulate the communication network and is available through the Internet at http://www.control.lth.se/~dan/truetime/; this package has been found to be an efficient tool for understanding the functionality of a network. Finally, ADMIRE (ADMIRE, 2003) is another useful package, an aerodynamic model of an aircraft, which has presented interesting fault and time delay scenarios worthy of study in future work.

This book is divided into five chapters. The first chapter is a basic review of communication networks in which a description of network protocols is provided based on the common open systems interconnection (OSI) standard; particular attention is paid to the controller area network databus (CAN bus) because of its common use in real-time distributed systems. The second chapter focuses on the real-time background and presents how a real-time system can define time delays based on its inherent deterministic behavior using scheduling algorithms; in fact, an overview of the most classic scheduling algorithms is presented. The third chapter is based on fault diagnosis strategies for a common and unknown situation, the unhealthy scenario.
Specifically, this chapter concentrates on the smart element paradigm, which understands such an element as an autonomous device capable of self-diagnosis. Chapter 3 presents the most common fault diagnosis strategies and their use for health monitoring. The fourth chapter presents an implementation of the control reconfiguration strategy based on a review of network control systems, pursuing a feasible option for safety during fault conditions. Strategies such as time delay modeling and interloop connection, as well as others such as fuzzy control or model predictive control, are presented as alternatives for network control. This review allows the reader to choose any of these techniques according to the model of the plant; obviously, some of these options do not have a proper stability analysis to prove their validity. Moreover, this chapter presents fault tolerant control as a way to overcome fault appearance without any fault accommodation procedure; in this case, the control law is designed to tolerate the loss of peripheral elements such as sensors or actuators. Finally, reconfigurable control is presented as a combination of these two strategies, following an automata-based procedure. The pursued scenario is a fault environment with inherent communication time delays. To define a feasible strategy suitable for such a sporadic situation, it is necessary to design several control laws that can switch from one scenario to another, bearing in mind safety and availability issues. A way to overcome this environment by combining both scenarios using an automata is presented here. The final chapter provides implementation examples based on two different approaches. First, an example of three conveyor belts is given, in which modeling is
pursued; in this case, time delays are bounded, as are the possible local faults. The only restriction is that one local fault per belt is allowed. The second example is based on an aircraft model in which multiple sensors and actuators are present and the mathematical model is a challenge. In this case, the strategy pursued is based on fuzzy control, which provides feasible results during reconfiguration. It is important to mention that fuzzy control is pursued bearing in mind that the model is a simulation, because a real-life implementation would be virtually impossible.

The background of this book is mainly related to the Ph.D. thesis developed by the first author (Benítez-Pérez, 1999) while with the Department of Automatic Control and Systems Engineering, University of Sheffield, U.K. From this experience, several strategies were pursued, such as fault diagnosis for smart elements and reconfigurable control (Benítez-Pérez and García-Nocetti, 2003). Moreover, the use of network control is pursued to accomplish stability during unknown and sporadic time delays; in that respect, an interesting issue has been presented by the IEEE Transactions on Automatic Control (IEEE TAC, 2004).

Acknowledgments. The authors would like to thank several people who helped in the creation of this book: Dr. Gerardo Espinosa for his valuable comments on the network control chapter, Dr. Luis Álvarez for his comments on the outline of the book, and Ms. Sara Garduño Antonio for her valuable help in editing this work. Furthermore, the authors would like to acknowledge the sponsorship of the projects involved in the creation of the book: PAPIIT-UNAM IN105303, PAPIIT-UNAM IN106100, and IIMAS-UNAM.

Héctor Benítez-Pérez
Fabián García-Nocetti
México, November 2004
Contents
Preface ............................................................................ v
Contents ........................................................................... ix
1. Introduction to Network Communication ........................................... 1
   1.1. Background ................................................................. 1
   1.2. Review of Open Systems Interconnection Layer ............................... 1
        1.2.1. Application Layer ................................................... 2
        1.2.2. Presentation Layer .................................................. 3
        1.2.3. Session Layer ....................................................... 3
        1.2.4. Transport Layer ..................................................... 3
        1.2.5. Network Layer ....................................................... 4
        1.2.6. Data-Link Layer ..................................................... 4
        1.2.7. Physical Layer ...................................................... 4
   1.3. General Overview of Transmission Control Protocol/Internet Protocol ....... 5
   1.4. Industrial Networks ....................................................... 6
   1.5. Databuses ................................................................. 8
        1.5.1. ARINC 429 ........................................................... 8
        1.5.2. ARINC 629 ........................................................... 8
        1.5.3. MIL-STD 1553b ...................................................... 10
        1.5.4. Controller Area Network Databus .................................... 11
   1.6. Concluding Remarks ....................................................... 13
2. Real-Time Systems .............................................................. 15
   2.1. Background ............................................................... 15
   2.2. Overview ................................................................. 16
   2.3. Scheduling Algorithms .................................................... 25
   2.4. Distributed Real-Time Systems ............................................ 35
   2.5. Conclusions .............................................................. 37
3. Smart Peripheral Elements ...................................................... 39
   3.1. Overview ................................................................. 39
   3.2. Peripheral Autonomy ...................................................... 39
   3.3. Typical Smart Elements ................................................... 40
   3.4. Smart Elements Designs ................................................... 44
   3.5. Fault Diagnosis Approximations ........................................... 44
        3.5.1. Parameter Estimation ............................................... 44
        3.5.2. Observer-Based Techniques .......................................... 46
        3.5.3. Parity Equations ................................................... 47
        3.5.4. Principal Components Analysis ...................................... 48
        3.5.5. Neural Network Approach ............................................ 49
        3.5.6. Logic as Fault Diagnosis Strategy .................................. 60
        3.5.7. Heuristic Confidence Value Definition .............................. 61
   3.6. Conclusions .............................................................. 63
4. Reconfigurable Control ......................................................... 65
   4.1. Network Control .......................................................... 65
   4.2. Other Control Structures for Network Control Systems ..................... 79
   4.3. Reconfiguration Issues ................................................... 86
   4.4. Fault Tolerant Control ................................................... 86
   4.5. Reconfigurable Control Strategies ........................................ 88
   4.6. Concluding Remarks ....................................................... 94
5. Case Study ..................................................................... 95
   5.1. Introduction ............................................................. 95
   5.2. Case Studies ............................................................. 95
        5.2.1. Conveyor Belt Model ................................................ 95
        5.2.2. Aircraft Model .................................................... 108
   5.3. Conclusions ............................................................. 126
References ....................................................................... 127
Index ............................................................................ 137
1 Introduction to Network Communication
1.1 Background

This chapter provides a general review of several databuses that implement the most common strategies for interprocess communication. The review describes common databus behavior by showing how time delays arise from data transfer. The objective of this chapter is to explain how different protocols communicate, so that the communication time delays reviewed in subsequent chapters can be understood.
1.2 Review of Open Systems Interconnection Layer

One key issue in distributed systems is the protocol by which the information to be transmitted through the network is assembled. Several points can be made in that respect. For instance, the number of open systems interconnection (OSI) layers has a direct repercussion on user applications.

A distributed system is one in which several autonomous processors and data stores supporting processes and/or databases interact to cooperate and achieve an overall goal. The processes coordinate their activities and exchange information by means of messages transferred over a communication network (Sloman and Kramer, 1987). One basic characteristic of distributed systems is that interprocess messages are subject to variable delays and failure; there is a defined time between the occurrence of an event and its availability for observation at some other point. The simplest view of the structure of a distributed system is that it consists of a set of physically distributed computer stations interconnected by some communications network. Each station can process and store data, and it may have connections to external devices. Table 1.1 is a summary that provides an impression of the functions performed by each layer in a typical distributed system (Sloman and Kramer, 1987). It is important to highlight that this table is just a first attempt to define an overall formal concept of the OSI layer.
Table 1.1. OSI layer nonformal attempt

Layer                  Example
Application software   Monitoring and control modules
Utilities              File transfer, device handlers
Local management       Software process management
Kernel                 Multitasking, I/O drivers, memory management
Hardware               Processors, memory, I/O devices
Communication system   Virtual circuits, network routing, flow control, error control
This local layered structure is a first attempt at understanding how a distributed system is constructed. It provides a basis for describing the functions performed and the services offered at a station. The basic idea of layering is that, regardless of station boundaries, each layer adds value to the services provided by the set of lower layers. Viewed from above, a particular layer and the ones below it may be considered a “black box” that implements a set of functions to provide a service. A protocol is the set of rules governing communication between the entities that constitute a particular layer. An interface between two layers defines the means by which one local layer makes use of the services provided by the lower layer. It defines the rules and formats for exchanging information across the boundary between adjacent layers within a single station.

The communication system at a station is responsible for transporting system and application messages to and from that station. It accepts messages from the station software and prepares them for transmission via a shared network interface. It also receives messages from the network and prepares them for receipt by the station software.

In 1977, the International Organization for Standardization (ISO) started working on a reference model for open systems interconnection. The ISO model defines the seven layers shown in Figure 1.1. The emphasis of the ISO work is to allow interconnection of independent mainframes rather than distributed processing. The current version of the model only considers point-to-point connections between two equal entities.

1.2.1 Application Layer

Application entities performing purely local activities are not considered part of the model. A distributed system would not make this distinction, as any entity can potentially communicate with similar local or remote entities. The application layer includes all entities that represent human users or devices or that perform an application function.
[Figure 1.1 depicts the seven OSI layers and the service each provides to end-user application processes: application (distributed information; file transfer, access, and management), presentation (transfer syntax negotiation, syntax-independent messages), session (dialogue and synchronisation), transport (network-independent end-to-end message transfer), network (routing, addressing, and clearing), data link (data link control), and physical (mechanical and electrical network definitions), down to the data communication network itself.]
Figure 1.1. OSI layers
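The layered message transfer that Figure 1.1 depicts can be sketched in a few lines of code. This is an illustrative sketch, not from the book: the header strings and layer names are invented. Each layer on the sending station wraps the payload with its own header, and the peer layer on the receiving station strips that header off again, which is why peer layers must agree on a protocol.

```python
# Illustrative sketch (not from the book): each layer adds value to the
# service below it by wrapping the payload in its own header; the peer
# layer on the receiving station removes exactly that header.

def encapsulate(payload, headers):
    """Wrap payload with one header per layer, top (application) layer first."""
    frame = payload
    for header in headers:            # application wraps first, link layer last (outermost)
        frame = header + frame
    return frame

def decapsulate(frame, headers):
    """Strip each layer's header on the receiving side, outermost (link) first."""
    for header in reversed(headers):
        assert frame.startswith(header), "peer layers must agree on the protocol"
        frame = frame[len(header):]
    return frame

# Hypothetical per-layer headers, top layer first
headers = [b"APP|", b"TRN|", b"NET|", b"LNK|"]
wire = encapsulate(b"sensor reading", headers)
print(wire)                           # b'LNK|NET|TRN|APP|sensor reading'
print(decapsulate(wire, headers))     # b'sensor reading'
```

Viewed from above, `encapsulate` with a given header list behaves as the “black box” of the text: the caller hands over a payload and receives the full service of all lower layers.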
1.2.2 Presentation Layer

The purpose of the presentation layer is to resolve differences in information representation between application entities. It allows communication between application entities running on different computers or implemented using different programming languages. This layer is concerned with data transformation, formatting, structuring, encryption, and compression. Many functions are application-dependent and are often performed by high-level language compilers, so the borderline between the presentation and application layers is not clear.

1.2.3 Session Layer

This layer provides the facilities to support and maintain sessions between application entities. Sessions may extend over a long time interval involving many message interactions, or be very short, involving only one or two messages.

1.2.4 Transport Layer

The transport layer is the boundary between what are considered the application-oriented layers and the communication-oriented layers. This layer is the lowest using
an end-station-to-end-station protocol. It isolates higher layers from concerns such as how reliable and cost-effective transfer of data is actually achieved. The transport layer usually provides multiplexing, end-to-end error and flow control, fragmenting and reassembly of large messages into network packets, and mapping of transport-layer identifiers onto network addresses.

1.2.5 Network Layer

The network layer isolates the higher layers from routing and switching considerations. It masks the transport layer from all of the peculiarities of the actual transfer medium: whether a point-to-point link, a packet-switched network, a local area network (LAN), or even interconnected networks. It is the network layer's responsibility to get a message from a source station to the destination station across an arbitrary network topology.

1.2.6 Data-Link Layer

The task of this layer is to take the raw physical circuit and convert it into a point-to-point link that is relatively error free for the network layer. It usually entails error and flow control, but many local area networks have low intrinsic error rates and so do not include error correction.

1.2.7 Physical Layer

This layer is concerned with the transmission of bits over a physical circuit. It performs all functions associated with signaling, modulation, and bit synchronization. It may perform error detection by signal quality monitoring.

Networks can be classified into several types according to their applications, the type of physical network, and specific demands such as fault tolerance or communication performance. The classification pursued in this work is the one used in communication networks, with two main divisions, general-purpose networks and industrial networks, each characterized by the environment for which it is proposed. A general description of these protocols is listed next (Lönn, 1999):

• Carrier Sense Multiple Access/Collision Detection (CSMA/CD) – IEEE 802.3 Ethernet;
• Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) – Controller Area Network (CAN), Bosch;
• Token Passing – Token bus;
• Mini Slotting – ARINC 629;
• Time Slot Allocation – Time Triggered Protocol (Kopetz, 1994), ARINC 659 (ARINC, 1991).
Two main types of databuses are taken into account: TCP/IP and the CAN bus. Several variations based on the CAN bus, such as FTT-CAN or the planning scheduler, are also
considered. Based on the OSI computing layers, protocols are defined (Tanenbaum, 2003). Several aspects can be pursued, such as load balancing, scheduling analysis, or synchronization. These aspects are reviewed in this work on a real-time, no-migration basis; load balancing is outside the scope of this work, and the interested reader may consult Nara et al. (2003). For clock synchronization, there are various feasible approaches, such as those of Krishna and Shin (1997) and Lönn (1999).
1.3 General Overview of Transmission Control Protocol/Internet Protocol

One major example of this type of databus is the Transmission Control Protocol/Internet Protocol (TCP/IP). The TCP/IP family of protocols is defined for use on the Internet and in other applications that use interconnected networks. The protocols are layered but do not conform precisely to the ISO 7-layer model. The layers used by TCP/IP are shown in Figure 1.2.
[Figure 1.2 maps the TCP/IP protocols and their message conformation onto the OSI layering: the application layer, the transport layer (TCP or UDP), the Internet layer (IP), and the network interface layer, which places the message in a network frame on the underlying network.]
Figure 1.2. TCP/IP protocol over OSI layer
The Internet protocol suite provides two transport protocols, TCP and the user datagram protocol (UDP). TCP is a reliable connection-oriented protocol, whereas UDP is a datagram protocol that does not guarantee reliable transmission. IP is the underlying “network” protocol of the Internet virtual network. The TCP/IP specifications do not specify the layers below the Internet datagram layer. The success of the TCP/IP protocols is based on the independence of the
underlying transmission technology, which enables inter-networks to be built from many single heterogeneous networks and data links.
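The difference between the two transport services can be illustrated with a short sketch (not from the book; the message content and port choice are invented). UDP delivers self-contained datagrams with no connection setup and no delivery guarantee, whereas TCP first establishes a connection and then provides a reliable byte stream. A minimal UDP exchange over the loopback interface:

```python
# Sketch (not from the book): a single UDP datagram sent and received on
# the loopback interface.  No connection is established, and over a real
# network the datagram could be lost, duplicated, or reordered.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # datagram service
receiver.bind(("127.0.0.1", 0))            # bind to any free loopback port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"sensor value 42", ("127.0.0.1", port))  # fire-and-forget send

data, addr = receiver.recvfrom(1024)       # blocks until a datagram arrives
print(data)                                # b'sensor value 42'

sender.close()
receiver.close()
```

A TCP exchange would instead use `SOCK_STREAM` sockets with `connect`/`accept`, and the protocol itself would retransmit lost segments; that reliability is exactly what UDP omits.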
1.4 Industrial Networks

It is important in a distributed system to ensure system synchronization. Without tight synchronization, it is likely that the system will lose data consistency. For example, sensors may be sampled at different times, which leads to failures being detected due to differences between data values. It is also important to consider intermediate data and consistency between replicated processing if comparison/voting is used to prevent the states of the replicas from diverging (Brasileiro et al., 1995). Asynchronous events and the processing of nonidentical messages could lead to replica state divergence.

Synchronization at the level of processor micro-instructions is logically the most straightforward way to achieve replica synchronism. In this approach, processors are driven by a common clock source, which guarantees that they execute the same step at each clock pulse. Outputs are evaluated by a (possibly replicated) hardware component at appropriate times. Asynchronous events must be distributed to the processors of a node through special circuits that ensure that all correct processors will perceive such an event at the same point of their instruction flow. As every correct processor of a node executes the same instruction flow, all programs that run on the nonredundant version can be made to run, without any changes, on the node (as concurrent execution).

There are, however, a few problems with the micro-instruction-level approach to synchronization. First, as indicated before, individual processors must be built in such a way that they have deterministic behavior at each clock pulse, so that they produce identical outputs. Second, the introduction of special circuits, such as a reliable comparator/voter, a reliable clock, asynchronous event handlers, and bus interfaces, increases the complexity of the design, which in the extreme can lead to a reduction in the overall reliability of a node.
Third, every new microprocessor architecture requires a considerable redesign effort. Finally, because of their tight synchronization, a transient fault is likely to affect the processors in an identical manner, thus making a node susceptible to common-mode failures.

An alternative approach that tries to reduce the hardware-level complexity associated with the approaches discussed here is to maintain replica synchronism at a higher level, for instance, at the process, or task, level by making use of appropriate software-implemented protocols. Such software-implemented nodes can offer several advantages over their hardware-implemented equivalents:

• Technology upgrades seem easy because the principles behind the protocols do not change.
• By employing different types of processors within a node, a measure of tolerance against design faults in processors may be obtained without recourse to any specialized hardware.
Fail silent nodes are implemented at the higher software fault tolerance layer. The main goal is to detect faults inside the several processors (initially two) that comprise
a node. As soon as one processor has detected a fault, it has two options: either remain fail silent or decrease its performance. The latter option is suitable when the faulty processor is still able to check information from the other processor. This implementation involves, first, a synchronization technique called the “order protocol” and, second, a comparison procedure that validates and transmits the information, or remains silent if there is a fault.

The concept used for local fault tolerance in fail silent nodes is the basis of the approach followed in this book for the “smart” elements. However, in this case, in the presence of a fault, the nodes should not remain silent. The main advantage of fail silent nodes is the use of object-oriented programming for synchronization protocols, which allows comparison of the results from both processors at the same time. Fail silent nodes within fault tolerance are considered to be the first move toward mobile objects (Caughey and Shrivastava, 1995). Although the latter technique is not explained here, it remains an interesting research area for fault tolerance.

System model and assumptions. It is necessary to assume that the computation performed by a process on a selected message is deterministic. This assumption is well known in state machine models, because the precise requirements for supporting replicated processing are known (Schneider, 1990). Basically, in the replicated version of a process, the multiple input ports of the nonreplicated process are merged into a single port, and each replica selects the message at the head of its port queue for processing. So, if all nonfaulty replicas have identical states, then they produce identical output messages, provided their queues can be guaranteed to contain identical messages in identical order. Thus, replication of a process requires the following two conditions to be met:

Agreement: All nonfaulty replicas of a process receive identical input messages.
Order: All nonfaulty replicas process the messages in an identical order.

Practical distributed programs often require some additional functionality, such as using time-outs when they are waiting for messages. Time-outs and other asynchronous events, such as high-priority messages, are potential sources of nondeterminism during input message selection, which makes such programs difficult to replicate. In Chapter 4, this nondeterminism is handled as an inherent characteristic of the system.

It is assumed that each processor of a fail silent node has network interfaces for internode communication over networks. In addition, the processors of a node are internally connected by communication links for the intranode communication needed for the execution of the redundancy management protocols. The maximum intranode communication delay over a link is known and bounded: if a nonfaulty process of a neighbor processor sends a message, then the message will be received within δ time units. Communication channel failures are categorized as processor failures.
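The Agreement and Order conditions can be sketched in a few lines (not from the book; the class, the state-update rule, and the message values are invented for illustration). Two replicas that merge their inputs into a single port queue, see identical messages in identical order, and apply the same deterministic computation necessarily stay in identical states, so a comparator or voter sees identical outputs.

```python
# Sketch (not from the book): replicas fed identical messages in identical
# order, each performing the same deterministic computation, cannot diverge.
from collections import deque

class Replica:
    def __init__(self):
        self.state = 0
        self.queue = deque()            # single merged input port

    def deliver(self, msg):
        self.queue.append(msg)          # Agreement + Order: same msgs, same order

    def step(self):
        msg = self.queue.popleft()      # select the message at the head of the port
        self.state = self.state * 31 + msg   # deterministic computation (invented)
        return self.state               # output handed to the comparator/voter

r1, r2 = Replica(), Replica()
for msg in [3, 1, 4, 1, 5]:             # both replicas receive the same sequence
    r1.deliver(msg)
    r2.deliver(msg)

outputs = [(r1.step(), r2.step()) for _ in range(5)]
assert all(a == b for a, b in outputs)  # no divergence: the comparator stays silent
```

Violating either condition (a time-out consuming a message on one replica only, or two replicas dequeuing in different orders) breaks the invariant, which is why asynchronous events are singled out in the text as a source of nondeterminism.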
8
Reconfigurable Distributed Control
1.5 Databuses

For the aerospace application, it was first necessary to consider the databus standard to be used on-engine for the distributed system. Several standards are used in aerospace. In the following sections, the most common databuses are introduced.

1.5.1 ARINC 429

The ARINC 429 databus is a digital broadcast databus developed by the Airlines Electronic Engineering Committee's (AEEC) Systems Architecture and Interfaces (SAI) subcommittee. The AEEC, which is sponsored by ARINC, released the first publication of the ARINC specification 429 in 1978. The ARINC 429 databus (Avionics Communication, 1995) is a unidirectional bus with only one transmitter, so transmission contention is not an issue. Another factor contributing to the simplicity of this protocol is that it was originally designed to handle "open loop" data transmission: there is no required response from the receiver when it accepts a transmission from the sender. This databus uses a word length of 32 bits and two transmission rates: low speed, defined as being in the range of 12 to 14.5 Kbit/s (Freer, 1989), and high speed, which is 100 Kbit/s. There are two modes of operation in the ARINC 429 bus protocol: character-oriented mode and bit-oriented mode. As ARINC 429 is a broadcast bus, the transmitter on the bus uses no access protocols. Out of the 32-bit word length, a typical usage of the bits would be as follows:

• eight bits for the label;
• two bits for the Source/Destination Identifier;
• twenty-one data bits;
• one parity bit.
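The word layout above can be sketched as follows (the exact bit positions of the fields are an assumption made for illustration; real ARINC 429 layouts vary with the equipment):

```python
# Hedged sketch of a 32-bit ARINC 429-style word: 8 label bits, 2 SDI bits,
# 21 data bits, and 1 odd-parity bit. Field ordering here is illustrative.

def make_word(label, sdi, data):
    word = (label & 0xFF) | ((sdi & 0x3) << 8) | ((data & 0x1FFFFF) << 10)
    # Odd parity: set bit 31 so that the total number of 1 bits is odd.
    if bin(word).count("1") % 2 == 0:
        word |= 1 << 31
    return word

def has_odd_parity(word):
    return bin(word).count("1") % 2 == 1

w = make_word(label=0o205, sdi=1, data=0x12345)
assert has_odd_parity(w)        # a receiving LRU processes odd-parity words
assert not has_odd_parity(w ^ 1)  # a single flipped bit is detected and ignored
```

This mirrors the parity behavior described below: a word received with odd parity is processed, and a word received with even parity is discarded.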
This databus has the advantage of simplicity; however, if the user needs more complicated protocols or communication structures, the data bandwidth is consumed rapidly. One characteristic used by ARINC 429 is the line replaceable unit (LRU) check that the number of words expected matches the number received. If the number of words does not match the expected number, the receiver notifies the transmitter within a specific amount of time. Parity checks use one bit of the 32-bit ARINC 429 data word. Odd parity was chosen as the accepted scheme for ARINC 429-compatible LRUs. If a receiving LRU detects odd parity in a data word, it continues to process that word. If the LRU detects even parity, it ignores the data word.

1.5.2 ARINC 629

ARINC 629-2 (1991) has a speed of 2 Mbit/s with two basic modes of protocol operation. One is the Basic Protocol (BP), where transmissions may be periodic or aperiodic. Transmission lengths are fairly constant but can vary somewhat without
1. Introduction to Network Communication
9
causing aperiodic operation if sufficient overhead is allowed. In the Combined Protocol (CP) mode, transmissions are divided into three scheduling levels:

• level 1 is periodic data (highest priority);
• level 2 is aperiodic data (mid-priority);
• level 3 is aperiodic data (lowest priority).
Level 1 data are sent first, followed by level 2 and level 3. Periodic data are sent in level 1 in a continuous stream until finished. Afterward, there should be time available for transmission of aperiodic data. The operation of transferring data from one LRU to one or more other LRUs occurs as follows:

• The terminal controller (TC) retrieves 16-bit parallel data from the transmitting LRU's memory.
• The TC determines when to transmit, attaches the data to a label, converts the parallel data to serial data, and sends them to the serial interface module (SIM).
• The SIM converts the digital serial data into an analogue signal and sends it to the current mode coupler (CMC) via the stub (twisted-pair cable).
• The CMC inductively couples the doublets onto the bus. At this point, the data are available to all other couplers on the bus.
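The level ordering of the Combined Protocol (level 1 before level 2 before level 3) can be sketched as a simple fixed-priority drain (queue contents are invented for illustration):

```python
# Minimal sketch of ARINC 629 Combined Protocol level ordering: all pending
# level-1 (periodic) data are sent first, then level-2, then level-3
# aperiodic data.

def drain(queues):
    """queues: dict mapping level -> list of messages; returns send order."""
    order = []
    for level in (1, 2, 3):            # fixed priority: level 1 > 2 > 3
        order.extend(queues.get(level, []))
    return order

tx = drain({1: ["rpm", "egt"], 3: ["maint"], 2: ["status"]})
assert tx == ["rpm", "egt", "status", "maint"]
```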
This protocol has three conditions that must be satisfied for proper operation: the occurrence of a transmit interval (TI), the occurrence of a synchronization gap (SG), and the occurrence of a terminal gap (TG). The TI defines the minimum period that a user must wait to access the bus. It is set to the same value for all users. In the periodic mode, it defines the update rate of every bus user. The SG is also set to the same value for all users and is defined as a bus quiet time greater than the largest TG value. Every user is guaranteed bus access once every TI period. The TG is a bus quiet time that corresponds to the unique address of a bus user. Once the number of users is known, the range of TG values can be assigned, and the SG and TI values can be determined. TI is given by Table 1.2.

Table 1.2. ARINC 629 time characteristics

TI6  TI5  TI4  TI3  TI2  TI1  TI0    BV    TI (ms)       TG (microseconds)
 0    0    0    0    0    0    0      0    0.5005625     Not used
 0    0    0    0    0    0    1      1    1.0005625     Not used
...  ...  ...  ...  ...  ...  ...   ...    ...           ...
 1    1    1    1    1    1    1    126    64.0005625    127.6875
To program the desired TG for each node, the user must follow Table 1.2 from TI6 to TI0, which represent the binary value (BV).
1.5.3 MIL-STD 1553b

Another commonly used databus is MIL-STD 1553b (Freer, 1989), a serial, time-division-multiplexed databus using screened twisted-pair cable to transmit data at 1 Mbit/s. Data are transmitted in 16-bit words with a parity bit and a 3-bit synchronization signal, so a whole word takes 20 microseconds to be transmitted. Transformer-coupled baseband signaling with Manchester encoding is employed. Three types of devices may be attached to the databus:

• Bus Controller (BC);
• Remote Terminal (RT);
• Bus Monitor (BM).
The use of MIL-STD-1553b in military aircraft has simplified the specification of interfaces between avionics subsystems and goes a long way toward producing off-the-shelf interoperability. Most avionics applications of this databus require a duplicated, redundant bus cable and bus controller to ensure continued system operation in case of a single bus or controller failure. MIL-STD-1553b is intended primarily for systems with central intelligence and intelligent terminals in applications where the data flow patterns are predictable. Information flow on the databus consists of messages, which are formed from three types of words (command, data, and status). The maximum amount of data that may be contained in a message is 32 data words, each word containing 16 data bits, one parity bit, and three synchronization bits. The bus controller only sends command words; their content and sequence determine which of the four possible data transfers must be undertaken:

• Point-to-point between controller and remote terminal;
• Point-to-point between remote terminals;
• Broadcast from controller;
• Broadcast from a remote terminal.
There are six formats for point-to-point transmissions:

• Controller to RT data transfer;
• RT to controller data transfer;
• RT to RT data transfer;
• Mode command without a data word;
• Mode command with data transmission;
• Mode command with data word reception;

and four broadcast transmission formats are specified:

• Controller to RT data transfer;
• RT to RT(s) data transfer;
• Mode command without a data word;
• Mode command with a data word.
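The word timing stated above can be checked with a short calculation (a sketch, using only the figures given in this section):

```python
# A 1553b word is 16 data bits + 1 parity bit + 3 sync bit-times
# = 20 bit-times at 1 Mbit/s, i.e., 20 microseconds per word.

BIT_RATE = 1_000_000           # bits per second
WORD_BITS = 16 + 1 + 3         # data + parity + synchronization

us_per_bit = 1e6 / BIT_RATE    # 1.0 microsecond per bit
word_time_us = WORD_BITS * us_per_bit
assert word_time_us == 20.0    # 20 microseconds per word, as stated

# A maximum-length message carries 32 data words:
max_data_time_us = 32 * word_time_us
assert max_data_time_us == 640.0
```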
This databus incorporates two main features for safety-critical systems: a predictable behavior based on its polling protocol and the use of bus controllers. These permit communication handling that avoids collisions on the databus. MIL-STD-1553b also defines a procedure for issuing a bus control transfer to the next potential bus controller, which can accept or reject control by using a bit in the returning status word. From this information, it can be concluded that MIL-STD-1553b is a very flexible databus. A drawback, however, is that the use of a centralized bus controller reduces transmission speed as well as reliability.

1.5.4 Controller Area Network Databus

This databus is based on CANbus, whose protocol covers only the bottom two layers of the OSI model and is comparatively simple. It is based on CSMA and was defined by Lawrenz (1997); its key real-time characteristics are discussed by Kopetz (1997). This databus has been modelled following several strategies, such as Markov chains and time delay models as in Nilsson (1998). The CANbus protocol is based on the protocol standard named Carrier Sense Multiple Access Collision Avoidance. A CAN word consists of six fields, as shown in Figure 1.3.

Field:           Arbitration  Control  Data Field  CRC  Ack  EOF Field
Number of bits:  11           6        0-64        16   2    7

Figure 1.3. Data word configuration from CANbus
This databus is a broadcast bus in which the data source may be transmitted periodically, sporadically, or on demand. Each data source is assigned a unique identifier, and the identifier serves as the priority of the message. The use of this identifier is the most important characteristic of CAN regarding real time. If a particular node needs to receive certain information, it indicates the identifier to the interface processor; only messages with valid identifiers are received and presented. The identifier field of a CAN message is used to control access to the bus after collisions by taking advantage of the recessive bit strategy. For instance, if multiple stations are transmitting concurrently and one station transmits a "0" (dominant) bit, then all stations monitoring the bus see a "0". When silence is detected, each node begins to transmit the highest priority message held in its queue. If a node sends a recessive bit as part of the message identifier but monitors the bus and sees a dominant bit, then a collision is detected. The node determines that the message it is transmitting is not the highest priority in the system, stops transmitting, and waits for the bus to become idle. It is important to recall that each message in CANbus has a unique identifier on which its priority is based. CAN, in fact, can resolve in a deterministic way any collision that could take place on the shared bus. When a collision occurs and the arbitration procedure is set off, all transmitting nodes immediately stop, except for the one that is sending the message with the highest priority (lowest numerical identifier). One perceived problem of CAN is the inability to bound message response times. From the observations above, the worst-case time from queuing the highest priority message to the reception of that message can be calculated easily: the longest time a node must wait for the bus to become idle is the longest time needed to transmit a message. According to Tindell et al.
(1995), the largest message (8 bytes) takes 130 microseconds to be transmitted. The CAN specification (ISO 11898) discusses only the physical and data link layers for a CAN network:
• The data link layer is the only layer that recognizes and understands the format of messages. This layer constructs the messages to be sent to the physical layer and decodes messages received from the physical layer. In CAN controllers, the data link layer is implemented in hardware. Because of its complexity and commonality with most other networks, it is divided into:
  – a logical link control (LLC) layer, which handles transmission and reception of data messages to and from other, higher level layers in the model;
  – a media access control (MAC) layer, which encodes and serializes messages for transmission and decodes received messages. The MAC also handles message prioritization (arbitration), error detection, and access to the physical layer.
• The physical layer specifies the physical and electrical characteristics of the bus, which includes the hardware that converts the characters of a message into electrical signals for transmitted messages and likewise the electrical signals into characters for received messages.
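The identifier-based bitwise arbitration described in this section can be illustrated with a small sketch (the node identifiers are invented; dominant bit = 0):

```python
# Sketch of CAN bitwise arbitration: nodes transmit their 11-bit identifiers
# bit by bit; a node that sends a recessive bit (1) but observes a dominant
# bit (0) on the bus backs off. The lowest numerical identifier (highest
# priority) therefore wins deterministically.

def arbitrate(identifiers, bits=11):
    contenders = list(identifiers)
    for i in range(bits - 1, -1, -1):          # most significant bit first
        sent = {(ident >> i) & 1 for ident in contenders}
        bus = min(sent)                         # wired-AND: 0 dominates
        contenders = [c for c in contenders if (c >> i) & 1 == bus]
    assert len(contenders) == 1                 # arbitration is deterministic
    return contenders[0]

assert arbitrate([0x155, 0x0A3, 0x3FF]) == 0x0A3   # lowest identifier wins
```

The losing nodes then retry once the bus becomes idle, which is why the worst-case blocking time equals one maximal message transmission (the 130 microseconds cited from Tindell et al.).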
1.6 Concluding Remarks

A general review of some of the most common databuses has been provided; in particular, a brief description of the OSI layers and their relation to data communication through these databuses was highlighted. This chapter gives an introduction to computer networks in order to understand the needs of real-time systems.
2 Real-Time Systems
2.1 Background

Nowadays, real-time systems have become a common issue in modeling the time performance of computer systems. As the approach followed in this book is to present how computer communication affects control law performance, it is necessary to understand how real-time systems can be modeled and measured. Several strategies comprise real-time systems. These strategies can be classified by two main aspects: the needs of real-time systems and their algorithms. The first aspect allows us to understand why real time is required under certain conditions, such as the presence of a fault and the respective fault tolerance issue; for other conditions, like clock synchronization, it is necessary to review real time to achieve a feasible communication performance. The second aspect is related to how scheduling algorithms focus on several factors that have an impact on system performance. This is reviewed in terms of consumption time and is accomplished through time diagrams. One of the most important issues for real-time systems is the construction of time diagrams to define system behavior under several scenarios. This strategy visualizes how the algorithm would perform with certain variations in time. Because this strategy provides a visualization of the system response, another issue arises: how valid is the scheduling configuration. This is known as schedulability analysis. Some other aspects, such as load balancing, task precedence, and synchronization, are also reviewed, providing an integral overview of modeling real-time systems and the repercussions of such an approach. This review of real-time systems gives a strong idea of how the control law is affected by time variations, which result from several conditions that are beyond the scope of this book. The important outcome of this review is how time delays can be modeled so that they can be taken into account in the control law strategy.
2.2 Overview

One of the main characteristics of real-time systems is determinism (Cheng, 2002) in terms of time consumption. This goal is achieved through several algorithms that take into account characteristics of the tasks as well as of the computer system on which they are going to be executed. Real-time systems are divided into two main approaches: monoprocessor and multiprocessor. These two are defined by different characteristics: the monoprocessor approach has the processor as its common resource, whereas the multiprocessor approach has the communication link as its common resource. The latter can be accessed through different communication approaches; for instance, the use of shared memory is common in high-performance computing systems, whereas the use of databuses is common in networked systems. The multiprocessor approach is the one followed in this work. There are two main sources of information that should be reviewed as an introduction to real-time systems: one by Kopetz (1997) and the other by Krishna et al. (1997). Both review several basic concepts that are integrated to give a coherent overview of real-time systems, such as fault tolerance strategies, the most common protocols, the most common clock synchronization algorithms, and some of the most useful performance measures. From this review, one of the most important needs of real-time systems is fault tolerance, because its performance evaluation is modified to cover abnormal situations. Fault tolerance is a key issue with many implications in different fields, such as the communication configuration and the structural strategy to accommodate failures. Most current strategies are based on the redundancy approach, which can be implemented in three different ways:

• hardware redundancy;
• software redundancy;
• time redundancy.
Hardware redundancy has a representation known as replication using voting algorithms, named N-modular redundancy (NMR). Figure 2.1 shows the basic structure of this type of approach, in which N redundant inputs x1, x2, ..., xN feed the voter. In this case, several strategies can be pursued.

Figure 2.1. N-modular redundancy approach
Different approaches to voting algorithms are defined to mask faults in a trustworthy manner. These are classified into two main groups: safe and reliable algorithms. The first group refers to those algorithms that produce a safe value when there is no consensus among the redundant measures. Alternatively, the second group produces a value even in the case of no consensus, so this latter approach is common when safety is not an issue. Some of the most common voting algorithms are presented next:

• majority voter;
• weighted average voter;
• median voter.
As an example of the safe algorithms, the majority voter is presented. This algorithm defines its output as one element of the largest group of inputs with the minimum difference. For instance, consider n inputs x1, ..., xn with a limit ε used to evaluate the difference d(xi, xj) between two inputs. A group g is formed by those inputs whose pairwise differences are lower than the limit ε. This voter can be defined as follows:

• The difference between two inputs is defined as d(xi, xj) = |xi − xj|;
• Two inputs xi and xj belong to a group gi if d(xi, xj) < ε.

… −> +25 deg. Elevons −30 −> +30 deg. Rudder −25 −> +25 deg. Leading edge flap −10 −> +30 deg. Airbrakes max 55 deg.
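The majority voter defined above can be sketched as follows (a hedged illustration, assuming that the safe value returned on no consensus is simply None):

```python
# Sketch of the majority voter: inputs are grouped by pairwise distance
# d(xi, xj) = |xi - xj| < eps, and the output is taken from the largest
# group; with no strict majority, a safe value is produced instead.

def majority_voter(inputs, eps, safe_value=None):
    groups = []
    for x in inputs:
        # group of all inputs within eps of x
        g = [y for y in inputs if abs(x - y) < eps]
        groups.append(g)
    best = max(groups, key=len)
    # require a strict majority of the redundant inputs to agree
    if len(best) * 2 > len(inputs):
        return best[0]
    return safe_value           # no consensus: output the safe value

assert majority_voter([10.02, 10.01, 13.7], eps=0.1) == 10.02
assert majority_voter([1.0, 5.0, 9.0], eps=0.1) is None
```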
The body-fixed frame is located at the aircraft's center of gravity. The signs of all control surface deflections, with one exception, follow the right-hand rule. The exception, signwise, is the leading edge flap. The airbrakes deflect into the free stream up to the maximum setting angle. The geometry reference data used to convert force and moment coefficients into forces and moments are given in a file named ad_coeff.const that is included in the aerodata model. The aircraft is defined in a mainframe where all x- and z-coordinates are positive. Aerodata are defined in another frame, where the aerodynamic forces and moments are also determined. For the generic aerodata model, the reference point x is determined to be 25% of the Mean Aerodynamic Chord (MAC), which is calculated in the usual manner. In practice, though, no absolute coordinates are needed, because the transformation of forces and moments is established by the user to determine the position of the aircraft center of gravity (c.g.) and thus obtain the proper deviation to correctly transform the aerodata. Hinge moments are calculated in a straightforward manner, using the correct reference area and chord for each control surface. The engine model contains data in two 2-dimensional tables describing the engine thrust: one with the afterburner activated and the other without it. The engine model is scaled so that the ratio between the static thrust and the maximum takeoff weight of the aircraft correlates to the value of similar modern aircraft. The available control actuators in the ADMIRE model are:

• Left canard (δlc);
• Right canard (δrc);
• Left outer elevon (δloe);
• Left inner elevon (δlie);
• Right inner elevon (δrie);
• Right outer elevon (δroe);
• Leading edge flap (δle);
• Rudder (δr);
• Landing gear (δldg);
• Horizontal thrust vectoring (δth);
• Vertical thrust vectoring (δtv).
The leading edge flap, landing gear, and thrust vectoring are not used in the Flight Control System (FCS). The sign of the actuator deflections follows the right-hand rule, except for the leading edge flap, which has a positive deflection down (Figure 5.18).

Figure 5.18. Definition of the control surface deflections (roll, pitch, and yaw axes for the canard and delta wings, showing surfaces δlc, δrc, δle, δr, δloe, δlie, δrie, and δroe)
From this example, control definition becomes difficult to achieve due to the inherent nonlinear behavior of the case study. Two different control laws are designed: a fuzzy control law and the current control law. To define the communication network performance, the TrueTime network simulator is used. This strategy achieves network simulation based on message transactions and is built on the real-time toolbox from MATLAB. Extended information on this tool is available in (True Time, 2003); the main TrueTime characteristics are shown next. In the TrueTime model, computer and network blocks are introduced, as shown in Figure 5.19.
112
Reconfigurable Distributed Control
Figure 5.19. Basic model of true time
These blocks are event driven, and the scheduling algorithm is managed by the user independently in each computer block. TrueTime provides basically two simulation blocks, developed by the Department of Automatic Control, Lund Institute of Technology, Sweden. Each kernel block represents the interface between the actual dynamical model and the network simulation. Here, continuous simulation and digital conversion take place to transmit information through the network. This tool provides the necessary interrupts to simulate delay propagation as well as synchronization within the network. The final configuration based on the TrueTime and ADMIRE models is presented in Figures 5.20, 5.21, and 5.22, where the actual model is modified to integrate network simulation. Figure 5.20 integrates the CAN network with three elements: sensors, actuators, and controllers (Figures 5.21 and 5.22).
Figure 5.20. Network control integrated to ADMIRE model
5. Case Study 113
Figure 5.21. Control strategy after network integration
Figure 5.22. Sensor interface of network integration
On the other hand, the network control module is modified to add several control laws and the related switching module as shown in Figure 5.23.
Figure 5.23. Multiple control selection for current network control
A time graph related to communication performance is shown in Figure 5.24, where CANbus behavior is followed because it is a reliable and fast protocol that has been studied by different groups for aircraft implementations.
114
Reconfigurable Distributed Control
Figure 5.24. General time graph of network control (message flow from the sensor block to the control block to the actuator block over time)
As this configuration is pursued, the proposed scheduler is EDF (earliest deadline first), where reconfiguration takes place when communication modifies system performance due to the presence of a fault. In that respect, the elements chosen for the fault scenarios are sensors and actuators. The fault tolerance strategy is fault masking based on primitive and safe voting algorithms, such as majority voters. This strategy allows the appearance of new elements during fault scenarios, as shown in Figure 5.25.
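The EDF policy mentioned above can be sketched as follows (task names and deadlines are invented for illustration): the scheduler always dispatches the ready task with the earliest absolute deadline.

```python
# Minimal earliest-deadline-first (EDF) dispatch sketch: among ready tasks,
# the one with the earliest absolute deadline is served first.
import heapq

def edf_order(tasks):
    """tasks: list of (deadline_ms, name); returns names in dispatch order."""
    heap = list(tasks)
    heapq.heapify(heap)                 # min-heap keyed on the deadline
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

order = edf_order([(40, "actuator"), (20, "sensor"), (30, "control")])
assert order == ["sensor", "control", "actuator"]
```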
Figure 5.25. General time graph of network control during a local fault scenario (a reconfiguration block appears between the sensor block and the control block)
As both scenarios are stated, time variables are defined based on consumption, communication, and several other time delays, as presented in Table 5.4.

Table 5.4. Time delay figures from several scenarios

Sensor consumption time                         30 ms
Sensor–controller communication time delay      20 ms
Controller–actuator communication time delay    20 ms
Actuator consumption time delay                 30 ms
Reconfiguration time delay                      20–40 ms
Fault tolerance time delay                      30 ms
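Using the figures in Table 5.4, a rough end-to-end latency budget for one control cycle can be sketched (composing the delays by simple addition is an illustrative assumption, not a formula from this chapter):

```python
# Rough sensor-to-actuator latency budget from Table 5.4, taking the worst
# case (40 ms) for the reconfiguration entry.

delays_ms = {
    "sensor_consumption": 30,
    "sensor_controller_comm": 20,
    "controller_actuator_comm": 20,
    "actuator_consumption": 30,
}
fault_free = sum(delays_ms.values())
assert fault_free == 100                  # ms per loop, fault-free scenario

reconfig_worst = fault_free + 40 + 30     # + reconfiguration + fault tolerance
assert reconfig_worst == 170              # ms in the worst fault scenario
```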
The problem is bounded in terms of time delays for both possible scenarios and one operating point (0.6 Mach and 6000 meters altitude); the next step is to define suitable control laws for the related scenarios. In this case, the fault-free scenario is handled by the current control law, designed for the nominal dynamics of the system. The second scenario is handled by a fuzzy logic control law, which is trained on the most suitable system responses under fault and time delay conditions; the fuzzy logic control (FLC) rule structure is presented next:

if var1 is x1 and var2 is x2 and var3 is x3 and var4 is x4 and var5 is x5 and var6 is x6 then X1 = A X1 + B U
if var1 is x1' and var2 is x2' and var3 is x3' and var4 is x4' and var5 is x5' and var6 is x6' then X1 = A' X1 + B' U

Figure 5.26. Fuzzy logic approach
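The rules in Figure 5.26 are of the Takagi–Sugeno type: each rule contributes a local linear model, and the firing strengths of the rules blend them. A minimal one-variable sketch (membership functions and the A, B parameters are invented for illustration; the actual controller conditions on six variables):

```python
# Takagi-Sugeno style blending sketch for rules of the form
#   if vars match cluster i then x_next = A_i * x + B_i * u
# Firing strengths w_i come from (invented) Gaussian memberships.
import math

def firing(var, center, sigma=1.0):
    return math.exp(-((var - center) ** 2) / (2 * sigma ** 2))

def ts_step(x, u, rules):
    """rules: list of (center, A, B); returns the blended next state."""
    weights = [firing(x, c) for c, _, _ in rules]
    total = sum(weights)
    return sum(w * (A * x + B * u) for w, (_, A, B) in zip(weights, rules)) / total

# two local models clustered around x = 0 and x = 5 (made-up parameters)
rules = [(0.0, 0.9, 0.1), (5.0, 0.7, 0.3)]
x1 = ts_step(x=0.0, u=1.0, rules=rules)
assert 0.0 < x1 < 0.3       # dominated by the first rule, whose B*u = 0.1
```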
FLC is made feasible by the use of fuzzy clustering to define possible control variations, such as several time delays and local faults. The system response for both cases under different time delays is presented next, where several variables are reported (Table 5.5).

Table 5.5. Variable names involved in case study

Name of the Variable                 Variable
Body fixed velocity in x axis        u (m/s)
Body fixed velocity in y axis        v (m/s)
Body fixed velocity in z axis        w (m/s)
Body fixed roll rate                 p (deg/s)
Body fixed pitch rate                q (deg/s)
Body fixed yaw rate                  r (deg/s)
Aircraft velocity                    Vt (m/s)
Angle of attack                      alpha (deg)
Angle of sideslip                    beta (deg)
Climb angle                          gamma (deg)
Canard deflection                    (deg)
Elevon deflection                    (deg)
For the fault-free scenario, the system response is shown in Figures 5.27, 5.28, 5.29, and 5.30, where the time delays from the sensor vector are 30 ms for processing and 20 ms for communication.
Figure 5.27. Velocity response during fault-free scenario (u, v, and w vs. time)
Figure 5.28. Different fixed roll positions considering the fault-free scenario (p, q, and r vs. time)
Figure 5.29. Several angles considering the fault-free scenario (Vt, alpha, and beta vs. time)
Figure 5.30. Elevon deflection during the fault-free scenario (gamma, canard deflection, and elevon deflection vs. time)
Taking into account the fault-free scenario with different time delays, the system response is shown in Figures 5.31, 5.32, 5.33, and 5.34. In this case, the time delays increase in the sensor vector and the control node. These delays are up to
25 ms between communication nodes and 40 ms in consumption time within the sensor vector.
Figure 5.31. Velocity response for the second fault-free scenario (u, v, and w vs. time)
Figure 5.32. Different fixed roll positions for the second fault-free scenario (p, q, and r vs. time)
Figure 5.33. Several angles considering the fault-free scenario (Vt, alpha, and beta vs. time)
Figure 5.34. Elevon deflection for the second fault-free scenario (gamma, left/right canard deflections, and outer/inner elevon deflections vs. time)
Figures 5.35–5.38 consider the fault scenario on the sensor related to the v signal and the related sensors, with the time delay system taking into account system
reconfiguration between the nominal control and the fuzzy logic control response. The time delays of 40 ms for the faulty sensor and 30 ms for the EDF scheduler take into account the communication time of the sensor vector, the fault tolerance modules, the control node, and the respective communication time delays.
Figure 5.35. Velocity response for the first fault scenario (u, v, and w vs. time)
Figure 5.36. Different fixed roll positions for the first fault scenario (p, q, and r vs. time)
Figure 5.37. Several angles considering the first fault scenario (Vt, alpha, and beta vs. time)
Figure 5.38. Elevon deflection for the first fault scenario (gamma, canard deflections, and elevon deflections vs. time)
Consider a fault scenario that involves one local fault in one sensor that is redundant. In this case, the w response with the time delay system taking into account system reconfiguration between nominal control and fuzzy logic control
response is shown in Figures 5.39–5.42. Time delays of 50 ms and EDF of 60 ms take into account a communication time of 20 ms.
Figure 5.39. Velocity response for the second fault scenario (u, v, and w vs. time)
[Figure 5.40. Different fixed roll positions for the second fault scenario: p, q, and r [deg/s] versus time [s].]
[Figure 5.41. Several angles considering the second fault scenario: Vt [m/s], alpha [deg], and beta [deg] versus time [s] (ADMIRE simulation data from the RTW-generated program).]
[Figure 5.42. Elevon deflection for the second fault scenario: gamma [deg], canard deflection [deg] (right and left), and elevon deflection (outer right, inner right, inner left, outer left) versus time [s].]
Figures 5.43–5.46 consider a third fault scenario, again with the time delay system taking into account system reconfiguration between the nominal control and the fuzzy logic control response. Time delays of 60 ms on the sensor vector and for the EDF take into account a communication time of 40 ms.
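The reconfiguration between nominal control and fuzzy logic control amounts to a supervised switch between two control laws. The sketch below illustrates only the switching idea: the proportional gain and the three-rule fuzzy table are hypothetical placeholders, not the controllers used in the ADMIRE simulation.

```python
# Supervised switch between a nominal law and a (trivial) fuzzy fallback.
# Both laws are hypothetical placeholders for illustration.

def nominal_control(error):
    return 2.0 * error  # placeholder linear law

def fuzzy_control(error):
    # Tiny three-rule table: negative / near-zero / positive error
    if error < -0.1:
        return -0.5
    if error > 0.1:
        return 0.5
    return 0.0

def control(error, fault_detected):
    """Decision maker: switch to the fuzzy law once a fault is confirmed."""
    law = fuzzy_control if fault_detected else nominal_control
    return law(error)

print(control(0.2, fault_detected=False))  # 0.4  (nominal law)
print(control(0.2, fault_detected=True))   # 0.5  (fuzzy fallback)
```

The point of the structure is that the switching condition, not the control laws themselves, carries the reconfiguration logic, which is why the time delays of the fault detection path matter.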
[Figure 5.43. Velocity response for the third fault scenario: u, v, and w [m/s] versus time [s] (ADMIRE simulation data from the RTW-generated program).]
[Figure 5.44. Different fixed roll positions for the third fault scenario: p, q, and r [deg/s] versus time [s].]
[Figure 5.45. Several angles considering the third fault scenario: Vt [m/s], alpha [deg], and beta [deg] versus time [s] (ADMIRE simulation data from the RTW-generated program).]
[Figure 5.46. Elevon deflection for the third fault scenario: gamma [deg], canard deflection [deg] (right and left), and elevon deflection (outer right, inner right, inner left, outer left) versus time [s].]
5.3 Conclusions
From these case studies, it is possible to conclude that reconfigurable control raises two main issues, one structural and one dynamic. The approach pursued in these examples is a reconfiguration based on a decision-maker strategy that acts when certain conditions are reached. Other approaches are feasible, such as modifying control parameters upon fault detection; strategies like generalized predictive control allow this goal to be pursued, and strategies like adaptive control in principle allow internal parameters to be modified. The latter, however, becomes impractical because fault scenarios eliminate the local elements involved, which degrades the model of the plant. Both examples presented in this work show that reconfigurable control can be developed as a combination of several techniques, such as fault diagnosis, network control, intelligent control, and structural analysis. The use of these techniques opens an area in which dynamical models are not enough if structural information is not available; in that respect, formal models such as finite state machines provide an opportunity to accomplish this integration.
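The remark on finite state machines can be made concrete: the decision-maker logic is naturally written as an explicit state machine whose transitions encode when reconfiguration is allowed. The states and events below are an illustrative reading of the strategy described in this chapter, not the exact implementation.

```python
# Reconfiguration logic as an explicit finite state machine.
# States and events are illustrative assumptions, not the authors' code.

TRANSITIONS = {
    ("nominal",      "fault_detected"): "diagnosing",
    ("diagnosing",   "fault_isolated"): "reconfigured",
    ("diagnosing",   "false_alarm"):    "nominal",
    ("reconfigured", "fault_cleared"):  "nominal",
}

def step(state, event):
    """Return the next state; undefined (state, event) pairs keep the state."""
    return TRANSITIONS.get((state, event), state)

state = "nominal"
for event in ["fault_detected", "fault_isolated", "fault_cleared"]:
    state = step(state, event)
print(state)  # nominal (the system returns once the fault is cleared)
```

Structuring the supervisor this way separates the structural question (which transitions exist) from the dynamic one (how each control law behaves inside a state), which is exactly the integration the conclusions call for.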
References
Admire, http://www.ffa.se/admire, 2003. Agre J., and Clare, L.; “An Integrated Architecture for Cooperative Sensing Networks”; IEEE Computer, 1999. Akbaryan F., and Bishnoi P.; “Fault Diagnosis of Multivariate Systems Using Pattern Recognition and Multisensor Data Analysis Technique”; Computers and Chemical Engineering, Vol. 25, pp. 1313-1339, 2001. Almeida L., Pedreiras P., and Fonseca J. A.; “The FTT-CAN Protocol: Why and How”; IEEE Transactions on Industrial Electronics, Vol. 49, No. 6, pp. 1189-1201, 2002. Almeida, L., Pasadas, R., and Fonseca, J.A.; “Using a Planning Scheduler to Improve the Flexibility of Real-Time Fieldbus Networks”; Control Engineering Practice, Vol. 7, pp. 101-108, 1999. Altisen, K., Gossler, G., and Sifakis, J.; “Scheduler Modeling Based on the Controller Paradigm”; Real-Time Systems, Vol. 23, No. 1-2, pp. 55-84, 2002. Alves, R., and García, M.A.; “Communications in Distributed Control Environment with Dynamic Configuration”; IFAC 15th Triennial World Congress, Spain, 2002. ARINC, S., 629-2; “ARINC Multitransmitter Databus Part 1: Technical Description”; Published by Aeronautical Radio Inc., USA, 1991. Arzen, K., Bernhardsson, B., Eker, J., Cervin, A., Persson, P., Nilsson, K., and Sha, L.; “Integrated Control and Scheduling”; Department of Automatic Control Lund Institute of Technology, ISSN 0280-5316, August 1999. Auslander, D.M.; “What is Mechatronics?”; IEEE/ASME Transactions Mechatronics, Vol. 1, No. 1, pp. 54-66, 1996. Avionics, Communications; “Principles of Avionics Databuses”; Editorial Staff of Avionics Communications, USA, 1995. Ballé, P., Fischer, M., Füssel, D., Nelles, O., and Isermann, R.; “Integrated Control Diagnosis and Reconfiguration of a Heat Exchanger”; IEEE Control Systems Magazine, Vol. 18, No. 3, 1998. Beilharz, J., and Filbert, D.; “Using the Functionality of PWM Inverters for Fault Diagnosis of Induction Motors”; IFAC Symposium on Fault Detection,
Supervision and Safety for Technical Processes SAFEPROCESS’97, Vol. 1, pp. 246-251, 1997. Benchaib, A.H., and Rachid, A.; “Sliding Mode Flux Observer for an Induction Motor with Rotor Resistance Adaptation”; IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes SAFEPROCESS’97, Hull, UK, Vol. 1, pp. 258-263, 1997. Benítez-Pérez, H.; “Smart Distributed Systems”; PhD. Thesis, Department of Automatic Control and System Engineering, University of Sheffield, UK, 1999. Benítez-Pérez, H., Hargrave, S., Thompson, H., and Fleming, P.; “Application of Parameters Estimation Techniques to a Smart Fuel Metering Unit”; IFAC Symposium on Fault Detection Supervision and Safety for Technical Processes, SAFEPROCESS, pp. 1092-1097, 2000. Benítez-Pérez, H., and Rendon-Acevedo, P.; “Fuzzy Classification of Faults in a Non-linear Actuators”; 4th IFAC Workshop on Online Fault Detection and Supervision in the Chemical Process Industries, pp. 317-321, 2001. Benítez-Pérez, H., Thompson, H., and Fleming, P.; “Implementation of a Smart Sensor using Analytical Redundancy Techniques”; IFAC Symposium on Fault Detection Supervision and Safety for Technical Processes, SAFEPROCESS, Vol. 1, pp. 498-503, 1997. Benítez-Pérez, H., and García-Nocetti, F.; “Reconfigurable Distributed Control using Smart Peripheral Elements”; Control Engineering Practice, Vol. 11, pp. 975-988, 2003. Blanke, M., Kinnaert, M., Lunze, J., and Staroswiecki, M.; “Diagnosis and Fault Tolerant Control”; Springer, 2003. Blanke, M., Nielsen, S., and Jorgensen, R.; “Fault Accommodation in Feedback Control Systems”; Lecture Notes in Computer Science, Hybrid Systems, Springer-Verlag, No. 376, pp. 393-425, 1993. Blanke, M., Nielsen, S., Jorgensen, R., and Patton, R.J.; “Fault Detection and Accommodation for a Diesel Engine Actuator - a Benchmark”; IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes, SAFEPROCESS’94, Finland, pp. 498-506, 1994.
Bodson, M., and Groszkiewicz, J.E.; “Multivariable Adaptive Algorithms for Reconfigurable Flight Control”; IEEE Transactions on Control Systems Technology, Vol. 5, No. 2, pp. 217-229, 1997. Brandt, S., and Nutt, G.; “Flexible Soft Real-Time Processing in Middleware”; Real-Time Systems, Kluwer Academic Publishers, No. 22, pp. 77-118, 2002. Brasileiro, F.V., Ezhilchelvan, P.D., Shrivastava, S.K., Speirs, N.A., and Tao, S.; “Implementing Fail-silent Nodes for Distributed Systems”; IEEE Transactions on Computers, Vol. 45, No. 11, pp. 1226-1238, 1996. Browne, A.; “Automating the Development of Real-time Control Systems Software”; PhD. Thesis, Department of Automatic Control and Systems Engineering, University of Sheffield, UK, 1996. Buttazzo, G.; “Hard Real-Time Computing Systems”; Kluwer Academic Publishers, 2004. Camacho, E., and Bordons, C.; “Model Predictive Control”; Springer-Verlag, 1999.
Campbell, S., and Nikoukhah, R.; “Auxiliary Signal Design for Failure Detection”; Princeton Series in Applied Mathematics, 2004. Caughey, S.J., and Shrivastava, S.K.; “Architectural Support for Mobile Objects in Large Scale Distributed Systems”; Proc. 4th IEEE Int. Workshop on Object-Orientation in Operating Systems (IWOOS), pp. 38-47, Lund, Sweden, 1995. Cervin, A., Henriksson, D., Lincoln, B., Eker, J., and Arzén, K.; “How Does Control Timing Affect Performance?”; IEEE Control Systems Magazine, Vol. 23, pp. 16-30, 2003. Chandler, P.; “System Identification for Adaptive and Reconfigurable Control”; Journal of Guidance, Control and Dynamics, Vol. 18, No. 3, pp. 516-524, 1995. Chen, J., and Patton, R.; “Robust Model-Based Fault Diagnosis for Dynamic Systems”; Kluwer Academic Press, 1999. Cheng, A.; “Real-Time Systems: Scheduling, Analysis and Verification”; Wiley-Interscience, 2002. Chiang, L., Russell, E., and Braatz, R.; “Fault Detection and Diagnosis in Industrial Systems”; Springer-Verlag, Great Britain, 2001. Clarke, D., Mohtadi, C., and Tuffs, P.; “Generalized Predictive Control Part I. The Basic Algorithm”; Automatica, Vol. 23, No. 2, pp. 137-148, 1987a. Clarke, D., Mohtadi, C., and Tuffs, P.; “Generalized Predictive Control Part II. Extensions and Interpretations”; Automatica, Vol. 23, No. 2, pp. 149-160, 1987b. Clarke, D., Mohtadi, C., and Tuffs, P.; “Properties of Generalized Predictive Control”; Automatica, Vol. 25, No. 6, pp. 859-875, 1989. Coulouris, G., Dollimore, J., and Kindberg, T.; “Distributed Systems”; Addison Wesley, 1994. Devillers, R., and Goossens, J.; “Liu and Layland's Schedulability Test Revisited”; Information Processing Letters, Vol. 73, pp. 157-161, 2000. Driankov, D., Hellendoorn, H., and Reinfrank, M.; “An Introduction to Fuzzy Control”; Springer-Verlag, 1993. Ferree, S.R.; “Sensors−Simple to Smart to System”; International Journal of Instrumentation and Control (INTECH), Vol. 38, No. 11, pp. 24-25, 1991.
Flexicon, http://www.control.lth.se/FLEXCON/, 2003. Frank, T., Kraiss, K.F., and Kuhlen, T.; “Comparative Analysis of Fuzzy ART and ART-2A Network Clustering Performance”; IEEE Transactions on Neural Networks, Vol. 9, No. 3, May 1998. Freer, J.; “Computer Communication Networks”; Addison-Wesley, 1989. Gertler, J., and Kunwer, M.; “Optimal Residual Decoupling for Robust Fault Diagnosis”; International Journal of Control, Vol. 62, No. 2, pp. 395-421, 1995. Gertler, J.; “Fault Detection and Diagnosis in Engineering Systems”; Marcel Dekker, 1998. Gill, C., Levine, D., and Schmidt, D.; “The Design and Performance of a Real-Time CORBA Scheduling Service”; Real-Time Systems, Kluwer Academic Publishers, No. 21, pp. 117-154, 2001.
Gudmundsson, D., and Goldberg, K.; “Tuning Robotic Part Feeder Parameters to Maximize Throughput”; Assembly Automation, MCB University Press, Vol. 19, No. 3, pp. 216-221, 1999. Halevi, Y., and Ray, A.; “Integrated Communication and Control Systems: Part I - Analysis”; Journal of Dynamic Systems, Measurement and Control, Vol. 110, pp. 367-373, 1988. Hassoum, H.; “Fundamentals of Artificial Neural Networks”; Massachusetts Institute of Technology, 1995. Hong, P., Kim, Y., Kim, D., and Kwon, W.; “A Scheduling Method for Network-Based Control Systems”; IEEE Transactions on Control Systems Technology, Vol. 10, No. 3, pp. 318-330, 2002. Hong, S.H.; “Scheduling Algorithm of Data Sampling Times in the Integrated Communication and Control Systems”; IEEE Transactions on Control Systems Technology, Vol. 3, pp. 225-231, 1995. Höppner, F., Klawonn, F., Kruse, R., and Funkler, T.; “Fuzzy Cluster Analysis”; John Wiley and Sons, 2000. IEEE, TAC; “Special Issue on Networked Control Systems”; IEEE Transactions on Automatic Control, Vol. 49, No. 9, 2004. Isermann, R., and Raab, U.; “Intelligent Actuators - Ways to Autonomous Actuating Systems”; Automatica, Vol. 29, No. 5, pp. 1315-1332, 1993. Jämsä-Jounela, S., Vermasvouri, M., Endén, P., and Haavisto, S.; “A Process Monitoring System Based on the Kohonen Self-Organizing Maps”; Control Engineering Practice, Vol. 11, pp. 83-92, 2003. Janseen, K., and Frank, P.M.; “Component Failure Detection Via State Estimation”; Preprints IFAC 9th World Congress, Budapest, Hungary, Vol. 1, pp. 147-152, 1984. Jitterbug, http://www.control.lth.se/~lincoln/jitterbug/, 2003. Johannessen, S.; “Time Synchronization in a Local Area Network”; IEEE Control Systems Magazine, Vol. 24, No. 2, pp. 61-69, 2004. Johnson, B.; “Design and Analysis of Fault Tolerant Digital Systems”; Addison Wesley, 1989. Jolliffe, I.T.; “Principal Component Analysis”; Springer-Verlag, 2002.
Kanev, S., and Verhaegen, M.; “Reconfigurable Robust Fault-Tolerant Control and State Estimation”; IFAC 15th Triennial World Congress, 2002. Klir, G., and Yuan, B.; “Fuzzy Sets and Fuzzy Logic”; Prentice-Hall, 1995. Koenig, D., Nowakowski, S., and Cecchin, T.; “An Original Approach for Actuator and Component Fault Detection and Isolation”; IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes SAFEPROCESS’97, Hull, UK, Vol. 1, pp. 95-105, 1997. Kohonen, T.; “Self-Organization and Associative Memory”; Springer-Verlag, Berlin, Germany, 1989. Kopetz, H., and Oschenreiter, W.; “Clock Synchronization in Distributed Real-Time Systems”; IEEE Transactions on Computers, Vol. 36, No. 8, pp. 930-940, 1987. Kopetz, H.; “A Solution to an Automotive Control System Benchmark”; Proceedings Real-Time Systems Symposium, IEEE Computer Society Press, California, USA, pp. 154-158, 1994.
Kopetz, H.; “Real-Time Systems”; Kluwer Academic Publishers, 1997. Koppenhoefer, S., and Decotignie, J.; “Formal Verification for Distributed Real-Time Control Periodic Producer/Consumer”; IEEE Conference on Engineering of Complex Computer Systems, pp. 230-238, 1996. Krishna, C., and Shin, K.; “Real-Time Systems”; McGraw-Hill, 1997. Krtolica, R., Ozgüner, Ü., Chan, H., Göktas, H., Winkelman, J., and Liubakka, M.; “Stability of Linear Feedback Systems with Random Communication Delays”; International Journal of Control, Vol. 59, No. 4, pp. 925-953, 1994. Krueger, C.W.; “Software Reuse”; ACM Computing Surveys, Vol. 24, No. 2, pp. 131-183, June 1992. Kubota, H., Matsuse, K., and Nakano, T.; “DSP-Based Speed Adaptive Flux Observer of Induction Motor”; IEEE Transactions on Industrial Applications, Vol. 29, No. 2, pp. 344-348, 1993. Lapeyre, F., Habmelouk, N., Zolghadri, A., and Monsion, M.; “Fault Detection in Induction Motors via Parameter Estimation Techniques”; IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes SAFEPROCESS’97, Hull, UK, Vol. 1, pp. 270-275, 1997. Lawrenz, W.; “CAN Systems Engineering from Theory to Practical Applications”; Springer-Verlag, 1997. Lee, D., Thompson, H.A., and Bennett, S.; “PID Control for a Distributed System with a Smart Actuator”; Digital Control: Past, Present and Future of PID Control (PID'00), Proceedings of the IFAC Workshop, pp. 499-504, 2000. Lee, K., Lee, S., and Lee, M.; “Remote Fuzzy Logic Control of Networked Control System via Profibus-DP”; IEEE Transactions on Industrial Electronics, Vol. 50, No. 4, pp. 784-792, 2003. Lian, F., Moyne, J., and Tilbury, D.; “Network Design for Distributed Control Systems”; IEEE Transactions on Control Systems Technology, Vol. 10, No. 2, pp. 297-307, 2002. Lincoln, B., and Cervin, A.; “Jitterbug: A Tool for Analysis of Real-Time Control Performance”; 41st IEEE Conference on Decision and Control, Vol. 2, pp. 1319-1324, 2002.
Linkens, M., and Nie, J.; “Learning Control using Fuzzified Self Organizing Radial Basis Functions Networks”; IEEE Transactions on Fuzzy Systems, No. 4, pp. 280-287, 1993. Liou, L., and Ray, A.; “A Stochastic Regulator for Integrated Communication and Control Systems: Part I - Formulation of Control Law”; Journal of Dynamic Systems, Measurement, and Control, Vol. 113, pp. 604-611, 1991. Liu, C.L., and Layland, J.W.; “Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment”; Journal of the Association for Computing Machinery, Vol. 20, pp. 46-61, 1973. Liu, J.; “Real-Time Systems”; Prentice Hall, 2000. Livani, M.A., and Kaiser, J.; “EDF Consensus on CAN Bus Access for Dynamic Real-time Applications”; Lecture Notes in Computer Science, Springer-Verlag, Berlin, Edited by Frantisek Plasil and Keith G. Jeffery, Vol. 1388, pp. 1088-1097, 1998. Livani, M.A., Kaiser, J., and Jia, W.J.; “Scheduling Hard and Soft Real-Time Communication in the Controller Area Network (CAN)”; IFAC Workshop on Real-Time Programming, pp. 13-18, 1998. Ljung, L.; “Asymptotic Behavior of the Extended Kalman Filter as a Parameter Estimator for Linear Systems”; IEEE Transactions on Automatic Control, Vol. AC-24, No. 1, pp. 36-50, 1979. Lönn, H.; “Synchronization and Communication Results in Safety Critical Real-Time Systems”; PhD. Thesis, Department of Computer Engineering, Chalmers University of Technology, Sweden, 1999. Mahmoud, M., Jiang, J., and Zhang, Y.; “Active Fault Tolerance Control Strategies”; Lecture Notes in Control and Information Science, Springer, 2003. Malmborg, J.; “Analysis and Design of Hybrid Control Systems”; PhD. Thesis, Department of Automatic Control, Lund Institute of Technology, Sweden, 1998. Mangoubi, R.S., and Edelanager, M.; “Model-Based Fault Detection: The Optimal Past, The Robust Present and a Few Thoughts on the Future”; 4th IFAC Symposium SAFEPROCESS, Vol. 1, pp. 65-76, 2000. Masten, M.; “Electronics: The Intelligence on Intelligent Control”; IFAC Symposium on Intelligent Component and Instrument for Control Applications, pp. 1-11, 1997. Mediavilla, M., and Pastora-Vega, L.; “Isolation of Multiplicative Faults in the Industrial Actuator Benchmark”; IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes SAFEPROCESS’97, Hull, UK, Vol. 2, pp. 855-860, 1997. Merrill, W.C.; “Sensor Failure Detection for Jet Engines Using Analytical Redundancy”; Journal of Dynamic Systems, Measurement and Control, Vol. 8, No. 6, pp. 673-682, 1985. Misra, M., Yue, H., Qin, S., and Ling, C.; “Multivariable Process Monitoring and Fault Diagnosis by Multi-Case PCA”; Computers and Chemical Engineering, Vol. 26, pp. 1281-1293, 2002.
Mitra, S., and Pal, K.; “Neuro-Fuzzy Pattern Recognition”; Wiley Series, 1999. Monfared, M.A.S., and Steiner, S.J.; “Fuzzy Adaptive Scheduling and Control Systems”; Fuzzy Sets and Systems, No. 115, pp. 231-246, 2000. Moya, E., Sainz, G., Grande, B., Fuente, M., and Peran, J.; “Neural PCA Based Fault Diagnosis”; Proceedings of the European Control Conference, pp. 809-813, 2001. Nara, K., Mishima, Y., and Satoh, T.; “Network Reconfiguration for Loss Minimization and Load Balancing”; IEEE Power Engineering Society General Meeting, Vol. 4, pp. 2413-2418, 2003. Nelles, O.; “Non-Linear Systems Identification”; Springer-Verlag, 2001. Nikolakopoulos, G., and Tzes, A.; “Reconfigurable Internal Model Control Based on Adaptive Lattice Filtering”; Mathematics and Computers in Simulation, Vol. 20, pp. 303-314, 2002.
Nilsson, J.; “Real-Time Control Systems with Delays”; PhD. Thesis, Department of Automatic Control, Lund Institute of Technology, Sweden, 1998. Oehler, R., Schoenhoff, A., and Schreiber, M.; “On-Line Model-Based Fault Detection and Diagnosis for a Smart Aircraft Actuator”; IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes SAFEPROCESS’97, Hull, UK, Vol. 2, pp. 591-596, 1997. Olbrich, T., and Richardson, A.; “Integrated Test Support for Intelligent Sensors”; IEE Colloquium on Intelligent Systems at the University of Leicester, Reference number 1996/261, pp. 1/1-1/7, UK, 1996a. Olbrich, T., Bradley, P.A., and Richardson, A.M.; “BIST for Microsystems as a Contributor to System Quality and Engineering”; Special Issue on Quality Effort in Europe, Vol. 8, No. 4, pp. 601-613, 1996b. Park, B., Kargupta, H., Johnson, E., Sanseverino, E., Hershberger, D., and Silvestre, L.; “Distributed Collaborative Data Analysis from Heterogeneous Sites Using a Scalable Evolutionary Technique”; Applied Intelligence, Vol. 16, No. 1, pp. 19-42, 2002. Patton, R., Frank, P., and Clark, R.; “Issues of Fault Diagnosis for Dynamic Systems”; Springer, 2000. Quevedo, D., Goodwin, C., and Welsh, J.; “Design Issues Arising in a Networked Control System Architecture”; Proceedings of the 2004 IEEE International Conference on Control Applications, Vol. 1, pp. 450-455, 2004. Raab, U., and Isermann, R.; “Lower Power Actuator Principles”; VDI/VDE-Tagung, Actuator 90, Bremen, 1990. Rauch, H.; “Autonomous Control Reconfiguration”; IEEE Control Systems Magazine, pp. 34-47, 1995. Ray, A., and Halevi, Y.; “Integrated Communication and Control Systems: Part II - Design Considerations”; Journal of Dynamic Systems, Measurement and Control, Vol. 110, pp. 374-381, 1988. Reza, S.; “Smart Networks for Control”; IEEE, 1994.
Sanz, R., Alonso, M., Lopez, I., and García, C.; “Enhancing Control Architectures using CORBA”; Proceedings of the 2001 IEEE International Symposium on Intelligent Control, pp. 189-194, 2001. Sanz, R., and Zalewski, J.; “Pattern Based Control Systems Engineering”; IEEE Control Systems Magazine, Vol. 23, No. 3, pp. 43-60, 2003. Schneider, F.; “Implementing Fault Tolerance Services Using the State Machine Approach: A Tutorial”; ACM Computing Surveys, Vol. 22, No. 4, pp. 299-319, Dec. 1990. Seto, D., Lehoczky, J.P., Sha, L., and Shin, K.G.; “Trade-Off Analysis of Real-Time Control Performance and Schedulability”; Real-Time Systems, Kluwer Academic Publishers, No. 21, pp. 199-217, 2001. Sloman, M., and Kramer, J.; “Distributed Systems and Computer Networks”; Prentice-Hall, 1987. Tanenbaum, S.; “Computer Networks”; Prentice Hall, 2003. Tindell, K., and Clark, J.; “Holistic Schedulability Analysis for Distributed Hard Real-Time Systems”; Microprocessing and Microprogramming, Vol. 40, pp. 117-134, 1994.
Tontini, G., and De Queiroz, A.; “RBF Fuzzy ARTMAP: A New Fuzzy Neural Network for Robust Online Learning and Identification of Patterns”; IEEE International Conference on Systems Man and Cybernetics, Information Intelligence and Systems, Vol. 2, pp. 1364-1369, 1996. Törngren, M., and Redell, O.; “A Modelling Framework to Support the Design and Analysis of Distributed Real-Time Control Systems”; Microprocessors and Microsystems, Vol. 24, pp. 81-93, 2000. Törngren, M.; “Fundamentals of Implementing Real-Time Control Applications in Distributed Computer Systems”; Real-Time Systems, Kluwer Academic Publishers, Vol. 14, No. 3, pp. 219-250, 1998. True Time, http://www.control.lth.se/~dan/truetime/, 2003. Venkatasubramanian, V., Rengaswamy, R., Kavuri, S., and Yin, K.; “A Review of Process Fault Detection and Diagnosis. Part I: Quantitative Model-based Methods”; Computers and Chemical Engineering, Vol. 27, pp. 293-311, 2003a. Venkatasubramanian, V., Rengaswamy, R., Kavuri, S., and Yin, K.; “A Review of Process Fault Detection and Diagnosis. Part II: Qualitative Models and Search Strategies”; Computers and Chemical Engineering, Vol. 27, pp. 313-326, 2003b. Venkatasubramanian, V., Rengaswamy, R., Kavuri, S., and Yin, K.; “A Review of Process Fault Detection and Diagnosis. Part III: Process History Based Methods”; Computers and Chemical Engineering, Vol. 27, pp. 327-346, 2003c. Vinoski, S.; “CORBA: Integrating Diverse Applications within Distributed Heterogeneous Environments”; IEEE Communication Magazine, Vol. 35, pp. 46-55, 1997. Walsh, G.C., Ye, H., and Bushnell, L.G.; “Stability Analysis of Networked Control Systems”; IEEE Transactions on Control Systems Technology, Vol. 10, No. 3, pp. 438-446, 2002. Werbos, P.; “Backpropagation Through Time: What it Does and How to Do It”; Proceedings of the IEEE, Vol. 78, pp. 1550-1560, 1990. 
Willis, H., Tram, H., Engel, M., and Finley, L.; “Selecting and Applying Distribution Optimization Methods”; IEEE Computer Applications in Power, Vol. 9, No. 1, pp. 12-17, 1996. Wills, L., Kannan, S., Sander, S., Guler, M., Heck, B., Prasad, J., Schrage, D., and Vachtsevanous, G.; “An Open Platform for Reconfigurable Control”; IEEE Control Systems Magazine, Vol. 21, No. 3, pp. 49-64, 2001. Wittenmark, B., Badtiam, B., and Nilsson, J.; “Analysis of Time Delays in Synchronous and Asynchronous Control Loops”; Proceedings of the 37th IEEE Conference on Decision & Control, WA10, pp. 283-288, 1998. Xu, Z., and Zhao, Q.; “Design of Fault Detection and Isolation Via Wavelet Analysis and Neural Networks”; Proceedings of the IEEE International Symposium on Intelligent Control, pp. 467-472, 2002. Yang, B., Han, T., and Kim, Y.; “Integration of ART-Kohonen Neural and Case Based Reasoning for Intelligent Fault Diagnosis”; Expert Systems with Applications, Vol. 26, pp. 387-395, 2004.
Yang, C.Y., and Clarke, D.W.; “A Self-Validating Thermocouple”; IEEE Transactions on Control Systems Technology, Vol. 5, No. 2, pp. 239-253, 1997. Yang, Z., and Blanke, M.; “A Unified Approach for Controllability Analysis of Hybrid Control Systems”; Proceedings of IFAC CSD2000, pp. 158-163, 2000. Yang, Z., and Hicks, D.; “Reconfigurability of Fault-Tolerant Hybrid Control Systems”; 15th IFAC Triennial World Congress, 2002. Yook, J.K., Tilbury, D.M., and Soparkar, N.R.; “Trading Computation for Bandwidth: Reducing Communication in Distributed Control Systems using State Estimators”; IEEE Transactions on Control Systems Technology, Vol. 10, No. 4, pp. 503-518, 2002. Yu, W., and Pineda, J.; “Chemical Process Modeling with Multiple Neural Networks”; European Control Conference, Porto, Portugal, pp. 3735-3740, 2001.
Index

A
ADMIRE 112, 127
Analytical Redundancy 53
ARINC 429 8
ARINC 629 4, 8
ART2 50, 53, 56, 58, 59
ARTMAP 56, 134
AUTOMATA vi, 94

C
CANbus 4, 11, 12, 28, 67, 113
Cluster 49, 51, 55, 61, 62, 80, 81, 115
Common Resource 16, 25, 31, 33, 34, 67
Confidence Value v, 40, 63
CORBA 29, 30, 129, 133, 134
CSMA/CA 4
CSMA/CD 4

D
Decision Maker v, 87, 89, 90, 91, 92, 94, 95, 106, 107, 126
Decision-Making Module 56

E
Earliest Deadline First 25, 31

F
Fault Detection vi, 39, 40, 42, 46, 49, 51, 52, 63, 89, 126, 130, 134
Fault Diagnosis v, vi, vii, 39, 40, 43, 44, 51, 60, 63, 88, 89, 90, 92, 126, 127, 129
Fault Tolerance v, vi, 4, 6, 7, 15, 16, 19, 21, 24, 36, 37, 39, 40, 41, 82, 86, 99, 100, 120, 133
Fieldbus 42, 67, 127
Finite State Machine v
Fuzzy Control vi, vii, 111

H
Hierarchical Control 87

I
IAE 65

J
Jitter 22, 24, 66

L
Least Slack Time 25
Load Balancing 5, 15, 24, 36, 66, 132
Logical Remote Unit 8
Lyapunov 78, 82

M
Model Predictive Control vi
MPC 79

N
Network Control vi, vii, 30, 66, 67, 71, 77, 86, 93, 94, 113, 126
Neural Networks 39, 44, 50, 51, 52, 53, 55, 56, 58, 59, 61, 63, 89, 90

O
Open Systems Interconnection vi, 1

R
Rate Monotonic 25, 30, 31
Real Time 15, 16, 19, 22, 23, 27, 28, 29, 37, 40, 60, 111
Reconfigurable Control v, vi, vii, 66, 86, 88, 89, 95, 108, 126

S
Safety v, vi, 10, 12, 17, 30, 86, 92, 95, 114
Scheduler v, 25, 27, 28, 37, 41, 71, 90, 93, 98, 107, 114
Self-Diagnose vi, 40, 49
Simulink vi, 109
Smart Element vi, vii, 39, 44
Square Prediction Error 49
Stability Analysis vi, 66

T
TAO 29
TCP/IP 4, 5
Time Delays v, vi, vii, 1, 12, 15, 22, 25, 27, 36, 37, 65, 66, 67, 68, 69, 71, 73, 75, 76, 77, 78, 79, 80, 81, 82, 86, 89, 90, 91, 93, 94, 95, 100, 101, 107, 114, 115, 117, 119, 121, 123
True Time 66, 111, 112, 134

V
Voting Algorithms 16, 17, 18, 92, 114