Studies in Systems, Decision and Control Volume 486
Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control–quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure which enable both a wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.
Andrey E. Gorodetskiy · Irina L. Tarasova
Introduction to the Theory of Smart Electromechanical Systems
Andrey E. Gorodetskiy Institute for Problems in Mechanical Engineering (IPME) Russian Academy of Sciences (RAS) St. Petersburg, Russia
Irina L. Tarasova Institute for Problems in Mechanical Engineering (IPME) Russian Academy of Sciences (RAS) St. Petersburg, Russia
ISSN 2198-4182   ISSN 2198-4190 (electronic)
Studies in Systems, Decision and Control
ISBN 978-3-031-36051-0   ISBN 978-3-031-36052-7 (eBook)
https://doi.org/10.1007/978-3-031-36052-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Smart Electromechanical Systems (SEMS) are used in Cyber-Physical Systems (CPS). The term "Cyber-Physical Systems" was coined in 2006 by Helen Gill to highlight the distinctive feature of the NSF CPS Workshop she organized. At the time, she was the director of the Integrated and Hybrid Systems Division at the National Science Foundation. The workshop organizers tried to redefine the role of embedded systems, and they succeeded; they caught the general trend, and within a couple of years the rapid development of CPS began. Progress in this class of systems was recognized as one of the most important technological advances, first in the USA and later in Europe. CPS integrate computing, transmission and storage of information with monitoring and control of objects in the physical world. The main tasks in the theory and practice of CPS are to ensure efficient, reliable and safe operation in real time.

SEMS have been widely used since 2000 in parallel robots, or so-called parallel kinematic machines. They offer good opportunities in terms of precision, rigidity and the ability to handle heavy loads. SEMS are used in unmanned aerial vehicles, in astronomy, machine tool building, medicine and other fields. Currently, much attention is paid to methods of designing and modeling SEMS based on the principles of adaptability, intelligence, biomorphism of parallel kinematics, and parallelism in information processing and control calculations. The most relevant areas of research in the field of SEMS are the following:

– Methods for designing SEMS modules and intelligent robots based on them;
– Development of mathematical and software tools for the central nervous system of SEMS modules;
– Creation and improvement of mathematical models of parallel mechanics and control systems;
– Development of mathematical and software tools for SEMS;
– Development of methods for adapting SEMS to the mental characteristics of participants in human-machine systems;
– Issues of using SEMS in various fields of technology.
Since SEMS occupy a significant place in modern intelligent robots that perform important and complex technological operations as part of human-machine systems, it seems appropriate and timely to publish a monograph on the proposed topic. The monograph can be recommended to specialists in the field of control and as a textbook for master's students specializing in Smart Electromechanical Systems and robotics; it covers many scientific areas such as kinematics, dynamics and control theory. The purpose of this monograph is to introduce the basics of the theory of SEMS, including logical-probabilistic and logical-linguistic methods for their design and modeling, taking into account the incomplete certainty of the operating environment and the mental characteristics of the members of the team in human-machine systems.

The monograph consists of four chapters. Chapter 1 discusses the mechanisms and systems of automatic control. Kinematic and dynamic models of SEMS modules are shown. Variants of SEMS automatic control systems, including those with neuroprocessor controllers, are considered. Particular attention is paid to methods for synthesizing optimal control algorithms and to methods for combating jamming in the electric drives of SEMS modules.

Chapter 2 deals with the synthesis of the central nervous system of SEMS modules. The issues of forming images based on the sensory data of robots, creating a language of sensations of robots, and classifying images in the central nervous system are considered. Particular attention is paid to the problems of endowing SEMS with elements of the psyche when making decisions as part of human-machine systems.

Chapter 3 discusses the features of the synthesis of automatic control systems of SEMS in group control. The principles of situational control of the SEMS group and the mathematical and algorithmic support for decision making are considered. Particular attention is paid to the issues of safe control.

Chapter 4 describes examples of the use of SEMS in various fields of technology. Devices based on SEMS modules are described, such as a medical microrobot, controlled ciliary motors, adaptive capture, SEMS drive systems for the counterreflector of a space radio telescope, SEMS platforms with a matrix receiver for obtaining radio images in astronomy, and systems for forming mirror reflective surfaces of a space radio telescope; attention is also paid to the control of such devices.

We are indebted to many people for the technical support we received in writing this monograph. It is impossible to give their names here, but we are deeply grateful to all of them. In addition, the monograph was prepared with the financial support of the Ministry of Science and Higher Education of the Russian Federation as part of the state assignment (subject No. 121112500304-4).

St. Petersburg, Russia
April 2023
Andrey E. Gorodetskiy Irina L. Tarasova
Contents

1 Mechanisms and Control Systems
   1.1 SEMS Modules
   1.2 SEMS Architectures
      1.2.1 Serial Architecture
      1.2.2 Parallel Architecture
      1.2.3 Architecture of the “Star”
      1.2.4 Architecture of the “Ring”
      1.2.5 Architecture of the “Tree”
   1.3 Kinematic Models of SEMS
      1.3.1 The Direct Kinematics Problem
      1.3.2 The Inverse Problem of Kinematics
   1.4 Synthesis of Optimal SEMS Control Algorithms
      1.4.1 Mathematical Programming
      1.4.2 Mathematical Programming in the Ordinal Scale
      1.4.3 The Generalized Mathematical Programming
      1.4.4 Multistep Generalized Mathematical Programming
   1.5 Reduction of Logical-Probabilistic and Logical–Linguistic Constraints to Interval
      1.5.1 Prerequisites for Reducing Logical-Probabilistic Constraints to Interval Constraints
      1.5.2 Prerequisites for Reducing Logical–Linguistic Constraints to Interval Constraints
      1.5.3 An Example of Solving a Fuzzy Control Problem Using Interval Reduction Theorems
   1.6 SEMS Automatic Control Systems
      1.6.1 Architecture of Control Systems of Modules SEMS
      1.6.2 Automatic Control Subsystems of Mobile Elements of Modules SEMS
   1.7 Neuroprocessor Automatic Control System of the SEMS Module
   1.8 Anti-jamming in Automatic Control Systems of SEMS Modules
      1.8.1 The Use of Force Sensors
      1.8.2 Use of Pickoffs
      1.8.3 Using a Calculation Block of Optimal Trajectories
   1.9 Control of Vitality and Reliability Analysis
      1.9.1 A Simplified Accounting of the Relations Between the Blocks of a Complex System
      1.9.2 The Modeling of Changes with the Passing of Time of the Complex System Failure Probability with the Reservation of the Blocks
      1.9.3 The Algorithm of the Modeling of the Failure Probability with the Passing of Time of the Complex System with the Reservation of the Blocks
      1.9.4 Example
      1.9.5 Decision Making Methods for Durability Control
   1.10 Conclusion
   References

2 Central Nervous System
   2.1 Problems of Creating the Central Nervous System SEMS
      2.1.1 The Bodies of the Human Senses
      2.1.2 The Central Nervous System of the Human Senses
      2.1.3 Tasks of Construction of the Central Nervous System Robots
      2.1.4 Features of the Central Nervous System SEMS with Elements of the Psyche
      2.1.5 Supplementing the Structure of the Robot’s Central Nervous System with Elements of the Psyche
   2.2 Formation of Images Based on Sensory Data of Robots
      2.2.1 Fuzzification of Data
   2.3 Formation of the Robot’s Sensation Language
      2.3.1 Algorithm of Formation of the Language of Sensations of the Robot
      2.3.2 Quantization of the Surrounding Space
      2.3.3 Fuzzification of Sensory Information
      2.3.4 Image Formation in the Display of the Surrounding Space
      2.3.5 Formation of Images by Combining Images from Different Senses
   2.4 Classification of Images in the Central Nervous System
      2.4.1 Statement of the Problem of Inductive Formation of Images
      2.4.2 Algorithms for the Formation of Decision Rules
      2.4.3 Logical-Probabilistic and Logical–Linguistic Algorithms
      2.4.4 Testing Algorithms
   2.5 Image Classification System
      2.5.1 The Block Diagram of the System
      2.5.2 The Principle of Operation of the System
   2.6 Logical and Mathematical Model of Decision Making in the Central Nervous System SEMS
   2.7 Making Behavioral Decisions Based on Solving Systems of Logical Equations
      2.7.1 Stages of the Formation of Behavioral Decisions
      2.7.2 Features of Recognition, Described by Equations in Algebra Modulo Two
   2.8 Using Binary Relations When Decision Making
      2.8.1 The Tasks of Situational Control of a Group of Dynamic Objects
      2.8.2 Generalized Description of the Task of Situational Control of the SEMS Group
      2.8.3 Mathematical Methods for Using Binary Relations in Decision
   2.9 The Influence of Emotions on Decision Making
      2.9.1 The Emotional Component of the Decision Making System
      2.9.2 Estimates of the Strength of Emotion Based on the Use of Sensory Signals
      2.9.3 Estimates of the Strength of Emotion Based on the Analysis of Images Obtained After Processing Sensory Signals
      2.9.4 Example of Accounting for the Influence of Emotions
   2.10 The Influence of Temperament on Decision Making
      2.10.1 Types of Classification of Human Temperament
      2.10.2 Methods of Temperament Diagnostics
      2.10.3 Adaptation of SEMS to Human Temperament in Human–Machine Systems
      2.10.4 Correction of the Structure of the Central Nervous System SEMS
      2.10.5 Example of Taking into Account the Influence of Temperament in Human–Machine Systems
   2.11 Conclusion
   References

3 Group Control
   3.1 Principles of Situational Control of the SEMS Group
      3.1.1 The Concept of Situational Control
      3.1.2 The Principles of Situational Control SEMS Groups
      3.1.3 The Methodology of Situational Control SEMS Group
   3.2 Situational Control a Group of Robots Based on SEMS
      3.2.1 The Construction of the Current Dynamic Model of the Environment
      3.2.2 The Construction of the Current Dynamic Models of Robots
      3.2.3 An Example of the Construction PDSCR for the Task of Extracting from the Premises Items Robots-Loaders
   3.3 Movement Control of the SEMS Group
      3.3.1 Setting the Task of Controlling the Movement of a Group of Robots
      3.3.2 Principles of Situational Control
   3.4 Decision Making an Autonomous Robot Based on Matrix Solution of Systems of Logical Equations that Describe the Environment of Choice for Situational Control
      3.4.1 Processing of Information in the CNSR
      3.4.2 Genetic Robot Algorithms
      3.4.3 Methods of Decision Making Based on Robot Reflections in the Environment of Choice
   3.5 Using Influence Diagrams in Group Control of SEMS
      3.5.1 Options for Using Influence Diagrams When Making a Decision
      3.5.2 Features of the Search for the Optimal Decision in Various Variants of Situational Control Structures
   3.6 Secure Control of the SEMS Group
   3.7 Logical–Linguistic Method of the Movement of Mobile SEMS with Minimal Probability of Accidents
      3.7.1 The Routes Ranking Problem
      3.7.2 The Database of Reference Route Segments
      3.7.3 Determining the Probability of an Accident on a Route
      3.7.4 Ranking and Optimization of Routes
   3.8 Assessment of the Group Intelligence of SEMS in RTS
      3.8.1 Test Simulation of a Robotic System
      3.8.2 Baseline Assessment of Group Intelligence in the Central Nervous System RTS
   3.9 Software Package for Testing Group Intelligence Assessment Models
      3.9.1 A Typical Structure of a Software System Testing of Models of Groups of Robots
      3.9.2 Assembly and Adjustment of Models of Control Object
      3.9.3 Assembly and Adjustment of Models of the Environment
      3.9.4 The Assembly of Test Models Groups of Robots
      3.9.5 Testing Groups of Robots
      3.9.6 Model Training and Correction
   3.10 Conclusion
   References

4 Examples of Using SEMS Modules
   4.1 Controlled Ciliated Thrusters
      4.1.1 Muscles
      4.1.2 Ciliated Apparatus
      4.1.3 Simple Ciliated Thruster
      4.1.4 Ciliated Thruster with Controlled Rigidity
      4.1.5 Ciliated Thruster with Controlled Rigidity and Shape
   4.2 Flagellar Thruster
      4.2.1 Biological Flagellum
      4.2.2 Structure of the Flagellar Thruster
      4.2.3 Architecture of the Automatic Control System
   4.3 Medical Micro Robot
      4.3.1 New Micro Robot Device
      4.3.2 Operation of the Device
   4.4 Adaptive Capture
      4.4.1 Capture Device
      4.4.2 Capture Operation
      4.4.3 Algorithms of Automatic Control System Operation
      4.4.4 Adaptive Capture Automatic Control System
   4.5 Using a Platform Based on SEMS Modules for Obtaining Radio Images in Astronomy
      4.5.1 Problems of Creating a Matrix Radio Receiver
      4.5.2 Estimation of Optimal Pixel Sizes of Matrix Receivers
      4.5.3 Matrix Receiver Adaptation Algorithms
      4.5.4 Adaptive Matrix Receiver Control System
   4.6 SEMS Actuator System for the Space Radio Telescope Subdish
      4.6.1 Simulation of Natural Oscillations of the Subdish
      4.6.2 Calculation of ACS Parameters
      4.6.3 Analysis of the Dynamics of the Automatic Control System
      4.6.4 Accounting for Nonlinearities in the Automatic Control System of the Actuator
   4.7 Antenna of the Space Radio Telescope
      4.7.1 Antenna Formation in Near-Earth Orbit
      4.7.2 The Device of the Space Radio Telescope
      4.7.3 Operation of the Control System
      4.7.4 The Choice of the Satellite Orbit During the Formation of the Adaptive Mirror Antenna System of the Space Radio Telescope
   4.8 Conclusion
   References
Abbreviations
AB ACS ACSL ACSRC ACSRM ACSRR ACS AC ACS CRM ACS FT ACS MCR ACS PF ACS PM ACS RR ACS RT ACS SM ADS AES AHP AI ALCB ALMB ALRB AMR ATP BBN BCCA BCOD BCOE BCOR BCRF
Adaptation Block Automatic Control System Automatic Control System of the Legs Automatic Control System of Rods Capture Automatic Control System of Rod Movement Automatic Control System of Rods a Reconfiguration Automatic Control System of Adaptive Capture Automatic Control System for Module with Controlled Rigidity Automatic Control System of the Flagellar Thruster Automatic Control System for Motors Controlled Rod Automatic Control System of the Phalanges of the Fingers Automatic Control System of the Phalanx Modules Automatic Control System of the Reconfiguration Rods Automatic Control System of the Radio Telescope Automatic Control System for Standard Modules Angular Displacement Sensors Artificial Earth Satellites Analytical Hierarchy Process Artificial Intelligence Actuators Legs Control Block Actuators Legs Motors Block Actuators Legs Reducers Block Adaptive Matrix Receiver Adenosine Triphosphate Bayesian Belief Networks Block for Calculating Control Actions Block for Calculating Optimal Displacements Block for Calculating Optimal Elongations Block for Calculating Optimal Rigidity Block for Calculating the Rigidity of Fingers xiii
BCSM BF BM BMO MD BMO SD BMSLP BMSUP BPSL BPSLP BPSUP C CB AMR CBCC CC CCB CCC CCCD CCR R CCRR CER CL CLA CM X, CM Y, CM Z CMCR CNS CNSH CNSR CR CRef CRLP CRM CRS CShB CSSLP CSSUP CTCR CTCRSh CTR DB DCS DM DMB DMLP
Block for Calculating the Steps of Movement along the coordinates X, Y, Z Bacterial Flagellum Block of the Memory Main Dish Oscillation Measurement Block Subdish Oscillation measurement Block Block of Moving Sensors of the Lower Platform Block of Moving Sensors of the Upper Platform Block of Pressure Sensors in the Legs Block of Pressure Sensors in the rods Lower Platform Block of Pressure Sensors in the rods of the Upper Platform Controllers Adaptive Matrix Receiver Control Block Control Block for the Coordinates of the Cilia Control Computer Coordinate Control Block Central Control Computer Controller CCD Controllers of Controlled Reconfiguration Rods Controllers Control Rods Reconfiguration Controllers of Elongation Rods Controller Leg Controllers Leg-Actuators Controllers of the Motor of movement along X, Y, Z Controllers of the Controlled Rod Motors Central Nervous System Central Nervous System Human Central Nervous System of the Robot Control Rods Counterflector Controllers Rods Lower Platform Module with Controlled Rigidity Complex Robotic Systems Shape Control Block Control Subsystem of the Lower Platform Control SubSystem of the Upper Platform Ciliated Thrusters with Controlled Rigidity Ciliated Thrusters with Controlled Rigidity and Shape Controllers of Turning Rods Database Dynamic Configuration Space Decision Making Decision Making Block Dynamic Model of the Lower Platform
DMP DMUP DS DSm ECB ED ELI EMC EMF EMS ERD ESD ESZM FS G GCCR GCL GCP GCRG GCRM GCRR GEO GFC GLR GMP GRUP HEO HMS ID IE IMS IR KB L LA LB LDS LEO LFSB LH LIM LLM LLSB LM LMP
Discrete Mathematical Programming Dynamic Model of the Upper Platform Displacement Sensors Docking System Elongation Calculating Block Electric Drives Extensible Language Interface Electromechanical Cilia Electromechanical Flagellum Electromechanical System Electric Rotation Drive External Storage Device Equisignal Zone Method Force Sensors Gear Group Control Current Regulators Group Controls Legs Generalized Convex Programming Group Control Rod Gripping Group Control Rods Movement Group Control Rods Reconfiguration Geostationary Orbit Group Finger Controllers Group Leg Regulators Generalized Mathematical Programming Controllers Rods Upper Platform High-Flying Orbit Human-Machine Systems Influence Diagrams Inference Engine Information-Measuring System Intelligent Robots Knowledge Base Legs Legs-Actuators Logic Block Linear Displacement Sensors Low-Flying Orbit Legs Force Sensors Block Lower Hinges Logical-Interval Model Logical-linguistic model Legs Lengthening Sensors Block Leg Motors Linear Mathematical Programming
LP LPFSB LPM LPR LPRCB LPRLSB LPRMB LPRRB LTR LV M MB MBUP MCC F MD MEO MER MGMP ML MLP MM MP MPOS MR MRLP MRR MRUP MS X, MS Y, MS Z MS MSE MTR MUP NACS NB NBCOM NBR NMP NPP OB OCB OTUP OWL PCCD PDM PDSCR PID
Lower Platform Lower Platform Force Sensors Block Logic-Probability Model Lower Platform Rods Lower Platform Rods Control Block Lower Platform Rods Lengthening Sensors Block Lower Platform Rods Motors Block, Lower Platform Rods Reducers Block Logical-Transformational Rules Logical Variables Motors Modeling Block Multiplier Block of Upper Platform Calculating the Coordinates of the Fingers Main Dish Medium- Flying Orbit Motors of Elongation Rods Multistep Generalized Mathematical Programming Motor Leg Model of the Lower Platform Main Mirror Mathematical Programming Mathematical Programming in an Ordinal Scale Motors Rods Motors Rods of the Lower Platform Motors Rods Reconfiguration Motors Rods of the Upper Platform Motion Sensors in X, Y, Z coordinates Measuring System Mean Square Error Motors of Turning Rods Model of the Upper Platform Neuroprocessor Automatic Control System Neuroprocessor Blocks Neuroprocessor Calculation Block Optimal Movement Neuroprocessor Recognition Block Nonlinear Mathematical Programming Nuclear Power Plant Optimization Bock Oscillation Control Block Optimizer of the Trajectory of the Upper Platform Overall Work Level Parameters CCD Preference Decision Makers Permitted Dynamic Space of Robot Configurations Proportional–Integral–Differential
PM PMO MD PMO SD QMP RC RCB RP RR RTI RTS SCT SD SEMS SIMD SM SoS SP SPC SpCB SRT SSAC CC SSAC F SSAC P SSAC RCB StM X, StM Y, StM Z SUF TH TS TSLR UAV’s UD UP UPCCB UPFSB UPR UPRCB UPRLSB UPRMB UPRRB UV’s V AMR VLIW VS
Mounting Pads Point at the edge of the Main Dish Point at the edge of the Subdish Quadratic Mathematical Programming Rods Capture Rigidity Control Block Radiation Pattern Rods Reconfiguration Recurrent Target Inequalities Robotic Systems Software Testing Complex Subdish Smart ElectroMechanical Systems Single Instruction/Multiple Data Standard Modules System of Systems Supporting Platform Planning Systems by Control Speed Control Block Space Radio Telescope Subsystem for Automatic Control of the Capture Coordinates Subsystem for Automatic Finger Control Subsystem for Automatic Palm Control Subsystem of Automatic Control of the Block with Controlled Rigidity Stepper Motors in X, Y, Z coordinates Surface Utilization Factor Top Hinges Tactile Sensor Telescopic Spring-Loaded Rods Unmanned Aerial Vehicles Uncertain Decision Tree Upper Platform Upper Platform Coordinates Calculating Block Upper Platform Force Sensors Block Upper Platform Rods Upper Platform Rods Control Block Upper Platform Rods Lengthening Sensors Block Upper Platform Rods Motors Block Upper Platform Rods Reducers Block Unmanned Vehicles Virtual Adaptive Matrix Receiver Very Long Instruction Word Vision System
Chapter 1
Mechanisms and Control Systems
1.1 SEMS Modules

Research and development of intelligent robots (IR), designed to operate under the a priori uncertainty of a dynamically changing environment, are actively conducted in all industrialized countries of the world. The areas of application of such robots are vast and varied: automated production, transport, household applications, medicine, space, defense, underwater research, rescue and repair work in extreme conditions, etc. In many of them, the presence of a person is undesirable or even impossible. Therefore, in order to successfully perform work operations, IRs, like highly developed living beings, must have such an important quality as adaptability to a non-formalized changing work environment [1]. The latter involves the solution of a number of complex problems by the IR control system. First of all, these are the problems of adequate perception and recognition of the external environment, purposeful planning of behavior, and effective execution of planned actions. The last problem is quite successfully solved by the methods of the theory of automatic control using a computer of traditional architecture with a sequential principle of information processing. The solution of the first two problems on the same computing means is associated with significant difficulties. The reason for this is not only the need to process large amounts of information from spatially distributed and parallel-functioning sensors in real time, but also the use of new intelligent methods of information processing, toward which these computers are not oriented [2].

A special place in the development of IR is occupied by structures with parallel kinematics [3] and neuroprocessor control systems. These properties are possessed by smart electromechanical systems (Smart ElectroMechanical Systems, SEMS). The use of smart electromechanical systems in IR makes it possible to obtain the maximum accuracy of actuators with a minimum travel time due to the introduction of parallelism in the processes of measurement, calculation and displacement and the use of high-precision piezoelectric motors capable of operating in extreme conditions, including in open space [4].
Fig. 1.1 Block diagram of SM SEMS
The main elements of SEMS are standard modules (SM) with a hexapod-like structure. They make it possible to maximize the accuracy of the actuators with a minimum travel time by introducing parallelism into measurement, calculation and movement and by using precision motors. However, such mechanisms have more complex kinematics, which requires more advanced control algorithms and the solution of new, complex optimization problems that ensure implementation of the optimal path without jamming.

When building SEMS for various purposes, a wide variety of standard modules can be used. They usually contain (see Fig. 1.1) an electromechanical system (EMS) of parallel type (1), an automatic control system (ACS) (2), a measuring system (MS) (3) and a docking system (DSm) (4). The EMS comprises a movable platform, a stationary platform and usually six legs. Different SM SEMS differ from each other mainly in the design of the platforms. The core of the ACS is a neuroprocessor automatic control system (NACS). The main function of the NACS is automatic control of the movement of the upper platform, which has, as a rule, a 6-axis positioning system with a control unit. The NACS also performs automatic control of the configuration of the upper and lower platforms by extending the control rods. The MS generally contains opto-electronic elongation and displacement sensors, tactile sensors and force sensors. The DSm, as a rule, contains a vision system with an intelligent recognition unit.

Next, consider the design of the main SM SEMS modules:
1. module SM1 SEMS;
2. module SM2 SEMS;
3. module SM3 SEMS;
4. module SM4 SEMS;
5. module SM5 SEMS;
6. module SM6 SEMS;
7. module SM7 SEMS;
8. module SM8 SEMS.
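To make the composition of a standard module described above concrete, the following is a minimal structural sketch in Python. The field names and the representation of a leg are illustrative assumptions, not an API defined in the book.

```python
# A minimal structural sketch of a standard SEMS module: an electromechanical system
# (two platforms, six legs), an ACS with a neuroprocessor core, a measuring system and
# a docking system. All names below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Leg:
    length: float          # current leg length, m (assumed units)
    force: float = 0.0     # reading of the leg force sensor, N (assumed units)

@dataclass
class StandardModule:
    module_type: str                                   # e.g. "SM5"
    legs: List[Leg] = field(default_factory=lambda: [Leg(0.2) for _ in range(6)])
    acs_core: str = "NACS"                             # neuroprocessor automatic control system
    measuring_system: Tuple[str, ...] = ("elongation", "displacement", "tactile", "force")
    docking_system: str = "vision system with recognition unit"

sm5 = StandardModule("SM5")
print(sm5.module_type, len(sm5.legs), "legs")
```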
The tripod (module SM1 SEMS) contains (see Fig. 1.2) a lower platform 1 and an upper platform 2, connected to each other by three legs 3–5 with motors 6–8 through two-stage hinges 9–14, which provide a change in the leg lengths. Thanks to this, it is possible to carry out positioning along three linear coordinates (X, Y, Z) and three angular coordinates (rotations Qx, Qy, Qz around the corresponding axes) [3].
Fig. 1.2 Design scheme of the tripod and an example of its implementation
Tripods are normally used as platforms to support the weight and maintain the stability of an object mounted on them, such as a camera or camcorder. They provide resistance against downward forces, horizontal forces and movements about horizontal axes. Positioning of the three feet away from the vertical through the center of gravity makes it easier to resist lateral forces. The main advantages of the tripod are its small size, high rigidity, high positioning accuracy, freely configurable support, high repeatability of structural elements, the possibility of excluding backlash and no need for fine adjustment during assembly. The disadvantages of the tripod are the complexity of manufacturing the linear actuators of the legs, the complexity of the joints and complex control algorithms.

The hexapod (module SM2 SEMS), or Gough–Stewart platform, is a type of parallel manipulator that uses an octahedral arrangement of struts [3]. The hexapod has six degrees of freedom (three translational and three rotational, as a rigid body). The unit has (see Fig. 1.3) a lower platform (LP) (1), an upper platform (UP) (2) and six legs-actuators (LA) (3–8). The lower and upper platforms (5 and 6) each comprise a supporting platform (SP) (9 and 10) and at least three rods attached at one end to the SP and at the other to the mounting pads (PM) (17–19 and 20–22), to which at least three telescopic spring-loaded rods (TSLR) (23–25 and 26–28) may be attached, for example screwed. The LA contain electric drives (ED) with gears (G), displacement sensors (DS), for example opto-electronic, force sensors (FS), for example piezoelectric, lower hinges (LH) attached to the mounting pads of the lower platform, and top hinges (TH) attached to the mounting pads of the upper platform.
Fig. 1.3 Design scheme of the hexapod and its general appearance
By varying the lengths of the legs with the help of controlled drives, the orientation of one platform can be changed while the other is fixed. Hexapod mechanisms are used where an object must be controlled with great precision along three axes, for example in high-precision automatic machines and in medicine for complex operations. Hexapod machines with high payloads are also used in aircraft simulators and radio telescopes. The main advantages of the hexapod compared with the tripod are good dynamic characteristics and the absence of accumulation of positional errors. The disadvantages of the hexapod are the complexity of manufacturing the linear actuators of the legs, the complexity of the joints, complex control algorithms and possible jamming if the synchronization of the linear actuators is violated.

Unlike the hexapod, the module SM3 SEMS (see Fig. 1.4) has rods 29–31 and 32–34, which are mounted on the MP 17–19 and/or 20–22 of platforms 1 and/or 2 so as to rotate, by means of controlled drives 35–37 and 38–40, in planes passing through the attachment points of the rods and the center of platform 1 and/or 2 perpendicular to the latter. The rods 29–31 and 32–34 can change their length by means of linear controlled actuators 41–43 and 44–46. These rods allow the module SM3 SEMS to move by simultaneously turning and changing their lengths. This can be used in the design of robots able to move, for example, inside pipes or vessels [5]. Sometimes the rods 29–31 and 32–34 can be elastic, which can be useful in medical robots. The SM3 SEMS module has the same advantages and disadvantages as the hexapod, but additionally has the ability to move in space.

Unlike module SM2 SEMS, module SM4 SEMS (see Fig. 1.5) has inwardly directed rods 47–49 and 50–52, which are mounted on the SP 9 and/or 10 of platforms 1 and/or 2.
Fig. 1.4 Design platform of module SM3 SEMS
They can be rotated in the planes of the supporting platforms 9 and/or 10 by means of controlled drives 53–55 and 56–58. The rods 47–49 and 50–52 can change their length by means of linear controlled actuators 59–61 and 62–64. This can be used in the design of robot grippers, with the rods 47–49 and 50–52 capturing various objects. Just as in the previous case, the rods 47–49 and 50–52 can be flexible, which may be useful in medical robots, for example in devices of the Ilizarov type used for bone fixation in fractures [6].

The module SM5 SEMS, unlike the hexapod, has actuators 65–67 and/or 68–70 in the rods 11–13 and/or 14–16 (see Fig. 1.6), which allow their length to be changed, thus making them controlled rods (CR).

Fig. 1.5 Design platform of module SM4 SEMS
Fig. 1.6 Circuit design of the module SM5 SEMS
Usually the CR actuators contain gears, displacement sensors (for example, opto-electronic), force sensors (for example, piezoelectric) and controllers (C). MP 17–19 and 20–22 comprise two grooves for securing the TSLR 23–25 and 26–28, and two slots for a threaded joint with other similar universal modules. In addition, MP 17–19 and 20–22 may contain CCD and LED arrays of the docking system. SP 9 of the lower platform includes threaded grooves for articulation with other similar modules and a CCD of the docking system. SP 10 of the upper platform includes threaded grooves and an LED array of the docking system. The SM5 SEMS module has the same advantages and disadvantages as the hexapod, but unlike the hexapod it provides not only shifts and turns of the upper platform, but also compression and expansion of the upper and lower platforms. This, together with the control, measurement and docking systems, increases its versatility [7].

Module SM6 SEMS (see Fig. 1.7), in contrast to the module SM5 SEMS, has rods 29–31 and 32–34, which are mounted on the MP 17–19 and 20–22. They can be rotated, by means of controlled drives 35–37 and 38–40, in planes passing through the attachment points of the rods and the center of platform 1 and/or 2 perpendicular to the latter. The rods 29–31 and 32–34 can change their length by means of linear controlled actuators 41–43 and 44–46. These rods allow the module SM6 SEMS to move by simultaneously turning and changing their lengths. Sometimes the rods 29–31 and 32–34 can be elastic, which can be useful in medical robots. The SM6 SEMS module has the same advantages and disadvantages as the SM5 SEMS, but has greater flexibility due to the ability to move in space with a simultaneous change of its size and orientation.

Module SM7 SEMS (see Fig. 1.8), unlike the SM5 SEMS module, has the rods 47–49 and 50–52. These rods are mounted on the supporting platforms 9 and/or 10 of platforms 1 and/or 2. They are directed inwards and can be rotated in the plane of the supporting platforms 9 and/or 10 by means of controlled actuators 53–55 and 56–58. The rods 47–49 and 50–52 can change their length by means of linear controlled actuators 59–61 and 62–64. This can be used in the construction of robot grippers for gripping different objects. As in module SM4 SEMS, the rods 47–49 and 50–52 may be elastic.

Fig. 1.7 Design platform of the module SM6 SEMS
Fig. 1.8 Design platform of the module SM7 SEMS
Fig. 1.9 Design platform of the module SM8 SEMS
The SM7 SEMS module has the same advantages and disadvantages as the SM5 SEMS, but has greater flexibility by providing the ability to grip different objects.

Module SM8 SEMS (see Fig. 1.9), unlike the module SM5 SEMS, has rods 29–31 and 32–34, which are mounted on the MP 17–19 and/or 20–22 of platforms 1 and/or 2. They can be rotated, by means of controlled drives 35–37 and 38–40, in planes passing through the attachment points of the rods and the center of platform 1 and/or 2 perpendicular to the latter. The rods 29–31 and 32–34 can change their length by means of linear controlled actuators 41–43 and 44–46. Additionally, this module has rods 47–49 and 50–52, which are mounted on the supporting platforms 9 and/or 10 of platforms 1 and/or 2. They can be rotated in the plane of the supporting platforms 9 and/or 10 by means of controlled actuators 53–55 and 56–58. The rods 47–49 and 50–52 are directed inwards (Fig. 1.9) and can change their length by means of linear controlled actuators 59–61 and 62–64. The SM8 SEMS module has the same advantages and disadvantages as the SM5 SEMS, but has greater flexibility due to combining the additional features of modules SM6 SEMS and SM7 SEMS.
1.2 SEMS Architectures

The use of hexapod structures of the SEMS type in intelligent robots makes it possible to obtain electric drives of maximum accuracy with minimal travel time. This is achieved through the introduction of parallelism in the processes of measurement, calculation and movement and the use of high-precision piezomotors capable of operating in extreme conditions, including in outer space [3]. Various associations (serial, parallel, tree, etc.) of SEMS structures make it easy to design a variety of intelligent robots with broad technological capabilities (lightweight structures, combination of transportation and processing operations in a single mechanism, design flexibility, etc.). However, such mechanisms have more complex kinematics, which requires more advanced control algorithms and the solution of new, complex optimization problems that ensure implementation of the optimal path without jamming. In addition, the inclusion in SEMS of a wireless network interface such as Wi-Fi and of an intelligent system for strategic planning of the cooperative behavior of several SEMS will further expand the scope of IR [1]. Below we consider the kinds of SEMS architectures with examples of their application.
1.2.1 Serial Architecture

In the serial architecture, standard SEMS modules are connected in series with each other, i.e. the lower platform of each subsequent module is attached to the upper platform of the previous one (see Fig. 1.10). A typical example of such an architecture is the support-rotating device of a space radio telescope antenna [8]. As illustrated in [4], in the space radio telescope (SRT) antenna device it is advisable to use a design consisting of a series connection of hexapods. This is necessary to set the required shape and position of the mirror (dish) surfaces after deployment of the antenna, to correct them periodically, and to point the telescope at a given source. In this case the Main Mirror (MM) is attached to the upper platform of a connection of 2–3 hexapods, and the Counterreflector (CRef), or subdish, is attached to the lower platform of another hexapod whose upper platform is attached to the corresponding uprights. Neuroprocessor systems are commonly used for automatic control of the spatial position of the mirror elements of such a structure. A feature of the operation of such SEMS in space is the need to ensure the operability of the electric system in a high vacuum and, which is particularly difficult to implement, at temperatures down to 4 K. Therefore, the conventional principles of constructing control systems based on DC motors or asynchronous motors with digital controllers built on microcontrollers and general-purpose industrial computing stations cannot be used in this case. In [9] it is shown that one of the most promising solutions to this problem is to build a neuroprocessor automatic control system (NACS) using piezoelectric motors in the actuators of the SEMS legs.
Fig. 1.10 Serial architecture
Thus, the use of serially connected SEMS modules in the design of the support-rotating device of a space radio telescope antenna provides compact placement of the antenna during its delivery into orbit, fast and reliable deployment, as well as precise positioning and tracking with the help of a neuroprocessor automatic control system.
1.2.2 Parallel Architecture

In the parallel architecture, standard SEMS modules are connected in parallel with each other, i.e. the lower platforms of all the modules are attached to a supporting surface (see Fig. 1.11). A typical example of such an architecture is the adaptive surface of the main dish and subdish of a radio telescope [10], using modules of the "hexapod" type. In [9] it is shown that one of the most promising solutions to the problem of controlling such surfaces is also building a NACS using piezoelectric motors in the leg actuators, which is especially promising for space telescopes. This provides rapid and accurate changes of the surface shape depending on the change in the wavelength of the radiation and adaptation to various perturbations: thermal, weight, wind and others.

Fig. 1.11 Parallel architecture
Fig. 1.12 Architecture of the “star”
1.2.3 Architecture of the “Star”

In the “star” architecture of SEMS, series-connected standard modules are attached to the upper and/or lower platform of a basic module, which is usually larger (see Fig. 1.12). A typical example of such an architecture is the adaptive gripper of an industrial robot [11], which uses modules of the SM5 SEMS type. The NACS used in these grippers quickly and accurately adapts the surface of the palm and the grip of the fingers. Furthermore, the gripping force can be adapted to the object being gripped, thereby improving the reliability of the industrial robot performing various processing operations.
1.2.4 Architecture of the “Ring”

In the “ring” architecture of SEMS, series-connected standard modules form a ring by connecting the upper platform of the last module to the lower platform of the first one (see Fig. 1.13). A typical example of such an architecture is the adaptive gripper of an industrial robot [11] using modules of type SM4 SEMS or SM7 SEMS. In such grippers the NACS is also used to quickly and accurately adapt the surface and the gripping force to the object being gripped, thereby increasing the reliability of the industrial robot performing various processing operations.
1.2.5 Architecture of the “Tree”

A “tree” architecture may combine, within one SEMS, any of the architectures described above. At the same time, any type of universal SEMS module can be used. A typical example of such an architecture is a medical micro-robot [2] (see Fig. 1.14).
Fig. 1.13 Architecture of the “ring”
Fig. 1.14 Architecture of the “tree”
The design of the robot comprises a body 1 consisting of a series connection of modules, for example of the SM5 SEMS type; a propeller 2, which is attached to the bottom platform of the first body module and consists of a series connection of modules of the SM5 SEMS type; adaptive grippers 3, which are attached to the upper platform of the last body module and consist of a series connection of modules of the SM5 SEMS type; and paddle stops 4, which are attached to the upper and/or lower platforms of intermediate body modules and consist of serially connected modules of the SM5 SEMS type. A simple sketch of how such module compositions can be represented is given after this paragraph.
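The following is a minimal sketch of how the module compositions described in this section (serial, star, and by extension ring and tree) could be represented as a connection graph for planning or simulation purposes. The representation is an illustrative assumption, not a structure defined in the book.

```python
# Minimal sketch: SEMS architectures as a graph "module index -> modules attached to it".
# Serial: lower platform of module k+1 attaches to the upper platform of module k.
# Star: branch modules attach to the platforms of a (usually larger) base module.
# Ring and tree architectures can be built by combining these two patterns.
from typing import Dict, List

def serial_chain(n: int) -> Dict[int, List[int]]:
    """Serial architecture of n modules: 0 -> 1 -> ... -> n-1."""
    return {k: ([k + 1] if k < n - 1 else []) for k in range(n)}

def star(n_branches: int) -> Dict[int, List[int]]:
    """Star architecture: branch modules 1..n attached to base module 0."""
    return {0: list(range(1, n_branches + 1)),
            **{k: [] for k in range(1, n_branches + 1)}}

print(serial_chain(3))   # {0: [1], 1: [2], 2: []}
print(star(4))           # {0: [1, 2, 3, 4], 1: [], 2: [], 3: [], 4: []}
```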
1.3 Kinematic Models of SEMS

The main element of SEMS is the SM5 module, which, unlike hexapods, provides not only movement and rotation of the upper platform, but also compression and expansion of the upper and lower platforms. In combination with the control, measurement and docking systems, this ensures its versatility.
The SM8 SEMS module has the most complete functionality and can be called a universal module. The other modules are simplifications of it. Therefore, it is advisable to build mathematical models of this module in order to study the characteristics and properties of the standard SEMS modules discussed.
1.3.1 The Direct Kinematics Problem

In accordance with the SM5 SEMS scheme shown in Fig. 1.15, we introduce the following notation:

Fig. 1.15 Scheme SM5 SEMS

O – center of the upper SEMS platform in the initial position (after initialization);
O1 – center of the lower platform;
OX, OY, OZ – axes of the base coordinate system with the origin at the point O;
OX – directed horizontally, passing through the base point A;
OY – directed horizontally, perpendicular to the X axis;
OZ – directed vertically upwards;
1v, 2v, 3v, 4v, 5v and 6v – projections of the centers of the upper hinges of LA 1–6 on the upper platform;
θ = 120° – angle between adjacent pairs of LA hinges on the lower and upper platforms;
H – height of the SEMS, measured as the distance between the points O and O1;
Rv – radius of the upper platform;
Rn – radius of the lower platform;
ϕv – angle of rotation of point 1v relative to the point A in the XY plane in the initial position;
Δϕv – angle of rotation of point 2v relative to point 1v in the XY plane in the initial position;
A1 – projection of the reference point A on the lower platform in the initial position;
ϕn – angle of rotation of point 1n relative to the point A1 in the plane X1Y1;
Δϕn – angle of rotation of point 2n relative to point 1n in the plane X1Y1;
Ψ – angle of rotation of point 1n relative to the projection of point 1v on the lower platform in the plane X1Y1;
X1Y1Z1 – coordinate system with the origin at the point O1;
x(t) – movement of the upper platform along the axis OX;
y(t) – movement of the upper platform along the axis OY;
z(t) – movement of the upper platform along the axis OZ;
u(t) – angle of counterclockwise rotation of the upper platform about the axis X;
v(t) – angle of counterclockwise rotation of the upper platform about the axis Y (in the convention of the PI company this angle is clockwise);
w(t) – angle of counterclockwise rotation of the upper platform about the axis Z;
ΔRv(t) – compression or stretching of the upper platform;
ΔRn(t) – compression or stretching of the lower platform;
i – number of the LA;
v – the upper platform;
n – the lower platform;
riv – vector from point O to point iv;
rin – vector from point O1 to point in;
x(0) = 0, y(0) = 0, z(0) = 0, u(0) = 0, v(0) = 0, w(0) = 0 – initial position of the SEMS (after initialization).

Let us calculate the lengths of the LA in the initial (after initialization) position. In the formulas below, the subscript B refers to the upper platform and the subscript H to the lower platform. The radius vectors from the center of the upper platform O to the projections of the centers of the upper hinges of the LA on the platform (see Fig. 1.15) are:

$$r_{1B} = \left[r_{1B}^{x};\ r_{1B}^{y};\ r_{1B}^{z}\right]^{T} = \left[R_{B}\sin\varphi_{B};\ R_{B}\cos\varphi_{B};\ 0\right]^{T}, \tag{1.1}$$

$$r_{2B} = \left[r_{2B}^{x};\ r_{2B}^{y};\ r_{2B}^{z}\right]^{T} = \left[R_{B}\sin(\varphi_{B}+\Delta\varphi_{B});\ R_{B}\cos(\varphi_{B}+\Delta\varphi_{B});\ 0\right]^{T}, \tag{1.2}$$

$$r_{3B} = \left[r_{3B}^{x};\ r_{3B}^{y};\ r_{3B}^{z}\right]^{T} = \left[R_{B}\sin(\varphi_{B}+30^{\circ});\ R_{B}\cos(\varphi_{B}+30^{\circ});\ 0\right]^{T}, \tag{1.3}$$

$$r_{4B} = \left[r_{4B}^{x};\ r_{4B}^{y};\ r_{4B}^{z}\right]^{T} = \left[R_{B}\sin(\varphi_{B}+\Delta\varphi_{B}+30^{\circ});\ R_{B}\cos(\varphi_{B}+\Delta\varphi_{B}+30^{\circ});\ 0\right]^{T}, \tag{1.4}$$

$$r_{5B} = \left[r_{5B}^{x};\ r_{5B}^{y};\ r_{5B}^{z}\right]^{T} = \left[R_{B}\sin(\varphi_{B}+60^{\circ});\ R_{B}\cos(\varphi_{B}+60^{\circ});\ 0\right]^{T}, \tag{1.5}$$

$$r_{6B} = \left[r_{6B}^{x};\ r_{6B}^{y};\ r_{6B}^{z}\right]^{T} = \left[R_{B}\sin(\varphi_{B}+\Delta\varphi_{B}+60^{\circ});\ R_{B}\cos(\varphi_{B}+\Delta\varphi_{B}+60^{\circ});\ 0\right]^{T}. \tag{1.6}$$
Respectively, the radius vectors from the center of the lower platform O1 to the projections of the centers of the lower hinges of the LA on the platform are:

$$r_{1H} = \left[r_{1H}^{x};\ r_{1H}^{y};\ r_{1H}^{z}\right]^{T} = \left[R_{H}\sin\varphi_{H};\ R_{H}\cos\varphi_{H};\ -H\right]^{T}, \tag{1.7}$$

$$r_{2H} = \left[r_{2H}^{x};\ r_{2H}^{y};\ r_{2H}^{z}\right]^{T} = \left[R_{H}\sin(\varphi_{H}+\Delta\varphi_{H});\ R_{H}\cos(\varphi_{H}+\Delta\varphi_{H});\ -H\right]^{T}, \tag{1.8}$$

$$r_{3H} = \left[r_{3H}^{x};\ r_{3H}^{y};\ r_{3H}^{z}\right]^{T} = \left[R_{H}\sin(\varphi_{H}+30^{\circ});\ R_{H}\cos(\varphi_{H}+30^{\circ});\ -H\right]^{T}, \tag{1.9}$$

$$r_{4H} = \left[r_{4H}^{x};\ r_{4H}^{y};\ r_{4H}^{z}\right]^{T} = \left[R_{H}\sin(\varphi_{H}+\Delta\varphi_{H}+30^{\circ});\ R_{H}\cos(\varphi_{H}+\Delta\varphi_{H}+30^{\circ});\ -H\right]^{T}, \tag{1.10}$$

$$r_{5H} = \left[r_{5H}^{x};\ r_{5H}^{y};\ r_{5H}^{z}\right]^{T} = \left[R_{H}\sin(\varphi_{H}+60^{\circ});\ R_{H}\cos(\varphi_{H}+60^{\circ});\ -H\right]^{T}, \tag{1.11}$$

$$r_{6H} = \left[r_{6H}^{x};\ r_{6H}^{y};\ r_{6H}^{z}\right]^{T} = \left[R_{H}\sin(\varphi_{H}+\Delta\varphi_{H}+60^{\circ});\ R_{H}\cos(\varphi_{H}+\Delta\varphi_{H}+60^{\circ});\ -H\right]^{T}. \tag{1.12}$$

Therefore, the length of the i-th LA in the initial position is

$$L_{i}(0) = \left[\left(r_{iB}^{x}-r_{iH}^{x}\right)^{2}+\left(r_{iB}^{y}-r_{iH}^{y}\right)^{2}+\left(r_{iB}^{z}-r_{iH}^{z}\right)^{2}\right]^{1/2}, \tag{1.13}$$

where i = 1, 2, …, 6.
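The following is a minimal numerical sketch of Eqs. (1.1)–(1.13): it builds the hinge-point radius vectors of the upper and lower platforms and evaluates the initial leg lengths. All parameter values in the usage example (radii, angles, height) are illustrative assumptions, not data from the book.

```python
# Sketch of Eqs. (1.1)-(1.13): hinge-point radius vectors r_iB, r_iH and initial leg lengths L_i(0).
import numpy as np

def hinge_angles(phi_deg, dphi_deg):
    """Angular positions of the six hinge points: phi, phi+dphi, and the same shifted by 30 and 60 deg."""
    base = np.array([0.0, dphi_deg, 30.0, dphi_deg + 30.0, 60.0, dphi_deg + 60.0])
    return np.radians(phi_deg + base)

def upper_points(R_B, phi_B, dphi_B):
    """r_iB, i = 1..6 (Eqs. 1.1-1.6): points in the upper-platform plane z = 0."""
    a = hinge_angles(phi_B, dphi_B)
    return np.stack([R_B * np.sin(a), R_B * np.cos(a), np.zeros(6)], axis=1)

def lower_points(R_H, phi_H, dphi_H, H):
    """r_iH, i = 1..6 (Eqs. 1.7-1.12): points in the lower-platform plane z = -H."""
    a = hinge_angles(phi_H, dphi_H)
    return np.stack([R_H * np.sin(a), R_H * np.cos(a), -H * np.ones(6)], axis=1)

def leg_lengths(r_B, r_H):
    """L_i (Eq. 1.13): Euclidean distance between paired upper and lower hinge points."""
    return np.linalg.norm(r_B - r_H, axis=1)

if __name__ == "__main__":
    r_B0 = upper_points(R_B=0.10, phi_B=10.0, dphi_B=20.0)            # assumed values
    r_H0 = lower_points(R_H=0.15, phi_H=5.0, dphi_H=20.0, H=0.20)     # assumed values
    print("L_i(0) =", leg_lengths(r_B0, r_H0))
```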
When the upper platform is shifted by x(t), y(t) and z(t), the radius vectors r_{iH} do not change (r_{iH}(x(t), y(t), z(t)) = r_{iH}(0)), and the radius vectors r_{iB} change as follows:

$$r_{iB}(x(t), y(t), z(t)) = \left[r_{iB}^{x}+x(t);\ r_{iB}^{y}+y(t);\ r_{iB}^{z}+z(t)\right]^{T}. \tag{1.14}$$

Introducing the notation A = [x(t); y(t); z(t)]^T, expression (1.14) can be rewritten as follows:

$$r_{iB}(x(t), y(t), z(t)) = r_{iB}(0) + A. \tag{1.15}$$

This leads to a corresponding change in the lengths of the LA:

$$L_{i}(x(t), y(t), z(t)) = \left[\left(r_{iB}^{x}+x(t)-r_{iH}^{x}\right)^{2}+\left(r_{iB}^{y}+y(t)-r_{iH}^{y}\right)^{2}+\left(r_{iB}^{z}+z(t)-r_{iH}^{z}\right)^{2}\right]^{1/2}, \tag{1.16}$$

i.e. their relative change is

$$\Delta L_{i}(x(t), y(t), z(t)) = L_{i}(x(t), y(t), z(t)) - L_{i}(0). \tag{1.17}$$
C_u = | 1  0  0 ;  0  cos u(t)  −sin u(t) ;  0  sin u(t)  cos u(t) |,  (1.17)

C_v = | cos v(t)  0  sin v(t) ;  0  1  0 ;  −sin v(t)  0  cos v(t) |,  (1.18)

C_w = | cos w(t)  −sin w(t)  0 ;  sin w(t)  cos w(t)  0 ;  0  0  1 |.  (1.19)
Then, on turning through the angles u(t), v(t) and w(t), the radius vectors r_iB change as follows:

r_iB(u(t), v(t), w(t)) = C_u C_v C_w r_iB(0) = |r_iB^x(u, v, w); r_iB^y(u, v, w); r_iB^z(u, v, w)|^T.  (1.20)

This leads to a corresponding change in the lengths of the LA:

L_i(u, v, w) = [ (r_iB^x(u, v, w) − r_iH^x)^2 + (r_iB^y(u, v, w) − r_iH^y)^2 + (r_iB^z(u, v, w) − r_iH^z)^2 ]^{1/2},  (1.21)

i.e. their relative change is:

ΔL_i(u(t), v(t), w(t)) = L_i(u(t), v(t), w(t)) − L_i(0).  (1.22)
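The forward-kinematics relations above are easy to evaluate numerically. Below is a minimal Python sketch (illustrative only: the platform radii, hinge angles and height are assumed values, not taken from this chapter) that builds the hinge-point vectors of Eqs. (1.1)-(1.12), applies the shift A and the rotation C_u C_v C_w, and returns the leg lengths of Eqs. (1.16) and (1.21):

```python
import numpy as np

def hinge_points(radius, phi0, dphi, z):
    """Hinge projections per Eqs. (1.1)-(1.12): three hinge pairs spaced 30 deg apart."""
    angles = []
    for k in range(3):
        base = phi0 + np.radians(30.0 * k)
        angles += [base, base + dphi]
    return np.array([[radius * np.sin(a), radius * np.cos(a), z] for a in angles])

def rotation(u, v, w):
    """C = Cu * Cv * Cw, rotations about X, Y and Z per Eqs. (1.17)-(1.19)."""
    cu = np.array([[1, 0, 0], [0, np.cos(u), -np.sin(u)], [0, np.sin(u), np.cos(u)]])
    cv = np.array([[np.cos(v), 0, np.sin(v)], [0, 1, 0], [-np.sin(v), 0, np.cos(v)]])
    cw = np.array([[np.cos(w), -np.sin(w), 0], [np.sin(w), np.cos(w), 0], [0, 0, 1]])
    return cu @ cv @ cw

def leg_lengths(r_b, r_h, shift=(0, 0, 0), angles=(0, 0, 0)):
    """Leg lengths: rotate the shifted upper hinge points, as in Eq. (1.41), then take norms."""
    c = rotation(*angles)
    moved = (r_b + np.asarray(shift, float)) @ c.T
    return np.linalg.norm(moved - r_h, axis=1)

# Assumed, illustrative dimensions: R_B = 0.2 m, R_H = 0.3 m, H = 0.25 m.
r_b = hinge_points(0.2, np.radians(10), np.radians(20), 0.0)
r_h = hinge_points(0.3, np.radians(10), np.radians(20), -0.25)

L0 = leg_lengths(r_b, r_h)                                    # initial lengths L_i(0)
L1 = leg_lengths(r_b, r_h, shift=(0.01, 0.0, 0.02),
                 angles=(np.radians(2), np.radians(-1), np.radians(3)))
print(np.round(L1 - L0, 4))                                   # relative changes ΔL_i
```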
Synchronous operation of the control-rod actuators of the upper platform changes the radius of the upper platform:

R_B(t) = R_B + ΔR_B(t).  (1.23)

Synchronous operation of the control-rod actuators of the lower platform changes the radius of the lower platform:

R_H(t) = R_H + ΔR_H(t).  (1.24)
This leads to a change in the radius vectors:

r_iB(ΔR_B(t)) = |r_iB^x; r_iB^y; r_iB^z|^T + B_iB(t) = |r_iB^x(ΔR_B); r_iB^y(ΔR_B); r_iB^z(ΔR_B)|^T,  (1.25)

r_iH(ΔR_H(t)) = |r_iH^x; r_iH^y; r_iH^z|^T + B_iH(t) = |r_iH^x(ΔR_H); r_iH^y(ΔR_H); r_iH^z(ΔR_H)|^T,  (1.26)

where:

B_1B(t) = |ΔR_B(t) sin ϕ_B; ΔR_B(t) cos ϕ_B; 0|^T,  (1.27)
B_2B(t) = |ΔR_B(t) sin(ϕ_B + Δϕ_B); ΔR_B(t) cos(ϕ_B + Δϕ_B); 0|^T,  (1.28)
B_3B(t) = |ΔR_B(t) sin(ϕ_B + 30°); ΔR_B(t) cos(ϕ_B + 30°); 0|^T,  (1.29)
B_4B(t) = |ΔR_B(t) sin(ϕ_B + Δϕ_B + 30°); ΔR_B(t) cos(ϕ_B + Δϕ_B + 30°); 0|^T,  (1.30)
B_5B(t) = |ΔR_B(t) sin(ϕ_B + 60°); ΔR_B(t) cos(ϕ_B + 60°); 0|^T,  (1.31)
B_6B(t) = |ΔR_B(t) sin(ϕ_B + Δϕ_B + 60°); ΔR_B(t) cos(ϕ_B + Δϕ_B + 60°); 0|^T,  (1.32)
B_1H(t) = |ΔR_H(t) sin ϕ_H; ΔR_H(t) cos ϕ_H; 0|^T,  (1.33)
B_2H(t) = |ΔR_H(t) sin(ϕ_H + Δϕ_H); ΔR_H(t) cos(ϕ_H + Δϕ_H); 0|^T,  (1.34)
B_3H(t) = |ΔR_H(t) sin(ϕ_H + 30°); ΔR_H(t) cos(ϕ_H + 30°); 0|^T,  (1.35)
B_4H(t) = |ΔR_H(t) sin(ϕ_H + Δϕ_H + 30°); ΔR_H(t) cos(ϕ_H + Δϕ_H + 30°); 0|^T,  (1.36)
B_5H(t) = |ΔR_H(t) sin(ϕ_H + 60°); ΔR_H(t) cos(ϕ_H + 60°); 0|^T,  (1.37)
B_6H(t) = |ΔR_H(t) sin(ϕ_H + Δϕ_H + 60°); ΔR_H(t) cos(ϕ_H + Δϕ_H + 60°); 0|^T.  (1.38)

This changes the lengths of the legs:

L_i(ΔR_B, ΔR_H) = [ (r_iB^x(ΔR_B) − r_iH^x(ΔR_H))^2 + (r_iB^y(ΔR_B) − r_iH^y(ΔR_H))^2 + (r_iB^z(ΔR_B) − r_iH^z(ΔR_H))^2 ]^{1/2},  (1.39)
i.e. their relative change is:

ΔL_i(ΔR_B(t), ΔR_H(t)) = L_i(ΔR_B, ΔR_H) − L_i(0).  (1.40)
If there is a simultaneous displacement (x(t), y(t), z(t)) and rotation through the angles (u(t), v(t), w(t)) of the upper platform, as well as compression or stretching of the upper (ΔR_B(t)) and lower (ΔR_H(t)) SEMS platforms, then the radius vectors change as follows:

r_iB(x, y, z, u, v, w, ΔR_B(t)) = C_u C_v C_w (r_iB(0) + A + B_iB(t)) = |r_iB^x(x, y, z, u, v, w, ΔR_B); r_iB^y(x, y, z, u, v, w, ΔR_B); r_iB^z(x, y, z, u, v, w, ΔR_B)|^T,  (1.41)

r_iH(ΔR_H(t)) = r_iH(0) + B_iH(t) = |r_iH^x(ΔR_H); r_iH^y(ΔR_H); r_iH^z(ΔR_H)|^T.  (1.42)

Then:

L_i(x, y, z, u, v, w, ΔR_B, ΔR_H) = [ (r_iB^x(x, y, z, u, v, w, ΔR_B) − r_iH^x(ΔR_H))^2 + (r_iB^y(x, y, z, u, v, w, ΔR_B) − r_iH^y(ΔR_H))^2 + (r_iB^z(x, y, z, u, v, w, ΔR_B) − r_iH^z(ΔR_H))^2 ]^{1/2},  (1.43)

and

ΔL_i(x, y, z, u, v, w, ΔR_B, ΔR_H) = L_i(x, y, z, u, v, w, ΔR_B, ΔR_H) − L_i(0).  (1.44)
Taking the first derivative of expressions (1.16), (1.21) and (1.39) gives the linear and angular velocities of the hexapod platform. At small angles u(t), v(t) and w(t) the expressions (1.17)-(1.19) can be simplified:

C_u = | 1  0  0 ;  0  1  −u(t) ;  0  u(t)  1 |,  (1.45)
C_v = | 1  0  v(t) ;  0  1  0 ;  −v(t)  0  1 |,  (1.46)

C_w = | 1  −w(t)  0 ;  w(t)  1  0 ;  0  0  1 |.  (1.47)
Accordingly, expressions (1.20)-(1.22), (1.41), (1.43) and (1.44), as well as the expressions for the velocities of the SEMS platforms, are simplified.

Unlike the SM5 SEMS modules, the SM8 SEMS module (see Fig. 1.9) has movable rods mounted on the mounting platforms with the possibility of rotation, by means of controlled drives, in a plane passing through the attachment points of the rods and the centers of the platforms; the rods can change their length using linear control drives. In addition, this module has gripping rods, which can also change their length by means of linear control actuators. Suppose that the coordinates of the "grip vectors" of the gripping rods S_0B and S_0H in the fixed coordinate system are:

– for the upper platform:

S_0B = (x_i0^BK − x_i0^BH, y_i0^BK − y_i0^BH, z_i0^BK − z_i0^BH), i = 1, 2, 3,  (1.48)
– for the lower platform:

S_0H = (x_j0^HK − x_j0^HH, y_j0^HK − y_j0^HH, z_j0^HK − z_j0^HH), j = 1, 2, 3.  (1.49)
It should be noted that the values of these coordinates depend on the design of the model. As a result of the simultaneous displacement and rotation of the platforms, as well as their compression or stretching, the coordinates of the grip-rod vectors in the new coordinate system change, i.e.

S_T^B = (x_iT^B, y_iT^B, z_iT^B)^T = S_T^B(x, y, z, u, v, w, ΔR_B) = C_u^B · C_v^B · C_w^B · A_c^B · B_p^B · S_0B, i = 1, 2, 3,  (1.50)

S_T^H = (x_jT^H, y_jT^H, z_jT^H)^T = S_T^H(x, y, z, u, v, w, ΔR_H) = C_u^H · C_v^H · C_w^H · A_c^H · B_p^H · S_0H, j = 1, 2, 3,  (1.51)
where C_u^B, C_v^B, C_w^B (C_u^H, C_v^H, C_w^H) are the rotation matrices of the upper (lower) platform, respectively, A_c^B, A_c^H are the displacement matrices of the upper (lower) platform, and B_p^B, B_p^H are the stretching-contraction matrices of the upper (lower) platform.

Suppose we are given, relative to the upper (lower) platform, the shift length and angle of the rods 47–49 (50–52), (L_i3^B, α_i3^B) ((L_j3^H, α_j3^H)). Then, from (1.50) and (1.51), the current length of the rods L_iT^B (L_jT^H) and the current rotation angle α_iT^B (α_jT^H) relative to the upper (lower) platform are obtained as follows:

L_iT^B = [ (x_iT^B)^2 + (y_iT^B)^2 + (z_iT^B)^2 ]^{1/2}, i = 1, 2, 3,  (1.52)

L_jT^H = [ (x_jT^H)^2 + (y_jT^H)^2 + (z_jT^H)^2 ]^{1/2}, j = 1, 2, 3,  (1.53)

α_iT^B = arcsin(z_iT^B / L_iT^B), i = 1, 2, 3,  (1.54)

α_jT^H = arcsin(z_jT^H / L_jT^H), j = 1, 2, 3.  (1.55)
Taking into account (1.52)-(1.55), we obtain the following changes in the relative lengths and rotation angles of the rods for the upper (lower) platforms:

ΔL_i^B(t) = |L_i3^B − L_iT^B|, i = 1, 2, 3,   ΔL_j^H(t) = |L_j3^H − L_jT^H|, j = 1, 2, 3,  (1.56)

Δα_i^B = |α_i3^B − α_iT^B|, i = 1, 2, 3,   Δα_j^H = |α_j3^H − α_jT^H|, j = 1, 2, 3.  (1.57)
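A small numeric illustration of Eqs. (1.52)-(1.57); the transformed grip-vector coordinates and the commanded (target) length and angle below are assumed values, not taken from the book:

```python
import numpy as np

def rod_length_and_angle(s_t):
    """Current rod length and elevation angle per Eqs. (1.52)-(1.55)."""
    length = np.linalg.norm(s_t)           # L_T = (x^2 + y^2 + z^2)^(1/2)
    angle = np.arcsin(s_t[2] / length)     # alpha_T = arcsin(z / L_T)
    return length, angle

# Assumed grip-vector coordinates after the platform transform of Eq. (1.50).
s_t_upper = np.array([0.12, 0.05, 0.08])
L_T, a_T = rod_length_and_angle(s_t_upper)

# Assumed commanded length and angle of the rod.
L_3, a_3 = 0.16, np.radians(35.0)

dL = abs(L_3 - L_T)        # Eq. (1.56)
da = abs(a_3 - a_T)        # Eq. (1.57)
print(round(L_T, 4), round(np.degrees(a_T), 2), round(dL, 4), round(np.degrees(da), 2))
```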
The values of the parameters (1.56), (1.57) are used in the design of SEMS control systems. Similar formulas, with a few changes, can be obtained for the movements of rods 29–31 and 32–34.

Suppose that the coordinates of the "motion vectors" M_0B (for the rods 29–31) and M_0H (for the rods 32–34) in the fixed coordinate system are:

– for the upper platform:

M_0B = (x_i0^BK − x_i0^BH, y_i0^BK − y_i0^BH, z_i0^BK − z_i0^BH), i = 1, 2, 3,  (1.58)

– for the lower platform:

M_0H = (x_j0^HK − x_j0^HH, y_j0^HK − y_j0^HH, z_j0^HK − z_j0^HH), j = 1, 2, 3.  (1.59)

It should be noted that the values of these coordinates depend on the design of the model. Since the simultaneous displacement and rotation of the platforms, as well as their compression or stretching, change the coordinates of the "motion vectors" in the new coordinate system, we have:

M_T^B = (x_iT^B, y_iT^B, z_iT^B)^T = M_T^B(x, y, z, u, v, w, ΔR_B) = C_u^B · C_v^B · C_w^B · A_c^B · B_p^B · M_0B, i = 1, 2, 3,  (1.60)

M_T^H = (x_jT^H, y_jT^H, z_jT^H)^T = M_T^H(x, y, z, u, v, w, ΔR_H) = C_u^H · C_v^H · C_w^H · A_c^H · B_p^H · M_0H, j = 1, 2, 3,  (1.61)
where C_u^B, C_v^B, C_w^B (C_u^H, C_v^H, C_w^H) are the rotation matrices of the upper (lower) platform, respectively, A_c^B, A_c^H are the displacement matrices of the upper (lower) platform, and B_p^B, B_p^H are the stretching-contraction matrices of the upper (lower) platform.

Suppose we are given, relative to the upper (lower) platform, the shift length and angle of the rods 29–31 (32–34), (L_i3^B, α_i3^B) ((L_j3^H, α_j3^H)). Then, based on expressions (1.60), (1.61), the current length of the rods L_iT^B (L_jT^H) and the current rotation angle α_iT^B (α_jT^H) relative to the upper (lower) platform are obtained as follows:

L_iT^B = [ (x_iT^B)^2 + (y_iT^B)^2 + (z_iT^B)^2 ]^{1/2}, i = 1, 2, 3,  (1.62)

L_jT^H = [ (x_jT^H)^2 + (y_jT^H)^2 + (z_jT^H)^2 ]^{1/2}, j = 1, 2, 3,  (1.63)

α_iT^B = arccos(x_iT^B / L_iT^B), i = 1, 2, 3,  (1.64)

α_jT^H = arccos(x_jT^H / L_jT^H), j = 1, 2, 3.  (1.65)
Taking into account expressions (1.62)-(1.65), we obtain the following changes in the lengths and rotation angles of the movement rods for the upper and lower platforms:

ΔL_i^B(t) = |L_i3^B − L_iT^B|, i = 1, 2, 3,   ΔL_j^H(t) = |L_j3^H − L_jT^H|, j = 1, 2, 3,  (1.66)

Δα_i^B = |α_i3^B − α_iT^B|, i = 1, 2, 3,   Δα_j^H = |α_j3^H − α_jT^H|, j = 1, 2, 3.  (1.67)
The values of the parameters ΔL_i^B(t), ΔL_j^H(t), Δα_i^B, Δα_j^H are used in the design of SEMS control systems.
1.3.2 The Inverse Problem of Kinematics

In the direct kinematics problem it was found that if there is a simultaneous displacement (x(t), y(t), z(t)) and rotation through the angles (u(t), v(t), w(t)) of the upper platform, as well as compression or stretching of the upper (ΔR_B(t)) and lower (ΔR_H(t)) platforms, then the radius vectors change as follows:

r_iB(x, y, z, u, v, w, ΔR_B(t)) = C_u C_v C_w (r_iB(0) + A + B_iB(t)),  (1.68)

r_iH(ΔR_H(t)) = r_iH(0) + B_iH(t).  (1.69)

Consequently, the length of the leg actuators (LA) has the following form:

L_i(x, y, z, u, v, w, ΔR_B, ΔR_H) = [ (r_iB^x(x, y, z, u, v, w, ΔR_B) − r_iH^x(ΔR_H))^2 + (r_iB^y(x, y, z, u, v, w, ΔR_B) − r_iH^y(ΔR_H))^2 + (r_iB^z(x, y, z, u, v, w, ΔR_B) − r_iH^z(ΔR_H))^2 ]^{1/2},  (1.70)

where, for small angles,

C = C_u C_v C_w = | 1  −w  v ;  uv + w  1 − uvw  −u ;  uw − v  vw + u  1 |.  (1.71)
Then expressions (1.68), (1.69) can be rewritten in the following form:

r_iB = | (r_iB^x + x + B_iB^x) − w(r_iB^y + y + B_iB^y) + v(r_iB^z + z + B_iB^z) ;
        (uv + w)(r_iB^x + x + B_iB^x) + (1 − uvw)(r_iB^y + y + B_iB^y) − u(r_iB^z + z + B_iB^z) ;
        (uw − v)(r_iB^x + x + B_iB^x) + (vw + u)(r_iB^y + y + B_iB^y) + (r_iB^z + z + B_iB^z) |,  (1.72)

r_iH = | r_iH^x + B_iH^x ;  r_iH^y + B_iH^y ;  r_iH^z + B_iH^z |.  (1.73)
Given the data L_i (i = 1, 2, …, 6), R_B, R_H, from (1.70), (1.72) and (1.73) we obtain the following system of nonlinear equations:

[ (r_iB^x + x + B_iB^x) − w(r_iB^y + y + B_iB^y) + v(r_iB^z + z + B_iB^z) − (r_iH^x + B_iH^x) ]^2
+ [ (uv + w)(r_iB^x + x + B_iB^x) + (1 − uvw)(r_iB^y + y + B_iB^y) − u(r_iB^z + z + B_iB^z) − (r_iH^y + B_iH^y) ]^2
+ [ (uw − v)(r_iB^x + x + B_iB^x) + (vw + u)(r_iB^y + y + B_iB^y) + (r_iB^z + z + B_iB^z) − (r_iH^z + B_iH^z) ]^2 = L_i^2,  (1.74)

where i = 1, 2, …, 6. As a result:

f_1(x, y, z, u, v, w) = 0,
…
f_6(x, y, z, u, v, w) = 0,  (1.75)
where f_i(x, y, z, u, v, w), i = 1, …, 6, are nonlinear functions defined and continuous in predetermined domains, or, in vector form:

ξ = (x, y, z, u, v, w)^T,  F(ξ) = [f_1(ξ), …, f_6(ξ)]^T,  F(ξ) = 0,
0 ≤ x ≤ x̄, 0 ≤ y ≤ ȳ, 0 ≤ z ≤ z̄, 0 < u ≤ ū, 0 < v ≤ v̄, 0 < w ≤ w̄.  (1.76)

The solution of (1.75) and (1.76) can be found using Newton's method, i.e.

ξ^(k+1) = ξ^(k) − J^{−1}(ξ^(k)) · F(ξ^(k)), k = 0, 1, 2, …,  (1.77)
where J is the Jacobi matrix

J = | ∂f_1(ξ)/∂x  …  ∂f_1(ξ)/∂w ;  …  ;  ∂f_6(ξ)/∂x  …  ∂f_6(ξ)/∂w |.  (1.78)

Denoting for brevity by g_i1, g_i2 and g_i3 the three bracketed expressions in (1.74), the elements of J are:

∂f_i(ξ)/∂x = 2 g_i1 + 2(uv + w) g_i2 + 2(uw − v) g_i3,  (1.79)
∂f_i(ξ)/∂y = −2w g_i1 + 2(1 − uvw) g_i2 + 2(vw + u) g_i3,  (1.80)
∂f_i(ξ)/∂z = 2v g_i1 − 2u g_i2 + 2 g_i3,  (1.81)
∂f_i(ξ)/∂v = 2(r_iB^z + z + B_iB^z) g_i1 + 2[ u(r_iB^x + x + B_iB^x) − uw(r_iB^y + y + B_iB^y) ] g_i2 + 2[ w(r_iB^y + y + B_iB^y) − (r_iB^x + x + B_iB^x) ] g_i3,  (1.82)

∂f_i(ξ)/∂u = 2[ v(r_iB^x + x + B_iB^x) − vw(r_iB^y + y + B_iB^y) − (r_iB^z + z + B_iB^z) ] g_i2 + 2[ w(r_iB^x + x + B_iB^x) + (r_iB^y + y + B_iB^y) ] g_i3,  (1.83)
∂f_i(ξ)/∂w = −2(r_iB^y + y + B_iB^y) g_i1 + 2[ (r_iB^x + x + B_iB^x) − uv(r_iB^y + y + B_iB^y) ] g_i2 + 2[ u(r_iB^x + x + B_iB^x) + v(r_iB^y + y + B_iB^y) ] g_i3.  (1.84)
Since computing the inverse matrix is arduous, transform (1.77) as follows:

J(ξ^(k)) Δξ^(k) = −F(ξ^(k)), k = 0, 1, 2, …,  (1.85)

where Δξ^(k) = ξ^(k+1) − ξ^(k). We obtain a system of linear algebraic equations with respect to the correction Δξ^(k). After determining it, the next approximation is calculated:

ξ^(k+1) = ξ^(k) + Δξ^(k).  (1.86)
In this case the following algorithm can be used.

1. Set the initial approximation ξ^(0) and a small positive number ε (the accuracy). Let k = 0.
2. Calculate the Jacobi matrix and solve the system of linear algebraic equations with respect to the correction Δξ^(k):

J(ξ^(k)) Δξ^(k) = −F(ξ^(k)), k = 0, 1, 2, …

3. Calculate the next approximation:
ξ^(k+1) = ξ^(k) + Δξ^(k).

4. If Δ^(k+1) = max_i |ξ_i^(k+1) − ξ_i^(k)| ≤ ε, the process is complete and ξ* ≈ ξ^(k+1). If Δ^(k+1) > ε, set k = k + 1 and go to step 2.

Newton's method converges only if the functions are continuously differentiable and the Jacobi matrix is non-degenerate. With a good initial approximation the convergence of the method is quadratic. Other methods can be used (the simplified Newton method, Broyden's method, the secant method, etc.), but verifying the restrictions on the nonlinear functions and the Jacobi matrix for this task is time-consuming [12, 13].
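A compact numerical sketch of the algorithm above. It uses a finite-difference Jacobian instead of the analytic derivatives (1.79)-(1.84), and the residual function F_demo is only a stand-in: in a concrete SEMS module it would be replaced by the leg-length equations (1.74):

```python
import numpy as np

def newton_solve(F, xi0, eps=1e-8, max_iter=50, h=1e-6):
    """Newton iteration per Eqs. (1.77), (1.85), (1.86) with a numerical Jacobian."""
    xi = np.asarray(xi0, dtype=float)
    for _ in range(max_iter):
        f = F(xi)
        J = np.empty((f.size, xi.size))          # finite-difference Jacobi matrix
        for j in range(xi.size):
            step = np.zeros_like(xi)
            step[j] = h
            J[:, j] = (F(xi + step) - f) / h
        delta = np.linalg.solve(J, -f)           # J(xi_k) * delta_k = -F(xi_k)   (1.85)
        xi = xi + delta                           # xi_{k+1} = xi_k + delta_k      (1.86)
        if np.max(np.abs(delta)) <= eps:          # stopping criterion of step 4
            break
    return xi

# Stand-in residual (assumed): any smooth F(xi) = 0 with xi = (x, y, z, u, v, w).
def F_demo(xi):
    x, y, z, u, v, w = xi
    return np.array([
        x + u - 1.0,
        y - v,
        z + w - 0.5,
        x - 0.2 * y - 0.5,
        u + v + w - 0.3,
        z + 0.1 * np.sin(x) - 0.25,
    ])

print(np.round(newton_solve(F_demo, np.zeros(6)), 6))
```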
1.4 Synthesis of Optimal SEMS Control Algorithms

In the synthesis of control systems the first question is that of finding the control of an object or process that is best, or optimal, in one sense or another. This could be, for example, optimality in the sense of speed, i.e. reaching the goal in the shortest time, or the achievement of the goal with minimum error, etc. The largest number of optimal control methods considers control processes each of which can be described by a system of ordinary differential equations:

dx_i/dt = f_i(x_1, x_2, …, x_n, u_1, u_2, …, u_m), i = 1, 2, …, n,  (1.87)
where x_1, x_2, …, x_n are the process values, i.e. the phase coordinates of the control object, which determine its state at any given time t, and u_1, u_2, …, u_m are the control parameters (including configurable controller parameters) that determine the course of the process. For the motion of the controlled process to be determined on some time interval t_0 ≤ t ≤ t_1, it is enough that the controls be specified as functions of time on this interval:

u_j = u_j(t), j = 1, 2, …, m.  (1.88)
Then, for given initial conditions

x_i(t_0) = x_i^0, i = 1, 2, …, n,  (1.89)

the solution of (1.87) is uniquely determined. The variational problem associated with the controlled process (1.87) is as follows. An integral functional

J = ∫ from t_0 to t_1 of f_0(x_1, …, x_n, u_1, …, u_m) dt  (1.90)
is given, where f_0(.) is a given function. Each control (1.88), given on the time interval t_0 ≤ t ≤ t_1, uniquely determines the course of the controlled process, and the integral (1.90) takes a certain value. Let us assume that there exists a control (1.88) transferring the control object from the given initial state (1.89) to the prescribed terminal phase state:

x_i(t_1) = x_i^1, i = 1, 2, …, n.  (1.91)
It is required to find a control

u_j(t), j = 1, 2, …, m,  (1.92)

which will transfer the control object from the state (1.89) to the state (1.91) in such a manner that the functional (1.90) takes its minimum value. Here the time instants t_0, t_1 need not be fixed; it is only required that at the initial time the object be in the state (1.89), at the final time in the state (1.91), and that the functional (1.90) reach a minimum.

In technical problems the control parameters cannot take arbitrary values. Therefore every point that characterizes the current value of the control must satisfy (u_1, …, u_m) ∈ U. The choice of U should reflect the specifics of the control object; in many technical problems this set is closed. The introduction of such restrictions leads to non-classical problems of the calculus of variations that are best solved by computational methods.

Often there are problems on the optimal transition of the control object from an initial manifold M_0 of points in the phase space to a final manifold M_1, and the dimensions of these manifolds can be arbitrary. In particular, when both manifolds have dimension zero, we come back to the initial problem. It is obvious that in technical systems not only the control parameters but also the phase coordinates of the control object are subject to physical limitations; for example, the altitude of an airplane cannot be negative. Therefore, in general, the conditions (x_1, …, x_n) ∈ X must be fulfilled, where the set X reflects the specifics of the control object and protects its functioning.

Another optimal control problem is that of optimal contact with a moving point of the phase space. Suppose that there is a moving point in the phase space:

z_i = Q_i(t), i = 1, 2, …, n.  (1.93)
Then there arises the problem of optimally bringing the object (1.87) into agreement with the moving point (1.93). This problem is easily reduced to the first one if we introduce new variables, putting:

y_i = x_i − Q_i(t), i = 1, 2, …, n.  (1.94)
As a result of this conversion the system (1.87) is transformed into a new, though not autonomous, one, and the control problem becomes that of bringing the object y_1, y_2, …, y_n to the fixed point (0, 0, …, 0) of the phase space.

A very important case is the one in which the pursued object is itself controllable and its motion is described by a system of differential equations:

dz_i/dt = g_i(z_1, z_2, …, z_n, u_1, u_2, …, u_m).  (1.95)
The control problem is, knowing the technical possibilities of the pursued object, i.e. the system of Eqs. (1.95), and its position at any given time, to determine the control of the pursuing object at the same moment in time so that the pursuit is carried out optimally. In this formulation the problem is considered in the theory of differential games [14, 15]. It is assumed that at the initial moment of time the position of the pursued object is known, its further behavior is described probabilistically, and the process of its motion is considered Markov. Under these assumptions a control of the pursuing object (1.87) is sought for which the meeting of a small neighborhood of the object (1.87) with the pursued object (1.95), or with a small neighborhood of it, is most likely. The problem can also be reversed, i.e. a control is sought for the pursued object (1.95) for which the meeting of a small neighborhood of the object (1.95) with the pursuing object (1.87), or with a small neighborhood of it, is least likely. Moreover, there may be several pursuing objects. All these tasks belong to the theory of differential games, and the optimal solution is sought under conditions of incomplete certainty.

Finally, for complex intellectual systems with appropriate behavior, which include intelligent SEMS control systems, control may consist in selecting the best solution from a set of alternatives when the descriptions of the dynamics of the control object and of the environment are unclear and not necessarily probabilistic or statistical, and when the quality score of the control is not scalar. These problems belong to the theory of decision-making [16], i.e. to the tasks of decision making (DM) about the optimal system. In cases when it is possible to specify a scale, i.e. an objective function that determines the value of a solution, the theory and methods of mathematical programming (MP) [17] are known and well developed, allowing a qualitative and numerical analysis of the optimization problems arising for such clearly stated objectives. The uncertainty that may arise in solving decision-making problems in the fuzzy modeling of complex systems operating in a poorly formalized environment allows these MP methods to be used with more or less success in such cases [9].

In the simplest case the decision-maker (the developer) has one goal, and this goal can be formally defined as a scalar function, i.e. a quality criterion of choice. The values of the quality criterion can be obtained for any valid set of values of its arguments. It is also assumed that the region determining the parameters of choice is known, i.e. for any given point it can be determined whether it is a valid option, i.e. whether it belongs to the domain of the quality criterion. In this situation the task of choosing the solution can be formalized and described by an MP model. In other cases one should use mathematical programming in an ordinal scale (MPOS), generalized mathematical programming (GMP) or multistep generalized mathematical programming (MGMP) [18].
1.4.1 Mathematical Programming

The MP problem requires calculating an n-dimensional vector X that optimizes (drives to a maximum or minimum, depending on the formulation of the problem) the quality criterion f_0(x) subject to the restrictions f_j(x) ≤ u_j, j ∈ 1, 2, …, r, x ∈ G, where the f_j are known scalar functionals, the u_j are given numbers, and G is a predetermined set of the n-dimensional space R^n. Thus, the MP task is:

f_0(x) → ext,  f_j(x) ≤ u_j, j ∈ 1, 2, …, r,  x ∈ G ⊆ R^n.  (1.96)
Depending on the properties of the functions f = <f_0, f_1, …, f_r> and the set G, particular classes of optimization problems arise. If all the functions f_j are linear and G is a polyhedral set, this is a problem of linear mathematical programming (LMP). If nonlinear functions are found among the f_j, it is a problem of nonlinear mathematical programming (NMP). Among the nonlinear extremal problems, convex problems are singled out, in which a concave function f_0(x) is maximized under concave functional restrictions f_j(x) and a convex domain G. Problems in which G has a finite or countable number of points are treated in a special section of mathematical programming: integer or discrete mathematical programming (DMP). If the functional f_0(x) is quadratic, all other functions f_j are linear, and G is a polyhedral set, it is a quadratic mathematical programming (QMP) task, which can be reduced to LMP problems using the Kuhn-Tucker theorem [9].

In the process of solving the majority of optimization problems, including finding optimal SEMS control systems, one can arrive at MP problems when selecting the optimal solution, provided it is possible to construct a scalar quality criterion, including one built from the attributes of logical variables. With a logical-probabilistic description of uncertainties [19], optimization may seek the identity of the rows of a system of logical equations describing the closeness of the synthesized optimal logical-probabilistic model (LPM) of the control system to the ideal one, which gives the true values of the logical functions y_i with the maximum value of the probability P{y_i = 1}. Then the quality criterion can be expressed as follows:

f_0(Y) = Σ_{i=1}^{n} P{y_i = 1} → max.  (1.97)
The values of the probabilities P{y_i = 1} can be calculated approximately by the algorithm described in [19]. If the analysis of the LPM of a complex SEMS system reveals that the influence of one or another component y_i on its behavior differs, it is advisable to bring the quality criterion (1.97) to the form:

f_0(Y) = Σ_{i=1}^{n} β_i P{y_i = 1} → max,  (1.98)

where the β_i are assigned weights.

In the above approach the quality of the optimization is mainly determined by the proper construction of a binary relation describing the measure of closeness of the designed SEMS to the ideal one. This can be a time-consuming and difficult task, often associated with the solution of a number of logical problems. The quality of the formulation and solution of these problems depends on the experience and skills of the developer as the decision maker. To increase objectivity in the evaluation of the optimal model it is advisable to form a collective decision-maker, with the involvement of the customer, for the work on constructing the binary relation.

With a logical-linguistic description of uncertainties, optimization is possible as a search for identical rows of a matrix of logical equations. These equations describe the proximity of the synthesized optimal logical-linguistic model (LLM) to the ideal one, which gives the true values of the logical functions y_i with the maximum values of the membership functions μ(y_i). Then the quality criterion can be expressed as follows:

f_0(Y) = Σ_{i=1}^{n} μ(y_i) → max.  (1.99)

The values of the membership functions μ(y_i) can be calculated by the algorithms described in [19]. If the analysis of the LLM of a complex SEMS system reveals that the influence of one or another component y_i on its behavior differs, it is advisable to bring the quality criterion (1.99) to the form:

f_0(Y) = Σ_{i=1}^{n} β_i μ(y_i) → max,  (1.100)

where the β_i are assigned weights. To increase objectivity in the evaluation of the optimal model it is advisable to form a collective decision-maker, with the involvement of the customer, for the work on constructing the binary relation.

With logical-interval uncertainty, the control is a set of logical variables whose attribute parts are the intervals [a_ji, b_ji]. The task of finding the optimal control specified as a logical-interval model (LIM) can be reduced to mathematical programming problems with one of the following scalar functionals:
J_1 = Σ_{j=1}^{m} Σ_{i=1}^{n} k_ji (b_ji − a_ji) → min,  (1.101)

J_2 = Σ_{j=1}^{m} Σ_{i=1}^{n} k_ji [ (b_ji − a_ji) − c_ji ]^2 → min,  (1.102)

J_3 = Σ_{j=1}^{m} Σ_{i=1}^{n} k_ji [ (b_ji − a_ji) − (b_ji^0 − a_ji^0) ]^2 → min,  (1.103)

J_4 = Σ_{j=1}^{m} Σ_{i=1}^{n} [ k_ji^b (b_ji − b_ji^0)^2 + k_ji^a (a_ji − a_ji^0)^2 ] → min,  (1.104)

where k_ji, k_ji^b, k_ji^a are preference coefficients of the decision maker, c_ji is the interval width desired by the decision maker, and b_ji^0, a_ji^0 are the interval borders desired by the decision maker.
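A short sketch evaluating the interval functionals (1.101)-(1.104); all interval borders, weights and desired values below are assumed for illustration:

```python
import numpy as np

# Assumed data: current interval borders a, b; preference weights; desired widths c
# and desired borders a0, b0 (all illustrative, indexed j x i).
a  = np.array([[0.1, 0.2], [0.0, 0.3]]);  b  = np.array([[0.4, 0.5], [0.6, 0.7]])
a0 = np.array([[0.1, 0.2], [0.1, 0.3]]);  b0 = np.array([[0.3, 0.4], [0.5, 0.6]])
c  = np.array([[0.2, 0.2], [0.3, 0.3]])
k  = np.ones_like(a);  k_a = np.ones_like(a);  k_b = np.ones_like(a)

J1 = np.sum(k * (b - a))                                # (1.101) total weighted interval width
J2 = np.sum(k * ((b - a) - c) ** 2)                     # (1.102) deviation from desired widths
J3 = np.sum(k * ((b - a) - (b0 - a0)) ** 2)             # (1.103) deviation from desired widths b0 - a0
J4 = np.sum(k_b * (b - b0) ** 2 + k_a * (a - a0) ** 2)  # (1.104) deviation of each border
print(round(J1, 4), round(J2, 4), round(J3, 4), round(J4, 4))
```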
1.4.2 Mathematical Programming in the Ordinal Scale

Since the decision on the optimality of the synthesized system is often taken by people, for them the concept of a consistent preference of one compared option over another is frequently a more natural way to select a rational alternative than the formulation of objectives and the approach to them. In this case it is advisable to specify the feasible set of alternatives not by inequalities but by certain preference conditions of the chooser. Such situations occur, in particular, in the synthesis of SEMS, when the selection must serve a number of purposes and different persons in charge of different resources are allowed to limit the choice in their own ways. To solve such problems one can use a generalized scheme of mathematical programming, shifting from quantitative to ordinal scales, i.e. moving from models that require functional specification of the objectives and constraints of the problem to a model that takes into account the preferences of the persons involved in the selection decision. This extends the range of applications of the theory of extremal problems and may prove useful in a number of choice situations. In particular, problems of mathematical programming in an ordinal scale (MPOS) can arise in the process of solving problems of optimal synthesis of SEMS in which the optimal control is described by an LPM, LLM or LIM. Then, when selecting the optimal solution of the logical equations, the attributes of the logical variables are linguistic expressions describing preferences in the form of, for example, point scores formed on the basis of an analysis of the views of the decision-makers. In this case there is a fundamental possibility of ordering the preferences. Consider the simplest transition from extremal problems in a quantitative scale to a problem in terms of an ordinal scale.
Let G be a fixed compact set in R^n, and let g_j, j = 0, 1, 2, …, r, be reflexive, transitive and complete binary relations in G describing the preferences of the individuals limiting the possible solutions, with g_0 the preference of the decision maker. The relations g_j, j = 1, 2, …, r, can be interpreted as individual preferences that restrict, each in its own way, the set of feasible control plans. Some of the relations g_j can be defined by conventional functional inequalities that limit the ranges of the various components of the model. Denote by u_j, j = 1, 2, …, r, a priori defined points in G, and assume that a plan x_i ∈ G is admissible with respect to the j-th constraint when x_i g_j u_j, i.e. if the pair (x_i, u_j) ∈ g_j. Accordingly, a plan x_i ∈ G is called a feasible solution if x_i g_j u_j, j = 1, 2, …, r. The MPOS task is the task of choosing the "best" (in the sense of the binary relation g_0) among the feasible solutions, i.e. the one as close as possible to the reference solution x_϶. It is necessary to find a solution x_0 such that:

x_0 g_0 x_϶ at: x_϶ g_j u_j, x_0 g_j u_j, j = 1, 2, …, r, (x_϶, x_0) ∈ G.  (1.105)
When determining the optimum of a SEMS described by an LPM, the relations g_0 and g_j can accordingly be expressed as logical equations:

C_0 Q = Y,  (1.106)

C_j Q = Z,  (1.107)

where Y is the vector of logical variables y_j characterizing the optimized model parameters, whose attribute part contains the probability P{y_i = 1} = P_i^y, the point score b_i ∈ B of the closeness of the model parameter to the standard, established by the decision maker, and the significance coefficients v_i; Z is the vector of logical variables z_j characterizing the restricted model parameters, whose attribute part contains the probability P{z_i = 1} = P_i^z and the point score a_j ∈ A of the closeness to the standard, established by the parties restricting the solutions; the vector Q has the components <q_1, q_2, …, q_n, q_1 q_2, …, q_1 q_n, q_2 q_3, …, q_{n−1} q_n, q_1 q_2 q_3, …, q_{n−2} q_{n−1} q_n, …, q_1 q_2 … q_{n−1} q_n>; the identification rows C_0 and C_j contain elements 0 and 1 in a predetermined order (e.g. C_0 = |1 0 0 1 0 … 1| and C_j = |0 1 1 0 … 0|), the dimensions of the vector Q and of the rows C_0, C_j coincide; the q_i are logical variables characterizing x_϶ and x_0 [19].
m ∑
vi bi
i =1
And finding x i ,the maximum value J(x i ) subject to the limitations:
(1.108)
a_i ≥ a_i^min, i = 1, …, m,  (1.109)

where a_i^min is the minimum allowable score of the i-th parameter.

If

∃ q_i, P(q_i) < 1,  (1.110)

the MPOS problem is complicated, because now one must first calculate the probabilities P_i^y and P_i^z and then search for the x_i with the maximum value of

J(x_i) = Σ_{i=1}^{m} v_i b_i P_i^y,  (1.111)

subject to the restrictions (1.109) and additional constraints of the form:

P_i^z ≥ P_i^min, i = 1, …, m,  (1.112)

where P_i^min is the minimum allowable probability of the logical variable z_i.

The MPOS task becomes even more complicated if condition (1.110) holds and, in addition, the point scores a_ij and b_ij set by the decision-makers and by the parties restricting the solutions are associated with some probabilities P_ij^b and P_ij^a (see Tables 1.1 and 1.2). Then

J(x_i) = Σ_{i=1}^{m} v_i P_i^y Σ_{j=1}^{n} b_ij P_ij^b  (1.113)

is maximized subject to the restrictions (1.112) and

Σ_{j=1}^{n} a_ij P_ij^a ≥ A_i^min,  (1.114)
where i = 1, …, m; j = 1, …, n, and A_i^min is the minimum allowable probability-weighted score for the i-th parameter.

Table 1.1 Probability scores b_ij

Parameter of the model | Probability scores b_ij | Probability | Significance coefficients
y_1 | b_11; P_11^b   b_12; P_12^b   b_13; P_13^b   …   b_1n; P_1n^b | P_1^y | ν_1
y_2 | b_21; P_21^b   b_22; P_22^b   b_23; P_23^b   …   b_2n; P_2n^b | P_2^y | ν_2
…   | …                                                             | …     | …
y_m | b_m1; P_m1^b   b_m2; P_m2^b   b_m3; P_m3^b   …   b_mn; P_mn^b | P_m^y | ν_m
Table 1.2 Probability scores a_ij

Parameter of the model | Probability scores a_ij | Probability
y_1 | a_11; P_11^a   a_12; P_12^a   a_13; P_13^a   …   a_1n; P_1n^a | P_1^z
y_2 | a_21; P_21^a   a_22; P_22^a   a_23; P_23^a   …   a_2n; P_2n^a | P_2^z
…   | …                                                             | …
y_m | a_m1; P_m1^a   a_m2; P_m2^a   a_m3; P_m3^a   …   a_mn; P_mn^a | P_m^z
The exact calculation of the probabilities P_i^y and P_i^z, even with a small number of logical variables q_i, is a very time-consuming task which, as was shown in [7], it is advisable to solve approximately.

In determining the optimal SEMS described by an LLM, instead of probabilistic estimates of the solutions of Eqs. (1.106) and (1.107) one uses minimax estimates of the membership functions of the solutions, μ(y_i) and μ(z_i), which, as shown in [20], are calculated using the rules:

μ(q_i ∨ q_j) = max{μ(q_i), μ(q_j)} and μ(q_i ∧ q_j) = min{μ(q_i), μ(q_j)}.

When solving the simplest problem of synthesis of an optimal SEMS in which the control is specified by an LIM, the choice of the optimal solutions of the systems (1.106) and (1.107) reduces to finding the maximum value of a functional of the form (1.108). The point scores b_i for the compared control options are given by experts in the form of a system of inequalities or logical rules. If the decision maker is a team of several experts, the point scores may be given as bands [b_i1, b_i2]. Then the quality index functional is obtained as an interval J(x_i) = [J_51, J_52], computed according to the rules of interval algebra:

J_51 = Σ_{i=1}^{m} v_i b_i1,  (1.115)

J_52 = Σ_{i=1}^{m} v_i b_i2.  (1.116)
Therefore, to choose the best option in this case it is necessary to introduce rules for ranking the options based on the interval-valued quality functional. The problem of finding the best solution becomes even more complicated if the decision-maker, being a team of several experts, sets as intervals not only the point scores but also the preference coefficients ν_i = [ν_i1, ν_i2]. Then the quality index functional is again obtained as an interval J_5(x_i) = [J_51, J_52], calculated in accordance with the rules of interval algebra by the following formulas:
J_51 = Σ_{i=1}^{m} min over q, j of (v_qi b_ji),  (1.117)

J_52 = Σ_{i=1}^{m} max over q, j of (v_qi b_ji).  (1.118)
To select the best alternative in this case it is also necessary to introduce ranking rules. Most nonlinear MPOS problems have no solution procedure of acceptable complexity. However, for convex MPOS problems, i.e. problems in which the set G is convex and the preferences g_j, j = 0, 1, 2, …, r, are concave, an interactive solution method has been developed. This procedure allows an ε-approximate solution to be obtained by consistently requesting from the decision-makers local preference information g_j on corresponding pairs of options. By expanding the concept of "optimization by a binary relation" and the concept of an "ε-optimal solution in g_0", it is possible to generalize the solution method of the convex MPOS problem to arbitrary (non-reflexive, incomplete and non-transitive) binary relations [16]. It is expected that significant progress in addressing MPOS challenges can be achieved through the use of neural networks.
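A small sketch of the interval evaluation (1.115)-(1.118): point scores and preference coefficients given as bands yield an interval-valued quality index. The numbers, and the use of mid-band preferences for (1.115), (1.116), are assumptions for illustration only:

```python
import numpy as np

# Assumed expert data: score bands [b_i1, b_i2] and preference bands [v_i1, v_i2].
b = np.array([[3.0, 4.0], [2.0, 3.0], [4.0, 5.0]])
v = np.array([[0.5, 0.7], [0.2, 0.3], [0.8, 1.0]])

# Scores given as bands, crisp preferences (mid-band stand-in): Eqs. (1.115), (1.116).
v_crisp = v.mean(axis=1)
J51 = np.sum(v_crisp * b[:, 0])
J52 = np.sum(v_crisp * b[:, 1])

# Both scores and preferences as bands: Eqs. (1.117), (1.118),
# taking the per-term minimum and maximum over the interval corner products.
products = np.stack([v[:, [0]] * b, v[:, [1]] * b], axis=2).reshape(len(b), -1)
J51_full = np.sum(products.min(axis=1))
J52_full = np.sum(products.max(axis=1))
print([round(x, 3) for x in (J51, J52, J51_full, J52_full)])
```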
1.4.3 The Generalized Mathematical Programming

Generalized mathematical programming (GMP) is a methodology that applies the principles and methods of traditional mathematical programming to the case of vector criteria and vector constraints. Unlike traditional optimization theory, GMP schemes do not assess the admissibility and quality of each alternative separately but approach the solution in the process of comparing pairs of alternatives. Optimization and restrictions in GMP are interpreted in terms of preference relations. The GMP approach to choice-making thus takes into account the preferences of the decision-makers. It is assumed that each participant of the decision-making process induces certain binary relations on the set of alternatives and keeps to them in the process of solving the problem. In contrast to the MPOS problem, the GMP optimization method corresponds to selecting a system on the basis of comparing its characteristics, rather than its parameters, with the characteristics of an ideal system.

When solving the simplest problem of synthesis of an optimal SEMS, in which the control is given by an LPM, the formal record of the GMP model is:

Model 1 If ∀ q_i, P(q_i) = 1, then:
f_0(x_0) g_0 f_0(x_϶) at f_j(x_0) g_j u_j, f_j(x_϶) g_j u_j, j = 1, …, r, (x_0, x_϶) ∈ G ⊆ R^n,  (1.119)

where f_i(x_0) and f_i(x_϶) are given functions in R^{m_j} of the preferences of the decision-makers, i = 0, 1, …, r; u_j is a fixed vector in R^{m_j} of restrictions set by the decision-makers; g_j, j = 0, 1, 2, …, r, are binary relations on G ⊆ R^n. In this case the optimal model is the x_0 for which condition (1.119) holds.

Model 2 If ∀ q_i, P(q_i) < 1, then:

f_0(x_0) g_0 f_0(x_϶) → opt at f_j(x_0) g_j u_j, f_j(x_϶) g_j u_j, j = 1, …, r, (x_0, x_϶) ∈ G ⊆ R^n.  (1.120)

In Model 1, GMP reduces to the construction of the optimal binary relations g_j. In Model 2, finding the optimal SEMS by GMP, after building the binary relations g_j and selecting candidates for optimality based on their defined boundary probabilities or membership functions for equations of the type (1.105) and (1.106), requires further analysis of complex linguistic expressions, which are the attributes (w_i) of, or are formed from the attributes (w_i) of, the logical variables q_i that are components of the vector Q in the logical expressions (1.106) and (1.107) characterizing the optimal LPM (g_0) and the corresponding restrictions (g_j). If these linguistic expressions can be reduced to logical expressions of the form:

F_i = C_0^i W,  (1.121)

S_i = C_j^i W,  (1.122)

where F_i is a vector of logical variables y_j characterizing the optimized characteristics of the model, S_i is a vector of logical variables z_j characterizing the restricted characteristics of the model, W is a vector of logical variables whose components are the logical variables w_i, whose attributes can only be their probabilities P_i^w or their membership functions μ(w_i), and C_0^i and C_j^i are identification strings containing elements 0 and 1 in a predetermined order (for example C_0^i = |1 0 0 1 0 … 1| and C_j^i = |0 1 1 0 … 0|) whose dimensions match that of the vector W, then the solution can be automated by artificial intelligence software [20].
The optimization algorithm for Model 2 of GMP comprises the following steps:

1. Construction of the binary relations f_0(x_0) g_0 f_0(x_϶), f_j(x_0) g_j u_j, f_j(x_϶) g_j u_j.
2. Construction of the logical expressions C_0 Q = Y, C_j Q = Z.
3. Calculation of the probabilities P_i^y, P_i^z and of the membership functions μ(y_i), μ(z_i).
4. If the scores b_i of the closeness of the model characteristics to the standard and the scores a_i of the restricting characteristics are set by the decision-makers exactly, selection of the models for which the following conditions hold:

a_i ≥ a_i^min (i = 1, …, m); P_i^z ≥ P_i^min, or μ(z_i) ≥ μ_i^min (i = 1, …, m),

J(x_i) = Σ_{i=1}^{m} v_i b_i P_i^y ≥ J_i^min or J(x_i) = Σ_{i=1}^{m} v_i b_i μ_i(y) ≥ J_i^min.

5. If the scores b_i of the closeness of the model characteristics to the standard and the scores a_i of the restricting characteristics are set by the decision-makers not exactly but with some associated probabilities P_ij^b and P_ij^a (see Tables 1.1 and 1.2), or membership functions μ_ij(b) and μ_ij(a), selection of the models for which the following conditions hold:

Σ_{j=1}^{n} a_ij P_ij^a ≥ A_i^min or Σ_{j=1}^{n} a_ij μ_ij(a) ≥ A_i^min (i = 1, …, m; j = 1, …, n),

P_i^z ≥ P_i^min or μ(z_i) ≥ μ_i^min (i = 1, …, m);

J(x_i) = Σ_{i=1}^{m} v_i P_i^y Σ_{j=1}^{n} b_ij P_ij^b ≥ J_i^min or J(x_i) = Σ_{i=1}^{m} v_i μ_i(y) Σ_{j=1}^{n} b_ij μ_ij(b) ≥ J_i^min.
6. For the selected models, construction of the logical expressions: F_i = C_0^i W, S_i = C_j^i W.
7. Calculation of the probabilities P_i^f and P_i^s, or of the membership functions μ(f_i) and μ(s_i).
8. If the closeness scores b_i and the restricting scores a_i are set by the decision-makers exactly, search for the model for which

J(x_i) = Σ_{i=1}^{m} v_i b_i P_i^f → max or J(x_i) = Σ_{i=1}^{m} v_i b_i μ_i(f) → max

under the following conditions:

a_i ≥ a_i^min (i = 1, …, m); P_i^s ≥ P_i^min, or μ(s_i) ≥ μ_i^min (i = 1, …, m).

9. If the closeness scores b_i and the restricting scores a_i are set by the decision-makers not exactly but with some associated probabilities P_ij^b and P_ij^a (see Tables 1.1 and 1.2), or membership functions μ_ij(b) and μ_ij(a), search for the model for which

J(x_i) = Σ_{i=1}^{m} v_i P_i^f Σ_{j=1}^{n} b_ij P_ij^b → max or J(x_i) = Σ_{i=1}^{m} v_i μ_i(f) Σ_{j=1}^{n} b_ij μ_ij(b) → max

under the following conditions:

Σ_{j=1}^{n} a_ij P_ij^a ≥ A_i^min or Σ_{j=1}^{n} a_ij μ_ij(a) ≥ A_i^min (i = 1, …, m; j = 1, …, n);

P_i^s ≥ P_i^min, or μ(s_i) ≥ μ_i^min (i = 1, …, m).

When solving the simplest problem of synthesis of an optimal SEMS in which the control is specified by an LLM, the formal record of the GMP problem is:

Problem 1 If ∀ q_i, μ(q_i) = 1, then:

f_0(x_0) g_0 f_0(x_϶) at f_j(x_0) g_j u_j, f_j(x_϶) g_j u_j, j = 1, …, r, (x_0, x_϶) ∈ G ⊆ R^n,  (1.123)

where f_i(x_0) and f_i(x_϶) are given functions in R^{m_j} of the preferences of the decision-makers, i = 0, 1, …, r; u_j is a fixed vector in R^{m_j} of restrictions set by the decision-makers; g_j, j = 0, 1, 2, …, r, are binary relations on G ⊆ R^n. In this case the optimal model is the x_0 for which condition (1.123) holds.

Problem 2 If ∀ q_i, μ(q_i) < 1, then:

f_0(x_0) g_0 f_0(x_϶) → opt at f_j(x_0) g_j u_j, f_j(x_϶) g_j u_j, j = 1, …, r, (x_0, x_϶) ∈ G ⊆ R^n.  (1.124)

In this case there will be several models satisfying (1.123) with different membership functions, and to select the optimal model it is necessary to analyze other linguistic attributes of the logical variables q_i.

In Problem 1, GMP optimization reduces to the construction of the optimal binary relations g_j.
In Problem 2, finding the optimal logical-linguistic control algorithms, after building the binary relations g_j and selecting candidates for optimality based on their defined boundary membership functions, requires further analysis of complex linguistic expressions. These expressions are the attributes (w_i) of, or are generated from the attributes (w_i) of, the logical variables q_i that are components of the vector Q in logical expressions of the form (1.106) and (1.107), characterizing the optimal logical-linguistic control algorithms (g_0) and the corresponding restrictions (g_j). Thus, if these linguistic expressions can be reduced to the form of the logical expressions (1.121) and (1.122), then the solution, as in the case of an LPM, can be automated by artificial intelligence (AI) software [19].

The optimization algorithm in Problem 2 of GMP comprises the following steps:

1. Construction of the binary relations f_0(x_0) g_0 f_0(x_϶), f_j(x_0) g_j u_j, f_j(x_϶) g_j u_j.
2. Construction of the logical expressions C_0 Q = Y, C_j Q = Z.
3. Calculation of the membership functions μ(y_i) and μ(z_i).
4. If the scores b_i of the closeness of the model characteristics to the standard and the scores a_i of the restricting characteristics are set by the decision-makers exactly, selection of the models for which the following conditions hold:

a_i ≥ a_i^min (i = 1, …, m); μ(z_i) ≥ μ_i^min,

J(x_i) = Σ_{i=1}^{m} v_i b_i μ(y_i) ≥ J_i^min.

5. If the scores b_i of the closeness of the model characteristics to the standard and the restricting scores a_i are set by the decision-makers not exactly but with some corresponding membership functions μ(b_ij) and μ(a_ij), selection of the models for which the following conditions hold:

Σ_{j=1}^{n} a_ij μ(a_ij) ≥ A_i^min, μ(z_i) ≥ μ_i^min (i = 1, …, m; j = 1, …, n),

J(x_i) = Σ_{i=1}^{m} v_i μ(y_i) Σ_{j=1}^{n} b_ij μ(b_ij) ≥ J_i^min.

6. For the selected models, construction of the logical expressions: F_i = C_0^i W, S_i = C_j^i W.
7. If the closeness scores b_i and the restricting scores a_i are set by the decision-makers exactly, search for the model for which

J(x_i) = Σ_{i=1}^{m} v_i b_i μ(f_i) → max

under the following conditions:

a_i ≥ a_i^min (i = 1, …, m); μ(s_i) ≥ μ_i^min (i = 1, …, m).

8. If the closeness scores b_i and the restricting scores a_i are set by the decision-makers not exactly but with some corresponding membership functions μ(b_ij) and μ(a_ij), search for the model for which

J(x_i) = Σ_{i=1}^{m} v_i μ(f_i) Σ_{j=1}^{n} b_ij μ(b_ij) → max

under the following conditions:

a_i ≥ a_i^min (i = 1, …, m); μ(s_i) ≥ μ_i^min (i = 1, …, m).

9. If the closeness scores b_i and the restricting scores a_i are set by the decision-makers not exactly but with some corresponding membership functions μ(b_ij) and μ(a_ij), search for the model for which

J(x_i) = Σ_{i=1}^{m} v_i μ(f_i) Σ_{j=1}^{n} b_ij μ(b_ij) → max

under the following conditions:

Σ_{j=1}^{n} a_ij μ(a_ij) ≥ A_i^min, μ(s_i) ≥ μ_i^min (i = 1, …, m; j = 1, …, n).

A formal record of the problem of synthesis of an optimal SEMS in which the control is specified by an LIM, using GMP methods, is much more complicated because of the complexity of specifying interval functions. Therefore, in this case it is advisable to move from the functions to their parameters, given as intervals. Then the problem of finding an optimal SEMS is reduced to MPOS. It is necessary to introduce a numerical measure of the closeness of the characteristics of the compared systems.
In particular, if we compare one-dimensional transient characteristics (see Fig. 1.16), their proximity can be estimated by the following parameters, calculated from the baseline characteristics:

– static error

δ = |x_y − x_z| / x_z,  (1.125)

where x_y and x_z are the steady-state and the predetermined (specified) values;

– transition time

t_p = t_min, the minimum time after which the deviation from the steady value does not exceed δ_д,  (1.126)

where δ_д is the permissible value of the static error;

– overshoot

σ = |x_max − x_y| / x_y.  (1.127)
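A minimal sketch computing these three transient parameters from a sampled step response; the response curve and the 5% tolerance band below are assumed for illustration:

```python
import numpy as np

def transient_metrics(t, x, x_set, tol=0.05):
    """Static error, transition time and overshoot per Eqs. (1.125)-(1.127)."""
    x_steady = x[-1]                                   # steady value x_y (last sample)
    delta = abs(x_steady - x_set) / x_set              # static error (1.125)
    idx = np.nonzero(np.abs(x - x_steady) / x_steady > tol)[0]
    t_p = t[idx[-1] + 1] if idx.size else t[0]         # transition time (1.126)
    sigma = abs(x.max() - x_steady) / x_steady         # overshoot (1.127)
    return delta, t_p, sigma

# Assumed oscillatory step response.
t = np.linspace(0.0, 10.0, 500)
x = 1.0 - np.exp(-t) * np.cos(3.0 * t)
print([round(v, 3) for v in transient_metrics(t, x, x_set=1.02)])
```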
Then, as previously in MPOS, point scores of these parameters and their preference coefficients are introduced for the compared variants, after which the variants are ranked and the best options are selected.

Fig. 1.16 Transients. u: ideal SEMS, 1: option 1 SEMS, 2: option 2 SEMS, 3: option 3 SEMS

When the trajectories of change of the output parameters of the various SEMS embodiments are compared with the ideal path (Fig. 1.17), their closeness to the ideal can be estimated by the following values [9]:

– mean square error (MSE)

E_1 = (1/n) Σ_{i=1}^{n} (y_i^u − y_i^j)^2,  (1.128)
Fig. 1.17 Trajectories of change of output parameters. u—Ideal SEMS, 1—option 1 SEMS, 2—option 2 SEMS, 3—option 3 SEMS
where y_i^u and y_i^j are the i-th coordinates of the ideal trajectory and of the j-th compared intelligent control system, and n is the number of points along the compared trajectories (the trajectories are usually split evenly along the X axis);

– modified MSE

E_2 = (1/n) Σ_{i=1}^{n} (y_i^u − y_i^j)^{2/ε},  (1.129)

where 0 < ε < 2 is a reliability factor of the assessment, depending on the method of interpreting the results of the comparison (sign, ordinal, and others);

– generalized MSE

E_3 = (1/n) Σ_{i=1}^{n} k_i (y_i^u − y_i^j)^2,  (1.130)

where k_i is the weight of the i-th coordinate of the trajectory;

– absolute cumulative error

E_4 = Σ_{i=1}^{n} |y_i^u − y_i^j|,  (1.131)

– maximum error

E_5 = max_i |y_i^u − y_i^j|,  (1.132)

– average relative variation

E_6 = Σ_{i=1}^{n} (y_i^u − y_i^j)^2 / √( Σ_{i=1}^{n} (y_i^u − <y^j>)^2 ),  (1.133)

where <y^j> is an estimate of the average value of the coordinates y_i^j. The value E_6 provides a good closeness score in the case when the ideal trajectory is parallel to the X axis.

In the case where the compared characteristics of the various SEMS embodiments are multidimensional figures, the latter can be divided into some number of two-dimensional shapes (curves), and for each of the obtained pieces the proximity estimates of the type (1.128)-(1.133) can be used.

Obviously, GMP tasks can be converted to MPOS tasks for control specified by an LPM or LLM. This approach is the most preferable, since to obtain an acceptable complexity of solving GMP problems it is usually necessary that the characteristics of the vector functions f_j(x), the solution domain G and the binary relations g_j used satisfy some special requirements. Thus, constructive methods of analysis of GMP problems can be built for cases where the components of the vector functions f_j(x) are linear or concave functions, G is a convex set, and the binary relations g_j are continuous, concave, monotonous and regular preferences [16]. Such GMP models are called models of generalized convex programming (GCP). Meeting the challenges of GMP is currently associated with the use of neural networks and media such as A-life [18].
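A short sketch of the proximity measures (1.128)-(1.133) as reconstructed above; the two trajectories are assumed sample data:

```python
import numpy as np

def proximity_scores(y_ideal, y_j, eps=1.0, k=None):
    """Closeness of a trajectory to the ideal one, Eqs. (1.128)-(1.133)."""
    d = y_ideal - y_j
    n = d.size
    k = np.ones(n) if k is None else k
    E1 = np.mean(d ** 2)                                   # MSE             (1.128)
    E2 = np.mean((d ** 2) ** (1.0 / eps))                  # modified MSE    (1.129)
    E3 = np.mean(k * d ** 2)                               # generalized MSE (1.130)
    E4 = np.sum(np.abs(d))                                 # cumulative abs  (1.131)
    E5 = np.max(np.abs(d))                                 # maximum error   (1.132)
    E6 = np.sum(d ** 2) / np.sqrt(np.sum((y_ideal - y_j.mean()) ** 2))  # (1.133)
    return E1, E2, E3, E4, E5, E6

# Assumed trajectories: ideal vs. one compared SEMS variant.
x = np.linspace(0, 1, 50)
y_u = np.ones_like(x)                     # ideal trajectory parallel to the X axis
y_1 = 1.0 + 0.05 * np.sin(6 * np.pi * x)  # compared variant
print([round(v, 4) for v in proximity_scores(y_u, y_1)])
```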
1.4.4 Multistep Generalized Mathematical Programming

In the case where the choice function determined by the preferences of the decision-makers changes during the operation of the SEMS, multistep optimization must be used. Multistep generalized mathematical programming (MGMP) allows an arbitrary choice function on a finite set of options to be implemented. The MGMP scheme is therefore a set of consecutive GMP tasks, each of which corresponds to a fixed representation of the set of ideal characteristics X. Of course, the computational complexity of solving MGMP problems rapidly increases with the number of steps. The simplest SEMS optimization task is the problem of transition from one point X_0 of the phase space to another, quite remote, point X_k (Fig. 1.18), when on approaching it the requirements on such parameters of the transition process as the overshoot σ and the error δ are subject to change, and so, at the same time, are their significance coefficients ν_σ and ν_δ. In this case the control can be divided into a number of stages (steps), and at each step the requirements on the parameters, or the notion of a perfect transition, can be changed. At each step its own choice function is selected and, consequently, the optimal control option changes.
Fig. 1.18 Trajectories of movement. U—The ideal trajectory, g—admissible trajectory
1.5 Reduction of Logical-Probabilistic and Logical-Linguistic Constraints to Interval

The peculiarity of the control of cyberphysical systems such as SEMS is the need to choose the best trajectories (scenarios) under conditions of incomplete certainty, with limitations on the practical feasibility of the algorithms for processing data about the objects. This leads to the fact that the region determining the characteristics of the SEMS state is in one way or another sampled on the basis of linear, nonlinear or ordinal scales [19, 20], while restrictions from the operating environment are introduced in the form of logical-probabilistic and logical-linguistic expressions [21, 22]. For this reason the method of recurrent target inequalities [23] has not yet received further development that takes logical-probabilistic and logical-linguistic limitations into account in the synthesis of SEMS. However, the relevance of research aimed at developing the method of recurrent target inequalities for the synthesis of high-precision SEMS is undeniable. The solution of the problem of reducing the initial problem of multidimensional, multicriteria conditional optimization of large dimension, for the case when there are logical-probabilistic and logical-linguistic constraints, to an equivalent (with a given accuracy) finite set of systems of inequalities describing the target set is closely related to the solution of the problem of reducing logical-probabilistic and logical-linguistic constraints to interval ones.
1.5.1 Prerequisites for Reducing Logical-Probabilistic Constraints to Interval Constraints

Let [a, b] be the interval in which a random variable x with the normal distribution law lay during fuzzification (when the logical variable ξ is obtained), Φ(.) the Gauss probability integral, m the expectation of the random variable x, and σ the mean square deviation of the random variable x. Then the following holds.

Theorem 1 If a Boolean variable ξ with probability of truth P{ξ = 1} was obtained by fuzzification from a random variable x having the normal distribution law, then:

P{ξ = 1} = P{a < x < b} = Φ((b − m)/σ) − Φ((a − m)/σ).

Consequence 1 If the random variable x has the standard normal distribution law (m = 0, σ = 1), then P{ξ = 1} = Φ(b) − Φ(a).

Consequence 2 If m = 0, a = −b, then Φ(b/σ) = 0.5(1 + P{ξ = 1}).

Consequence 3 If m = (b + a)/2, σ = (m − a)/3, a* = −b*, then Φ(b*) = 0.5(1 + P{ξ = 1}), where b* = (b_ξ − m)/σ, a* = (a_ξ − m)/σ, a_ξ is the lower boundary of the interval (the fuzzification quantum) of the Boolean variable ξ, and b_ξ is the upper boundary of that interval.

Consequence 4 If a = kb, (m − 3σ)/b ≤ k ≤ 1, then P{ξ = 1} = Φ((b − m)/σ) − Φ((kb − m)/σ). This condition allows, for known k, m and σ, the value of b to be selected using a table of the function Φ(.).

Let [a, b] be the interval in which the random variable x lay during fuzzification (when the Boolean variable ξ is obtained), and [x_min; x_max] the interval in which the random variable x has the uniform distribution law. Then the following holds.

Theorem 2 If a Boolean variable ξ having the probability of truth P{ξ = 1} was obtained by fuzzification of a random variable x having the uniform distribution in the interval [x_min; x_max], then

P{ξ = 1} = P{a < x < b} = (b − a)/(x_max − x_min).

Consequence 1 If a = −b, then b = 0.5 P{ξ = 1}(x_max − x_min).

Consequence 2 If a = kb, x_min/b ≤ k ≤ 1, then b = (P{ξ = 1}(x_max − x_min))/(1 − k).
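A small sketch of how Theorems 1 and 2 can be applied numerically, with SciPy's normal CDF and quantile function standing in for the tabulated Gauss integral Φ(.); the numeric inputs are assumed for illustration:

```python
from scipy.stats import norm

def prob_truth_normal(a, b, m, sigma):
    """Theorem 1: probability that x lies in [a, b] for x ~ N(m, sigma)."""
    return norm.cdf((b - m) / sigma) - norm.cdf((a - m) / sigma)

def bound_from_probability(p, sigma):
    """Consequence 2 of Theorem 1: half-width b of a symmetric interval around m = 0."""
    return sigma * norm.ppf(0.5 * (1.0 + p))

def bound_uniform(p, x_min, x_max):
    """Theorem 2, Consequence 1: half-width for a uniform variable on [x_min, x_max]."""
    return 0.5 * p * (x_max - x_min)

print(round(prob_truth_normal(-1.0, 1.0, 0.0, 1.0), 4))   # about 0.6827
print(round(bound_from_probability(0.8, 1.0), 4))          # b with P{-b < x < b} = 0.8
print(bound_uniform(0.8, 40.0, 60.0))                      # half-width for a uniform variable on [40, 60]
```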
1.5.2 Prerequisites for Reducing Logical-Linguistic Constraints to Interval Constraints

Let [a, b] be the interval in which the fuzzy quantity x was located during fuzzification (when the linguistic variable v is obtained), and [x_min; x_max] the interval on which the membership function f(x) is set for the fuzzy quantity x. Then the following holds.

Theorem 3 If the linguistic variable v having the membership function value μ(v) was obtained by fuzzification from a fuzzy variable x having the membership function f(x) on the interval [x_min; x_max], then a = x_min + μ(v) f(x), b = x_max − μ(v) f(x).

Consequence 1 If a triangular form of the function f(x) is specified, that is,

f(x) = 0.5(x_max − x_min) at x_min ≤ x ≤ 0.5(x_max − x_min),
f(x) = −0.5(x_max − x_min) at 0.5(x_max − x_min) < x ≤ x_max,

then we get the following interval: a = x_min + μ(v)·0.5(x_max − x_min), b = x_max − μ(v)·0.5(x_max − x_min).
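A one-function sketch of Consequence 1 of Theorem 3; the membership value and the range below are assumed:

```python
def linguistic_to_interval(mu, x_min, x_max):
    """Consequence 1 of Theorem 3: interval for a triangular membership function."""
    half = 0.5 * (x_max - x_min)
    return x_min + mu * half, x_max - mu * half

# Assumed example: membership value 0.8 on the humidity range 30..70 %.
print(linguistic_to_interval(0.8, 30.0, 70.0))   # -> (46.0, 54.0)
```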
1.5.3 An Example of Solving a Fuzzy Control Problem Using Interval Reduction Theorems

The problem of fuzzy optimal control is reduced to a classical mathematical programming problem if, knowing the intervals used during fuzzification (i.e., when the logical variables were obtained), and using the theorems above, we pass from logical-probabilistic and logical–linguistic variables to interval ones.

Problem Statement In a settlement with N × M intersecting streets, it is required to find the route of a vehicle from point A with coordinates (xA, yA) to point B with coordinates (xB, yB) in the minimum time J. The following logical variables characterizing the environment are introduced:

h1 – the logical variable "dry", which corresponds to humidity from 0 to 40%;
h2 – the logical variable "wet", which corresponds to humidity from 30 to 70%;
h3 – the logical variable "very wet", which corresponds to humidity from 60 to 100%;
t1 – the logical variable "hot", which corresponds to temperature from 20 to 40 °C;
t2 – the logical variable "warm", which corresponds to temperature from 5 to 20 °C;
t3 – the logical variable "cold", which corresponds to temperature from −5 to 10 °C;
v1 – the logical variable "fast", which corresponds to speed from 40 to 60 km/h;
v2 – the logical variable "slow", which corresponds to speed from 20 to 50 km/h;
w1 – the logical variable "sharply", which corresponds to angular velocity from 4 to 8 deg/s;
w2 – the logical variable "smoothly", which corresponds to angular velocity from 2 to 5 deg/s.
The following logical-probabilistic and logical–linguistic limitations are known:

h1 ⊗ t1 → v1 ⊗ w1, P(h1 = 1) = 0.8, μ(t1) = 0.9, P(v1 = 1) = 0.8, μ(w1) = 0.8,   (1.134)
h1 ⊗ t2 → v1 ⊗ w2, P(h1 = 1) = 0.9, μ(t2) = 0.8, P(v1 = 1) = 0.7, μ(w2) = 0.7,   (1.135)
h1 ⊗ t3 → v2 ⊗ w2, P(h1 = 1) = 0.7, μ(t3) = 0.7, P(v2 = 1) = 0.6, μ(w2) = 0.6,   (1.136)
h2 ⊗ t1 → v1 ⊗ w1, P(h2 = 1) = 0.8, μ(t1) = 0.9, P(v1 = 1) = 0.75, μ(w1) = 0.75,   (1.137)
h2 ⊗ t2 → v1 ⊗ w2, P(h2 = 1) = 0.9, μ(t2) = 0.8, P(v1 = 1) = 0.65, μ(w2) = 0.65,   (1.138)
h2 ⊗ t3 → v2 ⊗ w2, P(h2 = 1) = 0.7, μ(t3) = 0.7, P(v2 = 1) = 0.7, μ(w2) = 0.7,   (1.139)
h3 ⊗ t1 → v1 ⊗ w1, P(h3 = 1) = 0.8, μ(t1) = 0.9, P(v1 = 1) = 0.6, μ(w1) = 0.6,   (1.140)
h3 ⊗ t2 → v1 ⊗ w2, P(h3 = 1) = 0.9, μ(t2) = 0.8, P(v1 = 1) = 0.9, μ(w2) = 0.9,   (1.141)
h3 ⊗ t3 → v2 ⊗ w2, P(h3 = 1) = 0.7, μ(t3) = 0.7, P(v2 = 1) = 0.8, μ(w2) = 0.8,   (1.142)
where hi, tj, vi, wj are logical variables, → is the implication sign, ⊗ is the conjunction sign, P(hi = 1), P(vi = 1) are the probabilities of truth of the logical variables hi and vi, and μ(ti), μ(wj) are the membership functions of the logical variables ti and wj.
In addition, it is known that each intersection has the configuration of a quadrilateral; φij are the rotation angles at the i-th nodes (intersections) towards the j-th ones, lij are the distances between the i-th and j-th nodes, and τij are the waiting times at the intersections. In particular, if the coordinates of all nodes s(xs, ys), q(xq, yq) and p(xp, yp) (intersections) are known, it is easy to calculate (see Fig. 1.19):
– the distances between nodes, using the relations

ls,q = √((xs − xq)² + (ys − yq)²),   (1.143)
lq,p = √((xq − xp)² + (yq − yp)²),   (1.144)
– the turning angles at the intersections, using the relations:
Fig. 1.19 The turn at the intersection
φqp = 2π − arctg((k2 − k1)/(1 + k2 k1)),   (1.145)

where k1 is the angular coefficient of the straight line connecting nodes s and q, and k2 is the angular coefficient of the straight line connecting nodes q and p. The coefficients k1 and k2 are easily obtained from the equations of these straight lines:

k1 = (yq − ys)/(xq − xs),   (1.146)
k2 = (yp − yq)/(xp − xq).   (1.147)
In this case it is necessary to minimize the following functional:

J(Mν) = a Σ(i,j) (lij/vij) + b Σ(i,j) (φij/wij) + c Σ(i,j) τij → min,   (1.148)

where a, b, c are the coefficients of preference, and vij and wij are the linear and angular velocities of movement associated with the humidity hi and the temperature tj. In Eq. (1.148), (i, j) is an element of the ordered set characterizing the considered route from the start point to the end point. For example, if the route Mν runs from node 1 to node 3, then from node 3 to node 7 and finally from node 7 to node 8 (see Fig. 1.20), then Eq. (1.148) is summed over (i, j) ∈ {(1,3); (3,7); (7,8)}.
Fig. 1.20 Dynamic configuration space
Problem Solution

1. First, a fragment containing the start and end points of the route is selected on the terrain map (Fig. 1.20). The selected fragment defines a dynamic configuration space (DCS) containing 8 intersections (nodes), characterized by the distances between adjacent nodes lij, the rotation angles at the intersections φij and the delay times at the intersections τij. The selected fragment should include all nodes adjacent to the start and end nodes. For example, if from node 1 you need to get to node 7, then for node 1 these are nodes 2, 3, 4 and 5, and for node 7 these are nodes 3, 5, 6 and 8.
2. All variants of reaching the end node of the selected fragment from the start node are constructed. For our example, these are the following route options: M1: (1–2–4–8); M2: (1–4–8); M3: (1–3–8); M4: (1–5–3–8); M5: (1–3–7–8); M6: (1–5–7–8); M7: (1–5–6–7–8).
3. The intervals of the logical variables characterizing the environment are calculated using Theorems 1 and 3 above. The transition from logical-probabilistic and logical–linguistic constraints to logical-interval constraints is carried out, and the problem is thereby reduced to a classical mathematical programming problem.
4. In any current DCS corresponding to the selected fragment, 4 constraints act at the same time (in our example these are conditions (1.134), (1.135), (1.137) and (1.138)); for each of them the minimum of the functional (1.148) over all variants of motion is calculated.
5. The route Mν corresponding to the minimum of the mean value of the functional J(Mν) is chosen as the optimal one.
Let us consider point 3 of the algorithm in more detail. Select condition (1.134). This condition involves h1, the logical variable "dry", corresponding to humidity from 0 to 40%, with probability P(h1 = 1) = 0.8, and t1, the logical variable "hot", corresponding to a temperature from 20 °C to 40 °C, with membership function μ(t1) = 0.9.
Convert the logical-probabilistic variable h1 into the logical-interval variable h1n. To do this, we use Consequence 3 of Theorem 1 and find the normal distribution function Φ(h1*) with the parameters: expectation m = (b + a)/2 and standard deviation σ = (m − a)/3, where a is the lower and b the upper bound of the quantum corresponding to the logical variable "dry" h1 during fuzzification. For this case a = 0%, b = 40%, and Φ(h1*) = 0.5(1 + P{ξ = 1}) = 0.5(1 + 0.8) = 0.9. Then we find the value of h1* from the table in [11]; in this case h1* = 1.29 and (−h1*) = −1.29. To find the bounds a1n and b1n of the logical-interval variable h1n, we use the following expressions:

−h1* = (a1n − m)/σ,   (1.149)
h1* = (b1n − m)/σ.   (1.150)
Therefore, we obtain the following lower and upper bounds: a1n = 11.5 and b1n = 28.5. Thus, we have moved from the logical-probabilistic constraint in (1.134) to the logical-interval constraint 11.5 < h1n < 28.5.
Now convert the logical–linguistic variable t1 into the logical-interval variable t1n. To do this, we use Consequence 1 of Theorem 3. For the case under consideration xmin = 20, xmax = 40. Calculate the lower and upper boundaries at1 and bt1:
at1 = xmin + μ(t1)·0.5·(xmax − xmin) = 29,
bt1 = xmax − μ(t1)·0.5·(xmax − xmin) = 31.
Thus, we have moved from the logical–linguistic restriction in (1.134) to the logical-interval constraint 29 < t1n < 31.
Table 1.3 shows the results of the transition from logical-probabilistic and logical–linguistic constraints to logical-interval constraints. At the same time, the linear speed is converted from km/h to m/s.
Let us calculate the values of the functional (1.148) for the considered route variants by specifying the distances between intersections and the angles of rotation at the intersections. Delays at intersections are taken to be 10 s. For route M1: (1–2–4–8) they are: l12 = 500 m, l24 = 400 m, l48 = 900 m, φ124 = 60°, φ248 = 30°, φ12 = 0°. Table 1.4 shows the results of the calculation of the functional J(Mν) for route M1: (1–2–4–8).
Table 1.3 The results of the transition from logical-probabilistic and logical–linguistic constraints to logical-interval constraints

Constraint | Quantization interval | Type of uncertainty | Logical-interval constraint
(1.134) | 0 < h1 < 40 | P(h1 = 1) = 0.8 | 11.5 < h1n < 28.5
(1.134) | 11.1 < v1 < 16.7 | P(v1 = 1) = 0.8 | 12.7 < v1n < 15.1
(1.134) | 20 < t1 < 40 | μ(t1) = 0.9 | 29 < t1n < 31
(1.134) | 4 < w1 < 8 | μ(w1) = 0.8 | 5.6 < w1n < 6.4
(1.135) | 0 < h1 < 40 | P(h1 = 1) = 0.9 | 9.1 < h1n < 30.9
(1.135) | 11.1 < v1 < 16.7 | P(v1 = 1) = 0.7 | 12.9 < v1n < 14.8
(1.135) | 20 < t1 < 40 | μ(t1) = 0.9 | 29 < t1n < 31
(1.135) | 4 < w1 < 8 | μ(w1) = 0.8 | 5.6 < w1n < 6.4
(1.137) | 30 < h2 < 70 | P(h2 = 1) = 0.8 | 41.5 < h2n < 58.5
(1.137) | 11.1 < v1 < 15.7 | P(v1 = 1) = 0.75 | 12.8 < v1n < 14.9
(1.137) | 20 < t1 < 40 | μ(t1) = 0.9 | 29 < t1n < 31
(1.137) | 4 < w1 < 8 | μ(w1) = 0.75 | 5.4 < w1n < 6.6
(1.138) | 30 < h2 < 70 | P(h2 = 1) = 0.9 | 39.1 < h2n < 60.9
(1.138) | 11.1 < v1 < 15.7 | P(v1 = 1) = 0.65 | 13 < v1n < 14.75
(1.138) | 5 < t2 < 15 | μ(t2) = 0.8 | 9 < t2n < 11
(1.138) | 2 < w2 < 5 | μ(w2) = 0.65 | 2.98 < w2n < 4.0
Table 1.4 The results of the calculation of the functional J(Mν)

Constraint | v (m/s) | w (deg/s) | J | J_mid | J_min | J_max | J_v
(1.134) | 12.7 | 5.6 | 187.8 | 175.52 | 163.23 | 198.66 | 180.945
(1.134) | 12.7 | 6.4 | 185.78 | | | |
(1.134) | 15.1 | 5.6 | 165.27 | | | |
(1.134) | 15.1 | 6.4 | 163.23 | | | |
(1.135) | 12.9 | 5.6 | 185.6 | 175.66 | | |
(1.135) | 12.9 | 6.4 | 183.68 | | | |
(1.135) | 14.8 | 5.6 | 167.69 | | | |
(1.135) | 14.8 | 6.4 | 165.67 | | | |
(1.137) | 12.8 | 5.4 | 187.29 | 176.615 | | |
(1.137) | 12.8 | 6.6 | 187.25 | | | |
(1.137) | 14.9 | 5.4 | 167.48 | | | |
(1.137) | 14.9 | 6.6 | 164.44 | | | |
(1.138) | 13 | 2.98 | 198.66 | 186.725 | | |
(1.138) | 13 | 4.0 | 190.96 | | | |
(1.138) | 14.75 | 2.98 | 182.84 | | | |
(1.138) | 14.75 | 4.0 | 174.44 | | | |
For route M2: (1–4–8): l14 = 900 m, l48 = 900 m, φ14 = 0°, φ148 = 60°. Accordingly, the calculated values of the functional are: Jmin = 138.57, Jmax = 178.59 and Jv = 158.58.
For route M3: (1–3–8): l13 = 700 m, l38 = 700 m, φ13 = 0°, φ138 = 30°. Accordingly: Jmin = 117.41, Jmax = 137.77 and Jv = 127.59.
For route M4: (1–5–3–8): l15 = 700 m, l53 = 200 m, l38 = 700 m, φ15 = 0°, φ153 = 150°, φ538 = 60°. Accordingly: Jmin = 168.77, Jmax = 223.55 and Jv = 196.16.
For route M5: (1–3–7–8): l13 = 700 m, l37 = 900 m, l78 = 900 m, φ13 = 0°, φ137 = 30°, φ378 = 120°. Accordingly: Jmin = 219.0, Jmax = 272.65 and Jv = 245.82.
For route M6: (1–5–7–8): l15 = 700 m, l57 = 900 m, l78 = 900 m, φ15 = 0°, φ157 = 45°, φ578 = 120°. Accordingly: Jmin = 221.68, Jmax = 277.68 and Jv = 249.51.
For route M7: (1–5–6–7–8): l15 = 700 m, l56 = 700 m, l67 = 400 m, l78 = 900 m, φ15 = 0°, φ156 = 10°, φ567 = 90°, φ678 = 45°. Accordingly: Jmin = 241.46, Jmax = 296.35 and Jv = 268.91.
The intervals of the calculated functional for all considered routes are shown in Fig. 1.21. The figure shows that route M3 is the best, because it requires the least time (the smallest values of the functional (1.148)) to complete the route on average, as well as in the worst case and with a favorable forecast. Route M7 is the worst, as it takes the most time. Routes M5 and M6 differ only slightly, and the preference between them may depend on unaccounted minor random factors, such as delays at pedestrian crossings.
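The values in Table 1.4 can be checked directly from the functional (1.148) once the interval bounds for the linear and angular velocities are known. The sketch below is a reconstruction under the assumption that the coefficients of preference are a = b = c = 1 and that a 10 s delay is incurred on each of the three segments of route M1; with these assumptions it reproduces the first rows of Table 1.4 up to rounding (the assumptions themselves are not stated explicitly in the text).

# A sketch of evaluating the functional (1.148) for route M1 (assumed a = b = c = 1).
def route_cost(lengths_m, turns_deg, v_mps, w_dps, tau_s=10.0, a=1.0, b=1.0, c=1.0):
    j_dist = a * sum(l / v_mps for l in lengths_m)       # travel-time term
    j_turn = b * sum(phi / w_dps for phi in turns_deg)   # turning-time term
    j_wait = c * tau_s * len(lengths_m)                  # waiting at intersections
    return j_dist + j_turn + j_wait

# Route M1: (1-2-4-8): l12 = 500 m, l24 = 400 m, l48 = 900 m; turns 0, 60 and 30 deg.
lengths = [500.0, 400.0, 900.0]
turns = [0.0, 60.0, 30.0]
# Interval bounds for constraint (1.134) from Table 1.3: v in [12.7, 15.1] m/s, w in [5.6, 6.4] deg/s.
for v in (12.7, 15.1):
    for w in (5.6, 6.4):
        print(f"v = {v}, w = {w}: J = {route_cost(lengths, turns, v, w):.2f}")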
1.6 SEMS Automatic Control Systems

When controlling SEMS, the problems that arise first of all are adequate perception and recognition of the environment, purposeful behavior planning, and effective implementation of the planned actions. The last problem is solved quite successfully by methods of the theory of automatic control using a computer of traditional architecture with a sequential principle of data processing. Solving the first two problems with the same computing tools is associated with significant difficulties. The reason is not only the need to process large amounts of data from spatially distributed parallel sensors operating in real time, but also the use of new intelligent data processing methods that cannot be implemented by such computers [24]. Therefore, to ensure high performance of IR control systems built on the basis of SEMS, it is expedient to use neuroprocessors in the SEMS automatic control systems, providing parallelization of the control computations [4].
Fig. 1.21 Calculated intervals of functional for M 1 –M 7 routes
1.6.1 Architecture of Control Systems of Modules SEMS

The ACS SEMS structures considered here usually have a "tree" architecture (see Fig. 1.22), comprising a Central Control Computer (CCC) and the following Neuroprocessor Blocks (NB):
– Modeling Block (MB);
– Optimization Block (OB);
– Decision Making Block (DMB);
– Block for Calculating Control Actions (BCCA);
– Vision System (VS).
The CCC solves the problems of selecting strategies for implementing the tasks required by the operator and/or the higher-level system and of generating the sequence of actions (algorithms) necessary for their realization. In addition, it should ensure prompt correction of behavior depending on the information about changes in the environment coming from the VS, and coordination of the subsystems. Operation of the CCC requires advanced abilities to acquire knowledge about the laws of the environment, to interpret, classify and identify emerging situations, and to analyze and memorize the consequences of its actions on the basis of experience (the property of self-learning).
Fig. 1.22 Architecture of ACS SEMS. CCC—central control computer, MB—modeling block, OB—optimization block, DMB—decision making block, BCCA—block for calculating control actions, VS—vision system, ACS SM—automatic control system for standard modules SEMS
The neuroprocessor blocks MB, OB, DMB, BCCA and VS together solve the problems of tactical-level control. This is primarily concerned with one of the key tasks: planning routes and trajectories of movement towards the objectives under not fully certain conditions, including various obstacles, taking into account the dynamics of the executive subsystems and current changes in the operational environment. Not only movement towards the goal along a priori given routes and paths must be provided, but also the arbitrary changes needed to reach a given target.
The VS (see Fig. 1.23), through the CCD controllers (CCCD), controls the parameters of the CCD matrices (PCCD) of the SEMS modules. The same VS collects information from the SEMS modules and transfers it to the Neuroprocessor Recognition Block (NBR). The NBR identifies surrounding objects and the state of the operating environment and transmits this information through the CCCD to the CCC. The CCD matrices may be provided with pixel switches; the pixels can change the configuration and size of the matrix by commands from the CCCD, ensuring their adaptation to the parameters of the identified objects [25, 26].
The neuroprocessor block OB performs scheduling of the optimal trajectories of the platforms of the SEMS modules and of their reconfigurations. To do this, it uses the information about the environment received from the VS and the requirements of the behavior algorithms formed in the CCC. The OB also has to ensure operational restructuring of the trajectories within the constraints and dynamics of the executive subsystems.
The neuroprocessor block MB provides a prediction of the dynamics of the executive subsystems for issuing corrections to the optimal trajectories planned by the OB and for adapting the parameters of the computed control actions. The DMB determines the conditions under which adjustments will be made in the OB and BCCA.
Fig. 1.23 The structure of VS. CCCD—Controller CCD, PCCD—parameters of the CCD matrices, NBR—Neuroprocessor Recognition Block
The neuroprocessor block BCCA receives information from the MB, OB and DMB and, in accordance with the algorithms received from the CCC, generates control actions for the ACS of the SEMS modules. These actions produce the necessary movements of the SEMS over the surrounding area, turns, compression and extension of the modules, as well as the capture and relocation of various objects.
1.6.2 Automatic Control Subsystems of Mobile Elements of Modules SEMS

The structure and operation of the ACS SM will be considered using the example of the SM8 SEMS module, which is the most versatile; other ACS SM can be obtained from it by simplification. The SM8 SEMS has a "tree" architecture (see Fig. 1.24) and contains a Neuroprocessor Calculation Block of Optimal Movement (NBCOM). This block receives information from the BCCA and calculates the optimal, jamming-free lengthening of the Legs L1–L6, the extension of the reconfiguration Control Rods CR1–CR6 of the module platform, the turns and extension of the Control Rods of Movement CRM1–CRM6, as well as the turns and extension of the Control Rods of Gripping CRG1–CRG6. The linear and angular movements calculated by the NBCOM act on the corresponding Group Controls of Legs (GCL), Group Control of Rods of Reconfiguration (GCRR), Group Control of Rods of Movement (GCRM) and Group Control of Rods of Gripping (GCRG).
The GCL (see Fig. 1.24) generates and outputs control actions to the Automatic Control System of the Legs (ACSL). In the ACSL, the Controllers of Legs (CL1–CL6) (see Fig. 1.25), using the feedback sensor signals (Linear Displacement Sensors (LDS), Angular Displacement Sensors (ADS), Force Sensors (FS) and Tactile Sensors (TS)), calculate the error signals and produce, for a given control law (for example, a PID
Fig. 1.24 The structure of ACS SM. BCCA—block for calculating control actions, NBCOM— Neuroprocessor Calculation Block Optimal Movement, GCL—Group Controls Legs, GCRR— Group Control Rods Reconfiguration, GCRM—Group Control Rods Movement, GCRG—Group Control Rod Gripping, ACSL—ACS of the Legs, ACSRR—ACS of Rods a Reconfiguration, ACSRM—ACS of Rod Movement, ACSRC—ACS of Rods Capture
law), the control actions for the corresponding Motors of Legs (ML1–ML6), which carry out the required lengthening of the Legs (L1–L6).
The GCRR (see Fig. 1.24) generates and outputs control actions to the Automatic Control System of Rods of Reconfiguration (ACSRR). The structure of the ACSRR is analogous to the structure of the ACSL (see Fig. 1.25), but it has Controllers of Control Rods of Reconfiguration (CCRR) which, using the feedback sensor signals (LDS, ADS, FS and TS), calculate the error signals and produce, for a given control law (for example, a PID law), control actions on the appropriate Motors of Rods of Reconfiguration (MRR1–MRR6), which carry out the required lengthening of the Rods of Reconfiguration (RR1–RR6).
The GCRM (see Fig. 1.24) generates and outputs control actions to the Automatic Control System of Rods of Movement (ACSRM). In contrast to the ACSL and ACSRR, this system performs both the extension of the rods and their rotation. Therefore, it contains not only the Controllers of Elongation of Rods (CER1–CER6) and Motors of Elongation of Rods (MER1–MER6) but also the Controllers of Turning of Rods (CTR1–CTR6) and Motors of Turning of Rods (MTR1–MTR6) (see Fig. 1.26).
Fig. 1.25 Structure ACSL. (CL1–CL6)—Controllers of Legs, (ML1–ML6)—Motors of Legs, (L1–L6)—Legs, TS—Tactile Sensor, LDS—Linear Displacement Sensors, ADS—Angular Displacement Sensors, FS—Force Sensors
Fig. 1.26 Structure ACSRM. (CER1–CER6)—Controllers of Elongation Rods, (MER1–MER6)— Motors of Elongation Rods, (CTR1–CTR6)—Controllers of Turning Rods, (MTR1–MTR6)Motors of Turning Rods, TS—Tactile Sensor, LDS—Linear Displacement Sensors, ADS—Angular Displacement Sensors, FS—Force Sensors
The GCRG (see Fig. 1.24) generates and outputs control actions to the Automatic Control System of Rods of Capture (ACSRC). This system is similar to the ACSRM (see Fig. 1.26), since it controls the elongation and turning of the Rods of Capture (RC1–RC6). Tactile sensors and force sensors are normally used for the adaptation of the automatic control systems. The linear and angular displacement sensors close the automatic control loop during transients.
1.7 Neuroprocessor Automatic Control System of the SEMS Module

The main element of the control system of the SEMS modules is the Neuroprocessor Automatic Control System (NACS) of the SEMS module. The interaction of the NACS with the SEMS models at various levels, needed to successfully carry out the required working operations of the Intelligent Robot (IR), is organized, as a rule, by a system of planning and control of purposeful behavior [9]. The block diagram of the NACS is shown in Fig. 1.27. The system includes a Control Computer (CC), for example based on the neuroprocessor NM 6403, with 12 analogue and 8 digital inputs and outputs for the connection of sensors and controllers, Controllers of Leg-Actuators (CLA), Motors (M), Motors of Legs (ML), Controllers of Rods of the Upper Platform (CRUP), Motors of Rods of the Upper Platform (MRUP), Controllers of Rods of the Lower Platform (CRLP), Motors of Rods of the Lower Platform (MRLP), a Block of Pressure Sensors in the Legs (BPSL), a Block of Pressure Sensors in the rods of the Upper Platform (BPSUP), a Block of Pressure Sensors in the rods of the Lower Platform (BPSLP), a Block of Moving Sensors of the Upper Platform (BMSUP), a Block of Moving Sensors of the Lower Platform (BMSLP), as well as an Upper Platform (UP) and a Lower Platform (LP).
The six PID regulators of the CLA block, the three PID regulators of the CRUP block and the three PID regulators of the CRLP block receive the control actions U from outputs 1–3 of the CC and generate the control voltage V in accordance with the law

V = k1 U + k2 U/p + k3 pU, where p = d/dt.

The regulators CLA, CRUP and CRLP are adaptive and can change the values of the coefficients k1, k2, k3 according to the adaptation signals Ua coming from outputs 4–6 of the CC.
The six motors of the legs (ML), the three motors of the rods of the upper platform (MRUP) and the three motors of the rods of the lower platform (MRLP) are described by the following system of equations:
Fig. 1.27 Block diagram of NACS. CC—control computer, CLA—Controllers Leg-Actuators, M1–M3—Motors, ML—Motors Legs, CRUP—Controllers Rods Upper Platform, MRUP—Motors Rods of the Upper Platform, CRLP—Controllers Rods Lower Platform, MRLP—Motors Rods Lower Platform, BPSL—Block of Pressure Sensors in the Legs, BPSUP—Block of Pressure Sensors in the rods of the Upper Platform, BPSLP—Block of Pressure Sensors in the rods Lower Platform, BMSUP—Block of Moving Sensors of the Upper Platform, BMSLP—Block of Moving Sensors of the Lower Platform, UP—Upper Platform, LP—Lower Platform
e = V − kv ω,
Md = K e/(T p + 1),
ω = (Md − Mc)/(J p),
I = kpe e/p,
Mc = f(I).

The parameters kv, K, T, kpe, J are determined by the design and parameters of the electric motors. The function Mc = f(I) is usually determined from experiments during pre-commissioning; as a rule, it has a nonlinear form with a dead zone and saturation.
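A minimal discrete-time sketch of one drive channel described by these equations is given below (Python; all numeric parameters are illustrative, and since the current equation is only partially recoverable from the text, the load torque Mc = f(I) is approximated here by a dead-zone/saturation function of e as an assumption rather than the authors' model).

# A sketch of the regulator law V = k1*U + k2*U/p + k3*p*U and the motor block above.
def regulator_output(u, u_integral, du_dt, k1=1.0, k2=0.2, k3=0.01):
    # V = k1*U + k2*Integral(U) + k3*dU/dt
    return k1 * u + k2 * u_integral + k3 * du_dt

def load_torque(e, dead_zone=0.05, saturation=0.1):
    # Assumed nonlinear dependence with a dead zone and saturation.
    if abs(e) < dead_zone:
        return 0.0
    return max(-saturation, min(saturation, e))

def motor_step(V, Md, omega, dt, kv=0.1, K=5.0, T=0.05, J=0.02):
    e = V - kv * omega            # e = V - kv*omega
    Md += dt / T * (K * e - Md)   # torque lag Md = K*e/(T*p + 1)
    Mc = load_torque(e)           # Mc = f(I), approximated here as f(e)
    omega += dt * (Md - Mc) / J   # omega = (Md - Mc)/(J*p)
    return Md, omega

dt, Md, omega = 1e-3, 0.0, 0.0
V = regulator_output(u=1.0, u_integral=0.0, du_dt=0.0)   # constant control action
for _ in range(2000):
    Md, omega = motor_step(V, Md, omega, dt)
print(f"angular velocity after 2 s, roughly V/kv: {omega:.2f}")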
In the blocks BPSL, BPSUP and BPSLP, piezoelectric pressure sensors with digital output are commonly used, matched to inputs 5–7 of the CC, and in the blocks BMSUP and BMSLP, optoelectronic displacement sensors with digital output, matched to inputs 2 and 4 of the CC. The operator's setting is supplied to input 1 of the CC through the man–machine interface. The CC contains a neuroprocessor and a software package.
The structure of the neuroprocessor NM 6403 is shown in Fig. 1.28. It is a high-performance microprocessor with elements of VLIW and SIMD architectures. The NM 6403 includes a control unit, address calculation and scalar processing units, and a unit supporting operations on vectors with elements of variable bit length. Furthermore, it has two identical programmable interfaces for use with various types of external memory, and two communication ports compatible with the ports of the
Fig. 1.28 Processor NM 6403 architecture
TMS320C4x, which makes it possible to construct multiprocessor systems. The neuroprocessor is designed to handle 32-bit scalar data and data of programmable bit length packed in 64-bit words.
The CC software package contains a Control SubSystem of the Upper Platform (CSSUP) and a Control SubSystem of the Lower Platform (CSSLP) operating in parallel (Fig. 1.29).
The NACS operates as follows. At input 1 of the CSSUP (Fig. 1.29), the given linear coordinates of the upper platform xBZ, yBZ, zBZ, the given angular coordinates of the upper platform uBZ, vBZ, wBZ, as well as the given radius of the upper platform RBZ, arrive from the operator or from the higher-level control system. At the same time, input 2 of the CSSUP receives from the BMSUP the current linear coordinates of the upper platform xBT, yBT, zBT, the current angular coordinates of the upper platform uBT, vBT, wBT, as well as the current radius of the upper platform RBT. From these signals, the Model of the Upper Platform (MUP) calculates the required changes Δli of the leg lengths and ΔRBi of the lengths of the control rods of the upper platform. The resulting signals are supplied to the Optimizer of the Trajectory of the Upper Platform (OTUP), which calculates the control actions Ui(t). These actions come to the Multiplier Block of the Upper Platform (MBUP) and then to output 1 in the form of the analog signals U1–U6 for the CLA and to output 3 as the analog signals U7–U9 for the CRUP. Control signals from the CLA arrive at the motors of the legs (ML), which change the lengths of the legs, providing linear and angular displacement of the upper platform. Control signals from the CRUP arrive at the Motors of the Rods of the upper platform (MRUP), which change the lengths of the rods, providing a change in the radius of the upper platform.
The MBUP blocks the passing of the signals Ui(t) when the motors are not synchronized, by receiving the signals Qi = 0 at its second input. These signals are created by the Logical Block of the Upper Platform (LBUP), which receives the signals F1–F6 from the BPSL via input 5 and the signals F7–F9 from the BPSUP via input 6. The LBUP checks rules of the following type: if for any i-th actuator Fi > Fdi (the allowable value is exceeded), then Qi = 0, otherwise Qi = 1. The work of the CSSLP is organized similarly; however, it is not required to develop control signals for the motors of the legs.
To improve the dynamics, the NACS provides adaptation of the PID controllers by changing the coefficients k1, k2 and k3. The required values of these coefficients are selected from the Memory Blocks BM1 and BM2 according to the measured values xBT, yBT, zBT, uBT, vBT, wBT, RBT or xHT, yHT, zHT, uHT, vHT, wHT, RHT. These coefficients are fed to the CLA through output 4 of the CC, to the CRUP through output 5 and to the CRLP through output 6. In BM1 these coefficients are recorded in a table; they are calculated in a Dynamic Model of the Upper Platform (DMUP) using the measured and calculated parameters of the UP, arriving at input 2 from the BMSUP, at input 5 from the BPSL, at input 6 from the BPSUP, as well as from the MUP. Similarly, in BM2 these coefficients are recorded in a table calculated in a Dynamic Model of the Lower Platform (DMLP)
Fig. 1.29 Block diagram of the interaction of software modules. CSSUP—Control SubSystem of the Upper Platform, CSSLP—control subsystem of the lower platform, MUP—Model of the Upper Platform, OTUP—Optimizer of the Trajectory of the Upper Platform, MBUP—Multiplier Block of Upper Platform, DMUP—Dynamic Model of the Upper Platform, DMLP—Dynamic Model of the Lower Platform, MLP—Model of the Lower Platform, BM—Block of the memory
using the measured and calculated parameters of the LP coming from the BMSLP, the BPSLP and the Model of the Lower Platform (MLP).
The MUP module calculates the required changes Δli of the leg lengths and ΔRBi of the lengths of the control rods of the upper platform using Eqs. (1.1)–(1.47) (see Fig. 1.15). The MLP module in principle solves the same problem as the MUP module and therefore should have the same performance. However, when the SEMS module operates off-line, the MLP module solves the simpler problem of computing ΔRH = RHT − RHZ.
The OTUP module must choose the control actions Ui(t) that ensure the transition from the initial position to the predetermined one in such a way as to avoid the jamming zone, which is described by a system of linear constraints on Δli and ΔRBi. There may be of the order of 200 such restrictions on 18 variables: xBT, yBT, zBT, uBT, vBT, wBT, RBT, t, Δl1, Δl2, Δl3, Δl4, Δl5, Δl6, ΔRB1, ΔRB2, ΔRB3, fp. The frequency of calculating and outputting the control actions should be not less than fpmin = 100 Hz. The OTLP module in principle solves the same problem as the OTUP module and therefore should have the same performance. However, when the SEMS module operates off-line, the OTLP module solves a simpler problem with 5 variables: ΔRH1, ΔRH2, ΔRH3, t, fp.
The MBUP module should develop the control actions U1–U9 by calculating the nine simple products Ui(t)·Qi at a frequency of at least fpmin = 100 Hz. It is obviously desirable that the MBLP module have similar performance, although in this case the number of products calculated in it is smaller.
The LBUP and LBLP modules should determine the values of the logical variables Qi and Qj by checking rules of the type: if Fi > Fdi, then Qi = 0, otherwise Qi = 1; if Fj > Fdj, then Qj = 0, otherwise Qj = 1. The number of such rules is not more than nine. The frequency of issuing the Qi and Qj values must be at least fpmin = 100 Hz.
The DMUP and DMLP modules are used to calculate the optimal values of the coefficients k1, k2 and k3 of the PID controllers, excluding jamming and ensuring smooth movement of the hexapod platform. The main computational procedure in these modules is matrix–vector multiplication [27]. As is known, to calculate a matrix–vector product, m × n multiplications and m × (n − 1) additions must be performed, where m is the number of rows of the matrix and n the number of columns. This amounts to about 3000 multiply–add pairs. The frequency of calculating and outputting the control actions to the control object must be at least fpmin = 100 Hz.
The bit width of the computing program blocks is determined by the type of the SEMS module and the requirements for accuracy and reliability of the control system. The bit width of the ADCs at the inputs of the CC and of the DACs at its outputs in these systems is typically at least 12, and the conversion frequency at least 200 Hz.
1.8 Anti-jamming in Automatic Control Systems of SEMS Modules

In the Standard Modules (SM) of SEMS, a state may occur in which one or more leg actuators or control rods of the platform reconfiguration enter the jamming zone [28]. This may reduce the accuracy of control and damage the drives or gears of the actuators. The existing automatic control systems of SM SEMS [4], as a rule, do not contain blocks that prevent possible jamming of the legs and control rods of the electromechanical system, which is related primarily to desynchronization of the electric drives. Solving this problem requires either a logical analysis of movements or forces, or solving the problem of choosing an optimal path of movement without jamming. In this case, the controller must be equipped with additional software and computing units and complemented by force sensors in the legs and control rods. The control system structure then becomes more complicated, but its quality and reliability increase. Usually, jamming is fought by using reduced rigidity, for example through the installation of springs in the legs and rods, as in the hexapods of the company PI [29]. However, this reduces the dynamic accuracy and performance of the ACS SM SEMS, so such a solution is not acceptable in many practical cases. Let us consider other possible options for preventing jamming in the ACS SM SEMS that are free from this drawback.
1.8.1 The Use of Force Sensors

Automatic control systems of SM SEMS usually contain: an Elongation Calculating Block (ECB), an Upper Platform Rods Control Block (UPRCB), a Lower Platform Rods Control Block (LPRCB), an Actuators Legs Control Block (ALCB), an Upper Platform Rods Motors Block (UPRMB), a Lower Platform Rods Motors Block (LPRMB), an Actuators Legs Motors Block (ALMB), an Upper Platform Rods Reducers Block (UPRRB), a Lower Platform Rods Reducers Block (LPRRB), an Actuators Legs Reducers Block (ALRB), the Legs (L) of the SM SEMS, the Upper Platform Rods (UPR), the Lower Platform Rods (LPR), the Upper Platform (UP), the Lower Platform (LP), a Legs Lengthening Sensors Block (LLSB), an Upper Platform Rods Lengthening Sensors Block (UPRLSB), a Lower Platform Rods Lengthening Sensors Block (LPRLSB), an Upper Platform Coordinates Calculating Block (UPCCB), a Lower Platform Force Sensors Block (LPFSB), an Upper Platform Force Sensors Block (UPFSB), a Legs Force Sensors Block (LFSB) and a Logic Block (LB) (Fig. 1.30).
The system works as follows. The ECB receives the set-point coordinates of the upper platform xZ(t), yZ(t), zZ(t), uZ(t), vZ(t), wZ(t) and the set-point radii of the upper and lower platforms RBZ(t), RHZ(t). Simultaneously, the ECB receives the radius RBT(t) measured in the UPRLSB, the radius RHT(t) measured in the LPRLSB, and the current coordinates xT(t), yT(t), zT(t), uT(t), vT(t), wT(t) calculated in the UPCCB on the basis of
the leg extensions measured in the LLSB. From this information, the ECB computes the required lengthenings of the legs ΔLjZ(t) and of the rods ΔRBjZ(t), ΔRHjZ(t) and transmits them to the ALCB, UPRCB and LPRCB. At the same time, these blocks receive information on the current values of the lengthenings ΔLiT(t), ΔRBiT(t), ΔRHiT(t).
The ALCB calculates the error ei(t) = ΔLiZ(t) − ΔLiT(t) and the control action

Ui(t) = k1i ei(t) + k2i ∫ ei(t) dt + k3i dei(t)/dt.

These actions are fed to the ALMB, which, through the ALRB, performs the elongation of the legs (L) and the corresponding change in the coordinates of the upper platform.
The UPRCB calculates the error eBj(t) = ΔRBjZ(t) − ΔRBjT(t) and the control action

UBj(t) = k1B eBj(t) + k2B ∫ eBj(t) dt + k3B deBj(t)/dt.

These actions are fed to the UPRMB, which, through the UPRRB, performs the elongation of the rods of the upper platform and the corresponding change in the radius of the UP.
The LPRCB calculates the error eHj(t) = ΔRHjZ(t) − ΔRHjT(t) and the control action

UHj(t) = k1H eHj(t) + k2H ∫ eHj(t) dt + k3H deHj(t)/dt.

These actions are fed to the LPRMB, which, through the LPRRB, performs the elongation of the rods of the lower platform and the corresponding change in the radius of the LP.
To prevent jamming, the actuators of the legs and of the rods of the upper and lower platforms are in this case equipped with corresponding force sensors. They are combined into the Legs Force Sensors Block (LFSB), the Upper Platform Force Sensors Block (UPFSB) and the Lower Platform Force Sensors Block (LPFSB) (see Fig. 1.30). The signals from these blocks are fed to the Logic Block (LB). It is designed to calculate the variables ξi, ξBj and ξHj, each equal to either 0 or 1. These signals make it possible to exclude dangerous stresses in the legs and rods by zeroing the errors calculated in the ALCB, UPRCB and LPRCB:

ei(t) = (ΔLiZ(t) − ΔLiT(t)) ξi,
eBj(t) = (ΔRBjZ(t) − ΔRBjT(t)) ξBj,
eHj(t) = (ΔRHjZ(t) − ΔRHjT(t)) ξHj.

The work of the LB reduces to checking logical rules of the type:
If Fi > Fdi, then ξi = 0, otherwise ξi = 1, where Fi is the force in the i-th leg measured by the LFSB and Fdi is the permissible force.
If FBj > Fdj, then ξBj = 0, otherwise ξBj = 1, where FBj is the force in the j-th rod of the upper platform measured by the UPFSB and Fdj is the permissible force.
Fig. 1.30 Block scheme of the ACS with the logic block and force sensors. ECB—Elongation Calculating Block, UPRCB—Upper Platform Rods Control Block, LPRCB—Lower Platform Rods Control Block, ALCB—Actuators Legs Control Block, UPRMB—Upper Platform Rods Motors Block, LPRMB—Lower Platform Rods Motors Block, ALMB—Actuators Legs Motors Block, UPRRB—Upper Platform Rods Reducers Block, LPRRB—Lower Platform Rods Reducers Block, ALRB—Actuators Legs Reducers Block, L—Legs of the SM SEMS, UPR—Upper Platform Rods, LPR—Lower Platform Rods, UP—Upper Platform, LP—Lower Platform, LLSB—Legs Lengthening Sensors Block, UPRLSB—Upper Platform Rods Lengthening Sensors Block, LPRLSB—Lower Platform Rods Lengthening Sensors Block, UPCCB—Upper Platform Coordinates Calculating Block, LPFSB—Lower Platform Force Sensors Block, UPFSB—Upper Platform Force Sensors Block, LFSB—Legs Force Sensors Block, LB—Logic Block
Fig. 1.31 Block scheme of the ACS with the logic block and displacement sensors. ECB—Elongation Calculating Block, UPRCB—Upper Platform Rods Control Block, LPRCB—Lower Platform Rods Control Block, ALCB—Actuators Legs Control Block, UPRMB—Upper Platform Rods Motors Block, LPRMB—Lower Platform Rods Motors Block, ALMB—Actuators Legs Motors Block, UPRRB—Upper Platform Rods Reducers Block, LPRRB—Lower Platform Rods Reducers Block, ALRB—Actuators Legs Reducers Block, L—Legs of the SM SEMS, UPR—Upper Platform Rods, LPR—Lower Platform Rods, UP—Upper Platform, LP—Lower Platform, LLSB—Legs Lengthening Sensors Block, UPRLSB—Upper Platform Rods Lengthening Sensors Block, LPRLSB—Lower Platform Rods Lengthening Sensors Block, UPCCB—Upper Platform Coordinates Calculating Block, LB—Logic Block
If FHj > Fdj, then ξHj = 0, otherwise ξHj = 1, where FHj is the force in the j-th rod of the lower platform measured by the LPFSB and Fdj is the permissible force.
This approach makes it easy to avoid jamming; although it complicates the design, it ensures smooth movement of the platform along a predetermined path. In some cases, when the requirements on the dynamics and accuracy of the ACS are not high, current sensors in the electric motors can be used instead.
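The rules above can be expressed compactly in code. The following Python sketch of the LB with force sensors is purely illustrative (the data layout, numbers and function names are assumptions, not the authors' implementation): it zeroes the error, and hence the control action, of every leg or rod whose measured force exceeds the permissible one.

# A sketch of the LB rules with force sensors (illustrative).
def lb_flags(forces, allowed):
    # xi = 0 for every drive whose measured force exceeds the permissible force.
    return [0 if f > f_d else 1 for f, f_d in zip(forces, allowed)]

def gated_errors(setpoints, measured, flags):
    # Errors e = (delta_set - delta_current) * xi, zeroed for overloaded drives.
    return [(s - m) * xi for s, m, xi in zip(setpoints, measured, flags)]

# Example: leg 3 exceeds its permissible force, so its error is forced to zero
# while the remaining legs continue to be driven.
F = [120.0, 95.0, 310.0, 101.0, 99.0, 100.0]   # measured forces, N (illustrative)
F_allowed = [250.0] * 6
xi = lb_flags(F, F_allowed)
print(xi, gated_errors([5.0] * 6, [4.2] * 6, xi))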
1.8.2 Use of Pickoffs

In this case the LB uses the signals li(t), lj(t) from the displacement sensors located in the legs and the signals RBi(t), RBj(t), RHi(t), RHj(t) from the displacement sensors located in the rods. These signals arrive at the LB from the LLSB, UPRLSB and LPRLSB (see Fig. 1.31). The LB checks the following logical rules for the legs:
If the i-th leg shortens, li(t) < li(t − 1), and the j-th leg shortens, lj(t) < lj(t − 1), and li > 0 and lj > 0 and Δ > δ and li > lj, then ξi = 0, ξj = 1.
If the i-th leg lengthens, li(t) > li(t − 1), and the j-th leg lengthens, lj(t) > lj(t − 1), and li > 0 and lj > 0 and Δ > δ and li < lj, then ξi = 1, ξj = 0.
Here Δ is the modulus of the length difference of adjacent actuators, li, lj are the lengthenings of legs i and j, δ is the tolerance on the deformation modulus, ξi = 0 stops the i-th motor, and ξi = 1 allows the i-th motor to work.
For the rods the rules are simpler:
If RBiT(t) > RBjT(t), then ξBj = 0 and ξBi = 1.
If RBjT(t) > RBiT(t), then ξBj = 1 and ξBi = 0.
If RBjT(t) = RBiT(t), then ξBj = 1 and ξBi = 1.
If RHiT(t) > RHjT(t), then ξHj = 0 and ξHi = 1.
If RHjT(t) > RHiT(t), then ξHj = 1 and ξHi = 0.
If RHjT(t) = RHiT(t), then ξHj = 1 and ξHi = 1.
These values ξi, ξj, ξBi, ξBj, ξHi and ξHj, each equal to either 0 or 1, enter the blocks ALCB, UPRCB and LPRCB as in the previous case. This makes it possible to exclude dangerous stresses in the legs and rods by zeroing the errors calculated in these blocks.
The proposed approach also allows jamming to be avoided. However, the accuracy and reliability of the LB in this case depend strongly on the tolerance δ. The situation can be improved by a suitable choice of δ. For this, the following selection algorithm with point ratings B1 (accuracy), B2 (reliability) and preference coefficients K1, K2 can be applied: choose the δ for which J = K1∗B1 + K2∗B2 is maximal, where:
If δ = 1, the accuracy is maximal and B1 = 5; if δ = 2, the accuracy is average and B1 = 3; if δ = 3, the accuracy is low and B1 = 1.
If δ = 1, the reliability is low and B2 = 1; if δ = 2, the reliability is average and B2 = 3; if δ = 3, the reliability is maximal and B2 = 5.
If reliability is preferred, then K1 = 1 and K2 = 2: J(1) = 1∗5 + 2∗1 = 7, J(2) = 1∗3 + 2∗3 = 9, J(3) = 1∗1 + 2∗5 = 11. We choose δ = 3 and set this value in the LB.
If accuracy is preferred, then K1 = 2 and K2 = 1: J(1) = 2∗5 + 1∗1 = 11, J(2) = 2∗3 + 1∗3 = 9, J(3) = 2∗1 + 1∗5 = 7. We choose δ = 1 and set this value in the LB.
In this variant, smooth movement of the platform along a given trajectory is not guaranteed.
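The selection of δ can be written out as a short scoring routine; the sketch below simply encodes the ratings and weights given above (Python, illustrative names).

# A sketch of the tolerance selection rule J(delta) = K1*B1 + K2*B2 described above.
def choose_delta(k1_accuracy, k2_reliability):
    b1 = {1: 5, 2: 3, 3: 1}   # accuracy ratings B1 for delta = 1, 2, 3
    b2 = {1: 1, 2: 3, 3: 5}   # reliability ratings B2 for delta = 1, 2, 3
    scores = {d: k1_accuracy * b1[d] + k2_reliability * b2[d] for d in (1, 2, 3)}
    return max(scores, key=scores.get), scores

print(choose_delta(k1_accuracy=1, k2_reliability=2))  # reliability preferred -> delta = 3
print(choose_delta(k1_accuracy=2, k2_reliability=1))  # accuracy preferred    -> delta = 1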
1.8.3 Using a Calculation Block of Optimal Trajectories

In this case the ECB does not calculate the required lengthenings of the legs ΔLiZ(t) and of the rods ΔRBiZ(t) and ΔRHiZ(t), but determines the optimal path (smooth and without jamming) for moving from the current coordinates xT(t), yT(t), zT(t), uT(t), vT(t), wT(t) and radii RHT(t), RBT(t) to the predetermined coordinates xZ(t), yZ(t), zZ(t), uZ(t), vZ(t), wZ(t) and radii RHZ(t), RBZ(t). Constructing such paths is a complicated and time-consuming task that may be performed in various ways [30]. With this method it is advisable, at a preliminary design and debugging stage, to create a matrix of jamming zones corresponding to the forbidden paths. Then there is no need to perform additional complicated calculations in real time, which increases the speed of the control computations. However, this requires a large number of calculations during design and debugging. The problem is further complicated by the fact that the first calculations are made using the theoretical parameters of the device and the load. Unfortunately, manufacturing and assembly inaccuracies are allowed that cannot be accounted for in these calculations. This leads to cases where, because of such deviations, not all jamming zones are taken into account in advance; therefore, during debugging, it is necessary to refine the jamming zones. Also, during use of the device, uneven wear of the individual actuators under uneven load may appear, which can adversely affect safety. The latter would require adjusting the table of jamming zones during the operation of high-precision systems. In addition, this method does not prevent jamming when there are significant differences in the dynamic characteristics of the automatic control systems of the individual legs and rods. Accordingly, this method of dealing with jamming is suitable for use in combination with methods 1 or 2.
1.9 Control of Vitality and Reliability Analysis

The main principles used for durability (vitality) control of complex systems are adaptation, dynamic natural selection (or hot reservation), stress, compensation and borrowing, and stupor or enabling of the emergency mode [31]. For deviations of the inner system state due to various failures, the needed durability is traditionally achieved with the hot reservation principle, which is similar to dynamic natural selection in living organisms. The signal to enable the mechanism of dynamic natural selection, that is, the switching of channels and blocks to the spare ones, is the observed deviation of a block's inner state beyond its limits, which can be measured by the expected values of the block's parameters or by its failure probability [32]. The problem of ensuring the system's durability, or its reliable functioning, when the deviation of the system's inner state exceeds the allowed thresholds, was stated quite a long time ago and has been studied extensively [18]. However, when estimating the change over time of the complex logical function that describes the system's failure probability with account of the relations between the blocks (excluding only the simplest schemes), certain complexities and ambiguities appear [9]. The problem of accounting for the influence of the parameters of the system's blocks on the parameters of the blocks they are related to, while calculating the failure probability of a complex system over time, still does not have a practically acceptable solution [19], since the analytical treatment of this issue in a complex system invariably leads to very complex computations. Let us examine one possible approach to the problem of accounting for the relations between the blocks of a complex system.
1.9.1 A Simplified Accounting of the Relations Between the Blocks of a Complex System

It is clear that with the passing of the usage time T of a complex system the probabilities of correct functioning of its blocks Pic(T) decrease according to an exponential law [33]:

Pic0(ti) = exp(−αi0 ti),   (1.151)

where ti is the usage time of the i-th block of the system and αi0 is the decrease coefficient, which is found from Eq. (1.151), since the mean time between failures ti0 and the initial probability of correct functioning Pic0 are usually given for the system blocks. This decrease of the probabilities may be described by the following change of the expected values of the blocks' parameters [33]:

Pi0(T) = 1 − Φ((bi − mi)/σi) + Φ((−bi − mi)/σi) = 1 − Pic0(ti),   (1.152)
where bi is the maximum allowed value of the i-th parameter, mi is the expected value of the i-th parameter, and σi is the root mean square deviation of the i-th parameter. Φ(x) is the probability integral, which cannot be expressed through elementary functions, but tables of its values are available [33], or its approximate value can be found as the sum of a decreasing series. Since the initial values Pib0, bi and mi0 are usually known for each block, the root mean square value σi for each block may be found from the following equations:

Pib0 = Φ((bi − mi0)/σi) − Φ((−bi − mi0)/σi),   (1.153)

Φ(−∞) = 0.   (1.154)
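One natural reading of Eq. (1.151) is that the decrease coefficient is recovered from the given mean time between failures and the given initial probability of correct functioning. The sketch below (Python; the interpretation and the function name are assumptions of this illustration) uses the numbers of the example in Sect. 1.9.4.

# A sketch of Eq. (1.151): alpha_i0 = -ln(P_ic0)/t_i0, then P_ic0(t) = exp(-alpha_i0*t).
import math

def decrease_coefficient(p_correct_0, t_0):
    return -math.log(p_correct_0) / t_0

alpha = decrease_coefficient(p_correct_0=0.996, t_0=27_000.0)   # hours
for t in (0.0, 10_000.0, 50_000.0, 100_000.0):
    print(t, math.exp(-alpha * t))   # probability of correct functioning at time t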
It is also clear that the approach of the expected values of the i-th block's parameters to the dangerous (critical) threshold ci and, furthermore, to the maximum allowed threshold bi also affects the parameters of the related blocks. For instance, a change in the output voltage of the power supply block also affects the amplification coefficient of the related amplification block. However, the problem of estimating the change over time of the complex logical function that describes the system's failure probability with account of the relations between the blocks still does not have a practically acceptable solution [33], since the analytical accounting of this fact in a complex system leads to very complex computations. Thus, we propose the following simplified approach to the problem.
When the expected value mi(ti) of the block's parameters at some time moment tik falls into the dangerous zone ci ≤ |mi| < bi, we set the coefficients w(i) = 2, u(i) = 3 for this block. Here, w(i) is the state characteristic of the i-th block (w(i) = 3: broken, w(i) = 2: dangerous, w(i) = 1: normal), and u(i) is the characteristic of the proximity of the i-th block to the nearest broken or dangerous block (u(i) = 0: far, u(i) = 1: connected via a single block, u(i) = 2: directly connected, u(i) = 3: the block itself). After that, we perform an expected value shift:

mi = mi + σi w(i) u(i) mi μ(mi),   (1.155)

where μ(mi) is the membership function of the current expected value in a certain interval, which we calculate as follows (Fig. 1.32):
1. If −∞ < mi < −bi + mi0, then μ(mi) = 1.
2. If −bi + mi0 ≤ mi ≤ −ci + mi0, then
μ(mi) = max{(mi − mi0 + ci)/(ci − bi); (mi − mi0 + bi)/(bi − ci)}.
3. If −ci + mi0 ≤ mi ≤ mi0, then
μ(mi) = max{(−mi + mi0)/ci; (mi − mi0 + ci)/ci}.
4. If mi0 ≤ mi < ci + mi0, then
μ(mi) = max{(−mi + mi0 + ci)/ci; (mi − mi0)/ci}.
5. If ci + mi0 ≤ mi ≤ bi + mi0, then
μ(mi) = max{(mi − mi0 − bi)/(ci − bi); (mi − mi0 − ci)/(bi − ci)}.
6. If bi + mi0 ≤ mi < ∞, then μ(mi) = 1.
Fig. 1.32 Fuzzification
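Rules 1–6 translate directly into a small function; the sketch below (Python, with illustrative thresholds: the example in Sect. 1.9.4 uses ci = bi, which would make the denominators (ci − bi) vanish, so distinct values are used here) computes μ(mi) for a given current expected value.

# A direct transcription of rules 1-6 for the membership function mu(m_i) (illustrative).
def membership(m, m0, b, c):
    if m < -b + m0 or m > b + m0:
        return 1.0                                                  # rules 1 and 6
    if -b + m0 <= m <= -c + m0:                                     # rule 2
        return max((m - m0 + c) / (c - b), (m - m0 + b) / (b - c))
    if -c + m0 <= m <= m0:                                          # rule 3
        return max((-m + m0) / c, (m - m0 + c) / c)
    if m0 <= m < c + m0:                                            # rule 4
        return max((-m + m0 + c) / c, (m - m0) / c)
    return max((m - m0 - b) / (c - b), (m - m0 - c) / (b - c))      # rule 5

for m in (-0.2, -0.12, -0.05, 0.0, 0.05, 0.12, 0.2):
    print(m, round(membership(m, m0=0.0, b=0.15, c=0.10), 3))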
Then we determine the numbers j of the blocks that are directly related to the "dangerous" block. For them, we set the coefficients w(j) = 1, u(j) = 2 and perform the expected value shift:

mj = mj + σj w(j) u(j) mj μ(mj),   (1.156)

where μ(mj) is calculated by the same rules (1–6). After that, we determine the numbers q of the blocks that are connected to block i via a single block. For them, we set w(q) = 1, u(q) = 1 and perform the expected value shift:

mq = mq + σq w(q) u(q) mq μ(mq).   (1.157)
If now, after recalculating the expected values, we find that some block has its absolute value above the allowed threshold (|mi| > bi), this block is considered broken, its failure probability is set to one (Pi0 = 1), and the whole system's failure probability is set to one (P0 = 1). Otherwise, we need to calculate the new failure probabilities of all blocks from the new expected values according to formula (1.152), and then to calculate the failure probability of the whole system, using, for instance, the polynomial formula [33]:

P0 = (−1)^0 Σi Pi0(T) + (−1)^1 Σij Pi0(T) Pj0(T) + (−1)^2 Σijk Pi0(T) Pj0(T) Pk0(T) + … + Πi Pi0(T).   (1.158)
Thus, in the proposed solution to the relations-accounting problem, when a dangerous situation arises, we immediately change the expected parameter values of the given block and of its related blocks, which allows us, in a first approximation, to account for the mutual impact of the blocks' parameters on the change of the failure probabilities over time.
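For independent block failures, the alternating sums in (1.158) are the inclusion–exclusion expansion of the probability that at least one block fails, so the whole expression can be cross-checked against the closed form 1 − Π(1 − Pi0); this equivalence is an observation added here for verification, not a formula from the text. A short sketch (Python, illustrative probabilities):

# A sketch of the polynomial formula (1.158), assuming the alternating sign pattern
# continues through the last term, together with the equivalent closed form.
from itertools import combinations
from math import prod

def system_failure_polynomial(p):
    total = 0.0
    for k in range(1, len(p) + 1):
        total += (-1) ** (k - 1) * sum(prod(group) for group in combinations(p, k))
    return total

p_blocks = [0.02, 0.05, 0.01, 0.03]   # illustrative block failure probabilities
print(system_failure_polynomial(p_blocks))
print(1.0 - prod(1.0 - pi for pi in p_blocks))   # same value for independent blocks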
1.9.2 The Modeling of Changes with the Passing of Time of the Complex System Failure Probability with the Reservation of the Blocks

While modeling the change over time of the complex system failure probability with reservation of the blocks, we assume that the system contains Nb basic and Nr reserve blocks. We must determine the change with time t of the failure probability Ps{y = 1} of the system, if we know:
1. The structure of the system, from which the block relations table can be drawn.
2. The mean time between failures ti0 for each i-th block of the system.
3. The initial probability of correct functioning Pi(ti0) of each i-th block of the system.
4. The maximum (critical) deviation threshold bi of the i-th block parameters. When this threshold is exceeded, we assume that the block is broken (zi = 1 denotes the failure of the i-th block).
5. The dangerous deviation ci of the parameters of the i-th block. When exceeded, this leads to an increase of the parameter deviations of the related blocks.
6. The deviation threshold di, the exceeding of which leads to the substitution of the block by its reserve.
In a system with reservation the system failure means that

y = (z10 ∧ z1p) ∧ (z20 ∧ z2p) ∧ (z30 ∧ z3p) ∧ … ∧ (zN0 ∧ zNp) = 1,   (1.159)

where zi0 is the failure of the i-th basic block and zip is the failure of the i-th reserve block of the system. Thus, to compute the system's failure probability, which characterizes its reliability, at a time moment T we must first compute the probabilities of the conjunctive elements in Eq. (1.159):

Pi{zi0 ∧ zip = 1} = Pi0{zi0 = 1} · Pip{zip = 1} = … = Pi0(Ti0) · Pip(Tip) = Pi(T),   (1.160)
where Ti0, Tip are the corresponding operating times of the i-th basic and reserve blocks. Furthermore:
1. Since we are interested in the situations when the blocks' parameters are nearing the dangerous values, when ci < |mi| […]
If mi(V) > −3bi and mi(V) ≤ −bi, then μi(V) = max(μi([−3bi, −bi]), μi([−2bi, −bi])).
If mi(V) > −bi and mi(V) ≤ 0, then μi(V) = max(μi([−bi, 0]), μi([−bi, 0])).
If mi(V) > 0 and mi(V) ≤ bi, then μi(V) = max(μi([0, bi]), μi([0, bi])).
If mi(V) > bi and mi(V) ≤ 3bi, then μi(V) = max(μi([bi, 3bi]), μi([bi, 2bi])).
If mi(V) ≥ 3bi, then μi(V) = 1.
If mi(V) < −b or mi(V) > b, then wi(V) = 0 and Pi(V) = 1 (the i-th block is critical).
If (mi(V) > −b and mi(V) < −ci) or (mi(V) > ci and mi(V) < b), then wi(V) = 2 (the i-th block is dangerous).
If mi(V) ≥ −ci and mi(V) ≤ ci, then wi(V) = 1 (the block is functional).
6. Computing the proximity coefficient ui(V) for each dangerous/critical i-th block at the time moment V:
If wi(V) = 0 (the i-th block is critical), then ui(V) = 0. Furthermore: if a dangerous and/or critical block is related to the i-th one, then ui(V) = 3; if a dangerous and/or critical block is related to the i-th one through a single block, then ui(V) = 2; in all other cases ui(V) = 0.
If wi(V) = 2 (the i-th block is dangerous), then ui(V) = 3. Furthermore: if a dangerous and/or critical block is related to the i-th one, then ui(V) = 2; if a dangerous and/or critical block is related to the i-th one through a single block, then ui(V) = 1; in all other cases ui(V) = 0.
The related blocks are defined from the matrix C.
7. Computing the addition to the expected value: Mi(V) = bi·wi(V)·ui(V)·mi(V)·μi(V).
8. Computing the expected value mi(V) after accounting for the relations at the time moment V: if mi(V) < −di or mi(V) > di, then mi(V) = mi(0), else mi(V) = mi(V) + Mi(V).
9. Computing the failure probability Pi(V) for each i-th block:
If wi(V) = 0 and ki = 0, then Pi(V) = 1.
If wi(V) = 0 and ki = 1, then Pi(V) = pi(V) := 0.004 and Vi = V.
If wi(V) ≠ 0 and ki = 1 and (mi(V) < −di or mi(V) > di), then Vi = V, Pi(V) = pi(V) := 0.004, else Pi(V) = Φ((−b − mi(V))/bi) − Φ((b − mi(V))/bi).
The value of the function Φ is calculated according to the table [33].
10. Computing the system's failure probability:

P(V) = (−1)^0 Σi pi(T) + (−1)^1 Σij pi(T) pj(T) + (−1)^2 Σijk pi(T) pj(T) pk(T) + … + Πi pi(T).

From these values, we draw the graph of P(V).
11. If P(V) = 1, stop; else:
12. If V ≥ T, stop; else V = V + Δ and go to step 4.
1.9.4 Example

Consider the following example of the system shown in Fig. 1.3, which is a schematic representation of a standard module of a smart electromechanical system (SM SEMS). For it, let us assume that Pc0 = 0.996, mi0 = 0, bi = 0.15, t0i = 27000 h, ci = bi, Δ = 10000 h. When displayed as a directed graph, the system looks as shown in Fig. 1.33.
In the first case, let us assume that we have no spare blocks for any of the system's blocks. Then, after running a series of test simulations, we obtain the dependence of the system failure probability on the step number shown in Fig. 1.34. When running the series of test simulations under the assumption that we do have a single spare block for each of the system's blocks, we obtain the dependence shown in Fig. 1.35.
Fig. 1.33 SM SEMS scheme. MC—Main Controller, LP—Lower Platform, UP—Upper Platform, C1-C6—Controllers, E1–E6—Engines, R1–R6—Reducers, LJ1–LJ6—Lower Joints, UJ1–UJ6— Upper Joints
Fig. 1.34 System failure probability graph, without spare blocks
Fig. 1.35 System failure probability graph, with spare blocks
1.9.5 Decision Making Methods for Durability Control

Let us examine the pros and cons of possible decision-making methods for controlling the durability of a system with hot reservation.
1. Control the block parameters xi and switch to a reserve block when xi ≥ bi. Pros: simplicity. Cons: a system interruption while switching the block, and a high probability of false alerts on random short-lived parameter peaks.
2. Control the block parameters xi while computing the expected values mi, and switch to a reserve block when mi ≥ bi. Pros: lower false-alert probability. Cons: a system interruption while switching the block, and higher system complexity.
3. Control the block parameters xi while computing the current expected value mi, model the expected value mi(t) over time t with or without accounting for the relations between the system's blocks, determine from the modeling results the probable time moment Ta of a situation when mi ≥ bi, and at the time moment Tp = kp·Ta (kp < 1) switch the block to a reserve one and fix the partial failures. Pros: low probability of false alerts, low probability of a system interruption while switching the block. Cons: low precision of the prognosis of the time moment Tp, high system complexity.
1.10 Conclusion
There is quite a large variety of standard models of SEMS, on the basis of which a variety of intelligent robots with a parallel SEMS-type architecture can be designed. Simple designs such as tripod modules are now of limited use because of a number of inherent weaknesses. The SM8 SEMS module has the fullest functionality and can be called a universal module; the other modules are simplifications of it to some degree. It is therefore advisable to study and construct mathematical models, above all of this module, in order to investigate the characteristics and properties of the SEMS standard modules discussed. Combining SEMS units in the manner of different computer network architectures (serial, parallel, star, ring and tree) allows the construction of intelligent robotic systems for various purposes with parallel structures that provide high accuracy and speed. The obtained solution of the direct kinematics problem can be used in the synthesis of an automatic control system with the required quality of dynamic processes, and so can the obtained algorithmic solution of the inverse kinematics problem. If the developer of a SEMS has one goal and this goal can be formally defined as a scalar function, i.e. a quality criterion of choice, the task of choosing the optimal solution can be formalized and described by a model of mathematical programming (MP).
In other cases, one should use mathematical programming in an ordinal scale (MPOS), generalized mathematical programming (GMP) or multi-step generalized mathematical programming (MGMP). If the SEMS being developed involves uncertainties of the logical-probabilistic, logical-linguistic or logical-interval type, the search for the optimal SEMS can still be carried out by means of mathematical programming whenever there is a fundamental possibility of constructing a scalar quality criterion that includes the attributes (probabilities, membership functions or intervals) of the logical variables. The quality of the optimization will mainly be determined by the proper construction of a binary relation describing the measure of closeness of the designed SEMS to the ideal one. This can be a time-consuming and difficult task, often associated with the solution of a number of logical problems. The quality of the formulation and solution of these problems depends on the experience and skills of the developer acting as the decision maker. To increase the objectivity of the evaluation of the optimal model, it is advisable to form a collective decision maker, involving the customer in the construction of the binary relation. When the search for the optimal SEMS uses the methods of mathematical programming in an ordinal scale (MPOS), generalized mathematical programming (GMP) and multi-step generalized mathematical programming (MGMP), one passes from quantitative to ordinal scales, i.e. moves away from models that require a functional specification of the objectives and constraints of the problem towards models that take into account the preferences of the persons involved in the selection decision. In this case, MPOS evaluates the parameters of the SEMS variants, while GMP and MGMP also evaluate their characteristics. The practical solution of such problems is currently associated with the use of neural networks and artificial-life (A-life) environments. The need to choose the best trajectories (development scenarios) for group control of SEMS under conditions of incomplete certainty requires the consideration of nonlinear constraints in the form of logical-probabilistic and/or logical-linguistic expressions (rules). The solution of such problems can be greatly simplified by reducing the logical-probabilistic and logical-linguistic constraints to interval ones using the described theorems 1–2 (see paragraph 1.5.1 of Chap. 1) and theorem 3 (see paragraph 1.5.2 of Chap. 1). Then, knowing the interval obtained during fuzzification, we move from the fuzzy optimal control problem to the classical mathematical programming problem, as shown in the example. Automatic control systems of SEMS are constructed as multi-level systems with a "tree" architecture. The central control computer, as the top level, coordinates the work of all subsystems, solves the strategic problems of fulfilling the tasks selected by the operator and/or by higher-level systems, and generates the sequence of actions (algorithms) required for their implementation. The control subsystems of the next level solve tactical problems. Neuroprocessor modules are usually used here, since they easily parallelize the computation process and thereby improve the speed and accuracy of calculations. At the same time, they must ensure not only progress towards the goal along a priori given routes and paths, but also movement towards an arbitrarily changing target set.
In vision systems it is appropriate to use adaptive CCDs. Such CCDs adapt to the parameters of the identified objects by controlling the pixels, which is achieved by changing the configuration and size of the matrix on commands from the controller. This also makes it possible to increase the accuracy and speed of recognition by transferring the recognition algorithm into hardware. The subsystems for automatic control of the movable elements of SEMS modules also have a "tree" architecture. These subsystems generally contain a neuroprocessor block for calculating the optimal displacement, which computes the optimal elongations and turns without jamming. They also include group controllers of identical elements, tactile and force sensors that are essential for the adaptation of the automatic control systems, and linear and angular displacement sensors that close the automatic feedback control loops and improve the transients. A neuroprocessor automatic control system makes it possible to parallelize the computation of control actions and the signal adaptation of SEMS modules, which improves the speed and accuracy of control. The main computational element of the system is a neuroprocessor for processing 32-bit scalar data and programmable bits packed into 64-bit words. The neuroprocessor software implements the functions of the main units of the NACS. Using the NACS force sensor simplifies the algorithms of the LBUP, LBLP, DMUP and DMLP blocks and improves the accuracy of the calculation. The accuracy of the sensors in the BMSUP and BDPNP determines the maximum attainable positioning accuracy of the mobile platform of SEMS modules. Each of the considered methods of combating jamming has its advantages and disadvantages. If a high-precision electromechanical system based on parallel kinematics is required, the most accurate and reliable method is to duplicate the optimal-trajectory calculation system with a logic block that prevents jamming by analyzing the measured forces or pressures, or the measured linear displacements and elongations. Using the considered neuroprocessors in the automatic control system allows parallelizing the computation of the control actions and the signal adaptation of the SM SEMS, which improves the speed and accuracy of control. The main computational element of such a system can be a neuroprocessor designed to handle 32-bit scalar data and programmable bits packed into 64-bit words, whose software implements the functions of the main blocks. Using a force sensor simplifies the neural algorithms and improves the accuracy of the calculation. The accuracy of the position sensors used in the SM SEMS determines the maximum achievable positioning accuracy, and their conversion time determines the performance. The proposed modeling method increases the probability of correctly predicting the time at which a critical situation occurs for each block of the system, thus increasing the system durability through timely activation of the reserving mechanism. In this way a time reserve can be gained for performing the technical measures needed for switching to a reserve block and for fixing partial failures.
References 1. Merlet, J.P.: Parallel Robots, 2nd edn, p. 383. Inria, Sophia-Antipolis, France. Springer (2006) 2. Agapov, V.A. (RU), Gorodetskiy, A.E. (RU), Kuchmin, A.J. (RU), Selivanova, E.N. (RU): Medical Microrobot. Patent RU No. 2469752 (2011) 3. Tarasova, I.L., Gorodetskiy, A.E., Kurbanov, V.G.: Computer modeling ACS of SEMS actuator of space radio telescope subdish. In: Gorodetskiy, A.E. (ed) Smart Electromechanical Systems, p. 277. Springer International (2016). https://doi.org/10.1007/978-3-319-27547-5 4. Gorodetskiy, A., Tarasova, I., Kurbanov, V., Agapov, V.: Mathematical model of automatic control system for SEMS module. Inf. Control Syst. 3, 40–45 (2015). https://doi.org/10.15217/ issn1684-8853.2015.3.40 5. Gorodetskiy, A.E. Smart Electromechanical Systems Modules. In: Gorodetskiy, A.E. (ed) Smart Electromechanical Systems. Studies in Systems, Decision and Control, vol. 49, pp. 7–16. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-27547-5_2 6. Gorodetskiy, A.E.: Fundamentals of the Theory of Intelligent Control Systems, p. 314. LAP LAMBERT Academic, Berlin (2011) 7. Artemenko, Y.N., Gorodetskiy, A.E., Doroshenko, M.S., Konovalov, A.S., Kuchmin, A.Y., Tarasova, I.L.: Problems of the choice of electric drives of space radio-telescope system dish system. Mehatronica Avtomatizacia Upravlenie 1, 26–31 (2012) 8. Artemenko, Y., Gorodetskiy, A., Dubarenko, V., Kuchmin, A., Agapov, V.: Analysis of dynamics of automatic control system of space radio-telescope subdish actuators. Inf. Control Syst. 6, 2–6 (2011). Retrieved from http://www.i-us.ru/index.php/ius/article/view/14122 9. Gorodetskiy, A.E., Tarasova, I.L.: Upravlenie I Neironiy Seti (Control and Neural Networks), p. 312. Politekhnicheskii Universitet, Saint-Petersburg (2005) 10. Artemenko Ju., N. (RU), Gorodetskiy, A.E. (RU), Dubarenko, V.V. (RU), Kuchmin Ju., A. (RU), Tarasova, I.L. (RU), Galushkin, A.I. (RU), Agapov, V.A. (RU): Method for Adaptation of Reflecting Antenna Surfaces. Patent RU No. 2518398 (2014) 11. Vacilenko, N.V., Nikitin, K.D., Ponomarev, V.P., Smolin, A.J.: Fundamentals of Robotics. Tomsk MGP «RASKO» (1993) 12. Bazara, M., Shetty, K.: Nonlinear Programming. Theory and Algorithms. Mir (1982) 13. Kurbanov, V.G.: Mathematical Methods in Control Theory: Textbook. SPb Publishing House GUAP (2008) 14. Vaisbord, E.M, Zhukovsky, V.I.: Introduction to the Differential Game of Several Persons and Their Application, p. 304. Sov Radio (1980) 15. Isaacs, R.: Differential Games, p. 479. Mir (1967) 16. Yudin, D.B.: Computational Methods of Decision Theory, p. 320. Nauka (1989) 17. Tobacco, D., Kuo, B.: Optimal Control and Mathematical Programming, p. 280. Nauka (1975) 18. Gorodetskiy, A.E.: Smart Electromechanical Systems, p. 277. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-27547-5 19. Gorodetskiy, A.E., Tarasova, I.L.: Fuzzy Mathematical Modeling of Poorly Formalized Processes and Systems (Nechetkoe Matematicheskoe Modelirovanie Ploho Formalizuemyh Processov I System). SPb Publishing House of Polytechnical Institute, p. 336 (2010) 20. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G. Logical-mathematical model of decision making in central nervous system SEMS. In: Gorodetskiy, A., Kurbanov, V. (eds) Smart Electromechanical Systems: The Central Nervous System. Studies in Systems, Decision and Control, vol. 95. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-53327-8_4 21. 
Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Methods of synthesis of optimal intelligent control systems SEMS. In: Gorodetskiy, A. (ed) Smart Electromechanical Systems. Studies in Systems, Decision and Control, vol. 49. Springer, Cham (2016). https://doi.org/10.1007/9783-319-27547-5_4 22. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Logical and probabilistic methods of formation of a dynamic space configuration of the robot group. In: Materialy 10-J Vserossijskoj Conference on Governance, Divnomorskoye, Gelendzhik, vol. 2, pp. 262–265 (2017)
23. Fomin, V.N., Fradkov, A.L., Yakubovich, V.A.: Adaptive Control of Dynamic Objects (Adaptivnoe Upravlenie Dinamicheskimi Ob”Ektami). In: Ch Nauka (ed.) Physics and Mathematics of Literature, p. 448 (1981) 24. Chernukhin, Y.V., Pisarenko, S.N.: Extrapolation structures in neural network-based control systems for intelligent mobile robots. Opt. Mem. Neural Netw. 11(2), 105–115 (2002) 25. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L., Agapov, V.A.: Problems of increase of efficiency of use of matrix receivers for radio images in astronomy. Radio Eng. 1, 88–96 (2015) 26. Gorodetskiy, A., Tarasova, I.: Detection and identification of dangerous space objects using adaptive matrix radio receivers. Inf. Control Syst. 5, 18–24 (2014). Retrieved from http://www. i-us.ru/index.php/ius/article/view/13523 27. Artemenko, Y., Agapov, V., Dubarenko, V., Kuchmin, A.: Co-operative control of subdish actuators of radio-telescope. Inf. Control Syst. 4, 2–9 (2012). Retrieved from http://www.i-us. ru/index.php/ius/article/view/13744 28. Yangulov, V.S.: Designing programs with linear movement of the output link: tutorial, p. 169. Tomsk Polytechnic University (2011) 29. Artemenko, Y.N., Gorodetsky, A.E., Dubarenko, V.V., Kuchmin, A.Y., Tarasova, I.L.: Problems of development of space radio-telescope adaptation systems. Inf. Control Syst. 3, 2–8 (2010). Retrieved from http://www.i-us.ru/index.php/ius/article/view/14175 30. Glazunov, V.A., Phong, H.X., van Hien, N., van Dung, N.: Multi-criteria optimization of the parameters of the executive system of parallel structures. J. Eng. Phys. 2, 3–6. MIFI, Moscow (2008) 31. Wenzl, E.S.: Theory of Probabilities. Nauka (1969) 32. Ryabinin, I.A.: Reliability and Safety of Structural Complex Systems, p. 248. Polytechnic, St. Petersburg (2000) 33. Chervony, A.A., Lukyaschenko, V.I., Kotin L.V.: Complex System Reliability, p. 288. Mashinostroyenie (1976)
Chapter 2
Central Nervous System
2.1 Problems of Creating the Central Nervous System SEMS
For the proper functioning of any automatic control system (ACS), it must receive information about the environment and about the "behavior" of the robot. Without this information it would not be able to take the decisions expected of it, i.e. to define the goals of functioning and to reach these goals. Such behavior of the robot is called expedient or purposeful [1]. Providing robots with "sense organs" and improving them is therefore a very important and urgent task. Currently, the "sense organs" of a robot are various sensors (displacement, velocity, acceleration, force, tactile sensors, etc.) forming the robot's sensory system, which collects the two types of information needed by the control system: about the robot's own state and about the state of the manipulated object and the environment [2]. However, in order for robots to be able to formulate goals independently, without human intervention, and to carry them out successfully, they should be provided not only with more sophisticated sensation sensors but also with the ability to understand the language of sensations, that is, to have senses of the type "one's own—someone else's", "dangerous—safe", "loved—unloved", "pleasant—unpleasant" and others. Some of these sensation sensors may be analogs of human sense organs, such as hearing, vision and tactile sensitivity, but they need not be limited to the human senses. The robot can be made to receive radio waves, ultrasonic oscillations, ultraviolet light or electrical signals simply by connecting to its "central nervous system" sensors that provide a convenient electrical output signal [3]. Bionic approaches should be used, and in some cases are already in use, in creating the sense organs and the Central Nervous System (CNS) of robots. Different countries have developed various devices using this approach: devices for pattern recognition, artificial eyes for reading letters and numbers, artificial ears for speech perception, various kinds of tactile sensors and other analogs of the sense organs [4]. It is hoped that some of these studies will facilitate the solution to the problem of communication between humans, computers, and other devices.
Of course, many of the devices now being developed will later be useful in robots. There are animal sense organs that we still have not learned to reproduce; the most difficult are the gustatory and olfactory ones. Fortunately, they are not of special importance for now; there is no doubt that, were it to become necessary, the study of possible ways of reproducing them would begin. It would be desirable for the robot's sense organs to have all the properties of the sense organs of humans and animals. To solve these problems it is first necessary to study and formalize the linguistics of the language of sensations, namely, the alphabets and the rules for constructing signs-words from signs-letters and signs-sentences from signs-words, as well as the rules for understanding the meaning of sentences, forming new sentences and transmitting them to others.
2.1.1 The Bodies of the Human Senses Preparation and primary analysis of information from the outside world and from other organs of the body, i.e. from the external environment and internal environment of the body is provided by the senses—specialized peripheral anatomical and physiological systems. It is believed that a person has 5 main senses: sight, hearing, smell, taste and touch. Recently, it is added on to another and sixth: Equilibrium [5]. Apparently, little studied and practical man lost seventh sense organ is Telepathy, or thought transference at a distance by means of electromagnetic waves of the radio. Moreover, 90% of the information people receive visually—through sight; 9% of the information people have hearing (auditory); only 1% through other senses. The vision (visio, visus)—a physiological process of perception of size, shape and color of objects, as well as their relative position and distance between them; a source of vision is the light emitted or reflected from objects in the external world. The function of vision carried out thanks to a complex system of different interconnected structures—visual analyzer, consisting of peripheral (retina, optic nerve, optic tract) and a central department, which brings together the subcortical and stem centers (lateral geniculate body, the cushion of the thalamus, the upper hills of the midbrain roof) as well as the visual cortex of the cerebral hemispheres. The human eye perceives only light waves of a certain length—about 380–770 nm. Light rays from treated subjects tested through the optical system of the eye (cornea, lens and vitreous body) and fall on the retina. In the retina, the light-sensitive cells are concentrated—the photoreceptors (rods and cones). Light falling on the photoreceptor, causes rearrangement contained therein visual pigments (in particular, the most studied of them rhodopsin), and this, in turn,—the occurrence of nerve impulses, which are transmitted in the following description of retinal neurons and in the optic nerve. According to the optic nerve and optic tract followed by nerve impulses arrive in the lateral geniculate body—subcortical centers of view, and from there to the cortical center of vision, located in the occipital lobes of the brain, which is the formation of the visual image [6].
Alphabet language of vizion (light sensations) are the colors. Accepted provide the seven colors of the rainbow: red, orange, yellow, green, blue, indigo and violet. Thus each color has its own frequency of electromagnetic oscillations. Rules for the structure of the letters (colors) signs (images) from—suggestions (paintings) signs have long been studied and applied in painting [7]. However, a sufficiently complete and formalized description of linguistics sensations of light, suitable for giving these abilities of robots does not yet exist. Hearing (auditus)—a feature that allows the perception of human and animal sounds. The mechanism of the auditory sensation is caused by the activity of the auditory analyzer. The peripheral portion of the analyzer includes an outer, middle and inner ear. The auricle converts the acoustic signal coming from the outside, reflecting and directing sound waves into the external auditory canal. The external auditory canal, acts as a resonator, change the properties of the acoustic signal— increases the intensity of the tones the frequency of 2–3 kHz. The most significant conversion of sound occurs in the middle ear. Here, due to the difference in the area of the eardrum and the stapes base, as well as through the lever mechanism of the auditory ossicles and the tympanic cavity muscles significantly increases the intensity of the sound conducted by reducing its amplitude. Middle ear system provides transition vibrations of the eardrum to the inner ear fluids—perilymph and endolymph. At the same time leveled to some extent (depending on the frequency of the sound), the acoustic impedance of the air in which the sound wave propagates, and the inner ear fluids. Converted waves are perceived by the cochlea, which ranges receptor cells, located on the basilar plate (membrane) in different areas, rather strictly corresponding to the frequency of the exciting sound wave it. The resulting excitation in certain groups of receptor cells distributed along the fibers of the auditory nerve in the nucleus of the brain stem, subcortical centers located in the midbrain, reaching the auditory cortex area, localized in the temporal lobes, where he formed an auditory sensation. At the same time as a result of crossing pathways from the audio signal and the right and left ear of the falls at the same time to both hemispheres of the brain. Auditory pathway has five synapses, each of which is encoded by a nerve impulse differently. Coding mechanism is so far not fully disclosed, which significantly limits the possibility of practical audiology [6]. It was believed that the human ear perceives sounds frequency from 16–20 Hz to 15–20 kHz. Subsequently, it was established that the human bone in terms of sound perception characteristic having a higher (200 kHz) frequency, i.e. ultrasound. At the same time with an increase in frequency of ultrasound sensitivity to reduced. The fact that the auditory perception of the ultrasound is placed in the current views of the hearing evolution, because this feature is inherent in any and all types of mammals. Measurement sensitivity to ultrasound is important to assess human hearing status, widening and deepening opportunity eudiometry. Sounds—harmonic oscillations whose frequencies are treated as integers, and cause a person a pleasant sensation (consonance). Close, but different frequency vibrations cause discomfort (dissonance). Sound vibrations with a continuous spectrum of frequencies perceived by the person as noise. 
The alphabet of the language of hearing (sound sensations) consists of the notes. There are seven of them: "C", "D", "E", "F", "G", "A",
“B”. Rules for the structure of the letters (notes) marks (tunes), signs of -suggestions (music) has long been studied and applied in music [8]. However, a sufficiently complete and formalized description of linguistics sensations of light, suitable for giving these abilities of robots does not yet exist. Olfaction—the sense of smell, the ability to detect the smell of substances dispersed in the air (or dissolved in water—for animals living in it). Unified theory of the origin of smell is not. Nominated stereo chemical hypothesis (Eymur 1964) [9], according to which the interaction between the molecules of odorous substances with the membrane of the olfactory cells at the same time depends on the spatial shape of the molecule and on the presence of certain functional groups. It is assumed that the olfactory pigment molecule can easily pass into an excited state by the action of the vibrating molecules of odorous substances. Olfactory receptors are excited substances with a molecular weight of 17 (ammonia) 300 (alkaloids). According to this theory, there are 7 primary odors (seven letters of the alphabet sense of smell)— camphor, floral, musk, mint, ether, pungent and putrid. The remaining odors (e.g., garlic) are complex, consisting of several primary. Camphor odor molecules must be approximately spherical shape with a diameter of 0.7 nm, with floral—disc-shaped with a handle, etc. We calculated the approximate dimensions of the receptor “holes”, or nests, on the membrane of the olfactory cells, which should include the molecules of odorous substances. In the formation of the olfactory sensations involved and other mucous membrane receptors of the mouth: actile, temperature, pain. Irritants only olfactory receptors, called olfactory (vanillin, benzol, xylene), unlike the mixed irritating also other receptors (ammonia, chloroform). The spectrum of odors perceived by man is very wide; undertaken many attempts to systematize them. German psychologist Henning (1924) identified 6 major odor (fruity, floral, resinous, spicy, putrid, burnt), which reflects the relationship between the so-called prism odors. Later it was shown inaccuracy Henning classification, and is now used by the scheme of the 4 basic smells (fragrant, sour, burnt, putrid), whose intensity is usually measured by the conventional 9-point scale. Until now sufficiently complete and formalized description of the olfactory sensations linguistics suitable for giving these abilities of robots does not exist [10]. Taste—a type of feeling formed by the action of different substances mainly on the taste buds. Senses of taste and sense of smell allows us to distinguish between undesirable for the reception and even deadly food from the delicious and nutritious. Smell allows animals to recognize the proximity of other animals, or even certain animals among many others. Finally, both closely related to feelings of emotional and behavioral primitive functions of the nervous system. Taste is mostly a function of the taste buds of the mouth, but each of their life experience knows that a great contribution in the sense of taste and smell makes. In addition, food texture, perceived via the mouth tactile receptors in the presence of food substances that stimulate pain closure such as peppers significantly alter taste perception. The importance of taste is that it allows a person to choose the food in accordance with the desires and often due to metabolic needs of the body tissues with respect to certain substances.
Not all specific chemicals excite different taste receptors, are known. Psycho physiological and neurophysiologic studies have identified at least 13 possible or probable chemical receptors in taste cells. Among them, sodium 2 receptor, 2 potassium, chlorine 1, 1 adenosine, inosine 1, 2 receptor for sweet, bitter for 2 receptor, glutamate receptor 1 and 1 receptor for hydrogen ions. For practical analysis of flavor potential of these receptors are grouped into five main categories, called primary flavors (taste the letters of the alphabet): sour, salty, sweet, bitter and umami. A person can feel the hundreds of different flavors. It is believed that all of them are combinations of the primary taste sensations (taste alphabet) as well as all the colors that we see are combinations of the primary colors (light alphabet).Until now sufficiently complete and formalized description of linguistic taste, suitable for giving these abilities of robots does not exist [11]. Touch—a complex sensation that occurs when irritation of the skin receptors, the external surfaces of the mucous membranes and the muscular-articular apparatus. The main place in the formation of the sense of touch belongs to the skin analyzer, which carries out the perception of external mechanical, thermal, chemical and others skin irritations. The sense of touch, being the most ancient form of sensations is composed of tactile, temperature, pain and movement sensation. The main role belongs to the sense of touch tactile sensations—touch and pressure. Touch receptors in the skin are a tree-like branched free end of the nerve fibers, terminal branches of which penetrate between the connective and epithelial cells, obvivaya outer root sheath of hair. Oscillation long hair outer portion is transmitted to the root portion and causes the excitation of nerve fibers. By increasing the intensity of touch begins to feel a sense of pressure. This means that the affected muscle receptors, tendons and fascia. One nerve fibers branching may approach 300 cutaneous receptors. Touch is divided into active and passive Active touch appears (shown in the manipulation of the subject and his feeling in humans) to active actions of the body, contributing to a more complete perception of the subject Passive touch occurs when the simple action of the stimulus on the skin and is not accompanied by specific reactions of the organism, usually aimed at clarifying the nature of the action of the stimulus. Equilibrium—a sense of orientation in space, which Rudolf Steiner did not hesitate to include in terms of feelings: “We are aware of the third sense if you think about what people distinguish between top and bottom If it ceases to distinguish it, then he faces a great danger, if he cannot stand upright and tilted We can point to the body, which has a close relationship with this feeling, namely, the three semicircular canal in the ear. If the damage of the body man loses his sense of orientation” [12]. The body is a sense of balance in the ear (the analysis is in the cerebellum) [6]. In general, receptors are found in all parts of the body and internal organs Pain signals to transmit information about the state of health of an organ. Telepathy—par psychological phenomenon of transmission of thoughts and feelings at a distance and to provide, thus, impacts on living and non-living objects, without the use of any technical means. 
Telepathy is the most common of the psychic phenomena; it has repeatedly been experienced by almost every one of us. The most striking example is the telepathic connection between mother and child: a mother who loves her child immediately feels a danger to the child at any distance.
Equally obvious is the telepathic connection between loving people, who feel the slightest nuances of each other's state of mind. In telepathic communication the logical part of human consciousness is practically not involved; it is mostly intuition that works, which manifests itself in the complete attunement of the participants of such a connection to each other. However, it is extremely difficult to put a telepathic experiment into a rigorous scientific framework. Around the world, a variety of experiments aimed at establishing telepathic communication have been carried out in situations where other communication channels were unavailable or undesirable, and as a result the fundamental possibility of telepathic communication has been experimentally demonstrated; it was also confirmed that such a link exists outside the sphere of influence of all known fields: electromagnetic, gravitational, etc.
2.1.2 The Central Nervous System of the Human Senses
The sense organs only perceive information; this information must then be converted into nerve impulses and delivered to the brain, where it is analyzed and a reaction is developed. The scheme is roughly the following: Receptor → Nerve zone → Cortex contour → Impulse → Reaction. Signals from the outside are processed by the areas of the cerebral cortex, which form the core of the CNS of the human senses. Each sense organ corresponds to a certain area of the cerebral cortex (see Fig. 2.1). At the same time, the human senses are much weaker than those of animals: a human sees worse, hears sound in a narrower range and distinguishes fewer odors. It is important to understand, however, that the formation of particular organs occurs under the conditions of the environment and is an adaptation to it. Despite the weaker senses, people communicate better with each other and behave more appropriately thanks to a more developed central nervous system with linguistic abilities.
Fig. 2.1 Zones of the cerebral cortex
It is also important that when a person loses one of the senses, the others start to work harder [7]. According to James, "conceptual" operation is not a mere contemplation of the world but a highly selective process in which the organism receives instructions on how to act with respect to the world in order to satisfy its needs and interests [8]. Another important property of human perception of the world is the existence of links between the information perceived by different senses. In particular, it has long been known that there is a close relationship between sound and color [10], and the ability to "hear" colors or "see" music is called the seventh sense, synesthesia [11]. The ability to perceive the world with many senses at once was considered unique, but it is not: there is nothing supernatural in the nature of synesthesia, and it is given to all human beings. At first the inputs from all the senses are mixed in the infant brain, but at the age of about six months they become separated: the sounds go to the "right", visual information to the "left". Scientifically, this is the process of the dying off of neurons that create synaptic bridges [13]. In synesthetes the bridges remain intact, and the sensations are not separated but, as it were, superimposed. Moreover, so-called psychics, when listening to music, sometimes feel all sorts of tastes in the mouth: sweet, bitter, salty, sour. This was tested by the professor of psychology and neuroscience at the University of California, Vilayanur S. Ramachandran, who developed a special test: black twos and fives appear on a computer screen in random order. For the average person it is very difficult to distinguish one from the other, while a synesthete easily sees that the twos form a triangle, since to him they are colored. Using these tests, Ramachandran and his colleagues found that synesthesia is far more common than previously thought: about one in two hundred adults [13]. Many famous people had synesthesia. For example, the French poet Arthur Rimbaud associated the vowel sounds with certain colors; the composer Alexander Scriabin saw musical notes in color; the abstract artist Wassily Kandinsky, on the contrary, heard the sound of colors. Synesthetes also include Lev Tolstoy, Maxim Gorky, Marina Tsvetaeva, Konstantin Balmont, Boris Pasternak and Andrei Voznesensky [14]. Sufficiently sound mathematical models of the functioning of the human CNS have not yet been developed, which prevents the creation of a full-fledged CNS of robots.
2.1.3 Tasks of Constructing the Central Nervous System of Robots
Solving the problem of creating a "central nervous system" of the senses of robots comes down, first of all, to researching and developing chains of roughly the following scheme: Receptors (sensors, gauges and other elements of the robot's measuring system) → Nerve circuit (channel for receiving and primary processing of the information signal) → Cortex zone (combining of signals, pattern recognition, classification, decision making) → Impulse (channel for transmitting the control signal, conversion and formation of the control action) → Response (movements, elongations and other actions of the robot's working bodies) (see Fig. 2.2).
Fig. 2.2 The structure of the CNS of the robot. 1—measuring system (sensors), 2—measurement signal transmission channel, 3—measurement signal pre-processing block, 4—block of fuzzification, pattern recognition and decision making, 5—control signal transmission channel, 6—control action forming block, 7—robot working bodies
This problem is divided into two related tasks. The first is to create more sophisticated sensors and sensation-measuring systems. The second is to create software that provides robots with the ability to understand the language of sensations and to form behavioral processes on the basis of the analysis of sensations, i.e. to give robots the possibility of reflective reasoning that essentially brings sign systems close to those that humans use in their daily practice. There is quite a lot of development of measuring systems and sensors that simulate vision (block 1): various vision systems, CCD matrices, optoelectronic and television systems for measuring size, brightness and other parameters [15]. The same can be said about systems that simulate hearing [16]. There is some progress in the field of measuring systems and sensors that simulate the sense of touch [17]. Measuring systems that simulate the senses of smell and taste are little studied and rarely used in practice, and there is almost no development concerning the research and creation of technical systems that simulate the sense of balance and telepathy [18]. Consequently, the solution of the first task is far from complete in many respects. Methods and means of transmitting and converting signals, with the extraction of informative parameters from a signal-plus-noise mixture, are well studied, and a large number of different technical means exist for receiving, transmitting and filtering signals (blocks 2, 3, 5) [19]. The technical means of forming control actions (block 6) are also well researched and developed [20]. Methods of data (signal) fuzzification and of logical-interval, logical-probabilistic and logical-linguistic inference, pattern recognition and decision making (block 4) are known [21, 22]. However, the task of creating the mathematical and software tools that would give robots the ability of reflective reasoning essentially approximating sign systems to those that humans use in their daily practice is still at a very early stage, being limited to the modeling of behavioral processes on the basis of the analysis of sensations [23].
Thus, the available technical solutions already make it possible to proceed to the creation of simplified prototypes of the CNS of robots. One of the most promising options for the mathematical implementation of block 4 is a logical-mathematical form based on the analysis of behavioral processes using the sensation signals coming from the robot's sensory system. Here the first and very important operation for further work is the logical construction of the fuzzification of the data received through the sensory information channels from the various sensors. To do this, the sensors can be combined into groups forming the following sense organs of the robot, by analogy with a human: vision as a set X; hearing as a set Y; smell as a set Z; taste as a set U; touch as a set V; balance as a set W; and telepathy as a set Q, with

Xi ⊂ X, Yi ⊂ Y, Zi ⊂ Z, Ui ⊂ U, Vi ⊂ V, Wi ⊂ W, Qi ⊂ Q.

The set of these subsets depends on the set of sensors forming the sense organs of a particular robot. For example, for vision the following subsets can be introduced: X1—image contour, X2—image size, X3—image brightness, X4—image color, X5—distance to the object, X6—approach speed, X7—removal speed. For hearing the following subsets can be introduced: Y1—volume, Y2—tone, Y3—interval, Y4—approach speed, Y5—removal speed, Y6—direction. For smell the following subsets can be introduced: Z1—type of smell, Z2—intensity of the smell, Z3—direction of the smell, Z4—approach speed, Z5—removal speed. For taste the following subsets can be introduced: U1—type of taste, U2—strength of the taste, U3—direction. For touch the following subsets can be introduced: V1—evenness of the surface, V2—dryness of the surface, V3—surface temperature. The sense of balance is usually provided to robots by gyroscopes; in this case the following subsets can be introduced: W1—deviation "up—down", W2—deviation "forward—backward", W3—deviation "left—right", W4—speed of deviation "up—down", W5—speed of deviation "forward—backward", W6—speed of deviation "left—right". Telepathy of robots, in contrast to that of humans, is an explicable phenomenon: messages are received using a wireless connection, currently most often «Wi-Fi». The subsets Qi forming the telepathy set Q are most simply formed in advance, at the stage of designing a robot intended to perform various technological operations. In this case these instructions, i.e. subsets or types of reactions (qij ∈ Qi), are stored in the robot's memory and retrieved from there by the inference engine [24, 25]. The choice of instructions is carried out by checking the feasibility of certain rules over the data subsets (xij ∈ Xi; yij ∈ Yi; zij ∈ Zi; uij ∈ Ui; vij ∈ Vi; wij ∈ Wi). Such rules usually have the form:

if xij = 1 ∧ yij = 1 ∧ zij = 1 ∧ uij = 1 ∧ vij = 1 ∧ wij = 1, then qij = 1. (2.1)
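A rule of the form (2.1) can be checked mechanically against the fuzzified sensor data. The sketch below is a made-up illustration: the dictionary layout, the variable names and the particular rule are ours, not part of the SEMS specification.

```python
# Crisp logical variables obtained by fuzzification, grouped by sense organ
# in the same way as the sets X, Y, Z, U, V, W above.
senses = {
    "x": {"x11": 1, "x32": 0},   # vision
    "y": {"y11": 1},             # hearing
    "z": {"z11": 1},             # smell
    "u": {"u11": 1},             # taste
    "v": {"v11": 1},             # touch
    "w": {"w11": 1},             # balance
}

def rule_2_1(antecedents, senses):
    """Rule of the form (2.1): the conclusion q_ij becomes true only when
    every antecedent logical variable equals 1."""
    return int(all(senses[group][name] == 1 for group, name in antecedents))

# q11 = 1 if x11 and y11 and z11 and u11 and v11 and w11 are all true
q11 = rule_2_1([("x", "x11"), ("y", "y11"), ("z", "z11"),
                ("u", "u11"), ("v", "v11"), ("w", "w11")], senses)
```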
With a large number of logical variables there can be just as many such rules, and a sequential scan of the rules to identify their feasibility will take a long time. In such cases it is desirable to use parallel computing; for this purpose, as shown in [25, 26], one can use the procedure of algebraization of logical expressions. Boolean-type data (xij; yij; zij; uij; vij; wij) are extracted from the data or signals of the sensors of the robot's sense organs by fuzzification. For example, fuzzification of X3 yields the following logical variables: x31—«very weak brightness», x32—«weak brightness», x33—«normal brightness», x34—«strong brightness» and x35—«very strong brightness». It is easy to notice that an inherent attribute is connected with each Boolean variable. In the simplest case such an attribute is an interval; in more complex cases the attributes may be the probability P{xij = 1} or the membership function μ(xij). Therefore, the data from the sensors of the robot's sense organs are stored in memory in the form of logical-interval, logical-probabilistic or logical-linguistic variables [26], and the inference engine will almost always produce not one solution but several, with varying degrees of certainty. For example: if xij = 1 with probability Px and yij = 1 with probability Py and zij = 1 with probability Pz and uij = 1 with probability Pu and vij = 1 with probability Pv and wij = 1 with probability Pw, then qij = 1 with probability Pq. This will naturally lead to ambiguous behavior of the robot. A person in this situation behaves expediently or intuitively, relying on his own experience or on genetically inherent behavioral patterns [27]. The task of endowing robots with the skills of purposeful behavior is still at a very early stage. Usually in such cases the robot control system should start the procedure of searching for the optimal solution. This requires the use of the known [28–35] algorithms for computing such attributes of logical functions as intervals, probabilities and membership functions. After the optimal solution is selected, the defuzzification operation [36, 37] is performed in block (4); it converts the fuzzy values derived from fuzzy inference into crisp ones. These quantities are transmitted via the control signal channel (5) to the block (6) generating control actions. Usually, in block (6), optimal laws for controlling the working bodies (7) are formed, which arrive there in the form of control signals (voltages). Methods of searching for optimal control actions are studied adequately, for example, in [38]; however, certain difficulties are associated with the parallel structures [39] of intelligent robots built on the basis of SEMS modules.
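To make the fuzzification and the uncertain rule firing concrete, the following sketch splits a normalized brightness reading into the five logical variables x31–x35 with interval attributes and evaluates a probabilistic rule under an independence assumption that we introduce only for this illustration; all names and numbers are hypothetical.

```python
def fuzzify(value, lo, hi, labels):
    """Divide the sensor range [lo, hi] into len(labels) equal quanta and return,
    for each label, the truth value of the corresponding logical variable and its
    normalized interval attribute (cf. the brightness variables x31..x35)."""
    n = len(labels)
    width = (hi - lo) / n
    result = {}
    for k, label in enumerate(labels):
        a, b = lo + k * width, lo + (k + 1) * width
        hit = a <= value < b or (k == n - 1 and value == hi)
        result[label] = {"value": int(hit),
                         "interval": ((a - lo) / (hi - lo), (b - lo) / (hi - lo))}
    return result

# Brightness reading fuzzified into x31 (very weak) ... x35 (very strong).
x3 = fuzzify(0.62, 0.0, 1.0, ["x31", "x32", "x33", "x34", "x35"])

def rule_probability(antecedent_probs):
    """If each antecedent holds with its own probability and the antecedents are
    treated as independent (an assumption made only for this sketch), the
    conclusion q_ij holds with the product of those probabilities."""
    pq = 1.0
    for p in antecedent_probs:
        pq *= p
    return pq

Pq = rule_probability([0.9, 0.8, 0.95, 0.7, 0.85, 0.9])  # certainty of q_ij
```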
2.1.4 Features of the Central Nervous System SEMS with Elements of the Psyche To make a decision on the optimal control of the behavior of an intelligent robot (IR) in collective interaction under conditions of great uncertainty of the environment, new methods based on knowledge of the environment for choosing optimal actions and understanding possible actions of other members of the team are required [40]. To do this, the IR must be endowed with IR properties similar to the mental properties of a person. It is obvious that the psyche of robots will be implemented on a different basis than the human psyche. Nevertheless, it will still be the psyche, in the sense that most psychologists put into this concept [41]. The limitations of sensory capabilities and computing power should be attributed to the notorious limited cognitive abilities of IR [42]. Therefore, providing robots with “sense organs” and their improvement is a very important and urgent task. Currently, the robot’s “sense organs” are various sensors (movements, speeds, accelerations, forces, tactile sensors, etc.) that form the robot’s sensory system, which collects two types of information necessary for the robot’s control system: about the robot’s own state and about the state of the manipulated object and the environment [2]. However, in order for robots to formulate tasks and successfully perform them in a team, they must be equipped not only with more advanced sensation sensors, but also have the ability to understand the language of sensations and evaluate the mental characteristics of team members [43]. Thus, such mental processes of a person as perception of sensory information and decision making are currently studied and imitated. They are used in the creation of CNS robots. To endow the CNS robots with other properties of the human psyche, for example, emotions and temperament, it is advisable to analyze the mental processes of a person. In practical psychology [44, 45], the following basic mental processes in humans are distinguished: perception, attention, memory, thinking, emotions, temperament, language and speech, consciousness and self-consciousness. Emotions and temperament, which serve to reflect a person’s subjective attitude to himself and to the world around him and significantly affect the appropriate behavior in a team, have been poorly studied and are not yet used in the central nervous system of robots. Emotions are amplified when there is an information deficit and contribute to overcoming it by increasing the sensitivity of the perception system [44]. At the same time, the floodgates open for receiving additional information, which, in turn, expands the possibilities of thinking, that is, decision making by the robot. Attention highlights relevant, personally significant signals. The choice is made from the set of all signals available to perception at the moment. Memory is the process of imprinting, preserving, reproducing traces of past experience. It makes it possible to maintain constant trends towards appropriate behavior for long periods of time, and to some extent predict behavior for the future. The memory of the IR is formed in the form of databases and knowledge in the process of learning and self-learning.
Thinking is associated with the use of information about the image of perception transformed in memory, called from it to make a decision—a secondary image or representation. Thinking radically expands the possibilities of a person in his quest for knowledge of the entire surrounding world up to the invisible and unimaginable, since it operates not only with primary and secondary images, but also concepts. This process in IR is associated with the formation of concepts and the identification of images that are currently the least studied. An important feature of human mental functions is that their physiological component, i.e. those changes in the work of the central nervous system that provide the corresponding mental process, is completely not perceived by a person. Neurophysiologic components of mental processes turn out to be practically inaccessible for self-observation [44], which complicates the process of borrowing from human psychology.
2.1.5 Supplementing the Structure of the Robot’s Central Nervous System with Elements of the Psyche The endowment of intelligent robots with a psyche similar to the human psyche [45] is possible with the formation of its central nervous system. The latter is especially relevant when shaping the behavior of IR teams and human operators to perform joint complex technological operations. Figure 2.2 shows the previously proposed [46] structure of the Central nervous system in general form. To endow the central nervous system of the robot with elements of the psyche in the proposed structure, block 4 of fuzzification, recognition and decision making requires a significant complication of functionality. In particular, block 4 can be represented in the following form (Fig. 2.3). In this structure in block (4.1), images should be formed, with which attention (block 4.2), memory (block 4.3), thinking (block 4.4), emotions classification system (block 4.5), and assessment of the psyche (block 4.6) will operate in the future. Depending on the analyzers (blocks 1–3, Fig. 2.2) [44] the following types of sensory information are used in block (4.1): vision, touch, hearing, kinesthesia, smell, taste. Due to the connections formed between different analyzers, the images formed in the block (4.1) reflect such properties of objects or phenomena for which there are no special analyzers, which contributes to the organization of the mental process in the central nervous system of the robot. For example, the size of an object: weight, shape, regularity and its properties, “dangerous—not dangerous”, “alive—not alive”, “delicious—not tasty”. Block (4.2) should highlight relevant, personally significant signals for solving the IR of the current task. The choice is made from the set of all signals available to perception (block 4.2) at the moment. In contrast to the perception associated with the processing and synthesis of information coming from inputs of different modalities, attention (block 4.2) restricts only that part of it that will actually be
Fig. 2.3 The structure of block 4 taking into account the psyche of the IR. 4.1—Perception; 4.2— attention; 4.3—memory; 4.4—thinking; 4.5—emotions classification system; 4.6—evaluation of emotions; 4.7—supervisor; 4.8—decision making; 4.9—language; 4.10—speech; 4.11—human– machine interface
required and processed, that is, the part that the IR needs to make a decision in a particular situation. This reduces the amount of information processed in the current choice situation. In block (4.3), in addition to storing current information, the process of imprinting, preserving, reproducing traces of past IR experience should be carried out. This makes it possible to maintain trends towards appropriate behavior of the IR for long periods of time and to some extent predict behavior for the future. The information in the memory of the IR (block 4.3) is formed in the form of databases and knowledge in the process of learning and self-learning. The thinking process of the IR (block 4.4) is associated with the formation of concepts and the identification of images, which are partially described in [47–49], but for the purposes of endowing the central nervous system of the robot with the psyche, they have been poorly studied. The latter does not yet allow us to remove the boundaries for the perceived, imagined and remembered IR at all and overcome the spatial limitations of perception. Block (4.5) (emotions) should determine the personal significance for the IR of the information perceived in block (4.1) and evaluated in blocks (4.2) and (4.4) of external and internal situations for the life of the IR. Block (4.5) serves to reflect the subjective attitude of the IR to itself and to the world around it. Unlike block (4.4), block (4.5) forms a subjective attitude of the IR to the external and internal environment, which increases with an information deficit and helps to overcome it, increasing the sensitivity of the perception system. Giving emotions to the IR is especially important when the IR interacts with a person. Blok (4.6) (evaluation of emotions) should form such features of mental processes in the central nervous system of the robot that affect the speed of recall and the strength of memorization, fluency of mental operations, stability and switch ability
of attention. It determines the type of nervous system and the dynamics of decision making. The block (4.7) (supervisor) is a program or a set of programs that ensure the interaction of all CNS blocks working in a multi-program mode of the computer system. Block (4.8) (decision making), interacting with the blocks (4.3), (4.4), (4.6) should provide the IR with appropriate behavior in the current situation. Blocks (4.9) (language) and (4.10) (speech) should provide intelligent interaction with other IR and a person through the block (4.11) (human–machine interface). The complexity of the functioning of these blocks is caused by poor knowledge and weak formalization of the linguistic features of human communication.
2.2 Formation of Images Based on Sensory Data of Robots The functioning of the robot’s ACS relies on information from sensor systems regarding the environment and the state of the robot itself. Based on this information, the ACS can determine the goals of functioning and achieve these goals [39]. Moreover, in order to provide intelligent robots with the ability to independently make decisions about appropriate behavior [50, 51], they must have the ability to understand the language of sensations formed in the central nervous system by processing sensory data.
2.2.1 Fuzzification of Data
The main operation for the further logical constructions in the formation of images is the fuzzification of the data coming through the sensory information channels of the central nervous system from the various sensors. The rules for working with these data usually have the form (2.1). It should be noted that as the number of logical variables grows, the number of such rules usually grows as well, and identifying their feasibility takes considerable time. In such cases parallel computing can be used; to this end, as shown in [52], one can apply the procedure of algebraization of logical expressions. Rules of the form (2.1) are implications in the language of the algebra of logic, or Boolean algebra:
xij ∧ yij ∧ zij ∧ uij ∧ vij ∧ wij → qij    (2.2)
Expressions of the form (2.2) can be transformed into the form of the Zhegalkin algebra, or into equivalent algebraic equations according to mod 2 [53]:

sij ⊕ sij ∗ qij ⊕ 1 = bij,    (2.3)
where ⊕ is the plus sign in mod 2, ∗ is the multiplication sign in mod 2, bij takes the value either 0 or 1, and sij = xij ∧ yij ∧ zij ∧ uij ∧ vij ∧ wij. Then the resulting system of logical equations (2.3) can be written in matrix form according to mod 2 [54]:

A ∗ F = B,    (2.4)
where B is a binary vector of dimension n; F is the fundamental vector of the logical system, of dimension n, constructed from combinations of the logical variables xij, yij, zij, uij, vij, wij, qij obtained by fuzzification of the sensory data and supplemented with 1 as the last element; and A is a rectangular binary matrix of dimension [n, m]. The procedure for obtaining the system of Eq. (2.4) is easy to formalize. To do this, we can first construct the fundamental vector F of the system. For example, for the logical variables s1, s2, q1, q2 the fundamental vector F of the system is as follows:

F^T = | s1, s2, q1, q2, s1s2, s1q1, s1q2, s2q1, s2q2, q1q2, s1s2q1, s1s2q2, s1q1q2, s2q1q2, s1s2q1q2 |    (2.5)

In this case, the following construction algorithm is used:
– all logical variables sij of the sensory data and all possible logical variables of the conclusions qij of the logical rules of expression (2.2) are recorded;
– after them, all combinations of two of these logical variables are recorded;
– then all combinations of three of the logical variables are recorded;
– after that, all combinations of four of the logical variables are recorded;
– and so on, and at the end the product of all the logical variables is written.
After that, the fundamental matrix A of the system is built.
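Before turning to the matrix A, the construction of F just described can be sketched in a few lines; the function name is ours, and the constant 1 mentioned in the text is simply appended as the last element.

```python
from itertools import combinations

def fundamental_vector(names):
    """Components of the fundamental vector F: all single variables, then all
    products of two, of three, ... and finally the product of all of them."""
    terms = []
    for k in range(1, len(names) + 1):
        terms.extend(combinations(names, k))
    return terms

# For s1, s2, q1, q2 this yields the 15 products listed in (2.5);
# the constant 1 is appended as the last element of F.
F = fundamental_vector(["s1", "s2", "q1", "q2"]) + [("1",)]
```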
    | 1 0 0 . . . 0 |
    | 0 1 0 . . . 0 |
    | . . . . . . . |
    | 0 0 0 . . . 1 |
    | 1 1 0 . . . 0 |
    | 1 0 1 . . . 0 |
A = | . . . . . . . |    (2.6)
    | 0 0 . . . 1 1 |
    | 1 1 1 0 . . 0 |
    | 1 1 0 1 . . 0 |
    | 1 1 0 0 1 . 0 |
    | . . . . . . . |
    | 1 1 1 . . . 1 |
The algorithm for constructing the matrix A is as follows [55]:
– In the first row, 1 is placed in the first column, and 0 in the rest;
– In the second row, 1 is placed in the second column, and 0 in the rest;
– And so on until there is 1 in the last column.
– Then put 1 in the first two columns, and 0 in the rest;
– Then put 1 in the first and third columns, and 0 in the rest;
– Then put 1 in the first and fourth columns, and 0 in the rest;
– And so on until there are two 1s in the last two columns.
– Then put 1 in the first three columns, and 0 in the rest;
– Then put 1 in the first, third and fourth columns, and 0 in the rest;
– Then put 1 in the first, fourth and fifth columns, and 0 in the rest;
– And so on until there are three 1s in the last three columns.
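A compact sketch of this row-generation rule (Python, illustrative only; the actual CNS implementation is not specified here) enumerates binary indicator rows grouped by the number of ones:

import itertools

def fundamental_matrix_rows(n):
    # First all rows with a single 1, then all rows with two 1s, and so on
    # up to the all-ones row, following the ordering of (2.6).
    rows = []
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            rows.append([1 if j in cols else 0 for j in range(n)])
    return rows

# For n = 4 the first rows are [1,0,0,0], [0,1,0,0], ..., the last is [1,1,1,1].
print(fundamental_matrix_rows(4)[:3])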
Naturally, the matrix system of mod 2 equations of the form (2.4) obtained in this way will have a large dimension. However, in the robot's central nervous system not all components of this equation (not all combinations of logical variables) are physically realizable, and the unrealizable ones can be discarded. As a result of this reduction, we obtain a matrix system of mod 2 equations of smaller dimension [22]:

C * R = G,    (2.7)

where C ⊂ A, R ⊂ F, G ⊂ B.
Logical-type data (xij, yij, zij, uij, vij, wij, qij) are extracted from the data or signals of the sensors of the robots' sensory organs by fuzzification [22]. For example, during fuzzification of the set V3 of temperature sensor readings, the logical variables v3j (v3j ∈ V3) are formed by quantizing the entire range of the temperature sensor and assigning the resulting quanta Δ3i the names of logical variables v3j, which take the values true "1" or false "0". If the input variable, the temperature T, can vary from − 20 °C to + 20 °C, then by choosing a quantum of 10 °C the entire temperature range can be divided into four quanta Δ31 = [− 20, − 10], Δ32 = [− 10, 0], Δ33 = [0, + 10], Δ34 = [+ 10, + 20]. Then quantum Δ31 can be named v31 {very cold}, quantum Δ32 can be named v32 {cold}, quantum Δ33 can be named v33 {cool} and quantum Δ34 can be named v34 {warm}. In this case:
– the logical variable v31 will correspond to the interval [(− 20 + 20)/(20 + 20), (− 10 + 20)/(20 + 20)] = [0, 0.25];
– the logical variable v32 will correspond to the interval [(− 10 + 20)/(20 + 20), (0 + 20)/(20 + 20)] = [0.25, 0.5];
– the logical variable v33 will correspond to the interval [(0 + 20)/(20 + 20), (10 + 20)/(20 + 20)] = [0.5, 0.75];
– the logical variable v34 will correspond to the interval [(10 + 20)/(20 + 20), (20 + 20)/(20 + 20)] = [0.75, 1].
In particular, if the sensor shows a temperature of t = + 5 °C, then after fuzzification the following values of the logical variables will be entered into the CNS database of the robot: v31 = 0, v32 = 0, v33 = 1, v34 = 0, together with the corresponding intervals described above as attributes of these logical variables. In more complex cases, the attributes of logical variables can be probabilities P{xij = 1} or membership functions μ(xij). Then the data from the sensors of the robots' sensory organs will be stored in memory in the form of logical-probabilistic or logical–linguistic variables [26], which have probabilities or membership functions as attributes of the logical variables obtained after fuzzification in the robot's central nervous system database. In this case, the CNS logical inference machine will almost always obtain not one solution but several, with varying degrees of confidence. In addition, not every solution obtained from the system (2.7) is feasible for specific work in the current environment (state of the operating environment). This means that the resulting CNS working solution obtained from (2.7) must satisfy constraints that can also be expressed in the form of systems of logical equations [21]:

Ci * Ri = H,    (2.8)

Cj * Rj = D,    (2.9)
where Ci and Cj are constraint matrices obtained by analogy with the matrix C, H and D are binary vectors obtained by analogy with the vector G, Ri ⊂ Fi and Rj ⊂ Fj.
In this case, the vectors Fi and Fj are formed similarly to the vector F (see formula (2.5)):

Fi^T = |s11, s12, …, s1n, s21, …, smn, s11*s12, …, s11*s12*…*smn, s11*g11, …, s11*s12*…*smn*g11*…*gmn, 1|    (2.10)

Fj^T = |s11, s12, …, s1n, s21, …, smn, s11*s12, …, s11*s12*…*smn, s11*l11, …, s11*s12*…*smn*l11*…*lmn, 1|    (2.11)
where: gij —logical variables that characterize the technical limitations associated with the design of this robot, and lij —logical variables that characterize the technical limitations associated with the state of the environment (the environment of functioning of this robot at the moment). The set of solutions obtained by the robot’s central nervous system when solving Eqs. (2.7)–(2.9), taking into account (2.10) and (2.11), will lead to ambiguity of the robot’s behavior, which is associated with the incomplete certainty of the choice environment. Currently, the problems of choosing optimal solutions in conditions of incomplete certainty of the interval, probabilistic, or linguistic type have been most fully studied [27].
2.3 Formation of the Robot's Sensation Language

The development of modern robots is closely connected with the creation of their languages of sensations, on the basis of which a figurative representation of the environment and intelligent interaction of robots with one another and with the human operator become possible. In this area, many developments are devoted to the control of robots in different conditions. For example, in [56, 57], a spoken language is proposed as a convenient interface (ELI—Extensible Language Interface) for controlling a mobile robot. It is designed to interpret speech commands for extracting and transmitting information in specific, narrow tasks, such as caring for the elderly. In order to use it effectively, a number of basic terms must be associated with perception and motor skills. Therefore, at present there is a wide range of tasks for which a robot using ELI cannot be pre-programmed, for example, the specific household tasks it may be asked to perform. In [58], an algorithm is proposed for teaching the robot to see various objects. The robotic vision systems developed there are based on how the developers suppose animals see. That is, they use the concept of layers of neurons, as in the brain of animals. Engineers program the structure of the system but do not develop the algorithm that works inside it. Since the 1970s, robotics engineers have been thinking about reducing the information needed to represent images in a computer's memory by using image features. These
can be lines or points of interest, such as corners or certain textures. Algorithms are created for finding these features and tracking them from frame to frame in the video stream. This significantly reduces the amount of data, from millions of pixels in an image to several hundred or thousand features. The engineers then think about how the robot can comprehend what it has seen and what it will need to do, and they write software that recognizes patterns in images to help the robot understand what is around it. It should be noted that certain specific tasks of processing and comprehending the sensory information of robots have been solved, but there is no integral algorithm that takes into account all the robot's sense organs: the organs of sight, hearing, smell, taste, touch, etc. Therefore, in order that intelligent robots could independently, without human intervention, formulate tasks and successfully accomplish them, they must not only be equipped with sensation sensors, but also have the ability to understand the language of sensations, i.e., have sensations such as "own—alien", "dangerous—safe", "beloved—unloved", "pleasant—unpleasant", etc., formed as a result of solving systems of logical equations describing the environment in the language of feelings. For this, it is possible to use logical inference systems, which in intelligent systems are associated with solving systems of logical equations [59]. Such systems may have high dimension. The number of variables usually exceeds the number of equations, which leads to non-uniqueness of the solution. Using the Zhegalkin algebra [53] allows one to algebraize the problem, so that the Euclidean norm can serve as a scalar measure of the quality of a solution. At the same time, a method similar to Gaussian elimination for linear systems of algebraic equations with real numbers can be used to solve it. This technique can be the basis for providing the robot with the ability to form a sensation language in the database of the "Central Nervous System of the Robot" (CNSR). In this case, the robot has the opportunity to make independent decisions regarding expedient behavior [1].
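As a hedged illustration of this last point (a minimal Python sketch, not the authors' implementation), forward elimination over mod 2 can be carried out with XOR row operations on the augmented matrix [A | B]; columns without a pivot correspond to free variables and are one source of the non-uniqueness mentioned above.

def gf2_eliminate(A, b):
    # Reduce the augmented matrix [A | b] to row-echelon form over GF(2).
    # A is a list of 0/1 rows, b is a 0/1 right-hand-side vector.
    rows, cols = len(A), len(A[0])
    M = [A[i] + [b[i]] for i in range(rows)]
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue  # no pivot: the corresponding variable stays free
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        r += 1
    return M

# Example: two equations in three unknowns (an underdetermined system).
print(gf2_eliminate([[1, 1, 0], [0, 1, 1]], [1, 0]))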
2.3.1 Algorithm of Formation of the Language of Sensations of the Robot

The central nervous system of a robot is built by analogy with the central nervous system of a person, who has sensory organs that perceive information about the environment and about his own state. Therefore, the solution to the problem of creating the central nervous system of a robot reduces, first of all, to the research and development of circuits of the following type (consisting of approximately seven blocks): 1 (robot sensors) → 2 (signal receiving channel), 3 (primary processing of measuring signals) → 4 (combining signals, fuzzification, recognition, classification, decision making) → 5 (transmission channel of control signals), 6 (transformation and formation of a control action) → 7 (moving, stretching and other actions of the working parts of the robot).
All CNSR blocks are described in detail in [46]. It should be noted that one of the most promising options for the mathematical implementation of the fuzzification block is a logical-mathematical model for the formation of behavioral processes based on the analysis of sensations in the form of signals from the robot's sensor system. To do this, the robot's sensor system collects environmental information from various sensors and transmits it to the CNSR. Next, the measurement signal preprocessing unit and the fuzzification, recognition, and decision making unit [60] of the CNSR processor form the robot's sensation language using the following algorithm:
1. Quantization of the surrounding space in the visibility zone of the robot's sensor system, assigning each resulting pixel a value in the form of a pixel number.
2. Fuzzification of the robot's sensory information for each pixel of the surrounding space and formation in the CNSR memory of a display of the surrounding space in the form of pixels with their coordinates and fuzzified data.
3. Formation of images in the display of the surrounding space for each sense organ of the robot.
4. Formation of images by combining images from different senses.
5. Assigning names to images in the form of words of the English language.
6. Writing the words in the form of combinations of letters of the English language.
7. If no suitable English word is found for some images, such images can be combined with others in various combinations until all possible combinations have been used.
8. If suitable English words are found for any combinations, these names are assigned to these combinations of images.
9. If, after the completion of operation (8), suitable words cannot be found for some images, such images are given a name in the form of a new word composed of English letters, and the corresponding message is transmitted to the robot community to legitimize the new reference word and the corresponding image.
Let us consider in more detail the basic operations of this algorithm.
2.3.2 Quantization of the Surrounding Space

The center of gravity of the robot is placed at the center of the Euclidean space E3. The boundaries of the sensitivity zones (intervals) of the sensor system are determined: [− X, + X], [− Y, + Y], [− Z, + Z]. The result is a three-dimensional subspace C ⊂ E3. This subspace is divided into quanta along the X axis with a step hx, along the Y axis with a step hy, and along the Z axis with a step hz. The quanta in [− X, + X], [− Y, + Y], [− Z, + Z] are assigned the numbers i, j, k, respectively. As a result, the entire subspace C will be divided into many pixels pijk. Each pixel pijk will correspond to
information measured by the CNSR sensor system on sensations of the organs of vision, hearing, smell, taste, touch, etc.
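A minimal sketch of this quantization step (Python; the bounds and steps below are illustrative assumptions, not values from the book) maps a point of the subspace C to its pixel indices (i, j, k):

import math

def pixel_index(point, bounds, steps):
    # Map a point (x, y, z) inside [-X, X] x [-Y, Y] x [-Z, Z] to the quantum
    # numbers (i, j, k), given the bounds (X, Y, Z) and the steps (hx, hy, hz).
    return tuple(
        int(math.floor((coord + bound) / step))
        for coord, bound, step in zip(point, bounds, steps)
    )

# Example: X = Y = Z = 2.0 m and hx = hy = hz = 0.5 m.
print(pixel_index((0.3, -1.2, 1.9), (2.0, 2.0, 2.0), (0.5, 0.5, 0.5)))  # (4, 1, 7)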
2.3.3 Fuzzification of Sensory Information

A very important operation for forming the language of sensations is the fuzzification of the sensory data assigned to pixels and recorded in the CNSR database. To do this, first of all, it is necessary to combine the sensory information of each pixel pijk into groups that form, as in a person, the following senses of the robot: vision in the form of the set E; hearing in the form of the set R; smell in the form of the set S; taste in the form of the set U; touch in the form of the set V. In each of the introduced sets, it is possible to distinguish subsets that characterize the properties of the observed pixel (object): Ei ⊂ E, Ri ⊂ R, Si ⊂ S, Ui ⊂ U, Vi ⊂ V. The set of such subsets depends on the set of sensors that form the sensory organs of a particular robot. For example, for vision the following subsets can be introduced: E1—image brightness; E2—image color; E3—flashing frequency; E4—rate of change of brightness; E5—rate of change of color, etc. For hearing: R1—sound power; R2—tonality; R3—interval; R4—rate of change of volume; R5—rate of change of tonality; R6—rate of change of the interval, etc. For the sense of smell: S1—type of smell; S2—odor intensity; S3—rate of rise or fall of the odor; S4—rate of change of the type of smell; S5—odor interval, etc. For taste: U1—type of taste; U2—strength of taste; U3—rate of change of taste, etc. For touch: V1—flatness of the surface; V2—dryness of the surface; V3—surface temperature, etc. The data forming these subsets are extracted from the signals of the sensors of the robots' sense organs by fuzzification [46]. These data can be of logical, logical-probabilistic or logical–linguistic types. Data of a logical type are formed by quantizing the entire range of a specific sensor and assigning the quanta Δn (where n = 1, 2, …, N is the number of a quantum) the names of logical variables that take the values true (1) or false (0). For example, logical variables are formed by quantizing the entire range of an acoustic sensor and assigning the obtained quanta Δn the names of logical variables that take the value true (1) or false (0). If the range of the sound intensity sensor is from 0 to 80 dB, then by choosing a quantum of 20 dB, the entire range of sound intensity can be divided into four quanta: Δ1 = [0, 20], Δ2 = [20, 40], Δ3 = [40, 60], Δ4 = [60, 80]. Then the quantum Δ1 can be given the name Rr1 {very weak sound}, the quantum
Δ2 can be given the name Rr2 {weak sound}, the quantum Δ3 the name Rr3 {strong sound}, and the quantum Δ4 the name Rr4 {very strong sound}. In particular, if the sensor shows the sound intensity r = 50 dB, then after fuzzification the following values of the logical variables, Rr1 = 0, Rr2 = 0, Rr3 = 1, Rr4 = 0, and the corresponding intervals described above will be entered into the CNSR database as attributes of these logical variables. When receiving data of a logical-probabilistic type, the probabilities P(rn) are additionally added to the attributes; under the normal law of distribution of sound strength they can be determined as follows:

P(rn) = 2(Φ(3) − Φ(|rn − m|/σ)),

where σ = (b − a)/6 is the standard deviation, a is the lower boundary of the quantum, b is the upper boundary of the quantum, m = (a + b)/2 is the expected value, and Φ(.) is the standard Gaussian distribution function, which corresponds to the simplest normal law with parameters m = 0, σ = 1 and whose values are known. Naturally, for logical variables corresponding to quanta that do not include the sensor reading, the probabilities will be zero. In particular, if the sensor shows the sound strength r = 50 dB, then after fuzzification the following values of the logical variables, Rr1 = 0, Rr2 = 0, Rr3 = 1, Rr4 = 0, and the following attributes corresponding to them will be entered into the CNSR database: for Rr1—the interval [0; 20] and the probability P(r1) = 0; for Rr2—the interval [20; 40] and the probability P(r2) = 0; for Rr3—the interval [40; 60] and the probability P(r3) = 1; for Rr4—the interval [60; 80] and P(r4) = 0. It should be noted that in the formation of logical-probabilistic variables, the quantization of the sensor range can be carried out with overlap. For example, if the range of the sound intensity sensor lies between 0 and 75 dB, then by choosing a quantum of 30 dB, the entire range of sound strength can be broken into the following four quanta: [0; 30]; [15; 45]; [30; 60]; [45; 75]. Then, if the sound intensity sensor shows r = 50 dB, after fuzzification the following values of the logical variables, Rr1 = 0, Rr2 = 0, Rr3 = 1, Rr4 = 1, and the following attributes corresponding to them will be entered into the CNSR database: for Rr1—the interval [0; 30] and the probability P(r1) = 0; for Rr2—the interval [15; 45] and the probability P(r2) = 0; for Rr3—the interval [30; 60] and the probability P(r3) = 0.12; for Rr4—the interval [45; 75] and the probability P(r4) = 0.12. When receiving data of a logical-probabilistic type under a uniform law of distribution of sound strength, the probabilities P(rn) are additionally added to the attributes and can be determined as follows:

P(rn) = 2(b − rn)/(b − a), if rn ≥ m;   P(rn) = 2(rn − a)/(b − a), if rn < m.

In this case, for the above example, after fuzzification the following values of the logical variables, Rr1 = 0, Rr2 = 0, Rr3 = 1, Rr4 = 1, and the following attributes
corresponding to them will be entered into the CNSR database: for Rr1—the interval [0; 30] and the probability P(r1) = 0; for Rr2—the interval [15; 45] and the probability P(r2) = 0; for Rr3—the interval [30; 60] and the probability P(r3) = 0.25; for Rr4—the interval [45; 75] and P(r4) = 0.25. When receiving data of a logical–linguistic type, the membership functions are additionally added to the attributes; in the triangular form they can be determined as follows:

μ(rn) = 2(b − rn)/(b − a), if rn ≥ (a + b)/2;   μ(rn) = 2(rn − a)/(b − a), if rn < (a + b)/2.

In this case, for the above example, after fuzzification the following values of the logical variables, Rr1 = 0, Rr2 = 0, Rr3 = 1, Rr4 = 1, and the following attributes corresponding to them will be entered into the CNSR database: for Rr1—the interval [0; 30] and the value of the membership function μ(r1) = 0; for Rr2—the interval [15; 45] and the value of the membership function μ(r2) = 0; for Rr3—the interval [30; 60] and the value of the membership function μ(r3) = 0.25; for Rr4—the interval [45; 75] and μ(r4) = 0.25. Thus, after fuzzification of sensory data, the database will contain, for each pixel, a set of logical, logical-probabilistic and logical–linguistic variables. The next step in creating the robot's sensation language is the task of forming images in the surrounding space for each sensory organ.
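To make the fuzzification step concrete, here is a minimal Python sketch (the function name and data layout are illustrative assumptions, not the authors' code) that assigns truth values, interval attributes and triangular membership values to the overlapping sound-intensity quanta used in the example above:

def fuzzify(reading, quanta):
    # quanta: list of (name, a, b) intervals, possibly overlapping.
    # Returns, for each quantum, the logical value of the variable and its
    # interval and triangular-membership attributes.
    result = {}
    for name, a, b in quanta:
        inside = a <= reading <= b
        if inside:
            mid = (a + b) / 2.0
            mu = 2 * (b - reading) / (b - a) if reading >= mid else 2 * (reading - a) / (b - a)
        else:
            mu = 0.0
        result[name] = {"value": 1 if inside else 0, "interval": (a, b), "mu": round(mu, 2)}
    return result

quanta = [("Rr1", 0, 30), ("Rr2", 15, 45), ("Rr3", 30, 60), ("Rr4", 45, 75)]
print(fuzzify(50, quanta))  # Rr3 and Rr4 are true, the others are false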
2.3.4 Image Formation in the Display of the Surrounding Space

The formation of images is performed in the display of the surrounding space for each sensory organ individually. In particular, for vision there will be a display of the surrounding space CE ⊂ C, for hearing CR ⊂ C, for smell CS ⊂ C, for taste CU ⊂ C and for touch CV ⊂ C. In each of these mappings, adjacent pixels with equal values of logical variables and close values of their attributes can be combined. Then in the spaces of the sense organs CE, CR, CS, CU and CV we obtain sets of images ImE, ImR, ImS, ImU, ImV with certain contours. Since attributes of logical variables can be intervals, probabilities, membership functions, etc., for each type of attribute it is necessary to introduce a corresponding measure of proximity δΔ, δP, δμ. After the operation of combining pixels into the image sets ImE(i), ImR(i), ImS(i), ImU(i), ImV(i), in each space of the sensory organs CE, CR, CS, CU and CV one can depict the image contours and give each contour a name. As a result, there will be five maps KE, KR, KS, KU and KV with sets of image contours ImE(i), ImR(i), ImS(i), ImU(i), ImV(i), where i = 1, 2, ….
It should be noted that, when processing two images, a preliminary analysis is first performed (spectral and correlation analysis), which includes the selection and application of the most suitable filter (linear filtering), on the basis of which contour representations (polygonal contours) are formed.
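A minimal sketch of the pixel-merging operation described above (Python; the data layout is an illustrative assumption, and the attribute proximity measures δΔ, δP, δμ are omitted for brevity) groups adjacent pixels with identical logical-variable tuples into images:

from collections import deque

def group_pixels(grid):
    # grid maps a pixel index (i, j, k) to a tuple of logical-variable values.
    # Adjacent pixels with identical tuples are merged into a single image.
    seen, images = set(), []
    for start in grid:
        if start in seen:
            continue
        seen.add(start)
        image, queue = [], deque([start])
        while queue:
            p = queue.popleft()
            image.append(p)
            for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                q = (p[0] + d[0], p[1] + d[1], p[2] + d[2])
                if q in grid and q not in seen and grid[q] == grid[start]:
                    seen.add(q)
                    queue.append(q)
        images.append(image)
    return images

# Two adjacent pixels with equal values form one image, the third forms another.
print(group_pixels({(0, 0, 0): (1, 0), (1, 0, 0): (1, 0), (5, 5, 5): (0, 1)}))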
2.3.5 Formation of Images by Combining Images from Different Senses

Typically, the formation of images from individual sense images consists of operations on images such as intersection, union, or symmetric difference of ordered sets and assigning the result to one or another reference image stored in the database. If there is no suitable image in the database, such a combination of images is given the name of a new image, which is placed in the database for temporary storage. When this new image is repeated many times during the operation of the robot, it becomes a reference image and is given a permanent name. To assign a combination of images to a particular standard, it is necessary to introduce a measure of proximity of ordered sets. Among the most well-known proximity measures (criteria), the following can be distinguished [61]: estimation by the maximum deviation of the cardinalities of sets; estimation by the standard deviation of the cardinalities of sets; probabilistic estimation of the maximum deviation of the cardinalities of sets; probabilistic estimation of the standard deviation of the cardinalities of sets. Using these criteria makes it possible to rank combinations of images according to their proximity to the reference image and at the same time to introduce a numerical estimate of proximity. The process of forming images based on information from the senses of the robot is carried out in the following sequence. First, the presence of images close to the database standards is searched for in each of the maps KE, KR, KS, KU and KV. The found images are assigned the names of the standards. They are recorded in the observable data base of the central nervous system together with their coordinates and are excluded from the corresponding maps. Then a sequential pairwise overlay of the maps on each other is performed with the operation of intersecting the sets KE, KR, KS, KU and KV. Similarly, three (four, five) maps are superimposed on top of each other. At each intersection of images, the presence of images close to the database standards is searched for. The found intersections of images are assigned the names of the standards. They are recorded in the observable data base of the central nervous system together with their coordinates and excluded from the corresponding map intersections. Therefore, each subsequent intersection involves the maps corrected by the results of the deletions.
If any images appear in the corrected intersections, new names are assigned to them. They are also recorded with their coordinates in the observational database of the central nervous system. At the last stage, the presence of images close to the database standards is searched for in the symmetric differences of the sets from the maps KE, KR, KS, KU and KV. This is done similarly: first using the symmetric difference of two sets, then of three, four and five. The found symmetric differences of images are assigned the names of the standards. They are recorded in the database of observable data of the central nervous system together with their coordinates and are excluded from the respective map combinations. Therefore, each subsequent combination involves maps corrected by the results of the deletions. Thus, semantic data about the space surrounding the robot are generated in the central nervous system database, based on which the robot makes behavioral decisions [60] using standard behavioral algorithms stored in the robot knowledge base. These algorithms are recorded in the knowledge base of the robot at the stage of its creation, based on its purpose; therefore, such algorithms will be called genetic. However, after the formation of the semantic database of the space surrounding the robot, it may turn out that two or more images are partially or completely present in the same place in space. Therefore, it is necessary to adjust the semantic database in order to exclude the detected collisions. Such adjustment is closely related to the formation of pragmatic semantic data, corresponding to the problem being solved by the robot at the moment.
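The set operations and cardinality-based proximity estimates mentioned above can be sketched as follows (Python; the proximity measure shown is one simple illustrative choice, not the specific criteria of [61]):

def proximity(image, reference):
    # Simple cardinality-based proximity: the share of common pixels.
    if not image and not reference:
        return 1.0
    return len(image & reference) / max(len(image), len(reference))

# Combining images (as pixel sets) from two senses and comparing with a standard:
visual = {(1, 2, 3), (1, 2, 4), (1, 3, 3)}
audio = {(1, 2, 3), (1, 2, 4)}
combined = visual & audio   # intersection; union: |, symmetric difference: ^
print(proximity(combined, {(1, 2, 3), (1, 2, 4)}))  # 1.0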
2.4 Classification of Images in the Central Nervous System

One of the important problems in modern intelligent robots equipped with a Central Nervous System [43] is the problem of inductive formation of images, concepts, or generalizations based on the analysis of sensory data. It is associated with the search for logical patterns based on the construction of rules that can explain the facts and predict new or missing properties inherent in the desired or formed image. Therefore, the purpose of the analysis is to build a set of logical connections inherent in the image, i.e., to build a classification model. When the classification model is built, on the basis of the found patterns we can attribute the objects considered in the central nervous system to some class [62].
2.4.1 Statement of the Problem of Inductive Formation of Images

In this case, the task of forming images from sense images can be stated as follows. Based on the analysis of sensory data on the robot's environment of choice, a set of images S = {S1, S2, …, Sn} is formed. The robot database contains a set of reference images (objects) O = {O1, O2, …, Om}. The indicated sets are ordered sets, and the elements of these sets are Logical Variables (LV) that take the value 0 or 1. Each image Si and each object Oj are characterized by sets of signs (attributes). The number of attributes is fixed; attribute values can be numeric, logical, or symbolic. Among the set O of all objects represented in the database, let us single out the subset V1 ⊂ O, the set of objects related to concept 1 (class 1). Then the remaining set of objects W1 = O\V1 will not be related to this concept (class). Moreover, O = V1 ∪ W1, V1 ∩ W1 = ∅. At the first step, among the set of images S, we select the set of images SV1 ⊂ S belonging to the class V1 and assign them the name of this class. Then, among the set O, we select V2 ⊂ W1, the set of objects related to concept 2 (class 2), and among the set of remaining images S1 = S\SV1 we select the images SV2 ⊂ S1 belonging to the class V2 and give them the name of this class. We continue this selection process until all database objects have been exhausted, i.e., ((((O\V1)\V2) …)\Vk) = Wk = ∅. If all database objects are exhausted but the remaining set of images ((((S\SV1)\SV2) …)\SVk = Sk ≠ ∅, then these images are assigned the name of the new class (k + 1) and are temporarily stored in the database under this name. There can be many such images. In this case, the task of dividing class (k + 1) into new reference images and, accordingly, assigning these images to the newly introduced standards can be posed. By analogy with [2], to extract the q-th class from the images, we can construct a training set Kq = Kq+ ∪ Kq−, where Kq+ ⊂ Vq and Kq− ⊂ Wq. Based on the training sample Kq, a rule is constructed that separates the positive and negative objects of the training sample. The decision rule is correct if it subsequently successfully recognizes objects that were not originally included in the training set.
2.4.2 Algorithms for the Formation of Decision Rules

A number of algorithms are known that form decision rules in the form of a decision tree or a set of production rules. First of all, these are the ID3 and C4.5 algorithms [63, 64], the CN2 algorithm [65] and a number of others. The result of these algorithms is generalized concepts presented in the form of decision trees or sets of production rules. A decision tree is a tree that associates an output value with each input example by constructing a path from the root vertex to one of the final vertices. At each of the intermediate vertices (nodes), conditions
are checked, and the final vertices (leaves) are labeled with the names of the solution (usually the name of the class to which the example belongs). It is necessary that at each intermediate vertex the results of checking the conditions be mutually exclusive and exhaustive. The choice of the sequence of condition checks when moving from the root to the leaves of the tree in [63, 64] is determined by criteria associated with the concept of entropy. One of the problems of the above algorithms is the complexity that occurs when processing incomplete and conflicting information. In [66], the effect of noise in the initial data on the efficiency of classification models obtained using generalization algorithms was studied. The most difficult kind of noise in the database tables was considered, related to the presence of contradictions in the training set, and it was shown that the most significant decrease in classification accuracy is caused by the presence, in the training set on the basis of which the decision tree is constructed, of contradictory examples, i.e., examples attributed to different classes while their informative attributes completely coincide. The UD3 (Uncertain Decision Tree) algorithm developed by Fakhrahmad and Jafari [67] is also analyzed there. This algorithm is based on the ideas of the ID3 algorithm. However, the construction of the decision tree and the subsequent classification of test cases are associated here with an additional technique based on the comparison of bit strings associated with the attributes of the training sample. The main goal is to provide the right solution when classifying examples in the case when the data set contains conflicting examples. The presence of conflicting examples in the training set Kq can lead to a situation where, during the construction of the decision tree by the UD3 algorithm, all informative attributes have already been used for checks, but it is still impossible to assign the name of a certain class to a leaf. Such a decision tree may contain rules that give an ambiguous classification, that is, it will be fuzzy [26]. Therefore, an additional method for resolving contradictions at the classification stage, described in [66], is needed. The results of the studies and experiments presented in [66] showed the advantages of the UD3 algorithm in reducing the influence of noise of the considered type in comparison with the ID3 and C4.5 algorithms. In particular, at a noise level of up to 25%, for various training samples the classification accuracy of the UD3 algorithm ranged from 78 to 85%, while that of the C4.5 algorithm ranged from 75 to 80%.
2.4.3 Logical-Probabilistic and Logical–Linguistic Algorithms

Typically, images Si formed in the central nervous system of a robot as a result of the analysis of sensory data are characterized by a set of attributes, the k-th of which is known with a certain
amount of confidence, which can be specified in the form of a probability PSik or a membership function μSik [68, 69]. In this case, the following algorithms can be used to assign the presented image Si to some class, i.e., to a reference image Oj from the database. The logical-probabilistic algorithm LP1 consists of the following sequence of actions. All attributes of the reference images of the database are numbered and written as a string consisting of N attributes. For each j-th reference image, we write a string of attribute probabilities of dimension N in which 1 is put in the place of an attribute present in this standard and 0 otherwise, for example POj = (000111···110). For each i-th presented (classified) image, we write a string of attribute probabilities of dimension N in which the probability value PSik is put in the place of an attribute present in the image and 0 otherwise, for example PSi = (0 PSi2 0 PSi4 PSi5 PSi6 ··· PSik ··· 00). For each presented (classified) image and each reference image, we calculate the differences in the values of the elements of their rows and sum the squared differences: ΔSij = Σk (POjk − PSik)².
1. For each i-th image (i = 1, 2, …, n) we calculate min ΔSij over the j-th standards (j = 1, 2, …, m).
2. The j-th standard providing the minimum ΔSij is the desired standard corresponding to the i-th (classified) image.
The improved logical-probabilistic algorithm LP2 differs from LP1 in that it takes into account the importance of each attribute by multiplying the probabilities PSik of the image attributes by significance coefficients gSik > 0, for example: PSi = (0 gSi2PSi2 0 gSi4PSi4 gSi5PSi5 gSi6PSi6 ··· gSikPSik ··· 00). Accordingly, in the probability string of the attributes of the standards, the same significance factors gSik must be placed in place of 1, for example: POj = (000 gSi4 gSi5 gSi6 ··· gSik ··· 00). The logical–linguistic algorithm LL1 differs from LP1 in that instead of the probabilities of image attributes, the membership functions μSik are used, for example: PSi = (0 μSi2 0 μSi4 μSi5 μSi6 ··· μSik ··· 00). The improved logical–linguistic algorithm LL2 differs from LL1 in that it takes into account the importance of each attribute by multiplying the membership functions μSik of the image attributes by significance coefficients gSik > 0, for example: PSi = (0 gSi2μSi2 0 gSi4μSi4 gSi5μSi5 gSi6μSi6 ··· gSikμSik ··· 00). Accordingly, in the probability string of the attributes of the standards, the same significance factors gSik must be placed in place of 1, for example: POj = (000 gSi4 gSi5 gSi6 ··· gSik ··· 00).
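A minimal sketch of LP1/LP2 (Python with NumPy; the reference strings below are shortened, illustrative examples, and the LL1/LL2 variants are obtained simply by passing membership values instead of probabilities):

import numpy as np

def classify_lp(image_attrs, references, weights=None):
    # LP1: choose the reference minimizing the sum of squared differences
    # between the attribute strings. LP2: the same with significance
    # coefficients applied to both the image and the reference strings.
    x = np.asarray(image_attrs, dtype=float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
    scores = {name: float(np.sum((w * np.asarray(ref) - w * x) ** 2))
              for name, ref in references.items()}
    return min(scores, key=scores.get), scores

refs = {"sedan": [1, 0, 0, 1, 0, 0, 1, 0], "jeep": [0, 1, 0, 0, 1, 0, 0, 1]}
print(classify_lp([0.9, 0.1, 0.0, 0.8, 0.2, 0.0, 0.7, 0.1], refs))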
2.4.4 Testing Algorithms

A study of the ability of the LP1, LP2, LL1, LL2 algorithms to work with conflicting information was carried out by computer modeling of image classification using the MATLAB software package. Machine experiments were carried out to assess the ability of the developed algorithms to classify images in the presence of contradictions and noise in the data on their attributes. The following types of images were analyzed: cars (x1), people (x2), large animals (x3). Images of cars x1 were characterized by the presence of the following attributes:
1. The smell of gasoline (x11), which was divided into three gradations: weak (x111), moderate (x112), strong (x113);
2. Temperature of the motor (x12), which was divided into three gradations: low (x121), average (x122), big (x123);
3. Clearance (x13), which was divided into three gradations: small (x131), average (x132), big (x133);
4. The ratio of the lengths of trunk-interior-motor (x14), which was divided into four gradations: 1:2:1 is x141, 1:3:1 is x142, 0.5:3:1 is x143, 2:1:1 is x144;
5. Sound level x15, which was divided into three gradations: quiet x151, average x152, strong x153.
The classification of car images was carried out according to the following types: jeep X1d, crossover X1k, sedan X1s, hatchback X1x, pickup X1p. Moreover, in the reference attribute sets (strings) of these types of vehicles, the presence of an attribute was designated 1 and its absence 0. Therefore, the following reference strings were used in the classification:
– For the jeep: X1d ⇒ {0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 0};
– For the crossover: X1k ⇒ {1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0};
– For a sedan: X1s ⇒ {1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0};
– For the hatchback: X1x ⇒ {1 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0};
– For pickup: X1p ⇒ {0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 1}.
Images of people x2 were characterized by the presence of the following attributes:
(1) Height x21, which was divided into three gradations: small x211, average x212, large x213;
(2) Wrinkles x22, which were divided into three gradations: no wrinkles x221, few wrinkles x222, a lot of wrinkles x223;
(3) The sound of steps x23, which was divided into three gradations: quiet x231, average x232, strong x233;
(4) Walking speed x24, which was divided into three gradations: low x241, average x242, big x243;
(5) The ratio of the width of the shoulders to the width of the hips x25, which was divided into five gradations:
– The width of the shoulders is much greater than the width of the hips x251;
– Shoulder width greater than hip width x252;
– The width of the shoulders is approximately equal to the width of the hips x253;
– Shoulder width less than hip width x254;
– Shoulder width much less than hip width x255.
(6) Temperature x26, which was divided into three gradations: low x261, normal x262, big x263.
The classification of images of people was carried out according to the following types: man X2m, old man X2om, child X2c, old woman X2ow and woman X2w. As in the previous case, in the reference attribute sets (strings) of these types of people, the presence of one attribute or another was designated 1 and its absence 0. Therefore, the following reference strings were used in the classification:
– For man: X2m ⇒ {001 010 001 001 10000 010};
– For the old man: X2om ⇒ {010 001 100 100 01000 010};
– For a child: X2c ⇒ {100 100 010 010 00100 010};
– For the old woman: X2ow ⇒ {010 001 100 100 00010 010};
– For woman: X2w ⇒ {010 010 010 001 00001 010}.
Images of large animals x3 were characterized by the presence of the following attributes:
(1) The smell x31, which was divided into four gradations: cow smell x311, horse smell x312, deer smell x313 and ram smell x314;
(2) Height x32, which was divided into three gradations: small x321, average x322, large x323;
(3) Horns x33, which were divided into four gradations: no horns x331, small x332, average x333, large x334;
(4) Wool x34, which was divided into two gradations: short x341, long x342;
(5) Tail x35, which was divided into three gradations: small x351, average x352, big x353;
(6) Temperature x36, which was divided into three gradations: low x361, normal x362, big x363.
The classification of images of large animals was carried out according to the following types: horse X3h, cow X3c, bull X3b, deer X3d, elk X3e, ram X3r, sheep X3s. As in the previous case, in the reference attribute sets (strings) of these types of animals, the presence of one attribute or another was designated 1 and its absence 0. Therefore, the following reference strings were used in the classification:
– A horse: X3h ⇒ {010 000 110 001 000 1010};
– The cow: X3c ⇒ {100 001 000 101 000 1010};
– The bull: X3b ⇒ {1000 010 0001 10 001 010};
– Deer: X3d ⇒ {0010 010 0001 10 010 010};
– Elk: X3e ⇒ {0010 001 0001 10 001 010};
– Ram: X3r ⇒ {0001 100 0010 01 100 010};
– Sheep: X3s ⇒ {0001 100 0100 01 100 010}.
To study the effect of noise on the classification of noisy images, the software package used a block for introducing noise into the data on the probabilities or membership functions of attributes, by representing the probabilities or membership functions of the attributes of the presented images as random variables with a uniform distribution law. In this case, the following option for introducing noise into the data was used: for image attributes equal to 1 in the reference attribute string, random numbers in the range 0.75–1 were generated, while for attributes equal to 0 in the reference string, random numbers in the range 0–0.25 were generated. An example of a generated random string of attributes of the image "ram": {0.1 0.15 0.2 0.8 0.95 0.04 0.01 0.1 0.85 0.04 0.01 0.2 0.8 0.75 0.15 0.1 0.1 0.8 0.1}. The main stages of the experiment were as follows:
– Selection and loading of standards with a set of attribute strings stored in the database;
– The choice of the type of noise introduced into the test set of images;
– The construction of test classification programs based on the studied algorithms;
– Generation of random strings of probabilities and membership functions of the checked image types;
– The use of the test programs for the classification of the checked image samples against the standards of the database.
The results of the test experiments showed that the accuracy of the LP1 and LL1 algorithms was approximately the same as, or above, the accuracy given in [66] for the UD3 algorithm, which forms decision rules in the form of a decision tree or a set of production rules. The LP2 and LL2 algorithms, owing to the introduction of significance factors, better classify close images, such as cows and bulls or rams and sheep.
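A minimal sketch of this noise-injection step (Python; the function name is illustrative and the reference string is the "ram" string given above):

import random

def noisy_attributes(reference, low=(0.0, 0.25), high=(0.75, 1.0)):
    # Attributes present in the reference (1) get a random value from `high`,
    # absent attributes (0) get a random value from `low`.
    return [round(random.uniform(*(high if bit else low)), 2) for bit in reference]

ram = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
print(noisy_attributes(ram))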
2.5 Image Classification System

When creating the Central Nervous System of Robots based on SEMS modules [70], one of the most important problems is the problem of creating systems for making decisions about appropriate behavior [71] based on the analysis of the choice environment. The environment of choice in the CNSR is mainly characterized by images formed on the basis of sensory data [48]. Therefore, it is currently important to create image classification systems for the robot's choice environment that allow simple algorithms to quickly classify images in real time and make decisions in conditions of incomplete and not completely reliable information. One of the variants
of such a system can be a system that implements the simple logical-probabilistic LP1, LP2 and logical–linguistic LL1, LL2 image classification algorithms described in [49].
2.5.1 The Block Diagram of the System

The classification system under consideration has the block diagram shown in Fig. 2.4. The system contains a control unit (1), a database of standards (2), the CNSR sensory system (3), the units forming the attribute strings of the references (4) and of the classified images (5), the rows of probabilities (6) and of membership functions (7) of the reference attributes, the rows of probabilities (8) and of membership functions (9) of the attributes of the classified images, the row selection blocks for the references (10) and for the classified images (11), a database of importance coefficients (12), the blocks for multiplication by the attribute importance coefficients of the references (13) and of the classified images (14), the difference calculation blocks (15) and the difference sum blocks (16), the sum database (17), the minimum sum calculation block (18), and the image class designation block (19) corresponding to the minimum sum.

Fig. 2.4 Classification system
2.5.2 The Principle of Operation of the System

Control unit 1 sends commands to unit 2 to select the first reference class of the classified image and to unit 3 to select the attributes of the image formed in the technical vision system. Block 2 passes the attributes of the first reference to block 4, which forms a string of reference attributes; the probabilities of these attributes are then passed to block 6, which forms a string of probabilities corresponding to the attribute string. The values of the membership functions for these attributes are sent to block 7, which generates a string of membership function values corresponding to the row of selected attributes. Block 3 passes the attributes of the classified image to block 5, which generates a string of attributes for this image of the same dimension as the reference string. Then the probabilities of these attributes are sent to block 8, which generates a probability string corresponding to the attribute string, or the membership functions of these attributes are passed to block 9, which generates a string of membership functions corresponding to the string of selected attributes. Block 10, at the command of control unit 1, selects either a probability string from block 6 or a string of membership functions from block 7 and passes it to block 13. Block 11, at the command of control unit 1, selects either a probability string from block 8 or a string of membership functions from block 9 and passes it to block 14. Block 12, at the command of control unit 1, passes the attribute significance coefficients to blocks 13 and 14, where they are multiplied by the corresponding probabilities or membership functions. Blocks 13 and 14 pass the strings of probabilities or membership functions to block 15, where the differences between the elements of the strings of the reference and of the classified image are calculated. These differences are passed from block 15 to block 16, where the sum of differences for the first reference class is calculated. This sum is transferred from block 16 to block 17 for storage. Then control unit 1 sends a command to unit 2 to select the second reference class of the classified image, and, similarly to the previous step, the sum of differences for the second reference class is calculated and is also transmitted from unit 16 to unit 17 for storage. Then control unit 1 sends a command to unit 2 to select the next reference class of the classified image, and, similarly, the sum of differences for this reference class is calculated and transmitted from unit 16 to unit 17 for storage.
The calculation process continues until all the reference classes are exhausted. At that point, block 17 will contain the sums of differences for all reference classes. After that, control unit 1 sends a command to block 18 to determine the minimum of all the stored sums; once this minimum is determined, block 18 sends it to block 19, which determines the number of the reference class to which the classified image belongs. This ends the classification process for this image, and unit 19 sends a signal to control unit 1 that the system is ready to classify the next image.
2.6 Logical and Mathematical Model of Decision Making in the Central Nervous System SEMS

The operation of the robot's ACS is based on information from the sensory systems regarding the environment and the state of the robot. Without this information, the ACS will not be able to make the decisions expected of it, i.e., to set goals of functioning and achieve these goals [1]. However, in order that robots created on the basis of SEMS modules could independently, without human intervention, formulate objectives and successfully carry them out, they should not only be provided with sophisticated sensation sensors, but should also have the ability to understand the language of sensations, i.e., have sensations of the type "own—alien", "dangerous—safe", "favorite—unloved", "pleasant—unpleasant" and others, formed as a result of solving systems of logical equations. If such a database can be created in the central nervous system of the robot, independent decision making with respect to purposeful behavior becomes possible [50]. The first and very important operation for further logical constructions in decision making is the fuzzification of data coming through the CNSR sensory information channels from the various sensors. To do this, first of all, the sensors are combined into groups forming the following robot sense organs, as in a person: vision as the set X; hearing as the set Y; smell as the set Z; taste as the set U; touch as the set V; balance as the set W; and telepathy as the set Q:

Xi ⊂ X, Yi ⊂ Y, Zi ⊂ Z, Ui ⊂ U, Vi ⊂ V, Wi ⊂ W, Qi ⊂ Q.    (2.12)
After the fuzzification of sensor readings, described in detail in Sects. 2.2.1, 2.1.3 and 2.3.3, a system of logical equations of the form (2.7) describing the solution of the problem and systems of logical equations (2.8) and (2.9) describing the constraints will be obtained. The set of solutions obtained by the CNSR when solving Eqs. (2.7)–(2.9) will, of course, lead to ambiguous behavior of the robot. A person in this situation behaves either purposefully or intuitively, relying on his own experience or on genetically inherited behavioral patterns [72]. The task of endowing robots with the skills of purposeful behavior is still at a very early stage. Currently, the most thoroughly studied is the problem of
choosing optimal decisions under conditions of incomplete certainty of the interval, probabilistic or linguistic type [27]. A person, in the process of thinking and decision making based on the available information, normally adheres to one of two styles: deductive or inductive. There is also a third, poorly understood and rarely encountered type of thinking: abductive. Let us consider the options for using each of these types of decision making in the CNSR. With deductive thinking, the decision making process in the CNSR begins at the global level and then moves down to the local one [73]. A technical analog of this kind of thinking can be an optimization process in which, initially, the best of all possible solutions is sought on the basis of the available information, and the chosen solution is then corrected by checking all the restrictions against the available information. Selection of the optimal solution among all the solutions yi obtained from (2.7) can be carried out in various ways. The easiest way in this case is to use mathematical programming methods [74]. With the logical-probabilistic description of uncertainty [27], i.e., when the attributes of the logical variables in Eqs. (2.7)–(2.9) are the probabilities P{yi = 1}, the quality criterion can be expressed as follows:

f0(Y) = Σ_{i=1}^{n} P{yi = 1} → max.    (2.13)
The probabilities P{yi = 1} can be calculated approximately by the algorithm described in [26, 33, 34]. If the analysis of a particular CNSR reveals that its individual components yi influence its behavior differently, it is advisable to use the quality criterion (2.13) in the form:

f0(Y) = Σ_{i=1}^{n} βi P{yi = 1} → max,    (2.14)
where βi are assigned weight coefficients. With the logical–linguistic description of uncertainty [27], i.e., when the attributes of the logical variables in Eqs. (2.7)–(2.9) are the membership functions μ(yi), the quality criterion can be expressed as follows:

f0(Y) = Σ_{i=1}^{n} μ(yi) → max.    (2.15)
The membership functions μ(yi) can be calculated approximately by the algorithm described in [26]. If the analysis of a particular CNSR reveals that its individual components yi influence its behavior differently, it is advisable to use the quality criterion (2.15) in the form:

f0(Y) = Σ_{i=1}^{n} βi μ(yi) → max,    (2.16)
where: βi —assigned weight coefficients. When the logical-interval description of the uncertainties [27], i.e., when the attributes of the logical variables in Eqs. (2.7)–(2.9) are the intervals Δij = [aji , bji ], the quality criterion can be expressed as the following expressions: f 0 (Y ) =
n m ∑ ∑ j
f 0 (Y ) =
m ∑ n ∑ [ ]2 k ji (b ji − a ji ) − c ji → min, j
f 0 (Y ) =
(2.18)
(2.19)
i
m ∑ n ∑ [ b ] k ji (b ji − b0ji )2 + k aji (a ji − a 0ji )2 → min j
(2.17)
i
n m ∑ ∑ [ ]2 k ji (b ji − a ji ) + (b0ji − a 0ji ) → min, j
f 0 (Y ) =
k ji (b ji − a ji ) → min,
i
(2.20)
i
where k_ji, k^b_ji, k^a_ji are the optimality preference coefficients of the person making the decision (PDM), cji is the interval width desired by the PDM, and b0ji, a0ji are the interval boundaries preferred by the PDM. After calculating the quality criteria of all possible solutions in accordance with Eqs. (2.13)–(2.14) for the logical-probabilistic description of uncertainties, (2.15)–(2.16) for the logical–linguistic description of uncertainties, or (2.17)–(2.20) for the logical-interval description of uncertainties, all the solutions are ranked. Then the solutions are checked for feasibility against the constraints (2.8) and (2.9), starting from the one with the highest quality criterion. The first of the checked solutions satisfying the constraints is considered optimal. With inductive thinking, the decision making process in the CNSR starts with the analysis of individual decisions and continues with the search for a common, global solution [74]. A technical analog of this kind of thinking can be an optimization process in which, initially, all the solutions are checked for feasibility against the constraints of the form (2.8), (2.9), and then the best of the solutions satisfying the constraints is sought according to criteria of the type (2.13)–(2.20). With abductive decision making, according to Peirce, cognitive activity in the CNSR is an interplay of abduction, induction and deduction [75]. In this case, abduction carries out the adoption of plausible hypotheses explaining the facts, induction implements the testing of hypotheses, and deduction derives consequences from the accepted hypotheses. A technical analog of this kind of thinking can be the process of finding the optimal solution by analogy, when of all the possible solutions derived from Eqs. (2.7)–(2.9),
those closest to the solutions already stored in the CNSR database and known to have given good results in the past are first selected by means of pattern recognition [76]. One can then use deductive and/or inductive actions on the quality criteria (2.17)–(2.20) to choose the best of them. In more complex cases, typical of intelligent systems, when it is impossible to form a scalar quality criterion, the choice of the optimal solution among all the solutions yi obtained from (2.7) can be carried out by means of mathematical programming in the ordinal scale, generalized mathematical programming or multistep generalized mathematical programming [26, 27, 77, 78]. Comparing the described decision making methods, it can be concluded that abduction is the fastest method, by analogy with intuition, but its reliability depends on the completeness of the database of good solutions from the past, i.e., it strongly depends on the operating time of such robots in similar environmental conditions. The deductive method is faster than the inductive one when there is a large number of restrictions, because it does not require checking the restrictions for all solutions. With complex quality criteria and a small number of restrictions, the inductive method can provide results faster, since it avoids evaluating the complex quality criteria for solutions that violate the restrictions.
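As a hedged sketch of the deductive scheme just described (Python; the candidate data and the feasibility check are illustrative assumptions), solutions can be ranked by a criterion of the form (2.14) and then checked against the constraints in that order:

def choose_solution(solutions, beta, feasible):
    # Rank candidates by f0 = sum_i beta_i * P{y_i = 1}, then return the
    # first (best) one that also satisfies the constraints (2.8)-(2.9),
    # represented here by the `feasible` predicate.
    ranked = sorted(solutions,
                    key=lambda p: sum(b * pi for b, pi in zip(beta, p)),
                    reverse=True)
    for candidate in ranked:
        if feasible(candidate):
            return candidate
    return None

candidates = [(0.9, 0.2, 0.7), (0.6, 0.8, 0.9), (0.4, 0.4, 0.5)]
beta = (1.0, 0.5, 2.0)
print(choose_solution(candidates, beta, feasible=lambda p: min(p) > 0.3))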
2.7 Making Behavioral Decisions Based on Solving Systems of Logical Equations

The expedient behavior of an intelligent robot is based on information from its sensor systems about the environment and the state of the robot itself. Without this information, it cannot determine the purpose of its functioning or reach its goals [1]. However, for robots created on the basis of SEMS modules to be able to formulate objectives independently, without human intervention, and to fulfill them successfully, they must be able to understand the language of feelings. That is, they must have feelings of the type "dangerous—safe", "favorite—unloved", "pleasant—unpleasant" and others. Then, by solving systems of logical equations formed on the basis of this language, the robots can acquire the ability of reflective reasoning. With these abilities present in the central nervous system of the robot, independent decision making with respect to expedient behavior becomes possible [2, 50].
2.7.1 Stages of the Formation of Behavioral Decisions

After collecting numerical information from the sensor system, the robot can proceed to the formation of a language of sensations. In this case the software must execute the following steps (operations):
– Fuzzification of the numerical data received from the sensor system, i.e., obtaining qualitative data of a logical type;
– Selection of images by combining qualitative data using rules of inference;
– Formation of binary estimates of images, such as "dangerous—not dangerous", "strong—weak", "bad—good" and others, by solving the systems of logical equations that make up binary relations;
– Formation of reflective reasoning based on logical analysis of the binary estimates of the images surrounding the robot;
– Formation of the goals of the robot's operation based on a selection of the reflective reasoning corresponding to the maximum (minimum) of the quality criteria used;
– Making decisions on the behavior required to achieve the formed objectives, based on solving optimization problems with constraints.
The operation of fuzzification of numerical data is widely used in intelligent control systems [55] and, accordingly, in intelligent robot control systems, for example in the formation of databases of expert controllers [51]. After fuzzification, a set X_i containing logical variables x_ij is formed for each sensor of the measuring channel. For example, for a brightness measurement channel the following logical variables can be obtained: x_11—"very dark", x_12—"dark", x_13—"semi-dark", x_14—"semi-light", x_15—"light", x_16—"semi-bright", x_17—"bright", x_18—"very bright". For different points of the space surrounding the robot these logical variables can be true (x_ijk = 1) or false (x_ijk = 0). Situations often arise in which, after fuzzification of the numerical data, the truth or falsity of one or another logical variable can only be stated with some confidence. In this case each obtained logical variable x_ijk is supplied with an appropriate attribute in the form of the probability P{x_ijk = 1} or the membership function μ(x_ijk) [26], which is stored in the database together with the data. The coordinates of the points of the surrounding space corresponding to each logical variable are also stored in the database together with the logical variables. Naturally, when the environment of the robot changes, the contents of the database are updated.
The operation of separating images in the space surrounding the robot is widely used in the vision systems of intelligent robots [79]. In the simplest case this operation is reduced to uniting into a single set those points of space which have the same set of logical variables with the same attributes, provided that the distance to the nearest neighboring points with the same parameters does not exceed a predetermined value. The coordinates of the center of gravity of the resulting images are also determined. After the points are united into a set, the latter can receive additional qualitative parameters in the form of logical variables y_ij obtained, for example, from the analysis of geometrical parameters of the images (areas, volumes, contours, etc.). These additional parameters, e.g. y_11—"large volume", y_21—"smooth contour", etc., are recorded in the database in the profile of the image set together with the other logical parameters and the coordinates of the centers of gravity. The content of this database partition is also updated when the environment of the robot changes. In situations of incomplete certainty in the process of uniting points of space into a certain set (image), caused, for example, by the probability attributes of the logical variables, it is necessary, in addition to the geometric measures of proximity of points, to introduce additional measures of
proximity, for example an admissible spread of the probability values of the logical variables of neighboring points.
The formation of binary estimates of images is carried out by logical analysis of the image parameters. To do this, rules assigning a binary estimate to an image must first be created, for example: if the image is very bright, large and moving quickly towards the robot, this image (object) is very dangerous. The system of such rules is introduced into the CNSR knowledge base at the stage of creating the robot. In some cases it can be corrected during the use of the robot by means of training or self-study [80]. With a large number of such rules it is expedient to reduce them to a system of algebraic equations modulo two, i.e. to Zhegalkin algebra [52]. In this case we obtain matrix equations whose solutions are easily parallelized on matrix processors, which sharply speeds up the logical analysis of the image parameters. The resulting binary estimates of images are also recorded in the database of images of the space surrounding the robot. When the environment of the robot changes, the binary estimates and the images themselves are updated.
The formation of the robot's reflective reasoning is based on logical analysis of the binary estimates of the images around it. To do this, "if—then" rules must be created that define the reaction to a particular binary estimate of an image, given its location and the state of the robot. For example: (1) if the image is very dangerous and is near, the robot has to move away from it; (2) if the image is very dangerous, is close by, and a large "good" image is nearby, the robot has to hide behind it. These rules are drawn up and recorded in the knowledge base at the stage of creating the robot. There can be very many of them, and they can be corrected during operation. Here as well it is advisable to reduce them to a system of algebraic equations modulo two (Zhegalkin algebra) for parallel computing; the program translating the rules into a system of algebraic equations modulo two should be part of the CNSR software.
The formation of the goals of the robot's operation on the basis of the reflective reasoning selected after analysis of the binary estimates of the images surrounding the robot is a complex problem, related to the solution of poorly formalized multicriteria optimization tasks [27]. It is often necessary to select not one specific goal but a sequence of consecutive goals, each pursued after the successful achievement of the previous one. At the robot design stage it is impossible to foresee all the situations in which the robot may have to decide on the purposes of its functioning. Therefore, situations that are possible under the proposed conditions of use, together with the corresponding possible targets and indices of their efficiency, are entered into the robot's memory. The CNSR software should then, by evaluating the acceptable reflective reasoning and the suitable, most effective targets of functioning available in the given situation, choose a sequence of goals that ensures the maximum (minimum) of a quality criterion expressed numerically. The formation of such a quality criterion is a complex and time-consuming task. Its solution is primarily associated with the formulation and solution of a
number of logical problems leading to a formula for calculating the quality criterion [21]. The best results in solving this problem can be obtained using multistep generalized programming [78] and programming environments such as A-life [81].
After the sequence of functioning goals has been formed, decisions must be made with respect to the purposeful behavior needed to achieve the formulated goals. The selected expedient behavior is realized by the block forming the control actions on the working bodies of the robot. For this, the CNSR needs to solve optimization problems with constraints. The basis for solving these problems can be various methods of mathematical programming, mathematical programming on an ordinal scale, generalized mathematical programming and multistep generalized mathematical programming [78]. A number of new approaches to solving optimization tasks under interval uncertainty are described in [82].
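The reduction of "if—then" rules to equations modulo two mentioned above can be illustrated by a minimal sketch: binary image estimates are obtained from binary attributes by a 0/1 matrix product taken mod 2, an operation that parallelizes naturally on matrix hardware. The matrix and attribute values below are invented for illustration and are not taken from the text.

```python
# Minimal sketch of evaluating a rule system written as algebraic equations
# modulo two (Zhegalkin form): binary estimates y are obtained from binary
# image attributes x via a 0/1 matrix product taken mod 2.
import numpy as np

A = np.array([[1, 0, 1, 0],      # "dangerous" = x1 XOR x3
              [0, 1, 1, 1]])     # "strong"    = x2 XOR x3 XOR x4

x = np.array([1, 0, 1, 0])       # binarized image attributes x1..x4

y = (A @ x) % 2                  # mod-2 matrix product, easy to parallelize
print(y)                          # -> [0 1], i.e. "not dangerous", "strong"
```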
2.7.2 Features of Recognition Described by Equations in Algebra Modulo Two

The decision making process in the formation of the goals of operation and of purposeful behavior can be greatly accelerated by recognition of the formed images M_i, i.e., by assigning them to different classes of images C_j^M containing so-called ideal images M_i*, for which the best solutions adopted earlier are known (M_i* ∈ C_j^M). In this case the method of habitualness of the situation [72] (the analogue of intuition in man) can be used, replacing the desired solution by its analogue. When images are represented in algebra modulo 2, the pattern recognition procedure requires rules or algorithms for linguistic processing of the attributes that characterize the logical variables when the operations of addition and multiplication modulo 2 are carried out over them. Linguistic attributes generally form nonmetrizable sets B_i characterizing the images. In this case the recognition, i.e. the choice of the best class among the set of alternatives, may be based on a search procedure over binary relations B_i g B_j^c, where B_j^c is the set that characterizes the ideal image of the class under consideration C_j^M, to which we want to come as close as possible, and g is a binary predicate on the analyzed sets, which can be specified, for example, by formulas of a logical-mathematical language or by a formal linguistic expression [6]. The problem of revealing the best approximation is reduced to two problems: the first is obtaining the sets B_i, B_j^c, and the second is constructing the optimal procedure g, which allows a quantitative estimate of the proximity of B_i to B_j^c to be obtained. In creating a source database for the construction of g, it is appropriate to start by selecting, in each of the compared sets, the metrizable subsets (for example, probability subsets), for whose elements a relation and a numerical measure of proximity can be specified. The most difficult step is then the ordering of the elements of the nonmetrizable subsets. It is very likely that this task will require building a
new system of logic equations whose solution will lead either to metrizable sets or to ordered ones. In the first case we immediately obtain a numerical measure of proximity; in the second, such measures will have to be constructed anew. Possible numerical estimates can be the power of the sets, the number of matching elements, the number of groups of matched elements, etc. There are currently no recommendations on the selection of one or another of these estimates, due to insufficient study of such models. If the nonmetrizable sets cannot be ordered, the decision on the proximity of a set to the standard must be made by the developer or operator, based on their preferences, experience and intuition [26]. The most frequently used and most easily constructed functional binary relationships are the following:
– Estimation by the maximum deviation of the set powers:

Σ_i x_i − Σ_i y_i = Δ,   (2.21)

where x_i = 1 and y_i = 1 for non-zero (non-empty) elements of the compared sets, x_i = 0 and y_i = 0 for zero (empty) elements, and Δ is the numerical estimate of proximity;
– Estimation by the standard deviation of the set powers:

√[(Σ_i x_i)² − (Σ_i y_i)²] = δ,   (2.22)

where δ is the numerical estimate of proximity;
– Probabilistic estimation by the maximum deviation of the set powers:

Σ_i P(x_i = 1) x_i − Σ_i P(y_i = 1) y_i = Δ_P,   (2.23)

where P(·) is the probability and Δ_P is the numerical probabilistic estimate of proximity;
– Probabilistic estimation by the standard deviation of the set powers:

√[(Σ_i P(x_i = 1) x_i)² − (Σ_i P(y_i = 1) y_i)²] = δ_P,   (2.24)
where δ_P is the numerical probabilistic estimate of proximity.
The use of these functional binary relationships makes it easy to rank the images B_i by their proximity to the standards B_j^c and at the same time allows a numerical estimate of the proximity to be introduced. However, the recommendations available in the literature [27, 32, 83, 84] for calculating the intervals of complex logic functions from the known intervals of the logical
variables are still very contradictory and can give completely unacceptable results. This question is considered in more detail in [82].
In the human central nervous system (CNSH), decisions on optimality are usually made on the basis of a consistent preference for one of the compared variants. For a human it is a more natural way of selecting a rational alternative than formulating an objective and approaching it. When this approach is used in the CNSR, it is expedient to specify the feasible set of alternatives not by inequalities but by certain conditions of preference among the selectable variants. To solve such problems, a generalized scheme of mathematical programming can be used, moving from quantitative scales to ordinal ones. That is, it is necessary to move from models that require the assignment of functions defining the goals and limitations of the task to models that take into account the preferences of those involved in choosing a decision. This extends the range of applications of the theory of extremal problems and may prove useful in a number of situations of choice [55, 78]. The transition to problems of mathematical programming in an ordinal scale (MPOS), generalized mathematical programming (GMP) and multistage generalized mathematical programming (MGMP) is described in more detail in [27]. In this case, when choosing the optimal solution of a system of logical equations, part of the attributes of a logical variable can be linguistic expressions describing preferences in the form of, for example, scores formed by analyzing the opinions of decision makers, which gives a fundamental opportunity to order the preferences.
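The proximity estimates (2.21)–(2.24) can be computed directly once the compared sets and their probability attributes are given. The sketch below is a minimal illustration under assumed toy data; an abs() is added inside the square roots only so that the toy example stays defined, since the formulas as written can produce a negative radicand.

```python
# Sketch of the set-proximity estimates (2.21)-(2.24) used to rank images
# against a reference; x and y are indicator values of the compared sets and
# p_x, p_y their probability attributes. Data are illustrative only.
from math import sqrt

def delta(x, y):                      # (2.21) difference of set powers
    return sum(x) - sum(y)

def delta_std(x, y):                  # (2.22) "standard deviation" variant
    return sqrt(abs(sum(x) ** 2 - sum(y) ** 2))

def delta_p(x, y, p_x, p_y):          # (2.23) probability-weighted difference
    return sum(p * xi for p, xi in zip(p_x, x)) - sum(p * yi for p, yi in zip(p_y, y))

def delta_p_std(x, y, p_x, p_y):      # (2.24) probability-weighted std variant
    sx = sum(p * xi for p, xi in zip(p_x, x))
    sy = sum(p * yi for p, yi in zip(p_y, y))
    return sqrt(abs(sx ** 2 - sy ** 2))

x, y = [1, 1, 0, 1], [1, 0, 0, 1]
p_x, p_y = [0.9, 0.8, 0.0, 0.7], [0.95, 0.0, 0.0, 0.6]
print(delta(x, y), delta_p(x, y, p_x, p_y))
```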
2.8 Using Binary Relations When Decision Making

Decision making (DM) on controlling the behavior of a group of interacting SEMS as dynamic objects is determined by the structural approaches to organizing situational control of a group of robots and by the methods of situational control used [85]. In this case, control consists in making control decisions as problems arise in solving the group task in a dynamically changing environment of choice [60]. Such control can be attributed to the optimization problems of situational control [40, 86]. The quality of situational control in solving various problems of group control depends on the structural organization of control and on the decision making methods. Among the control structures the following can be distinguished: decentralized without a robot leader; decentralized with a leader; centralized with an operator; combined with an operator and without a leader; and combined with an operator and a leader [87]. The choice of the control type is determined by the available technical means and by the type of group task to be solved. Moreover, the choice of a decision making method for situational control of a SEMS group largely depends on the type of group control scheme.
2.8.1 The Tasks of Situational Control of a Group of Dynamic Objects

A typical task of situational control is the problem of the optimal transition of a group of controlled objects from a certain initial variety of points of space to a final variety, and the dimensions of these varieties can be arbitrary if the phase spaces of the controlled objects themselves are taken into account. It is completely obvious that in technical systems not only the control parameters but also the coordinates of the control objects must obey certain physical restrictions. Traditionally, the following optimal control problems for such objects are considered: problems in which each object can be described by a system of ordinary differential equations [77]; problems of optimal hitting of moving points of space by the objects; problems in which the movement of the pursued objects (moving points of space) is not known in advance and information about them arrives only over time; and problems in which the pursued objects are controllable and their movement is described by a system of differential equations [88, 89]. Finally, in situational control of intelligent systems with expedient behavior, which include smart electromechanical systems (SEMS) [70], control can consist in choosing the best solution from a variety of alternative solutions under a fuzzy, and not necessarily probabilistic or statistical, description of the dynamics of the control objects and the environment, and with a non-scalar indicator of the quality of the control system. Such problems belong to decision theory [78], i.e. to problems of decision making on the optimality of the system. For cases where it is possible to specify a scale—an objective function whose value determines the solution—the theory and methods of mathematical programming are known and well developed [74]; they allow a qualitative and numerical analysis of the crisp solution optimization problems that arise in this case. Taking into account the uncertainties that may arise when solving decision problems with a fuzzy mathematical description of complex systems, including a SEMS group operating in a poorly formalized environment, these mathematical programming methods can be used in such cases with more or less success [71]. In the simplest decision making situation, the PDM pursues a single goal and this goal can be formally defined as a scalar function, i.e. a quality criterion of choice. In this case the values of the quality criterion can be obtained for any admissible set of argument values. It is also assumed that the domain of definition of the selection parameters, i.e. of the components of the selected vector, is known or, in any case, that for any given point it can be established whether it is an acceptable choice, i.e. whether it belongs to the domain of definition of the quality criterion. In such a situation the problem of choosing a solution can be formalized and described by an MP model. In other cases one should use MPOS, GMP or MGMP [27].
2.8.2 Generalized Description of the Task of Situational Control of the SEMS Group

In general, the solution of the group problem of situational control of a group of SEMS is the synthesis of a search algorithm, i.e. of an ordered set w ⊂ U of the best combinations of control laws of each member of the group, selected from the set of alternative combinations of controls U(t_k) on the basis of a quality estimate Q built with the system of preferences E and the environment of choice O(t_k) [71]:

w ⊂ 2^U ∗ Q^U,   (2.25)

where: 2^U is the set of all subsets of U, Q^U is the set of all quality estimations (of tuples from 2 to |U|), and ∗ is the sign of the Cartesian product.
To make effective decisions in a particular situation in the environment of selection O(t_k), changing a group of cooperating robots in the best way, the people controlling the computer programs—the decision makers—must follow certain principles or rules containing the fundamental requirements for effective control. The most important of these are: expertise; the ability to make decisions in the absence of precedents; the existence of links between situational variables, when all the factors of the situation are integrated, form a system and affect each other; and the dual influence of factors, when situational factors have different and sometimes even contradictory characteristics [71]. To implement these principles, certain decision making methods have been developed for situational control of dynamic objects [27]. Most often, situational control uses methods of system and situational analysis, factor and cross-factor analysis, genetic analysis, the diagnostic method, the expert-analytical method, methods of analogies, morphological analysis and decomposition, methods of simulation, game theory, etc. However, the greatest effect and quality of control are achieved when a system of methods is applied as a whole, which allows the control object to be seen from all sides and miscalculations to be avoided. One of the effective methods in such a system can be decision making based on the use of an equivalence relation, which has the properties of reflexivity, symmetry and transitivity [90]:

O_i ∼ O_c,   (2.26)

where: O_i is the evaluated solution and O_c is the reference solution. To evaluate the quality of decision making in relation (2.26), a measure of proximity δ_ij of the decision to the reference one must be introduced. Then, instead of relation (2.26), the binary relation

O_i δ_ij O_j   (2.27)

is used.
2.8.3 Mathematical Methods for Using Binary Relations in Decision Making

In the process of searching for the best decision using binary relations, the following basic mathematical problems have to be solved consecutively: determining the adequacy of the mathematical models of the control objects and the environment, constructing the set of accepted decisions O_i and reference decisions O_c, constructing the set of binary relations δ_ij, and calculating the values characterizing these binary relations.
When constructing a mathematical model, the researcher usually takes into account only the factors most significant for achieving the set control goals. At the same time, the adequacy of the model depends on the control goals and on the control quality criteria adopted for optimization. Building a perfectly adequate model is fundamentally impossible due to the practical impossibility of taking into account the infinite number of parameters of the original object. As a rule, the behavior of the SEMS group in the environment of choice is not fully defined. Therefore, when searching for optimal or best solutions for situational control of the SEMS group, fuzzy mathematical models of the dynamic objects and of the functioning environment are usually used, among which one can distinguish Logical-Interval (LIM), Logical-Probabilistic (LPM) and Logical-Linguistic (LLM) models [26]. Their logical-mathematical part can be written in the following form [91]:

X(t + 1) = A ⊗ x(t) ⊕ B ⊗ u(t) ⊕ r,
Y(t) = C ⊗ x(t) ⊕ D ⊗ u(t) ⊕ h,   (2.28)

where: X(t) is the extended binary state vector, u(t) is the input vector, Y(t) is the output vector, r and h are 0/1 vectors, A, B, C, D are 0/1 matrices, ⊗ is multiplication modulo 2, and ⊕ is addition modulo 2. Moreover, each component x_i, u_j, y_k of the vectors X, u, Y should be characterized by the corresponding values of its probability, membership function or interval, the calculation of which is described in detail in [26].
The process of assessing the adequacy of such models of complex systems can in general be reduced to the problem of finding a binary relation g_i, which is an element or a subset of the set G (g_i ⊆ G) and which corresponds to the relation I_i g_o I_c under the constraints I_i q_i U_i and I_c q_i U_i (q_i ⊆ Q, i = 1, 2, …, m), where I_i and I_c are the mathematical models or images of the estimated i-th and reference model, G and Q are some fixed compact sets, g_o is the best binary relation, and U_i are the models or images of the a priori given restrictions. In this case we can assume that the plans, or the strategy and tactics, of building a model are admissible by the i-th constraint if the pairs (I_i, U_i) ∈ q_i and (I_c, U_i) ∈ q_i, and that the plan, or the strategy and tactics, of building the model is optimal if the pair (I_i, I_c) ∈ g_0, where g_0 is the preference of the decision maker, the cardinality of the set g_0 is minimal and the elements of the set are ordered according to some feature.
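One step of the mod-2 dynamic model (2.28) can be simulated directly, which is sometimes useful for checking a constructed LPM/LLM/LIM model against observed behavior. The matrices and vectors below are small made-up examples, not taken from the text.

```python
# Sketch of one step of the mod-2 dynamic model (2.28):
#   X(t+1) = A⊗x(t) ⊕ B⊗u(t) ⊕ r,   Y(t) = C⊗x(t) ⊕ D⊗u(t) ⊕ h
import numpy as np

A = np.array([[1, 0], [1, 1]]); B = np.array([[0, 1], [1, 0]])
C = np.array([[1, 1]]);         D = np.array([[0, 1]])
r = np.array([0, 1]);           h = np.array([1])

def step(x, u):
    x_next = (A @ x + B @ u + r) % 2   # ⊗ is the mod-2 product, ⊕ mod-2 addition
    y      = (C @ x + D @ u + h) % 2
    return x_next, y

x, u = np.array([1, 0]), np.array([0, 1])
x, y = step(x, u)
print(x, y)
```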
The relations g and q can be expressed as systems of logical equations [27]:

C G = E   (2.29)

or

D Q = Y.   (2.30)

The vectors G and Q have dimension N and in the most general case can have N = 2^n − 1 components of the form:

⟨g_1, g_2, …, g_n, g_1 g_2, g_1 g_3, …, g_{n−1} g_n, g_1 g_2 g_3, …, g_{n−2} g_{n−1} g_n, …, g_1 g_2 … g_{n−1} g_n⟩,
⟨q_1, q_2, …, q_n, q_1 q_2, q_1 q_3, …, q_{n−1} q_n, q_1 q_2 q_3, …, q_{n−2} q_{n−1} q_n, …, q_1 q_2 … q_{n−1} q_n⟩.

The components g_i of the vector G are logical variables that characterize the proximity of the objects and relations of the constructed model I_i to the elements and relations of the ideal model I_c. The components q_i of the vector Q are logical variables that characterize the correspondence of the objects and relations of the constructed model I_i to the elements and relations of the constraint model U_i. The matrices C and D consist of identification strings c_i and d_i having the dimension of the vectors G and Q and containing elements 0 and 1 in the specified order. For example,

c_1 = |0 0 1 1 … 0 1|   (2.31)
Vector E has the dimension of vector G, its ei components can take the value 1 with some probabilities Pi (ei ). The vector Y has the dimension of the vector Q and its yi components can take the value 1 with some probabilities Pi (yi ). The values of these probabilities are calculated using the probabilities of the components gi and qi. . In this case, the probability values can be calculated approximately according to the algorithm described in [33]. Mathematical methods for solving problems of determining go are described in detail in [27]. After choosing an adequate model, it is necessary to start calculating the quantities characterizing the binary relations gij . In the process of solving this problem for SEMS group control systems, in which control objects are described by LPM, LLM or LIM, measures of the quality of decisions made are also set in the form of binary relations describing the preferences of the decision maker, in the form of, for example, point scores formed on the basis of analysis of the opinion of experts in a given area. Then the estimation of the proximity of the decision to the reference (optimal) one is reduced to the problem of mathematical programming in ordinal scales [78].
In contrast to the MPOS problem, optimization by the generalized mathematical programming (GMP) method corresponds to choosing the decision to be made by comparing its characteristics, rather than its parameters, with the characteristics of an ideal solution [78]. Mathematical methods for solving these problems are also described in detail in [27]. At the same time, the search for the optimal solution can be automated by artificial intelligence software.
2.9 The Influence of Emotions on Decision Making

In order for an intelligent robot (IR) to choose the most effective action in a specific situation [40], it must have information about the parameters of the environment and of its own state and must be able to analyze them. At the same time, for the IR to act expediently in a changing and unfamiliar environment without human participation, it is necessary to endow the IR with properties similar to the mental properties of animals. It is obvious that the psyche of robots can be implemented on a different hardware and software base than the psyche of animals. Nevertheless, it will still be a psyche in the sense that most psychologists put into this concept [41]. The limitations of sensory capabilities and computing power can be attributed to the notorious limited cognitive abilities of the IR [42]. In particular, some mechanisms for taking into account the influence of emotions on decision making have been proposed and implemented on models [92, 93]. Let us consider one of the possible approaches to taking into account the emotions of "pleasure" and "self-preservation" when a decision is made by a mobile IR.
2.9.1 The Emotional Component of the Decision Making System

Emotions and temperament are among the main determinants of the behavior of highly organized organisms [45]. Therefore, it is quite natural to take into account the influence of emotions on the behavior of the IR under conditions of incomplete information about the environment of choice. Emotions can be an extremely important mechanism determining the style and manner of behavior of an IR. The mechanism of emotions is important when assessing the current situation and making decisions; it constitutes a "physiological layer" in the IR control system [94]. In [46], an architecture of a robot automatic control system (ACS) is proposed that implements simple mechanisms of emotions. The emotional component of the robot ACS is based on the so-called need-information theory of emotions by Simonov [95]. It is assumed that emotions are an assessment of the current need (its quality and value) and of the possibility of
satisfying it. In particular, the following formula representing the totality of factors affecting the occurrence and nature of emotion in a person has become widely recognized [96]:

D_e = f(E(I_n − I_a)),   (2.32)

where: D_e is the degree of emotion; E is the energy and quality of the actual need; (I_n − I_a) is the assessment of the probability (possibility) of satisfying the need on the basis of innate and ontogenetic experience; I_n is information about the means predictably necessary to meet the need; I_a is information about the means that the subject actually has [92].
Equation (2.32) cannot be used to obtain specific quantitative values; it only illustrates the principle of the formation of positive or negative emotions of various strengths [97]. However, based on the analysis of various simulated mechanisms of emotions, it is possible to formulate some relations suitable for taking into account the influence of emotions on decision making and control. In particular, when deciding on the choice of a route for a mobile IR, the following relations can be used to evaluate the strength of an emotion:
– Pleasure:

I_P = k_p / (J − J_P*),   (2.33)

– Self-preservation:

I_S = k_S / (J − J_S*),   (2.34)
where: k_p is an assessment of the influence of the emotion of "pleasure" on the decision on the time optimality of the IR route, depending on the character of the IR (impatient, balanced, slow, etc.); k_S is an assessment of the influence of the emotion of "self-preservation" on the decision on the optimality of the accident risk on the IR route, depending on the character of the IR (fearful, balanced, bold, etc.); J is the calculated value of the optimization functional without logical-probabilistic and logical-linguistic constraints from the environment; J_P* is the calculated value of the optimization functional taking into account the logical-probabilistic and logical-linguistic constraints from the environment affecting the sense of "pleasure"; and J_S* is the calculated value of the optimization functional taking into account the logical-probabilistic and logical-linguistic constraints from the environment affecting the sense of "self-preservation".
The value of the functional J_T minimizing the travel time along the route M_v (v is the route number) without taking restrictions into account can be calculated approximately by the formula [98]:

J_T(M_v) = Σ_{i,j} a (l_ij / v_ij) + Σ_{i,j} b (φ_ij / w_ij) + Σ_{i,j} c τ_ij,   (2.35)
where: a, b, c are preference coefficients; v_ij, w_ij are the linear and angular speeds of movement; and τ_ij are delays at intersections related to the environment, for example to the humidity h_i, the temperature t_j and/or the number of pedestrians at the intersection δ_jo. As shown in [98], the pairs (i, j) are elements of an ordered set that characterizes the i,j-th sections of the considered route M_v from the starting point to the end point.
When determining the minimum value of the functional J_T(M_v) by a formula of the form (2.35), that is, when solving the optimization problem, environmental constraints must be taken into account, for example of the following types:
– Climatic: snow, rain, temperature, visibility on the road, etc., which can be entered in the form of logical-probabilistic and logical-linguistic equations, for example:

h_1 ⊗ t_1 → v_1 ⊗ w_1;  P(h_1 = 1) = 0.8;  μ(t_1) = 0.9;  P(v_1 = 1) = 0.8;  μ(w_1) = 0,   (2.36)

where h_1 is the logical variable "dry", which corresponds to humidity from 0 to 40%; t_1 is the logical variable "hot", which corresponds to a temperature from 20 to 40 °C; v_1 is the logical variable "fast", which corresponds to a speed from 40 to 60 km/h; and w_1 is the logical variable "sharp", which corresponds to an angular velocity from 4 to 8 degree/s.
– Traffic: traffic density, the number of lanes, the complexity of intersections, the number of pedestrians at intersections, etc., which can also be introduced in the form of similar logical-probabilistic and logical-linguistic equations limiting speeds and delays.
– Road conditions: pavement (asphalt, crushed stone, soil), irregularities (potholes, pits), bends of the road, etc., which can also be introduced in the form of similar logical-probabilistic and logical-linguistic equations limiting speeds and delays.
As shown in [99], constraints of type (2.36) can be reduced to logical-interval constraints. In this case the value of the functional J_T(M_v) calculated from an equation of type (2.35) will be an interval. The choice of a specific value of J_T(M_v) (minimum, maximum or average) depends on the preference of the expert solving the optimization problem. However, the choice of the optimal route also depends on the risk of an accident on the route, characterized by the functional J_R(M_v). The determination of the value of the functional J_R(M_v) minimizing the risk of movement along the route M_v taking into account logical-probabilistic and logical-linguistic constraints is described in [100], where expert assessments of the degree of accident risk at intersections and on sections between intersections are used, based on the analysis of statistical data and/or the results of computer simulation of traffic on the sections under the climatic, traffic and road conditions described above. The minimized functional then has the form:

J = k_T J_T(M_v) + k_R J_R(M_v),   (2.37)
where: k_T, k_R are expert preference coefficients characterizing the estimates of driving speed and accident risk.
The strength of the emotion of pleasure I_P depends in a complex way on the signals x_1, …, x_n of the robot's sensory system that affect the feeling of pleasure: I_P = f_x(x_1, …, x_n). Moreover, the signals x_1, …, x_n are generally not orthogonal and have different scales of dimension, for example oxygen level, pressure level, temperature, etc. Similarly, the strength of the self-preservation emotion I_S depends in a complex way on the signals y_1, …, y_n of the robot's sensory system that affect the sense of self-preservation: I_S = f_y(y_1, …, y_n). The signals y_1, …, y_n in general are also not orthogonal and have different scales of dimension, for example illumination, precipitation level, ambient temperature, etc. It is currently not possible to identify the functional dependencies of I_P and I_S on the signals of the IR sensor system. However, to estimate the values of I_P and I_S, a number of algorithms can be proposed that are suitable for use in procedures for optimizing the routes of movement of a mobile IR. For example, if the strengths of the emotions I_P and I_S can be determined by mathematically processing the statistics of observations of the work of the IR or the results of computer simulation, then the influence of emotions on the choice of the optimal route can be taken into account through the emotion strengths I_Pij and I_Sij on the ij-th sections of the route of movement, obtaining plausible values of the functional characterizing the optimality of the various routes of movement M_v of the IR:
– taking into account the feeling of pleasure:

J_P*(M_v) = J − k_p Σ I_Pij,   (2.38)

– taking into account the sense of self-preservation:

J_S*(M_v) = J − k_s Σ I_Sij,   (2.39)

– taking into account the feelings of pleasure and self-preservation:

J*(M_v) = J − k_p Σ I_Pij − k_s Σ I_Sij = J − J_p − J_s,   (2.40)

where: k_p, k_s are the preference coefficients of the experts characterizing the pleasure and safety assessments.
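The emotion-adjusted route functionals (2.38)–(2.40) reduce to simple weighted sums once the base functional of a route and the per-section emotion strengths are available. The sketch below illustrates this with invented numerical values; the function and variable names are assumptions of the sketch.

```python
# Sketch of (2.38)-(2.40): J is the base functional of a route, I_P[ij] and
# I_S[ij] are emotion strengths on its sections, k_p and k_s the expert
# preference coefficients. All values are illustrative.
def adjusted_functionals(J, I_P, I_S, k_p, k_s):
    J_p = k_p * sum(I_P)                  # pleasure contribution
    J_s = k_s * sum(I_S)                  # self-preservation contribution
    return {
        "J_P*": J - J_p,                  # (2.38)
        "J_S*": J - J_s,                  # (2.39)
        "J*":   J - J_p - J_s,            # (2.40)
    }

print(adjusted_functionals(J=2.34, I_P=[0.4, 0.1, 0.3], I_S=[0.2, 0.5], k_p=0.05, k_s=0.1))
```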
2.9.2 Estimates of the Strength of Emotion Based on the Use of Sensory Signals

It is currently not possible to obtain the exact functional dependencies of I_p and I_s on the signals of the sensor system of the IR. Moreover, it is impossible to separate in advance the signals x_1, …, x_n from the signals y_1, …, y_n among all the sensor signals of the
IR. Moreover, the same sensory signals can often affect both I_p and I_s. This problem can be solved to some extent by using computer modeling. For example, at the beginning of the search for the estimates of the emotion strengths I_p and I_s on each i-th section of the route of movement M_v, all available sensory signals z_1, z_2, …, z_n can be selected, forming, as in a person, for example the following robot senses: vision in the form of a set E; hearing in the form of a set R; sense of smell in the form of a set S; taste in the form of a set U; touch in the form of a set V. All the selected signals are then translated into dimensionless quantities with a range of [0; 1] and quantized, i.e. the values Δz_j are obtained. Next, it is necessary to enter the quantum weights a_j for I_Pi and b_q for I_Si, using the expert's own experience of IR operation or experience borrowed from the literature. After that it is necessary to calculate:

I_Pi = Σ_j a_j Δz_j,   (2.41)

I_Si = Σ_q b_q Δz_j.   (2.42)
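Relations (2.41)–(2.42) are plain weighted sums of the quantized signals; a minimal sketch under assumed weights and signal values is given below for clarity.

```python
# Sketch of (2.41)-(2.42): section-wise emotion strengths as weighted sums of
# quantized, normalized sensor signals Δz_j; the weight sets a_j and b_q pick
# out the signals assumed to affect "pleasure" and "self-preservation".
# Weights and signals below are invented for illustration.
dz = [0.2, 0.7, 0.1, 0.9]          # Δz_j, already scaled to [0, 1] and quantized
a  = [0.5, 0.0, 1.0, 0.0]          # quantum weights for the pleasure estimate
b  = [0.0, 0.8, 0.0, 0.3]          # quantum weights for the self-preservation estimate

I_Pi = sum(aj * dzj for aj, dzj in zip(a, dz))   # (2.41)
I_Si = sum(bq * dzj for bq, dzj in zip(b, dz))   # (2.42)
print(I_Pi, I_Si)
```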
Now it is necessary to determine the values of the functionals on the entire route: J(M_v), J_P, J_S, J_P*(M_v), J_S*(M_v) and J*(M_v). For a mobile IR moving along the route M_v, when calculating J(M_v), relations (2.35)–(2.37) are used with the maximum values of v_ij, w_ij allowed for the given section and the minimum τ_ij; when calculating J*(M_v), relations (2.35)–(2.42) are used. When calculating J(M_v), minimum risk values can be used to determine J_R(M_v), while when calculating J*(M_v) the procedure of expert assessment of accident risks on the sections of the route M_v described in [100] can be used to obtain J_R(M_v). Next, the estimates k_P1 of the influence of the emotion of "pleasure" are calculated using relation (2.41), and the estimates k_S1 of the influence of the emotion of "self-preservation" using relation (2.42). The calculated value of J*(M_v) is then memorized.
The next step is to select from the signals Δz_j the signals Δx_j that most affect I_p and the signals Δy_q that most affect I_s. To do this, an arbitrary set of Δx_j and Δy_q is selected. For this set, the new values I_P2 = Σ_j a_j Δx_j, I_S2 = Σ_q b_q Δy_q, k_P2, k_S2 and J_1*(M_v) are calculated, and J_1*(M_v) is memorized. The set of Δx_j and Δy_q is then changed and for each set the minima of the functional J_2*(M_v), J_3*(M_v) and so on are calculated and stored, until all combinations of the signals available in the robot's sensor system are exhausted. Next, the minimum of all the memorized functionals J_o(M_v) is determined and the corresponding sets of Δx_j and Δy_q are saved.
The last step is to select, from all available values of the quantum weights a_j and b_q, those most influential on the strengths of the emotions I_p and I_s. To do this, an arbitrary set of a_j and b_q is selected. For this set, the values I_P2* = Σ_j a_j* Δx_j, I_S2* = Σ_q b_q* Δy_q, k_P2*, k_S2* are calculated, together with the value of
J_01*(M_v), which is memorized. Then the set of a_j and b_q is changed and for each set J_02*(M_v), J_03*(M_v) and so on are calculated and memorized, until all combinations of the values a_j and b_q available in the robot's memory are exhausted. Next, the minimum of all the stored functionals J_0i*(M_v) is determined and the corresponding set a_0 and b_0 is saved. The selected values Δx_j, Δy_q, a_0 and b_0 are considered the most influential on the strength of the emotions I_p and I_s, and the route M_v corresponding to the obtained minimum of the functional is taken as optimal, taking into account the influence of emotions on decision making.
During the operation of the IR, the environmental conditions and the set and values of the sensory signals z_1, z_2, …, z_n change. Therefore, the procedure for searching for the most influential values Δx_j, Δy_q, a_j and b_q must be repeated and its results recorded in the database along with the current environmental parameters. Then, after a certain period of operation of the IR, a sufficient number of possible "reference" situations will accumulate, from which the one closest to the current situation can be chosen using the simple algorithms described in [49, 101], and the corresponding values of k_p and k_s can be selected. This significantly accelerates the determination of the minimum of the functional (2.40) and of the route M_v corresponding to the obtained minimum.
The disadvantage of this method of accounting for the influence of emotions on decision making is the weak conditioning of the influence of the sensory system signals directly on the emotions. Numerous observations of the emergence of various emotions in a person indicate that they arise under the influence of various images and their combinations in the environment, as well as from self-analysis of one's own state. At the same time, the various emotional responses (decisions) of the human central nervous system are associated with genetic behavior programs embedded in memory before birth and transmitted from parents from generation to generation [102]. Examples of such genetic programs are the feelings of self-preservation, procreation, hunger, curiosity, etc.
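The search over signal subsets and weight sets described in this subsection is essentially an exhaustive enumeration keeping the combination with the smallest functional. The sketch below shows only the skeleton of such a search; the evaluate() callback, which would compute J*(M_v) from relations (2.35)–(2.42), and all names are assumptions of the sketch.

```python
# Skeleton of the exhaustive search over signal subsets: for every subset of
# candidate signals, compute the route functional J* and keep the combination
# giving the smallest value. A second, analogous pass over weight sets a_j, b_q
# would follow the same pattern.
from itertools import combinations

def best_signal_subset(signals, evaluate):
    """signals: list of signal indices; evaluate(subset) -> J*(M_v) for that choice."""
    best_subset, best_value = None, float("inf")
    for k in range(1, len(signals) + 1):
        for subset in combinations(signals, k):
            value = evaluate(subset)
            if value < best_value:
                best_subset, best_value = subset, value
    return best_subset, best_value

# Toy usage: the pretend functional is minimized when signals 1 and 3 are chosen
subset, value = best_signal_subset(
    [0, 1, 2, 3],
    evaluate=lambda s: abs(len(s) - 2) + (0 if s == (1, 3) else 0.5))
print(subset, value)
```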
2.9.3 Estimates of the Strength of Emotion Based on the Analysis of Images Obtained After Processing Sensory Signals

To eliminate this drawback, the determination of the emotion strengths I_p, I_s and of the values of the coefficients k_P and k_s can be based not on normalized sensory signals but on the images O_Pi surrounding the IR that affect I_p and the images O_Sj that affect I_s. Obviously, these images should be formed from the sensory signals. At the same time, there remains the problem of isolating, from all the images in the IR's environment of choice, the images O_Pi that most strongly affect the strength of emotion I_p and the images O_Sj that most strongly affect the strength of emotion I_s. In addition, it is necessary to arrange the O_Pi images in a row P of the classification of images affecting the sense of pleasure, and the O_Sj images in a row S of the classification of images affecting the sense of self-preservation, by
introducing rules for the formation of these rows. As a result, the rows will contain the sets of significance coefficients of the selected samples.
When forming images from sensory signals, the algorithm described in [103] can be used, which consists of the following sequence of operations.
1. Quantization of the surrounding space in the field of view of the robot's sensor system, with names assigned to the resulting pixels in the form of pixel numbers.
2. Fuzzification of the sensory information for each pixel of the surrounding space and formation, in the memory of the central nervous system of the robot, of a display of the surrounding space in the form of pixels with their coordinates and fuzzified data.
3. Formation, from the pixels with their coordinates and fuzzified data, of images in the display of the surrounding space for each sense organ of the robot (Vs—vision sensor, Hs—hearing sensor, Os—olfaction sensor, Fs—flavor sensor and Ts—touch sensor). After the operation of combining pixels into image sets ImVs(i), ImHs(i), ImOs(i), ImFs(i), ImTs(i), there will be image contours C_Vs, C_Hs, C_Os, C_Fs and C_Ts in each space of the senses. Each contour is assigned a name. As a result, there will be five sets or maps K_Vs, K_Hs, K_Os, K_Fs and K_Ts with sets of image contours ImVs(i), ImHs(i), ImOs(i), ImFs(i), ImTs(i), where i = 1, 2, …, n.
4. Formation of images by combining the images from different sense organs, consisting in operations performed on the images (the sets K_Vs, K_Hs, K_Os, K_Fs and K_Ts) of the type of intersection, union or symmetric difference of ordered sets, and attribution of the result to a particular reference image stored in the database.
As a result of using this algorithm, semantic data about the images O_Pi and O_Sj surrounding the IR are generated in the IR database, on the basis of which behavioral decisions can be made by the robot [104] using typical behavioral algorithms stored in the robot's knowledge base. However, after the database of semantic data about the surrounding images has been formed, it may turn out that two or more images are partially or completely present in the same place of space. Therefore, the semantic database must be adjusted in order to exclude the detected collisions. Such an adjustment is closely related to the separation, within the semantic data, of the images O_Pi which most strongly affect the emotion (feeling) of pleasure from the images O_Sj which most strongly affect the emotion (feeling) of self-preservation. It is usually difficult to separate O_Pi from O_Sj at the first stage of the iterative assessment of the influence of emotions on the decision on the optimality of the route. Moreover, the same images often affect both the sense of pleasure and the sense of self-preservation, although to varying degrees. Therefore, both sets of images O_P and O_S are initially combined into one set O_PS = O_P ∪ O_S.
After the set of images O_Pi and O_Sj in the IR environment has been formed in the database, the following algorithm for finding the optimal route can be used.
1. Determination of the starting and ending points of movement on the terrain map, for example as shown in Fig. 1.20.
2. Determination of all traffic routes from the starting point to the end point (see Fig. 1.20).
3. Determination of all traffic sections on all routes (see Fig. 1.20).
4. For each route M_v, calculation of the value of the functional J_T(M_v) using expression (2.35).
5. For each route M_v, calculation of the value of the functional J_R(M_v) using the method of logical-probabilistic or logical-linguistic estimates of the minimum risk described in [100].
6. For each route M_v, calculation of the value of the functional J(M_v) using expression (2.37).
7. Creation of a database of standards in the form of reference strings C^϶_pij, C^϶_aij, C^϶_sij, C^϶_bij and their corresponding values of the emotion strengths I_pij and I_sij. For example:

C^϶_pij = /O_P1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 O_P20/,

where O_P1 is the image "the sea in the window on the right" and O_P20 is the image "beautiful buildings in the window on the left";

C^϶_aij = /20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1/,

where 20 = a_1 is the coefficient of significance of pleasure for the image O_P1, 1 = a_20 is the coefficient of significance of pleasure for the image O_P20, J_OP1 is the power of emotion for the O_P1 image and J_OP20 is the power of emotion for the O_P20 image;

C^϶_sij = /0 0 O_S3 0 0 0 0 0 0 O_S11 0 0 0/,

where O_S3 is the image "road repair" and O_S11 is the image "ground";

C^϶_bij = /0 0 3 0 0 0 0 0 0 0 11 0 0 0/,  I_sij = J_OS3 + J_OS11,

where 3 = b_3 is the safety significance coefficient for the O_S3 image, 11 = b_11 is the safety significance coefficient for the O_S11 image, J_OS3 is the power of emotion for the O_S3 image and J_OS11 is the power of emotion for the O_S11 image.
8. For each ij-th section of the movement, formation of the analyzed strings C^a_aij and C^b_aij, the elements of which are the probabilities or membership-function values of the images multiplied by the significance coefficients a and b. For example:

C^a_aij = /20·0.8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1·0.7/ = /16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.7/,

C^b_aij = /0 0 3·0.7 0 0 0 0 0 0 11·0.9 0 0 0/
= /0 0 2.1 0 0 0 0 0 0 9.9 0 0 0/.
9. Assignment of the analyzed strings C^a_aij and C^b_aij to one of the reference ones in accordance with the algorithm described in [15, 16], and assignment to the ij-th sections of the emotion strengths I_pij and I_sij of these standards.
10. Assignment of the values of the coefficients k_p and k_s.
11. For all possible routes M_v, calculation of the value of the functional (2.40).
12. Determination of the route M_v with the minimum value of J*(M_v), which is taken as the optimal one.
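Steps 7–9 above amount to building an element-wise product of membership values and significance coefficients and then matching the result to the nearest reference string. The sketch below illustrates this under assumed data; the Euclidean distance used for the matching is an assumption of the sketch, not the algorithm of [15, 16].

```python
# Sketch of steps 7-9: an analyzed string is built by multiplying the
# membership values of the images seen on a section by their significance
# coefficients, and is then assigned to the nearest reference string.
import math

def analyzed_string(memberships, coeffs):
    return [m * c for m, c in zip(memberships, coeffs)]

def nearest_reference(analyzed, references):
    """references: dict name -> (string, emotion strength). Returns best name."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(references, key=lambda name: dist(analyzed, references[name][0]))

coeffs      = [20] + [0] * 18 + [1]
memberships = [0.8] + [0] * 18 + [0.7]
a_string = analyzed_string(memberships, coeffs)        # -> [16.0, ..., 0.7]

refs = {"C_ref_1": ([16] + [0] * 18 + [0.7], 5), "C_ref_2": ([0] * 20, 0)}
print(nearest_reference(a_string, refs))               # -> "C_ref_1"
```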
In addition, using the above algorithm, the set of values of the coefficients a and b, as well as k_p and k_s, can be corrected. For each set, the minimum value of the functional is calculated and saved until all combinations of coefficients available to the robot are exhausted. After that, the minimum of all the stored minimum values of the functional is determined, and the set of values a, b, k_p and k_s corresponding to the obtained minimum is assigned the status of most influencing the strength of the emotion of "pleasure" and the strength of the emotion of "self-preservation". In this case, the route M_v corresponding to the obtained minimum of the functional is taken as optimal.
During the operation of the IR, the surrounding conditions change and, accordingly, so does the set of images in the environment of choice. Therefore, it is expedient to repeat the procedure of searching for the most important basic images and the values of their significance coefficients a and b, and to record them in the database along with the current environmental parameters (sensory system signals). Then, after a certain period of IR operation, a sufficient number of possible "reference" situations will accumulate, from which those closest to the current one can be chosen using the simple algorithms described in [49, 101], and the values of the emotion strengths I_pij and I_sij corresponding to the selected standards can be chosen accordingly. This significantly accelerates the determination of the minimum of the functional J*(M_v) and of the route M_v corresponding to the obtained minimum.
2.9.4 Example of Accounting for the Influence of Emotions

To demonstrate the influence of emotions on the decision to choose the optimal route, we supplement the task set out in Sect. 1.5.3 with conditions for taking into account the risk of an accident and the influence of the strengths of the emotions of "pleasure" and "self-preservation", which lead to the problem of minimizing the functional (2.40), where the coefficients k_T, k_R, k_P, k_S are set by an expert or a group of experts.
In Sect. 1.5.3, the solution of the task of searching, in a locality with N × M intersecting streets (see Fig. 1.20), for the optimal route of a vehicle from point A with coordinates x_A, y_A to point B with coordinates x_B, y_B in the shortest time
was shown, that is, finding a route corresponding to the minimum value of the functional J_T(M_v). There, the logical-probabilistic and logical-linguistic constraints were translated into logical-interval ones and, as a result, the following values were obtained: linear velocities on the sections between intersections min V = 12.7 m/s, max V = 15.1 m/s, and angular velocities at intersections min w = 5.6 degree/s, max w = 6.4 degree/s. The maximum and minimum values of the functional J_T(M_v) characterizing the travel time were also obtained there for all seven selected routes; they are shown in Table 2.1. From Table 2.1 it can be seen that route M_3 (1–3–8) is the best, since it takes the minimum travel time.
The determination of accident risks on traffic routes and the search for the route with the minimum value of the functional J_R(M_v) are outlined in [100]. For each analyzed route, a list of the intersections included in it was compiled. For each of the identified intersections, the approximate values of its parameters corresponding to the attributes of the reference strings were determined, and by fuzzifying these values the membership-function values of the attributes of the corresponding reference strings were obtained. The analyzed strings for the intersections were then classified, with degrees of accident risk corresponding to the identified standards assigned to them, and the degrees of accident risk at all intersections and the total degrees of accident risk along all routes were calculated. As a result, the following values of the functional were obtained in [100] for the seven selected routes: J_R(M_1) = 0.98; J_R(M_2) = 1.28; J_R(M_3) = 1.17; J_R(M_4) = 1.22; J_R(M_5) = 1.35; J_R(M_6) = 1.43; J_R(M_7) = 1.13. Further, using formulas (2.43) and (2.44), the minimum and maximum values of the functional characterizing the travel time taking into account the risk of an accident on the route were calculated for k_T = 0.01 1/s and k_R = 1 for each route; they are given in Table 2.2.

Table 2.1 The maximum and minimum values of the J_T(M_v) functional

№   Route M_v (intersections)   min J_T(M_v), s   max J_T(M_v), s
1   M_1 (1–2–4–8)               163.23            198.66
2   M_2 (1–4–8)                 138.5             178.59
3   M_3 (1–3–8)                 117.41            137.77
4   M_4 (1–5–3–8)               168.77            223.55
5   M_5 (1–3–7–8)               219.0             272.65
6   M_6 (1–5–7–8)               221.68            277.68
7   M_7 (1–5–6–7–8)             241.46            296.35
min J_TR(M_v) = min{k_T J_T(M_v)} + min{k_R J_R(M_v)},   (2.43)

max J_TR(M_v) = max{k_T J_T(M_v)} + max{k_R J_R(M_v)}.   (2.44)
Usually min{k_R J_R(M_v)} and max{k_R J_R(M_v)} coincide, while min{k_T J_T(M_v)} and max{k_T J_T(M_v)} do not. Therefore, the routes were ranked according to both the minimum and the maximum values of the functional. Analysis of the values of this functional showed that, taking into account the risk of an accident, route M_3 would be optimal.
To assess the strengths of the emotions of "pleasure" J_P(M_v) and "self-preservation" J_S(M_v), which affect the choice of the optimal route M_v, i.e. the value of the functional (2.40), databases of reference images are created: O^E_Pi with values J_Pi affecting J_P(M_v), and O^E_Si with values J_Si affecting J_S(M_v). The values of J_Pi and J_Si are set by experts based on their preferences. In the example under consideration, the following reference images were used:
– for J_P(M_v): O_P1 "river" with an impact value of J_P1 = 4, O_P2 "mountains" with J_P2 = 3, O_P3 "forest" with J_P3 = 2, O_P4 "low-rise buildings" with J_P4 = 1, O_P5 "factories" with J_P5 = −1, O_P6 "destroyed houses" with J_P6 = −2, O_P7 "multi-storey buildings" with J_P7 = −3, O_P8 "landfill" with J_P8 = −4;
– for J_S(M_v): O_S1 "dry asphalt" with an impact value of J_S1 = 4, O_S2 "concrete" with J_S2 = 3, O_S3 "dry, rolled dirt road" with J_S3 = 2, O_S4 "packed, compacted soil" with J_S4 = 1, O_S5 "loose, freshly poured soil" with J_S5 = −1, O_S6 "wet sand" with J_S6 = −2, O_S7 "dry sand" with J_S7 = −3, O_S8 "rolled snow" with J_S8 = −4.
Table 2.2 The minimum and maximum values of the functional characterizing the travel time, taking into account the risk of an accident

№ | Route M_v (with intersection numbers) | min J_TR(M_v) | max J_TR(M_v)
1 | M1 (1–2–4–8) | 2.61 | 2.97
2 | M2 (1–4–8) | 2.67 | 3.07
3 | M3 (1–3–8) | 2.34 | 2.55
4 | M4 (1–5–3–8) | 2.91 | 3.45
5 | M5 (1–3–7–8) | 3.54 | 4.08
6 | M6 (1–5–7–8) | 3.65 | 4.21
7 | M7 (1–5–6–7–8) | 3.54 | 4.09
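As a cross-check of Table 2.2, the combined functional of (2.43)–(2.44) can be evaluated directly from the travel times of Table 2.1 and the accident-risk values J_R(M_v) from [100]. The following Python sketch is only an illustration on our part; the variable names are not taken from the programs cited in this chapter.

```python
# Ranking the routes by the combined time-and-risk functional (2.43)-(2.44),
# with k_T = 0.01 1/s and k_R = 1, using min/max J_T from Table 2.1 and J_R from [100].
k_T, k_R = 0.01, 1.0

routes = {  # route: (min J_T, s; max J_T, s; J_R)
    "M1": (163.23, 198.66, 0.98), "M2": (138.5, 178.59, 1.28),
    "M3": (117.41, 137.77, 1.17), "M4": (168.77, 223.55, 1.22),
    "M5": (219.0, 272.65, 1.35),  "M6": (221.68, 277.68, 1.43),
    "M7": (241.46, 296.35, 1.13),
}

j_tr = {
    name: (k_T * jt_min + k_R * jr, k_T * jt_max + k_R * jr)   # Eqs. (2.43), (2.44)
    for name, (jt_min, jt_max, jr) in routes.items()
}

# Rank the routes by the minimum value of the combined functional
for name, (lo, hi) in sorted(j_tr.items(), key=lambda kv: kv[1][0]):
    print(f"{name}: min J_TR = {lo:.2f}, max J_TR = {hi:.2f}")
# M3 comes out first (2.34, 2.55), in agreement with Table 2.2.
```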
Each reference image in the databases is matched with identification strings C^э_Pi and C^э_Si, whose elements correspond to the reference images O_Pi and O_Si, ordered in descending order of their impact values J_Pi and J_Si. A 1 in any position of a string indicates the presence of the corresponding reference image, and a 0 indicates its absence. In this example, eight O_Pi reference images and eight O_Si reference images were used; therefore, each reference database contained 28 reference identification strings.
A fragment of the database of reference identification strings for "pleasure":

C^э_P1 = /1 0 0 0 0 0 0 0/ − J_P1 = 4;
C^э_P2 = /0 1 0 0 0 0 0 0/ − J_P2 = 3;
C^э_P3 = /0 0 1 0 0 0 0 0/ − J_P3 = 2;
C^э_P4 = /0 0 0 1 0 0 0 0/ − J_P4 = 1;
C^э_P5 = /0 0 0 0 1 0 0 0/ − J_P5 = −1;
C^э_P6 = /0 0 0 0 0 1 0 0/ − J_P6 = −2;
C^э_P7 = /0 0 0 0 0 0 1 0/ − J_P7 = −3;
C^э_P8 = /0 0 0 0 0 0 0 1/ − J_P8 = −4;
C^э_P9 = /1 1 0 0 0 0 0 0/ − J_P9 = 3 + 4 = 7;
C^э_P10 = /1 0 1 0 0 0 0 1/ − J_P10 = 4 + 2 + (−4) = 2;
C^э_P11 = /0 0 1 0 0 0 0 1/ − J_P11 = 2 + (−4) = −2;
C^э_P12 = /0 0 1 1 0 0 0 0/ − J_P12 = 2 + 1 = 3;
C^э_P13 = /0 0 0 0 0 0 1 1/ − J_P13 = (−3) + (−4) = −7;
C^э_P14 = /0 0 0 1 0 0 1 0/ − J_P14 = 1 + (−3) = −2;
C^э_P15 = /0 0 1 0 1 0 0 0/ − J_P15 = 2 + (−2) = 0;
C^э_P16 = /0 0 0 0 1 1 0 0/ − J_P16 = (−1) + (−3) = −4.

A fragment of the database of reference identification strings for "self-preservation":

C^э_S1 = /1 0 0 0 0 0 0 0/ − J_S1 = 4;
C^э_S2 = /0 1 0 0 0 0 0 0/ − J_S2 = 3;
C^э_S3 = /0 0 1 0 0 0 0 0/ − J_S3 = 2;
C^э_S4 = /0 0 0 1 0 0 0 0/ − J_S4 = 1;
C^э_S5 = /0 0 0 0 1 0 0 0/ − J_S5 = −1;
C^э_S6 = /0 0 0 0 0 1 0 0/ − J_S6 = −2;
C^э_S7 = /0 0 0 0 0 0 1 0/ − J_S7 = −3;
C^э_S8 = /0 0 0 0 0 0 0 1/ − J_S8 = −4;
C^э_S9 = /0 1 1 0 0 0 0 0/ − J_S9 = 3 + 2 = 5;
C^э_S10 = /0 0 1 0 0 0 0 1/ − J_S10 = 2 + (−4) = −2;
C^э_S11 = /0 0 1 0 0 1 0 1/ − J_S11 = 2 + (−2) + (−4) = −4;
C^э_S12 = /0 0 0 1 0 1 0 0/ − J_S12 = 1 + (−2) = −1;
C^э_S13 = /1 0 0 0 1 0 0 0/ − J_S13 = 4 + (−1) = 3;
C^э_S14 = /1 0 0 0 0 0 1 0/ − J_S14 = 1 + (−3) = −2;
C^э_S15 = /0 0 1 0 1 0 0 0/ − J_S15 = 2 + (−2) = 0;
C^э_S16 = /1 0 0 0 0 0 1 0/ − J_S16 = 4 + (−3) = −1.

In this example, seven routes are analyzed: M1 (1–2–4–8); M2 (1–4–8); M3 (1–3–8); M4 (1–5–3–8); M5 (1–3–7–8); M6 (1–5–7–8); M7 (1–5–6–7–8), containing a
different number of sections of length l_ij. The total number of traffic sections is N = 13 (l_12, l_13, l_14, l_15, l_24, l_37, l_38, l_48, l_53, l_56, l_57, l_67, l_78). After analyzing the images in each section, identification strings are compiled that characterize them and contain the same number of elements as the reference strings. However, unlike the reference strings, instead of elements equal to 1 (indicating the presence of the corresponding image) or 0 (indicating its absence), they contain the values of probabilities or membership functions (depending on the image classification method used) that characterize the presence of the corresponding images:
– for l_12: C^a_P12 = /0.8 0.9 0.1 0.2 0.1 0.1 0.05 0.05/; C^a_S12 = /0.1 0.9 0.8 0.05 0.1 0.06 0.1 0.01/;
– for l_13: C^a_P13 = /0.9 0.1 1 0.05 0.1 0.05 0.1 0.9/; C^a_S13 = /0.05 0 0.9 0.1 0.1 0 0.05 0.85/;
– for l_14: C^a_P14 = /0.1 0.1 0.95 0 0.05 0.1 0 0.9/; C^a_S14 = /0 0.1 0.9 0 0.05 0.9 0.1 0.85/;
– for l_15: C^a_P15 = /0.1 0.05 0.9 0.9 0 0.1 0 0.1/; C^a_S15 = /0.1 0.05 0 0.9 0.95 0.1 0.05/;
– for l_24: C^a_P24 = /0.9 0 0.1 0.03 0.05 0 0.2 0/; C^a_S24 = /0.9 0.1 0.05 0 0.1 0.1 0.1 0/;
– for l_37: C^a_P37 = /0.05 0 0.1 0.1 0.9 0.8 0 0/; C^a_S37 = /0.8 0.1 0.05 0 0.1 0 0.9 0.1/;
– for l_38: C^a_P38 = /0.1 0 0.9 0 0.95 0.05 0.1 0/; C^a_S38 = /0.1 0 0.95 0 0.9 0.05 0.1 0/;
– for l_48: C^a_P48 = /0.9 0.1 0 0.05 0.05 0 0.1 0/; C^a_S48 = /0.95 0 0.1 0.05 0 0.1 0 0/;
– for l_53: C^a_P53 = /0.1 0 0.8 0 0.9 0 0.1 0/; C^a_S53 = /0.05 0 0.9 0 0.85 0 0.1 0/;
– for l_56: C^a_P56 = /0.05 0 0.1 0 0 0 0.9 0.9/; C^a_S56 = /0.95 0.05 0 0 0.9 0 0.05 0/;
– for l_57: C^a_P57 = /0.1 0.05 0 0.85 0 0.1 0.9 0/; C^a_S57 = /0.9 0 0.05 0.1 0 0 0.9 0.1/;
– for l_67: C^a_P67 = /0 0.05 0.1 0 0.9 0 0.1 0/; C^a_S67 = /0.1 0 0.05 0 0.85 0 0.1 0/;
– for l_78: C^a_P78 = /0.85 0.05 0.1 0 0.05 0 0 0/; C^a_S78 = /0.95 0 0.05 0 0.1 0.05 0 0/.
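The matching of an analyzed string to a reference string is performed by the classification algorithm of [101]. As a rough, simplified stand-in for it, the sketch below thresholds the membership values at 0.5 and picks the nearest reference pattern by Hamming distance; this is our own simplification, not the actual algorithm.

```python
# Simplified stand-in for the classification of analyzed strings: threshold the
# membership values at 0.5 and select the nearest reference pattern.
J_P = [4, 3, 2, 1, -1, -2, -3, -4]   # impact values of O_P1..O_P8

# Fragment of the "pleasure" reference database: pattern -> total impact value
REF_P = {
    pattern: sum(j for j, bit in zip(J_P, pattern) if bit)
    for pattern in [
        (1, 0, 0, 0, 0, 0, 0, 0),   # C_P1
        (1, 1, 0, 0, 0, 0, 0, 0),   # C_P9
        (1, 0, 1, 0, 0, 0, 0, 1),   # C_P10
        (0, 0, 1, 0, 0, 0, 0, 1),   # C_P11
    ]
}

def classify(analyzed, reference_db):
    """Return the impact value of the reference pattern closest to the analyzed string."""
    binary = tuple(1 if v >= 0.5 else 0 for v in analyzed)
    nearest = min(reference_db, key=lambda ref: sum(a != b for a, b in zip(binary, ref)))
    return reference_db[nearest]

C_a_P12 = [0.8, 0.9, 0.1, 0.2, 0.1, 0.1, 0.05, 0.05]   # section l_12, "pleasure"
print(classify(C_a_P12, REF_P))   # 7, i.e. the string corresponds to C_P9
```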
After applying the classification algorithm [101] and the computer program [105], it was found that:
– The string C^a_P12 corresponds to the reference string C^э_P9 with the value J_P9 = 7;
– The string C^a_S12 corresponds to the reference string C^э_S9 with the value J_S9 = 5;
– The string C^a_P13 corresponds to the reference string C^э_P10 with the value J_P10 = 2;
– The string C^a_S13 corresponds to the reference string C^э_S10 with the value J_S10 = −2;
– The string C^a_P14 corresponds to the reference string C^э_P11 with the value J_P11 = −2;
– The string C^a_S14 corresponds to the reference string C^э_S11 with the value J_S11 = −4;
– The string C^a_P15 corresponds to the reference string C^э_P12 with the value J_P12 = 3;
– The string C^a_S15 corresponds to the reference string C^э_S12 with the value J_S12 = −1;
– The string C^a_P24 corresponds to the reference string C^э_P1 with the value J_P1 = 4;
– The string C^a_S24 corresponds to the reference string C^э_S1 with the value J_S1 = 4;
– The string C^a_P37 corresponds to the reference string C^э_P16 with the value J_P16 = −4;
– The string C^a_S37 corresponds to the reference string C^э_S14 with the value J_S14 = −2;
– The string C^a_P38 corresponds to the reference string C^э_P15 with the value J_P15 = 0;
– The string C^a_S38 corresponds to the reference string C^э_S15 with the value J_S15 = 0;
– The string C^a_P48 corresponds to the reference string C^э_P1 with the value J_P1 = 4;
– The string C^a_S48 corresponds to the reference string C^э_S1 with the value J_S1 = 4;
– The string C^a_P53 corresponds to the reference string C^э_P15 with the value J_P15 = 0;
– The string C^a_S53 corresponds to the reference string C^э_S15 with the value J_S15 = 0;
– The string C^a_P56 corresponds to the reference string C^э_P13 with the value J_P13 = −7;
– The string C^a_S56 corresponds to the reference string C^э_S13 with the value J_S13 = 3;
– The string C^a_P57 corresponds to the reference string C^э_P14 with the value J_P14 = −2;
– The string C^a_S57 corresponds to the reference string C^э_S14 with the value J_S14 = −2;
– The string C^a_P67 corresponds to the reference string C^э_P5 with the value J_P5 = −1;
– The string C^a_S67 corresponds to the reference string C^э_S5 with the value J_S5 = −1;
– The string C^a_P78 corresponds to the reference string C^э_P1 with the value J_P1 = 4;
– The string C^a_S78 corresponds to the reference string C^э_S1 with the value J_S1 = 4.
Now the emotion strengths J_P(M_v) "pleasure" and J_S(M_v) "self-preservation" can be calculated for all analyzed routes. For example, for route M1 (1–2–4–8) we get:

J_P(M1) = J_P9 + J_P7 + J_P1 = 7 − 3 + 4 = 8; J_S(M1) = J_S9 + J_S13 + J_S6 = 5 + 3 − 1 = 7.

Table 2.3 presents the calculated values of the emotion strength evaluation functionals J_P(M_v) "pleasure" and J_S(M_v) "self-preservation" for each route. The analysis of the results shows that the highest value of the functional characterizing the strength of the emotion "pleasure" is obtained on the route M1 (J_P(M1) = 8), and the highest value of the functional characterizing the strength of the emotion "self-preservation" is also obtained on the route M1 (J_S(M1) = 7).
Table 2.2 shows the minimum and maximum values of the functional J_TR(M_v), which correspond to the functional J in Eqs. (2.37)–(2.40). Substituting into formula (2.40) the obtained values of the functionals from Tables 2.2 and 2.3, and taking into account the coefficients k_P = k_S = 0.1, it is possible to calculate the minimum and maximum values of the functional taking into account the strengths of emotions. For example, for route M1:
Table 2.3 The calculated values of the emotion strength evaluation functionals J_P(M_v) "pleasure" and J_S(M_v) "self-preservation"

№ | Route M_v (with intersection numbers) | J_P(M_v) | J_S(M_v)
1 | M1 (1–2–4–8) | 8 | 7
2 | M2 (1–4–8) | 6 | −4
3 | M3 (1–3–8) | 1 | 6
4 | M4 (1–5–3–8) | −5 | 3
5 | M5 (1–3–7–8) | 0 | 5
6 | M6 (1–5–7–8) | 1 | −2
7 | M7 (1–5–6–7–8) | 2 | −3
min J(M1) = 2.61 − 0.8 − 0.7 = 1.11; max J(M1) = 2.97 − 0.8 − 0.7 = 1.47.

Table 2.4 shows the minimum and maximum values of the functional taking into account the strengths of the emotions J_P(M_v) "pleasure" and J_S(M_v) "self-preservation", as well as the average value of the functional J*(M_v), for all analyzed routes. The average value of the functional is calculated by the following formula:

J*(M_v) = (min J*(M_v) + max J*(M_v)) / 2.   (2.45)
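The worked example for M1 suggests that the functional (2.40) is evaluated here as J(M_v) = J_TR(M_v) − k_P·J_P(M_v) − k_S·J_S(M_v). Under that assumption, the Table 2.4 values can be reproduced as in the following sketch (our own illustration, not the authors' code):

```python
# Emotion-adjusted route functional, assuming (as the M1 example suggests)
# J(M_v) = J_TR(M_v) - k_P * J_P(M_v) - k_S * J_S(M_v), with k_P = k_S = 0.1.
k_P = k_S = 0.1

routes = {  # route: ((min J_TR, max J_TR) from Table 2.2, (J_P, J_S) from Table 2.3)
    "M1": ((2.61, 2.97), (8, 7)),
    "M3": ((2.34, 2.55), (1, 6)),
}

for name, ((jtr_min, jtr_max), (jp, js)) in routes.items():
    j_min = jtr_min - k_P * jp - k_S * js
    j_max = jtr_max - k_P * jp - k_S * js
    j_mean = (j_min + j_max) / 2                      # formula (2.45)
    print(name, round(j_min, 2), round(j_max, 2), round(j_mean, 2))
# M1 -> 1.11, 1.47, 1.29 as in Table 2.4; for M3 the mean comes out at about 1.74,
# which Table 2.4 lists as 1.75 (rounded half up).
```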
Table 2.4 shows that the route M1 (1–2–4–8) is optimal: of all the routes listed in Table 2.4, it has the minimum value of the functional, min J*(M_v) = 1.11, and the minimum mean value of the functional, J*(M_v) = 1.29.

Table 2.4 The minimum and maximum values of the functional, taking into account the strengths of the emotions J_P(M_v) "pleasure" and J_S(M_v) "self-preservation", and the average value of the functional J*(M_v)

№ | Route M_v (with intersection numbers) | min J*(M_v) | max J*(M_v) | mean J*(M_v)
1 | M1 (1–2–4–8) | 1.11 | 1.47 | 1.29
2 | M2 (1–4–8) | 2.47 | 2.87 | 2.67
3 | M3 (1–3–8) | 1.64 | 1.85 | 1.75
4 | M4 (1–5–3–8) | 3.11 | 3.65 | 3.38
5 | M5 (1–3–7–8) | 3.04 | 3.58 | 3.31
6 | M6 (1–5–7–8) | 3.75 | 4.31 | 4.03
7 | M7 (1–5–6–7–8) | 3.64 | 4.19 | 3.92
Therefore, taking into account the influence of emotions, the best route is M1 (1–2–4–8) rather than M3 (1–3–8), which confirms the expediency of taking emotions into account when a routing robot interacts with a person.
2.10 The Influence of Temperament on Decision Making

Advances in the development of intelligent robots expand the scope of their activities related to performing technological operations in a team with a person. Recently, so-called social robotics has been developing rapidly. Androids are used at airports for transporting passengers' luggage or for patrolling, and robot nurses are used in hospitals. At the same time, the intended functions of such robots should be much broader than just performing mechanical operations. For example, a robot caring for an elderly patient cannot simply insist that the patient take the right medicine at the right time [106]. Sooner or later, due to the peculiarities of the human psyche, a conflict situation will arise, numerous variants of which occur when ordinary doctors and nurses communicate with patients. A competent, psychologically balanced doctor or nurse is able to defuse such a conflict. When this place is occupied by a robot, the situation is complicated by the perception of it as a mechanical device that can be ignored. This simple example demonstrates the existing problems of psychological interaction between a robot and a human. Another example is the well-known case of the HitchBOT robot, which was created as a cute hitchhiker able to maintain a simple conversation and take photographs of its journey [106]. Many followed its wanderings around the world, but the journey of HitchBOT after Canada, Holland and Germany ended in Philadelphia, where an unknown person smashed the robot for no reason and with extreme cruelty [107]. These examples show the relevance of developing new methods for forming specific properties of an IR that allow it to take into account the psyche of the person with whom it interacts and to adapt to the peculiarities of that person's behavior. One of the most important factors here is the person's temperament. To take it into account, it is necessary, first of all, to analyze the properties of human temperament and the existing methods for detecting it.
2.10.1 Types of Classification of Human Temperament

At the moment, the main efforts to adapt the interaction of an IR with a person are aimed at recognizing the state of the human psyche and imitating its manifestations in the form of emotions and temperament. Considering the process of emotional interaction, two tasks can be distinguished. The first task is to recognize the human temperament, which is necessary for a correct mental response of the robot. The solution to this problem should be based on knowledge of the properties
of a person’s temperament and on numerous experiments in teaching IR on examples of interaction. The second task is the formation of an IR emotional response, that is, the adaptation of the robot to the human temperament, which is much more difficult. A person’s temperament is a set of innate properties of the psyche. It serves as the basis for character formation. Temperament is based on the type of the higher nervous system. People (as well as animals) differ from each other from birth [108]: by the strength of the processes of excitation and inhibition; by the balance of these processes; by the mobility (changeability) of the processes of excitation and inhibition. Together, this determines the endurance of the psyche. The Russian scientist Ivan Petrovich Pavlov showed that with the predominance of excitation over inhibition, conditioned reflexes are formed quickly and slowly subside, and with the opposite ratio, they are formed slowly and quickly fade away. Therefore, it depends on the temperament: the rate of occurrence of mental activity; stability of mental processes; mental pace and rhythm; intensity of mental processes; orientation of mental activity. Temperament provides an individual style of activity, that is, methods of work characteristic of a particular person. So, for example, one child, when solving a problem, can sit idle for a long time, think about it and immediately write down the result, and the other will immediately start writing something down, sketching, crossing out and after a while will isolate the main thing from it. The same result— different ways to achieve. The properties of temperament manifest themselves depending on the situation and specific conditions. Therefore, people of different temperaments can act in completely different ways in identical situations. In psychology, it is customary to distinguish the following properties of temperament [109–111]. Sensitivity is an assessment that characterizes the minimum strength of irritation from the outside to start reactions in the psyche of an individual. Reactivity is an assessment that characterizes the strength and speed of response to an unexpected stimulus. For example, a reaction to light, a loud sound, an unexpected action. The distractibility of a person and the possibility of concentration depend on reactivity. Activity (passivity) is an assessment that characterizes the degree of influence of temperament on the stimuli surrounding it. That is, this is the speed with which a person can influence circumstances, obstacles that prevent him from achieving his goal. Activity follows from the ratio of a person’s orientation to the outside world and focus on their goals, desires, needs, and beliefs. Plasticity (rigidity) is an assessment that characterizes the speed of a person’s adaptation to changes in the external environment. Plasticity is a good ability to adapt; rigidity is the impossibility, difficulty in changing beliefs, views, and interests. Extraversion (introversion) is an assessment that characterizes a person’s orientation to the external world or the internal one (orientation of vital energy). The second interpretation: a person’s orientation to the present external (extraversion) or figurative past or future (introversion).
The excitability of emotions is an assessment that determines the speed of emotional response to a minimal external stimulus (the minimum force at which an emotional reaction occurs).
The rate of reactions (duration) is the speed of mental processes and reactions, for example, the speed of reaction, the pace of speech, the speed of the mind.
Each person has characteristic temperament properties that the robot must take into account when interacting with him. The analysis of these properties as a result of tests allows the temperament of the person with whom the robot interacts to be assessed. In this case, it is necessary, first of all, to determine belonging to one of the 4 types of temperament, or types of the nervous system (according to its properties), distinguished in psychology. There are several typologies by which the type of temperament is determined.
According to the Processes of Excitation-Inhibition
1. Sanguine
This is a strong, mobile and balanced type. It is characterized by a rapid process of excitation and its rapid change to inhibition. A person with this type of psyche is distinguished by love of life, activity, sociability and responsiveness. He is not inclined to worry, easily adapts to new conditions, and strives for leadership. A sanguine person is successful in work, friendship and love. He easily switches from one thing to another and changes hobbies with the same ease. However, without external stimuli he begins to get bored and becomes lethargic. He is always distinguished by a certain superficiality in the perception of people and phenomena, which sometimes causes difficulties in interpersonal relationships, but a sanguine person copes with them easily. The feelings and emotions of a sanguine person are vivid but unstable. He often laughs loudly, but also gets angry over trifles. A sanguine person is resourceful and agile, can control his emotions and, as a rule, speaks quickly.
2. Phlegmatic
This is a strong, inert and balanced type. Conditioned reflexes are developed slowly, but once formed they are very stable. A person of this type is always passive, cautious and reasonable, sometimes to the point of "boredom and nausea". At the same time, he is peaceful and friendly. It is easy to manage and control his actions. A phlegmatic person is not distinguished by emotionality and sensitivity, but one can always rely on him. He has great perseverance, self-control, patience and high working capacity, but is slow. He is stable in relationships and not prone to change, and shows good resistance to prolonged negative external stimuli. The phlegmatic self-control and composure sometimes turn into indifference to oneself, to others and to work.
3. Choleric
This is a strong, mobile and unbalanced type. The processes of excitation prevail over the processes of inhibition. It is an easily excitable, aggressive and restless type. A choleric person is characterized by variability, inconstancy, impulsivity, activity and optimism. Along
with great vital energy, one can note intemperance and sharpness of movements and actions, loudness, a low level of self-control, impatience and frequent sharp mood swings. A choleric person is characterized by expressive facial expressions, rapid speech and rapid movements.
4. Melancholic
This is a weak, inert (or mobile) and unbalanced type. He has a pessimistic attitude and a tendency to anxiety and rumination. He is reserved and unsociable, easily hurt, emotional and highly sensitive. He has weak resistance to external stimuli and is inhibited and passive. A melancholic person, as a rule, is not confident, is timid, fearful and touchy, but has a very developed inner world and associative thinking. A melancholic person is not distinguished by expressive facial expressions and movements and does not adapt well to new conditions. He is characterized by quiet speech, weak attention and fatigue.
The use of this classification of human temperament by a robot is useful, as it facilitates the adaptation of the robot to a given type of person. However, in practice, people rarely behave in all circumstances in exact accordance with a specific type of psyche.
Galen's Typology
The Roman philosopher and physician Galen also identified 4 types of temperament, but he focused on feelings. This typology does not contradict the others; on the contrary, it complements them. However, its use by a robot to evaluate a person is problematic.
1. Sanguine
This is a person who yields easily to feelings but quickly cools down; he strives for pleasure and is gullible and trusting.
2. Choleric
This is a man of passions. He is characterized by pride, vindictiveness and ambition.
3. Phlegmatic
This is a person who is resistant to the influence of the senses. He does not complain and is not indignant; he is irritated only with great difficulty.
4. Melancholic
This is a person whose main characteristic is sadness. Any suffering seems unbearable to him, and his desires are tinged with sadness. He often thinks that he is being neglected and takes offence over trifles.
According to the Ratio of Signal Systems
A signal system is usually understood as a set of mental processes responsible for perception, information analysis and response. A person has two such systems:
– the first signal system (assimilation of information through the activity of the cerebral cortex, through receptors);
– the second signal system (everything related to speech and the word).
According to the ratio of the signal systems, 3 types of people (temperaments) can be distinguished.
1. The Artist
The first signal system prevails.
2. The Thinker
The second signal system prevails.
3. Mixed Type
Both systems have approximately equal influence.
This classification is considered relative, since the expression of the systems depends on the specific type of activity, which makes it difficult to use in robots.
Constitutional Theory of E. Kretschmer
The connection between constitutional features and the human psyche was already noticed by Hippocrates, who described two types of people:
– habitus apoplecticus—dense, muscular, strong;
– habitus phitisicus—thin, elegant, weak.
Currently, there are more than 20 classifications of body types. The German psychiatrist and psychologist Kretschmer compiled a typology of temperaments depending on a person's physique [112]. Kretschmer suggested that by observing a person's behavior and evaluating his physique one can assess what type of temperament he belongs to; this can be used to establish contact between a robot and a person. He identifies the following types.
1. Schizothymic (asthenic type)
The asthenic body type is characterized by the predominance of the longitudinal dimensions of the body. Asthenics have a narrow face, a long and thin neck, a long and flat chest, a small stomach, thin limbs, underdeveloped muscles, and thin pale skin. This is a weak-willed and closed type, prone to emotional swings; a gentle person, a dreamer and an idealist. At the same time, the schizothymic is stubborn and selfish, prone to abstract reflection.
2. Cyclothymic (picnic type)
The picnic body type is characterized by medium or increased fat deposition, well-developed muscles, a shortened neck and limbs, relatively wide rounded shoulders, a cylindrical or conical chest, a rounded abdomen, a wide pelvis with characteristic fat
deposition, and rounded hips. Like the first type, it is distinguished by emotional swings. This is a cheerful talker and humorist, a realist and a good conversationalist.
3. Ixothymic (athletic type)
The athletic type is characterized by tall stature (especially noticeable in women), strong musculature and bone structure, weak fat deposition, a conical or cylindrical chest, a straight abdomen and, in women, a male type of tertiary hair, pelvis and facial features. This type is not distinguished by flexibility of thought; he is calm and unimpressionable, his gestures and facial expressions are restrained, and he has difficulty adapting to new conditions.
This classification is easily implemented in the robot's central nervous system by image recognition methods. However, it is also too generalized and does not characterize the mental features of human behavior in various circumstances.
Modern concepts of temperament [113] try to unite groups of theories of temperament: humoral-endocrine, constitutional and neurodynamic. For example, in V. M. Rusalov's theory of individuality, it is proposed to consider the general constitution as a mechanism for the formation of temperament, which is defined as a multilevel, integral characteristic including biochemical, anatomical, morphological and neurophysiological levels [114].
Having analyzed the existing types of classifications of human temperament described above, we can recommend the following three types for use in intelligent robots:
– constitutional, based on the analysis of human images;
– dialog-based, based on the analysis of a person's responses to questionnaire items;
– behavioral, based on the analysis of the characteristics of human behavior in various situations.
Since in practice a person's temperament often fluctuates as various life situations arise, it is difficult to take these fluctuations into account when evaluating a person using any single one of the listed types of classification. Therefore, it is necessary to provide the robot with the ability to determine the degree of belonging of a person to a particular type of temperament after using all three types of classification. Then, if all three types of classification give the same result (for example, a sanguine person), the degree of belonging to the identified type will be μ(sanguine) = μ(c) = 1 in terms of linguistic variables [28]. If all three give different results (which is unlikely), then we get, for example:

μ(sanguine) = μ(choleric) = μ(phlegmatic) = μ(c) = μ(x) = μ(f) = 1/3.   (2.46)

It is most likely that two types of classification will give the same result and the third will differ. Then, for example, we get μ(c) = 2/3 and μ(x) = 1/3. In the case when all three types of classification give different results, the task of assessing temperament becomes uncertain. One of the promising approaches to solving this problem may be the use of logical-probabilistic and logical–linguistic methods of analyzing the decision-making environment, which make it possible to adjust the classification results.
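A minimal sketch of the membership-degree estimate (2.46), assuming simple vote counting over the three classification types (constitutional, dialog-based, behavioral); the function name is ours:

```python
from collections import Counter

def membership(votes):
    """Degrees of belonging as the fraction of classification methods voting for each type,
    e.g. ['sanguine', 'sanguine', 'choleric'] -> {'sanguine': 2/3, 'choleric': 1/3}."""
    counts = Counter(votes)
    return {t: n / len(votes) for t, n in counts.items()}

print(membership(["sanguine", "sanguine", "sanguine"]))    # mu(c) = 1
print(membership(["sanguine", "sanguine", "choleric"]))     # mu(c) = 2/3, mu(x) = 1/3
print(membership(["sanguine", "choleric", "phlegmatic"]))   # all 1/3: the uncertain case of (2.46)
```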
2.10.2 Methods of Temperament Diagnostics

Currently, there are a number of methods for diagnosing temperament.
1. The Eysenck test is the most popular technique; it determines the type of temperament on two scales: stability–instability and introversion–extroversion. It allows the severity of each type of temperament and the nature of a mixed temperament to be determined. The Eysenck Personality Inventory (EPI) was published in 1963 and contains 57 questions, 24 of which are aimed at identifying extroversion–introversion, another 24 at assessing emotional stability or instability (neuroticism), while the remaining 9 constitute a control group of questions designed to assess the sincerity of the subject, his attitude to the survey and the reliability of the results [115]. The test is easily implemented in the robot's central nervous system by using an expert system and a monitor dialog box or, somewhat more complicated, a speech dialogue. However, it has poor sensitivity to threshold selection situations, which makes it difficult to create high-quality expert systems. In the case of ambiguous answers to the questions, an assessment of the degree of belonging to a particular type of temperament can be introduced, depending on the number of positive unambiguous answers.
2. Another popular technique is Belov's formula. This questionnaire is smaller than the previous one. It characterizes only the temperaments (without scales), but gives the value and percentage of each type in a person. The Belov formula is a questionnaire [116] that has 4 groups of questions corresponding to the number of temperament types determined (choleric, sanguine, phlegmatic and melancholic). Each group contains 20 questions, for example: "if you are restless or fussy and unrestrained, hot-tempered and ... and ..., then you are a pure choleric." If not all questions are answered positively, the degree of belonging to one or another type of temperament can be introduced, depending on the percentage of positive answers to the 20 questions. This test is good for human introspection, but its implementation in the central nervous system of the robot in the form of an expert system requires a significant correction of the questionnaire and the introduction of rules for assessing the degree of belonging.
3. Mendelevich V. D. proposed a set of the following parameters with which the diagnosis of temperament is possible [117]:
– Emotionality, which is one of the important diagnostic parameters of the clinical assessment of temperament types. This parameter includes: the rate of occurrence of an emotional reaction, affect or experience after the onset of an irritant or the appearance of a significant situation; the prevailing modality of emotions; the prevailing and typical degree of severity of emotional experiences; the temporal characteristics of emotional
experiences (duration, stability, lability); and the severity and direction of the vegetative reinforcement of emotional experience. Any pronounced emotional experience is accompanied by vegetative manifestations: palpitations (rapid or reduced heartbeat), changes in respiration and thermoregulation, features of perspiration and salivation, dryness or moistness, redness or paleness of the skin, acceleration or deceleration of peristalsis, fluctuations in blood pressure, etc. Outwardly, emotionality is manifested by such alternative qualities as impressionability versus emotional coldness (differing in the depth of experience), emotional excitability versus emotional non-excitability (differing in the rate of appearance of affect), and emotional stability versus emotional lability (differing in the duration of preservation of an emotional experience of one modality).
– The speed of thinking. This parameter is evaluated on the basis of: the speed of the appearance of associations ("speed of mind"), which can be judged by the speed of answers to questions or tasks; and speech speed (style of speech), which can be judged by the speed of pronouncing words and phrases. There are people whose thinking speed is high ("fast-thinking") and low ("slow-thinking"). These peculiarities of thinking concern only a quantitative indicator; qualitative indicators (purposefulness, productivity, etc.) are not taken into account when assessing the types of temperament.
– The speed of motor acts. This reflects sensorimotor reactivity, characterized by the speed of the appearance of a response to a stimulus. On the basis of sensorimotor reactivity, individuals with average indicators are distinguished, as well as those who exceed or fall below them. Clinically, this is manifested by speed and "nimbleness" or, on the contrary, by slowness when walking, running and performing everyday actions such as washing and dressing.
– Sociability. This is understood as the expression of a subjective or objective orientation towards communication. Sociability can be considered a borderline phenomenon of psychic individuality: on the one hand, it enters into the structure of temperament and is biologically mediated; on the other hand, the nature of upbringing affects the process of its formation. According to this parameter, sociable (extroverted) and closed (introverted) people are distinguished.
This method of temperament diagnostics takes into account fluctuations of temperament properties depending on the circumstances of a person's decision making and allows a more accurate assessment of a person, but its implementation in the robot's central nervous system is complex, not entirely clear, and requires the development of fast dialog algorithms for analyzing large volumes of logical-probabilistic and logical–linguistic expressions.
4. A. Thomas and S. Chess proposed the following classification of the results of observing behavior [118]:
– The level of activity, determined by motor characteristics (mobility during bathing and feeding) and the ratio of active and passive behavior during the day.
– Rhythmicity (regularity), estimated as the degree of predictability of the time of occurrence of behavioral reactions (for example, feelings of hunger) and of the duration of functions in time (for example, sleep duration).
– Approach or withdrawal, associated with the characteristics of emotional and motor reactions to new stimuli.
– Adaptability, assessed on the basis of the response to new or changing situations.
– Intensity, characterized by the severity of reactions regardless of their quality or orientation.
– The reactivity threshold, defined as the level of stimulation necessary for the appearance of reactions, regardless of their quality and sensory modality.
– Mood, i.e. the ratio of joyful and joyless states, as well as reactions qualified as a disposition towards others.
– Distractibility, characterized by the effectiveness with which newly emerging stimuli change behavior.
– Duration of attention and perseverance, two interrelated categories reflecting the time duration of an activity and the ability to continue the activity despite difficulties in its implementation.
This classification method requires long-term observation of a person and time-consuming processing of the results, which makes it unsuitable for implementation in human–machine systems (HMS) when performing technological operations. It is also known that temperament is the primary basis that determines individual behavior characteristics from the very first months of life and proves stable over many years or throughout life as a whole. The stability of the manifestation of temperament traits is confirmed by the longitudinal studies of the American psychologists A. Thomas and S. Chess, who observed the manifestations of temperament in the same children in infancy and at 5 and 10 years of age. The scientists identified temperament parameters (9 components) in newborns whose temperament manifests itself in "pure form". This information about the person with whom the robot is supposed to interact can be entered into the memory of the IR when it is included in the HMS, which will speed up decision making in difficult situations.
Summarizing the data on clinical methods for detecting human temperament, it can be argued that all of them: require constant observation during the interaction of the robot with a person; reflect only the quantitative side of mental activity; do not carry a semantic load, since they are a biological product; and are non-motivational, i.e. there is no incentive basis in the temperament itself that significantly affects the behavior of a particular person in various decision making situations. Therefore, after analyzing the various methods for determining a person's temperament, it can be concluded that a person cannot be unambiguously attributed to a certain type of temperament, especially if the analysis is limited to the generally accepted 4 types (choleric, etc.). Obviously, it is necessary to expand the list of temperament types and to adjust the degree of belonging to one or another type on the basis of logical-probabilistic and logical–linguistic analysis of the results of interaction in various situations. At the same time, there should not be infinitely many correction rules.
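As a rough illustration of the questionnaire-based estimate of the degree of belonging, in the spirit of the Belov formula described above (the share of positive answers to the 20 questions of each type), consider the following sketch; the answer arrays are hypothetical, not real questionnaire items:

```python
# Degree of belonging to each temperament as the normalized share of positive
# answers, in the spirit of the Belov formula (hypothetical answers).
def belov_degrees(answers_by_type):
    """answers_by_type: {'choleric': [True, False, ...], ...} -> normalized degrees."""
    positives = {t: sum(a) for t, a in answers_by_type.items()}
    total = sum(positives.values()) or 1
    return {t: round(p / total, 2) for t, p in positives.items()}

answers = {
    "choleric":    [True] * 14 + [False] * 6,
    "sanguine":    [True] * 10 + [False] * 10,
    "phlegmatic":  [True] * 4 + [False] * 16,
    "melancholic": [True] * 2 + [False] * 18,
}
print(belov_degrees(answers))   # {'choleric': 0.47, 'sanguine': 0.33, ...}
```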
2.10.3 Adaptation of SEMS to Human Temperament in Human–Machine Systems

Adaptation of the behavior of a SEMS interacting with a person, taking into account the current decision-making situation, is possible after assessing the degrees of belonging of the temperament of the person with whom it interacts. The adaptation consists in a gradual increase in the emotional saturation of the SEMS behavior, starting from zero, analysis of the human response to the robot's actions, and correction of behavior at the level of emotions and actions. In addition, when adapting the behavior of a SEMS interacting with a person, it is advisable to take into account the recommendations of psychologists on interaction with people of different temperaments [117]. The process of adaptation of the SEMS is significantly complicated by the fact that the estimates of human temperament are predictive in nature and are associated with the uncertainty of the influence of many factors in a highly uncertain environment, which do not lend themselves to accurate mathematical description. However, despite the existing technical difficulties of taking into account the factors affecting the adaptation of robot behavior to the assessed human temperament, the correction of robot behavior can be chosen by comparison with reference models of behavior from a database, based on logical–linguistic and statistical analysis of a large volume of observational data on HMS behavior in various decision-making situations. The following recommendations (algorithms) can be used to correct the behavior of the IR.

Algorithm 1 When interacting with a choleric person, the IR should not react instantly to the person's behavior, for example, it should not immediately execute his commands. First, the IR needs to assess the current situation and classify it within a certain period of time τ, that is, to attribute the situation to some reference situation, retrieve from memory the type of correct (reference) human behavior in a similar situation, and then execute the command or not. Further, if the IR decides to reject a person's command, for example, in the case of a sharply negative human reaction, then before continuing the interaction the IR must assess the possibility of situations that are individually dangerous for the person. If the probability of such individually dangerous situations is high, the IR must inform the person about this and offer options for human behavior that allow such situations to be avoided. If the person does not follow the recommendations, the IR should repeat its recommendations for correcting human behavior. If the person still does not follow the recommendations, the IR should not insist on implementing them and should try to assess the degree of risk to the person. If the degree of risk is less than acceptable, the IR should stop interacting with the person for a while.

Algorithm 2 Before executing a person's commands when interacting with a sanguine person, the IR should ask for a more detailed explanation of the purpose of the operation, and when formulating recommendations for correcting human behavior, it should explain the purpose of the correction in detail. Next, the IR should respond
to the person’s objections to the correction of behavior and not insist, but simplify or cancel the recommendations. If errors are detected in a person’s commands, the IR must inform the person about this and offer to correct the behavior (task). When forming recommendations for correcting human behavior, the IR should explain that mistakes in human commands were random and easily correctable. Then, when forming corrections in behavior and recommendations, the IR should observe the facial expressions of the sanguine person and choose the type of recommendations that the facial expressions will correspond to, meaning satisfaction from interaction with the IR. At the same time, the IR should avoid monotonous recommendations; periodically change the wording without changing the direction of the recommendations. If a person does not comply with the proposed recommendations, the IR must enter into a dialogue with the person and request advice or opinion and correct his behavior and give a recommendation to the person on joint optimization of behavior to solve the current problem. Algorithm 3 When interacting with a melancholic, the IR should avoid harsh loud commands that correct human behavior. In case of non-fulfillment or incorrect execution of commands, the IR should not give a negative assessment of his behavior in a dialogue with a person, but in a mild, non-accusatory form should ask the person to correct his behavior in order to execute commands and try to convince the person that such a correction of the state of the IR will ensure a more optimal performance of the current task. At the same time, the IR should avoid too negative assessments of non-fulfillment of human behavior corrections and should give some time to the person so that he can slowly correct behavior, listening to intuitive reactions. Algorithm 4 When interacting with a phlegmatic person, the IR should give the person a lot of time to make a decision about correcting behavior. At the same time, the IR must wait for the emotional response of the person. After waiting for the person to correct his behavior, the IR must approve the correction, and in case of additional correction, the IR must give clearer and more detailed instructions for correction.
2.10.4 Correction of the Structure of the Central Nervous System of SEMS

Endowing an intelligent robot with the ability to assess the temperament of the person with whom it interacts is possible during the formation of its central nervous system. This is especially relevant in shaping the behavior of teams of SEMS and human operators jointly performing complex technological operations. To provide the CNS of SEMS with the ability to assess a person's temperament, in the structure proposed by us in [44] (see Fig. 2.2) it is necessary to supplement block 4 of fuzzification, recognition and decision making with new elements, for example, expert systems that make it possible, through dialogue with a person, to assess the degree of belonging of
Fig. 2.5 The structure of block 4 taking into account the temperament of the IR. 4.1—perception; 4.2—attention; 4.3—memory; 4.4—thinking; 4.5—emotions classification system; 4.6— evaluation of emotions; 4.7—temperament classification system; 4.8—temperament assessment; 4.9—supervisor; 4.10—decision making; 4.11—language; 4.12—speech; 4.13—human–machine interface
the latter to various types of temperament and, based on logical-probabilistic and logical–linguistic analysis of the decision-making conditions in collective interaction, to correct the behavior of the SEMS. In this case, the structure of block 4 can be as shown in Fig. 2.5.
In block 4.1, images should be formed with which attention (block 4.2), memory (block 4.3), thinking (block 4.4), the emotions classification system (block 4.5) and temperament assessment (block 4.7) will subsequently operate. Blocks 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.9, 4.10, 4.11, 4.12 and 4.13 perform the same functions as in the structure of block 4 taking into account the emotions of the IR shown in Fig. 2.3. Block 4.7 should classify the type of temperament of the person with whom the IR interacts, taking into account the information about the external and internal situations important for the life of the IR that is perceived in block 8 and evaluated in blocks 9 and 11, and based on the analysis of the dialogue with the person conducted through blocks 4.10, 4.11 and 4.13. Block 4.8 of temperament assessment and correction should form those features of the mental processes in the central nervous system that affect the speed of recall and the strength of memorization, the fluency of mental operations, and the stability and switchability of attention. It determines the type of nervous system and the dynamics of decision making. When the IR interacts with a person, and especially with a team, it is necessary to endow the IR with the ability to correct its behavior for the type of human temperament determined by block 4.7.
The work of blocks 4.7 and 4.8 in assessing and correcting temperament is as follows. Block 4.7 starts the process of determining the type of temperament. First, block 4.7 classifies the person's temperament by the constitutional
method based on the analysis of human images selected by block 4.2 from the images generated by block 4.1 and recorded in the memory of block 4.3. The temperament assessment obtained in block 4.7 is transmitted to block 4.8. Then block 4.7 classifies the person's temperament by the dialog method, based on the analysis of the person's answers to questions asked by the expert system of block 4.7 through blocks 4.11–4.13. The temperament assessment obtained in block 4.7 is again transmitted to block 4.8. Further, if the time allotted for identifying the person's temperament allows, block 4.7 classifies the person's temperament by the behavioral method, based on an analysis of the characteristics of human behavior in various previously observed situations that are stored in the memory of block 4.3. In addition, block 4.7 can provoke certain human reactions through dialogue via blocks 4.11–4.13 and analyze them using block 4.4. The temperament assessment obtained by this method in block 4.7 is also transmitted to block 4.8. Block 4.8, comparing the results of the work of block 4.7, determines the degrees of belonging of the types of temperament identified in the three ways, i.e. the values of the membership functions μ(c), μ(x), μ(f) and μ(m). It then selects the type of temperament with the largest value of the membership function and transfers it to the memory of block 4.3. Block 4.4 corrects the behavior of the IR formed in block 4.7 in accordance with the instructions stored in the memory of block 4.3 and corresponding to the type of temperament recorded in the memory of block 4.3.
2.10.5 Example of Taking into Account the Influence of Temperament in Human–Machine Systems

As an example of taking temperament into account, let us consider driving a car along a given route by drivers of different temperaments with a robot assistant (navigator), in the minimum time, without an accident and without exceeding the maximum speed. The highway S contains several sections s_i. Each s_i contains straight sections s_ij of length l_ij and turns with radii R_ij and angles of rotation ψ_ij. Moreover, each s_ij can have a different coefficient of adhesion to the road ϕ_ij. Taking into account the dynamics of the car and the trouble-free passage of the route S, in each section of movement S_i four (j = 4) special segments of movement s_ij can be distinguished: s_i1—acceleration, s_i2—uniform movement, s_i3—braking and s_i4—turning.
When calculating the time t_s1 and the path l_s1 on the acceleration segment s_i1, it is necessary to take into account that:
– The car cannot accelerate to a speed V_s1 greater than the speed V_m permissible under road conditions (V_s1 = V_m);
– The maximum acceleration during acceleration a_u is determined by the technical parameters of the car and is usually calculated from the condition of the acceleration time t_p to a given speed V_p (a_u = V_p/t_p);
– When taking temperament into account, the estimate of the acceleration during acceleration changes, taking into account the reaction time of the choleric driver t_vx, the sanguine driver t_vc, the melancholic driver t_vm or the phlegmatic driver t_vf, as well as the time of issuing the command by the navigator robot for the choleric t_px, sanguine t_pc, melancholic t_pm or phlegmatic t_pf driver. For example:

a_ux = V_p/(t_p + t_vx + t_px); a_uc = V_p/(t_p + t_vc + t_pc);
a_um = V_p/(t_p + t_vm + t_pm); a_uf = V_p/(t_p + t_vf + t_pf);

– The length of the path l_s1 on the acceleration segment s_i1 cannot exceed the length of the path l_s4 on the turn segment s_i4 (l_s1 ≤ l_s4);
– When calculating the time t_s2 and the path l_s2 on the segment s_i2 of uniform motion, the inequality l_s2 ≤ (S_i − l_s1 − l_s3 − l_s4) must be satisfied, where l_s3 and l_s4 are the lengths of the segments s_i3 and s_i4.
When calculating the time t_s3 and the path l_s3 on the braking segment s_i3, it is necessary to take into account that:
– The speed V_i3 = V_i4 at the end of the path l_s3, in order to prevent an accident, cannot be greater than the critical speed V_ik for the turn segment s_i4 (V_i4 ≤ V_ik);
– The maximum acceleration during braking a_T is determined by the technical parameters of the vehicle and the road conditions and is usually calculated from the condition of the braking time t_T from the initial speed V_T0 to 0 (a_T = V_T0/t_T) or from the braking distance S_T (a_T = V²_T0/2S_T). When taking temperament into account, the estimate of the acceleration during braking changes, taking into account the reaction time of the choleric driver t_vx, the sanguine driver t_vc, the melancholic driver t_vm or the phlegmatic driver t_vf, as well as the time of issuing commands by the navigator robot for the choleric t_px, sanguine t_pc, melancholic t_pm or phlegmatic t_pf driver. For example:

a_Tx = V_T0/(t_T + t_vx + t_px); a_Tc = V_T0/(t_T + t_vc + t_pc);
a_Tm = V_T0/(t_T + t_vm + t_pm); a_Tf = V_T0/(t_T + t_vf + t_pf);

– The inequality l_s3 + l_s2 ≤ S_i − l_s1 − l_s4 must be satisfied.
When calculating the time t_s4 and the path l_s4 on the turn segment s_i4, it is necessary to take into account that:
– The speed V_i4 on the segment s_i4, in order to prevent an accident, cannot be greater than the critical speed V_ik (V_i4 ≤ V_ik), where

V_ik = 3.6·√(g·R_i4·ϕ_i4),   (2.47)
where g is the acceleration of gravity, R_i4 is the turning radius and ϕ_i4 is the coefficient of adhesion to the road on the segment s_i4;
– The path l_s4 on the turn segment s_i4 can be calculated by the formula

l_s4 = (π·R_i4·ψ_i4)/180°,   (2.48)
where ψ_i4 is the angle of rotation on the segment s_i4.
In addition, it can be taken into account that the braking distance on a dry road can be approximately calculated by the formula S_T = (0.1·V_T0)², and on a wet road by the formula S_T = (0.1·V_T0)² + 0.5·(0.1·V_T0)². You can also use the comparative Table 2.5 of the dependence of the braking distance on the quality of the road surface (for a passenger car whose tires have an average coefficient of adhesion).

Table 2.5 The dependence of the braking distance on the quality of the road surface

№ | Road surface | S_T at V_T0 = 17 m/s | S_T at V_T0 = 22 m/s | S_T at V_T0 = 25 m/s
1 | Dry asphalt | 20.2 m | 35.9 m | 45.5 m
2 | Wet asphalt | 35.4 m | 62.9 m | 79.7 m
3 | Snow-covered road | 70.8 m | 125.9 m | 159.4 m
4 | Black ice | 141.7 m | 251.9 m | 318.8 m

When calculating the critical speed V_ik, Table 2.6 can be used.
Table 2.6 The dependence of the coefficient of adhesion on the type of support surface

№ | Type of support surface | Coefficient of adhesion for high-pressure tires, ϕ | Coefficient of adhesion for low-pressure tires
1 | Dry asphalt | 0.7–0.8 | 0.7–0.8
2 | Dry, rolled dirt road | 0.6–0.7 | 0.4–0.6
3 | Dirty, wet dirt road | 0.1–0.3 | 0.15–0.25
4 | Loose, freshly poured soil | 0.3–0.4 | 0.4–0.6
5 | Packed, compacted soil | 0.4–0.6 | 0.5–0.7
6 | Wet sand | 0.3–0.6 | 0.4–0.5
7 | Dry sand | 0.25–0.3 | 0.2–0.4
8 | Loose snow | 0.15–0.2 | 0.2–0.4
9 | Rolled snow | 0.25–0.3 | 0.3–0.5
10 | Swamp | 0 | 0.1
11 | Concrete | 0.7–0.8 | 0.7–0.8
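For the route-passage calculations, formulas (2.47) and (2.48) can be evaluated directly. The helper names below are ours, and reading the 3.6 factor in (2.47) as a conversion to km/h is our assumption:

```python
import math

G = 9.8  # acceleration of gravity, m/s^2

def critical_speed(radius_m, adhesion):
    """V_ik = 3.6 * sqrt(g * R_i4 * phi_i4), Eq. (2.47); the 3.6 factor is read
    here as the conversion of sqrt(g*R*phi) from m/s to km/h."""
    return 3.6 * math.sqrt(G * radius_m * adhesion)

def turn_length(radius_m, angle_deg):
    """l_s4 = pi * R_i4 * psi_i4 / 180, Eq. (2.48)."""
    return math.pi * radius_m * angle_deg / 180.0

# A 90-degree turn of radius 8 m on a dirt road (phi = 0.5, Table 2.6)
print(critical_speed(8, 0.5))   # ~22.5 km/h
print(turn_length(8, 90))       # ~12.6 m
```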
In the computer modeling of the passage of a given route by a car with a robot navigator and drivers of different temperaments, the following data were used.
Vehicle parameters: length L = 6 m, width H = 3 m.
Maximum permissible vehicle speed: V = 60 km/h (16.7 m/s).
Traction on a dirt road: ϕ = 0.5 (see Table 2.6).
Acceleration of the vehicle: to V_p = 200 km/h (56 m/s) in the time t_p = 10 s; then a_u = V_p/t_p = 5.6 m/s².
In addition, according to Table 2.5:
– at the initial braking speed V_T0 = 28 m/s, the braking distance is S_T = 100 m;
– the minimum braking time is t_T = 2S_T/V_T0 = 7.14 s;
– the maximum braking acceleration is a_T = V_T0/t_T = 3.9 m/s².
The initial data for modeling the passage of the route by drivers with different temperaments were as follows.
The number of sections on the road: N = 16.
The number of segments in each road section: J = 4 (j = 1—acceleration, j = 2—uniform movement, j = 3—braking, j = 4—turning).
Lengths of all 16 road sections s_i: S1 = 1000 m; S2 = 1200 m; S3 = 1000 m; S4 = 1400 m; S5 = 1000 m; S6 = 1200 m; S7 = 1500 m; S8 = 1000 m; S9 = 1800 m; S10 = 1000 m; S11 = 2000 m; S12 = 2000 m; S13 = 1500 m; S14 = 1600 m; S15 = 1000 m; S16 = 1200 m.
Turning angles on all 16 road sections ψ_i: ψ1 = 90°; ψ2 = 80°; ψ3 = 120°; ψ4 = 80°; ψ5 = 60°; ψ6 = 95°; ψ7 = 90°; ψ8 = 60°; ψ9 = 60°; ψ10 = 90°; ψ11 = 90°; ψ12 = 90°; ψ13 = 85°; ψ14 = 85°; ψ15 = 90°; ψ16 = 0°.
Turning radii on all 16 road sections R_i: R1 = 2 m; R2 = 4 m; R3 = 8 m; R4 = 10 m; R5 = 8 m; R6 = 4 m; R7 = 2 m; R8 = 10 m; R9 = 4 m; R10 = 10 m; R11 = 2 m; R12 = 4 m; R13 = 6 m; R14 = 10 m; R15 = 8 m; R16 = 0 m.
Coefficients of adhesion to the road on all 16 sections ϕ_i: ϕ1 = ϕ2 = … = ϕ16 = 0.5.
Temperament types, k = 5: 1—choleric, 2—sanguine, 3—melancholic, 4—phlegmatic, 5—temperament not taken into account.
The minimum reaction time of the driver: t_v5 = 1 s.
The minimum reaction time of the navigator robot: t_p5 = 0.5 s.
The reaction time of the choleric driver: t_v1 = 2 s.
The reaction time of the sanguine driver: t_v2 = 4 s.
The reaction time of the melancholic driver: t_v3 = 6 s.
The reaction time of the phlegmatic driver: t_v4 = 8 s.
The time the robot takes to issue a command for the choleric driver: t_p1 = 4 s.
The time the robot takes to issue a command for the sanguine driver: t_p2 = 6 s.
The time the robot takes to issue a command for the melancholic driver: t_p3 = 8 s.
The time the robot takes to issue a command for the phlegmatic driver: t_p4 = 10 s.
Simulation Algorithm
1. Setting the initial conditions.
2. Setting the variables: i = 1 (the first section of movement); r = 1 (choleric); N = 16; k = 4.
3. Calculation of the braking time t_Tr for a driver with temperament r: t_Tr = t_T + t_vr + t_pr.
4. Calculation of the acceleration a_ur for a driver with temperament r: a_ur = V_p/(t_p + t_vr + t_pr).
5. Calculation of the braking acceleration a_Tr for a driver with temperament r: a_Tr = V_T0/t_Tr.
6. Calculation of the speed at the end of the first segment of the i-th section: V_i1 = V_m.
7. Calculation of the speed on the second segment of the i-th section: V_i2 = V_m.
8. Calculation of the passage time of the first segment of the i-th section by a driver with temperament r: t_i1r = V_i1/a_ur.
9. Calculation of the distance traveled on the first segment of the i-th section by a driver with temperament r: s_i1r = a_ur·t²_i1r/2.
10. Calculation of the critical speed before the turn for a driver with temperament r: V_K = 3.6·√(g·R_i·ϕ_i) (acceleration of gravity g = 9.8 m/s²).
11. Assignment of the speed on the third and fourth segments of the i-th section by a driver with temperament r: V_i3 = V_K, V_i4 = V_K.
12. Calculation of the time t_i3r and the path s_i3r on the third segment of the i-th section by a driver with temperament r: t_i3r = (V_i2 − V_i3)/a_Tr; s_i3r = V_i2·t_i3r − a_Tr·t²_i3r/2.
13. Calculation of the path traveled on the turn: s_i4 = (π·R_i·ψ_i)/180°.
14. Calculation of the passage time of the fourth segment of the i-th section: t_i4 = s_i4/V_i4.
15. Calculation of the distance traveled on the second segment of the i-th section by a driver with temperament r: s_i2r = S_i − s_i1r − s_i3r − s_i4.
16. Calculation of the travel time of the second segment of the i-th section by a driver with temperament r: t_i2r = s_i2r/V_i2.
17. Calculation of the travel time of the i-th section by a driver with temperament r: t_ir = t_i1r + t_i2r + t_i3r + t_i4.
18. Selection of the next section of the path: i = i + 1; if i > N, go to step 20.
19. Perform all operations from step 3 to step 18.
20. Selection of the next type of temperament: r = r + 1; if r > 5, the algorithm ends.
21. Perform all operations from step 3 to step 18.
The simulation results are shown in Figs. 2.6 and 2.7.
2.11 Conclusion
167
Fig. 2.6 The graph of the dependence on temperament on the 1st segment of the route
Fig. 2.7 The schedule depends on the temperament on the entire route
due to the high speed of entering the turn when the times of issuing commands are inconsistent with the driver’s temperament.
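A simplified Python rendering of steps 3–17 is sketched below. It is illustrative only: temperament enters solely through the delays t_v and t_p given above, V_m is taken equal to the maximum permissible speed, and the critical cornering speed is used in m/s (the factor 3.6 in step 10 converts it to km/h).

```python
# Simplified, illustrative sketch of the per-section travel-time computation
# (steps 3-17). Temperament is represented only by the delays t_v and t_p_cmd.
import math

G = 9.8                     # gravitational acceleration, m/s^2
V_M = 16.7                  # maximum permissible speed, m/s
V_P, T_P = 56.0, 10.0       # acceleration reference speed and time
V_T0, T_T = 28.0, 7.14      # initial braking speed and minimum braking time

def route_time(sections, t_v, t_p_cmd):
    """sections: list of (S_i [m], psi_i [deg], R_i [m], phi_i); returns total time in s."""
    a_u = V_P / (T_P + t_v + t_p_cmd)      # step 4: acceleration for this driver
    t_tr = T_T + t_v + t_p_cmd             # step 3: braking time for this driver
    a_t = V_T0 / t_tr                      # step 5: braking deceleration
    total = 0.0
    for s_i, psi, r_i, phi in sections:
        v1 = v2 = V_M                      # steps 6-7
        t1 = v1 / a_u                      # step 8
        s1 = a_u * t1 ** 2 / 2             # step 9
        if r_i > 0 and psi > 0:
            v_turn = min(math.sqrt(G * r_i * phi), v2)   # step 10, in m/s
            s4 = math.pi * r_i * psi / 180.0             # step 13
            t3 = (v2 - v_turn) / a_t                     # step 12
            s3 = v2 * t3 - a_t * t3 ** 2 / 2
            t4 = s4 / v_turn                             # step 14
        else:                              # straight final section: no braking or turn
            v_turn, s3, s4, t3, t4 = v2, 0.0, 0.0, 0.0, 0.0
        s2 = s_i - s1 - s3 - s4            # step 15
        t2 = s2 / v2                       # step 16
        total += t1 + t2 + t3 + t4         # step 17
    return total

# Example: first two road sections, choleric driver (t_v1 = 2 s, t_p1 = 4 s).
print(route_time([(1000, 90, 2, 0.5), (1200, 80, 4, 0.5)], t_v=2.0, t_p_cmd=4.0))
```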
2.11 Conclusion

In order for robots to formulate goals independently, without human intervention, and to fulfill them successfully, they must not only be equipped with more advanced sensation sensors but must also be able to understand the language of feelings. Bionic approaches should be used, and in some cases are already being used, in the creation of the sensory organs and the “central nervous system” of robots (CNSR). To solve the problem of creating a CNSR it is necessary, first of all, to study and formalize the linguistic meaning of the language, namely the alphabets and the rules for constructing signs (words) from letters and sentences from signs, as well as the rules for understanding the meaning of sentences, forming new sentences and transmitting
them to others. However, sufficiently sound mathematical models of the functioning of the central nervous system have not yet been developed, which prevents the creation of a full-fledged CNSR. The creation of mathematical and software tools that provide robots with the ability of reflexive thinking and essentially bring their sign systems closer to those that people use in everyday practice is still at a very early stage, limited to modeling behavior based on the analysis of sensations. Nevertheless, the technical solutions available at this stage allow us to start creating simplified CNSR prototypes. One of the most promising options for their implementation is a logical-mathematical approach to the formation of behavior based on the analysis of the perception of sensory signals from the robotic system. Currently, the most thoroughly studied problem is the choice of optimal solutions for the formation of behavioral processes under conditions of incomplete certainty of interval, probabilistic or linguistic type. However, for intelligent robots built on the basis of SEMS modules there are certain difficulties in computing control actions, since decision making is tied to their parallel structures. Endowing an intelligent robot with properties similar to the mental properties of a person is an urgent problem, the solution of which will improve the quality of decision making on the optimal control of the behavior of an intelligent robot when performing complex technological tasks in a team and in conditions of great uncertainty of the operating environment. The solution to the problem of endowing an intelligent robot with elements of the human psyche, by creating mathematical and software tools that allow the robot's central nervous system to take into account the peculiarities of the psyche of team members when choosing optimal collective actions, is still at the very initial stage, limited to modeling behavioral processes based on the analysis of sensations. Nevertheless, the technical solutions available at this stage allow us to start creating simplified prototypes of a robot with a central nervous system endowed with a psyche. One of the most promising options for its mathematical implementation is a logical-mathematical approach to the formation of behavioral processes based on the analysis of sensations in the form of signals from the robot's sensory system. The proposed new structure of the robot's central nervous system with additional blocks makes it possible to take into account the psychology of team members and to ensure optimal interaction when performing joint technological operations. The proposed approach to the formation of images based on information from the sensors of the central nervous system, using algebraization and the matrix solution of systems of logical equations, can be effectively used in forming strategies and tactics for controlling intelligent robots in conditions of incomplete certainty. An important task for intelligent robots is to ensure independent decision making regarding appropriate behavior. The proposed algorithm for the formation of the language of robot sensations provides robots with the possibility of reflexive and reasoned reasoning. To do this, the following procedures are proposed: quantization of the surrounding space, blurring of sensory information, image formation
when displaying the surrounding space, image formation by combining images from different sensory organs, and assigning words of the generated language to the images. The construction of the set of logical connections inherent in an image, i.e. the construction of a classification model, makes it possible, on the basis of the found patterns (image attributes), to assign the objects considered in the central nervous system to a particular class. If the images formed in the central nervous system of the robot as a result of the analysis of sensory data are characterized by a set of attributes with a certain degree of confidence, which can be specified in the form of a probability or a membership function, then the following algorithms can be used for their classification: the logical-probabilistic algorithms LP1 and LP2 or the logical-linguistic algorithms LL1 and LL2. Computer experiments have shown the advantages of the introduced algorithms LP1, LL1, LP2 and LL2 in speed and classification accuracy compared with the Quinlan algorithms ID3 and C4.5. If there are similar rows of attributes in the classified images, it is advisable to introduce attribute significance coefficients (the LP2 and LL2 algorithms). The proposed principles of deductive, inductive and abductive decision making based on information from the central nervous system, using algebraization and the matrix solution of logical equations, can be applied effectively in forming the strategy and tactics of controlling intelligent robots under incomplete certainty. The most rapid decision making is achieved by using the principle of abduction, which includes elements of deductive and inductive reasoning. The validity and reliability of decision making in this approach can increase during the operation of the robot if the self-control system accumulates a database of good solutions that proved correct in the past. The stages of forming the sensation language of the CNSR are described and discussed, based on the use of systems of equations modulo two, or systems of logical equations in the Zhegalkin algebra, as well as the features of recognition described by equations in the algebra modulo two. It is shown that the method of situation habitualness, which is an analogue of human intuition and allows the desired solution to be replaced by an analogue, can be used effectively in the CNSR; this sharply increases the rate of formation of reflective reasoning. When selecting the optimal reflective reasoning described by systems of logical equations in the Zhegalkin algebra, and when making the best decision on the advisability of behavior for achieving the formulated goals, it is desirable to reduce the search for the optimum to the well-studied problem of mathematical programming. In the case when some of the attributes of the logical variables in the system of equations of the CNSR are linguistic expressions, the more natural choice of the best solution is the transition to the concept of a consistent preference for one of the compared variants over the others. To solve such problems, a generalized scheme of mathematical programming can be used, shifting from quantitative scales to ordinal ones, or shifting from models that require a functional assignment defining the goals and constraints of the problem to models that take into account the preferences of the persons participating in choosing the decision. If the SEMS control system contains uncertainties of a logical-probabilistic, logical-linguistic or logical-interval type, the search for optimal control can be carried out using mathematical programming methods, when there is a fundamental possibility of constructing a scalar quality criterion, including from the attributes
(probabilities, membership functions or intervals) of the logical variables. In this case, the quality of optimization will mainly be determined by the correctness of constructing a binary relation describing the measure of proximity of the designed SEMS control system to the ideal one. This can be a time-consuming and complex task, often involving the solution of a number of logical problems. The quality of setting and solving these problems depends on the experience and skill of the developer as a decision maker. To increase objectivity in assessing the optimality, it is advisable to make the decision maker collective, with the involvement of the customer in the work of building the binary relation. In a situational control system of a SEMS group without an operator, each SEMS must have a reference solution O_j and a measure of proximity δ_ij of the estimated solution to the reference one. Differences in O_j and δ_ij for different SEMS will lead to collisions in group interaction. In addition, obtaining adequate O_j and objective δ_ij, taking into account the non-determinism and/or incompleteness of the models of the environment of choice, can be very laborious and not always justified. Therefore, it is advisable to have an operator or a leader in the situational control system of the SEMS group. Emotions greatly influence the style of behavior of highly organized animals and humans. Therefore, endowing intelligent robots with some mechanism similar to human emotions is in some cases very relevant when assessing the current situation and making decisions, especially when assessing the safety of the decisions made. On the basis of the proposed algorithms for taking into account the influence of the emotions of “pleasure” and “self-preservation”, by processing the sensory signals of the IR and analyzing statistical data and the results of computer modeling, it is possible to form IR movement routes that take these emotions into account when choosing the optimal route over the traffic sections. The results of the study of the influence of emotions can be used in the central nervous system of intelligent robots to classify sections of traffic routes by comparing them with standards obtained on the basis of the analysis of sensory and statistical data and calculated estimates of the influence of emotions, which will improve the quality of traffic control in conditions of incomplete environmental certainty. The improvement of the control systems of intelligent robots performing technological operations in a team with a person, that is, as part of Human–Machine Systems (HMS), is associated with the development of methods for forming specific properties of the robot that allow the psyche of the person with whom the robot interacts to be taken into account. At the same time, one of the most important is the development of methods for assessing and taking into account a person's temperament. Analysis of the existing classifications of human temperament has shown that three types can be recommended for use in intelligent robots working as part of the HMS in conditions of incomplete environmental certainty: constitutional, based on the analysis of human images; dialog-based, based on the analysis of human responses to questionnaire items; and behavioral, based on the analysis of human behavior characteristics in various situations. At the same time, the choice of the type of
temperament of the diagnosed person can be carried out on the basis of choosing the maximum degree of belonging of the person to a particular type of temperament after using all three types of classification. It is possible to increase the accuracy of the assessment of temperament by expanding the list of temperament types and correspondingly correcting the degree of belonging to a particular type, based on the logical-probabilistic and logical-linguistic analysis of the results of the interaction of the IR with a person in various situations. At the same time, correction of the robot's behavior can be carried out by comparing it with reference models of behavior from a database built on the basis of logical-linguistic and statistical analysis. In order to endow the IR with the ability to assess the temperament of the person with whom it interacts, it is necessary to introduce into its structure, during the formation of its central nervous system, a number of additional program blocks that carry out the diagnosis and assessment of temperament by logical-linguistic methods in an interactive mode. The results of the computer simulation of driving a car along a given route by drivers of different temperaments with a robot assistant (navigator), for the minimum time of passing the route without accidents and without exceeding the maximum speed, showed that taking the driver's temperament into account makes it possible to minimize the time of passing the route without accidents, whereas not taking temperament into account can lead to accidents when cornering. The latter confirms the expediency of endowing intelligent robots with the ability to adapt to human temperament.
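As a minimal illustration of the matrix treatment of systems of logical equations in the Zhegalkin algebra mentioned in this conclusion, the following Python sketch solves a small system of equations modulo two by Gaussian elimination over GF(2); the particular system is a made-up example, not one taken from the chapter:

```python
# Gaussian elimination over GF(2): solves A x = b (mod 2) for a small system
# of linear logical equations in the Zhegalkin (XOR) algebra.
# The example system below is illustrative only.
def solve_gf2(A, b):
    n, m = len(A), len(A[0])
    rows = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    pivot_cols, r = [], 0
    for c in range(m):
        pivot = next((i for i in range(r, n) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n):
            if i != r and rows[i][c]:
                rows[i] = [(a ^ p) for a, p in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    if any(all(v == 0 for v in row[:-1]) and row[-1] for row in rows):
        return None                                     # inconsistent system
    x = [0] * m                                         # free variables set to 0
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][-1]
    return x

# x1 ^ x2 = 1, x2 ^ x3 = 0, x1 ^ x3 = 1  ->  one solution: x = (1, 0, 0)
print(solve_gf2([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 0, 1]))
```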
References

1. Dobrynin, D.A.: Intelligent robots yesterday, today, tomorrow. X natsional’naia konferentsiia po iskusstvennomu intellektu s mezhdunarodnym uchastiem KII-2006 (25–28 sentiabria 2006 g., Obninsk). In: Proceedings of the National Conference on Artificial Intelligence with International Participation, V.2. M: FIZMATLIT (2006). (In Russian) 2. Hahalin, G.K.: Applied ontology language hypergraphs. Trudy vtoroi Vserossiiskoi Konferentsii s mezhdunarodnym uchastiem “Znaniia-Ontologii-Teorii” (ZONT-09). In: Proceedings of the Second Russian Conference with International Participation “Knowledge-Ontologies-Theories”. October 20–22, Novosibirsk, pp. 223–231 (2009). (In Russian) 3. Hahalin, G.K., Kurbatov, S.S., Naydenova, K.A.: The integration of intelligent systems analysis/synthesis of image and text: project outlines INTEGRO. Trudy Mezhdunarodnoi nauchno-tekhnicheskoi konferentsii “Otkrytye semanticheskie tekhnologii proektirovaniia intellektual’nykh sistem” OSTIS-2011. In: Proceedings of the International Scientific-Technical Conference “Open Semantic Technologies of Intelligent Systems” OSTIS-2011, February 10–12, 2011, Minsk, Belarus, pp. 302–318 (2011). (In Russian) 4. Hahalin, G.K., Kurbatov, S.S., Litvinovich, A.V.: Synthesis of visual objects on natural-language description. Trudy vtoroi Mezhdunarodnoi nauchno-tekhnicheskoi konferentsii “Komp’iuternye nauki i tekhnologii” (KNiT-2011). In: Proceedings of the Second International Scientific and Technical Conference “Computer Science and Technology”, 3–7 October 2011, Belgorod, pp. 595–600 (2011). (In Russian) 5. Fominih, I.B.: Adaptive systems and information model of emotions. Trudy Mezhdunarodnoi konferentsii Intellektual’noe upravlenie: novye intellektual’nye tekhnologii v zadachakh
upravleniia (ICIT’99). In: Proceedings of the International Conference Intelligent Control: the new intelligent technology in control problems (ICIT’99), 6–9 December 1999, Pereslavl (1999). (In Russian) 6. Vagin, V.N., Anishchenko, I.G.: The concept of the mark in science and art. Novosti iskusstvennogo intellekta. News Artif. Intell. 3 (2006). (In Russian) 7. Voskresenskiy, A.L., Hahalin, G.K.: Context fragmentation in linguistic analysis. Desiataia natsional’naia konferentsiia po iskusstvennomu intellektu s mezhdunarodnym uchastiem KII-2006. In: Proceedings of the 10th National Conference on Artificial Intelligence with International Participation, vol. 3, M.: FIZMATLIT, 25–28 September 2006, Obninsk (2006). (In Russian) 8. Gorbushin, N.G.: Models of sense and consciousness in artificial intelligence. Desiataia natsional’naia konferentsiia po iskusstvennomu intellektu s mezhdunarodnym uchastiem KII-2006. In: Proceedings of the 10th National Conference on Artificial Intelligence with International Participation, vol. 3, M.: FIZMATLIT, 25–28 September 2006, Obninsk (2006). (In Russian) 9. Golovin, S.Y.: Slovar’ prakticheskogo psikhologa. Dictionary Practical Psychologist. Harvest, Minsk (1998). (In Russian) 10. Bol’shoi psikhologicheskii slovar [A significant psychological dictionary]. M.: Prime EVROZNAK. Ed. B.G. Meshcheryakov, Acad. V.P. Zinchenko (2003). (In Russian) 11. Karpov, V.E.: Emotions robots. XII natsional’naia konferentsiia po iskusstvennomu intellektu s mezhdunarodnym uchastiem KII-2010. In: Proceedings of the XII National Conference on Artificial Intelligence with International Participation, September 20–24, 2010, Tver, M.: FIZMATLIT, pp. 354–368 (2010). (In Russian) 12. Steiner, R.: Antroposofiia. Fragment 1910 g [Anthroposophy. Detail 1910]. M.: Titurel, p. 504 (2005). (In Russian) 13. Fominykh, I.B.: Emotions as a unit estimates the behavior of intelligent systems. Desiataia natsional’naia konferentsiia po iskusstvennomu intellektu s mezhdunarodnym uchastiem KII-2006. In: Proceedings of the 10th National Conference on Artificial Intelligence with international participation, vol. 3, 25–28 September 2006, Obninsk, M.: FIZMATLIT (2006). (In Russian) 14. Afanasieff, V.V.: Sveto-zvukovoi muzykal’nyi stroi [Light-sound music system]. M.: “Music” (2002). (In Russian) 15. Polivtsev, S.A., Khashan, T.S.: The study of geometric and acoustic properties of the sensors for the technical hearing system. Problemy bioniki. Prob. Rob. Bion. 6 (2003). (In Russian) 16. Ying, M., Bonifas, A.P., Lu, N., Su, Y., Li, R., Cheng, H., Ameen, A., Huang, Y., Rogers, J.A.: Silicon Nanomembranes for Fingertip Electronics. Published 10 August 2012, IOP Publishing Ltd. (2012) 17. Tactile organs. Entsiklopedicheskii slovar’ Brokgauza i Efrona. Brockhaus and Efron Encyclopedic Dictionary, vol. 82 and 4 additional, St. Petersburg (1890–1907). (In Russian) 18. Semashko, N.A.: Bol’shaia meditsinskaia entsiklopediia. Great Medical Encyclopedia. Moscow, Publish. OGIZ RSFSR (1934). (In Russian) 19. Proceedings of the Institute of Mechanics, Ural Scientific Center RAS, vol. 9, Part II (2012). (In Russian) 20. Rachkov, M.Yu.: Tekhnicheskie sredstva avtomatizatsii. Technical Means of Automation. Textbook, 2nd ed., M.: MGIU, p. 185 (2009). (In Russian) 21. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Ergatic operating instructions manual methods of analysis and decision-making processes in injuries and accidents of power. Informatsionno-upravliaiushchie sistemy. Inform. Contr. Syst. 6, 29–36 (2013). (In Russian) 22. Gorodetskiy, A.E., Dubarenko, V.V., Kurbanov, V.G., Tarasova, I.L.: Logical and probabilistic modeling techniques poorly formalized processes and systems. Izvestiia IuFU. Tekhnicheskie nauki. Proc. SFU Tech. Sci. 6(131), 255–257 (2012). (In Russian) 23. Koss, V.A.: The model of natural intelligence and ways of achieving the objectives of artificial intelligence. Matematicheskie mashiny i sistemy. Math. Mach. Syst. 1(4) (2006). (In Russian)
24. Gavrilova, T.A., Khoroshevskiy, V.F.: Bazy znanii intellektual’nykh sistem. Base Knowledge of Intelligent Systems, St. Petersburg, Piter (2001). (In Russian) 25. Kulik, B.A., Kurbanov, V.G., Fridman, A.Y.: Parallel processing of the data and knowledge of the methods of the algebra of tuples. Trudy SPRIIAN. Proc SPIIRAS 5(36), 168–179 (2014). https://doi.org/10.15622/sp.36.10. (In Russian) 26. Gorodetskiy, A.E., Tarasova, I.L.: Nechetkoe matematicheskoe modelirovanie plokho formalizuemykh protsessov i sistem. In: Fuzzy Mathematical Modeling Badly Formalized Processes and Systems, p. 336. SPb.: Publishing House of the Polytechnic. University Press (2010). (In Russian) 27. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Methods of synthesis of optimal intelligent control systems SEMS. Smart Electromech. Syst., 25–45, https://doi.org/10.1007/978-3-31927547-5_4 28. Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasonins. Inf. Sci. 8(3), 199–249 (1975) 29. Ahlefeld, H., Hertzberger, Y.: Vvedenie v interval’nye vychisleniia. Introduction to Interval Calculations, M.: Mir, p. 360 (1987). (In Russian) 30. Levin, V.I.: Interval’naia matematika i issledovanie sistem v usloviiakh neopredelennosti. In: Interval Mathematics and Research Systems in Conditions Of Uncertainty. Publish. Penz. tehnol. Inst., Penza (1998). (in Russian) 31. Levin, V.I.: Interval Continuous logic and its application in control problems. Izv. RAN. Teoriia i sistemy upravleniia. Bull. Russian Acad. Sci. Theor. Contr. Syst. 1 (2002). (In Russian) 32. Levin, V.I.: Continuous logic and its application. Informatsionnye tekhnologii. Inform. Technol. 1, 17–21 (1997). (In Russian) 33. Gorodetskiy, A.E., Dubarenko, V.V.: Combinatorial method of calculating the probabilities of complex logic functions. Zhurnal vychislitel’noi matematiki i matematicheskoi fiziki. J. Comput. Math. Mathemat. Phys. 39(7), 1246–1249 (1999). (In Russian) 34. Dubarenko, V.V., Kurbanov, V.G., Kuchmin, A.Y.: A method of calculating the probabilities of logic functions. Informatsionno-upravliaiushchie sistemy. Inform. Contr. Syst. 5, 2–7 (2010). (In Russian) 35. Gitman, M.B.: Vvedenie v teoriiu nechetkikh mnozhestv i interval’nuiu matematiku: Ch1: Primenenie lingvisticheskoi peremennoi v sistemakh priniatiia reshenii. In: Introduction to the Theory of Fuzzy Sets and Interval Mathematics: Part 1: Application of the Linguistic Variable in the Decision-Making Systems. Textbook—Perm: Publishing House of Perm State Technology, p. 45. University Press (1998). (In Russian) 36. Komartsova, L.G., Maksimov, A.V.: Neirokomp’iutery. [Neurocomputers]. M.: Publishing House of the Moscow State Technical University, p. 320 (2002). (In Russian) 37. Vvedenie v matematicheskoe modelirovanie. In: Trusova, P.V., Logos, M. (eds.) Introduction to Mathematical Modeling: Textbook, p. 440 (2005). (In Russian) 38. Gorodetskiy, A.E., Dubarenko, V.V., Kurbanov, V.G.: Method of searching for optimal control actions on objects with dynamic adaptation to changes in the environment. 6-i SanktPeterburgskii simpozium po teorii adaptivnykh sistem (SPAS”99). In: 6th St. Petersburg Symposium on Adaptive Systems Theory (SPAS”99). St. Petersburg, pp. 228–232 (1999). (In Russian) 39. Dobrynin, D.A., Karpov, V.E.: Modeling some forms of adaptive behavior of intelligent robots . Informatsionnye tekhnologii i vychislitel’nye sistemy. Inform. Technol. Comp. Syst. 2, 45–56 (2006). (In Russian) 40. 
Kunc, G., Donnel, O.: Upravlenie: sistemnyj i situacionnyj analiz upravlencheskih funkcij. Control: System and Situation Analysis of Control Functions. Moscow, Progress Publ., p. 588 (2002). (In Russian) 41. Alexandrov, O.P.: Psychology of robots as science. In: Alexandrov, O.P., Kazakhbayeva, G.U., Shirokov, O.N., et al. (eds.) New Word in Science: Prospects of Development: Materials of the X International Scientific and Practical Conference, vol. 2 and vol. 1, Cheboksary, 31 December 2016, pp. 192–198. CNS “Interactive Plus”, Cheboksary (2016). https://doi.org/10.21661/r-116360
42. Karpov, V.E., Karpova, I.P., Kulinich, A.A.: Social Communities of Robots, p. 352. URSS, Moscow (2019) 43. Gorodetskiy, A.E., Kurbanov, V.G.: Smart Electromechanical Systems: The Central Nervous Systems, p. 270. Studies in Systems, Decision and Control 95, Springer International Publishing, Switzerland. ISBN 978-3-319-53326-1, 2017, . https://doi.org/10.1007/978-3319-53327-8 44. Granovskaya, R.M.: Elements of Practical Psychology, 5th edn. ispr. and dop. – SPb.: Speech, p. 655 (2003). 45. Karpov, V.E.: Emotions and temperament of robots: behavioral aspects. J. Comp. Syst. Sci. Int. 53(5), 743–760. 46. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Challenges related to development of central nervous system of a robot on the bases of SEMS modules. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds.) Smart Electromechanical Systems: The Central Nervous System. Studies in Systems, Decision and Control 95, Springer International Publishing AG, pp. 3–16 (2017). https://doi.org/10.1007/978-3-319-53327-8_1 47. Gorodetskiy, A.E., Tarasova, I.L.: Logical and mathematical method of making behavioral decisions. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems, pp. 3–14. Behavioral Decision Making/Studies in Systems, Decision and Control 352, Springer Nature Switzerland AG (2021). https://doi.org/10.1007/978-3-030-68172-2_1 48. Gorodetskiy, A.E., Tarasova, I.L.: Decision making an autonomous robot based on matrix solution of systems of logical equations that describe the environment of choice for situational control. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems, pp. 259–273. Situational Control. Studies in Systems, Decision and Control 261, Springer International Publishing (2020). https://doi.org/10.1007/978-3-030-32710-1_20 49. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Classification of Images in Decision Making in the Central Nervous System of SEMS. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems, pp. 187–196. Behavioral Decision Making/Studies in Systems, Decision and Control 352, Springer Nature Switzerland AG (2021), https://doi. org/10.1007/978-3-030-68172-2_15 50. Ackoff, R., Emery, F.: O tseleustremlennykh sistemakh. On Purposeful Systems, p. 269. M.: Sov. Radio (1974). (In Russian) 51. Gorodetskiy, A.E., Erofeev, A.A.: Principles of intelligent control systems mobile objects. Avtomatika i telemekhanika. Autom. Rem. Contr. 9 (1997). (In Russian) 52. Gorodetskiy, A.E., Dubarenko, V.V., Erofeev, A.A.: Algebraic approach to the solution of logical control problems. Avtomatika i telemekhanika. Autom. Rem. Contr. 2, 127–138 (2000). (In Russian) 53. Zhegalkin, I.I.: Arifmetizatsiia simvolicheskoi logiki. [Arithmetization symbolic logic]. Matematicheskii sbornik. Math. Coll. 35, 3–4 (1928). (In Russian) 54. Dubarenko, V.V., Kurbanov, V.G.: The method of bringing the systems of logical equations in the form of linear sequential machines. Informatsionno – izmeritel’nye i upravliaiushchie sistemy. Inform. Measur. Contr. Syst. 7(4), 37–40 (2009). M.: Publish. Radiotechnika. (In Russian) 55. Gorodetskiy, A.: Osnovy teorii intellektual’nykh sistem upravleniia. Foundations of the Theory of Intelligent Control Systems, p. 313. LAP LAMBERT Academic Publishing GmbH @ Co. KG (2011). (In Russian) 56. Connell, J.: Robots that talk and listen. In: Markowitz, J. (ed.) De Gruyter (2014) 57. 
Connell, J., Marcheret, E., Pankanti, S., Kudoh, M., Nishiyama, R.: Proceedings of the Artificial General Intelligence Conference (AGI-12), LNAI 7716, pp 21–30 (2012) 58. Theconversation.com/how-do-robots-see-the-world-51205 59. Kurbanov, V.G., Burakov, M.V.: Solving of logic functions systems using genetic algorithm. In: Proceedings of the II International Scientific and Practical Conference “Fuzzy Technologies in the Industry—FTI 2018”, pp. 410–417 (2018). http://ceur-ws.org/Vol-2258/paper49. pdf
60. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Decision making in central nervous system of a robot. Inform. Contr. Syst. 1, 21–30 (2018). https://doi.org/10.15217/issnl684-8853.2018. 1.21. (In Russian) 61. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Behavioral decisions of a robot based on solving of systems. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds.) Smart Electromechanical Systems: The Central Nervous System, Studies in Systems, Decision and Control 95, pp. 61– 71 (2017). https://doi.org/10.1007/978-3-319-53327-8_5 62. Vagin, V.N., Golovina, E.Y., Zagoryansky, A.A., Fomin, M.V.: Dostovernyi i pravdopodobnyi vyvod v intellektual’nykh sistemakh [Reliable and plausible conclusion in intelligent systems]. In: Vagin, V.N., Pospelov, D.A. (eds.) 2nd Edition Revised and Enlarged, FIZMATLIT Publ., p. 712 (2008). (In Russian) 63. Quinlan, J.R.: Induction of decision trees. Mach. Learn. 1(1), 81–106 (1986). https://doi.org/ 10.1007/BF00116251 64. Quinlan, J.R.: Improved use of continuous attributes in C 4.5. J. Artif. Intell. Res. 4(1), 77–90 (1996). https://doi.org/10.1613/jair.279 65. Clark, P., Niblett, T.: The CN2 induction algorithm. Mach. Learn. 3(4), 261–283 (1989). https://doi.org/10.1007/BF00116835 66. Vagin, V.N., Krupetskov, A.V., Fomina, M.V.: An algorithm for constructing decision trees in the presence of discrepancies in the data. In: Seventeenth National Conference on Artificial Intelligence with international participation KII-2019, October 21–25, 2019, Ulyanovsk, Russia. Collection of Scientific Works, vol. 2, pp. 182–191 (2019) 67. Fakhrahmad, S.M., Jafari, S.: Uncertain decision tree inductive inference. Int. J. Electron. 98(10), (2008). https://doi.org/10.1080/00207217.2011.593138 68. Gorodetskiy, A.E., Tarasova, I.L.: Algebraic methods for obtaining and converting images in the technical diagnosis of complex systems under conditions of incomplete certainty. (Part 1). (Informatsionno-upravlyayushchiye sistemy) Inform. Contr. Syst. 5, 10–14 (2008). (In Russian) 69. Gorodetskiy, A.E., Tarasova, I.L.: Algebraic methods for obtaining and converting images in the technical diagnosis of complex systems under conditions of incomplete certainty. (Part 2). (Informatsionno-upravlyayushchiye sistemy) Inform. Contr. Syst. 6, 22–25 (2008). (In Russian) 70. Gorodetskiy, A.E.: Smart electromechanical systems modules. In: Smart Electromechanical Systems, pp. 7–15. Studies in Systems, Decision and Control 49, Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-27547-5_2 71. Gorodetskiy, A.E.: The principles of situational control SEMS group. In: Smart Electromechanical Systems, pp. 3–13. Situational Control. Studies in Systems, Decision and Control 261, Springer International Publishing (2020). https://doi.org/10.1007/978-3-030-32710-1_1 72. Gorodetskiy, A.E.: Fuzzy decision making in design on the basis of the habituality situation application. In: Reznik, L., Dimitrov, V., Kacprzyk, J. (eds.) Fuzzy Systems Design. Social and Engineering Applications, pp. 63–73. Physica-Verlag, A Springer-Verlag Company. New York (1998) 73. Kondakov N.I. Logicheskii slovar’-spravochnik. [Logical dictionary-reference book]. - M.: Nauka, 1975, 720 p. (In Russian) 74. Tobacco, D., Kuo, B.: Optimal’noe upravlenie i matematicheskoe programmirovanie. Optimal Control and Mathematical Programming. M.: Nauka, p. 280 (1975). (In Russian) 75. Peirce, C.S.: Rassuzhdenie i logika veshchei. Reasoning and Logic of Things: Lectures for the Cambridge Conference (1898). (In Russian) 76. 
Gorodetskiy, A.E.: The use of a situation accustomed to accelerate the adoption of intellectual solutions in information and measuring systems. In: Physical Metrology: Theoretical and Applied Aspects, pp. 141–151. SPb: Publishing. KN (1996). (In Russian) 77. Gorodetskiy, A.E., Tarasova, I.L.: Upravlenie i neironnye seti. Control and Neural Networks, p. 312. SPb.: Publishing House of the Polytechnic. University Press (2005). (In Russian) 78. Iudin, D.B.: Vychislitel’nye metody teorii priniatiia reshenii. Computational Methods of Decision Theory, p. 320. Moscow, Nauka Publ. (1989). (In Russian)
79. Moshkin, V.I., Petrov, A.A., Titov, V.S., Iakushenkov, I.G.: Tekhnicheskoe zrenie robotov. The Technical Vision of Robots. Moskva, Mashinostroenie, p. 272 (1990). (In Russian) 80. Nikolenko, S.I., Tulup’ev, A.L.: Samoobuchaiushchiesia sistemy. Self-Learning Systems, p. 288. Izdatel’stvo: MTsNMO (Moskovskii tsentr nepreryvnogo matematicheskogo obrazovaniia); (2009). (In Russian) 81. Gorodetskiy, A.E., Dubarenko, V.V., Tarasova, I.L., Shereverov, A.V.: Programmnye sredstva intellektual’nykh sistem. Software Intelligent Systems, p. 171. SPb.: Izdatel’stvo SPbGTU (2000). (In Russian) 82. Kuchmin, A.Y.: Ob odnom metode nelineinogo programmirovaniia s proizvolnymi ogranicheniiami. A method for nonlinear programming with arbitary constraints. Informatsionno-upravliaiushchie sistemy. Inform. Contr. Syst. 2, 2–9 (2016). (In Russian) 83. Alefel’d, G.: Khertsberger Iu. Vvedenie v interval’nye vychisleniia. Introduction to the Interval Calculation. M.: Mir, p. 360 (1987). (In Russia) 84. Levin, V.I.: Raschet dinamicheskikh protsessov v diskretnykh avtomatakh s neopredelennymi parametrami s pomoshch’iu nedeterministskoi beskonechnoznachnoi logiki/Kibernetika i sistemnyi analiz. Calculation of Dynamic Processes in Discrete Machines with Uncertain Parameters Using Non-deterministic Infinite-Logic, vol. 3. pp. 15–30 (1992). (In Russian) 85. Gorodetskiy, A.E., Tarasova, I.L.: Situational control a group of robots based on SEMS. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems, pp. 9–18. Group Interaction. Springer International Publishing (2018). https://doi.org/10.1007/978-3-319-997 59-9_2 86. Pospelov, D.A.: Situacionnoe upravlenie: Teoriya i praktika. Situation Control: Theory and Practice, p. 286. Nauka Publ., Moscow (1986). (In Russian) 87. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Situational control of the group interaction of mobile robots. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems, pp. 91–101. Situational Control. Springer International Publishing (2020). https:// doi.org/10.1007/978-3-030-32710-1_7 88. Vajsbord, E.M., Zhukovskij, V.I.: Vvedenie v differencial’nye igry neskol’kih lic i ih prilozhenie. Introduction to the Differential Game of Several Persons and Their Application, p. 304. Sov. Radio Publ., Moscow (1980). (In Russian) 89. Ajzeks, R.: Differencial’nye igry. Differential Games, p. 479. Mir Publ., Moscow (1967). (In Russian) 90. Kostrikin, A.I.: Vvedenie v algebra. Introduction to Algebra, pp. 47–51. Nauka Publ., Moscow (1977). (In Russian) 91. Gorodetskiy, A.E., Tarasova, I.L., Shkodyrev, V.P.: Matematicheskoe modelirovanie intellektual’nyh sistem upravleniya: Modelirovanie determinirovannyh intellektual’nyh sistem upravleniya. Mathematical Modeling of Intelligent Control Systems: Modeling of Deterministic Intelligent Control Systems, p. 181. Polytechnic University Publ., St. Petersburg (2016). (In Russian) 92. Karpov, V.E., Val’tsev, V.B.: Dynamic planning of robot behavior based on an “intellectual” neuron network. Sci. Tech. Inform. Process. 38(5), 344–354. (In Russian) 93. Karpov, V.E.: Models of social behavior in group robotics. In: Large-Scale Systems Control, Issue 59, pp. 165–232. IPU RAS, Moscow (2016). (In Russian) 94. Barteneva, D., Lau, N., Reis, L.P.: A computational study on emotions and temperament in multi-agent systems. In: Proceedings of the AISB’07: Artificial and Ambient Intelligence, Newcastle, GB, CoRR (2008). arXiv:0809.4784 95. Simonov, P.V.: Need-information theory of emotions. Quest. 
Psychol. 6, 44–56 (1982). (In Russian) 96. Simonov, P.V.: Otrazhatel’no-ocenochnaya funkciya emocij. In: The Reflective-Evaluative Function of Emotions. M.: Ed. Center of the EAOI, 2011. (In Russian) 97. Karpov, V.E.: Imprinting and central motor programs in robotics. In: Proceedings of the IV International Scientific and Practical Conference “Integrated Models and Soft Computing in Artificial Intelligence” May 28–30, 2007, Kolomna, Russia. M.: Fizmatlit, pp. 322–332 (2007). (In Russian)
98. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Safe control of SEMS in group interaction. Inform. Contr. Syst. 1, 23–31 (2019). https://doi.org/10.31799/1684-8853-2019-1-23-31. (In Russian) 99. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Reduction of logical-probabilistic and – inguistic constraints to interval constraints in the synthesis of optimal SEMS. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems, Group. Interaction. Studies in Systems, Decision and Control 174, pp. 77–90. Springer International Publishing (2018). https://doi.org/10.1007/978-3-319-99759-9_7. (In Russian) 100. Gorodetskiy, A.E., Tarasova, I.L.: Logical and linguistic risk assessments when choosing the optimal route. Hybrid and synergetic intelligent systems. In: Materials of the VI All—Russian Pospelov Conference with International Participation June 27–July 1, 2022 Zelenogradsk, Kaliningrad Region (2022). (In Russian) 101. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Method of image classification. RU Patent No. 2756778 C1 102. Pekelis, V.D.: Your Possibilities, Man! 5th edn. reprint and additional—M.: Knowledge (1986). (In Russian) 103. Gorodetskiy, A., Kurbanov, V.: Irina Tarasova. Formation of Images Based on Sensory Data of Robots. PRIPT 2019. Pattern Recognition and Information Processing. In: Proceedings of the 14th International Conference, 21–23 May 2019, Minsk, Belarus (2019). (In Russian) 104. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Logical-mathematical model of decision making in central nervous system SEMS. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds.) Smart Electromechanical Systems: The Central Nervous System. Studies in Systems, Decision and Control 95, Springer International Publishing AG (2017). https://doi.org/10.1007/978-3-31953327-8-4 105. Gorodetskiy, A.E., Tarasova, I.L. (2022) State registration of computer programs, RU 202260418. In: Image Classification Software Package Using the Logical-Probabilistic Algorithm LP2 (2022). (In Russian) 106. Madrigal, A.C.: Meet the Cute, Wellies-Wearing, Wikipedia-Reading Robot That’s Going to Hitchhike Across Canada, The Atlantic (2014) 107. Nechet, T.: Vandals Broke and Beheaded the Famous HitchBOT Traveler, Komsomolskaya Pravda in Ukraine 108. Troshina, S.V.: Human temperament: essence, types and their characteristics. http://lemur59. ru/node/10506 109. Gurevich, P.S.: Psychology: Textbook. In: Gurevich, P.S. (ed.) 2-Urga, p. 332. M.: NIC INFRA-M (2015). (In Russian) 110. Neznanov, N.G.: Psychiatry [Electronic Resource]: Textbook. In: Neznanov, N.G. (ed.) M.: GEOTAR-Media (2016). http://www.studmedlib.ru/book/ISBN9785970438282.html 111. Chovdyrova, G.S.: Clinical psychology. General part: studies. In: Chovdyrova, G.S., Klimenko, T.S. (eds.) Manual/M.: UNITY-DANA: Law and Law, p. 247 (2017). ISBN 978-5-238-01746-4. http://znanium.com/catalog/product/1028496 112. Kretschmer, E.: Body Structure and Character, Publishing House: Academic Project, p. 328 (2015). ISBN: 978-5-8291-1765-8. (In Russian) 113. Comb, N.F.: Psychological tests for professionals. Comp. Minsk: Sovrem. shk., p. 496 (2007). (In Russian) 114. Rusalov, V.M.: Questionnaire of temperament structure (OST). In: IP of the USSR Academy of Sciences, p. 50 (1990). (In Russian) 115. Aizenka’s, G.: EPI personality Questionnaire (Methodology). M.: Almanac of Psychological Tests, pp. 217–224 (1995). (In Russian) 116. Mironova, E.E.: Collection of psychological tests. Part I: Manual/Comp. Mn.: Women’s Institute of ENVILA, 2005. – 155 p. 
(In Russian) 117. Mendelevich, V.D.: Clinical and Medical Psychology Textbook 6, “MED Press-Inform”, p. 432 (2008). (In Russian) 118. Thomas, A., Chess, S., Birch, H.G., Hertzig, M.E., Korn, S.: Behavioral individuality in early childhood (1963)
Chapter 3
Group Control
Abstract This chapter discusses the features of the synthesis of automatic control systems of SEMS under group control. The expediency of using the principles of situational control in this case is shown. The problems of finding the optimal situational control algorithm are discussed, which always arise for a SEMS group in the joint execution of technological operations. It is concluded that the natural time limit for making an optimal decision on the situational control of a group of SEMS in real time imposes restrictions on the number of members of the controlled group and the distances between them, associated with the dynamics of the environment of choice and the dynamics of controllability of the SEMS themselves. Mathematical and algorithmic methods of decision support are given. To do this, it is possible, for example, to compile a list of possible instructions based on the purpose of a particular robot and then, using mathematical and computer modeling, to determine a set of acceptable instructions for group behavior. It is shown that when solving this problem it is necessary to take into account the dynamic characteristics of the robots, which can be optimized by adjusting the parameters of the robots' automatic control systems. Various approaches to decision making are analyzed: deductive, inductive and abductive; it is concluded that the latter is the fastest, by analogy with intuition, but its reliability depends on the completeness of the base of good decisions from past experience, i.e. it depends strongly on the operating time of such robots in similar environmental conditions. When determining the optimal solution under conditions of incomplete certainty, it is advisable to use binary relations that can be expressed as logical equations in the Zhegalkin algebra, reduced to a matrix form, which makes it easy to parallelize the process of finding the optimal solution. Particular attention is paid to the issues of safe control. At the same time, the possibility of using influence diagrams is shown. The choice of specific circuit structures depends on the tasks solved by the group, the properties of the environment in which the group functions, the characteristics of the members of the group, and the resources available to implement the control system. Algorithms for assessing the risks of accidents on the sections of the analyzed routes are given, taking into account the “observed” area of the terrain. Estimates of the group intelligence of robots and a generalized structure of a software package for testing models of groups of intelligent robots, including expert systems for creating dynamic models of interacting robots and environments, are proposed.
Keywords Situational control · SEMS group · Movement control · Decision making · Matrix solution · Systems of logical equations · Environment of choice · Influence diagrams · Secure control · Probability of accidents · Group intelligence · Software package · Testing group
3.1 Principles of Situational Control of the SEMS Group

Situational control (from Lat. situatio — position) is the operational control of a group of interacting dynamic objects with appropriate behavior [1]. Such control consists in making control decisions as problems arise, in accordance with the dynamically changing environment of choice [2]. This requires another level of control, which serves as the interface between the group and the operator setting the targets [3, 4] and which can be attributed to the optimization problems of situational control [5, 6]. Moreover, the operator, or preference decision maker (PDM), can be not only a person but also a computer program that makes decisions. In the works [7–9] on the situational control of enterprises and other socio-economic objects, the environment of choice is understood as the control situation, i.e. a specific set of circumstances that have a significant impact on the work of the organization at the given moment. Pospelov [5] expands the concept of the situation, adding to it information about the relationships between objects, and says that “the current situation is a set of all information about the structure of the object and its functioning at a given time.” All information also includes cause-and-effect relationships, which can be expressed in a variety of sequential events or processes. In this sense, the situation is fundamentally different from states and events, which can correspond only to one point in time. In the case of situational control of a SEMS group, similarly to a group of robots, the environment of choice, or the environment of the control problem, can be understood as a subjective assessment of the specific characteristics of the SEMS and the external environment (situational variables) and of the relationships between them that hold at the present time but depend on events that have occurred and are developing in time and space [10–14]. The creation and development of situational control systems for the SEMS group requires considerable resources to collect information about the objects and the control environment, their dynamics and the control methods, as well as to systematize this information within the semiotic model. Therefore, it is considered that the method of situational control is appropriate only in cases where other methods of formalization lead to a problem of too large (for practical implementation) dimension [15].
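To make the distinction between a situation and a momentary state concrete, the following small Python sketch represents a situation as a set of situational variables together with relations between objects and a time-stamped event history; the field names and example values are illustrative assumptions, not part of the cited definitions:

```python
# Illustrative data structure: a "situation" carries not only current situational
# variables but also relations between objects and a history of events, which is
# what distinguishes it from a single state. Field names are example assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Situation:
    time: float
    variables: Dict[str, float]                  # situational variables of SEMS and environment
    relations: List[Tuple[str, str, str]]        # (object, relation, object)
    events: List[Tuple[float, str]] = field(default_factory=list)  # time-stamped history

    def add_event(self, t: float, description: str) -> None:
        self.events.append((t, description))

if __name__ == "__main__":
    s = Situation(time=12.0,
                  variables={"sems1.speed": 0.4, "obstacle.distance": 2.5},
                  relations=[("sems1", "approaches", "obstacle")])
    s.add_event(10.5, "obstacle detected")
    print(s)
```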
3.1.1 The Concept of Situational Control

The concept of situational control of a group of interacting dynamic objects can be reduced to the following system of basic provisions:

• There is no universal approach to control, since control depends on the specific problem situations in the environment of choice, which require different approaches to their resolution;
• Situational, probabilistic and not fully defined factors are taken into account in the control strategies, structures and methods, thereby achieving effective decision making;
• There is more than one way to achieve the control goal, and it is necessary to find the optimal one;
• The results of control decisions made in the same environment of choice may differ significantly from each other because of its stochasticity and incomplete certainty;
• Any control decision should be considered only in close connection with the other problems of the interacting dynamic objects;
• Decision makers (computer programs) can adapt the characteristics of the interacting dynamic objects to the situation (the environment of choice) or change the situation according to the requirements of the control goal;
• Decision makers should have psychologically comfortable conditions for interaction with the controlled dynamic objects.

Thus, situational control is primarily the art of the decision maker: to correctly identify and assess the situation and to choose the most effective control methods that best fit the situation and the control goals. In this case, the control process should consist of the following mandatory steps that must be implemented by the control system to achieve effective control in each specific situation:

• Creating the necessary conditions for changes in the dynamic configuration space of the controlled interacting dynamic objects;
• Creation of a database and knowledge base on the environment and the control objects;
• Identification and analysis of the situation in the environment of choice;
• Selection of approaches and methods of optimal control in the current situation;
• Development of the control actions required to carry out the changes in the dynamic objects needed to achieve the control objectives;
• Assessment of the likely consequences of situational control.

When formalizing the environment of choice (the environment of the situational control task), the following conditions must be met:

• The environment of choice must contain a finite number of factors and describe their states and relationships;
• The environment of choice should contain the factors that significantly affect the objects of control, since it is impossible to take into account the influence of absolutely all factors when making a decision;
• The environment of choice must contain the factors that affect the interaction of the control objects in the group;
• In the environment of choice, it is necessary to take into account the causes and consequences of possible control situations.

Compliance with the latter condition leads to the need to classify control situations, because their recognition is the first stage of the process of resolving situational control tasks. To date, a large number of classifications of control situations with different classification characteristics and depths of decomposition have been developed. As a basis for the analysis and resolution of control situations, one can use a model that accounts for a number of sources of control situations in the environment of choice of a group of robots, considering their content characteristics and using known strategies for resolving situations in controlling a group of SEMS. The following main types of restrictions can be distinguished:

• Technological, which are determined by the type and flexibility of the interacting SEMS;
• Intellectual, reflecting the levels of competence of the mathematical support of the control computing systems that make decisions taking into account changes in the environment of choice;
• Limitations in the formulation of the problem due to the actual nature of the work performed by the group of interacting SEMS.
3.1.2 The Principles of Situational Control of SEMS Groups

In any approach to the organization of situational control of the SEMS group, it is necessary to collect information about the environmental parameters, the current state of the individual SEMS of the group, the planned actions of the group members, etc. Generally, after this information has been collected, a model of the environment of choice O(t_k) is created. The planning of situational control of the SEMS group then consists:

• In the division of the group task into subtasks:
O(t_0) ⇒_{U(t_1)} O(t_1), …, O(t_0) ⇒_{U(t_f)} O(t_f),   (3.1)

where U(t_k) = {u_a1(t_k), u_a2(t_k), …, u_an(t_k)} and u_ai(t_k) is the control action applied to SEMS a_i at the time t_k, k = 0, 1, …, f;

• In the distribution of the solutions of the subtasks between the SEMS of the group, so that the group task is solved in the minimum time, taking into account the available restrictions, including those on information interaction.

In general, the solution of the group problem of situational control is the synthesis of a search algorithm, i.e. of an ordered set w ⊂ Ω of alternative
combinations of controls U(t_k) (see Sect. 2.25 in Chap. 2). This should be the best combination of control laws u_ai(t_k) ∈ U(t_k) for each member of the SEMS group, obtained on the basis of quality estimates Q constructed taking into account the system of preferences E and the environment of choice O(t_k). In order to make the most effective decisions in a given situation in the environment of choice O(t_k), and to make the appropriate changes in the group of interacting SEMS in the best possible way, the decision makers must follow certain principles, or rules, containing the fundamental requirements for effective control, the most important of which are the following:

• Competence, i.e. the correct use of the mathematical apparatus in the formalization of the environment of choice and the completeness of information about the current state of the control objects and the environment;
• The ability to make decisions in the absence of precedents, since no control situation, no matter how standard it may seem, can be absolutely similar to any situation that has taken place in the past;
• The presence of relationships between the situational variables, since all the factors of the situation constitute a single whole, a system, and therefore affect each other in one way or another;
• The presence of a dual influence of factors, since situational factors can have different, sometimes even contradictory, characteristics;
• Continuity of changes, since changes in the control objects and their external environment occur constantly, in one way or another;
• Irreversibility of changes, since any control action puts the SEMS group onto a new stage of change of its configuration space;
• The ability to react quickly, since the constant change of the situational variables requires continuous development of control decisions aimed at adapting the SEMS group to these changes;
• The presence of prerequisites for changes: along with the constant monitoring of changes, it is necessary to continuously monitor the presence of the prerequisites and conditions necessary to bring the parameters of the SEMS group in line with the changed situation;
• Compliance with the optimal ratio of results and costs when forming the optimality criteria, with the greatest approximation of the SEMS group to its goals;
• The possibility of an a priori decision, which makes it possible not only to correctly assess the situation and respond to its change in a timely manner, but also to anticipate possible changes in this situation;
• The ability to decide not only on the formation of changes in the SEMS group when the situation changes, but also on the partial adaptation of the environment of choice to the control goals;
• Compliance with the psychological comfort of the interaction of the decision maker with the controlled SEMS.

It is desirable that all these requirements and principles be implemented jointly in the creation of situational control systems. Their combination depends
on the specific tasks solved by the SEMS group, the state of the environment of choice, the hardware and software of the situational control system, and some other factors. To implement these principles, certain methods of situational control of dynamic objects have been developed [16]. The methods most often used in situational control are system and situational analysis, factor and cross-factor analysis, genetic analysis, the diagnostic method, the expert-analytical method, methods of analogy, morphological analysis and decomposition, simulation methods, game theory, etc. However, the greatest effect and quality of control are achieved when the system of methods is used as a whole, which allows the control object to be seen from all sides and miscalculations to be avoided. The following main tasks can be distinguished in the creation of systems of situational control of a SEMS group:

• Development of a system for monitoring the occurrence of critical situations requiring situational analysis;
• Selection, adaptation and development of methods for collecting, analyzing and synthesizing information about the environment of choice;
• Selection of methods of statistical data analysis;
• Definition and formalization of reference situations for each environment of choice;
• Formation and updating of the database of situations;
• Preparation of instrumentation, including the mathematical apparatus for determining the factors characterizing the development of the situation and the indices for assessing their state;
• Definition and actualization of the factors characterizing the state of the situation, assessment of their comparative importance, and development of indices of the state of the situation;
• Selection and adaptation of methods of forming evaluation systems;
• Development of scenarios for the possible development of situations, including a meaningful description and definition of a list of the most likely scenarios (variants) of situations, the formation of a list of the main factors affecting the development of the situation, the identification of the most likely factors that affect the development of the situation, and the discarding of those factors which cannot have a significant impact on the change in the situation;
• Analysis of variants of the development of situations to identify the main dangers, threats, risks, strengths and prospects in the development of the situation;
• Development of an expert forecast of changes in the factors and indices characterizing the situation, presented in the form of the most likely scenarios of the situation, with an assessment of the stability of situations for the developed alternative scenarios of their development;
• Assessment of the development of the situation in terms of the possibility of achieving the goals of the SEMS group;
• Synthesis of alternative control solutions and control actions to achieve the goals of the SEMS group;
• Development of recommendations for strategic and tactical decision making in the analyzed situation, on the mechanisms of their implementation, on control over the implementation of decisions, on support of the implementation of decisions, and on the analysis of results, including an assessment of the effectiveness of the decisions taken and the effectiveness of their implementation;
• Development of methods for selecting the most effective solutions for the situational control of the SEMS group under conditions of incomplete certainty;
• Development of interfaces between the decision maker and the SEMS group, providing psychologically comfortable interaction conditions.

In the development of interfaces between decision makers and SEMS, one of the main objectives is the establishment of psychologically comfortable conditions of interaction of the DM with the controlled SEMS group. This issue has recently received a lot of attention. In particular, [17] analyzes the ways of assessing the success of the dialogue, as well as the means of registration and analysis of the PDM needed to configure the interfaces of the DM and intelligent robots, and concludes that the psychological aspects of their interaction must be taken into account. In [18] the need is noted to equip robots with tools that would help them to respond to rapid “decision making”, to exclude the choice of a bad decision, and to act in such a way as to maximize the safety of people, thereby increasing human confidence in the robotic system. In [19] an approach to controlling the behavior of robots based on the mechanism of emotions and temperament is proposed. It is demonstrated that these psychological features can be modeled in quite simple ways. The proposed emotional architecture of the robot control system is based on P. V. Simonov's information theory of emotions, and the features of temperament are reduced to a two-parameter model of the “excitation-inhibition” type. A number of experiments with mobile robots are described. These experiments demonstrate a set of different types of robot behavior: melancholic, choleric, sanguine and phlegmatic. All these types were implemented using the so-called temperament regulator, which determines the balance between the values of the excitation and inhibition parameters of the robot control system. The paper also proposes an automatic model of temperament, which makes it possible to describe the behavior of an individual. On the basis of this model, it is shown that in solving some problems of collective behavior it is advisable to have individuals with different behavior in a group, and this behavior is also determined by the individual emotions and temperament of the robot.
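The following Python sketch illustrates, under strong simplifying assumptions, one way such a two-parameter “excitation-inhibition” temperament regulator could be modeled; the numeric profiles and the blending rule are assumptions made for this example and are not taken from [19]:

```python
# Illustrative sketch of a two-parameter "excitation-inhibition" temperament
# regulator. The numeric profiles and the output rule are example assumptions,
# not the model of the cited work.
TEMPERAMENTS = {
    # (excitation, inhibition), both in [0, 1]
    "choleric":    (0.9, 0.2),
    "sanguine":    (0.8, 0.6),
    "phlegmatic":  (0.3, 0.8),
    "melancholic": (0.2, 0.4),
}

def regulator_output(stimulus: float, temperament: str) -> float:
    """Map a stimulus in [0, 1] to a response intensity in [0, 1]."""
    excitation, inhibition = TEMPERAMENTS[temperament]
    # Excitation amplifies the reaction, inhibition damps it.
    response = stimulus * excitation * (1.0 - 0.5 * inhibition)
    return max(0.0, min(1.0, response))

if __name__ == "__main__":
    for name in TEMPERAMENTS:
        print(name, round(regulator_output(stimulus=0.7, temperament=name), 3))
```

Even with such a crude rule, different profiles yield noticeably different response intensities to the same stimulus, which is the property exploited when assembling a group of individuals with different behaviors.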
3.1.3 The Methodology of Situational Control of the SEMS Group

Situational control of the SEMS group uses a method of control of complex technical and organizational systems based on the ideas of the theory of artificial intelligence and on the representation of knowledge about the objects of control, the ways to
Fig. 3.1 Scheme of solving the problem of situational control
control them and the environment of choice at the level of logical-interval, logical-probabilistic and logical–linguistic models, the use of training as the main procedure in the construction of control procedures for current situations, and the use of deductive systems to build multi-step solutions [20]. The solution of the problem of situational control of the SEMS group can be represented as the following sequence of operations (see Fig. 3.1):
• Formalization. Based on the information about the current situation in the current environment of choice, collected by the sensors of the Central Nervous System of the SEMS in the database and knowledge base, a mathematical model (description) of the dynamic control objects is created and supplied to the input of the Analyzer;
• Analysis, that is, evaluation of the received messages and determination of the need for intervention of the control system in the processes occurring in the control objects. If the current situation does not require such intervention, the Analyzer does not send it for further processing. Otherwise, the description of the current situation enters the Classifier;
• Classification of the situation into one or more classes, to which one-step solutions correspond, using the information stored in the Classifier. The Classifier passes this information to the Correlator;
• Definition of the SEMS group transformation rules to be used. All rules of transformation of the SEMS group in different situations are stored in the Correlator. These rules are called logical-transformational rules (LTR) or correlation rules.
The full list of LTR determines the capabilities of the situational control system to influence the control objects in order to solve the task. The Correlator determines the LTR that should be used. If such a rule is unique, it is issued for execution;
• If there are several such rules, the best of them is chosen after processing the preliminary decisions in the Extrapolator, after which the Correlator issues a decision on the impact on the object;
• If the Correlator or Classifier cannot make a decision on the received description of the current situation, the random Selection block is triggered and either one of the influences that do not affect the objects too strongly is selected, or the system refuses any impact on the objects. This indicates that the control system does not have the necessary information about its behavior in this situation.
In solving the problems of situational control, situational models are often built that simulate the processes occurring in the control objects and in the control system. They are constructed on the basis of the following basic operations:
1. Creation of databases and knowledge bases about the environment, the control objects and the control system in the computer memory;
2. Construction of models of the control objects, the environment of choice and the control system;
3. Description of the states of the objects and the environment of choice in the class of semiotic models;
4. Formation of a hierarchical system of generalized descriptions of the states of the control objects and the environment of choice;
5. Classification of states (situations) with respect to possible solutions;
6. Forecasting the consequences of decisions;
7. Training and self-study.
The necessity of operation 1 is determined by the need to include the computer in the control loop at the earliest possible stages of assessment and control search in order to increase the efficiency of the decision maker. The content of operation 3, complementing the second, is that all the necessary models are represented with the help of elements of the language in which the decision maker (PDM) describes the control system and its functioning. Usually the PDM in situational control systems uses semantic representations, for example semantic networks in the form of a graph whose nodes correspond to concepts and objects and whose arcs correspond to relations between objects. The differences between semiotic and formal systems are as follows:
• Semiotic systems have a set of signs, absent in formal systems, possessing, in particular, plans of expression (syntax) and of content (semantics);
• Unlike formal systems, semiotic systems can independently change their syntax and semantics;
• Semiotic systems are open, not closed, as formal systems are.
The main stages of the operation of a system based on situational models include:
• Obtaining a description of the current situation for the analyzed control object and the environment of choice;
• Replenishment of the micro-description of the situation;
• Classification of the situation and identification of the classes of possible solutions according to the evaluation system used;
• Derivation of the permissible values of the estimates (in this case there is a reverse movement along the hierarchical levels of knowledge representation of the situational model);
• Prediction of the impact of the acceptable decisions in the form of final estimates;
• Decision making on the basis of the estimates.
The operation of such systems under conditions of incomplete certainty is based on the analysis of logical-probabilistic and logical–linguistic rules, logical inference, and optimization procedures of mathematical programming in ordinal scales and generalized mathematical programming [20].
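To make the sequence of operations of Fig. 3.1 more concrete, the following is a minimal Python sketch of the Analyzer–Classifier–Correlator–Extrapolator chain. All names used here (class labels, rules, predicted costs) are hypothetical illustrations and not part of the SEMS specification.

```python
import random

# Hypothetical one-step logical-transformational rules (LTR): situation class -> candidate actions
LTR = {
    "obstacle_ahead": ["stop", "turn_left"],
    "target_visible": ["approach"],
}

def analyzer(situation):
    """Decide whether the current situation requires intervention of the control system."""
    return situation.get("requires_intervention", False)

def classifier(situation):
    """Assign the situation to one or more classes, to which one-step solutions correspond."""
    return [c for c in LTR if c in situation["features"]]

def extrapolator(actions, situation):
    """Choose the best of several candidate actions by a simple predicted-cost score."""
    cost = situation.get("predicted_cost", {})          # hypothetical forecast of consequences
    return min(actions, key=lambda a: cost.get(a, 1.0))

def situational_control_step(situation):
    if not analyzer(situation):
        return None                                      # no intervention needed
    classes = classifier(situation)
    candidates = [a for c in classes for a in LTR[c]]    # Correlator: collect applicable LTR
    if not candidates:                                   # random-selection block: mild action or refusal
        return random.choice(["wait", None])
    if len(candidates) == 1:
        return candidates[0]
    return extrapolator(candidates, situation)

# Example current situation (a hypothetical description supplied by the sensors)
situation = {
    "requires_intervention": True,
    "features": {"obstacle_ahead"},
    "predicted_cost": {"stop": 0.7, "turn_left": 0.3},
}
print(situational_control_step(situation))               # -> 'turn_left'
```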
3.2 Situational Control of a Group of Robots Based on SEMS

From the point of view of control information systems, the SEMS group can be considered as a group of "cognitive control information systems embedded in their environment" [15]. Therefore, such systems must have two-way communication with the outside world and with the other robots involved in the group interaction [21]. The main feature of SEMS-based robots in solving complex group control tasks is the ability to intelligently process the information available to them in order to make decisions about preferred actions in a changing situation. Well-known publications on methods of robot intellectualization, for example those using the evolutionary approach [22], self-learning [23] or the recently popular algebraic approach [24–26], insufficiently cover the problem of implementing decisions in groups of intellectualized devices, although many authors have noted the importance of this task. For example, in [15] it is noted that in order to achieve a synergetic effect from the interactions of SEMS-type intelligent systems in group control tasks, a unified methodological basis for providing the desired group behavior of robot teams is necessary. Since each robot has its own appropriate behavior [27], group control of such robots becomes ineffective under centralized control, and more flexible decentralized strategies are required. For example, a team of robots can be regarded as a system of systems (SoS) [28], in which the tasks of control planning come to the fore. Control planning systems (SPC) can be built within the framework of the situational approach [5]. An essential part of the problem is the formalization of the term "situation". The definition of the situation depends on the subject area and the formalization apparatus used [9, 29]. Usually, the full situation is described in the following main aspects: knowledge about the environment of choice, the current structure of the object, the current state of the control system and the technology (strategies) of
control [5]. In addition, the solution of the problem of coordinated motion in the group situational control of robots requires taking into account both the physical limitations of the robots themselves and of the environment of operation, and the dynamic limitations caused by the movements of the robots involved in situational group control [7]. Therefore, the situation can be considered as a time slice of the trajectories of the changing characteristics of the control objects in the environment of choice, described as some multidimensional space. The criteria for selecting this space have changed little in comparison with the requirements for the selection of the elements of the state vector in the classical theory of the state space [30], developed to control dynamic objects that admit a description in the form of differential and difference equations. Namely, the space of states should include all time-varying characteristics of the object that significantly affect the achievement of the control objectives. The procedure for constructing a state space, that is, separating the actual control objects from the environment, is usually taken out of the control process, since there are no constructive methods for selecting the elements of the state vector for real objects. The need to select the best trajectories (scenarios) under conditions of incomplete certainty, as well as the limitations imposed by the practical feasibility of algorithms for processing data on objects, lead to the fact that the domain of determination of the state characteristics is in one way or another discretized on the basis of linear, nonlinear or ordinal scales [31]. This complex decision making problem can be based on the formation of the permitted dynamic space of robot configurations (PDSCR) [32]. Below we consider one of the approaches to the construction of the PDSCR.
3.2.1 The Construction of the Current Dynamic Model of the Environment

The current PDSCR contains two parts: the current dynamic model of the environment and the current dynamic models of the robots' own state. When constructing the current dynamic model of the environment, it is necessary to provide the robots with the ability to understand the language of sensations formed as a result of solving systems of logical equations. Such systems are similar to the human central nervous system and therefore, by analogy, can be called "Central nervous systems of the robots" (CNSR) [33]. In general, the algorithm for constructing the current dynamic model of the environment can be as follows:
• Recording of the current measuring information from the sensors and external sources into the CNSR database;
• Fuzzification of the sensor data, which forms logical variables whose attributes can be, for example, the probability that the measured variable lies in a given quantum;
• Combination of the fuzzified data into groups forming the sense organs of the robot, by analogy with a human, for example: vision in the form of a set Z; hearing in the form of
a set C; smell in the form of a set O; taste in the form of a set U; touch in the form of a set A;
• The allocation in each of these sets of subsets characterizing the properties of the observed or studied object: Zi ⊂ Z, Ci ⊂ C, Oi ⊂ O, Bi ⊂ B, Ai ⊂ A;
• Allocation of qi images in the space surrounding the robot. In the simplest case, this operation is reduced to combining into one set qi those points of space that have the same set of logical variables with the same attributes, provided that the distance to the nearest neighboring point with the same parameters does not exceed some predetermined value. To do this, as shown in [34], the logical expression algebraization procedure can be used;
• Classification of the qi images formed by the CNSR. This is a recognition operation on the qi images generated by the CNSR, which can be described as logical matrices Mi. The assignment of these images qi to the different classes of images q1, q2, q3 (types of objects) described by the reference Boolean matrices ME1, ME2, ME3 [20], for which previously made optimal solutions are known (for example, transportation by one of the robots R1, R2 or R3), can rely on the procedure of searching for binary relations Mi g MEj, where g is a two-place predicate on the analyzed sets, j = 1, 2, 3. In the simplest case, g = 1 if Mi = MEj and g = 0 otherwise (a minimal matching sketch is given after this list);
• Definition of additional qualitative parameters of the selected samples. Additional parameters are obtained in the form of new Boolean variables Fij. They can, for example, be obtained by analyzing the geometric parameters of the images (areas si, dimensions di, gaps vij between images, coordinates xi, yi of the objects, etc.);
• Formation of binary estimates of the selected images. Binary estimates of images can be obtained by logical analysis of the parameters characterizing the images. For this purpose it is necessary to formulate rules for assigning one or another binary estimate to an image; for example, for objects (their images) that should be removed from the room: if the image is very close to the door of the room, has small dimensions (area) and large gaps to the neighboring objects, then this image (object) is the best one for withdrawal from the room.
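The classification step can be illustrated by the following minimal sketch, in which the images qi and the reference images are represented as Boolean matrices and the two-place predicate g is, in the simplest case, exact equality; the matrices themselves are hypothetical.

```python
import numpy as np

def g(M_i: np.ndarray, M_Ej: np.ndarray) -> int:
    """Two-place predicate on logical matrices: 1 if the image matches the reference, else 0."""
    return int(M_i.shape == M_Ej.shape and np.array_equal(M_i, M_Ej))

def classify(M_i, references):
    """Return the indices j of the reference classes q_j whose matrix M_Ej matches M_i."""
    return [j for j, M_Ej in references.items() if g(M_i, M_Ej)]

# Hypothetical reference Boolean matrices for the three object types q1, q2, q3
references = {
    1: np.array([[1, 1], [0, 0]], dtype=bool),
    2: np.array([[1, 0], [1, 0]], dtype=bool),
    3: np.array([[1, 1], [1, 1]], dtype=bool),
}

M_i = np.array([[1, 0], [1, 0]], dtype=bool)   # image formed by the CNSR
print(classify(M_i, references))               # -> [2]: assign to class q2 (e.g. transport by robot R2)
```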
As a result of applying the described algorithm, the following sets can be obtained, for example:
Set of objects of type q1: $Q_1 = \bigcup_n q_{1n}$,  (3.2)
Set of objects of type q2: $Q_2 = \bigcup_m q_{2m}$,  (3.3)
Set of objects of type q3: $Q_3 = \bigcup_k q_{3k}$,  (3.4)
A set that covers all the gaps: $V = \bigcup_{ij} v_{ij}$,  (3.5)
The set of coordinates of objects q1: $X_1 = \bigcup x_{q1}$,  (3.6)  $Y_1 = \bigcup y_{q1}$,  (3.7)
The set of coordinates of objects q2: $X_2 = \bigcup x_{q2}$,  (3.8)  $Y_2 = \bigcup y_{q2}$,  (3.9)
The set of coordinates of objects q3: $X_3 = \bigcup x_{q3}$,  (3.10)  $Y_3 = \bigcup y_{q3}$,  (3.11)
A set that characterizes the space: $S = \bigcup_k l_k h_k$,  (3.12)
where $l_k$ and $h_k$ are the length and width of the k-th band.
3.2.2 The Construction of the Current Dynamic Models of Robots

When constructing the current dynamic models of the robots' own states, taking into account the physical limitations of the robots themselves, it is necessary to provide the CNSR with "sense organs of its own state" and the possibility of exchanging this information with the other participants of the movement. Consequently, it is desirable that the robots have the ability to understand the language of self-perception, like a human [35]. Then, in the solution of the systems of Boolean equations formed on the basis of the language of the robot's sensations, there is a possibility of forming an independent image of its own current condition. The algorithm for constructing the current dynamic model of the own state, given appropriate sensors of self-perception, is in general similar to the previous one. As a result of the application of this algorithm the following can be obtained, for example (see Fig. 3.2):
• A set Sk characterizing the space of the warehouse where the objects are moved;
• A set Sq corresponding to the area of an object;
• A set SR1 corresponding to the area occupied by the first robot;
• A set SR2 corresponding to the area occupied by the second robot;
• A set SR3 corresponding to the area occupied by the third robot;
• The sets of coordinates XSR, YSR of the centers of the areas occupied by the robots;
• The coordinates xd, yd of the center of the doorway and its width hd.
Fig. 3.2 An example of the placement of objects of three sizes in a warehouse; 1, 2, 3—robots
3.2.3 An Example of the Construction of the PDSCR for the Task of Extracting Items from the Premises by Loader Robots

Statement of the problem

Let us consider that the dynamic models of the robots and of the environment of their functioning have been constructed, and therefore the robots have the following information. In a room of rectangular shape of length L and width H there are U items, of which N items of type q1 have length lq1 and width hq1, M items of type q2 have length lq2 and width hq2, and K items of type q3 have length lq3 and width hq3. Figure 3.2 shows the three types of objects q1, q2 and q3 placed in the room, and the three robots 1, 2 and 3 that are required to vacate the room. Thus
$U = N + M + K$.
(3.13)
In this case the following conditions hold:
• The objects are located so tightly to each other that any gap between them has a size vij:
$v_{ij} < \min\{l_{q1}, l_{q2}, l_{q3}, h_{q1}, h_{q2}, h_{q3}\}$.
(3.14)
• The objects are unloaded from the room by three robots R1, R2, R3, having lengths lR1, lR2, lR3 and widths hR1, hR2, hR3, through a doorway of width d:
$d > \max\{l_{R1}, l_{R2}, l_{R3}, h_{R1}, h_{R2}, h_{R3}, l_{q1}, l_{q2}, l_{q3}, h_{q1}, h_{q2}, h_{q3}\}$.  (3.15)
• The area of the room Sk must satisfy:
$S_k > S_q + S_r$,
(3.16)
where:
$S_q = \sum (l_{q1} \cdot h_{q1}) + \sum (l_{q2} \cdot h_{q2}) + \sum (l_{q3} \cdot h_{q3})$,  (3.17)
$S_r = \sum (l_{R1} \cdot h_{R1}) + \sum (l_{R2} \cdot h_{R2}) + \sum (l_{R3} \cdot h_{R3})$.  (3.18)
It is required to free the room from all objects in the minimum time:
$\sum_{i=1}^{U} T(i) \rightarrow \min(n \cdot t)$,  (3.19)
where $T(i) = n \cdot t$ is the time required for the removal of the i-th item from the room, depending on the number of steps n and the duration t of a step of the robot, which is associated with the design of the robots and the ambient conditions. Let us assume t = const.
To select or synthesize an algorithm for solving the problem, it is first of all necessary to obtain an algorithm for constructing the permitted dynamic configuration space.

Problem solution

Let us consider the solution of this problem by constructing permitted dynamic spaces of robot configurations. The procedure for extracting the objects from the room in this statement consists in the step-by-step construction of permitted dynamic configuration spaces that provide the sequential extraction of the items from the room. At each step, the possibility of extracting one object in the minimum time is analyzed, taking into account the necessary shifts of the neighboring objects. The algorithm for the permitted dynamic configuration space that moves the first object from the room to the warehouse, not taking into account the possibility of shifting adjacent items to increase the clearance, will in general be the following:
1. Number all objects in the room i = 1, …, U, for example linearly, starting from the lower left corner. Let the number of performed removals initially be p = 0.
2. For each item i calculate a criterion of the form (a minimal sketch of the resulting selection loop is given after the list of steps):
$J_i = \min\{\, k_1 r_i + k_2 / (\max_j \{\min\{v_{ij}\}\}) \,\}$,
(3.20)
where k1, k2 are preference coefficients; ri is the distance from the center of the doorway to the i-th object; vij is the gap between the i-th object and the adjacent j-th object.
3. Determine the candidate qp for removal from the room, namely the item i with the minimum value of Ji.
4. Determine the coordinates xqp, yqp of the candidate qp.
5. Find the maximum dimension (width or length), hq or lq, of the candidate qp.
6. Select one robot Rm of the three robots R1, R2, R3 corresponding to the candidate qp.
7. Find the maximum dimension (width or length), hRm or lRm, of the selected robot Rm.
8. Calculate the width of the permitted dynamic space of the current robot configuration: $H_{Rm} = x_{qp} + y_{qp} + h_{Rm} \cdot l_{Rm}$.
9. If for the candidate qp the gap $v_{ij} > \max\{h_{Rm}, l_{Rm}\}$, then determine the trajectory of the corresponding robot Rm for removing this object qp from the room and remove it.
10. Increase the number of performed removals, p = p + 1, and compare the resulting number with the total number of objects U.
11. If p ≤ U, then go to step 3, removing from the calculation by formula (3.20) all previously removed items.
12. End.
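A minimal sketch of steps 2–11 follows; the distances, gaps and preference coefficients are hypothetical, the robot-selection rule of step 6 is replaced by a simple "smallest robot" assumption, the trajectory planning of step 9 is omitted, and gaps are treated as static.

```python
def J(i, r, v, k1=1.0, k2=1.0):
    """Criterion (3.20): a short distance r_i to the door and a wide gap make item i a good candidate."""
    return k1 * r[i] + k2 / max(v[i].values())

def extract_all(items, r, v, robot_sizes):
    """Iteratively pick the candidate with minimal J_i and remove it if a robot fits through its gap."""
    removed, remaining = [], set(items)
    while remaining:
        p = min(remaining, key=lambda i: J(i, r, v))      # steps 2-3: candidate q_p
        robot = min(robot_sizes, key=robot_sizes.get)      # step 6 (assumption): smallest robot
        if max(v[p].values()) > robot_sizes[robot]:        # step 9: gap must exceed the robot dimension
            removed.append((p, robot))                     # trajectory planning not modeled here
        remaining.discard(p)                               # steps 10-11: proceed to the next candidate
    return removed

# Hypothetical data: distances r_i to the doorway and gaps v_ij to neighbouring items
items = [1, 2, 3]
r = {1: 2.0, 2: 5.0, 3: 8.0}
v = {1: {2: 1.2}, 2: {1: 1.2, 3: 0.4}, 3: {2: 0.4}}
robot_sizes = {"R1": 0.9, "R2": 1.1, "R3": 1.5}            # max(h_R, l_R) for each robot
print(extract_all(items, r, v, robot_sizes))               # items 1 and 2 can be removed by R1
```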
3.3 Movement Control of the SEMS Group

The problem of control in the group interaction of objects is a global problem, relevant to many spheres of life. Wherever there is a group of living or technical objects that need to work together to do some work or solve some task, this problem arises. In the technical field it is most relevant in robotics. At the same time, optimizing the interaction of a group of robots in the performance of a joint task is becoming increasingly difficult [36]. It requires taking into account the complexity and intelligence of the technical robot control systems, which provide a constant expansion of the scope of such robotic systems (RTS). In complex RTS based on Smart Electromechanical Systems [37], it is necessary to analyze not only the behavior of an individual robot, but also the behavior of the group of robots interacting with it. The complex RTS under consideration exhibits appropriate behavior due to the presence of the Central nervous system in the SEMS [38]. A group of interacting robots can consist of transport robots performing coordinated movements, assembly robots performing joint operations, etc. When creating such groups, specialists are primarily faced with the need to assess the ability of the group to make the right decisions under uncertainty. It is necessary to take into account the static and dynamic features of each robot, as well as the features of decision making in the Central nervous system of the individual robots. For this purpose, computer simulation with identification and evaluation of decisions in
complex dynamic systems under conditions of uncertainty of the environment [2] is usually used. This environment will eventually be filled with robots and all sorts of "smart" systems. Naturally, these robots and artificial intelligence systems will then be required to have "instincts" that allow them to avoid collisions with obstacles and with each other while moving. However, if these instincts are too strong, the robots will be too slow, which will adversely affect the effectiveness of their actions. To eliminate this situation, it is necessary to develop algorithms that constantly strive to find the optimal balance between speed and safety. This will allow the robots to always operate with high efficiency. In addition, the task of controlling a group of robots has additional complexity due to the need to ensure coordination between the robots. In a complex RTS, each robot must satisfy its kinematic and dynamic equations, as well as the existing phase constraints, including the dynamic constraints that ensure the absence of collisions between robots. The need to compare different trajectories of the interacting robots (development scenarios) with each other, as well as the restrictions on the practical feasibility of algorithms for processing data about the objects and the environment, lead to the fact that the domain of determination of the state characteristics is in one way or another discretized. For example, it is translated into a logical form by fuzzification of the data [20]. The data are then analyzed using algebraic logical expressions [24, 34] and optimized by methods of mathematical programming in ordinal scales or generalized mathematical programming [39]. The creation and development of situational control systems requires considerable resources to collect information about the objects and the control environment, their dynamics and the control methods, as well as to systematize this information within the semiotic model. Therefore, it is considered that the method of situational control is advisable to apply only in cases where other methods of formalization lead to a problem of too large (for practical implementation) dimension. It should be noted that the control technologies developed by scientists currently do not guarantee the absolute safety of the movement of robots, which is confirmed by several incidents that occurred during tests. Existing control technologies for cars also do not guarantee this; they only try to minimize the likelihood of a collision. For example, Magnus Egerstedt believes that: "The existing control systems of such cars are very conservative; they will not allow the car to move in the presence of even the slightest danger. All this will lead to the fact that on the roads on which the robot cars will move there will always be congestion and traffic jams, which cannot be eliminated independently even by the most highly intelligent automatic systems" [40].
3.3.1 Setting the Task of Controlling the Movement of a Group of Robots

In solving the problems of control of the group interaction of robots, much attention is paid to the issues of their self-organization and the maintenance of homeostasis within the group: for example, solving the problems of formation structure [41], working out coordinated movements [42], joint search and transportation of objects [43], etc. However, the mechanisms considered in these tasks do not guarantee the completeness of the tools that are necessary to cope with all the tasks that the group may be faced with. There is a need for another level of control, which would serve as an interface between the group and the operator setting the target tasks [3, 4], and which can be attributed to the optimization tasks of situational control [5, 6]. In this case, the operator can be not only a person, but also a computer program that makes decisions.
Consider the task of transferring a group of robots A = {a1, a2, …, an}, located at time t0 at the points S = {s1, s2, …, sn} of the surrounding limited space L3 ⊂ E3 (E3 is the three-dimensional Euclidean space), to the target points F = {f1, f2, …, fn} of this space by the time tf in the minimum time:
$T = t_f - t_0 \rightarrow \min$,
(3.21)
with a minimum probability of collision of the robots:
$P_A \rightarrow \min$.
(3.22)
Usually, when using various mathematical optimization methods, the condition (3.22) is replaced by an inequality of the form:
$\sum_{i,j} m_{ij}(t_k) \le M$,  (3.23)
where M is the maximum admissible number of collisions; i, j are robot numbers from 1 to n (i ≠ j); k is the number of the moment of time from the time interval T; and the value mij(tk) is determined from the Boolean expression: "if at time tk the trajectory ri of the robot ai intersects the trajectory rj of the robot aj, that is $r_i \cap r_j \ne \emptyset$, then $m_{ij}(t_k) = 1$, else $m_{ij}(t_k) = 0$".
The search for a solution to the problem is carried out in an environment of choice which changes over time t, that is, it is dynamic: O(t). It can be split into layers O(tk) with some constant or variable step hk:
$O(t) = \{O(t_0), \ldots, O(t_k), \ldots, O(t_f)\}$.
(3.24)
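A minimal sketch of the collision count on the left-hand side of constraint (3.23), assuming that each robot trajectory is given simply as a list of occupied cell indices, one per time layer tk (a hypothetical discretization):

```python
from itertools import combinations

def m(tk, traj_i, traj_j):
    """m_ij(t_k) = 1 if the trajectories of robots a_i and a_j intersect at layer t_k, else 0."""
    return int(traj_i[tk] == traj_j[tk])

def total_collisions(trajectories):
    """Left-hand side of (3.23): sum of m_ij(t_k) over all robot pairs and all time layers."""
    horizon = len(next(iter(trajectories.values())))
    return sum(
        m(tk, trajectories[i], trajectories[j])
        for i, j in combinations(trajectories, 2)
        for tk in range(horizon)
    )

# Hypothetical cell-index trajectories over three time layers
trajectories = {1: [(0, 0), (0, 1), (0, 2)],
                2: [(1, 0), (0, 1), (1, 2)],
                3: [(2, 0), (2, 1), (2, 2)]}
M = 0                                            # maximum admissible number of collisions
print(total_collisions(trajectories) <= M)       # -> False: robots 1 and 2 meet in cell (0, 1)
```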
Each layer O(tk), k = 0, 1, …, [tf/hk], contains the surrounding space L3, divided into cells eq(tk) with constant or variable steps hx, hy, hz along the axes X, Y, Z. Here q is the cell number, q = 1, 2, …, Q:
$O(t_k) = \{e_1(t_k), e_2(t_k), \ldots, e_Q(t_k)\}$.
(3.25)
Each cell eq(tk) is characterized by the presence or absence of robots ai and of obstacles Bi(tk) in the form of movement-prohibiting signs vi(tk), traffic lights wi(tk), road markings γ(tk), etc.:
$B_i(t_k) = \{v_i(t_k), w_i(t_k), \gamma(t_k), \ldots\}$.
(3.26)
In addition, each cell is characterized by a matrix of interaction of the robot with the environment, G(tk) = {G1(tk), G2(tk), …, Gn(tk)}, describing the influence of the cell environment on the dynamic state of the robot, and by a collection of cell-specific traffic rules of the if–then type. The complexity of the problem, which requires the use of methods of situational control, is that the parameters and functions characterizing the cells eq(tk) can be deterministic, stochastic, or not fully defined.
An example of such a task is the task of driving a group of robot cars through an intersection. In this case, the surrounding space L3 of dimensions Lx, Ly, Lz along the X, Y, Z axes is allocated in the vicinity of the intersection. At the beginning of control, t0, it is divided into cells eq(t0) with constant steps hx, hy, hz along the axes X, Y, Z. The points S = {s1, s2, …, sn} of the surrounding space L3 and the corresponding cells contain the robot cars A = {a1, a2, …, an}. Each of them is characterized by its speed, its acceleration and its target point among F = {f1, f2, …, fn} in this space. The cars need to arrive by the time tf, in the minimum time T. Moreover, the number of possible collisions of the robot cars must satisfy the inequality (3.23). The cell sizes (steps hx, hy, hz) are selected to be larger than the dimensions of the largest robot car. Each cell eq(tk) is characterized by the presence or absence of robot cars ai and of obstacles Bi(tk). In addition, each cell is characterized by the interaction of the robot car with the environment in the form of the matrices G(tk) = {G1(tk), G2(tk), …, Gn(tk)}, describing the influence of the cell environment (road surface, humidity, temperature, etc.) on the dynamic state of the robot; in the linear setting this is the transfer function of the vehicle with respect to the disturbance. The set of cells is also characterized by the rules of movement Rm(tk) through the intersection, of the if–then type. These rules are determined by the type of the crossroad. For example, when passing through the intersection:
If: "there are no traffic lights; there are no additional signs; each street has one lane; it is necessary to turn to the right; in front of the intersection there is a robot car; it moves at a speed greater than that of the controlled car",
Then: "the controlled robot car continues to move towards the intersection (moves to the next cell) without braking".
The environment of choice O(t), containing the cells and the robot cars, changes over time t, that is, it is dynamic. It can be broken into layers O(tk) with some constant or variable step hk, depending on the dynamic properties of the robot cars and the perturbing properties of the cell environment. Then the optimization problem (3.21) with the constraints (3.23) and other logical, logical-probabilistic and logical–linguistic constraints can be solved sequentially for each layer of the environment of choice O(t). However, this approach does not guarantee the minimization of the travel time through the intersection for the whole group of cars A = {a1, a2, …, an}, since the environment of choice at each subsequent step depends on the decisions made in the previous steps and can change over time.
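The example rule above can be encoded over the cell description eq(tk) as in the following minimal sketch; the field names of the cell record and of the controlled robot car are hypothetical.

```python
def may_enter_next_cell(cell, controlled, leading_car):
    """Hypothetical encoding of the example if-then rule for a right turn at an unregulated intersection."""
    if (not cell["traffic_lights"]
            and not cell["extra_signs"]
            and cell["lanes_per_street"] == 1
            and controlled["maneuver"] == "turn_right"
            and leading_car is not None
            and leading_car["speed"] > controlled["speed"]):
        return True      # continue towards the intersection (move to the next cell) without braking
    return False         # otherwise the decision is left to other rules of R_m(t_k)

cell = {"traffic_lights": False, "extra_signs": False, "lanes_per_street": 1}
controlled = {"speed": 3.0, "maneuver": "turn_right"}
leading_car = {"speed": 4.5}
print(may_enter_next_cell(cell, controlled, leading_car))   # -> True
```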
3.3.2 Principles of Situational Control

In the considered formulation of the problem, the following structural approaches to the organization of situational control of a group of robots are possible: centralized control of the robots ai (i = 1, 2, …, n) with an operator K (Fig. 3.3); decentralized control of the robots ai without singling out a leader robot (Fig. 3.4); decentralized control of the robots ai with the allocation of a leader robot aL ∈ A (Fig. 3.5); combined control of the robots ai with the operator K without singling out a leader robot aL (Fig. 3.6); and combined control with the allocation of a leader robot aL (Fig. 3.7).
Centralized control with the operator requires the following functions [3]:
• Interface between the team of robots and the operator;
• Creation of a database and a rule base on the environment and the robots;
• Decomposition of the task into subtasks;
• Building a joint action plan for the group of robots;
• Distribution of areas of responsibility between the robots.
Fig. 3.3 Centralized control
Fig. 3.4 Decentralized control
Fig. 3.5 Decentralized control with a leader
Fig. 3.6 Combined control with coordinator
Fig. 3.7 Combined control with coordinator and leader
Most of these functions are typical of any other control structure as well. In order for the structure in question to have the property of completeness (that is, if the goal is achievable, the sequence of actions leading to the goal will be found), access of all robots of the group to the database and knowledge of the operator is required. This imposes certain restrictions on the access time of each member of the group to the shared database, and it requires the calculation and dynamic redistribution of priorities for the sequence of control of the members of the robot group.
With decentralized control without a leader, each robot carries out logical inference on its own [44]. This makes it difficult or impossible to access the databases and knowledge of the other team members, which leads to a violation of the coordination of the movements of the group members. In addition, conflicts become possible when each member of the group makes decisions independently.
The equivalence of roles in the group under decentralized control is eliminated when a leader is chosen [3]. In this case, the leader coordinates the movements of the group members and eliminates conflicts, while the rest of the robots play the supporting role of remote databases and knowledge bases and of query relays. At the same time, the effectiveness of the movement control of a group member will depend heavily on the degree of its remoteness from the leader, and that of the entire group on the number of group members.
Under combined control with the operator and without singling out a leader robot, the group members can share information about their own condition and their planned movements. This allows them to analyze commands or movement plans coming from the operator and to reject unacceptable ones that lead to conflicts with neighbors. However, the equality of the group members requires operator intervention when setting priorities to resolve conflicts, which can slow down decision making in the group.
Under combined control with the operator and with the allocation of a leader robot, the group members can also exchange information about their own condition and their planned movements, but the presence of a leader makes it possible to eliminate contradictions and set priorities within the group with the help of the leader. This increases the safety of group control, but the effectiveness of conflict control comes to depend on the degree of remoteness of the conflicting robots from the leader.
With any structural approach to the organization of situational control of a group of robots, the robots must jointly collect information about the environmental parameters, the current state of the individual robots of the group, the planned actions of the members of the group of robots, etc. After collecting the information, in the general case the environment of choice O(tk) can be characterized at some point in time tk by the following tuple:
$O(t_k) = \langle A(t_k), S(t_k), F(t_k), \sum_{i,j} m_{ij}(t_k), M, B_i(t_k), G(t_k), R_m(t_k) \rangle$.  (3.27)
Planning of situational control of a group of robots consists of:
• Division of the group task into subtasks according to expression (3.1);
• Distribution of the solutions of the subtasks among the robots of the group so that the group problem is solved in the minimum time, taking into account the existing restrictions, including those on information interaction (a simple illustrative sketch follows this list).
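The distribution step can be illustrated by a simple greedy assignment that balances the estimated execution times of the subtasks across the robots; both the time estimates and the balancing rule are assumptions of this sketch, not a prescription of the book.

```python
def distribute(subtask_times, robots):
    """Greedy makespan heuristic: give each subtask to the currently least-loaded robot."""
    load = {r: 0.0 for r in robots}
    plan = {r: [] for r in robots}
    for task, t in sorted(subtask_times.items(), key=lambda kv: -kv[1]):  # longest tasks first
        r = min(load, key=load.get)
        plan[r].append(task)
        load[r] += t
    return plan, max(load.values())        # assignment and resulting group completion time

# Hypothetical subtasks with estimated execution times
subtask_times = {"scan_area": 4.0, "move_object": 6.0, "open_door": 1.5, "report": 0.5}
plan, makespan = distribute(subtask_times, ["a1", "a2", "a3"])
print(plan, makespan)
```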
3.4 Decision Making by an Autonomous Robot Based on the Matrix Solution of Systems of Logical Equations Describing the Environment of Choice for Situational Control

One of the main features of the development of modern industrial society is the mass use of robots and other robotic machines, both in production processes and in everyday life. The functioning of autonomous intelligent robots is based on information from sensor systems regarding the environment and the state of the robot itself. Without this information, their ACS will not be able to determine the goals of operation and achieve these goals [45]. However, in order for these robots to be able, without human intervention, to formulate tasks and successfully perform them, they must have the ability to understand the language of sensations, i.e. to have feelings such as "friend–foe", "dangerous–safe", "beloved–unloved", "pleasant–unpleasant", etc., formed as a result of solving systems of logical equations [26]. Having the ability to form an image database based on the language of sensations in the CNSR [38], the structure of which is described in Chap. 2, the robot obtains the opportunity to make decisions about appropriate behavior independently [2]. At the same time, the formation of the behavior of an autonomous robot is based on pragmatic information obtained by the successive transformation of the measuring information from the robot sensors into syntactic, then semantic, and finally pragmatic information [46]. As a result, the top-level system, i.e. the system of situational control, operating with semantic [47] and pragmatic [48] information, forms the behavior of the robot in the group, i.e. determines the sequence of its actions that are necessary to achieve the goal in an ever-changing environment.
3.4.1 Processing of Information in the CNSR

The processing of sensory information in the CNSR contains the following main stages:
1. Quantization of the surrounding space in the field of view of the sensor system of the robot, with the assignment to the resulting pixels of names in the form of pixel numbers. To do this, the origin of the coordinates of the Euclidean space E3 is placed at the center of gravity of the robot. Then the boundaries of the sensitivity zones of the sensor system are determined: −X, +X, −Y, +Y, −Z, +Z. The result is a three-dimensional space C ⊂ E3. This space is divided into quanta along the X-axis with a step hx, along the Y-axis with a step hy and along the Z-axis with a step hz. As a result, the entire space C will be divided into a set P of 2I·2J·2K pixels pijk ∈ P, where 2I is the number of partitions of the X-axis, 2J is the number of partitions of the Y-axis and 2K is the number of partitions of the Z-axis. Each pixel pijk will correspond to the information measured by the CNSR sensory system about the senses of sight, hearing, smell, taste, touch, etc.
2. Fuzzification of the SEMS sensory information for each pixel of the surrounding space, and the formation in the memory of the CNSR of a map of the surrounding space in the form of pixels with their coordinates and fuzzified data. To do this, the sensors can be combined into groups that form the following robot senses, by analogy with a human: vision in the form of a set V; hearing in the form of a set R; smell in the form of a set S; taste in the form of a set G; touch in the form of a set T. In each of the introduced sets it is possible to allocate subsets characterizing the properties of the observed or studied object:
$V_i \subset V, \; R_i \subset R, \; S_i \subset S, \; G_i \subset G, \; T_i \subset T$.
(3.28)
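Stages 1 and 2 can be illustrated by the following minimal sketch: a point of the space C is assigned to a pixel pijk, and a raw brightness reading is fuzzified into one of the interval-labeled logical variables; the quantization steps, bounds and interval boundaries are hypothetical.

```python
def pixel_index(x, y, z, h=(0.5, 0.5, 0.5), bounds=(-10.0, -10.0, -10.0)):
    """Stage 1: quantization, mapping a point of C to the indices (i, j, k) of pixel p_ijk."""
    return tuple(int((c - b) // step) for c, b, step in zip((x, y, z), bounds, h))

# Stage 2: fuzzification of brightness into logical variables with interval attributes (hypothetical intervals)
BRIGHTNESS_INTERVALS = {
    "v1_very_weak": (0, 50),
    "v2_weak": (50, 100),
    "v3_normal": (100, 170),
    "v4_strong": (170, 220),
    "v5_very_strong": (220, 256),
}

def fuzzify_brightness(value):
    """Return the logical variables that are true for the measured value, with their interval attributes."""
    return {name: lo <= value < hi for name, (lo, hi) in BRIGHTNESS_INTERVALS.items()}

print(pixel_index(1.2, -0.3, 4.8))   # -> (22, 19, 29)
print(fuzzify_brightness(185))       # only v4_strong is True
```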
The set of such subsets depends on the set of sensors that form the senses of a particular robot. For example, for vision the following subsets can be introduced: V1 is the brightness of the image; V2 is the color of the image; V3 is the flashing frequency; V4 is the rate of change of brightness; V5 is the rate of change of color, etc. For hearing: R1 is the volume; R2 is the tone; R3 is the interval; R4 is the rate of approach; R5 is the rate of removal; R6 is the direction. For smell: S1 is the type of smell; S2 is the intensity of the smell; S3 is the rate of increase or decrease of the smell; S4 is the rate of change of the type of smell; S5 is the interval of the smell, etc. For taste: G1 is the type of taste; G2 is the strength of the taste; G3 is the rate of change of the taste, etc. For touch: T1 is the surface evenness; T2 is the surface dryness; T3 is the surface temperature, etc. From the signals of the robot sensors, for each pixel pijk the logical data (vijk; rijk; sijk; gijk; tijk) are extracted by fuzzification. For example, for V1 we obtain the following logical variables by fuzzification: v11k is "very weak brightness", v12k is "weak brightness", v13k is "normal brightness", v14k is "strong brightness" and v15k is "very strong brightness". This means that every logical variable is associated with its inherent attribute. In the simplest case, this attribute is an interval; in
more complex cases the attributes can be probabilities $P\{v_{ijk} = 1\}$ or membership functions μ(vijk). Consequently, the data from the robot sensors are stored in memory in the form of logical-interval, logical-probabilistic or logical–linguistic variables [34].
3. Formation of images in the map of the surrounding space for each sense organ of the robot. The formation of images is carried out for the map of the surrounding space of each sense organ separately: in particular, for vision this will be the map of the surrounding space CV ⊂ C, for hearing CR ⊂ C, for smell CS ⊂ C, for taste CG ⊂ C and for touch CT ⊂ C. In each of these maps, adjacent pixels with equal values of the logical variables and similar values of their attributes can be combined. Then in the spaces of the senses CV, CR, CS, CG and CT we obtain sets of images ImV, ImR, ImS, ImG, ImT with certain contours. Since the attributes of the logical variables can be intervals, probabilities, membership functions, etc., for each type of attribute it is necessary to introduce a measure of proximity δΔ, δP, δμ. After the operation of combining pixels into the sets of images ImV, ImR, ImS, ImG, ImT in each space of the senses, the contours of the images can be drawn and a name assigned to each contour. As a result, there will be five maps KV, KR, KS, KG and KT with sets of image contours ImV, ImR, ImS, ImG, ImT.
4. Formation of imagers by combining images from different senses and assigning names to the imagers in the form of English words. Usually, the formation of imagers from images consists in operations on images such as the intersection or union of ordered sets and the assignment of the result to a particular reference imager stored in the database. If no suitable imager is found in the database, this combination of images is given the name of a new imager, which is placed in the database for temporary storage. If this new imager is encountered repeatedly during the operation of the robot, it becomes a reference imager and is assigned a permanent new name. To assign a combination of images to a particular standard, a measure of the proximity of ordered sets must be introduced. Among the most well-known proximity measures, the binary functional relations (2.20–2.23) can be distinguished. The use of these binary functional relations makes it possible to rank combinations of images by their proximity to the reference image and at the same time to introduce a numerical estimate of this proximity [34].
The process of formation of imagers based on the information from the senses of the robot is carried out in the following sequence. Initially, each of the maps KV, KR, KS, KG and KT is searched for the presence of images that are close to the standard imagers of the database. The found images are given the names of the standards. They are recorded in the CNSR database of observed data together with their coordinates and are excluded from the corresponding maps. At the second stage, the intersections of the sets from the maps KV, KR, KS, KG and KT are searched for the presence of images close to the standard imagers in the databases. At the beginning, successive pairs of maps are superimposed on each other with the implementation of the operation of intersection of sets: (KV ∩
KR; KV ∩ KS; KV ∩ KG; KV ∩ KT; KR ∩ KS; KR ∩ KG; KR ∩ KT; KS ∩ KG; KS ∩ KT; KG ∩ KT). In each intersection, the presence of images close to the standard imagers of the database is searched for. The found intersections of images are given the names of the standards. They are recorded in the CNSR database of observed data together with their coordinates and are excluded from the corresponding intersections of the maps. Therefore, each subsequent intersection uses the maps corrected by the results of these removals. Then the maps are sequentially superimposed on each other in triples with the operation of intersection of sets: (KV ∩ KR ∩ KS; KV ∩ KR ∩ KG; KV ∩ KR ∩ KT; KR ∩ KS ∩ KG; KR ∩ KS ∩ KT; KS ∩ KG ∩ KT). In each intersection, the presence of images close to the standard imagers of the database is again searched for; the found intersections are given the names of the standards, recorded in the CNSR database of observed data together with their coordinates, and excluded from the corresponding intersections of the maps, so that each subsequent intersection uses the corrected maps. Then four maps at a time are superimposed on each other with the operation of intersection of sets: (KV ∩ KR ∩ KS ∩ KG; KV ∩ KR ∩ KS ∩ KT; KR ∩ KS ∩ KG ∩ KT), and the same search, naming, recording and exclusion are performed. Next, all five adjusted maps are superimposed on each other (KV ∩ KR ∩ KS ∩ KG ∩ KT); in this intersection, the presence of images close to the standard imagers of the database is sought, the found intersections are given the names of the standards, recorded in the CNSR database of observed data together with their coordinates, and excluded from the corresponding intersection of the maps.
At the third stage, the unions of the sets of the maps KV, KR, KS, KG and KT are searched for the presence of images close to the standard imagers in the databases. In a similar way, at the beginning the operation of union of two sets is carried out: (KV ∪ KR; KV ∪ KS; KV ∪ KG; KV ∪ KT; KR ∪ KS; KR ∪ KG; KR ∪ KT; KS ∪ KG; KS ∪ KT; KG ∪ KT), then of three (KV ∪ KR ∪ KS; KV ∪ KR ∪ KG; KV ∪ KR ∪ KT; KR ∪ KS ∪ KG; KR ∪ KS ∪ KT; KS ∪ KG ∪ KT), then of four (KV ∪ KR ∪ KS ∪ KG; KV ∪ KR ∪ KS ∪ KT; KR ∪ KS ∪ KG ∪ KT) and of five (KV ∪ KR ∪ KS ∪ KG ∪ KT). The found unions of images are assigned the names of the standards. They are recorded in the CNSR database of observed data together with their coordinates and are excluded from the corresponding unions of the maps. Therefore, each subsequent union uses the maps corrected by the results of these deletions.
At the fourth stage, the symmetric differences of the sets of the maps KV, KR, KS, KG and KT are searched for the presence of images close to the standard imagers in the databases. In this case, similarly
to the previous stages, the operation of the symmetric difference is first carried out for two sets (KV Δ KR; KV Δ KS; KV Δ KG; KV Δ KT; KR Δ KS; KR Δ KG; KR Δ KT; KS Δ KG; KS Δ KT; KG Δ KT), then for three (KV Δ KR Δ KS; KV Δ KR Δ KG; KV Δ KR Δ KT; KR Δ KS Δ KG; KR Δ KS Δ KT; KS Δ KG Δ KT), then for four (KV Δ KR Δ KS Δ KG; KV Δ KR Δ KS Δ KT; KR Δ KS Δ KG Δ KT) and for five (KV Δ KR Δ KS Δ KG Δ KT). The found symmetric differences of images are assigned the names of the standards. They are recorded in the CNSR database of observed data together with their coordinates and are excluded from the corresponding combinations of the maps. Therefore, each subsequent combination uses the maps corrected by the results of these deletions. If any intersections, unions or symmetric differences remain after these corrections, they are given new names in the form of new words of English letters. They are also recorded with their coordinates in the CNSR database of observed data, and a corresponding message is transmitted to the community of robots to legitimize the new reference word and the image corresponding to it.
Thus, semantic data about the space surrounding the robot are formed in the CNSR database, on the basis of which the robot makes behavioral decisions [2] using the algorithms of the robot behavioral model stored in the knowledge base. These algorithms are recorded in the knowledge base of the robot at the stage of its creation, based on its purpose; therefore, such algorithms will be called genetic. However, after the formation of the database of semantic data about the surrounding space of the robot, it may turn out that two or more imagers partially or completely occupy the same place of space. Therefore, it is necessary to adjust the semantic database in order to eliminate the detected collisions. This adjustment is closely related to the formation, from the semantic data, of pragmatic data relevant to the problem being solved by the robot at the moment. For example, if a robot car is faced with the problem of accident-free passage of an intersection, then it requires the formation of images for the map of the surrounding space only from the sense organs of vision CV ⊂ C and hearing CR ⊂ C. Accordingly, the process of obtaining pragmatic information to solve the problem is sharply reduced.
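A minimal sketch of the map-combination stages is given below, representing each sense map KV, KR, … simply as a set of (image name, pixel region) pairs and each region as a Python set of pixel indices; the reference imagers and region contents are hypothetical.

```python
from itertools import combinations

# Hypothetical sense maps: each maps an image contour name to the set of pixels it covers
K = {
    "V": {"bright_blob": frozenset({(1, 1), (1, 2)})},
    "R": {"noise_source": frozenset({(1, 2), (1, 3)})},
    "T": {"warm_patch": frozenset({(5, 5)})},
}

# Hypothetical reference imagers from the CNSR database, as sets of pixels
REFERENCE_IMAGERS = {"door_handle": frozenset({(1, 2)})}

def pairwise_intersections(maps):
    """Second stage (pairs only): intersect the regions of every pair of sense maps."""
    found = {}
    for (s1, m1), (s2, m2) in combinations(maps.items(), 2):
        for n1, r1 in m1.items():
            for n2, r2 in m2.items():
                region = r1 & r2
                if region:
                    found[f"{s1}:{n1} & {s2}:{n2}"] = region
    return found

def match_references(regions):
    """Assign each combined region to a reference imager if it coincides with it."""
    return {name: ref for name, region in regions.items()
            for ref, ref_region in REFERENCE_IMAGERS.items() if region == ref_region}

regions = pairwise_intersections(K)
print(regions)                      # the vision and hearing images intersect in pixel (1, 2)
print(match_references(regions))    # the intersection is recognized as the reference imager 'door_handle'
```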
3.4.2 Genetic Robot Algorithms

In general, five large interacting modules of robot behavior algorithms can be distinguished.
Module 1: Saving lives. This module contains algorithms that, based on the analysis of the imagers in the surrounding space and of the robot's own state, form the movements of the robot in such a way as to avoid collisions and entry into hazardous areas of the environment, as well as algorithms for moving to favorable zones under dynamic changes in the environment.
In addition, there may be algorithms for influencing the environment in order to change it so as to make the robot's own functioning most comfortable.
Module 2: Self-improvement. This module contains algorithms for adapting the robot's own subsystems to changes in the environment, as well as algorithms for optimizing the subsystems and the robot as a whole for the purposes of performing certain tasks coming from higher-level systems (coordinators) or from the systems of planning its own behavior.
Module 3: Curiosity. This module contains algorithms for finding new imagers in the environment by improving the sensor system of the Central nervous system, adding new sensors and improving the methods of processing sensory, syntactic and semantic data.
Module 4: Training. This module contains search algorithms for obtaining new knowledge about the properties of the environment by trial and error, as well as for the synthesis of new behavioral algorithms for the previous modules based on the analysis of the options for responding to changes in the knowledge about the environment.
Module 5: Creating one's own kind (continuation of the species). This module contains algorithms for the synthesis of similar robots with the search for and use of appropriate objects in the environment.
3.4.3 Methods of Decision Making Based on Robot Reflections in the Environment of Choice

A robot's decision about its own behavior can be reflexive or conscious. In reflexive decision making, the robot, based on the semantic data, selects from the modules of genetic algorithms described above the behavioral algorithm with the highest priority. The priorities are set at the design stage by solving an optimization problem using mathematical programming methods [49]. For example, if there is an object in the environment that is dangerous and moves in the direction of the robot, then the robot proceeds to implement the algorithm of leaving the trajectory of this object, since the preservation of life has the highest priority for it.
When making a conscious decision, the robot analyzes the coordinator's goal of functioning, as well as the behavior and intentions of the neighboring robots, selects from the semantic data on the environment the pragmatic data related to the goal of functioning, and then selects from the genetic algorithms those that lead to the goal of functioning most optimally. In this case, different approaches to decision making can be used.
In deductive decision making, the process of thinking in the CNSR begins at the global level and then descends to the local level. The technical analogue of this type of thinking can be the optimization process, when first, on the basis of the available information, the best solution is sought out of all possible ones, and then, by checking all the restrictions on the basis of the available information, the solution is corrected.
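Reflexive decision making can be illustrated by the following sketch: each genetic module exposes candidate behaviors with design-time priorities, and the robot simply triggers the applicable behavior with the highest priority; the priorities and trigger conditions are hypothetical.

```python
# Hypothetical behaviors: (priority set at the design stage, trigger condition, action)
BEHAVIORS = [
    (100, lambda s: s["dangerous_object_approaching"], "leave_object_trajectory"),   # Module 1: saving lives
    (50,  lambda s: s["battery_low"],                  "return_to_charger"),         # Module 2: self-improvement
    (10,  lambda s: s["unknown_imager_detected"],      "inspect_object"),            # Module 3: curiosity
]

def reflexive_decision(semantic_state):
    """Select the applicable genetic behavior with the highest priority."""
    applicable = [(p, a) for p, cond, a in BEHAVIORS if cond(semantic_state)]
    return max(applicable)[1] if applicable else "continue_current_plan"

state = {"dangerous_object_approaching": True, "battery_low": True, "unknown_imager_detected": False}
print(reflexive_decision(state))   # -> 'leave_object_trajectory'
```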
The choice of the optimal solution from all solutions ui can be carried out in different ways. The simplest case is to use methods of mathematical programming [49], when it is required to compute an n-dimensional vector U that optimizes (turns into a minimum or a maximum, depending on the substantive problem definition) the criterion of the quality of the solutions f0(U) subject to the constraints $f_j(U) \le c_j$, j ∈ 1, 2, …, r; u ∈ W, where fj(U) are known scalar functions, cj are given numbers and W is a predetermined set in n-dimensional space. Optimization options in which the constraints describing the environment of choice are set in the form of logical-probabilistic, logical–linguistic and logical-interval relations [34] are discussed in detail in paragraph 2.4.3 of Chap. 2.
After calculating the quality criteria of all possible solutions with a logical-probabilistic, logical–linguistic or logical-interval description of the uncertainties, all the solutions found are ranked. Then the solutions are checked for the feasibility of the other constraints describing the robot's own state, starting with the one having the highest quality criterion. In this case, the first of the tested solutions satisfying these constraints is considered optimal.
In inductive decision making, the process of thinking in the CNSR begins with the analysis of individual decisions and then proceeds to finding a common, global conclusion. The technical analogue of this type of thinking can be the optimization process, when first, based on the available information, all solutions are checked for the feasibility of the constraints describing the robot's own state, and then the best solution is sought out of all the solutions possible under the constraints.
According to Peirce, cognitive activity in the central nervous system is the interaction of abduction, induction and deduction [50]. In this case, abduction carries out the adoption of plausible hypotheses by explaining the facts, induction realizes the testing of hypotheses, and deduction derives the consequences from the accepted hypotheses. The technical analogue of this type of thinking can be the process of finding the optimal solution by analogy, when from all possible solutions those that are closest to the existing solutions stored in the database of the CNSR and that gave good results in the past are first selected by methods of pattern recognition [20]. Then deductive and/or inductive decision making methods based on the selected quality criteria can be used to find the best one.
Comparing the described methods of decision making, it can be concluded that the abduction method is the fastest, by analogy with intuition, but its reliability depends on the completeness of the database of good decisions from past experience, i.e. it strongly depends on the time of operation of such robots in similar environmental conditions. The deductive method is faster than the inductive method when there is a large number of constraints, since it does not require constraint checking for all solutions. With complex quality criteria and a small number of restrictions, the inductive method can give a faster result, as it rejects, before the search by complex quality criteria, the solutions that are unacceptable by the restrictions. However, for intelligent systems, as a rule, it is not possible to form a scalar quality criterion.
Then the choice of the optimal solution from all solutions ui can be carried out by methods of mathematical programming in ordinal scales, generalized mathematical programming or multistep generalized mathematical programming
[34, 39]. In this case, the problem of finding the optimal solution is the problem of choosing the "best" (in the sense of a binary relation q0) among the permissible solutions, i.e. the one as close as possible to the reference solution uэ. It is required to find a solution u0 satisfying the binary relation (3.29) subject to the binary relations (3.30) and (3.31) that describe the constraints:
$u_0 \, q_0 \, u_э$,
(3.29)
$u_э \, q_j \, c_j$,
(3.30)
$u_0 \, q_j \, c_j$,
(3.31)
where j = 1, 2, …, n and (uэ, u0) ∈ W. In the simplest case, the binary relations q0 and qj can be the numbers 0 and 1; then the optimal solution corresponds to q0 = 1 and qj = 1. Naturally, the existence of such a solution is unlikely. Therefore, when determining the optimal solution, q0 and, accordingly, qj should be expressed as logical equations in the Zhegalkin algebra [51] reduced to the matrix form [52]:
$A_0 U = Q_0$,
(3.32)
$A_j C = Q_j$,
(3.33)
where A0 and Aj are identification strings containing 0 and 1 in a given order, U is the vector of logical variables characterizing the behavior of the autonomous robot, C is the vector of logical variables characterizing the environment, Q0 is the vector of logical variables characterizing the proximity of the robot's behavior to the standard, and Qj is the vector of logical variables characterizing the limitations of the environment. The dimensions of A0 and U, as well as of Aj and C, coincide. The attributes of the logical variables in Eqs. (3.32) and (3.33) can be probabilities, membership functions and score estimates set by the decision maker.
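In the Zhegalkin algebra, conjunction and exclusive OR play the role of multiplication and addition modulo 2, so the matrix relations (3.32)–(3.33) can be evaluated as products over GF(2). The identification rows, variable vectors and reference vectors in the following sketch are hypothetical.

```python
import numpy as np

def gf2_product(A, x):
    """Evaluate a matrix of Zhegalkin (AND/XOR) equations: (A @ x) mod 2 over GF(2)."""
    return (np.asarray(A, dtype=int) @ np.asarray(x, dtype=int)) % 2

# Hypothetical identification rows A0, Aj and logical vectors U (robot behavior), C (environment)
A0 = [[1, 0, 1, 1],
      [0, 1, 1, 0]]
Aj = [[1, 1, 0, 0]]
U = [1, 0, 1, 0]          # candidate behavior of the autonomous robot
C = [1, 1, 0, 1]          # current logical description of the environment

Q0 = gf2_product(A0, U)   # proximity of the behavior to the standard, Eq. (3.32)
Qj = gf2_product(Aj, C)   # environmental constraint vector, Eq. (3.33)
print(Q0, Qj)             # -> [0 1] [0]

# The candidate is accepted if the computed vectors coincide with the required reference values
print(np.array_equal(Q0, [0, 1]) and np.array_equal(Qj, [0]))
```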
3.5 Using Influence Diagrams in Group Control of SEMS

Modern robotic systems can be built on the basis of intelligent electromechanical systems [23, 53], whose central nervous system [2, 25, 33] provides them with reasonable behavior. When controlling the behavior of a group of interacting SEMS, it is necessary, first of all, to assess their ability to make correct decisions in an uncertain environment, taking into account the static and dynamic characteristics of each SEMS and of the choice environment, as well as the features of decision making in the CNS of the individual SEMS. In addition, the task of controlling a
group of SEMS is further complicated by the need to coordinate the interactions between them. The quality of control when solving a joint group task depends on the type of group behavior control of the SEMS and on the decision making method. Therefore, when choosing the optimal SEMS group control system for each specific group task, all combinations of control types and decision making methods have to be analyzed. The latter tasks belong to the tasks of situational control [1, 5, 54]. With a structural approach to the organization of situational control of the SEMS group, it is necessary to collect, for all SEMS together, information on the environmental parameters, the current state of the individual SEMS in the group, the actions planned by the members of the SEMS group, etc. After collecting the information, in the general case, a model O(t_k) of the choice environment is created at time t_k (control step). The planning of situational control of a group of SEMS when dividing a group task into subtasks is then performed [54, 55] similarly to expression (3.1): the group solutions are distributed among the SEMS by subtasks in such a way that the solution of the group problem is optimal, for example, obtained in the shortest possible time subject to the existing constraints, including those on information interaction. The complexity of this task, which requires situational control methods, lies in the fact that the parameters and functions characterizing the choice environment can be deterministic, stochastic, or not completely defined. When solving such problems, influence diagrams can be used.
3.5.1 Options for Using Influence Diagrams When Making a Decision

In fact, Influence Diagrams (ID) are Bayesian Belief Networks (BBN) extended by the concepts of variants V, decisions D, and utility (usefulness) U. The decision vertices, or rather the instructions contained in them, determine the temporal ordering of the network [56]. Network fragments for the influence diagram are shown in Fig. 3.9. To search for decisions that are best in some sense, the "utility vertices" of the influence diagram are associated with the state of the network. Each utility (usefulness) vertex contains a utility function that links each configuration of the states of its parents with a utility value. Decision making proceeds from the likelihood of the network configurations. Therefore, the expected utility of each alternative can be calculated and the alternative with the highest expected utility chosen. This is the principle of maximum expected utility. An ID may contain several utility vertices; in that case, the overall utility function is the sum of all local utility functions. The decision making process using such an ID is carried out in the following order.
• After observing the values of the variables that are the parents of the first decision vertex, the maximum utility over the alternatives is determined;
• The expert calculates these utilities under the assumption that all future decisions are optimal, using all the information available at the time of each decision.
The complexity of constructing and investigating an ID is largely determined not by the number of chance vertices but by the complexity of their relationships, both among themselves and, especially, with the decision and utility vertices. Currently, there are a number of software implementations of expert system shells based on BBN that can operate not only with discrete but also with continuous random variables. An example of such software is «Hugin» [57]. However, when using a BBN containing both continuous and discrete variables, there are a number of limitations: discrete variables cannot have continuous parents; continuous variables must have a normal distribution law; and the distribution of a continuous variable with discrete and continuous parents is a normal distribution. Logical inference in a BBN with continuous and discrete states is the propagation of the probabilities and of the parameters of the Gaussian distribution laws throughout the network, depending on the evidence obtained. The inference process is based on fairly complex mathematical algorithms.
In the decision making process, it is important not only to find a decision but to find the decision that is best in terms of control safety, i.e., that minimizes collisions in group control, and at the same time best in the sense of utility (usefulness). Therefore, it is advisable to combine the safety functions S and the utility functions U, with assigned weighting factors, into an efficiency function E. Then, in the ID, the safety vertices S and the utility vertices U have common heirs, the efficiency vertices E. Each vertex E contains an efficiency function that connects each configuration of the states of its parents S and U with an efficiency value E (Fig. 3.10). As before, the decision is based on the probability of the network configuration. Therefore, the expected efficiency of each alternative can be calculated and the alternative with the highest expected efficiency selected. This is the principle of maximum expected efficiency. The efficiency E_i of the i-th decision D_i can be calculated using the formula

E_i = K_i^U U_i + K_i^S S_i,   (3.34)

where U_i is the utility of the i-th decision, U_i = 1/T; K_i^U is the significance coefficient of the utility; S_i is the safety of the i-th decision, S_i = 1/M; and K_i^S is the safety significance coefficient. An ID may contain several efficiency vertices; in that case, the overall efficiency function is the sum of all local efficiency functions.
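A minimal sketch of the maximum expected efficiency principle based on Eq. (3.34), assuming U_i = 1/T and S_i = 1/M as above; the alternative names, times, and coefficients are made up for illustration.

def efficiency(k_u, u_i, k_s, s_i):
    # Efficiency of a decision per Eq. (3.34): E_i = K_U * U_i + K_S * S_i.
    return k_u * u_i + k_s * s_i

# Hypothetical alternatives: (execution time T, expected number of conflicts M).
alternatives = {"D1": (12.0, 2), "D2": (15.0, 1), "D3": (10.0, 4)}
K_U, K_S = 0.6, 0.4   # assumed significance coefficients

scores = {name: efficiency(K_U, 1.0 / T, K_S, 1.0 / M)
          for name, (T, M) in alternatives.items()}
best = max(scores, key=scores.get)   # principle of maximum expected efficiency
print(best, scores)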
The decision making process using such influence diagrams is carried out in the following order:
• After observing the values of the variables that are the parents of the first decision vertex, the maximum efficiency over the alternatives is determined;
• The expert calculates these efficiencies under the assumption that all future decisions will be optimal, using all the information available at the time of each decision.
The complexity of constructing and investigating such an ID is determined by the complexity of the relationships of the safety and utility vertices with each other and with the efficiency vertices, as well as by the relationships of the utility and safety vertices with the decisions. The process of finding the optimal decision for situational control of a group of robots is performed at each control step for each robot in the group. The following variants of situational control structures are possible:
• Without a coordinator or leader;
• With a coordinator that determines the efficiency for each robot;
• With a coordinator that determines the utility for each robot;
• With a coordinator that determines the safety for each robot;
• With a leader that determines the efficiency for each robot;
• With a leader that determines the utility for each robot;
• With a leader that determines the safety for each robot;
• With a coordinator that determines the utility for each robot and a leader that determines the safety for each robot;
• With a coordinator that determines the safety for each robot and a leader that determines the utility for each robot.
3.5.2 Features of the Search for the Optimal Decision in Various Variants of Situational Control Structures

When choosing a structure without a coordinator and leader, at each control step every robot in the group analyzes the influence diagram (Fig. 3.11). Therefore, to calculate the efficiencies E_ij of the decisions D_ij made, each robot must be aware of the behavior of all members of the group. For this, the robots must have a developed Information-Measuring System (IMS) with the ability to quickly exchange information with the other robots. After identifying the intentions of the other robots through the exchange of information, situations may arise in which the decisions initially made by a number of robots turn out not to be optimal, and in some cases not safe. These robots then look for a new optimal solution. In this regard, a "stupor" situation may arise in which the robots begin to endlessly recalculate the efficiencies of their decisions and, accordingly, the group's progress on the required task stops. To get out of this state, the members of the group adopt not the best decision in the sense
of utility, but the safest one, which at least ensures the elimination of collisions between members of the group. The controllability of such a group can be somewhat improved by introducing robot priorities and by using rules of admissible movements in each member of the robot group.
When choosing a structure with a coordinator that determines the efficiency for every robot (Fig. 3.12), at each control step the coordinator calculates the utility and safety of the decision options for all robots and can additionally inform the group members about the decisions made. To do this, the coordinator and the robots must have a developed IMS with the ability to quickly exchange information. In this case, the coordinator tightly controls the behavior of each robot. However, considerable time is required to calculate the efficiencies of all decisions, and flexibility of control is lost during a sharp change in the situation, for example, due to breakdowns and failures or an unexpected change in the choice environment.
When choosing a structure with a coordinator that determines the utility for every robot (Fig. 3.13), at each control step the coordinator calculates the utility of the decision options for all robots and transfers the calculated values to the group members. The members of the group calculate the safety of the decisions and determine the decision with the optimal efficiency using the utility values received from the coordinator. The coordinator and the robots should have a developed IMS with the ability to quickly exchange information; the control system of the group of robots then becomes more flexible and safer. Sudden changes in the situation, for example, due to breakdowns and failures or an unexpected change in the choice environment, can be taken into account by the members of the group when calculating the efficiencies of the decisions.
When choosing a structure with a coordinator that determines the safety for every robot (Fig. 3.14), at each control step the coordinator calculates the safety of the decision options for all robots and transfers the calculated values to the group members. The members of the group calculate the utility of the decisions and determine the decision with the optimal efficiency using the safety values received from the coordinator. The coordinator and the robots must have a developed IMS with the ability to quickly exchange information. The group control system becomes safer compared to the previous structure, since the coordinator calculates the safety of the decisions made by the robots taking into account the analysis of the behavior of all members of the group. Sudden changes in the situation, for example, due to breakdowns and failures or an unexpected change in the choice environment, can be taken into account by the members of the group when calculating the efficiencies of the decisions. This control structure provides safer, but not the most efficient, control.
When choosing a structure with a leader that determines the efficiency for every robot (Fig. 3.15), at each control step the leader robot evaluates the utility and safety of the possible decisions for all robots. To do this, the robots must have a developed IMS with the ability to quickly exchange information. At the same time, the leader coordinates the behavior of each robot, setting priorities and rules of unacceptable behavior.
In this case, considerable time is required to calculate the efficiencies of all decisions, and flexibility of control is lost during a sharp change in the situation, for
example, due to breakdowns and failures or an unexpected change in the choice environment. This control structure does not require an additional coordinator. However, the reliability and overall performance of the group strongly depend on the state and performance of the leader.
In a structure with a leader that determines the utility of the decisions for every robot (Fig. 3.16), at each control step the leader robot calculates the utility of the decision options for all robots, and the robots determine the safety of the decisions and, accordingly, their efficiency. To do this, the robots must have a developed IMS with the ability to quickly exchange information. The leader coordinates the execution of the subtasks of each robot by setting priorities, and each robot evaluates the safety of its movements taking into account the rules of unacceptable behavior. The safety of the group increases, but the efficiency strongly depends on the state and on the information and computing capabilities of the leader.
When choosing a structure with a leader that determines the safety of the decisions for every robot (Fig. 3.17), at each control step the leader robot calculates the safety of the decision options for all robots, and the robots determine the utility of the decisions and, accordingly, their efficiency. To do this, the robots must have a developed IMS with the ability to quickly exchange information. The leader coordinates the safety of the movements of the group members, and each robot evaluates the utility of the decisions made and chooses the most efficient ones, taking safety into account. The safety of the group increases if the leader robot has a more developed IMS than the other members of the group, whose requirements for assessing the choice environment are then reduced.
When choosing a structure with a coordinator that determines the utility for every robot and a leader that determines the safety for each robot (Fig. 3.18), at each control step the coordinator distributes the task into subtasks for each robot and calculates the utility of the decisions of each robot, while the leader robot coordinates the safe movement of the group members by setting priorities. The robots calculate the efficiencies of the decisions. In this case, the responsibilities for determining the efficiency are distributed between the coordinator and the leader robot, which can increase the reliability of the functioning of the group.
When choosing a structure with a coordinator that determines the safety for every robot and a leader that determines the utility for the same robots (Fig. 3.19), at each control step the coordinator calculates the safety of the robots' decisions, and the leader robot calculates the utility of the robots' decisions based on their behavior in the group. The robots calculate the efficiencies of the decisions. In this case, too, the responsibilities for determining the efficiency are distributed between the coordinator and the leader robot. Safety is increased because the coordinator takes a larger number of influencing factors into account.
3.6 Secure Control of the SEMS Group

For a future in which humanity is literally surrounded by robots and all sorts of "smart" systems, these robots and artificial intelligence systems must have "instincts" that allow them to avoid collisions with obstacles and with each other while moving. However, if these instincts are too strong, the robots will be too slow, which negatively affects the effectiveness of their actions. To solve this problem, algorithms are needed that constantly seek the optimal balance between speed and safety, allowing the robots to always act with high efficiency. Collision avoidance is the main requirement on the control systems of all vehicles and other robotic devices that can move fully autonomously, in automatic mode. Some developers of control systems for robotic cars deliberately allow them to commit minor traffic violations in the event of a collision hazard. In addition, the task of controlling a group of robots is further complicated by the need to coordinate the robots. In complex robotic systems, each robot must satisfy its own kinematic equations, as well as the existing phase constraints, including the dynamic constraints that ensure that there are no collisions between robots.
Safe control is closely related to survivability control, whose algorithms are included in the mathematical support of intelligent robots. In this case, the behavior of the SEMS can be adjusted through a flexible response of the automatic survivability control system, included in the automatic control system (ACS), to sudden changes in the external conditions and in the internal state of the SEMS itself. The most studied and frequently encountered survivability control tasks are adaptation, hot redundancy, compensation, and borrowing [58]. Less studied and less common are the problems of stress and stupor, and of switching to an emergency mode. As robot ACS develop and become more intelligent, new modes of their functioning appear. In particular, robots created on the basis of SEMS are able to work as part of a group of robots under the control of an operator or a higher-level ACS [59]. In this case, situations may arise in which the instructions of the operator and/or of the higher-level ACS contradict the internal state of the SEMS itself.
Another, no less difficult task is to build algorithms for checking the feasibility of conditions. They should probably rely on simulating the behavior of the SEMS when executing the proposed operator instructions and/or top-level control system instructions. At the same time, it is desirable that the created algorithms take into account the possible rapid degradation of the SEMS and activate the survivability control mechanisms in advance, sending messages to the upper level of group control about the undesirability or danger of the proposed behavior instructions.
When forming a set of acceptable controls (SEMS behavior instructions), it is first necessary to identify and write to the SEMS ACS database the acceptable values of the parameters of the individual group members, as well as their static and dynamic characteristics. Then, based on the purpose of a particular SEMS, a list of possible instructions U_ki(t) is compiled. Next, it is necessary to identify the set of acceptable
Y_d(t) behavior instructions by mathematical and computer modeling of the dynamic configuration space. The task is divided into two stages. At the first stage, for example by computer simulation of the SEMS, the invalid instructions U*_ki(t) are identified among the possible instructions U_ki(t); these are the instructions that drive certain parameters and characteristics beyond the acceptable limits:

U*_{ki}(t) ⊆ U_{ki}(t).   (3.35)

These instructions are excluded from the possible ones:

U^d_{ki}(t) = U_{ki}(t) / U*_{ki}(t).   (3.36)

At the second stage, dangerous instructions are identified among the instructions U^d_ki(t), i.e., those instructions U^o_ki(t) whose frequent repetition leads to rapid degradation of the SEMS with subsequent failures and breakdowns. This requires logical-probabilistic and logical–linguistic modeling of the SEMS degradation [60, 61] with analysis of the degradation time. If the degradation time t_di of the system during repeated application of some instruction U^d_ki(t) is less than the permissible value t_dop (t_di < t_dop), then these instructions are classified as dangerous, U^o_ki(t), and are also excluded from the possible ones. Therefore:

Y_d(t) = U^d_{ki}(t) / U^o_{ki}(t).   (3.37)

Next, among the instructions Y_d(t), those instructions U^c_ki(t) that can lead to collisions are identified. They are also excluded from the possible ones. As a result, the safe control instructions are:

U_s(t) = Y_d(t) / U^c_{ki}(t).   (3.38)
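The successive exclusions of Eqs. (3.35)–(3.38) amount to set differences, which can be sketched as follows; the instruction identifiers, degradation times, and threshold are hypothetical.

# Hypothetical instruction identifiers; the filtering steps mirror Eqs. (3.35)-(3.38).
possible   = {"u1", "u2", "u3", "u4", "u5", "u6"}      # U_ki(t)
invalid    = {"u2"}                                     # U*_ki(t): parameters out of range
admissible = possible - invalid                         # Eq. (3.36): U^d_ki(t)

t_dop = 100.0                                           # assumed permissible degradation time
degradation_time = {"u1": 500.0, "u3": 80.0, "u4": 300.0, "u5": 250.0, "u6": 40.0}
dangerous = {u for u in admissible if degradation_time[u] < t_dop}
viable = admissible - dangerous                         # Eq. (3.37): Y_d(t)

collision_prone = {"u4"}                                # U^c_ki(t), e.g. found by simulation
safe = viable - collision_prone                         # Eq. (3.38): U_s(t)
print(sorted(safe))                                     # -> ['u1', 'u5']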
In some cases, when group control systems for robots are implemented, some instructions issued by the top-level ACS (the coordinator-planner ACS) may not be clear to the SEMS ACS, although the simulation results classified them as acceptable. This may be due, for example, to the incomplete adequacy of the models used. Such instructions can be partially removed from the acceptable ones by semantic analysis of the instructions for correctness and consistency, and by organizing a dialogue between the interacting SEMS ACS. In order for a group of robots to achieve a specific goal, each robot can execute a predefined sequence of actions without collisions in the case of a deterministic environment. In the case of a nondeterministic environment, this sequence must be found by the control system of the group of robots in the process of achieving the goal. In this case, a single control system must first be synthesized to stabilize a robot relative to a certain point in the state space under phase constraints. Then the optimal robot trajectories must be sought, in the form of sequences of points in the state space, for the robots to move from different initial conditions to the specified end positions.
First, the problem of choosing optimal routes for all members of the group without intersections is solved. If the routes of the group members do not intersect and the time to reach the goal of the group's movement does not exceed the required time, the solution of the task is found; otherwise, the next stage follows. At this stage, the task of driving the group members in the case of possible intersections of the traffic routes is solved. If the time to reach the goal of the group's movement, taking into account the passage of intersections without collisions, does not exceed the required time, the solution of the task is found; otherwise, the next stage follows. At this stage, the problem of safe movement with crossing trajectories is solved by adjusting the dynamics of the movements with delays at the intersections to avoid collisions, applying the rules of passing intersections or prioritizing the passage of intersections by the members of the group.
When solving the problem of safe passage of an intersection, two approaches to group control can be used. The first approach, based on the use of traffic rules and on the method of dividing the surrounding space into cells, is described in paragraph 3.3.1. Since the conditions for choosing the control at each subsequent step (the transition from one cell to the next) depend on the decisions taken at the previous steps and may change over time, forecasting and modeling of the sequences of situations during the transitions up to the final goal of crossing the intersection must be used when searching for the control. The second approach is based on the use of priorities. An example of this approach is the problem of interaction of loader robots built on the basis of SEMS modules in a warehouse. In this task, three robots must be moved from specified starting points (areas) to end points (in a rectangular storage room) without collisions, with set robot priority values: P_R1 for robot R1, P_R2 for robot R2, and P_R3 for robot R3, for example, the numbers 1, 2, 3. In addition, priorities can be determined during the movement of the robots when a dangerous situation (a possible collision) arises. For this purpose, the robot parameters can be compared: for example, if the size of the i-th robot is larger than that of the j-th, then its priority is higher (P_Ri > P_Rj). Other robot parameters can also be used (speed, proximity to an intersection, weight, destination, etc.). More fine-grained prioritization is obtained by comparing several robot parameters at once, especially when their significance is taken into account by introducing significance coefficients:

P_{Ri} = \sum_{n=1}^{N} p_{in} k_{in},   (3.39)

where p_{in} is the n-th parameter of the i-th robot and k_{in} is the significance coefficient of this parameter. The algorithm controlling these robots, at each step of the movement, i.e., when moving from one cell e_q(t_i) of the configuration space to an adjacent cell e_{q+1}(t_{i+1}),
determines whether a collision is possible: if r_i ∩ r_j ≠ ∅, then m_ij(t_k) = 1; otherwise m_ij(t_k) = 0. If m_ij(t_k) = 1, the control algorithm determines the robot with the highest priority and gives it the command to pass the intersection, while the other robot receives the command to delay its passage for the time the first robot needs to clear the intersection, taking into account the maximum braking time, which depends on the speed and road conditions. Building such an optimal algorithm is a difficult task, since some of the parameters of the control objects and of the environment, including possible obstacles on the traffic routes, are not fully defined. They can be described by logical-probabilistic and/or logical–linguistic expressions [31]. To solve such problems, it is necessary to use multi-step generalized mathematical programming [39] and software tools of the A-life type [34]. The solution can be significantly simplified by reducing the logical-probabilistic and logical–linguistic expressions to logical-interval expressions [62]. In this case, the group's travel time will be slightly longer, but the safety will be higher.
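A toy sketch of the priority rule (3.39) combined with the collision check m_ij(t_k) described above; the cell routes, robot parameters, and weights are invented for illustration only.

def priority(params, weights):
    # Aggregate priority of a robot per Eq. (3.39): weighted sum of its parameters.
    return sum(p * k for p, k in zip(params, weights))

# Hypothetical parameters (size, speed, proximity to intersection) and significance weights.
weights = [0.5, 0.3, 0.2]
robots = {"R1": [1.2, 0.8, 0.9], "R2": [0.9, 1.1, 0.4]}
PR = {name: priority(p, weights) for name, p in robots.items()}

def occupy_same_cell(route_i, route_j, t):
    # m_ij(t) = 1 if the robots' regions intersect at step t (toy check on cell indices).
    return int(route_i[t] == route_j[t])

route_R1 = [0, 1, 2, 3]        # assumed cell sequences through the intersection
route_R2 = [5, 4, 2, 1]
for t in range(len(route_R1)):
    if occupy_same_cell(route_R1, route_R2, t):
        loser = min(PR, key=PR.get)          # the lower-priority robot delays its passage
        print(f"step {t}: conflict, {loser} delays passage")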
3.7 Logical–Linguistic Method of the Movement of Mobile SEMS with Minimal Probability of Accidents

The development of Unmanned Vehicles (UV's), including unmanned aerial vehicles (UAV's), has recently been in high demand [63, 64]. R&D work on UV's is determined by the following key problems: (1) extending the duration of autonomous operation; (2) improving navigation systems; (3) increasing payload; (4) raising the degree of autonomy based on artificial intelligence. The fourth problem has recently been associated mainly with the use of neural network technologies [65–69]. Among their significant drawbacks are the controversial problems of choosing a sufficient learning sample without overtraining the neural network and of covering as many choice situations in decision making as possible [70]. In addition, when forming control principles and algorithms, researchers and engineers consider information security problems for UV's [71] but often neglect the motion safety issues of optimal routing [72, 73]. However, the prevention of accidents is the main operating principle of motion control systems for UV's and other robotic devices capable of moving in an automatic mode [74]. To implement this principle, it is necessary to develop algorithms for estimating the probability of an accident on a route and to select the safest route under the existing constraints. Moreover, when solving the motion control problem, additional complexities due to the coordination of all motion participants must be considered: each participant must satisfy the corresponding kinematic equations and the existing state-space constraints, including dynamic constraints [62, 75], to minimize the probability of collision and the related risks. Risk assessments are predictive in nature, since their uncertainty is associated with many factors that cannot be accurately estimated. The uncertainty of the predicted risks creates situations reducing the probability of
UV’s accident-free motion along a route. Qualitative and quantitative methods [76– 78] are used to assess risks under uncertainty. The qualitative approach consists in determining all possible types of accident risks on a route and identifying their areas of occurrence and sources [79]. Further, this approach can serve for obtaining quantitative risk assessments. The quantitative approach allows calculating the value of individual risks on route segments and the entire route [80, 81]. Note that methods of probability theory and mathematical statistics are often used. In this case, it is necessary to study scenarios that simulate and analyze the simultaneous consistent change of all factors on route segments considering their interdependence. The conditions of implementing UV control algorithms are described by an expert through scenarios (e.g., pessimistic, optimistic, and most probable ones) or a system of constraints on the main parameters of the route and the corresponding indicators characterizing the probability of an accident. This approach involves expert assessments obtained by complex procedures [82], starting with the selection of the number and qualification levels of experts. The results of the multi-step procedure are processed by statistics and qualitative analysis methods. Regression and correlation analysis tools are used for comprehensive risk analysis, and methods of the logical-probabilistic approach are employed for detailing and analyzing structurally complex routes [83]. In risk prediction under limited statistical data, it is reasonable to create a database of reference route segments that contains their qualitative attributes and quantitative expert assessments (the values of their membership functions and the values of their significance factors), as proposed in the logical–linguistic classification [84]. Within the scenario approach, which uses fuzzy set methods to calculate the values of the membership functions, it is then possible to rank the set of admissible routes by comparing a given route with the reference routes from the database [85]. In this case, the probability of an accident on reference route segments can be estimated by simulating UV’s motion under uncertainty [20] and the available statistical data. Simulation modeling generates hundreds of possible accident combinations. After analyzing the simulation results and statistical data, it is possible to obtain distributions of the probabilities of accidents on reference route segments and give an integral assessment of the control efficiency and intelligence level of the UV’s [86] after optimal routing. In particular, this approach has been applied to determine the probabilities of accidents on reference route segments when forming the reference database in the proposed logical–linguistic method. The problem is to develop a method for an automatic control system to select an optimal route of the vehicle that moves under uncertainty using logical–linguistic classification of route segments to certain reference models with the risk assessments or probabilities of accidents determined previously.
3.7.1 The Routes Ranking Problem

When searching for the best combinations of SEMS motion control laws, the common problem is to find an optimal control minimizing the performance criterion
J_i = k_T T_i + k_P P_i,   (3.40)
where T_i = t_{if} − t_{i0} is the time to transfer the i-th SEMS (i = 1, 2, ...), located at the time instant t_0 at an initial point s_i of a bounded space L^3 ⊂ E^3, to a target point f_i of this space by the time instant t_f; E^3 denotes the three-dimensional Euclidean space; k_T is the significance factor of the goal achievement rate, adjusted by an expert or a group of experts; P_i is the estimated probability of an accident involving the i-th SEMS while moving along the route during the time T_i; finally, k_P is the significance factor of the estimated probability of an accident, also adjusted by an expert or a group of experts.
In the proposed ACS, it is first necessary to determine the SEMS travel time on all possible routes. Under the existing logical-probabilistic, logical–linguistic, and other constraints, to calculate the product J(R_v) = k_T T_i on each SEMS route R_v, the ACS should evaluate the functional [65]

J(R_ν) = \sum_{i,j} a \, (l_{ij}/v_{ij}) + \sum_{i,j} b \, (\varphi_{ij}/w_{ij}) + \sum_{i,j} c \, \tau_{ij} → min,   (3.41)
where a, b, c are the preference coefficients; v_{ij} and w_{ij} are the linear and angular velocities of movement, associated with the humidity h_i and temperature t_j; τ_{ij} is the delay time at an intersection, depending on its type and load; ϕ_{ij} are the turning angles at an intersection; finally, l_{ij} are the lengths of the segments between intersections. As shown in [84], (i, j) is an element of the ordered set describing a given route from the starting point to the terminal point. After evaluating the functional (3.41) for all possible SEMS routes from point s_i to point f_i, the routes can be ranked by the time of arrival at point f_i. However, the fastest route may also turn out to be the most accident-prone. Therefore, the next step in optimal routing should be ranking the routes R_v by the probability of an accident P_i(R_v).
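A minimal sketch of evaluating the travel-time functional (3.41) for a single route, assuming the segment, turn, and delay data have already been extracted from the route description; all numbers and names are illustrative.

def route_time_criterion(segments, turns, delays, a=1.0, b=1.0, c=1.0):
    # Travel-time functional of Eq. (3.41) for one route:
    # weighted sum of segment times l/v, turning times phi/w, and intersection delays tau.
    J = a * sum(l / v for l, v in segments)
    J += b * sum(phi / w for phi, w in turns)
    J += c * sum(delays)
    return J

# Hypothetical route: (length m, linear velocity m/s), (turn deg, angular velocity deg/s), delays s.
segments = [(400, 10), (600, 15)]
turns = [(90, 6), (45, 4)]
delays = [12.0, 8.0]
print(route_time_criterion(segments, turns, delays))   # travel-time estimate in seconds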
3.7.2 The Database of Reference Route Segments

Within the proposed method, when determining the probability of accidents on SEMS routes and the product k_P P_i, we apply a logical–linguistic classification algorithm for route segments, which attributes a given route segment to a reference one. As shown in [23], this algorithm has high speed and efficiency. For implementing the algorithm, a database of reference route segments is created when developing the ACS for the SEMS. This database contains rows with the parameters (attributes) of reference route segments and the probabilities of an accident on such segments, determined in advance based on simulation modeling and statistical data. The presence of an attribute is indicated by one and its absence by zero. Each route contains one or several segments at intersections and one or several segments between intersections. Therefore, the database includes rows characterizing motion at an intersection and between intersections. Tables 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7 and 3.8 show examples of reference rows from the database.

The Database of Reference Rows for Intersections:
Table 3.1 Intersections

Database row           Type of intersection / direction of motion    Probability of accident
C1 = /10000000000/     | with passage to the right                    PC1 = 0.12
C2 = /01000000000/     | with passage to the left                     PC2 = 0.15
C3 = /00100000000/     T with passage straight                        PC3 = 0.13
C4 = /00010000000/     T with passage to the right                    PC4 = 0.11
C5 = /00001000000/     ⊥ with passage straight                        PC5 = 0.14
C6 = /00000100000/     ⊥ with passage to the left                     PC6 = 0.17
C7 = /00000010000/     ✝ with passage straight                        PC7 = 0.18
C8 = /00000001000/     ✝ with passage to the right                    PC8 = 0.16
C9 = /00000000100/     ✝ with passage to the left                     PC9 = 0.20
C10 = /00000000010/    L with passage to the left                     PC10 = 0.10
C11 = /00000000001/    G with passage to the right                    PC11 = 0.09
Table 3.2 Turning angles

Database row         Angle and direction of turn    Probability of accident
ϕ1 = /100000000/     −180° (left)                   Pϕ1 = 0.11
ϕ2 = /010000000/     −135° (left)                   Pϕ2 = 0.12
ϕ3 = /001000000/     −90° (left)                    Pϕ3 = 0.13
ϕ4 = /000100000/     −45° (left)                    Pϕ4 = 0.14
ϕ5 = /000010000/     0° (straight)                  Pϕ5 = 0.06
ϕ6 = /000001000/     +45° (right)                   Pϕ6 = 0.10
ϕ7 = /000000100/     +90° (right)                   Pϕ7 = 0.09
ϕ8 = /000000010/     +135° (right)                  Pϕ8 = 0.08
ϕ9 = /000000001/     +180° (right)                  Pϕ9 = 0.07
Table 3.3 Angular velocities

Database row    Angular velocity, deg/s    Probability of accident
w1 = /1000/     2                          Pw1 = 0.10
w2 = /0100/     4                          Pw2 = 0.11
w3 = /0010/     6                          Pw3 = 0.12
w4 = /0001/     8                          Pw4 = 0.13
Table 3.4 Number of lanes for intersections

Database row    Number of lanes    Probability of accident
s1 = /1000/     1                  Ps1 = 0.10
s2 = /0100/     2                  Ps2 = 0.12
s3 = /0010/     3                  Ps3 = 0.13
s4 = /0001/     4                  Ps4 = 0.14
The Database of Reference Rows for Route Segments between Intersections:

Table 3.5 Linear velocities

Database row    Linear velocity, m/s    Probability of accident
v1 = /1000/     5                       Pv1 = 0.10
v2 = /0100/     10                      Pv2 = 0.11
v3 = /0010/     15                      Pv3 = 0.12
v4 = /0001/     20                      Pv4 = 0.13
Table 3.6 Number of lanes for route segments between intersections

Database row    Number of lanes    Probability of accident
s1 = /1000/     1                  Ps1 = 0.10
s2 = /0100/     2                  Ps2 = 0.12
s3 = /0010/     3                  Ps3 = 0.13
s4 = /0001/     4                  Ps4 = 0.14
Table 3.7 Time of day

Database row     Time of day (h)    Probability of accident
t1 = /10000/     0–6                Pt1 = 0.10
t2 = /01000/     6–10               Pt2 = 0.13
t3 = /00100/     10–15              Pt3 = 0.15
t4 = /00010/     15–20              Pt4 = 0.14
t5 = /00001/     20–24              Pt5 = 0.20
Table 3.8 Route segment length

Database row     Route segment length    Probability of accident
l1 = /10000/     Very short, 200 m       Pl1 = 0.10
l2 = /01000/     Short, 400 m            Pl2 = 0.12
l3 = /00100/     Medium, 600 m           Pl3 = 0.13
l4 = /00010/     Large, 800 m            Pl4 = 0.14
l5 = /00001/     Very large, 1000 m      Pl5 = 0.15
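In software, the reference rows of Tables 3.1–3.8 can be kept as simple lookup tables mapping each reference row to its accident probability. The sketch below uses the probability values from the tables; the variable names are assumptions, not identifiers from the book's software.

# Accident probabilities of the reference rows (values from Tables 3.1-3.8).
P_INTERSECTION = {"C1": 0.12, "C2": 0.15, "C3": 0.13, "C4": 0.11, "C5": 0.14,
                  "C6": 0.17, "C7": 0.18, "C8": 0.16, "C9": 0.20, "C10": 0.10, "C11": 0.09}
P_TURN_ANGLE   = {"phi1": 0.11, "phi2": 0.12, "phi3": 0.13, "phi4": 0.14, "phi5": 0.06,
                  "phi6": 0.10, "phi7": 0.09, "phi8": 0.08, "phi9": 0.07}
P_ANG_VELOCITY = {"w1": 0.10, "w2": 0.11, "w3": 0.12, "w4": 0.13}
P_LANES        = {"s1": 0.10, "s2": 0.12, "s3": 0.13, "s4": 0.14}
P_LIN_VELOCITY = {"v1": 0.10, "v2": 0.11, "v3": 0.12, "v4": 0.13}
P_TIME_OF_DAY  = {"t1": 0.10, "t2": 0.13, "t3": 0.15, "t4": 0.14, "t5": 0.20}
P_SEG_LENGTH   = {"l1": 0.10, "l2": 0.12, "l3": 0.13, "l4": 0.14, "l5": 0.15}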
3.7.3 Determining the Probability of an Accident on a Route

To rank the routes R_v by the probability of an accident P_i(R_v), the ACS of the SEMS first creates a list of intersections for each route. Next, for each list of intersections, the sensing system of the ACS determines the approximate values of their parameters corresponding to the attributes of the reference rows and fuzzifies these values to find the membership functions for the attributes of the corresponding reference rows. Then the ACS classifies the rows for intersections by comparing them with the reference rows from the database according to the algorithm described in [84]: it assigns the probabilities of accidents corresponding to the identified reference rows and calculates the probabilities of accidents at all intersections and the total probability of accidents at the intersections along the entire route.
For example, suppose a certain intersection is characterized by the following parameters (attributes): intersection with passage straight, 1 lane, turning angle 30°, and angular velocity 5.6 deg/s. In this case, the row characterizing the intersection has the form /00001000000/; classification using the logical–linguistic algorithm [84] attributes it to the reference row C5 with the probability of an accident PC5 = 0.14. The row /1000/ characterizing the number of lanes is classified as the reference row s1 with the probability of an accident Ps1 = 0.10. After fuzzification, the row characterizing the turning angle takes the form /0 0 0 0 0.3 0.7 0 0 0/, being classified as the reference row ϕ6 with the probability of an accident Pϕ6 = 0.10. After fuzzification, the row characterizing the angular velocity takes the form /0 0.3 0.7 0/, being classified as the reference row w3 with the probability of an accident Pw3 = 0.12.
When passing an intersection, an accident is possible under one of the following events: C_i (i = 1, 2, ...), or ϕ_j (j = 1, 2, ...), or w_q (q = 1, 2, ...), or s_g (g = 1, 2, ...). They correspond to the probabilities of an accident P_Ci, P_ϕj, P_wq, and P_sg, respectively. According to the rules for calculating the probability of a logic function, the logic function F_{1,2,...,n} in the Zhegalkin algebra [51] has the form (3.42), where f_1, f_2, f_3, ..., f_n are logical functions or variables (events), ~ denotes addition modulo 2, and ↔ denotes equivalence. According to [85], the probability of an accident when passing such an intersection (n = 4) is given by:

P = (−2)^0 (P_{Ci} + P_{ϕj} + P_{wq} + P_{sg})
  + (−2)^1 (P_{Ci} P_{ϕj} + P_{Ci} P_{wq} + P_{Ci} P_{sg} + P_{ϕj} P_{wq} + P_{ϕj} P_{sg} + P_{wq} P_{sg})
  + (−2)^2 (P_{Ci} P_{ϕj} P_{wq} + P_{Ci} P_{ϕj} P_{sg} + P_{Ci} P_{wq} P_{sg} + P_{ϕj} P_{wq} P_{sg})
  + (−2)^3 P_{Ci} P_{ϕj} P_{wq} P_{sg}.   (3.43)

For a large number of logical functions (n > 8), the probability can be calculated approximately by restricting the expansion to 8–10 terms; for details, see [85].
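For independent events, the series (3.43) (and, analogously, (3.44)) can be evaluated iteratively: combining the probabilities pairwise by p ← p + p_k − 2 p p_k reproduces, when expanded, exactly the alternating coefficients (−2)^0, (−2)^1, (−2)^2, (−2)^3. A minimal Python sketch using the reference probabilities from the worked example above:

def accident_probability(probabilities):
    # Probability of an accident for independent events combined modulo 2,
    # expanding to the (-2)^k series of Eqs. (3.43)-(3.44).
    p_total = 0.0
    for p in probabilities:
        p_total = p_total + p - 2.0 * p_total * p
    return p_total

# Worked example from the text: reference rows C5, s1, phi6, w3.
print(accident_probability([0.14, 0.10, 0.10, 0.12]))   # ~0.325 for this intersection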
If there are N intersections on the route, fuzzification, classification, and formula (3.43) are used to calculate the probability of an accident for each intersection; after that, formula (3.43) gives the probability of an accident P_N at all intersections of the route.
Then, for each route, the ACS of the SEMS creates a list of the segments between intersections. Next, for each list of segments between intersections, the ACS determines the approximate values of their parameters corresponding to the attributes of the reference rows and fuzzifies these values to find the membership functions for the attributes of the corresponding reference rows. Then the ACS classifies the rows for the segments between intersections by comparing them with the reference rows from the database according to the algorithm described in [85]: it assigns the probabilities of accidents corresponding to the identified reference rows and calculates the probabilities of accidents on the segments between intersections and the total probability of accidents on all segments between intersections along the entire route.
For example, suppose a certain segment between intersections is characterized by the following parameters (attributes): 1 lane, travel time 8 h, linear velocity 12.7 m/s, length 500 m. In this case, the row characterizing the number of lanes has the form /1000/, being classified as the reference row s1 with the probability of an accident Ps1 = 0.10. The row /01000/ characterizing the travel time is classified as the reference row t2 with the probability of an accident Pt2 = 0.13. After fuzzification, the row characterizing the linear velocity takes the form /0 0.45 0.55 0/, being classified as the reference row v3 with the probability of an accident Pv3 = 0.12. After fuzzification, the row characterizing the segment length takes the form /0 0.5 0.5 0 0/, being equally classified as the reference row l2 with the probability of an accident Pl2 = 0.12 or as the reference row l3 with the probability of an accident Pl3 = 0.13. Therefore, the probability of an accident due to the length of the segment between intersections can be estimated by the average value (Pl2 + Pl3)/2 = 0.125.
When passing a segment between intersections, an accident is possible under one of the following events: t_i (i = 1, 2, ...), or v_j (j = 1, 2, ...), or l_q (q = 1, 2, ...), or s_g (g = 1, 2, ...). They correspond to the probabilities of an accident P_ti, P_vj, P_lq, and P_sg, respectively. According to [58], the probability of an accident on such a segment (n = 4) is given by:

P = (−2)^0 (P_{ti} + P_{vj} + P_{lq} + P_{sg})
  + (−2)^1 (P_{ti} P_{vj} + P_{ti} P_{lq} + P_{ti} P_{sg} + P_{vj} P_{lq} + P_{vj} P_{sg} + P_{lq} P_{sg})
  + (−2)^2 (P_{ti} P_{vj} P_{lq} + P_{ti} P_{vj} P_{sg} + P_{ti} P_{lq} P_{sg} + P_{vj} P_{lq} P_{sg})
  + (−2)^3 P_{ti} P_{vj} P_{lq} P_{sg}.   (3.44)
If there are M segments between intersections on the route, fuzzification, classification, and formula (3.44) will be used to calculate the probabilities of an accident for each segment between intersections; after that, formula (3.44) gives the probability
of an accident P_M on all segments between intersections of the route. Finally, the probability of an accident on the entire route R_v is calculated by the formula

P(R_v) = P_N(R_v) + P_M(R_v) − 2 P_N(R_v) P_M(R_v).   (3.45)
3.7.4 Ranking and Optimization of Routes

Due to the uncertain environment in which the UV moves along a route, when calculating the performance criterion (3.41) it is necessary to consider the constraints in the form of logical and probabilistic modulo-2 equations [20]. As shown in [69], these constraints can be reduced to logical-interval ones. In this case, two values of the performance criterion (3.41), min J(R_ν) and max J(R_ν), are obtained for each route R_v. For the chosen values of the significance coefficient k_P, the following two values are calculated for each route to rank the routes R_v:

min J_v = min{k_T J_T(R_v)} + min{k_P P(R_v)};   (3.46)

max J_v = max{k_T J_T(R_v)} + max{k_P P(R_v)}.   (3.47)

Usually, the values min{k_P P(R_v)} and max{k_P P(R_v)} coincide, whereas min{k_T J_T(R_v)} and max{k_T J_T(R_v)} do not. Therefore, the ranking is performed by the minimum and maximum values or by the average value

J_v = \frac{1}{2} (max J_v + min J_v).   (3.48)
The choice of an optimal route for the SEMS may depend on the opinion of an expert or a group of experts.
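A sketch of the ranking step (3.46)–(3.48), assuming that for each route the interval of the time criterion and the accident probability have already been computed; the route data and coefficients are hypothetical.

def rank_routes(route_bounds, k_T=1.0, k_P=1.0):
    # Rank routes by the average of the bounds in Eqs. (3.46)-(3.48):
    # route_bounds maps a route name to ((J_T_min, J_T_max), P_accident).
    ranked = []
    for name, ((jt_min, jt_max), p) in route_bounds.items():
        j_min = k_T * jt_min + k_P * p                 # Eq. (3.46)
        j_max = k_T * jt_max + k_P * p                 # Eq. (3.47)
        ranked.append((0.5 * (j_min + j_max), name))   # Eq. (3.48)
    return sorted(ranked)

# Hypothetical travel-time intervals (s) and accident probabilities for three routes.
routes = {"R1": ((120.0, 150.0), 0.32), "R2": ((100.0, 180.0), 0.45), "R3": ((140.0, 160.0), 0.25)}
print(rank_routes(routes, k_T=0.01, k_P=10.0))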
3.8 Assessment of the Group Intelligence of SEMS in RTS

The problem of optimizing the interaction of a group of robots performing a joint task is becoming increasingly important; the complexity and intelligence of technical robot control systems must be constantly expanded to support the continuing expansion of the scope of such RTS. In complex RTS built on the basis of SEMS [37], it is necessary to analyze not only the behavior of an individual robot, whose central nervous system (CNS) [38] provides it with appropriate behavior, but also the behavior of an interacting group of robots whose functions are closely interrelated, the so-called "small group". This group can consist of vehicles that
carry out coordinated movements, a group of collector robots performing joint operations, etc. When creating such groups, specialists face a wide range of problems, such as assessing the ability of the group to make correct decisions under uncertainty, determining the optimal number of team members, assessing the compatibility of group members taking into account the static and dynamic characteristics of each robot and the features of decision making in the central nervous system of the individual robots, etc.
Procedures usually used to assess the performance of a single robot can be based on testing and evaluating the quality of the decisions made, according to a particular scale of its ability to reason [87]. When assessing the quality of the work performed by a group of robots solving a joint task, testing becomes more complicated and must take into account the so-called "psychological characteristics", just as the work of a human operator in a group is evaluated [88]. A known approach to the vector estimation of intelligence [89] is based on calculating a system of artificial intelligence estimates from test results. Such vector estimates usually contain static probabilistic components that assess the ability to solve fuzzy applied problems and dynamic probabilistic components that assess the system's ability to self-learn [90]. Obviously, this approach can be used for an objective assessment of the suitability of particular robots for work in a particular RTS. However, in this case, the comparison and selection of the best candidates from the set of tested robots can be based on a numerical evaluation of the results of dynamic testing of group work, where the test results are recorded over different time intervals into which the entire test run is divided and for various combinations of RTS participants. This is due to the specific differences between the group and the individual activities of a robot. Therefore, the recommended formulas for calculating the components of the vector estimate of the intelligence of an individual robot cannot be directly applied to assessing the suitability of this robot for group interaction. In particular, as shown in [91, 92], there are factors such as the group reaction time, the time necessary for making correct or important joint decisions, and other static and dynamic characteristics.
There are several constructive approaches to assessing the intellectual activity of groups of operators of Human–Machine Systems (HMS). For example, in [92] measurements and perceptions in the SEMS are considered; this method makes it possible to determine objectively (using purely instrumental methods) the parameters of the human sensory system that affect the adaptability of the operator to the natural and technological environment. The approach of [93] models the cognitive process of the operator and the behavior of the system affected by the operator's actions in emergency situations of a Nuclear Power Plant (NPP). The operator model reproduces the cognitive behavior of the operator in random situations on the basis of the Rasmussen decision making model and is implemented using AI methods of distributed cooperative inference with the so-called blackboard architecture; rule-based behavior is modeled using knowledge representation in the form of If–then rules. Article [94] is devoted to the development of a general method for estimating the load for various tasks and workplaces.
This model was developed by introducing sets of linguistic variables and applying the Analytical Hierarchy Process (AHP) to assess the external load imposed on the human operator in the SEMS. For this purpose, a five-point scale
of the set of linguistic variables was constructed and hierarchical priority procedures were established. The task and workstation variables (for example, physical, environmental, postural, and mental workloads) that can include the operator's perception of the workload are selected as workload factors, and the AHP method is used to assign them different weights. Finally, the Overall Work Level (OWL) is calculated using a computer system to determine the work entrusted to the operator. Some other approaches in similar areas are described in the patent [94]. In [95], the operator group model is connected to the system and includes a data storage device for the ideal operator model and a recognition section for inputting voice information generated by the operator and visual information provided to the operator. When failure information is entered in the SEMS, the data about the object and/or its operation are presented in the relevant sections of the control panel, and when the operator responsible for the decision is selected and indicated, this operator can quickly take corrective measures to prevent a stall in the operation of the object.
By analogy with the SEMS, to evaluate the intelligence of the central nervous system of the robots in an RTS, it is necessary to carry out its computer simulation with identification and evaluation of the decisions taken in complex dynamic systems under conditions of incomplete certainty [20]. The structure of the mathematical model should include methods for estimating the parameters characterizing the group intelligence of the RTS. The approach considered here is based on fuzzy mathematical and computer simulation of the behavior of the RTS robot group and of the dynamics of the choice environment [34]. Methods for estimating the parameters of the robots' intelligence that take into account their work in the group are considered, and some new approaches and results are presented. The methods discussed can be improved in the future by including estimates of the psychological characteristics of group behavior, such as the average professional competence scores of the groups, the static intelligence components of the candidates in a group, the components of the vector learning test of candidates with an average estimated level of intelligence, etc.
3.8.1 Test Simulation of a Robotic System

In the test simulation of an RTS, to obtain estimates of the group intelligence of the RTS, the following models must first be specified in the form of fuzzy sets:
• The source choice environment A;
• The final choice environment B;
• The valid choice environments A_v^i;
• The dynamic configuration spaces D_j of the j-th group of robots at the times t_k:

D_j(t_k) = ∪_p D_p(t_k),   (3.49)
where D_p(t_k) is the dynamic configuration space of the p-th robot at the time t_k,

T_0 < t_k < T_e,   (3.50)
where T_0 is the start time and T_e is the end time;
• The set of valid configurations K_vp of the p-th robot;
• The set of valid configurations K_vj of the j-th group of robots;
• The initial i-th task environment:

W_i(T_i^0) = A ∪ D_i(T_i^0),   (3.51)
where T_i^0 is the start time of the i-th task environment;
• The final i-th task environment:

W_i(T_i^e) = B ∪ D_i(T_i^e),   (3.52)
where T_i^e is the end time of the i-th task environment;
• The target set W_t.
All of these fuzzy sets can be given as a set of algebraic, differential, and/or difference equations, as well as logical-interval, logical-probabilistic, logical–linguistic, and sometimes purely linguistic expressions. For each j-th group of robots (SEMS) forming the RTS, the i-th transition task from W_ji(T_ij^0) to W_ji(T_ij^e) in the shortest possible time ΔT_ji is formed:

(W_ji(T_ij^0) → W_ji(T_ij^e)) / (ΔT_ji = T_ij^e − T_ij^0 → min),   (3.53)
where T_ij^0 and T_ij^e are the start and end times of the solution of the i-th task by the j-th group of robots. Then the binary relation

W_ji(T_ij^e) \, q \, W_t   (3.54)

is calculated, where q is a metric of the proximity of W_ji(T_ij^e) to W_t, or a two-place predicate on the analyzed sets, which can be specified, for example, by formulas of a logical-mathematical language or by a formalized linguistic expression [96, 97].
When designing the procedure for forming q, it is necessary to obtain a quantitative estimate of the proximity of W_ji(T_ij^e) to W_t. It is advisable to start creating the initial basis for designing q by selecting, in each of the compared sets, metrizable subsets for whose elements relations and numerical measures of proximity can be specified. The next, most difficult step is ordering the elements of the non-metrizable subsets. It is very likely that to solve
this problem it will be necessary to build a new system of logical equations whose solution leads either to metrizable sets or to ordered ones. In the first case, numerical measures of proximity are obtained immediately; in the second, these measures have to be constructed anew. Possible numerical estimates include the cardinality of the sets, the number of matching elements, the number of groups of matching elements, etc. No recommendations for the selection of particular estimates can be given at present because such models are insufficiently studied. If the non-metrizable sets cannot be ordered, the decision on which set is closest to the reference is made by the designer performing the selection of candidates for the RTS group [20].
The most frequently used and most easily constructed binary proximity measures of sets are of the form (2.21)–(2.24). The use of binary functional relations of the form (2.21)–(2.24) is most preferable, since it allows ranking the models W_ji(T_ij^e) by their proximity to the target set W_t and, at the same time, introducing a numerical estimate of group intelligence [34].
Each j-th group of robots is given n attempts (n = 1, 2, 3, ..., N), in each of which the following restrictions must hold:

A ⊆ A_v^i,   (3.55)

B ⊆ A_v^i,   (3.56)

D_j(t_k) ⊆ K_vj,   (3.57)

D_p(t_k) ⊆ K_vp.   (3.58)
To evaluate the learning ability of the j-th group of robots, each subsequent i-th transition problem should be more complex, whereas to assess the stability (professionalism) of decision making of the j-th group, all i-th transition problems should be of approximately the same complexity. Sometimes it is desirable to assess the differences in the similarity of the CNS of the members of the group of robots in the RTS, which may indirectly characterize the so-called "psychological compatibility of group members". In this case, a similar test simulation is additionally performed separately for each group member.
3.8.2 Baseline Assessment of Group Intelligence in the Central Nervous System of the RTS

Based on the results of the test simulation of the j-th group of robots in the RTS solving the successively more complicated problems i = 1, 2, 3, ..., F, one can, by analogy with
the approach of the American psychologist Renzulli [98–100], calculate the following estimates of the group intelligence of the RTS:
• the coefficient of intellectual abilities of the j-th group of robots:

Q_j = \frac{1}{FN} \sum_{i=1}^{F} \sum_{n=1}^{N} q_{inj};   (3.59)

• the creativity factor of the j-th group of robots:

V_j = \frac{1}{FN} \sum_{i=1}^{F} \sum_{n=1}^{N} \frac{1}{ΔT_{inj}},   (3.60)
where ΔT_{inj} is the time of the n-th attempt to solve the i-th task by the j-th group of robots;
• the coefficient of motivational involvement M_j of the j-th group of robots is also calculated by formula (3.60); in this case, however, all test tasks (i = 1, 2, 3, ..., F) should be of approximately the same complexity.
Combining the introduced coefficients, the intelligence coefficient of the j-th group of robots in the RTS can be calculated:

IQ_j = k_q Q_j + k_v V_j + k_M M_j,   (3.61)
where $k_q$, $k_v$ and $k_M$ are weight coefficients set by the operator conducting the test simulation, depending on the purpose of the tested RTS.

Recently, additional assessments of the group intelligence of groups of human operators have been proposed that take into account the psychological characteristics of the group members. Similarly, we can introduce additional factors that characterize the group behavior of robots in the RTS. For example, one can introduce a learning coefficient for each j-th group by calculating $IQ_j$ when solving the tasks i = 1, 2, 3,…, F, and then finding the maximum $\max_i\{IQ_j\}$ and the minimum $\min_i\{IQ_j\}$ of all the calculated values. The learning ratio is then

$$L_A = \max_i\{IQ_j\} - \min_i\{IQ_j\}. \qquad (3.62)$$
Of certain interest is the estimation of the stability of decision making of the j-th group of robots in the RTS, which can be characterized by the stability coefficient $S_t$, calculated from the results of n = 1, 2, 3,…, N attempts at the same or similar problem:

$$S_t = \max_n\{IQ_j\} - \min_n\{IQ_j\}. \qquad (3.63)$$
It is also interesting to assess the similarity of the participants of the j-th group of robots in the RTS. This requires every i-th task to be divided into h = 1, 2, 3,…, H subtasks
performed by each h-th robot of the j-th group. Then the $IQ_h$ of each robot is calculated and the maximum $\max_h\{IQ_h\}$ and minimum $\min_h\{IQ_h\}$ values are found. The unanimity coefficient is then

$$U_h = \max_h\{IQ_h\} - \min_h\{IQ_h\}. \qquad (3.64)$$
In addition, the test results can include average estimates of group intelligence, average estimates of the stability of the group's decision making, differences in the consensus of the central nervous systems of the members of the group of robots, and the stability and variability of reasoning.
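The estimates (3.59)–(3.64) can be computed directly from logged test results. The following is a minimal sketch in Python; the names (q, dt, iq_per_task, etc.) and the data layout are assumptions made for illustration rather than part of the described testing complex.

```python
from typing import List

def ability_coefficient(q: List[List[float]]) -> float:
    """Q_j (3.59): mean score q_inj over F tasks and N attempts."""
    F, N = len(q), len(q[0])
    return sum(sum(row) for row in q) / (F * N)

def creative_factor(dt: List[List[float]]) -> float:
    """V_j (3.60): mean of reciprocal attempt times 1/dT_inj."""
    F, N = len(dt), len(dt[0])
    return sum(1.0 / t for row in dt for t in row) / (F * N)

def intelligence_coefficient(Q: float, V: float, M: float,
                             kq: float, kv: float, kM: float) -> float:
    """IQ_j (3.61): weighted combination of the three coefficients."""
    return kq * Q + kv * V + kM * M

def spread(iq_values: List[float]) -> float:
    """Common form of (3.62)-(3.64): max minus min over the given index."""
    return max(iq_values) - min(iq_values)

# Example: learning ratio L_A over task sets and stability S_t over attempts.
iq_per_task = [0.52, 0.61, 0.68, 0.74]      # IQ_j computed for i = 1..F
iq_per_attempt = [0.66, 0.64, 0.67, 0.65]   # IQ_j computed for n = 1..N
L_A = spread(iq_per_task)
S_t = spread(iq_per_attempt)
```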
3.9 Software Package for Testing Group Intelligence Assessment Models

Currently, the problem of optimizing the interaction of robots in a group, taking into account their intelligence when working together, is becoming increasingly important. In Complex Robotic Systems (CRS) built on the basis of SEMS [37], it is necessary to analyze not only the behavior of a single robot, whose central nervous system gives it appropriate behavior, but also the dynamics of the interaction of SEMS within the RS. When assessing the performance of such CRS, a wide range of problems has to be solved. These problems include:

• Evaluation of the ability of the SEMS group constituting the CRS to make proper decisions under uncertainty [101];
• Determination of the optimal number of group members;
• Evaluation of the compatibility of group members, taking into account the static and dynamic features of each robot;
• Taking into account the peculiarities of decision making in the CNS of individual robots, etc.

Procedures commonly used to evaluate the efficiency and performance of a single robot can be based on testing and assessing the quality of the decisions made, in accordance with a particular scale of its ability to reason [87]. When the quality of work in solving a joint task by a group of robots is assessed, testing is complicated by the need to take into account so-called "psychological features", just as the work of a human operator in a group is evaluated [88]. A well-known vector approach to the evaluation of intelligence [89] is based on computing a system of artificial intelligence assessments from test results. Such vector estimates usually contain static probabilistic components that evaluate the ability to solve fuzzy applied problems, and dynamic probabilistic components that evaluate the system's ability to self-learn [90]. It is obvious that this approach can be used for an objective assessment of the suitability of particular robots to work in a particular RS. In this case, the comparison and selection of the best
candidates from the set of tested robots can be based on a numerical evaluation of the results of dynamic testing of group work. In this case, the test results are recorded over the different time intervals into which the entire test run is divided. This is due to the specific differences between group and individual work activities. Therefore, the formulas recommended for calculating the components of the vector estimate of the intelligence of an individual robot cannot be applied directly to assessing the suitability of that robot for group interaction. In particular, as shown in [91, 92], there are factors such as group reaction time, the time required to make correct or important joint decisions, and other static and dynamic characteristics.

By analogy with human–machine systems (HMS), to assess the intelligence of the CNS of robots in a CRS it is necessary to carry out its computer simulation with identification and evaluation of decisions in complex dynamic systems under conditions of uncertainty [20]. However, in this case, the structure of the mathematical model should include methods for evaluating the parameters characterizing the group intelligence of the CRS. In addition, when assessing the quality of work in solving a joint task by a group of robots, testing is complicated by the need to take into account so-called "psychological characteristics", just as the work of a human operator in a group is evaluated. Therefore, it is important to create new systems and algorithms for testing groups of robots in order to assess the group intelligence of a team of robots working together on the basis of SEMS, taking into account their "psychological" features.

In this chapter we propose an approach based on fuzzy mathematical and computer modeling of the behavior of a group of robots in the CRS and of the dynamics of the environment of choice [34]. Using the results of test modeling of a group of robots in the CRS within the system of estimates introduced in [101] makes it possible to take into account the characteristics of the CNS of each robot in the group and the psychology of the group in solving the joint problem. First, these estimates have static and dynamic components that take into account both the ability to make decisions under uncertainty and the ability to learn in the process. Second, they have components that characterize the psychological stability of groups. In some cases, vector estimates can be replaced by average estimates, which make it easy to rank the groups from best to worst.
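The vector estimate described above can be represented as a simple record holding both static and dynamic components. The sketch below is illustrative only; the field names and the averaging rule are assumptions, not the formulation used in [89, 101].

```python
from dataclasses import dataclass

@dataclass
class GroupIntelligenceEstimate:
    """Vector estimate of group intelligence (illustrative layout)."""
    static_decision_quality: float   # ability to solve fuzzy applied problems
    dynamic_learning_rate: float     # ability to self-learn during the test run
    psychological_stability: float   # spread of IQ over repeated attempts

    def as_scalar(self, w=(0.5, 0.3, 0.2)) -> float:
        """Average (scalar) estimate used to rank groups from best to worst."""
        return (w[0] * self.static_decision_quality
                + w[1] * self.dynamic_learning_rate
                + w[2] * self.psychological_stability)

# Ranking several tested groups by their scalar estimate.
groups = {
    "group_A": GroupIntelligenceEstimate(0.71, 0.55, 0.80),
    "group_B": GroupIntelligenceEstimate(0.65, 0.70, 0.60),
}
ranking = sorted(groups, key=lambda g: groups[g].as_scalar(), reverse=True)
```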
3.9.1 A Typical Structure of a Software System for Testing Models of Groups of Robots

The Software Testing Complex (SCT), which provides testing of models of different groups of robots in order to assess their group intelligence, has a tree-type structure (Fig. 3.20). In the SCT, the supervisor (1) maintains a dialogue with the operator (2) through the human–machine interface (3). The supervisor is designed to select operating modes and to control the software modules. Each module is connected to the supervisor via its controller. Modules 4, 5, 6, 7, 8 are expert systems that contain a database (DB), a knowledge base (KB) and
an inference engine (IE) for the model elements of the corresponding modules. In addition, module 4 contains models of control objects, module 5 contains environment models, module 6 contains test models of groups of robots, module 7 contains the random number generator, the test programs and the calculation of intelligence estimates, and module 8 provides model training and correction. Module 9 is a database of test tasks. Module 10 stores the results of all blocks.

The system for testing robot models works as follows. Before starting work, the operator (2), via the human–machine interface (3) under the control of the supervisor (1), fills module 9 with test tasks. In each expert system of modules 4, 5, 6, 7, 8, the DB and the KB are filled with the relevant information. In addition, the operator can adjust the DB and KB, as well as train the expert systems on the results of test runs.

The supervisor then starts module 4. The controller of this module gives the command to create an object model. The object model is created and stored in memory module 10. Other object models are created similarly and are also stored in module 10. At the end of the process of creating object models, the module controller sends a corresponding signal to the supervisor.

After receiving the signal that the creation of control object models is finished, the supervisor starts module 5. The controller of this module gives the command to create an environment model. The IE substitutes the available data on the elements of the environment from the database into the KB rules and creates a model of the environment, storing it in memory block 10. Other environment models are created similarly and are also stored in memory module 10. At the end of the process of creating environment models, the controller subsystem of module 5 sends the appropriate signal to the supervisor. Modules 4 and 5 can start and run in parallel, which speeds up the system as a whole; in this case, the module 6 subsystem runs only after the signals that the creation of both object and environment models is finished have been received.

After receiving the signal that the models of control objects and of the environment have been created, the supervisor starts module 6. The controller of this module gives the command to build the test task model from the control object models and environment models stored in memory module 10. The IE substitutes the available data on the assembly of the tested model from the database into the rules of the KB and creates a model of the test task, which is also stored in memory module 10. At the end of the assembly of the tested model of the robot group, the controller of module 6 transmits a corresponding signal to supervisor 1.

After receiving the signal that the assembly of the tested model of the group of robots is finished, the supervisor starts module 7. The controller of this unit gives the command for testing, launching the random number generator and the test programs. In the test block, the IE substitutes data on the situation in the test problem from the database into the rules of the KB and forms a sequence of situations in the test problem on the basis of a random sample from the random number generator. The module accumulates statistics of the test results and calculates estimates of group intelligence. At the end of the process of calculating
the estimates of group intelligence, the controller of module 7 transmits the corresponding signal to the supervisor.

After receiving the signal that the testing process is finished, the supervisor transmits the test results to the operator via the human–machine interface. After analyzing the test results, the operator decides either to complete the work or to adjust the testing. If necessary, the supervisor starts module 8 by issuing the corresponding command to the controller of this module. The controller of this module gives the command for correction. The IE, substituting data from the database about possible adjustments into the rules of the KB, determines the sequence and type of corrections of the tested models and transmits the corresponding information to module 4, which introduces the corrective amendments into these models. After the adjustment, at the request of the operator through the human–machine interface, the supervisor can instruct the controller of the module to start training, which analyzes the results of the correction by comparing the intelligence estimates obtained before and after the correction, and enters the successful correction values into the database with simultaneous correction of the relevant rules of the KB of module 8.
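The overall sequencing of the SCT modules described in this subsection can be pictured as a small orchestration loop. The code below is a simplified illustration under assumed interfaces (a Module stub with run() and a supervisor function); it is not the actual implementation of the complex.

```python
from concurrent.futures import ThreadPoolExecutor

class Module:
    """Expert-system module with a DB, a KB and an inference engine (stub)."""
    def __init__(self, name):
        self.name = name
    def run(self, inputs=None):
        # In the real complex: the IE substitutes DB facts into KB rules,
        # builds the model/result and stores it in module 10 (result store).
        return {"module": self.name, "inputs": inputs}

def supervisor(results_store: dict):
    objects_mod, env_mod = Module("4: control objects"), Module("5: environments")
    group_mod, test_mod = Module("6: group model assembly"), Module("7: testing")

    # Modules 4 and 5 may run in parallel, which speeds up the system.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_obj = pool.submit(objects_mod.run)
        f_env = pool.submit(env_mod.run)
        results_store["objects"] = f_obj.result()
        results_store["environment"] = f_env.result()

    # Module 6 starts only after both object and environment models exist.
    results_store["group_model"] = group_mod.run(
        (results_store["objects"], results_store["environment"]))

    # Module 7 runs the tests and produces group intelligence estimates.
    results_store["estimates"] = test_mod.run(results_store["group_model"])
    return results_store

results = supervisor({})   # module 10: storage of results of all blocks
```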
3.9.2 Assembly and Adjustment of Models of Control Objects

Module 4 (see Fig. 3.20), under the control of its controller, provides for the assembly and adjustment of models of three types of intelligent robots: a forklift robot for warehouse work, a robot car at an intersection, and an assembler robot of a mirror antenna system. The database of this module stores the structures and elements of the models of these robots. The KB stores the following rules:

• Selection of the structure of different types of robots;
• Selection of models of component parts of robots for the selected structure;
• Formation of the central nervous systems of different robot models;
• Connection of the models of robot component parts according to the selected structure.
The structures of the robot models contain a list of their elements and the connections between them, indicating all inputs and outputs for each type of robot. The elements of any robot model represent data on the state of the model (initial, required, current, permissible), permissible control actions and models of the component parts of the robots. Initial state data is information about the initial coordinates and characteristics of the robot and its components, for example, the arms and hands with fingers of a forklift robot. In addition, there may be information on the initial state of the environmental sensors and of the feedback sensors monitoring the model state. The data on the required state of the model contains a list of step-by-step transitions from the initial states to the required end states, together with transition constraints. The data on the current state of the model contains information from the environment sensors and the feedback state of the model.
The data on the permissible state of the model contains the permissible coordinates and characteristics of the robot and its components. In addition to these data on the state of the robot model and its structure, it contains the values of permissible control actions and information about the models of the robot components. The information about the components of the robot models contains the parameters and the static and dynamic characteristics of the models of the control systems: electric drives, actuators and feedback sensors.

The KB contains rules of the "if–then" type for selecting and assembling the structure of the robot model. The operator sets the robot type via the human–machine interface and the supervisor. The IE substitutes the robot type data selected from the database into the rules of the KB and selects either its structure as a whole or the assembly of its component parts. The formation of the central nervous system of the robot models is carried out by means of control actions in the models of the control systems, based on an analysis of the initial, current, permissible and required states, as well as of the robot components and the permissible control actions. The assembly of robot models from their components is carried out in accordance with the rules for connecting the models of robot components (control systems, electric drives, actuators, feedback sensors) of the selected structure, which is described by the specified parameters and characteristics. Similarly, on the instructions of the operator through the human–machine interface and the supervisor, the IE creates models of other types of robots, placing them in the storage of work results in module 10 (see Fig. 3.20).
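As an illustration of the rule-based selection and assembly described above, the following sketch encodes a few hypothetical "if–then" rules for choosing a robot model structure by robot type. The rule contents, field names and priority scheme are assumptions made for the example, not the actual KB of module 4.

```python
# Hypothetical "if-then" rules: condition on DB facts -> structure proposal.
RULES = [
    {"if": {"robot_type": "forklift"},
     "then": {"structure": ["chassis", "lift", "arm", "hand_with_fingers"]},
     "priority": 2},
    {"if": {"robot_type": "car"},
     "then": {"structure": ["chassis", "steering", "range_sensors"]},
     "priority": 2},
    {"if": {"robot_type": "assembler"},
     "then": {"structure": ["manipulator", "mirror_gripper", "pose_sensors"]},
     "priority": 1},
]

def assemble_robot_model(db_facts: dict) -> dict:
    """Select the structure whose rule matches the current DB facts and has
    the highest priority, then attach the component models."""
    matching = [r for r in RULES
                if all(db_facts.get(k) == v for k, v in r["if"].items())]
    if not matching:
        raise ValueError("no assembly rule matches the given facts")
    best = max(matching, key=lambda r: r["priority"])
    return {"type": db_facts["robot_type"],
            "structure": best["then"]["structure"],
            "components": {part: db_facts.get("components", {}).get(part)
                           for part in best["then"]["structure"]}}

model = assemble_robot_model({"robot_type": "forklift"})
```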
3.9.3 Assembly and Adjustment of Models of the Environment

Module 5 (see Fig. 3.20), under the control of its controller, provides for the assembly and configuration of models of three types of environments in which groups of intelligent robots operate. These are warehouses, road junctions and near-earth satellite orbits. Accordingly, the database of this module has three sections, which store the structures of the environments together with the elements of the models of these environments. The KB also has three sections, which store the rules for selecting the structure of the environment model, the rules for selecting the models of elements for the chosen structure, and the rules for combining the models of elements into the shared model of the environment in which the group of intelligent robots functions. For each type of environment, the environment structures contain a list of the environment model elements and the relationships between them, indicating all inputs and outputs. The elements of the warehouse environment model are the following software modules:

• Dimensions of the premises;
• Schemes of cargo location, their coordinates and dimensions;
• Schemes of passages between the goods, their coordinates and dimensions;
• Schemes of departure from and entry into the warehouse, their coordinates and dimensions;
• Schemes of cargo delivery from the warehouse, their coordinates and dimensions;
• A set of parameters for the required states of the warehouse models, with a list of step-by-step transitions from the initial state to the final one;
• The list of restrictions during transitions;
• A set of parameters of the current state of the warehouse models, with information from the environmental sensors;
• A set of valid states of the warehouse models, with the valid coordinates and characteristics of the warehouses as a whole and of their component parts.

The elements of the "crossroads" environment model are:

• Types of intersections;
• Types of streets adjacent to the intersection;
• Availability of traffic lights;
• Types of traffic lights;
• Schemes of the initial location of the robot cars;
• Dimensions of the robot cars;
• Current coordinates, direction and speed of the robot cars;
• The list of restrictions on the movement of the robot cars;
• A set of permissible coordinates and speeds of the robot cars, etc.

The elements of the "near-earth satellite orbits" environment model are the following software modules:

• Types, parameters and characteristics of the orbits;
• Types and parameters of the disturbing factors;
• Static and dynamic characteristics of the disturbing factors;
• Number and type of satellites;
• Current, valid and desired satellite coordinates;
• The speeds of the satellites and of the mirror system elements installed on them;
• Types, parameters and characteristics of the mirror system;
• Permissible control actions, etc.
The rules for selecting the structure of the model of the functioning environment are "if–then" rules for choosing the scheme for the type of environment specified by the operator through the human–machine interface and the supervisor, rules for selecting the models of environment elements for the selected scheme, and rules for combining the selected models of environment elements into the general model of the environment in which the group of intelligent robots functions. The inference engine (IE) is a software module that, under the control of the controller, performs logical inference over the database pre-filled with current facts according to the rules of the KB, in accordance with the laws of formal logic.
The IE first selects from the KB the rules for choosing the structure of the environment model for the type of robot group specified by the operator and for the type of environment in which it functions. Then, from the selected rules, it selects those that correspond to the current facts of the database, distributes the selected rules by priority and chooses the structure of the environment model that corresponds to the rule with the highest priority. At the next stage, the IE selects the rules for choosing models of environment elements, selects those that correspond to the selected environment structure and to the current database facts, distributes the selected rules by priority and chooses those models of environment elements that correspond to the rules with the highest priority. Then the IE chooses from the KB the rules for combining the selected models of environment elements into a general model of the environment for the functioning of the selected group of intelligent robots, placing it in the module where all the results are stored. Similarly, on the instructions of the operator through the human–machine interface and the supervisor, the IE creates models of other types of environments in which groups of robots function, placing them in the module where all the results are stored.
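The staged selection performed by the IE (filter the rules by the current facts, rank them by priority, take the best match) follows a simple forward-chaining pattern. The sketch below illustrates this pattern with invented rule and fact structures; it is not the IE of module 5 itself.

```python
def select_by_priority(rules, facts):
    """Keep the rules whose conditions hold for the current facts,
    then return the consequent of the highest-priority rule."""
    applicable = [r for r in rules
                  if all(facts.get(k) == v for k, v in r["if"].items())]
    if not applicable:
        return None
    return max(applicable, key=lambda r: r["priority"])["then"]

# Hypothetical environment-structure rules (KB) and current facts (DB).
env_rules = [
    {"if": {"environment": "warehouse"},
     "then": "warehouse_layout_v1", "priority": 3},
    {"if": {"environment": "crossroads", "traffic_lights": True},
     "then": "signalled_intersection", "priority": 2},
    {"if": {"environment": "crossroads"},
     "then": "unsignalled_intersection", "priority": 1},
]
facts = {"environment": "crossroads", "traffic_lights": True}
structure = select_by_priority(env_rules, facts)  # -> "signalled_intersection"
```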
3.9.4 The Assembly of Test Models of Groups of Robots

Module 6 (see Fig. 3.20), under the control of its controller, provides for the assembly of group models of three types of intelligent robots (the forklift robot, the robot vehicle and the assembler robot of a mirror antenna system) functioning in three environments: warehouses, road intersections and near-earth satellite orbits. Accordingly, the database has three sections, which store the layout of the members of the different groups of robots in the possible environments of their operation, their coordinates and robot model schemes, as well as static and dynamic characteristics. In addition, the database stores the initial, permissible, current and final coordinates of the robots, the permissible control actions and the goals of functioning in the different environments, as well as variants of group interaction algorithms for different target tasks. The KB contains the following "if–then" rules:

• Selection of the structure of the robot group models for the purpose of functioning;
• Selection of algorithms for the interaction of robot models in the group under the selected structure;
• Formation of a group model from models of robots of the same and of different types and of their environments.

The IE is a software module that performs inference from the current facts pre-filled in the database according to the rules of the KB, in accordance with the laws of formal logic. The IE first selects from the KB the rules for choosing the structure of the group model of robots, taking into account the purpose of operation. Then, from the selected rules, the ones that correspond to the current facts of the database (the layout of the members of the group of robots in the model of the environment of their functioning, their coordinates,
parameters of the robot models, as well as their static and dynamic characteristics) are selected. The IE distributes the selected rules by priority and chooses the structure of the robot group model that corresponds to the rule with the highest priority. At the next stage, the IE selects the rules for choosing the algorithms of interaction of the robot models for the selected structure and purpose of operation, taking into account the current facts (parameters of the robot models, their static and dynamic characteristics, the initial, permissible, current and final coordinates of the robot models, and the permissible control actions). Then the IE distributes the selected rules according to priorities and selects those algorithms of interaction of the robot models in the environment that correspond to the rules with the highest priority. Next, the IE selects from the KB the rules for forming a group model from robot models of the same and of different types and from their environments, taking into account the location of the groups in the surrounding space and the robot movements in the specified directions. The operator prioritizes the selected rules, and the rules for assembling the model of the group of robots in the environment that correspond to the highest priority are chosen. After that, the group model and the environment of its functioning are assembled. This model is stored in the result store.
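Conceptually, module 6 composes the stored robot models and the environment model into a single test model. A possible, purely illustrative composition is sketched below; the structure of the resulting dictionary and the priority-based choice of an interaction algorithm are assumptions.

```python
def assemble_group_test_model(robot_models, environment, interaction_rules, facts):
    """Compose a group test model from robot models, an environment model
    and the highest-priority interaction algorithm matching the facts."""
    applicable = [r for r in interaction_rules
                  if all(facts.get(k) == v for k, v in r["if"].items())]
    algorithm = (max(applicable, key=lambda r: r["priority"])["then"]
                 if applicable else "default_follow_the_leader")
    return {
        "robots": robot_models,              # models built by module 4
        "environment": environment,          # model built by module 5
        "interaction_algorithm": algorithm,  # chosen by the KB rules
        "goal": facts.get("goal"),
    }

group_model = assemble_group_test_model(
    robot_models=[{"type": "forklift", "id": 1}, {"type": "forklift", "id": 2}],
    environment={"type": "warehouse", "layout": "warehouse_layout_v1"},
    interaction_rules=[{"if": {"goal": "clear_room"},
                        "then": "parallel_zones", "priority": 1}],
    facts={"goal": "clear_room"},
)
```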
3.9.5 Testing Groups of Robots

Module 7 provides for the testing of models of groups of forklift robots, robot cars and assembler robots functioning in the corresponding environment models, for example, warehouses, road intersections and near-earth satellite orbits. The tested models of robot groups are formed with the help of the module for the assembly of tested models of robot groups and stored in module 10 (see Fig. 3.20). The test results are used to calculate estimates of their group intelligence. The elements of the DB of this module are: the test task; possible limitations and obstacles, their parameters and characteristics; test algorithms; formulas for calculating estimates of group intelligence with adjustable coefficients; the test time; the number of attempts to perform the test task; types of failures, accidents and other collisions; and formulas for calculating their number [108]. The estimates of group intelligence can be calculated using expressions (3.59–3.61). The KB is the following set of rules:

• Selection of test tasks;
• Selection of possible constraints and obstacles with their characteristics and parameters;
• Selection of test algorithms;
• Choice of formulas for calculating estimates of group intelligence and of the adjustable coefficients.

The IE is a software module that performs logical inference from the KB rules in accordance with the current facts from the database and the laws of formal logic. At the beginning, the IE selects from the KB the rules that are responsible for the
selection of test tasks for the test objectives set by the operator and for the group model and operating environment stored in module 10 (see Fig. 3.20). Then, from the selected rules, the rules that correspond to the current facts (types of obstacles and restrictions, test time, number of attempts to complete the test task, types of failures, accidents and other collisions) are compiled. Next, the selected rules are prioritized, and the test task corresponding to the rules with the highest priority is selected.

At the next stage, the IE selects the rules for choosing the possible constraints and obstacles, with their characteristics and parameters, for the selected test task and for the group model and operating environment stored in module 7 (see Fig. 3.20). From the selected rules, the IE selects those that correspond to the current facts (possible constraints and obstacles, their parameters and characteristics) from the database, distributes the selected rules according to priorities and selects those constraints and obstacles, with their parameters and characteristics, that correspond to the rules with the highest priority.

Next, the IE selects from the KB the rules for choosing the test algorithms, based on the selected test problem and the possible constraints and obstacles. From the selected rules, the IE selects those that correspond to the current facts (formulas for calculating the estimates of group intelligence) from the database, distributes the selected rules according to priorities, selects the test algorithm to which the rules with the highest priority correspond and transmits it to the test block. At the same time, the identifier of the selected algorithm is transmitted to the block for calculating intelligence estimates. Then, the IE selects from the KB the rules for choosing the formulas for calculating the estimates of group intelligence and the adjustable coefficients for the selected test algorithm. From the selected rules, the IE selects those that correspond to the current facts (types of failures, accidents and other collisions, and formulas for calculating their number). It then distributes the selected rules according to the priorities, selects the formulas for calculating the estimates of group intelligence and the adjustable coefficients that correspond to the rules with the highest priority, and transmits them to the estimate calculation block of module 7.

For the chosen test algorithm, the test block uses the random number generator to substitute obstacles and restrictions, with their parameters and characteristics (probability of occurrence, coordinates, etc.), into the test algorithm. In addition, the test block accumulates statistics of collisions and other incidents and then passes them to the block of module 7 that calculates the intelligence estimates. In module 7, under the control of the controller, the estimates of group intelligence are calculated from the accumulated test statistics using the formulas selected by the IE and are stored in a special memory section of module 10, which also stores the identifier of the test algorithm used, as well as in the memory of the training module.
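The randomized test run performed by module 7 can be pictured as a loop that injects obstacles sampled by the random number generator, records scores, attempt times and collisions, and then feeds the accumulated statistics into the intelligence formulas. The sketch below is an assumed simplification: the run_attempt simulator and the statistics layout are invented for illustration.

```python
import random

def run_attempt(group_model, obstacles):
    """Placeholder for one simulated attempt: returns (score, time, collisions)."""
    difficulty = len(obstacles)
    score = max(0.0, 1.0 - 0.1 * difficulty + random.uniform(-0.05, 0.05))
    return score, 10.0 + 2.0 * difficulty, random.randint(0, difficulty)

def test_group(group_model, tasks, n_attempts, obstacle_pool, rng=random.Random(0)):
    """Accumulate test statistics over F tasks and N attempts each."""
    stats = {"q": [], "dt": [], "collisions": 0}
    for _task in tasks:
        q_row, dt_row = [], []
        for _ in range(n_attempts):
            # The RNG substitutes obstacles/restrictions into the test scenario.
            obstacles = rng.sample(obstacle_pool, k=rng.randint(1, len(obstacle_pool)))
            score, dt, hits = run_attempt(group_model, obstacles)
            q_row.append(score)
            dt_row.append(dt)
            stats["collisions"] += hits
        stats["q"].append(q_row)
        stats["dt"].append(dt_row)
    return stats

stats = test_group({"robots": 2}, tasks=range(3), n_attempts=4,
                   obstacle_pool=["box", "pillar", "ramp", "spill"])
# stats["q"] and stats["dt"] feed formulas (3.59)-(3.61) for Q_j, V_j, IQ_j.
```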
3.9.6 Model Training and Correction

Module 8 provides for the calculation of corrections according to the test results and for the correction of the models produced in modules 4, 5 and 6. The DB contains the acceptable and desired estimates of group intelligence for the different test tasks, as well as sets of possible corrections of the models. The KB contains "if–then" rules for selecting corrections based on an analysis of the values of the calculated estimates. The IE selects from the knowledge base all the correction rules corresponding to the identifier of the testing algorithm used, stored in module 9. These actions are performed on a command of the controller, launched at the operator's request through the human–machine interface and the supervisor. From the selected rules, the IE selects those corresponding to the current facts (the actual and desired estimates of group intelligence corresponding to the identifier of the test algorithm used). The IE then selects the correction rules with the highest priority and passes them to module 8 (corrections). The selected corrections can be applied to the tested models via module 8 at the operator's request. In addition, if the operator wishes, a training module can be launched that analyzes the results of the correction by comparing the intelligence scores obtained before and after the correction. Successful correction values are entered into the databases of modules 9 (training) and 8 (corrections), with simultaneous correction of the corresponding rules in the knowledge base.
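A possible reading of this correction-and-training cycle is sketched below: a correction is selected when the measured estimate falls short of the desired one, and it is kept only if re-testing shows an improvement. The thresholds, rule format and the retest hook are assumptions made for illustration.

```python
def select_correction(correction_rules, actual_iq, desired_iq, algorithm_id):
    """Pick the highest-priority correction rule applicable to the shortfall."""
    applicable = [r for r in correction_rules
                  if r["algorithm_id"] == algorithm_id
                  and actual_iq < desired_iq - r.get("tolerance", 0.0)]
    return max(applicable, key=lambda r: r["priority"], default=None)

def train_on_correction(model, correction, retest, baseline_iq):
    """Apply a correction, re-test, and keep it only if the estimate improves."""
    corrected = dict(model, **correction["patch"])
    new_iq = retest(corrected)
    if new_iq > baseline_iq:
        return corrected, new_iq, True    # successful value -> store in DB/KB
    return model, baseline_iq, False      # discard the correction

rules = [{"algorithm_id": "warehouse_v1", "priority": 2, "tolerance": 0.02,
          "patch": {"max_speed": 0.8}}]
chosen = select_correction(rules, actual_iq=0.61, desired_iq=0.75,
                           algorithm_id="warehouse_v1")
```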
3.10 Conclusion

The problem of finding the optimal algorithm of situational control always arises when some group of SEMS must perform some work together. The natural restriction on the time of optimal decision making for situational control of the SEMS group in real time imposes restrictions on the number of members of the controlled group and on the distances between them, associated with the dynamics of the environment of choice and with the dynamics of the controllability of the SEMS themselves. The classification of situations, which consists in assigning the current situation to one or several classes corresponding to some control, makes it possible to simplify and accelerate the planning of situational control of the SEMS group. If the resulting solution to the classification problem is unique and the selected class of situations requires some definite impact on the objects, then the control associated with this class is applied to the objects. At the same time, one of the prerequisites for the effective application of the situational approach must be observed: the number of possible control decisions must be significantly less than the number of possible situations [5]. It is therefore necessary, before making a decision about the plan of situational control, to simulate the control of the SEMS group, for example, on the basis of fuzzy mathematical modeling of poorly formalized processes and systems, which allows for step-by-step
construction of the path, with the possibility of going back and discarding non-effective sections of the path. Naturally, such a search for the optimal plan of situational control based on modeling requires additional computing power and time from the control system. Significant progress in this direction can be provided by parallelization of the calculations, i.e. simultaneous traversal of all possible variants of the plan of situational control with a subsequent decision on optimality.

The considered algorithm makes it possible to optimize the work of loader robots clearing a room of the objects in it when the robots are used sequentially. However, analysis of the algorithm shows that if the objects q1, q2,…, qn turn out to be of different types, they can be moved by robots of different types working almost in parallel (simultaneously). In this case, to exclude emergency situations and breakdowns at intersections of the trajectories of movement, the existing current dynamic model of the robots' operability in the robot control must be supplemented with a set of valid control instructions. For this, for example, it is possible, based on the purpose of a particular robot, to compile a list of possible instructions and then, by mathematical and computer simulation of the PDSCR (the permitted dynamic space of robot configurations), to reveal a set of acceptable instructions of group behavior. In solving this problem, it is necessary to take into account the dynamic characteristics of the robots, which can be optimized by adjusting the parameters of the automatic control systems of the robots; this is a complex optimization problem with nonlinear constraints, and its solution requires separate consideration. In addition, in some cases, when implementing systems of group control of robots, some instructions issued by the upper-level ACS (automatic control system) may not be clear to the ACS of an individual robot, even though the simulation results classified them as valid. This, for example, may be due to the incomplete adequacy of the models used in the PDSCR. Such instructions can be partially removed from the permissible ones by semantic analysis of the instructions for correctness and non-contradiction, and by organizing a dialogue between the interacting ACS of the robots.

Wherever there is a group of complex intelligent technical objects that must work together to perform some work or solve some problem, there is the problem of finding the optimal algorithm of situational control. The natural restriction on the synthesis time of the algorithm for finding the optimal solution in real time imposes restrictions on the number of members of the controlled group and on the distances between them, associated with the dynamics of the environment of choice and the dynamics of controllability of the robots themselves. Classification of situations makes it much easier and faster to plan the situational control of a group of robots. Classification of situations consists in attributing the current situation to one or more classes corresponding to some control. If the resulting solution to the classification problem is unique and the selected class of situations requires some definite impact on the objects, then the control associated with this class is applied to the objects. At the same time, one of the prerequisites for the effective application of the situational approach must be observed: the number of possible control decisions must be significantly less than the number of possible situations.
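The situational-control shortcut described above (map the current situation to a class, then apply the control associated with that class) can be illustrated by a very small dispatcher. The class names, feature checks and controls below are invented for the example; they do not reproduce the actual classification used for SEMS groups.

```python
# Hypothetical situation classes and the control associated with each class.
SITUATION_CLASSES = {
    "free_path":          lambda s: s["min_distance"] > 2.0,
    "possible_collision": lambda s: 0.5 < s["min_distance"] <= 2.0,
    "blocked":            lambda s: s["min_distance"] <= 0.5,
}
CONTROL_BY_CLASS = {
    "free_path": "keep_plan",
    "possible_collision": "yield_to_higher_priority",
    "blocked": "stop_and_replan",
}

def classify_and_control(situation: dict):
    """Assign the current situation to the matching classes and return
    the associated controls (a unique class yields a unique control)."""
    classes = [name for name, test in SITUATION_CLASSES.items() if test(situation)]
    return classes, [CONTROL_BY_CLASS[c] for c in classes]

classes, controls = classify_and_control({"min_distance": 1.2})
# -> (['possible_collision'], ['yield_to_higher_priority'])
```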
If this condition is not satisfied, the problem of analyzing the current situation in the environment of choice O(t_k) has to be solved. It becomes a problem of estimating the previous control for the purpose of deciding whether to change the plan of situational control. Adjusting the plan of situational control at each subsequent step does not necessarily lead to the construction of the optimal plan, since the step-by-step path to the goal selected in this way is not guaranteed to be optimal as a whole (the control process may not be Markovian). Therefore, before making a decision on the plan of situational control, it is necessary to simulate the control of the group of robots, for example, on the basis of fuzzy mathematical modeling of poorly formalized processes and systems. This allows step-by-step construction of the path with the possibility of going back and discarding the ineffective parts of the path (see Fig. 3.8, where bold denotes the best way). Naturally, such a search for the optimal plan of situational control based on modeling requires additional computing power and time from the control system. Significant progress in this direction can be provided by parallelization of the calculations, i.e. simultaneous traversal of all possible variants of the plan of situational control with a subsequent decision on optimality.

Wherever there is a group of complex intelligent technical objects that must work together to perform some work or solve some problem, there is the problem of finding the optimal decision making algorithm for each control object, for example an autonomous robot.
Fig. 3.8 Example of finding the optimal solution
Fig. 3.9 Network fragments
Fig. 3.10 Fragments of the influence diagram
Fig. 3.11 Fragment of ID in the structure without a coordinator and leader
At the same time, the formation of the behavior of an autonomous robot is based on pragmatic information obtained by successive transformation of the measurement information from the robot sensors into syntactic, then semantic, and finally pragmatic information in the CNSR (central nervous system of the robot). As a result, the situational control system forms the behavior of the robot in the group, i.e. determines the sequence of its actions that, in an ever-changing environment, are necessary to achieve the given goal. A robot's decision about its own behavior can be reflexive or conscious. In reflexive decision making, the robot, on the basis of semantic data, selects from the available genetic algorithms the behavioral algorithm with the highest priority.
Fig. 3.12 Fragment of ID in the structure with a coordinator determining the effectiveness for each robot
Fig. 3.13 Fragment of ID in the structure with a coordinator determining the usefulness for each robot
Fig. 3.14 Fragment of ID in a structure with a coordinator determining security for each robot
Fig. 3.15 Fragment of ID in a structure with a leader determining the effectiveness for each robot
Fig. 3.16 Fragment of ID in the structure with a leader determining the usefulness for each robot
Fig. 3.17 Fragment of ID in a structure with a leader determining security for each robot
Fig. 3.18 Fragment of ID in the structure with a coordinator determining the usefulness and a leader determining safety for each robot
When making a conscious decision, the robot analyzes the coordinator's goal of functioning as well as the behavior and intentions of neighboring robots, selects from the semantic data on the environment the pragmatic data related to the purpose of functioning, and then selects from the genetic algorithms those that lead most optimally to the goal of functioning. Different approaches to decision making can be used: deductive, inductive and abductive. The abduction method is the fastest, by analogy with intuition, but its reliability depends on the completeness of the database of good solutions from past experience, i.e. it strongly depends on the time of operation of such robots in similar environmental conditions. In determining the optimal solution under conditions of incomplete certainty on the basis of binary relations, the latter can be expressed as logical equations in the Zhegalkin algebra reduced to a matrix form, which makes it easy to parallelize the process of finding the optimal solution.

A comparison of the features of using the considered Influence Diagrams for making decisions in the given control structures of a group of robots shows that all of them can be used in situational group control systems. The feasibility of using specific structures depends on the tasks being solved by the group, the properties of the environment in which the group functions, the characteristics of the group members and the resources available for the realization of the control system. The complexity of constructing and investigating influence diagrams depends on the type of group control structure and is determined by the complexity of the relationships of the security and utility vertices among themselves and with the efficiency vertices, as well as by the relationships of the utility and security vertices with the decisions.
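The Zhegalkin-algebra representation mentioned above rewrites Boolean relations using only AND and XOR, which maps naturally onto bitwise operations and is therefore easy to evaluate in parallel. The fragment below only illustrates this idea on an invented two-variable relation; it is not the matrix formulation used in the cited works.

```python
# Zhegalkin (algebraic normal) form: polynomials over AND (&) and XOR (^).
# Example relation: r(x, y) = x OR y, whose Zhegalkin polynomial is
# r = x ^ y ^ (x & y).

def zhegalkin_or(x: int, y: int) -> int:
    """x OR y expressed only through XOR and AND (bitwise, so many
    evaluations of the relation fit into one integer word at once)."""
    return x ^ y ^ (x & y)

# Packing many candidate solutions into the bits of one word parallelizes
# the check of the logical equation r(x, y) = 1 over all of them.
x_bits = 0b1010  # four candidate assignments of x
y_bits = 0b0110  # four candidate assignments of y
satisfied = zhegalkin_or(x_bits, y_bits)   # 0b1110: which candidates satisfy r
```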
Fig. 3.19 Fragment of ID in a structure with a coordinator determining security and a leader determining utility for each robot
Fig. 3.20 Structure of the software package. 1—supervisor; 2—operator; 3—human–machine interface; 4—assembly and adjustment of models of control objects; 5—assembly and adjustment of models of the environment; 6—assembly of test models of groups of robots; 7—testing of groups of robots; 8—model training and correction; 9—test tasks; 10—storage of the results of work of all blocks
Therefore, it is advisable to perform the process of finding the optimal solution for situational control of a group of robots at each control step for each robot in the group, with modeling of the behavior of the entire group when the optimal plan of situational control is finally chosen. In a group control structure without a coordinator and a leader, higher requirements are placed on the information-measuring and control systems of the robots than in a structure with a leader and a coordinator. The research results can be used in the situational control of a group of SEMS interacting with one another without human participation in a variety of tasks, for example, in driving vehicles that perform coordinated movements, in a group of assembler robots performing joint operations, etc.

The principles and stages of decision making for the safe movement control of a group of robots constructed on the basis of SEMS modules have been considered. It is suggested that, when forming a set of acceptable controls (SEMS behavior instructions), it is first advisable to identify and record in the SEMS ACS database the acceptable values of the parameters of the individual group members, as well as their static and dynamic characteristics. The next step is to identify a set of acceptable behavior instructions for the group members by mathematical and computer modeling of the dynamic configuration space. The chapter describes a mathematical formulation of the problem of controlling the safe movement of a group through an intersection, taking into account the rules of passage. It is proposed to solve this problem using forecasting and modeling of the sequences of situations during the transition from one layer to another until the final goal is reached. When solving decision making tasks of safe motion control of a group of robots on the basis of their priorities, the construction of the motion control algorithm should start, at each step of movement, with determining the possibility of a collision; the robot with the higher priority is then commanded to pass the intersection, while the other robot is commanded to delay its passage for the time the first robot takes to cross, with the maximum braking time depending on speed and road conditions. To solve such problems, multi-step generalized mathematical programming and software tools such as A-life can be used.

When selecting an optimal route for unmanned vehicles, it is necessary to minimize the probability of an accident. For this purpose, various algorithms are developed to assess accident risks at each route planning stage, considering the "observed" area of the terrain. Risk assessments are predictive in nature, since their uncertainty is associated with many factors that cannot be estimated accurately. Therefore, when a database of reference route segments is created, the probabilities of an accident on such segments are determined at the ACS design stage on the basis of simulation modeling and statistical data. Under limited statistical data, it is reasonable to predict accident risks using logical–linguistic and logical–probabilistic methods. For this purpose, databases of reference route segments are created, containing the qualitative attributes of the segments and the probabilities of an accident obtained after modeling.
When the ACS of the SEMS determines the probability of an accident on a route, its sensory system obtains the quantitative values of the attributes on the route segments. After their fuzzification, the ACS finds the values of the membership functions for the specified attributes and creates rows similar to the reference rows of the database. For each route segment, the ACS identifies the closest reference row from the database and assigns to this segment the probability of an accident corresponding to that reference row. Using these probabilities of accidents on the route segments, the ACS calculates the probability of an accident on the entire route using appropriate rules (calculating the probability of logical OR functions). When selecting an optimal route, a trade-off between travel time and the probability of an accident must be observed by minimizing the following performance criterion: the sum of the travel time and the probability of an accident, multiplied by given significance factors. These significance factors are adjusted by experts and entered into the ACS database at the formation stage. Usually, the performance criterion has an interval value, so the choice of an optimal route will depend on the expert's preferences. Along with traditional approaches, the problems under consideration will require artificial intelligence technologies for determining the probabilities of accidents on the reference segments. We emphasize that previously optimal routing problems were considered without the probabilities of accidents.

The introduced estimates of the results of the test modeling of the group of robots in the RTS make it possible to take into account the characteristics of the CNS of each robot of the group and the psychology of the group in solving the joint problem. First, these estimates have static and dynamic components that take into account both the decision making capacity under uncertainty and the ability to learn in the process. Second, they have components that characterize the psychological stability of groups. In some cases, vector scores can be replaced by average scores, which make it easy to rank groups from best to worst. The presented version of the testing of robot groups in RS (Robotic Systems) was relatively simple and did not take into account a number of design features and real operating conditions. It seems that a more subtle modeling of RTS features, and the study of the variability in time of the proposed group assessments based on the results of testing of candidates, will help to identify a number of other group psychological features important for work in the RTS, such as indecision, nervousness, patience, scrupulousness, etc. [8]. An equally important task is the generation of the most adequate test tasks for specific RTS. It is also important to note that the assessment of the group intelligence of robots needs to be improved in terms of assessing learning, professionalism, psychological stability and compatibility with the human observer or supervisor controlling the process, which in many cases can be key to making the right decisions under conditions of uncertainty.

A generalized structure of the software package for testing models of groups of intelligent robots, including expert systems for creating dynamic models of interacting robots and environments, has been presented. It allows the group intelligence of Complex Robotic Systems (CRS) to be evaluated by computer modeling, taking into account the characteristics of the group members and the environment of their functioning.
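Returning to the route-selection criterion described earlier in this conclusion, the following sketch combines segment accident probabilities by the logical-OR rule and trades them off against travel time with expert significance factors. The segment data, the factor values and the independence assumption in the OR rule are illustrative assumptions only.

```python
def route_accident_probability(segment_probs):
    """P(accident on route) = P(OR of segment accidents), assuming independence."""
    p_no_accident = 1.0
    for p in segment_probs:
        p_no_accident *= (1.0 - p)
    return 1.0 - p_no_accident

def route_criterion(travel_time, accident_prob, k_time=1.0, k_risk=100.0):
    """Performance criterion: weighted sum of travel time and accident probability."""
    return k_time * travel_time + k_risk * accident_prob

# Two candidate routes: (travel time in seconds, accident probabilities per segment).
routes = {
    "short_but_risky": (120.0, [0.02, 0.05, 0.01]),
    "long_but_safe":   (180.0, [0.005, 0.004]),
}
best = min(routes,
           key=lambda r: route_criterion(routes[r][0],
                                         route_accident_probability(routes[r][1])))
```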
The introduction of estimates of the results of the test simulation of a group of robots in the CRS solving a joint problem makes it possible to take into account the characteristics of each
robot in the group and the psychology of the group. Equipping the complex with the training and correction module allows the operator to correct the robot models on the basis of the test results and thereby improve the quality of the CRS's work. It seems that a more subtle modeling of CRS characteristics, and the study of the variability in time of the proposed group assessments based on the results of testing of candidates, will help to identify a number of new group psychological features important for work in the CRS, such as indecision, nervousness, patience, scrupulousness, etc. [8]. It is also important to note that the assessment of the group intelligence of robots should be improved in terms of assessing learning, professionalism, psychological stability and compatibility with the human observer (operator) or with the supervisor-controller of the technological process. Such an assessment can in many cases be the key to making the right decisions in the face of uncertainty.
References 1. Gorodetskiy, A.E., Tarasova, I.L.: Situational control a group of robots based on SEMS. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart Electromechanical Systems: Group Interaction/Studies in Systems, Decision and Control, vol. 174, pp. 9–18. Springer International Publishing. https://doi.org/10.1007/978-3-319-99759-9-2 2. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Decision-making in central nervous system of a robot. Info. Cont. Syst. 1, 21–30 (2018). https://doi.org/10.15217/issnl684-8853.2018. 1.21 (in Russian) 3. Vorob’ev, V.V.: Logical inference and action planning elements in robot groups. In: Proceedings 16th National Conference on Artificial Intelligence KII-2018, Moscow, vol. 1, pp. 88–96 (2018) (In Russian) 4. Ya, I.D., Shabanov, I.B.: Model of application of coalitions of intelligent mobile robots with limited communications. In: Proceedings 16th National Conference on Artificial Intelligence KII-2018, Moscow, vol. 1, pp. 97–105 (2018) (In Russian) 5. Pospelov, D.A.: Situation Management: Theory and Practice [Situacionnoe upravlenie: Teoriya i praktika], Nauka, M. 286p. (1986) (In Russian) 6. Kunc, G., Donnel, O.S.: Management: System and Situation Analysis of Control Functions [Upravlenie: sistemnyj i situacionnyj analiz upravlencheskih funkcij]. Progress, 588p. (2002) (In Russian) 7. Sokolov, B., Ivanov, D., Fridman, A.: Situational Modeling for Structural Dynamics Control of Industry-Business Processes and Supply Chains//Intelligent Systems: From Theory to Practice, Sgurev, V., Hadjiski, M., Kacprzyk, J. (eds.)., pp. 279–308. Springer-Verlag, London, Berlin, Heidelberg (2010) 8. Ya, F.A.: Situational Control of the Structure of Industrial and Natural Systems. Methods and Models. LAP, Saarbrucken, Germany (2015) 9. Mishin, S.P.: Optimal Control Hierarchies in Economic Systems [Optimal‘ny‘e ierarxii upravleniya v e‘konomicheskix sistemax]. M. PMSOFT (2004) (In Russian) 10. Kalyaev, I.A., Kapustyan, S.G, Gaiduk, A.R.: Self-organizing distributed control systems for groups of intelligent robots built on the basis of the network model [Samoorganizuyushhiesya raspredelenny‘e sistemy‘ upravleniya gruppami intellektual‘ny‘x robotov, postroenny‘e na osnove setevoj modeli]. UBS 30(1), 605–639 (2010) (In Russian) 11. Kalyaev, I.A., Gaiduk, A.R., Kapustian, S.G.: Control of a team of intellectual objects based on schooling principles [Upravlenie kollektivom intellektual‘ny‘x ob“ektov na osnove stajny‘x principov]. Bull. Scient. Center Russian Acad. Sci. 1(2), 20–27 (2005) (In Russian)
12. Kapustian, S.G.: Decentralized method of collective distribution of goals in the group of robots [Decentralizovanny‘j metod kollektivnogo raspredeleniya celej v gruppe robotov]. In: SG Kapustian Proceedings of the higher educational institutions, Electronics, 2. pp. 84–91 (2006) (In Russian) 13. Kalyaev, I.A.: Principles of collective decision making and control in the group interaction of robots [Principy‘ kollektivnogo prinyatiya resheniya i upravleniya pri gruppovom vzaimodejstvii robotov]. In: Mobile Robots and Mechatronic Systems: Mat. Scientific Schools Conference. Publishing House of Moscow State University, pp. 204–221 (2000) (In Russian) 14. Kapustian, SG.: The method of organizing multi-agent interaction in distributed control systems of a group of robots when solving the area coverage problem [Metod organizacii mul‘tiagentnogo vzaimodejstviya v raspredelenny‘x sistemax upravleniya gruppoj robotov pri reshenii zadachi pokry‘tiya ploshhadi]. Artificial Intell. 3, 715–727 (2004) (In Russian) 15. Ya, F.A.: SEMS-based control in locally organized hierarchical structures of robots collectives. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds) Smart Electromechanical Systems: The Central Nervous System, Studies in Systems, Decision and Control, vol. 95, pp. 31–47. Springer International Publishing, Switzerland 16. Vasiliev, S.N., Zherlov, A.K., Fedosov, E.A., Fedunov, B.E.: Intellectual control of dynamic systems [Intellektual‘noe upravlenie dinamicheskimi sistemami]. FIZMATLIT, 352 p. (2000) (In Russian) 17. Prishchepa, M.V.: Development of a user profile with account of the psychological aspects of human interaction with an information mobile robot [Razrabotka profilya pol‘zovatelya s uchetom psixologicheskix aspektov vzaimodejstviya cheloveka s informacionny‘m mobil‘ny‘m robotom]. Tr. SPIIRAN 21, 56–70 (2012). (In Russian) 18. Ladygina, I.V.: Social and ethical problems of robotics [social no-eticheskie problemy robototechniki]. Vyatka State Univ. Bull. 7, 27–31 (2017) (In Russia) 19. Karpov, V.E.: Emotions and temperament of robots behavioral aspects. J. Comp. Syst. Sci. Int. 5, 126–145 (In Russian) 20. Gorodetskiy, A.E., Tarasova, I.L.: Fuzzy mathematical modeling of poorly formalized processes and systems [Nechetkoe matematicheskoe modelirovanie ploxo formalizuemy’x processov i sistem ]. SPb.: Publishing house Polytechnic. Un-t 336 p. (2010) (In Russian) 21. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Safe Control of SEMS at Group Interaction of Robots. Materialy 10-j Vserossijskoj mul’tikonferencii po problemam upravlenija [Proceedings of the 10th all-Russian multi-conference on governance]. Divnomorskoye, Gelendzhik, vol. 2, pp. 259–262 (2017) (In Russian) 22. Shkodyrev, V.P.: Technical systems control: from mechatronics to cyber-physical systems. In: Gorodetskiy, A.E. (eds) Studies in Systems, Decision and Control: Smart Electromechanical Systems, vol. 49, pp. 3–6. Springer International Publishing, Switzerland (2016) 23. Gorodetskiy, A.E.: Smart electromechanical systems modules. In: Gorodetskiy, A.E. (ed) Studies in Systems, Decision and Control: Smart Electromechanical Systems, vol. 49, pp. 7– 15. Springer International Publishing, Switzerland (2016) 24. Kulik, B.A., Ya, F.A.: Logical analysis of data and knowledge with uncertainties in SEMS. In: Gorodetskiy, A.E. (ed.) Studies in Systems, Decision and Control: Smart Electromechanical Systems, vol. 49, pp. 45–59. Springer International Publishing, Switzerland (2016) 25. 
Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Logical-mathematical model of decision making in central nervous system SEMS. In: Gorodetskiy, A.E., Kurbanov, V.G (eds) Smart Electromechanical Systems: The Central Nervous System, pp. 51–60. Springer International Publishing AG (2017). https://doi.org/10.1007/978-3-319-53327-8_4 26. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Behavioral decisions of a robot based on solving of systems of logical equations. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds) Smart Electromechanical Systems: The Central Nervous System, pp. 61–70. Springer International Publishing AG (2017). https://doi.org/10.1007/978-3-319-53327-8_5 27. Akkof, R., Jemeri, F.: O celeustremlennyh sistemah [About purposeful systems]. Sov Radio Publication, Moscow, 269 p. (1974) (In Russian)
28. Gorod A., Fridman A., Saucer B.: A quantitative approach to analysis of a system of systems operational boundaries. In: Proceedings of International Congress on Ultra Modern Telecommunications and Control Systems (ICUMT-2010), October 18–20, Moscow, pp. 655–661 (2010) 29. Melikhov, A.N., Berstein, L.S., Korovin, S.I.: Situacionnye sovetujushhie systemy s nechetkoj logikoj [Situational advising systems with fuzzy logic]. Moscow, Science Publication, 272 p. (1990) (In Russian) 30. DeRusso, P.M., Roy, R.J., Close, C.M.: State Variables for Engineers, 608 p. Wiley (1965) 31. Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Methods of synthesis of optimal intelligent control systems SEMS. In: Gorodetskiy, A.E. (ed) Smart Electromechanical Systems, pp. 25– 44. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-27547-5 32. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 946 p. (2001) 33. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Challenges related to development of central nervous system of a robot on the bases of SEMS modules. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds) Studies in Systems, Decision and Control, Smart Electromechanical Systems: The Central Nervous System, vol. 95, pp. 3–16. Springer International Publishing, Switzerland (2017) 34. Gorodetskiy, A.: Fundamentals of the Theory of Intelligent Control Systems [Osnovy teorii intellektual’nyh sistem upravlenija], 313 p. LAP LAMBERT Academic Publishing GmbH@Co. KG Publication (2011) 35. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Logical and probabilistic methods of formation of a dynamic space configuration of the robot group. In: Proceedings of the 10th all-Russian multi-conference on governance [Materialy 10-j Vserossijskoj mul’tikonferencii po problemam upravlenija] Divnomorskoye, Gelendzhik, vol. 2, pp. 262–265 (2017) 36. Yu, K.A.: Analysis of polynomial constraints by the solution tree method [Informacionnoupravljajushhie sistemy] 6(91), 6–9 (In Russian) (2017). https://doi.org/10.15217/issn16848853.2017.6.9 37. Smart Electromechanical Systems/Studies in Systems, Decision and Control, vol. 49, , 277 p., Gorodetskiy, A.E. (ed). Springer International Publishing, Switzerland (2016). https://doi. org/10.1007/978-3-319-27547-5 38. Smart Electromechanical Systems: The Central Nervous Systems/Studies in Systems, Decision and Control., vol. 95, 270 p. Gorodetskiy, A.E., Kurbanov, V.G. (eds). Springer International Publishing, Switzerland (2017). https://doi.org/10.1007/978-3-319-53327-8 39. Iudin, D.B.: Computational Methods of Decision Theory [Vychislitel’nye metody teorii priniatiia reshenii], 320 p. Nauka Publication, Moscow (1989) (In Russian) 40. Lee, S.G., Diaz-Mercado, Y., Egerstedt, M.: Multirobot control using time-varying density functions. IEEE Trans. Robotics 31(2), 489–493 (2015). https://doi.org/10.1109/TRO.2015. 2397771 41. Rubenstein, M., Ahler, C., Nagpal R., Kilobot.: A low cost scalable robot system for collective behaviors. In: Proceedings IEEE International Conference on Robotics Automation (2012) 42. Mondada, F., Gambardella, L.M., Floreano, D., Dorigo, M.: The cooperation of swarm-bots: Physical interactions in collective robotics. IEEE Robot. Autom. Mag. 12(2) (2005) 43. Dorigo, M., Floreano, D., Gambardella, L.M., Mondada, F., Nolffi, S., Baaboura, T., Birattari, M., et al.: Swarmanoid: a novel concept for the study of heterogeneous robotic swarms. IEEE Robot. Autom. Mag. 20(4) (2013) 44. Karpov, V.E.: Control in static swarms. 
Problem statement. In: V11-th International scientificpractical conference “Integrated models and soft computing in artificial intelligence” (2013) (In Russian) 45. Dobrynin, D.A.: Intelligent robots yesterday, today, tomorrow. In: X National Conference on Artificial Intelligence with international participation (25–28 September, Obninsk): Conference Proceedings [X natsional’naia konferentsiia po iskusstvennomu intellektu s mezhdunarodnym uchastiem KII-2006 (25–28 sentiabria, Obninsk)], vol. 2. Moscow. FIZMATLIT Publication (2006) (in Russian)
References
253
46. Gorodetskiy, A., Kurbanov, V., Tarasova, I.: Formation of images based on sensory data of robots. In: PRIPT 2019. Pattern Recognition and Information Processing. Proceedings of the 14th International Conference, 21–23 May, Minsk, Belarus (2019) 47. Davydov, O.I., Platonov, A.K.: Robot and artificial intelligence. Technocratic approach. Preprint IPM im. M. V. Keldysh, 24 p. (2017) (In Russian). https://doi.org/10.20948/prepr2017-112 48. Krysin, L.P.: Types of pragmatic information in the “explanatory dictionary.” Izvestiya RAN seriya literatury I yazyka 74(2), 3–11 (2015). (In Russian) 49. Karmanov, V.G.: Mathematical Programming [Matematicheskoe programmirovanie ]. Fiz.Mat. Literature Publication, 263p. (2004) (In Russian) 50. Svetlov, V.A.: Methodological concept of Charles Pearce’s scientific knowledge: unity of abduction, deduction and induction. Logiko-Filosofskie shtudii 5, 165–187 (2008) (In Russian). ISSN 2071-9183 51. Zhegalkin, I.I.: Arithmetization ymbolic logic [Arifmetizatsiia simvolicheskoi logiki]. Mathematical Coll [Matematicheskii sbornik] 35(3–4) (1928) (in Russian) 52. Gorodetskiy, A.E., Dubarenko, V.V., Erofeev, A.A.: Algebraic approach to the solution of logical control problems. Automation Remote Cont. [Avtomatika i telemekhanika] 2, 127–138 (2000) (In Russian) 53. Gorodetskiy, A.E.: Smart electromechanical systems architectures. In: Gorodetskiy, A.E. (ed.) Smart Electromechanical Systems. Studies in Systems, Decision and Control, vol. 49, pp. 17– 23. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-27547-5_3 54. Gorodetskiy, A.E.: The principles of situational control SEMS Group. In: Gorodetskiy, A.E., Tarasova, I.L. (ed) Smart Electromechanical Systems: Group Interaction/Studies in Systems, Decision and Control, vol. 174, pp. 3–13. Springer International Publishing (2020). https:// doi.org/10.1007/978-3-030-32710-1_1 55. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Situational control of the group interaction of mobile robots. In: Gorodetskiy, A.E., Tarasova, I.L. (eds) Smart Electromechanical Systems: Group Interaction/Studies in Systems, Decision and Control, vol. 261, pp. 91–101. Springer International Publishing (2020). https://doi.org/10.1007/978-3-030-32710-7 56. Romanov, V.P.: Intelligent Information Systems in the Economy: Textbook [Intellektual’nye informacionnye sistemy v ekonomike: Uchebnoe posobie ], Tihomirova, N.P. (ed.), 496 p. Ekzamen Publication, Moscow (2003) (In Russian) 57. Available at: http://hugin.sourceforge.net/ 58. Gorodetskiy, A.E., Tarasova, I.L.: Control and Neural Networks [Upravlenie i nejronnye seti], 312 p. Polytechnic University Publication, St. Petersburg (2005) (In Russian) 59. Gorodetskiy, A.E., Tarasova, I.L.: Smart Electromechanical Systems. Group Interaction. Studies in Systems, Decision and Control, vol. 174, 337p. Springer International Publishing (2018). https://doi.org/10.1007/978-3-319-99759-9. 60. Ziniakov, V.Y., Gorodetskiy, A.E., Tarasova, I.L.: Control of vitality and reliability analysis. In: Gorodetskiy, A.E. (ed.) Smart Electromechanical Systems, pp 193–204. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-27547-5_18 61. Ziniakov, V.Y., Gorodetskiy, A.E., Tarasova, I.L.: System failure probability modelling. In: Gorodetskiy, A.E. (ed.) Smart Electromechanical Systems, pp. 25–44. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-27547-5_4 62. 
Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Reduction of logical-probabilistic and logical-linguistic constraints to interval constraints in the synthesis of optimal SEMS. In: Gorodetskiy, A.E., Tarasova, I.L. (ed.) Smart Electromechanical Systems. Group Interaction, pp. 77–90. Springer International Publishing (2018). https://doi.org/10.1007/978-3-319-997 59-9_7 63. Yevtodyeva, M., Tselitsky, S.: Military unmanned aerial vehicles: trends in development and production. Path. Peace Sec. 57, 104–111 (2019). (In Russian) 64. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: A logical-linguistic routing method for unmanned vehicles with the minimum probability of accidents. Control Sci. 4, 24–30 (2022)
254
3 Group Control
65. Li, C.: Artificial intelligence technology in UAV equipment. In: 2021 IEEE/ACIS 20th International Fall Conference on Computer and Information Science (ICIS Fall), Xi’an, China, pp. 299–302 (2021). https://doi.org/10.1109/ICISFall51598.2021.9627359 66. Xia, C., Yudi, A.: Multi–UAV path planning based on improved neural network. In: 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, pp. 354–359 (2018). https://doi.org/10.1109/CCDC.2018.8407158 67. Varatharasan, V., Rao, A.S.S., Toutounji, E., et al.: Target detection, tracking and avoidance system for low-cost UAVs using AI-based approaches. In: 2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS), Cranfield, UK, pp. 142–147 (2019). https://doi.org/10.1109/REDUAS47371.2019.8999683 68. Zheng, L., Ai, P., Wu, Y.: Building recognition of UAV remote sensing images by deep learning, IGARSS 2020–2020. In: IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, pp. 1185–1188 (2020). https://doi.org/10.1109/IGARSS 39084.2020.9323322 69. Zhang, Y., McCalmon, J., Peake, A., et al.: A symbolic-AI approach for UAV exploration tasks. In: 2021 7th International Conference on Automation, Robotics and Applications (ICARA), Prague, Czech Republic, pp. 101–105 (2021). https://doi.org/10.1109/ICARA51699.2021. 9376403 70. Aggarval, C.: Neural networks and deep learning. Springer International Publishing (2018) 71. Kim, H., Ben-Othman, J., Mokdad, L., et al.: Research challenges and security threats to AI-driven 5G virtual emotion applications using autonomous vehicles. Drones, Smart Dev., IEEE Netw. 34(6), 288–294 (2020). https://doi.org/10.1109/MNET.011.2000245 72. Kim, M.L., Kosterenko, V.N., Pevzner, L.D., et al.: Automatic trajectory motion control system for mine unmanned aircraft. Mining Indus. J. 3(145), 60–64 (2019) (In Russian) 73. Kutakhov, V.P., Meshcheryakov, R.V.: Group control of unmanned aerial vehicles: a generalized problem statement of applying artificial intelligence technologies. Control Sci. 1, 55–60 (2022). https://doi.org/10.25728/cs.2022.1.5 74. Dolgii, P.S., Nemykin, G.I., Dumitrash, G.F.: Unmanned control of vehicles. Molodoi Uchenyi 8.2(246.2), 13–15 (2019) (In Russian) 75. Vlasov, S.M., Boikov, V.I., Bystrov, S.V., Grigor’ev, V.V.: Noncontact local orientation means for robots [Beskontaktnye sredstva lokal’noi orientatsii robotov]. ITMO University, St. Petersburg (2017) (In Russian) 76. Moskvin, V.A.: Risks of Investment Projects [Riski investitsionnykh proektov]. INFRA-M, Moscow (2016) (In Russian) 77. Reshetnyak, O.I.: The methods of investment risk assessment in business planning. Bus. Info. 12, 189–194 (2017). (In Russian) 78. Yu, P.A.: Risk assessment for an investment project. Scient. J. KubSAU 19, 73–98 (2006) (In Russian) 79. Kulik, Yu.A., Volovich, V.N., Privalov, N.G., Kozlovskii, A.N.: The classification and quantitative assessment of innovation project risks. J. Mining Inst. 197, 124–128 (2012). (In Russian) 80. Yu, V.I.: Analysis of quantitative risk assessment methods for investment projects, Trudy. In: Proceedings of 12th Conference “Russian Regions in the Focus of Change” [12-oi konferentsii “Rossiiskie regiony v fokuse peremen”]. Yekaterinburg, pp. 52–61 (2017) (In Russian) 81. Korol’kova, E.M.: Risk Management: Control of Project Risks [Risk-Menedzhment: Upravlenie proektnymi riskami]. Tambov State Technical University, Tambov (2013) (In Russian) 82. Mirkin, B.G.: The Problem of Group Choice [Problema gruppovogo vybora]. 
Nauka, Moscow (1974). (In Russian) 83. Solozhentsev, E.D.: Risk and Efficiency Management in Economics: A Logical-Probabilistic Approach [Upravlenie riskom i effektivnost’yu v ekonomike: logiko-veroyatnostnyi podkhod]. St. Petersburg State University, St. Petersburg (2009) (In Russian) 84. Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Classification of images in decision making in the central nervous system of SEMS. In: Gorodetskiy, A.E., Tarasova, I.L. (eds.) Smart
References
85. 86.
87.
88.
89. 90. 91. 92.
93. 94. 95. 96.
97.
98. 99. 100. 101.
255
Electromechanical Systems. Behavioral Decision Making, Studies in Systems, Decision and Control, vol. 352, pp. 187–196. Springer Nature Switzerland AG (2021) Gorodetskiy, A.E., Kurbanov, V.G., Tarasova, I.L.: Patent RU no. 2756778 (2021) Gorodetskiy, A.E., Tarasova, I.L., Kurbanov, V.G.: Assessment of UAV intelligence based on the results of computer modeling. In: Gorodetskiy, A.E., Tarasova, I.L. (eds) Smart Electromechanical Systems, Studies in Systems, Decision and Control, vol 419, pp 105–116. , Springer Nature Switzerland AG (2022). https://doi.org/10.1007/978-3-030-97004-8_8 Gorodetskiy, A.E., Tarasova, I.L.: Intelligent software for automated testing of sensor systems [Intellektual’nye programmnye sredstva dlya avtomatizirovannyh ispytanij sensornyh sistem]. In: Physical Metrology: Theoretical and Applied Aspects, Gorodetskiy, A.E., Kurbanov, V.G. (eds), pp. 68–74. Publishing House KN (1996) (In Russian) Gorodetskiy, A.E., Al-Rasasbeh, R.T.: Vector estimates of the group activity of operators. In: Collection of Works: Modern Problems of Socio-Economic Development and Information Technology, pp. 63–68. Baku (2004) Al-Kasasbeh, R.T.: Statistical-similar model of organization work for small group information system operators. In: Proceeding of International Carpathian conference ICCC, pp. 217–224 Popchetelev, E.P.: Training for Studying Group, pp. 65–67. Leningrad Technical University News, Leningrad (1988) Antonets1, V.A., Anishkina, N.M.: Measurements and perception in man. In: Machine Systems, 12 p. Preprint of IAP RAS, N 518, Nizhny Novgorod, 12 p. (1999) (In Russian) Yoshida, K., Yokobayashi, M.: Development of AI-based simulation system for man-machine system behavior in accidental situations of nuclear power plant. https://www.semanticscholar. org/paper/Development-of-AI-Based-Simulation-System-for-in-of-Yoshida-Yokobayashi/ 9771efc557abb6aeb13abdaf24d946569d73ea70 Hwa, S.J., Hyung-Shik, J.: Establishment of overall workload assessment technique for various tasks and workplaces. Int. J. Indust. Ergon. 28, pp. 341–353. ISSN 0169-8141 2001 Wataru, K., Hiroshi, U.: Man-Machine System. United States Patent 5247433 Eykhoff, P.: System Identification, Parameter and State Estimation. Wiley, London (1974) Gorodetskiy, A.E.: Fuzzy decision making in design on the basis of the hubituality. In: Reznik, L., Dimitrov, V., Kasprzyk, J. (eds) Fuzzy System Design, Physica Verlag. ISBN 3-7908-11181 (1998) Gorodetskiy, A.E.: On the use of the habitual situation for accelerating decision-making in intelligent information and measurement systems [Ob ispol’zovanii situacii privychnosti dlya uskoreniya prinyatiya reshenij v intellektual’nyh informacionno-izmeritel’nyh sistemah]. In: Gorodetskiy, A.E., Kurbanov, V.G. (eds) Physical Metrology: Theoretical and Applied Aspects, pp. 141–151. Publishing House KN (1996) Renzulli, J.S.: What makes giftedness? Reexamining a definition. Phi Delta Kappan 60(3), pp 180–184, 261 (1978) Renzulli, J.S.: Schools for Talent Development: A Practical Plan for Total School Improvement. Creative Learning Press, Mansfield Center, CT (1994) Renzulli, J.S., Reis, S.M.: The Schoolwide Enrichment Model: A Comprehensive Plan for Educational Excellence. Creative Learning Press, Mansfield Center, CT (1985) Gorodetskiy, A.E., Tarasova, I.L.: Estimates of the group intelligence of robots in robotic systems. In: Gorodetskiy, A.E., Tarasova, I.L. (eds) Smart Electromechanical Systems: Group Interaction/Studies in Systems, Decision and Control, vol. 174, pp. 161–170. 
Springer International Publishing (2019). https://doi.org/10.1007/978-3-319-99759-9_13
Chapter 4
Examples of Using SEMS Modules
4.1 Controlled Ciliated Thrusters

In recent years, progress in robotics can partly be explained by interaction with the biological sciences. Robots that simulate the complexity and adaptability of biological systems are becoming one of the main goals of research in robotics. Of particular note is the use of a bionic approach in the creation of medical robotic systems for various purposes, for example, micro-robots designed to deliver drugs to infected tissue [1, 2] or micro-robots capable of moving through blood vessels [3]. At the same time, one of the most difficult problems is the creation of controlled miniature and energy-efficient thrusters for such micro robots. When solving this problem, it is important to use thrusters that simulate the work of animal and human muscles and, in particular, of the ciliated apparatus [4–6]. The efficiency of such thrusters can be significantly increased by controlling the rigidity and shape of their elements.
4.1.1 Muscles

Muscles have a strictly ordered structure, which ensures a highly efficient conversion of the energy of adenosine triphosphate (ATP) into mechanical work. The nucleotide coenzyme ATP is the most important form of chemical energy storage in cells. ATP cleavage is a highly exergonic reaction. The chemical energy of ATP hydrolysis can be coupled to endergonic processes such as biosynthesis, movement and transport. The packing of contractile proteins in a muscle is comparable to the packing of atoms and molecules in a crystal. The fusiform muscle consists of bundles of muscle fibers. In the myofibrils of skeletal muscles there is a regular alternation of lighter and darker areas; therefore, skeletal muscles are often called striated. The myofibril consists of identical repeating elements, the so-called sarcomeres (see Fig. 4.1). The sarcomere is bounded on both sides by Z-disks. Thin actin filaments are attached to these disks on both sides [7, 8]. During ATP hydrolysis, myosin becomes able to interact with actin and begins to pull the actin filaments to the center of the sarcomere (see Fig. 4.1). As a result of this movement, the length of each sarcomere, and of the entire muscle as a whole, decreases. It is important to note that with such a system of motion generation, called the sliding thread system, the length of the threads (neither the actin nor the myosin threads) does not change; shortening is a consequence only of the threads moving relative to each other. The signal for the beginning of muscle contraction is an increase in the concentration of Ca2+ inside the cell. The concentration of calcium in the cell is regulated by special calcium pumps built into the outer membrane and the membranes of the sarcoplasmic reticulum (the Ca2+ store), which envelops the myofibrils.

Fig. 4.1 Muscle structure
4.1.2 Ciliated Apparatus

A cilium is an outgrowth of a cell covered with a plasma membrane. The thickness of a cilium lies in the range of 0.2–0.25 μm; the length is 5–10 μm [4]. Under the membrane is the axoneme, a cylindrical formation consisting of 9 pairs of microtubules. In the center of the axoneme there are two more microtubules enclosed in a shell; therefore, it is said that the cilium has a 9 + 2 structural model. The nine peripheral doublets are connected to the central pair by radial "spokes". The basal body lies at the base (Fig. 4.2). The energy supply of the mechanochemical process in the microtubules is likewise provided by the hydrolysis of ATP, and the globular heads of dynein have ATPase activity [4].

Fig. 4.2 Structure of the cilia
The type of cilia movement, according to most researchers, can be characterized as a rowing stroke or a rowing movement, which consists of two phases—effective and return. The rate of cilia oscillations is about 160–250 times per minute and significantly depends on body temperature. At the same time, direct measurements and indirect calculations have established that the effective movement is performed by the cilia 3–6 times faster than the return [4, 5].
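For orientation, the figures above can be turned into approximate phase durations. The sketch below is a back-of-the-envelope calculation under an assumed beat rate and stroke-to-return ratio taken from the ranges just quoted; it is illustrative, not data from the cited studies.

```python
def beat_phase_durations(beats_per_minute=200.0, stroke_to_return_ratio=4.0):
    """Split one beat period into effective (stroke) and return phases.

    With the stroke performed `ratio` times faster than the return, the
    stroke occupies 1/(1 + ratio) of the period and the return
    ratio/(1 + ratio). Both input values are assumed for illustration.
    """
    period = 60.0 / beats_per_minute               # seconds per beat
    stroke = period / (1.0 + stroke_to_return_ratio)
    return stroke, period - stroke


stroke_s, return_s = beat_phase_durations(200.0, 4.0)  # ~0.06 s stroke, ~0.24 s return
```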
In the effective phase, calcium ions passing through channels in the membrane of the cilium move to the negatively charged end of the microtubule, interact with dynein and participate in the hydrolysis of ATP. With a further increase in the calcium concentration, when all the dynein handles are involved, the flow of calcium towards that end stops. The calcium concentration near the cell membrane increases and the calcium pump closes. After that, the return phase begins with the recovery of ATP and the release of free calcium, which is excreted through the cytoplasmic membrane into the periciliary fluid. The release of calcium through the cytoplasmic membrane of the cilium involves a calcium pump of the microtubules located on the side opposite to the microtubules involved in the "stroke" phase. The engaged dynein handles return the cilium to its original position with less hydrodynamic resistance than in the effective phase.
4.1.3 Simple Ciliated Thruster

Comparing the structure of the muscle (Fig. 4.1) with the structure of the standard SEMS module (Fig. 1.3), it is easy to notice their similarity, namely the presence of two disks (platforms) connected by threads (legs) capable of changing their size under the action of control signals. The serial connection of such modules is therefore very similar to the structure of the cilium. The simplest analogue of the cilium is thus a controlled ciliated thruster (Fig. 4.3), simulating the vibrations of the cilia of the ciliated cells and containing an Electromechanical Cilia (EMC) in the form of a serial connection of standard SEMS modules (SM1, SM2, …, SMn) [9, 10] and an automatic control system (ACS) in the form of a Coordinate Control Block (CCB) of the platforms of these modules [11]. The shape of the EMC oscillation can be set differently in three-dimensional space by setting the pulse sequences from the CCB for the linear xi, yi, zi and angular ui, vi, wi coordinates of the platforms of the i-th modules. Such a thruster can be used in a medical micro robot [3] as rowing paddles.

Fig. 4.3 Structure of the controlled ciliated thruster. CCB—Coordinate Control Block, SM1, SM2, …, SMn—standard SEMS modules

The automatic control system of such a thruster has, as a rule, a "tree" type architecture (see Fig. 4.4), containing, as do the SEMS control systems [11], a central control computer (CCC) and the following subsystems: a modeling block (MB); an optimization block (OB); a decision making block (DMB); the control block for the coordinates of the cilia (CBCC); the vision system (VS) and the automatic control system for standard modules (ACS SM) SEMS.

Fig. 4.4 Structure of the ACS EMC. CCC—central control computer, MB—modeling block, OB—optimization block, DMB—decision making block, CBCC—control block for the coordinates of the cilia, VS—vision system, ACS SM—automatic control system for standard modules SEMS

Block CCC provides a solution to the tasks of choosing a strategy for performing the task required by the operator and/or a higher-level system for the frequency and shape of vibrations, and of forming the appropriate sequence of actions (algorithms) necessary for its implementation. In addition, it should provide operational correction of behavior depending on information about changes in the external environment coming from the VS, and coordination of the functioning of the subsystems. The functioning of the CCC requires developed abilities to acquire knowledge about the laws of the environment, to interpret, classify and identify emerging situations, and to analyze and memorize the consequences of its actions on the basis of work experience (self-learning property).

Block VS controls its parameters through controllers by commands from the central computer, collects information, identifies surrounding objects and the state
of the functioning environment from the collected information and transmits this information to the central computer. The VS can be equipped with matrix pixel switches that can change the configuration and pixel size according to commands from the CCC, ensuring their adaptation to the parameters of the identified objects [12, 13].

Blocks MB, OB, DMB and CBCC are usually made in the form of neuroprocessor modules, the algorithms of which are implemented programmatically. Block OB, based on the information received from the VS about the state of the environment and in accordance with the requirements of the behavior algorithms formed by the CCC, plans the optimal trajectories of the SEMS module platforms. At the same time, operational restructuring of the trajectories should be ensured, taking into account the limitations and dynamics of the executive subsystems. Block MB provides forecasting of the dynamics of the executive subsystems for issuing corrections to the optimal trajectories planned by block OB and for adapting the parameters of the calculated control actions. At the same time, block DMB determines the conditions under which adjustments will be made in the CBCC. Block CBCC, based on incoming information from blocks OB, MB and DMB and in accordance with the algorithms coming from the CCC, generates control actions for the blocks ACS SM. The latter control the required movements of the platforms in the surrounding space.

Blocks ACS SM also have a "tree" type architecture [14, 15] and contain a block for calculating optimal displacements (BCOD), which, according to incoming information from the block CBCC, calculates optimal leg extensions without jamming of the standard module. The linear and angular displacements calculated in the BCOD are sent to the corresponding group leg regulators (GLR). The leg controllers available in the GLR use feedback sensor signals (linear motion sensors, angular motion sensors, force sensors and tactile sensors) to calculate error signals. Then they develop, according to the given control laws (for example, PID laws), control actions on the corresponding leg motors. The latter carry out the required leg extensions, providing EMC oscillations in accordance with the time diagram, for example, shown in Fig. 4.5. Figure 4.5 shows the simplest variation of oscillation, namely, a quick rotation in one plane from 0° to 180° in the effective phase and then a slow return from 180° to 0°.
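As an illustration of the control law mentioned above, the following minimal sketch shows a discrete PID loop of the kind a group leg regulator could apply to a single leg. The gains, sampling period, units and interface are illustrative assumptions, not the implementation described in the book.

```python
class LegPID:
    """Discrete PID regulator for one SEMS leg (illustrative sketch)."""

    def __init__(self, kp=2.0, ki=0.5, kd=0.1, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_extension, measured_extension):
        """Return a control action for the leg motor from the extension error."""
        error = target_extension - measured_extension
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: drive one leg toward the extension requested by the BCOD.
pid = LegPID()
command = pid.update(target_extension=0.012, measured_extension=0.010)  # metres (assumed units)
```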
4.1.4 Ciliated Thruster with Controlled Rigidity

The efficiency of the ciliated thruster can be significantly increased by controlling the rigidity of the EMC. To do this, modules with controlled rigidity (CRM) can be installed between the SM2 modules (Fig. 4.6) [16]. In this case, the efficiency of the thruster is increased by reducing the resistance of the medium during the return of the EMC. By forming an appropriate control law in the automatic control system of such a thruster, it is possible to obtain a form of oscillations of its EMC similar to the form of oscillations of the biological cilia (Fig. 4.7).

Fig. 4.5 Time diagram of the modules operation

Fig. 4.6 Structure of the ciliated thruster with controlled rigidity. OCB—oscillation control block, RCB—rigidity control block, SM2—SEMS modules, CRM—modules with controlled rigidity

Fig. 4.7 The shape of the biological cilia oscillations

During the oscillations of the biological cilia, the stroke occurs about 3–5 times faster than the return. Moreover, during the stroke the rigidity of the cilium is much greater than during the return. This ensures a reduction in energy costs for the return movement and an increase in the rowing force.

Modules with controlled rigidity can be made of laminated materials with controlled rigidity. They contain layers of electrically conductive filaments separated by a flexible porous electrically insulating material impregnated with an electrorheological suspension [16]. The rigidity of the layered material varies depending on the control voltage applied to the conductors from the rigidity control block.

Systems of automatic control of Ciliated Thrusters with Controlled Rigidity (CTCR), like the ACS of the thruster described above, can have a "tree" type architecture (Fig. 4.8). They contain the CCC and the following subsystems: MB, OB, DMB, CBCC, VS and ACS SM SEMS. To control the rigidity, the ACS CTCR is additionally equipped with a rigidity control block (RCB) and automatic control systems for the modules with controlled rigidity (ACS CRM).

Fig. 4.8 Structure of the SCTCR. CCC—central control computer, MB—modeling block, OB—optimization block, DMB—decision making block, CBCC—control block for the coordinates of the cilia, VS—vision system, ACS SM—automatic control system for standard modules SEMS, RCB—rigidity control block, ACS CRM—automatic control systems for modules with controlled rigidity

Block RCB, based on the incoming information from the blocks OB, MB and DMB and in accordance with the algorithms coming from the block CCC, develops control actions for the automatic control systems of the CRM, which carry out the required changes in their rigidity.

Block ACS CRM has a "tree" type architecture (see Fig. 4.9) and contains a block for calculating optimal rigidity (BCOR), which, based on incoming information from the block RCB, calculates the optimal rigidity of the CRM and transmits this information to the rigidity control controllers (RCC), which generate the corresponding control voltages supplied to the CRM.

Fig. 4.9 Structure of the ACS CRM. RCB—rigidity control block, ACS CRM—automatic control systems for modules with controlled rigidity, BCOR—block for calculating optimal rigidity, RCC—rigidity control controller, CRM—modules with controlled rigidity
Fig. 4.10 Time chart CRM
The algorithms of the rigidity control system are explained by the time diagram shown in Fig. 4.10, where V is the angle of rotation of the cilia and c1, c2, …, cn are the rigidities of the modules with controlled rigidity. The simplest option is considered, namely a quick turn in one plane from 0° to 180° when rowing and then a slow return from 180° to 0°. When controlling the rigidity in this case, during the stroke time Tg voltage pulses are applied to the rigidity modules to increase rigidity, and at the start of the return, voltage pulses of smaller amplitude are applied to reduce rigidity.
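As a minimal sketch of the scheduling just described, the function below maps the phase of one oscillation cycle to a rigidity-control voltage. The voltage levels and phase layout are assumptions made for illustration; in the architecture above they would be supplied by the OB, MB and DMB blocks.

```python
def rigidity_voltage(t, stroke_time, return_time, v_stroke=40.0, v_return=10.0):
    """Return the CRM control voltage at time t within one oscillation cycle.

    A high voltage (high rigidity) is held during the stroke phase Tg and a
    lower voltage (reduced rigidity) during the return phase. The voltage
    levels here are illustrative, not values from the book.
    """
    phase = t % (stroke_time + return_time)
    return v_stroke if phase < stroke_time else v_return


# Example: voltage command 0.02 s into a cycle with a 0.06 s stroke phase.
u = rigidity_voltage(t=0.02, stroke_time=0.06, return_time=0.24)
```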
4.1.5 Ciliated Thruster with Controlled Rigidity and Shape

The efficiency of the considered thrusters can be further increased by controlling the shape of their rowing elements. To do this, SM5 SEMS modules can be used in the design (see Fig. 1.6 [9]), the platforms of which contain controlled rods with motors. Such modules with controlled rods make it possible to increase the surface area of the EMC during the stroke and reduce it during the return by changing the control voltage supplied to the motors of the controlled rods. Typically, the electric drives of the controlled rods contain a reducer (R), displacement sensors (DS), for example optoelectronic, force sensors (FS), for example piezoelectric, and controllers (C).
The system of automatic control of Ciliated Thrusters with Controlled Rigidity and Shape (CTCRSh), like the ACS of the thrusters described above, can have a "tree" type architecture (Fig. 4.11). It contains the CCC and the following subsystems: MB, OB, DMB, VS, CBCC, ACS SM SEMS, the rigidity control block and ACS CRM. To control the shape, the ACS CTCRSh is additionally equipped with a shape control block (CShB) and automatic control systems for the controlled rod motors (ACS MCR).

Fig. 4.11 Structure of the ACS CTCRSh. CCC—central control computer, MB—modeling block, OB—optimization block, DMB—decision making block, CBCC—control block for the coordinates of the cilia, VS—vision system, ACS SM—automatic control system for standard modules SEMS, RCB—rigidity control block, ACS CRM—ACS for modules with controlled rigidity, CShB—shape control block, ACS MCR—ACS for controlled rod motors

Block CShB, based on the incoming information from the OB, MB and DMB and in accordance with the algorithms coming from the CCC, develops control actions for the ACS of the controlled rod motors (MCR), which carry out the required changes in the shape of the platforms of the SM5 SEMS modules.

The ACS MCR has a "tree" type architecture (see Fig. 4.12) and contains a block for calculating optimal elongations (BCOE). The latter, based on the incoming information from the CShB, calculates the optimal elongations of the controlled rods (CR) and transmits this information to the controllers of the controlled rod motors (CMCR), which generate the corresponding control voltages supplied to the motors of the controlled rods of the MCR.

Fig. 4.12 Structure of the ACS MCR. CShB—shape control block, ACS MCR—ACS for controlled rod motors, BCOE—block for calculating optimal elongations, CMCR—controllers of the controlled rod motors, MCR—controlled rod motors

The algorithms of the shape control system are explained by the time diagram shown in Fig. 4.13, where V is the angle of rotation of the cilia and c1, c2, …, cn are the dimensions of the rowing surfaces (platforms of the SM5 SEMS modules). Again, the simplest option is considered, namely a quick turn in one plane from 0° to 180° when rowing and then a slow return from 180° to 0°. In this case, when controlling the shape, during the stroke time Tg control signals are sent to the MCR to increase the stroke surface, and at the start of the return, control signals are sent to reduce the return surface.
Fig. 4.13 Time diagram of the size change of SM5 SEMS module platforms
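The shape control follows the same stroke/return pattern as the rigidity control. As a rough sketch, if the rowing surface is to be scaled in area by a factor k for the stroke phase, each controlled rod of an SM5 platform has to be lengthened roughly in proportion to the square root of k. The assumption of uniform platform scaling is made here purely for illustration.

```python
import math


def rod_elongation_for_area_scale(current_rod_length, area_scale):
    """Elongation of one controlled rod needed to scale the platform area.

    Assumes the platform scales uniformly, so linear dimensions grow with
    sqrt(area_scale); this is an illustrative simplification, not the BCOE
    algorithm described in the book.
    """
    target_length = current_rod_length * math.sqrt(area_scale)
    return target_length - current_rod_length


# Example: enlarge the rowing surface by 50% for the stroke phase.
delta = rod_elongation_for_area_scale(current_rod_length=0.004, area_scale=1.5)
```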
4.2 Flagellar Thruster

When solving the problem of creating controlled miniature and energy-efficient thrusters for micro robots, thrusters that simulate the work of the flagellum, the organelle of movement of bacteria and of a number of protozoa, can be used [17, 18].
4.2.1 Biological Flagellum

The flagellum of a eukaryotic cell is an outgrowth similar to a cilium [17], having a thickness of about 0.25 μm and a length of up to 150 μm, covered with a plasma membrane. Inside there is an axoneme, a cylinder whose wall is built of 9 pairs of microtubules connected by "handles". In the center of the axoneme there are 2 (less often 1, 3 or more) microtubules (the so-called 9 + 2 structure). At the base of the flagellum lie two mutually perpendicular basal bodies. Flagella, unlike cilia, move in a wave-like or funnel-like manner, due to the sliding of the microtubules of neighboring pairs relative to each other with the help of the "handles", using the energy of adenosine triphosphate (ATP) [17].

A completely different molecular mechanism underlies the movement of the bacterial cell [18]. The ability of bacteria to move quickly and directionally is due to the presence of a special movement organ in these organisms, the bacterial flagellum (BF), whose structure and functioning are encoded by about 50 genes. The BF penetrates the cell wall and passes under the cytoplasmic membrane. It has a discrete structure consisting of a basal body, a long outer thread and a "hook" connecting these two parts (see Fig. 4.14). The basal body, located in the thickness of the cell wall, rotates and sets in motion an external semi-rigid spiral protein thread, which generates the hydrodynamic force that directionally pushes the cell.

Fig. 4.14 Electron microscopic view of the bacterial flagellum

For a long time, ATP was searched for in the composition of the BF, but it was not found. Relatively recently (at the end of the 1970s), it turned out that the basal body of the flagellum is a miniature electric motor, thanks to which the bacterial cell is able to develop a very high speed, 100 μm/s, that is, more than 50 cell body lengths per second. The energy source of this process turned out to be unique among mobility systems: the transmembrane potential of hydrogen (or sodium) ions on the membrane.

Morphologically, the flagellum is constructed of three main parts: a basal body rotating in the thickness of the cell wall and acting as a miniature electric motor, an external semi-rigid spiral thread constructed from the protein flagellin and playing the role of a screw when the bacterium moves in the medium, and the so-called "hook", a flexible protein structure connecting the basal body and the outer thread [19–21]. However, this does not exhaust the entire structure of the bacterial motor apparatus. There is an additional number of proteins and protein formations in the cytoplasm and the cytoplasmic membrane involved in the rotation of the flagellum, but these are largely concentrated within the three main parts. The apparatus of the flagellar motor contains about 25 different proteins, including specific proteins that switch the direction of rotation of the motor. Recently, a number of proofs have been obtained that these proteins form the two main parts of the motor, the rotor and the stator. Special attention should be paid to the protein of the outer thread of the flagellum, flagellin. The properties of the flagellin subunits in the filament are such that they allow the spiral of the BF filament to take different forms (the flagellum has the shape of a semi-rigid screw-type spiral), thereby ensuring the movement of the cell in the medium.
4.2.2 Structure of the Flagellar Thruster

The serial connection of SEMS modules [9] is very similar to the structure of the BF. Therefore, the simplest electromechanical analogue of the BF is a controlled flagellar thruster [22] (Fig. 4.15), simulating the wave-like or funnel-shaped movement of the flagellum and containing an electric rotation drive (ERD) fixed on the base, an electromechanical flagellum (EMF) connected to the ERD through a gear (G), and an ACS. The EMF is made in the form of a serial connection of SM5 SEMS modules (see Fig. 1.6) [23], between which modules with controlled rigidity (CRM) are installed [16]. The type of EMF movement can be set differently in three-dimensional space by setting sequences of control pulses from the coordinate control block (CCB) of the ACS to the leg motors of the SM5 SEMS modules for the linear xi, yi, zi and angular ui, vi, wi coordinates of the platforms of these modules. The speed of rotation of the EMF can be changed by the control voltage from the speed control block (SpCB) of the ACS for the ERD. The shape of the EMF can be changed by applying control actions from the shape control block (CShB) of the ACS to the motors of the controlled rods (MCR) of the SM5 SEMS module platforms, and the rigidity of the EMF by supplying control actions from the rigidity control block (RCB) of the ACS to the CRM [24]. Such a thruster can be used in a medical micro robot [3] as a propeller.
Fig. 4.15 The scheme of the flagellar thruster. CC—control computer, CCB—the coordinate control block, CShB—the shape control block, SM5—the module SM5 SEMS, CRM—module with controlled rigidity, G—a gear, ERD—an electric rotation drive, RCB—rigidity control block, SpCB—the speed control block
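To make the idea of setting the EMF motion by coordinate pulse sequences more tangible, the sketch below generates a simple helical (funnel-like) set of platform position targets for n serially connected modules at a given phase of rotation. The geometry, parameter values and function name are assumptions for illustration, not the CCB's actual algorithm.

```python
import math


def emf_platform_coordinates(n_modules, phase, pitch=0.002, radius=0.001):
    """Return illustrative (x, y, z) targets for the platforms of n modules.

    Each successive module is offset along z by `pitch` and twisted further
    around the axis, so the chain approximates a rotating helix; letting the
    radius grow with the module index gives a funnel-like wave.
    """
    coords = []
    for i in range(1, n_modules + 1):
        angle = phase + i * math.pi / 6          # progressive twist along the chain
        r = radius * i / n_modules               # radius grows toward the tip
        coords.append((r * math.cos(angle), r * math.sin(angle), i * pitch))
    return coords


# Example: coordinates of a 6-module EMF at one instant of its rotation.
targets = emf_platform_coordinates(n_modules=6, phase=0.0)
```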
4.2.3 Architecture of the Automatic Control System

Let's consider the architecture of the automatic control system of the flagellar thruster (ACS FT) on the example of its use in a medical micro robot [3]. The ACS FT has, as a rule, a "tree" type architecture (see Fig. 4.16). It consists of a control computer (CC) containing the following modules: a modeling block (MB), an optimization block (OB) and a decision making block (DMB); and the following control blocks: a coordinate control block (CCB), a shape control block (CShB), a rigidity control block (RCB) and a speed control block (SpCB). Each of the listed control blocks is connected to its ACS:
CCB—with the ACS for the leg motors (LM) of the standard SEMS modules;
RCB—with the ACS for the modules with controlled rigidity (CRM);
CShB—with the ACS for the motors of the controlled rods (MCR) of the standard SEMS modules;
SpCB—with the ACS for the electric rotation drive (ERD).

Fig. 4.16 Architecture of the ACS FT. MB—a modeling block; OB—optimization block; DMB—a decision making block; CC—a control computer; CCB—a coordinate control block; SpCB—a speed control block; ACS ERD—ACS of electric rotation drive; CShB—the shape control block; RCB—rigidity control block; ACS LM—ACS of leg motors; ACS MCR—ACS for motors of controlled rods; ACS CRM—ACS for modules with controlled rigidity

The CC provides a solution to the problems of choosing a strategy for fulfilling the task required by the operator and/or a higher-level system for the type and form of movement, and of forming the sequence of actions (algorithms) necessary for its implementation. In addition, it should provide operational correction of the subsystems depending on information about changes in the external environment coming from the vision system (VS, not shown in Fig. 4.16) of the micro robot, and coordination of the functioning of all subsystems.

The computer modules MB, OB and DMB are usually made in the form of microprocessor modules, the algorithms of which are implemented programmatically. The OB, based on the information about the state of the environment received from the VS of the micro robot and in accordance with the requirements of the control algorithms formed in the CC, plans the optimal trajectories of the SEMS module platforms, their reconfiguration, and changes in the rigidity of the CRM and the speed of the ERD. At the same time, operational restructuring of the trajectories should be ensured, taking into account the limitations and dynamics of the executive subsystems.
The MB provides forecasting of the dynamics of the executive subsystems for issuing corrections to the optimal trajectories planned by the OB and for adapting the parameters of the calculated control actions. At the same time, the DMB determines the conditions under which adjustments will be made in the CCB, CShB, RCB and SpCB.

The CCB, based on incoming information from the OB, MB and DMB and in accordance with the algorithms coming from the CC, develops control actions for the ACS LM, which carry out the required movements of the SM5 SEMS platforms in the surrounding space. The CShB, based on incoming information from the OB, MB and DMB and in accordance with the algorithms coming from the computer, develops control actions for the ACS MCR, which carry out the required reconfigurations of the SM5 SEMS platforms. The RCB, based on incoming information from the OB, MB and DMB and in accordance with the algorithms coming from the computer, develops control actions for the ACS CRM, which carry out the required changes in their rigidity. The SpCB, based on incoming information from the OB, MB and DMB and in accordance with the algorithms coming from the computer, generates control actions for the ACS ERD, which carry out the required changes in the rotation speed. The structure and work of the ACS LM and ACS MCR are described in [22], of the ACS CRM in [24], and of the ACS ERD in [21].
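A compressed sketch of this "tree" type dispatching is given below: the control computer combines the outputs of the OB, MB and DMB and forwards the resulting commands to the four control blocks. The data structures and method names are illustrative assumptions rather than the book's interfaces.

```python
from dataclasses import dataclass


@dataclass
class PlannerOutput:
    """Assumed container for what the OB/MB/DMB pass to the control computer."""
    leg_trajectories: dict      # forwarded via CCB to ACS LM
    rod_elongations: dict       # forwarded via CShB to ACS MCR
    crm_rigidities: dict        # forwarded via RCB to ACS CRM
    erd_speed: float            # forwarded via SpCB to ACS ERD
    apply_corrections: bool     # condition decided by the DMB


def dispatch(plan: PlannerOutput, ccb, cshb, rcb, spcb):
    """Route the planned actions to the corresponding control blocks."""
    if not plan.apply_corrections:
        return
    ccb.send(plan.leg_trajectories)
    cshb.send(plan.rod_elongations)
    rcb.send(plan.crm_rigidities)
    spcb.send(plan.erd_speed)
```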
4.3 Medical Micro Robot

The medical micro robot is designed to transport medicines through the veins of a living organism. A spiral floating micro robot is known, containing artificial "tails", flat ribbons curled into a spiral, connected to a magnetic head controlled by an external magnetic field. The length of the ribbons is from 25 to 75 μm, the thickness is 27–42 nm, the width is less than 2 μm, and the diameter of the spiral is about 3 μm. The disadvantages of this device are the high complexity of control, which requires special means of visualizing the position of the micro robot in a living organism, low positioning accuracy, and a strong dependence on external electromagnetic fields [1].

The closest to the one under consideration is an in-pipe miniature robot designed for the diagnostics of small-diameter pipes. The robot has a rigid body, inside which, sequentially along the course of the robot's movement, there are an electrically interconnected power supply, a control unit and a rowing mechanism containing an electric drive, the stem of which is connected by means of a flexible connection with a module containing sliding groups of petal stops [2]. The main drive motor extends the desired group of stop petals oriented at a certain angle to the pipe surface, which allows the robot to move in the desired direction. The sliding stops in this design allow the robot to move at a speed of up to 7 cm/s in the resonant mode of operation of the main engine and at the same time do not damage the walls of the pipeline. When the solenoid winding of the main engine is periodically switched on and off, the rod is retracted and then pushed out by the spring. The robot moves in the direction in which the petals of the stops are inclined.
The main disadvantages of such a device are its low load capacity, since its thrust is determined by the anisotropy of friction as the robot moves, and its low design flexibility, which reduces its passability in complex profiles of the surrounding surface. The new micro robot, built on the basis of SEMS modules, increases the load capacity and the flexibility of the design, which ensures an increase in the efficiency of drug delivery to the required hard-to-reach areas of the body.
4.3.1 New Micro Robot Device

In contrast to those mentioned, a robot based on SEMS modules includes an image acquisition and transmission unit installed in front of the power supply unit and electrically connected to the power and control units, a container for transporting medicinal substances with a controlled flap installed behind the power supply unit, and a screw or flagellated propulsion unit with piezoelectric motors electrically connected to the control and power supply units; the case is made in the form of a flexible cover put on a system of three interconnected tripod mechanisms with a shift of 120° in the cross-sectional plane. Each of these mechanisms contains tripods connected in series in such a way that the upper platform of one tripod is the lower platform of the subsequent one. In this case, the platforms are made in the form of rods forming a triangle, consisting of two parts, between which there are actuators with linear electric drives electrically connected to the control and power units. In addition, rowing mechanisms are installed at the attachment points of the platform rods. They contain flexible flagella connected to piezoelectric rotary electric drives electrically connected to the control and power units.

The device of the micro robot is explained by the drawings, where Fig. 4.17 shows the general view of the device and Fig. 4.18 shows the layout of the tripod mechanisms. The following designations are accepted in the drawings: flexible housing cover 1, flexible flagella 2 of the rowing mechanism, piezoelectric rotary electric drives 3 of the rowing mechanism, power supply 4, control unit 5, image acquisition and transmission unit 6, container for transporting medicinal substances 7 with a controlled flap 8, flagellum 9 of the propulsion unit with a piezoelectric motor 10, tripod mechanisms 11 with rods 12 and actuators 13 on linear electric drives, rods 14 of the platforms of the tripods 11 with actuators 15 on linear electric drives.

Fig. 4.17 General view of the device

Fig. 4.18 Layout of the drive mechanisms
4.3.2 Operation of the Device

After the introduction of the medical micro robot into the vein (vessel), the power supply unit 4 is turned on, followed by the control unit 5 and the image acquisition and transmission unit 6. The image acquisition and transmission unit 6 transmits the image of the inner walls of the vessel in front of the robot to the operator and to the control unit 5. The control unit 5 analyzes the received image and sends a control signal to the linear motors of the actuators 15, changing the size of the rods 14 and, accordingly, the thickness of the robot, depending on the current size of the vessel cross section. At the same time, the control unit 5 supplies a control voltage to the actuators 13 of the rods 12 of the tripods 11, which change the position of their platforms and, accordingly, bend the robot body in accordance with the current bending of the inner
walls of the vessel. In addition, the control unit 5 supplies a control voltage to the piezoelectric motor 10, which rotates the flagellum 9 to ensure the robot's progress along the vessel, and to the piezoelectric rotary actuators 3 of the rowing mechanism, which create the rowing movements of the flagella 2. Thanks to the rowing movements of the flagella 2, high passability of the robot in hard-to-reach areas is provided; in addition, the rowing movements of the flagella 2 can create the effect of cleaning the inner surface of the vessel. The robot's passability is also improved by turning the robot and changing its thickness. After the robot arrives at its destination, the control unit 5 gives the command to open the controlled flap 8 of the container 7 and the drugs contained in it are unloaded. Thus, drugs are delivered to hard-to-reach places of a living organism with high efficiency.

Taking into account the peculiarities of the working environment and the increased safety requirements, the micro robot control system must meet the following requirements:
• The vascular bed is not rectilinear, which implies the need to control the orientation of the micro robot during movement;
• The force of pressing the ciliated lateral thrusters of the micro robot against the vessel walls should be controllable. On the one hand, the forces that arise should not be too large, so as not to damage the vessel during fixation. On the other hand, they should be sufficient to keep the micro robot in the pulsating blood flow;
• The diameter of the vessel may be unstable due to pulsation of the blood flow, anatomical features or concomitant pathology; therefore it is necessary to provide for a change in the diameter of the micro robot depending on the diameter of the vessel (see the sketch after this list);
• During the movement of the micro robot, unforeseen obstacles in the form of anatomical constrictions or pathological changes may occur in its path. In this case, an obstacle analysis should be carried out and a decision should be made on the possibility of further movement or adaptation to the changing lumen of the vessel.
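As an illustration of the diameter requirement in the list above, the sketch below clamps the target robot diameter to the measured vessel lumen with a safety margin. The margin and the mechanical limits are assumed values for illustration, not figures given in the book.

```python
def target_robot_diameter(vessel_diameter, margin=0.9,
                          d_min=0.5e-3, d_max=3.0e-3):
    """Choose the robot diameter for the current vessel lumen (metres).

    The robot is kept slightly narrower than the vessel (safety margin)
    and within its mechanical limits; all numbers are illustrative.
    """
    return min(max(vessel_diameter * margin, d_min), d_max)


# Example: a vessel lumen of 2.4 mm measured by the image acquisition unit.
d = target_robot_diameter(2.4e-3)   # -> 2.16 mm
```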
4.4 Adaptive Capture

A device is known [according to patent No. 2624278 "Adaptive capture", dated 12.07.2016, publ. BI No. 19, 03.07.2017] which consists of a drive control system and an electromechanical system including a housing (palm) with fingers mounted on it. Each finger consists of phalanges with tactile sensors mounted on the contact surfaces. The body and the phalanges of the fingers are made of modules of the same type in the form of lower and upper platforms, each of which contains a support platform and at least three mounting pads. Moreover, each of the mounting pads is connected to the neighboring ones by means of telescopic rods and to the support pads by means of controlled rods containing electric drives, gearboxes, displacement sensors and force sensors. In this case, the upper and lower platforms are interconnected by
means of six legs-actuators containing lower hinges attached to the mounting pads of the lower platform, and upper hinges attached to the mounting pads of the upper platform, and linear electric drives with gearboxes, displacement sensors and force sensors. Tactile sensors are installed on the outer surfaces of the mounting pads. The main disadvantage of such a device is the insufficient reliability of capturing transported objects due to the inability to adapt the rigidity of the grip to the weight and fragility of the transported object. In the capture based on SEMS modules and modules with controlled rigidity (CRM), the reliability of capturing transported objects is higher due to the expansion of the range of size and fragility of the surfaces of the objects being moved. This ensures an increase in the efficiency of delivery of transported objects, including those with increased fragility. This new technical solution is an improvement of the device according to patent No. 2624278. It solves the task due to the fact that modules with controlled rigidity are installed between the phalanges of the fingers, including between the phalanges of the fingers adjacent to the body and the body. In addition, a rigidity control unit has been introduced into the control system, the output of which is electrically connected to the control inputs of modules with controlled rigidity.
4.4.1 Capture Device

The capture device is explained by the drawings, where Fig. 4.19 shows a view of the same type of module of the body and phalanges, Fig. 4.20 a general view of the fingers, Fig. 4.21 a general view of the grip with independent attachment of the phalanges of the fingers, and Fig. 4.22 a general view of the grip with dependent attachment of the phalanges of the fingers. The body (hereinafter referred to as the palm for clarity) and the phalanges of the fingers are made in the form of separate modules of the same type, the scheme of which is shown in Fig. 4.19. The module includes the lower 1 and upper 2 platforms, each of which contains a support pad 3 and at least three mounting pads 4. The support pads 3 are connected through hinges 5 to the controlled rods 6 with actuators 7, and through hinges 8 to the mounting pads 4, which makes it possible to change the size and shape of platforms 1 and 2. The lower 1 and upper 2 platforms are interconnected by six legs-actuators 9 with drives 10 through the lower 11 and upper 12 hinges. The mounting pads 4 are connected to the adjacent ones by telescopic rods 13. The drives 10 are equipped with gearboxes, displacement sensors and force sensors (not shown in the drawing). Tactile sensors 14 are located on the outer surfaces of the mounting pads 4 of the modules of the phalanges of the fingers. On the support pad 3 of the upper platform 2 of the module called, by analogy with the human hand, the palm, the control system 15 is located on the inside, and tactile sensors 16 are located on the outer surface.
Fig. 4.19 View of the same type of module of the body and phalanges
To solve this task, a rigidity control block 17 has been introduced into the control system 15, the outputs of which are connected to the inputs of the modules with controlled rigidity 18 (not visible in Fig. 4.20), installed between the phalanges of the fingers and, in particular, between the phalanges of the fingers adjacent to the palm and the palm.
Fig. 4.20 General view of the fingers
The fingers of the device are made as follows (Fig. 4.20). The mounting pads 19 of the first module with controlled rigidity 18 are attached to the mounting pads 4 of the upper platforms 2 of the modules of the first phalanges of the fingers, and the mounting pads 20 of this module are attached to the mounting pads 4 of the lower platforms 1 of the modules of the second phalanges. Mounting pads 19 of the second module with controlled rigidity 18 are attached to the mounting pads 4 of the upper platforms 2 of the modules of the second phalanges, and mounting pads 20 of this module are attached to the mounting pads 4 of the lower platforms 1 of the modules of the third phalanges. The number of phalanges and fingers, if necessary, can change both up and down.
Fig. 4.21 General view of the grip with independent attachment of the phalanges of the fingers
The attachment of fingers to the palm can be of two types: independent (Fig. 4.21) and dependent (Fig. 4.22). With independent fastening of the phalanges of the fingers (Fig. 4.21), with each mounting platform 4 of the upper platform 2 of the palm module, the mounting platforms 19 of the third modules with controlled rigidity 18 are fixed, and the mounting platforms 20 of these modules are fixed with the mounting platforms 4 of the lower platforms 1 of the modules of the first phalanges of the fingers.
Fig. 4.22 General view of the grip with dependent attachment of the phalanges of the fingers
With the dependent attachment of the phalanges of the fingers (Fig. 4.22) with two fixing pads 4 of the upper platform 2 of the palm module, two fixing pads 19 of the third modules with controlled rigidity 18 are fixed, and the fixing pads 20 of these modules are fixed with the fixing pads 4 of the lower platforms 1 of the modules of the first phalanges of the fingers.
4.4.2 Capture Operation

The control system 15 (see Fig. 4.19) evaluates the size and shape of the transported object with the help of the technical vision system included in it. Then, by commands from the control system 15, using the drives 7 of the rods 6 of the platform 2 of the palm module, its size is changed in accordance with the size of the transported object. With the help of the drives 10 of the legs-actuators 9 of the modules of the phalanges of the fingers, the length of the fingers is changed in accordance with the size of the transported object. With the help of the drives 7 of the rods 6 of platforms 1 and 2 of the modules of the phalanges of the fingers, their width and shape are changed in accordance with the size and shape of the transported object. Thus, the capture adapts to the size and shape of the transported object. Then, by commands from the control system 15, the device is moved to the capture point. With the help of the drives 10 of the legs-actuators 9 of the palm module, according to commands from the control system 15, the center of the platform 2 of
the palm module, using corrective signals from the tactile sensors 16 of this platform, is brought into contact with the captured object. Finally, according to commands from the control system 15, with the help of the drives 10 of the legs-actuators 9 of the finger phalanx modules and using corrective signals from the tactile sensors 14 of platforms 1 and 2 of the finger phalanx modules, the phalanges are brought into contact with the captured object. The required gripping force is developed by the drives 10 of the legs-actuators 9 of the finger phalanx modules and by the drives 7 of the rods 6 of platforms 1 and 2 of the finger phalanx modules, in accordance with signals from the control system 15 and with corrective signals from the force sensors in the legs-actuators 9 and rods 6 of the finger phalanx module platforms. To increase the reliability of the object's grip, taking into account the adaptation of the grip stiffness to the weight and fragility of the object's surface, the stiffness of the modules with controlled rigidity 18 is increased according to commands from the rigidity control block 17. After that, by commands from the control system 15, the transported object is lifted and moved to the required point of shipment. Thus, the transported objects are delivered to the point of shipment with high efficiency and reliability.
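The force-feedback tightening described above can be summarised in a small sketch: the grip is closed until the measured contact force reaches a target chosen from the object's weight and fragility, after which the rigidity command is raised before lifting. The interfaces, thresholds and increments are assumptions for illustration, not the patent's implementation.

```python
def tighten_grip(force_sensor, leg_drives, rigidity_block,
                 target_force, step=0.1e-3, max_steps=200):
    """Close the fingers until the contact force reaches the target.

    force_sensor(), leg_drives.close_by() and rigidity_block.set_level()
    are placeholder interfaces assumed only for this sketch.
    """
    for _ in range(max_steps):
        if force_sensor() >= target_force:
            break
        leg_drives.close_by(step)          # small leg-extension increment
    rigidity_block.set_level("high")       # stiffen the CRM before lifting
```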
4.4.3 Algorithms of Automatic Control System Operation

Algorithm 1. Positioning of the upper platform of the palm module
1. Installation of the upper platform of the palm module parallel to the upper plane of the capture object.
1.1. Measurement of the angles ul and vl between the upper platform of the palm module and the upper surface of the object relative to the X and Y axes, respectively;
1.2. Calculation of the changes in the lengths of the legs of the palm module corresponding to the measured angle ul according to the formulas given in [25];
1.3. Calculation of the number of steps NLi of the drives of the legs-actuators of the palm module corresponding to the calculated changes in the lengths ΔLi according to the formula

NLi = ΔLi / hLi,   (4.1)

where hLi is the drive step of the i-th leg;
1.4. Supply to the drives of the legs-actuators of the palm module of a number of pulses equal to the calculated numbers of steps;
1.5. Calculation of the changes in the lengths of the legs of the palm module corresponding to the measured angle vl according to the formulas given in [25];
1.6. Calculation of the number of steps of the drives of the legs-actuators of the palm module corresponding to the calculated changes in lengths according to the formula of type (4.1);
1.7. Supply to the drives of the legs-actuators of the palm module of the number of pulses equal to the calculated numbers of steps.
2. Placing the fingers parallel to the upper platform of the palm module.
2.1. Measurement of the angles un1, un2, un3 and vn1, vn2, vn3 between the upper platform of the palm module and the upper platforms of the modules of the third phalanges of the fingers;
2.2. Calculation of the changes in the lengths of the legs of the modules of the third phalanges of the fingers ΔLif according to the formulas given in [25];
2.3. Calculation of the number of steps of the drives of the legs-actuators of the modules of the third phalanges of the fingers corresponding to the calculated changes according to the formula of type (4.1);
2.4. Supply to the drives of the modules of the third phalanges of the fingers of the number of pulses equal to the calculated number of steps.
3. Rotation of the upper platform of the palm module relative to the Z axis.
3.1. Measurement (calculation) of all angles αk between the sides of the upper platform of the palm module and the upper surface of the object;
3.2. Determination of the required angle of rotation of the upper platform of the palm module relative to the Z axis according to the formula wl = min(α1, α2, ..., αk);
3.3. Calculation of the changes in the lengths of the legs of the palm module ΔLi corresponding to the obtained angle wl according to the formulas given in [25];
3.4. Calculation of the number of steps of the drives of the legs-actuators of the palm module corresponding to the calculated changes in lengths according to the formula of type (4.1);
3.5. Supply to the drives of the legs-actuators of the palm module of the number of pulses equal to the calculated numbers of steps, providing the required position of the palm relative to the object.
End of the algorithm.

Algorithm 2. Changing the size of the palm of the grip
1. Measurement of the coordinates of the three extreme points of the palm surface A(xA, yA); B(xB, yB); C(xC, yC) and of the coordinates of the four extreme points of the object P1(a1, b1); P2(a2, b2); P3(a3, b3); P4(a4, b4);
2. Calculation of the required coordinates of the three extreme points of the palm surface A*(xA*, yA*); B*(xB*, yB*); C*(xC*, yC*), provided that the upper platform of the palm module will have the shape of an equilateral triangle, one side of which is adjacent to one side of the upper surface of the object and the other two sides pass through the vertices of the rectangle of the upper surface of the object formed by the other sides, according to the formulas given in [25];
3. Calculation of the required changes in the lengths of the controlled rods of the upper platform of the palm module according to the formulas given in [26];
4. Calculation of the number of steps NR of the drives of the controlled rods corresponding to the calculated changes in the lengths ΔRB according to the formula of type (4.1);
5. Calculation of the necessary changes in the lengths of the legs of the palm module according to the equations given in [25];
6. Calculation of the number of steps NLi of the drives of the legs-actuators corresponding to the calculated elongations ΔLi according to the formula of type (4.1);
7. Supply to the drives of the controlled rods of the number of pulses equal to the numbers of steps calculated in point 4 and, simultaneously, supply to the drives of the legs-actuators of the number of pulses equal to the numbers of steps calculated in point 6 of the algorithm.

Algorithm 3. Positioning of the phalanges of the fingers
1. Calculation of the changes in the lengths of the legs of the module of the third phalanx of the second finger ΔLi2(x, y, z, u2) for the angle u2 = 30° according to the equations given in [25];
2. Calculation of the number of steps of the drives of the legs-actuators of this module corresponding to the calculated changes in lengths according to the formula of type (4.1);
3. Supply to these drives of the number of pulses equal to the calculated number of steps;
4. Calculation of the changes in the lengths of the legs of the module of the third phalanx of the third finger ΔLi3(x, y, z, u3) for the angle u3 = −30° according to the equations given in [25];
5. Calculation of the number of steps of the drives of the legs-actuators of this module corresponding to the calculated length changes according to the formula of type (4.1);
6. Supply to these drives of the number of pulses equal to the calculated number of steps.
End of the algorithm.
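All of the algorithms in this subsection reduce the required continuous length changes to integer numbers of drive pulses through formula (4.1). The short sketch below shows only that quantization step; the drive step and the elongations are illustrative numbers, not values taken from the text.

def pulses_from_elongations(delta_lengths_mm, drive_step_mm):
    # Formula (4.1): N_Li = deltaL_i / h_Li, here rounded to whole drive steps.
    # Rounding to the nearest step is an assumption; the text gives only the ratio.
    return [round(dl / drive_step_mm) for dl in delta_lengths_mm]

# Hypothetical example: six legs of a palm module, piezo drive step 0.01 mm per pulse.
print(pulses_from_elongations([0.25, -0.10, 0.32, 0.0, 0.07, -0.18], 0.01))
# -> [25, -10, 32, 0, 7, -18]; the sign encodes the direction of motion.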
Algorithm 4. Changing the size and shape of the phalanges of the first finger of the grip
1. Calculation of the change in the lengths ΔR of the controlled rods of the upper platform of the third phalanx, as well as of the lower and upper platforms of the first and second phalanges of the finger, according to the formula ΔR = R − Rh/ho, where ho and h are the width of the object and the initial width of the finger, respectively, and R are the initial lengths of the controlled rods;
2. Calculation of the number of steps of the drives of the controlled rods corresponding to the calculated elongations according to the formula of type (4.1);
3. Calculation of the offset Δx of the upper platform of the third phalanx, as well as of the lower and upper platforms of the first and second phalanges of the finger, according to the formula Δx = ΔR·x/R, where x is the initial coordinate of the centers of the platforms of the phalanges of the first finger;
4. Calculation of the changes in the lengths of the legs of the phalanges of the fingers corresponding to the calculated displacements according to the formulas given in [25];
5. Calculation of the number of steps of the drives of the legs-actuators corresponding to the calculated elongations according to the formula of type (4.1);
6. Supply to the drives of the controlled rods of the number of pulses equal to the numbers of steps calculated in point 2 of the algorithm and, simultaneously, supply to the drives of the legs-actuators of the number of pulses equal to the numbers of steps calculated in point 5 of the algorithm.
End of the algorithm.

Algorithm 5. Changing the size and shape of the phalanges of the second and third fingers
1. Calculation of the change in the lengths ΔR of the controlled rods of the upper platforms of the third phalanges of the fingers, as well as of the lower and upper platforms of the first and second phalanges of the fingers, according to the formulas given in [26];
2. Calculation of the number of steps of the drives of the controlled rods corresponding to the calculated elongations according to the formula of type (4.1);
3. Calculation of the displacement Δy of the upper platform of the third phalanx, as well as of the lower and upper platforms of the first and second phalanges of the finger, according to the formula Δy = ΔR·y/R;
4. Calculation of the changes in the lengths of the legs of the phalanges of the fingers corresponding to the calculated displacements according to the formulas given in [25];
5. Calculation of the number of steps of the drives of the legs-actuators corresponding to the calculated elongations according to the formula of type (4.1);
6. Supply to the drives of the controlled rods of the number of pulses equal to the numbers of steps calculated in point 2 of the algorithm and, simultaneously, supply to the drives of the legs-actuators of the number of pulses equal to the numbers of steps calculated in point 5 of the algorithm.
End of the algorithm.

Algorithm 6. Changing the lengths of the phalanges of the fingers
1. Measurement of the height of the object H0.
2. Calculation of the changes in the lengths of the phalanges of the fingers according to the equation Δz = ΔH = H0/3 − H − hz;
3. Calculation of the changes in the lengths of the legs ΔLi of the modules of the phalanges of the fingers using the equations given in [25];
4. Calculation of the number of steps of the drives of the legs-actuators of the phalanx modules corresponding to the calculated elongations ΔLi according to formula (4.1);
5. Supply to the drives of the legs-actuators of the phalanx modules of the number of pulses equal to the calculated number of steps.
End of the algorithm.

Algorithm 7. Object capture
1. Positioning of the phalanges of the first finger.
1.1. Assignment for the first finger of the elementary angular displacement of the phalanges −Δu;
1.2. Calculation of the changes in the lengths ΔLi1 of the legs of the actuators of the third phalanx of the first finger according to the equations given in [25];
1.3. Calculation of the number of steps ΔNi1 of the drives of the legs-actuators of the third phalanx module according to formula (4.1);
1.4. Supply to the drives of the legs of the third phalanx of the first finger of the number of pulses equal to the calculated number of steps ΔNi1;
1.5. If the signal from the tactile sensor of the third phalanx is ξ31 = 0 (there is no contact), then return to point 1.3 of this algorithm and repeat until ξ31 = 1 (there is contact);
1.6. Supply to the drives of the legs of the second phalanx of the first finger of the number of pulses equal to the calculated number of steps ΔNi1;
1.7. If the signal from the tactile sensor of the second phalanx is ξ21 = 0 (there is no contact), then return to point 1.5 of this algorithm and repeat until ξ21 = 1 (there is contact);
1.8. Supply to the drives of the legs of the first phalanx of the first finger of the number of pulses equal to the calculated number of steps ΔNi1;
1.9. If the signal from the tactile sensor of the first phalanx is ξ11 = 0 (there is no contact), then return to point 1.7 of this algorithm and repeat until ξ11 = 1 (there is contact).
2. Positioning of the phalanges of the second finger.
2.1. Assignment for the second finger of the elementary angular displacement of the phalanges −Δv;
2.2. Calculation of the elongations ΔLi2 of the legs of the phalanx modules of the second finger according to the formulas given in [25];
2.3. Calculation of the number of steps ΔNi2 of the drives of the legs-actuators of the third phalanx module according to formula (4.1);
2.4. Supply to the drives of the legs of the third phalanx of the second finger of the number of pulses equal to the calculated number of steps ΔNi2;
2.5. If the signal from the tactile sensor of the third phalanx is ξ32 = 0 (there is no contact), then return to point 2.4 of this algorithm and repeat until ξ32 = 1 (there is contact);
2.6. Supply to the drives of the legs of the second phalanx of the second finger of the number of pulses equal to the calculated number of steps ΔNi2;
2.7. If the signal from the tactile sensor of the second phalanx is ξ22 = 0 (there is no contact), then return to point 2.5 of this algorithm and repeat until ξ22 = 1 (there is contact);
2.8. Supply to the drives of the legs of the first phalanx of the second finger of the number of pulses equal to the calculated number of steps ΔNi2;
2.9. If the signal from the tactile sensor of the first phalanx is ξ12 = 0 (there is no contact), then return to point 2.7 of this algorithm and repeat until ξ12 = 1 (there is contact).
3. Positioning of the phalanges of the third finger.
3.1. Assignment for the third finger of the elementary angular displacement of the phalanges Δv;
3.2. Calculation of the elongations ΔLi3 of the legs of the phalanx modules of the third finger according to the formulas given in [25];
3.3. Calculation of the number of steps ΔNi3 according to formula (4.1);
3.4. Supply to the drives of the legs of the third phalanx of the third finger of the number of pulses equal to the calculated number of steps ΔNi3;
3.5. If the signal from the tactile sensor of the third phalanx is ξ33 = 0 (there is no contact), then return to point 3.4 of this algorithm and repeat until ξ33 = 1 (there is contact);
3.6. Supply to the drives of the legs of the second phalanx of the third finger of the number of pulses equal to the calculated number of steps ΔNi3;
3.7. If the signal from the tactile sensor of the second phalanx is ξ23 = 0 (there is no contact), then return to point 3.6 of this algorithm and repeat until ξ23 = 1 (there is contact);
3.8. Supply to the drives of the legs of the first phalanx of the third finger of the number of pulses equal to the calculated number of steps ΔNi3;
3.9. If the signal from the tactile sensor of the first phalanx is ξ13 = 0 (there is no contact), then return to point 3.8 of this algorithm and repeat until ξ13 = 1 (there is contact).
End of the algorithm.

Algorithm 8. Object compression
1. Positioning of the first finger.
1.1. Assignment for the first finger of the elementary angular displacement of the phalanges Δu;
1.2. Calculation of the elongations ΔLi1 of the legs of the phalanx modules of the first finger according to the formulas given in [25];
1.3. Calculation of the number of steps ΔNi1 according to formula (4.1);
1.4. Supply to the drives of the legs of the third phalanx of the first finger of the number of pulses equal to the calculated number of steps ΔNi1;
1.5. If the signal from the force sensor of the third phalanx is μ31 < f (f is the specified force), then return to point 1.4 of this algorithm and repeat until μ31 > f;
1.6. Supply to the drives of the legs of the second phalanx of the first finger of the number of pulses equal to the calculated number of steps ΔNi1;
1.7. If the signal from the force sensor of the second phalanx is μ21 < f, then return to point 1.6 of this algorithm and repeat until μ21 > f;
1.8. Supply to the drives of the legs of the first phalanx of the first finger of the number of pulses equal to the calculated number of steps ΔNi1;
1.9. If the signal from the force sensor of the first phalanx is μ11 < f, then return to point 1.8 of this algorithm and repeat until μ11 > f;
1.10. Supply of a given current I to the modules with variable stiffness of the first finger.
2. Positioning of the second finger.
2.1. Assignment for the second finger of the elementary angular displacement of the phalanges Δv;
2.2. Calculation of the elongations ΔLi2 of the legs of the phalanx modules of the second finger according to the formulas given in [25];
2.3. Calculation of the number of steps ΔNi2 according to formula (4.1);
2.4. Supply to the drives of the legs of the third phalanx of the second finger of the number of pulses equal to the calculated number of steps ΔNi2;
2.5. If the signal from the force sensor of the third phalanx is μ32 < f, then return to point 2.4 of this algorithm and repeat until μ32 > f;
2.6. Supply to the drives of the legs of the second phalanx of the second finger of the number of pulses equal to the calculated number of steps ΔNi2;
2.7. If the signal from the force sensor of the second phalanx is μ22 < f, then return to point 2.6 of this algorithm and repeat until μ22 > f;
2.8. Supply to the drives of the legs of the first phalanx of the second finger of the number of pulses equal to the calculated number of steps ΔNi2;
2.9. If the signal from the force sensor of the first phalanx is μ12 < f, then return to point 2.8 of this algorithm and repeat until μ12 > f;
2.10. Supply of a given current I to the modules with variable stiffness of the second finger.
3. Positioning of the third finger.
3.1. Assignment for the third finger of the elementary angular displacement of the phalanges −Δv;
3.2. Calculation of the elongations ΔLi3 of the legs of the phalanx modules of the third finger according to the formulas given in [25];
3.3. Calculation of the number of steps ΔNi3 according to formula (4.1);
3.4. Supply to the drives of the legs of the third phalanx of the third finger of the number of pulses equal to the calculated number of steps ΔNi3;
3.5. If the signal from the force sensor of the third phalanx is μ33 < f, then return to point 3.4 of this algorithm and repeat until μ33 > f;
3.6. Supply to the drives of the legs of the second phalanx of the third finger of the number of pulses equal to the calculated number of steps ΔNi3;
3.7. If the signal from the force sensor of the second phalanx is μ23 < f, then return to point 3.6 of this algorithm and repeat until μ23 > f;
3.8. Supply to the drives of the legs of the first phalanx of the third finger of the number of pulses equal to the calculated number of steps ΔNi3;
3.9. If the signal from the force sensor of the first phalanx is μ13 < f, then return to point 3.8 of this algorithm and repeat until μ13 > f;
3.10. Supply of a given current I to the modules with variable stiffness of the third finger.
End of the algorithm.
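Algorithms 7 and 8 together implement a contact-then-force strategy for each phalanx: pulse packets are fed until the tactile sensor reports contact (ξ = 1), and then until the force sensor exceeds the specified force f. The sketch below reproduces that logic for a single phalanx against a toy drive model; the class, its parameters and all numerical values are assumptions made only for illustration and are not an interface defined in the text.

class ToyPhalanxDrive:
    """Crude stand-in for one phalanx: position in pulses, tactile contact
    appearing at a fixed position, and a force that grows linearly afterwards."""
    def __init__(self, contact_at=120, newtons_per_pulse=0.05):
        self.pos = 0
        self.contact_at = contact_at
        self.newtons_per_pulse = newtons_per_pulse
    def feed_pulses(self, n):
        self.pos += n
    def tactile(self):           # xi: 1 = contact, 0 = no contact
        return 1 if self.pos >= self.contact_at else 0
    def force(self):             # mu: measured contact force, N
        return max(0.0, (self.pos - self.contact_at) * self.newtons_per_pulse)

def close_phalanx(drive, step_pulses, f_required, max_iter=10_000):
    # Stage 1 (Algorithm 7): approach until tactile contact appears.
    for _ in range(max_iter):
        if drive.tactile() == 1:
            break
        drive.feed_pulses(step_pulses)
    # Stage 2 (Algorithm 8): keep feeding pulses until the force exceeds f.
    for _ in range(max_iter):
        if drive.force() > f_required:
            break
        drive.feed_pulses(step_pulses)

drive = ToyPhalanxDrive()
close_phalanx(drive, step_pulses=10, f_required=2.0)
print(drive.pos, round(drive.force(), 2))   # 170 pulses, about 2.5 N in this toy model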
4.4.4 Adaptive Capture Automatic Control System

The use of hex-like SEMS structures in the designs of adaptive grippers for robots makes it possible to obtain in their automatic control systems the maximum accuracy of the actuators with minimal travel time, due to the parallelism introduced into the processes of measurement, calculation and movement and to the use of high-precision piezo motors capable of operating in extreme conditions, including outer space [9, 14]. However, such structures have a complex kinematic scheme, which requires improving the algorithms of their ACS. Among other things, these algorithms must solve new, complex problems of adaptation to the parameters of the captured objects by finding optimal motion trajectories without jamming the platforms of the SEMS modules, on the basis of which the palm and the phalanges of the fingers of the considered adaptive grippers are constructed.
The structures of the ACS, as well as their mathematical and computer models for standard modules (SM), are described in [9, 14]. These systems provide dynamics in
terms of speed and accuracy of control of shifts and rotations of the module platforms, as well as of the reconfiguration of these platforms. However, a number of issues related to the adaptation of the considered grippers to the dynamics of the environment require further improvement of the automatic control systems of adaptive grippers that imitate the work of the human hand. The adaptability of such mechanisms is achieved by controlling the shape of the platforms of the SM5 SEMS modules forming the phalanges of the gripping fingers, as well as by controlling the rigidity of special stiffness modules [24] placed between the phalanges of the gripping fingers. To ensure high performance of the automatic control systems of such grippers, it is advisable to use neuroprocessors in their ACS, providing parallelization of the processes of calculating control actions [15, 26–28].
Architecture of ACS
The ACS of the adaptive capture (ACS AC) has a "tree" type architecture (see Fig. 4.23). At the upper level, it contains a central control computer (CCC), a vision system (VS) and the following blocks:
• neuroprocessor modeling block (NMB);
• neuroprocessor optimization block (NOB);
• microprocessor decision making block (NDMB);
• neuroprocessor block for calculating control actions (NBCCA).
The second level of the ACS AC contains a subsystem for automatic control of the capture coordinates (SSAC CC), a subsystem for automatic palm control (SSAC P), a subsystem for automatic finger control (SSAC F) and a subsystem of automatic control of the block with controlled rigidity (SSAC RCB).
Fig. 4.23 Architecture of the ACS AC. NMB—neuroprocessor modeling block; NOB—neuroprocessor optimization block; NDMB—microprocessor decision making block; NBCCA—neuroprocessor block for calculating control actions; VS—vision system; SSAC CC—a subsystem for automatic control of the capture coordinates; SSAC P—a subsystem for automatic palm control; SSAC F—a subsystem for automatic finger control; SSAC RCB—a subsystem of automatic control of the block with controlled rigidity
The CCC solves the problems of choosing a strategy for performing the task received from the operator and/or from a higher-level system and of forming the sequence of actions (algorithms) necessary for its implementation. In addition, it should provide operational correction of behavior depending on the information about changes in the external environment coming from the VS, and coordination of the functioning of the subsystems. The functioning of the CCC requires developed abilities to acquire knowledge about the laws of the environment, to interpret, classify and identify emerging situations, and to analyze and memorize the consequences of its actions on the basis of work experience (the self-learning property).
The NMB, NOB, NDMB and NBCCA modules jointly solve the tactical-level control problem. Its complexity lies primarily in finding solutions to one of the key tasks, adaptation of the capture under not fully defined conditions, taking into account the dynamics of the executive subsystems and current changes in the operating environment.
The VS, by commands from the CCC, controls the parameters of its CCD matrices, collects information from them and transmits it to the neuroprocessor recognition block. This block identifies the capture objects and the state of the operating environment and transmits this information to the CCC. The CCD arrays can be equipped with pixel switches that change the configuration and size of the matrices by commands from the CCC, ensuring their adaptation to the parameters of the identified capture objects [12, 13]. Usually the capture VS is an integral part of the robot's VS.
The NOB, based on the information received from the VS about the state of the environment and in accordance with the requirements of the behavior algorithms formed by the CCC, plans the optimal motions and reconfigurations of the platforms of the palm and finger-phalanx SEMS modules. At the same time, an operational restructuring of the control laws should be ensured, taking into account the limitations and dynamics of the executive subsystems. The NMB provides forecasting of the dynamics of the executive subsystems in order to issue corrections to the optimal control laws planned by the NOB and to adapt the parameters of the calculated control actions. The NDMB determines the conditions under which adjustments are made in the NOB and NBCCA.
The NBCCA, based on the information coming from the NOB, NMB and NDMB and in accordance with the algorithms coming from the CCC, develops control actions for the subsystem of automatic control of the capture coordinates (SSAC CC), the subsystem of automatic palm control (SSAC P) and the subsystem of automatic finger control (SSAC F), which carry out the required movements of the capture along the X, Y and Z axes, move the center of the palm to the capture point and adapt its size to the size of the capture object, as well as adapt the fingers to the size and shape of the captured object and capture the object (shifts, turns, compression and stretching).
Subsystems of automatic control of movable elements.
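To keep the abbreviations straight, the two levels of the ACS AC can also be written down as plain data, as in the sketch below. Only the block names and their roles are taken from the text and from the caption of Fig. 4.23; the dictionary structure itself is an illustrative assumption.

ACS_AC = {
    "upper level": {
        "CCC": "central control computer: strategy selection, coordination, self-learning",
        "VS": "vision system: identifies capture objects and the operating environment",
        "NMB": "neuroprocessor modeling block: forecasts the dynamics of the executive subsystems",
        "NOB": "neuroprocessor optimization block: plans optimal motions and reconfigurations",
        "NDMB": "microprocessor decision making block: decides when corrections are made",
        "NBCCA": "neuroprocessor block for calculating control actions",
    },
    "second level": {
        "SSAC CC": "automatic control of the capture coordinates along X, Y, Z",
        "SSAC P": "automatic palm control",
        "SSAC F": "automatic finger control",
        "SSAC RCB": "automatic control of the blocks with controlled rigidity",
    },
}

for level, blocks in ACS_AC.items():
    print(level)
    for name, role in blocks.items():
        print(f"  {name:9s} {role}")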
The SSAC CC subsystem is usually part of the automatic control system of the robot arm and contains three stepper motors that move the capture along the corresponding coordinates X, Y, Z. The algorithm of the SSAC CC is described, for example, in [11], and the structure of the SSAC CC is shown in Fig. 4.24. The SSAC P subsystem is built on the basis of the SM5 SEMS module [23]; its structure and operation are similar to those of the automatic control system of this module (ACS SM) [30]. The SSAC F subsystem (Fig. 4.25) contains a finger coordinate calculation module (MCC F) and group finger controllers (GFC1–GFC3) that control the finger phalanx automatic control systems (ACS PF), the structure and operating principle of which are similar to those of the automatic control system of this module (ACS SM) [30].
Fig. 4.24 Structure of the SSAC CC. NBCCA—neuroprocessor block for calculating control actions; BCSM—block for calculating the steps of movement along the coordinates X, Y, Z; CM X, CM Y, CM Z—controllers of the motor of movement along X, Y, Z; StM X, StM Y, StM Z—stepper motors in X, Y, Z coordinates; MS X, MS Y, MS Z—motion sensors in X, Y, Z coordinates
Fig. 4.25 Structure of the SSAC F. NBCCA—neuroprocessor block for calculating control actions; MCC F—a finger coordinate calculation module; GFC1–GFC3—group finger controllers; ACS PF—system for automatic control of the phalanges of the fingers
The GFC1–GFC3 (see Fig. 4.25) generate and output control actions to the finger phalanx automatic control systems (ACS PF). In the leg controllers (CL1)–(CL6) of the ACS PF (see Fig. 4.26), error signals are calculated using the feedback sensor signals [linear displacement sensors (LDS), angular displacement sensors (ADS), force sensors (FS) and tactile sensors (TS)], and control actions on the corresponding leg motors (ML1)–(ML6) are generated according to the specified control laws (for example, PID laws); these motors carry out the required extensions of the legs (L1)–(L6) (see Fig. 4.26).
In addition, the GFC1–GFC3 generate and output control actions to the automatic control system of the reconfiguration rods (ACS RR). In it, as in the previous system, the controllers of the controlled reconfiguration rods (CCR R1)–(CCR R6) calculate error signals using the feedback signals of the LDS, ADS, FS and TS sensors and generate control actions according to the specified control laws (for example, PID laws) on the corresponding rod motors (MR1)–(MR6), which carry out the required extensions of the controlled reconfiguration rods. Tactile sensors and force sensors are usually used for the adaptation of the automatic control systems. Linear and angular displacement sensors ensure the closure of the automatic feedback control systems during transients.
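The text names only the family of control laws (for example, PID laws) that the leg and rod controllers apply to the error between the commanded and measured extensions. A minimal discrete PID sketch is shown below; the gains, the sampling period and the toy actuator model are assumptions chosen purely for illustration.

class PID:
    """Discrete PID law; the text cites PID as an example of the control laws used in CL1-CL6."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
    def update(self, setpoint, measured):
        error = setpoint - measured            # e.g. commanded vs. measured leg length (LDS signal)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: the leg is modeled as a pure integrator driven by the controller output.
pid = PID(kp=4.0, ki=0.5, kd=0.05, dt=0.01)
length, target = 0.0, 1.5                       # mm of required extension
for _ in range(500):
    u = pid.update(target, length)
    length += 2.0 * u * pid.dt                  # assumed actuator gain
print(round(length, 3))                         # settles near the commanded 1.5 mm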
Fig. 4.26 Structure of the ACS of the legs. GFC1–GFC3—group finger controllers; CL1–CL6—leg controllers; ML1–ML6—leg motors; L1–L6—legs; ADS—angular displacement sensors; LDS—linear displacement sensors; TS—tactile sensors; FS—force sensors
Subsystem for automatic control of the block with controlled rigidity
The SSAC RCB contains a block for calculating the rigidity of the fingers (BCRF) and group control current regulators (GCCR) for the controlled stiffness blocks. The BCRF calculates the required stiffness of the fingers using signals from the NBCCA, and the GCCR calculates and outputs the corresponding currents to the RCB.
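The last step, turning a required stiffness into a coil current for each controlled-rigidity block, can be sketched in a few lines. The linear coefficient and the current limit below are assumptions for illustration only; the text states just that the GCCR outputs currents corresponding to the stiffness computed by the BCRF.

def rcb_currents(required_stiffness, amps_per_stiffness_unit=0.02, i_max=1.5):
    # Hypothetical GCCR mapping: current proportional to required stiffness, clamped to a limit.
    return [round(min(i_max, amps_per_stiffness_unit * s), 3) for s in required_stiffness]

print(rcb_currents([10.0, 35.0, 120.0]))   # [0.2, 0.7, 1.5] A; the last value is clamped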
4.5 Using a Platform Based on SEMS Modules for Obtaining Radio Images in Astronomy

Recently, more and more attention has been paid to the creation of new methods and means of "radio vision", i.e. the conversion of received radio radiation into optical images. Such systems are being developed for applied tasks (for example, navigation), as well as for radio astronomy [29–34]. However, in order to obtain a radio image of an area using an antenna with a point receiver, it is necessary to view its individual sections sequentially. Therefore, it has recently been proposed to use a matrix receiver, which makes it possible to see the entire area at once. The advantages of this approach are obvious. First, at the same signal-to-noise ratio it saves observation time, which is always limited, in principle by a factor of N (N is the number of pixels). Second, if the source is variable, then with sequential mapping we obtain a picture that does not adequately reflect the state of the object over a given period of time.
A number of matrix receivers are already used in radio astronomy. For example, the 1.3 mm receiver of the National Radio Astronomy Observatory of
the USA is a 2 × 4 array of SIS receivers with a common heterodyne and a quasi-optical input [31]. A system with a slightly higher level of integration, with a single receiving module containing seven irradiators, waveguide SIS mixers and UPSH, was created at Chalmers University of Technology (the SISYFOS project [32]). JPL (CalTech) creates quasi-optical systems at 230 and 492 GHz based on a grid of 2 × 5 SIS junctions integrated with dipole antennas and placed on the flat surface of a parabolic quartz lens with a metallized parabolic surface [33]. A similar device based on a DBS and a hyper-hemispherical lens was created about 15 years ago at the JSC Gas-Turbine Engineering RPC "Salut" and successfully tested at the IAP RAS [34].
To successfully solve the problem of searching for objects of observation, the object must be captured in the field of view of the receiver matrix and then focused, i.e. it is necessary to scan quickly and accurately and then move the center of the matrix into the focus of the mirror system of the radio telescope. This is usually done with the help of a counterreflector/subdish (CRef). However, it is very difficult to achieve the required accuracy and scanning speed on large radio telescopes [35]. Therefore, it is proposed to install the matrix receiver on a SEMS platform and to scan directly with the receiver itself. In this case it becomes possible to scan quickly and accurately with simultaneous focusing.
Thus, the use of matrix receivers for radio vision is relevant and expedient, but there are a number of problems associated with the need to reach a compromise between the resolution of the radio telescope and the signal accumulation time that provides the required sensitivity. Let us look at these problems in more detail.
4.5.1 Problems of Creating a Matrix Radio Receiver

The problems of creating a matrix radio emission receiver are significantly different for bolometric receivers, which are used for measurements in a continuous spectrum, and for heterodyne receivers, which are used for spectral measurements. It is much easier to create a lattice of bolometers with a large number of elements than a matrix heterodyne receiver, since in the latter case a heterodyne common to all channels is needed. And since the power of the heterodyne is always limited, the number of channels cannot be increased above a rather modest value (~10 to 15), unlike bolometer arrays, in which the number of elements already reaches ~100 or more [36]. Another problem specific to heterodyne lattices is related to signal processing during spectral measurements, since a parallel spectrum analyzer must be attached to each element, and this is a bulky and expensive device. This circumstance also greatly limits the possibility of increasing the number of elements in a matrix heterodyne receiver.
The next problem is related to the cooling of the receivers. Modern high-sensitivity receivers operate at ultra-low temperatures to reduce their own noise. Heterodyne receivers require cooling to ~4.5 K. Such temperatures are achieved when cooled
with liquid helium. But using fill systems on a radio telescope for a long time is inconvenient and expensive, so closed-cycle cooling systems are more often used. Their cooling capacity does not exceed 1–1.5 W, which also limits the possible number of receiver elements. Bolometers operate at even lower temperatures, ≤0.3 K. Such temperatures are realized in cooling systems based on the helium isotope.
The optical scheme of matrix receivers should ensure the formation of the necessary radiation pattern of each element, a sufficiently dense "packing" of the elements, the minimum level of the cross-polarization component and, if possible, a long accumulation time, etc. The calculation of the quasi-optical scheme is usually based on the theory of Gaussian beams. In this case, there is a problem of "packing" the irradiators, which are usually placed in the focal plane of the antenna. On the one hand, in order to obtain a complete Nyquist sample, the irradiators must be located at distances

lo ≤ λF / (2D),   (4.2)

where F is the effective focal length and D is the diameter of the antenna, and the pixel sizes of the matrix, to ensure maximum resolution, should correspond to the width of the radiation pattern in the focal plane H, which depends on the wavelength and diameter:

H ∼ λ / da,   (4.3)

where da is the size of the aperture (opening) of the antenna. However, for optimal antenna irradiation the size of the horns should be several times larger, and to ensure a sufficient accumulation time and correspondingly high pixel sensitivity, their dimensions should also be several times larger than the width of the radiation pattern. For example, from the amplitude-frequency response obtained experimentally on RT-70 in Yevpatoria by Yu. V. Postnikov (ETU LETI) (see Fig. 4.27) it can be seen that, in order to ensure the accumulation of a signal on a pixel during the observation of a point source, i.e. to ensure that its image does not go beyond the pixel, pixel sizes l ≈ 10H are required. In addition, from Fig. 4.27 it can be seen that the amplitude of the oscillations depends on the angle of the antenna, as well as on the antenna design and the parameters of the automatic control system of the antenna drives, i.e. on the natural frequencies ω. Therefore, in order to ensure high efficiency of the use of matrix receivers for obtaining radio images in astronomy, it is first of all necessary to solve the problem of the optimal choice of pixel sizes depending on the angle of location and the wavelength of the received radiation. This problem can be solved on the basis of a compromise between these requirements and the adaptation of the matrix receiver to the current values of the angle of location and the wavelength of the radiation received by the antenna. Let us consider one of the solutions to this problem.
Fig. 4.27 Amplitude-frequency characteristic of RT-70 in Yevpatoria
4.5.2 Estimation of Optimal Pixel Sizes of Matrix Receivers

One reason to use the intensity of the focus oscillations in the focal plane of the radiation receiver as an estimate of the optimal pixel size, providing the maximum accumulation time and, accordingly, the maximum sensitivity of the receiver, is the fact that the effect of a radiation wave on the receiver is determined, as a rule, by its intensity I, i.e. the time-averaged value of the energy flux density [37]. Therefore, it is advisable to take the following as the maximum pixel size of the matrix receiver:

lmax = √(2I),   (4.4)

I = (1/T) ∫_{−T/2}^{T/2} i²(t) dt,   (4.5)

where i(t) is the oscillation of the antenna focus during its operation and T is the period of oscillation of the antenna focus during its operation. As a rule, it can be assumed that for a two-mirror antenna

T = 4π² / (ω1 ω2),   (4.6)
where ω1 and ω2 are the frequencies of the first vibration tones of the main dish (MD) and the subdish (SD), respectively. Then the fluctuations of the focus of the two-mirror antenna can be expressed, in a first approximation, in terms of the fluctuations of the edges of the main dish and the counter-reflector as follows:

i(t) = kz a1 sin ω1 t + kr a2 sin(ω2 t + φ),   (4.7)

where a1 and a2 are the oscillation amplitudes of the edges of the MD and the SD, respectively, φ is the phase shift between the oscillations of the edges of the MD and the SD, kz is the coefficient of dependence of the focus oscillations on the oscillation of the edge of the MD, and kr is the coefficient of dependence of the focus oscillations on the oscillation of the edge of the SD. The coefficients kz and kr can be expressed in terms of the antenna parameters as follows (see Fig. 4.28):

kz = sin ψ1 / sin ψ3,   (4.8)

kr = 2 sin ψ2 / sin ψ3,   (4.9)

where ψ1 is the angle of incidence of radiation on the edge of the surface of the MD, ψ2 is the angle of incidence of radiation on the edge of the surface of the SD, and ψ3 is the angle of incidence of the radiation from the edge of the surface of the SD on the focal plane. The angles ψ1, ψ2, ψ3 are calculated through the antenna parameters as follows (Fig. 4.28):

ψ1 = (1/2) arctg( D / (2(Fz − h)) ),   (4.10)

where Fz is the focal length of the MD, h is the depth of the MD, and D is the diameter of the MD of the antenna;

ψ2 = ψ1 − (1/2) arctg( dD / (2(DF + dFz − dh)) ),   (4.11)

where d is the diameter of the SD and F is the inter-focal distance;

ψ3 = arctg( 2(DF + dFz − dh) / (Dd) ).   (4.12)
Now, taking into account the expressions (4.4), (4.5) and (4.7), the maximum pixel size of the matrix receiver can be determined from the following expression:
Fig. 4.28 Motion of the rays
lmax = √( (2/T) ∫_{−T/2}^{T/2} (kz² a1² sin² ω1 t + 2 kz a1 kr a2 sin ω1 t sin(ω2 t + φ) + kr² a2² sin²(ω2 t + φ)) dt ).   (4.13)

The last expression can be simplified by calculating its maximum value. We introduce the following notation:

i1 = ∫_{−T/2}^{T/2} sin² ω1 t dt,   (4.14)

i2 = ∫_{−T/2}^{T/2} sin²(ω2 t + φ) dt,   (4.15)

i3 = ∫_{−T/2}^{T/2} 2 sin ω1 t sin(ω2 t + φ) dt.   (4.16)

Then from (4.14)–(4.16) we get:

i1 = ∫_{−T/2}^{T/2} sin² ω1 t dt = ∫_{−T/2}^{T/2} (1/2)(1 − cos 2ω1 t) dt = T/2,   (4.17)

i2 = ∫_{−T/2}^{T/2} sin²(ω2 t + φ) dt = ∫_{−T/2}^{T/2} (1/2)(1 − cos(2ω2 t + 2φ)) dt
= T/2 − (cos 2φ / 2) ∫_{−T/2}^{T/2} cos 2ω2 t dt + (sin 2φ / 2) ∫_{−T/2}^{T/2} sin 2ω2 t dt = T/2,   (4.18)

i3 = ∫_{−T/2}^{T/2} 2 sin ω1 t sin(ω2 t + φ) dt = cos φ ∫_{−T/2}^{T/2} 2 sin ω1 t sin ω2 t dt + sin φ ∫_{−T/2}^{T/2} 2 sin ω1 t cos ω2 t dt.   (4.19)

Since 2 sin ω1 t sin ω2 t = cos(ω1 − ω2)t − cos(ω1 + ω2)t, 2 sin ω1 t cos ω2 t = sin(ω1 + ω2)t + sin(ω1 − ω2)t, and

∫_{−T/2}^{T/2} sin(ω1 + ω2)t dt = 0,   ∫_{−T/2}^{T/2} sin(ω1 − ω2)t dt = 0,

we obtain

i3 = − 2 sin((ω1 − ω2)T/2) cos φ / (ω2 − ω1) − 2 sin((ω1 + ω2)T/2) cos φ / (ω2 + ω1).   (4.20)
Substituting the values from (4.17), (4.18) and (4.20) into expression (4.13), we obtain

lmax = √(I1 − I2),   (4.21)

where

I1 = kz² a1² + kr² a2²,

I2 = 8 kz kr a1 a2 cos φ ((ω2 + ω1) sin(T(ω1 − ω2)/2) + (ω2 − ω1) sin(T(ω1 + ω2)/2)) / (T(ω2² − ω1²)).

If (ω1 − ω2) and (ω1 + ω2) are integer multiples of 2π/T, then the second term (I2) in the last expression is equal to zero, and

lmax = √I1.   (4.22)
In most cases, the expression (4.21) can be used with sufficient accuracy to estimate the maximum pixel size of the matrix receiver. At the same time, the minimum pixel size can be taken as

lmin = H.   (4.23)
As an example, to estimate the maximum pixel size of a matrix receiver, we take the following values of the RT-70 antenna parameters: F = 24.2 m, Fz = 21 m, D = 70 m, d = 3 m, h = 14.58 m. Let us assume that a1 = 2 mm and a2 = 2 mm (a1, a2 are the oscillation amplitudes of the edges of the MD and the SD, respectively). From Fig. 4.27 it can be determined that ω1 = 8.5 s−1 and ω2 = 23.7 s−1 (ω1 and ω2 are the frequencies of the first vibration tones of the main dish and the counter-reflector, respectively). From formulas (4.10), (4.11) and (4.12) we get ψ1 = 39.8°, ψ2 = 38.04°, ψ3 = 86.5°. Further, from formulas (4.6), (4.8) and (4.9) we get T = 0.195 s, kz = 0.646, kr = 1.232. Then, substituting the calculated values into expression (4.21), we get I1 = 7.736 and I2 = 0.004. Therefore, in expression (4.21) the value of I2 can be neglected, and formula (4.22) can be used to estimate the maximum pixel size of the matrix receiver. In the example under consideration, lmax = 2.781 mm.
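The arithmetic of this example is easy to reproduce. The short script below evaluates formulas (4.6) and (4.8)–(4.12) for the quoted RT-70 parameters and then applies (4.22); the printed values agree with the figures in the text to within about one percent, the residual differences coming from rounding of the intermediate quantities.

import math

# RT-70 parameters and oscillation data quoted in the text.
F, Fz, D, d, h = 24.2, 21.0, 70.0, 3.0, 14.58    # m
a1 = a2 = 2.0                                     # mm, edge oscillation amplitudes
w1, w2 = 8.5, 23.7                                # s^-1, first vibration tones of MD and SD

psi1 = 0.5 * math.atan(D / (2 * (Fz - h)))                               # (4.10)
psi2 = psi1 - 0.5 * math.atan(d * D / (2 * (D * F + d * Fz - d * h)))    # (4.11)
psi3 = math.atan(2 * (D * F + d * Fz - d * h) / (D * d))                 # (4.12)

T = 4 * math.pi ** 2 / (w1 * w2)                  # (4.6)
kz = math.sin(psi1) / math.sin(psi3)              # (4.8)
kr = 2 * math.sin(psi2) / math.sin(psi3)          # (4.9)

I1 = kz ** 2 * a1 ** 2 + kr ** 2 * a2 ** 2
l_max = math.sqrt(I1)                             # (4.22), neglecting I2

print(f"psi1 = {math.degrees(psi1):.1f} deg, psi2 = {math.degrees(psi2):.2f} deg, psi3 = {math.degrees(psi3):.1f} deg")
print(f"T = {T:.3f} s, kz = {kz:.3f}, kr = {kr:.3f}")
print(f"I1 = {I1:.3f} mm^2, l_max = {l_max:.2f} mm")   # about 7.75 and 2.78 mm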
4.5.3 Matrix Receiver Adaptation Algorithms

It was shown above that in a millimeter-range radio telescope the pixel size that provides the maximum sensitivity of the receiver differs significantly from the pixel size that provides the maximum spatial resolution of the antenna. Therefore, in order to ensure optimal parameters of such radio telescopes during radio astronomical observations and radio astronomical location, it is advisable to adjust (adapt) the pixel sizes of the matrix radiation receivers to the observed radiation sources. At the
same time, the adaptation algorithms of matrix receivers will vary slightly depending on the operating mode of the radio telescope.
In the scanning mode, the antenna continuously moves along a given trajectory at a given speed. In this case, the signal and the information about the position of the antenna are read at such a frequency that during the interval between counts the antenna shifts only by a small part of the radiation pattern (RP). To increase the accuracy of this mode in the millimeter range, constant adjustment of the antenna focus is required by controlling the shields of the MD and the SD. In addition, in order to meet the requirements for the accuracy of movement along a given trajectory in the millimeter range, it is necessary to control the position of the optical axis of the antenna, which depends on such disturbing factors as weight and wind disturbances, and to correct the trajectory by the corresponding movements of the SEMS platform carrying the matrix receiver. An increase in the accuracy of the optical axis position control is possible by reducing the pixel size of the matrix receiver and adjusting the position of the matrix receiver on the focal axis of the mirror system of the radio telescope. However, at the same time the signal accumulation time decreases because of the oscillations of the antenna design elements (see Fig. 4.28). Therefore, in order to achieve an optimal compromise between the conflicting requirements of sensitivity and spatial resolution, it is advisable to use the following matrix receiver adaptation algorithm in this mode of antenna operation.
1. Input of the initial data: λ, H, v, f1(β), f2(β), k1, k2, kz, kr, n = 1, l0, N, where H is the width of the radiation pattern in the focal plane, v is the speed of the antenna, k1 is a coefficient selected by the operator in the range from 0.1 to 1, f1 and f2 are the pre-calculated or experimentally determined natural frequencies (of the first tone) of the MD and the SD, respectively, k2 is a coefficient selected by the operator in the range from 50 to 200, provided that Δt1