System Architecture and Complexity
Systems of Systems Complexity Set coordinated by Jean-Pierre Briffaut
Volume 2
System Architecture and Complexity: Contribution of Systems of Systems to Systems Thinking
Jacques Printz
Foreword written by Daniel Krob
First published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address: ISTE Ltd 27-37 St George’s Road London SW19 4EU UK
John Wiley & Sons, Inc. 111 River Street Hoboken, NJ 07030 USA
www.iste.co.uk
www.wiley.com
© ISTE Ltd 2020
The rights of Jacques Printz to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2020932472
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-561-9
Contents
Foreword
Preface

Part 1. The Foundations of Systemics

Introduction to Part 1

Chapter 1. The Legacy of Norbert Wiener and the Birth of Cybernetics
  1.1. The birth of systemics: the facts
    1.1.1. The idea of integration
    1.1.2. Implementation and the first applications
  1.2. Modeling for understanding: the computer science singularity
  1.3. Engineering in the 21st Century
  1.4. Education: systemics at MIT

Chapter 2. At the Origins of System Sciences: Communication and Control
  2.1. A little systemic epistemology
  2.2. Systems sciences: elements of systemic phenomenology
    2.2.1. Control/regulation
    2.2.2. Communication/information
  2.3. The means of existence of technical objects

Chapter 3. The Definitions of Systemics: Integration and Interoperability of Systems
  3.1. A few common definitions
  3.2. Elements of the system
  3.3. Interactions between the elements of the system
  3.4. Organization of the system: layered architectures
    3.4.1. Classification trees
    3.4.2. Meaning and notation: properties of classification trees

Chapter 4. The System and its Invariants
  4.1. Models
  4.2. Laws of conservation
    4.2.1. Invariance
    4.2.2. System safety: risks

Chapter 5. Generations of Systems and the System in the System
  5.1. System as a language
  5.2. The company as an integrated system
    5.2.1. The computer, driving force behind the information system
    5.2.2. Digital companies

Part 2. A World of Systems of Systems

Introduction to Part 2

Chapter 6. The Problem of Control
  6.1. An open world: the transition from analog to all-digital
  6.2. The world of real time systems
  6.3. Enterprise architectures: the digital firm
  6.4. Systems of systems

Chapter 7. Dynamics of Processes
  7.1. Processes
  7.2. Description of processes
    7.2.1. Generalizing to simplify
    7.2.2. Constructing and construction pathways
    7.2.3. Evolution of processes
    7.2.4. Antagonistic processes: forms of invariants
  7.3. Degenerative processes: faults, errors and "noise"
  7.4. Composition of processes
    7.4.1. Antagonistic interactions
  7.5. Energetics of processes and systems

Chapter 8. Interoperability
  8.1. Means of systemic growth
  8.2. Dynamics of the growth of systems
    8.2.1. The nature of interactions between systems
    8.2.2. Pre-eminence of the interaction
  8.3. Limits of the growth of systems
    8.3.1. Limits and limitations regarding energy
    8.3.2. Information energy
    8.3.3. Limitations of external origin: PESTEL factors
  8.4. Growth by cooperation
    8.4.1. The individuation stage
    8.4.2. The cooperation/integration stage
    8.4.3. The opening stage

Chapter 9. Fundamental Properties of Systems of Systems
  9.1. Semantic invariance: notion of a semantic map
  9.2. Recursive organization of the semantic
  9.3. Laws of interoperability: control of errors
    9.3.1. Models and metamodels of exchanges
    9.3.2. Organization "in layers" of the models and systems
    9.3.3. Energy performance of the interaction between systems
    9.3.4. Systemic approach to system safety
  9.4. Genealogy of systems

Conclusion
List of Acronyms
References
Index
Foreword
The systems that mankind structures and constructs are at the very heart of the modern world: in fact, we only need to think of the social and political systems, complex and industrial systems, monetary and financial systems, energy production and distribution systems, transport systems, communication systems and, more generally, the numerous organizational and technological systems that surround us to realize that "engineered" systems1 are simply everywhere…

All these systems are becoming characterized more and more both by a very high level of intrinsic complexity and by ever-increasing interdependences. Mastering these systems has therefore become a real challenge, in a world undergoing profound transformation where they must be able to resist the constant and often contradictory pressure of citizens and consumers, markets and regulations, competition and technological innovation, to cite just a few examples of the forces that they are subject to2 in almost all fields of industry and services where complex systems play a role (aeronautical, automobile, digital, energy, civil engineering, high tech, naval, railways, mines, space, information systems, etc.).

Mastering the complexity of a system fundamentally means being capable of mastering its integration process – in other words, its construction by interconnection and/or connection of smaller systems. This process certainly still remains largely misunderstood insofar as it generates the mysterious phenomenon of emergence:
1 Living systems are another story and we will limit the scope of our considerations to engineered systems, in other words to artificial systems (as opposed to natural systems) architectured by mankind. This already covers a vast spectrum given that technical systems and organizational systems are both probably going to be included in this conceptual framework. 2 Not forgetting the intrinsic limits of our planet’s physical and environmental resources that are impossible not to take into account.
An integrated system will effectively always have new properties – known as "emerging" properties – which cannot easily be obtained or expressed with its basic building blocks.

To thoroughly understand this last concept, let us consider, as a simple example, a wall made only of bricks (without mortar to bind them). A simple systemic model of a brick can then be characterized by:

– a functional behavior consisting, on the one hand, of providing reaction forces when mechanical forces act on the brick and, on the other hand, of absorbing rays of light;

– an internal behavior given by three invariant states: "length, width and height".

In a composition of brick models like these, a wall will simply be a network of bricks bound by the mechanical action/reaction forces. But such a wall can mechanically only have a functional behavior consisting of absorbing rays of light. This is clearly not the usual behavior of a wall, since the holes that exist in real walls (for windows), and that must allow light to pass through, have to be taken into account! We will also observe that it is difficult – and probably vain – to attempt to express the shape of a wall (which is one of its typical internal states) as a function of the length, the width and the height of the bricks that make it up. We are therefore obliged to create a dedicated, more abstract systemic model to specifically model a wall when we want to express its real properties.

In systemic terms, this is expressed by observing that wall properties such as "allows light to pass through" or wall characteristics such as its "shape" are emerging properties and characteristics of the wall, meaning that they can only be observed at the scale of the "wall" system and not at the scale of its "brick" sub-systems, and that they therefore emerge from the integration process of the "brick" sub-systems which leads to the "wall" system. This mechanism of emergence is in fact a universal characteristic of systems: every system always has emerging properties in the sense that we have just explained.

These considerations can appear logical, or even naive, but unfortunately we can only observe in practice that their consequences are generally not understood, which leads to the numerous quality problems and the lack of control observed in numerous modern complex systems. Most engineers and decision-makers effectively continue to think that control of the components of an integrated system is sufficient to control the system overall. However, the very nature of the integration mechanism requires, when an integrated system is to be designed, more than directors in charge of components: it is also essential to entrust the high-level integrative model of the system under consideration to a specific director. This is the key role of the system architect, which unfortunately often does not exist in most industrial organizations, as strange as this may seem to anyone who is conscious of the underlying systemic fundamentals!
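To make the wall example concrete, here is a minimal illustrative sketch (not from the book; all names and dimensions are hypothetical) in which the "lets light through" property is only defined – and only computable – at the level of the wall assembly, never at the level of an individual brick:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Brick:
    # The whole systemic model of a brick: three invariant states...
    length: float
    width: float
    height: float

    def absorbs_light(self) -> bool:
        # ...and one functional behavior: a brick only absorbs light.
        return True

class Wall:
    """A wall as a network of bricks; a hole (window) is modeled as None."""

    def __init__(self, layout: list[list[Optional[Brick]]]):
        self.layout = layout

    def lets_light_through(self) -> bool:
        # Emergent property: observable only at the scale of the "wall"
        # system, never derivable from any single "brick" sub-system.
        return any(cell is None for row in self.layout for cell in row)

def b() -> Brick:
    return Brick(0.20, 0.10, 0.05)

wall = Wall([[b(), b(), b()],
             [b(), None, b()],   # a window opening
             [b(), b(), b()]])
assert wall.lets_light_through()
```

Note that no method of Brick is consulted to answer the question: the emergent property lives entirely in the layout of the assembly.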
Considering the integration of a heterogeneous complex system – the resultant of the interaction of several homogeneous systems – necessarily requires transverse reasoning, constructing models of a new type that capture the emergence by coherently integrating the parts of each homogeneous model into an overall systemic model. Yet it must be observed that nothing prepares us for this new paradigm of systemic reasoning, which calls for bringing together expertise of extremely varied natures and for moving beyond the traditional silos in which knowledge is organized, although we have known for a long time3 that we cannot optimize a system as a whole by optimizing each of its sub-systems.

The emergence of a true science of systems – systemics – capable of rigorously tackling the numerous problems presented by the design and the management of the evolution of modern complex systems is therefore an urgent requirement if we want to be able to bring satisfactory answers to the numerous, profoundly systemic challenges that humanity is going to have to respond to at the dawn of the third millennium. This emergence is, of course, not easy, because the development of systemics mechanically runs up against the set of classic disciplines, each of which attempts to provide part of the explanations required for understanding a system and which therefore do not naturally appreciate a new discipline attempting to encapsulate them in a holistic approach.

This book by Jacques Printz is therefore an extremely important contribution to this newly emerging scientific and technical discipline: it is effectively one of the very rare "serious" books published that provides a good introduction to systemics. It offers an extremely wide view of the field, approaching it through system architecture – in other words, through the part of systemics that is interested in the structure of systems and in their design processes – which allows each and every one of us to understand the issues and the problems related to systemics. We can therefore only encourage the reader to draw from the essence of Jacques Printz's masterful book, which combines reminders of past developments that explain how systemics emerged, an introduction to the key concepts of systemics, and practical examples that allow the nature and scope of the ideas presented to be wholly understood.

However, it is not possible to easily cover the entire perimeter of systemics, because it is too broad a subject. Many other areas could in fact be the subjects of other books: system dynamics, system sizing and optimization, system evolution trajectories, the design of homogeneous system families, agile and collaborative approaches for system development, the governance of systems, etc.

3 This result is due to Richard E. Bellman, in an article on optimal control dating from 1952. He showed that the only systems that can be optimized by parts are linear systems, a class to which no complex system belongs in practice.
Readers who wish to know more can, for example, benefit from the series of international conferences "Complex Systems Design & Management" that the CESAMES association organizes each year in Paris and every two years in Asia4, and which allow the vital signs of the systemic approach applied to the engineering of complex systems to be taken regularly. The numerous conferences organized by the international organization for systems engineering, INCOSE (International Council on Systems Engineering), are also excellent places to find out more about this field, and to meet the international community that is interested in these problems.

The universe of systems is in fact a "continent"5 that constitutes a true scientific and technical field for a real science of systems. However, systemics still remains to be constructed, necessarily in a profound interaction between theory and practice, despite the awareness of great pioneers such as H. Simon and L. von Bertalanffy. Unfortunately, there have not been many who have followed in their footsteps. Let us hope, therefore, that the systemic challenges that the modern world has begun to tackle are at last going to allow this science to emerge.

Daniel KROB
Chairman, CESAMES (Centre d'excellence sur l'architecture, le management et l'économie des systèmes – French Center of Excellence for Architecture, Management and Economics of Systems, http://cesam.community/en/), Paris
Professor, École Polytechnique, Paris
INCOSE Fellow
4 The conference “Complex Systems Design & Management Asia” was organized from 2014 to 2018 in Singapore. The 2020 edition of this conference is being organized in Beijing with the support of the Chinese division of INCOSE and of the Chinese Aeronautical Federation. 5 Using an expression that Marcel Paul Schützenberger would probably not have denied.
Preface
This book has a long history! Over the course of our respective experiences, we became convinced of the need to give importance to teaching of a new genre, one that has accompanied the development of IT, and of IT systems, since their creation. The milestones of this evolution can be summarized in a few titles: Industrial Dynamics and Principles of Systems by J. Forrester, The Sciences of the Artificial by H. Simon, and the series by G. Weinberg, Quality Software Management, Volume 1: Systems Thinking (or even his An Introduction to General Systems Thinking), accompanied by three other volumes. Perhaps also the three volumes by J. Martin, Information Engineering.

In the 1990s, there was a major event whose overwhelming effect on our system of thinking has perhaps never been measured completely: IT definitively ceased to be organized as a centralized system around a "large" controlling mainframe, to become a distributed system, an assembly of "clients" and "servers" that provide mutual services to each other and that must, nevertheless, be integrated. There is no "center" in the ordinary meaning of the term or, rather, the center is everywhere; everything interacts. Machines that would soon be hand-held were infiltrating and infusing into society as a whole, in all fields. The challenge is expressed in two words: parallelism and integration, notwithstanding a quality of service connected to the diffusion of the technology and the risk associated with its use.

In this context, under the instigation of C. Rochet, who was in charge of a training program for these new systems at the IGPDE1, the idea of proposing a complete training cycle that gave great importance to this systems thinking was born, in association with the École Polytechnique and via the chair of Complex Systems Engineering directed by Daniel Krob, with useful contributions from the CNAM.

1 Institut de la gestion publique et du développement économique (French institute for public management and economic development); permanent training operator of the Ministry for the Economy and Finances and of the Ministry for Action and Public Accounts.
A process of the same nature was developed at the Ministry of Defense via the DGA (Direction générale de l'armement – French Directorate General of Armaments). An original training program was created from this intense activity, indissociably combining an essential conceptual contribution with an immediate implementation of the newly acquired concepts, all of which led to the training in systems architecture that is currently provided by CESAMES Academy, a course with the RNCP2 label.

This book is the result of this history and of the countless discussions, reports, presentations, conferences, trials and errors, etc. that have accompanied it. The observation drawn from the literature available in French was that the existing works were either obsolete – dating from before the distribution revolution – or incomplete and/or ill-suited, because they were too qualitative and not integrated into a project framework that would allow coordinated action between all the parties involved and the engineering teams, which are themselves distributed. To create systems that respond to the present and future requirements of their users, we first need to "think" complexity, in a context that is precisely the one proposed in this book – hence its title and subtitle.

Jacques PRINTZ
February 2020
2 Répertoire national des certifications professionnelles – French national register of professional certifications, see http://www.rncp.cncp.gouv.fr/.
PART 1
The Foundations of Systemics
Introduction to Part 1
In this first part, we will focus on the most fundamental notions of systemic analysis. They are threefold, and are summarized in a compendium in the conclusion to this book. These three principles need to be considered as distinct but indissociable – I repeat, indissociable. Over and above the usual definitions that have been encountered here and there ever since systems became a talking point – definitions that are not strictly such, because they are no longer purely descriptive – we are going to closely examine what it means to say that a set of interacting entities can be described as a "system": in other words, how a system is constructed from its primitive elements and what differentiates it from a simple group with no clearly defined objective.

This differentiation is the result of an operation, or an operator in the mathematical sense of the term, which today appears to be central to this problem: having an integration operator means that a "sum" of disparate elements will appear, once the operation has been carried out, as a unique entity, a unit, that reveals its newly created functional capacities only via its external interfaces. To interact with it, there is no need to know how it is made, but it is necessary to know what it is used for and what its service contract is.

An entity that is an "integrated" system therefore has two aspects: (1) a functional aspect, in a way logical and atemporal – "timeless" – which is more than the sum of its parts; and (2) a solid, physical aspect that allows the system to carry out its functions in the material, energetic, temporalized and quantified universe that is ours, without forgetting the "noise" of the environment. These two aspects are in a complementary relationship: they are inseparable, although different.
Once it is "integrated", this new unit can in turn become an input into the construction of new units at a higher level, in a hierarchy of abstractions that are recursively interwoven with each other. This is what is commonly known as a "system of systems", which is coupled together more loosely than its constituent systems. This type of system appeared in parallel with the miniaturization of computers, thanks to the technologies for the integration of components that were perfected in the 1980s. From that point on, it was possible to distribute the computing power to process the information where required and, as they say, on demand. This turning point took place in the 1990s, initially in defense and security systems, and was then rapidly generalized in the decade 2000–2010 with the ubiquity of the Internet and of the World Wide Web.

This revolution – since it is much more than a simple evolution – explicitly brought out a central architectural motif that constitutes the heart of systemics, because for the integration operator to play its role, the elements still had to be integrable, and they had to have something in common: the information that makes them exist. This "something" is an "exchange model" that we will detail in Chapters 8 and 9.

Finally, a new phenomenon to consider is the all-pervasive nature of errors. Not only in the physical components of systems, but also – and this is entirely new at this scale – in their functional component, generally in the form of programs, no matter how this component is integrated into the system. This is the third aspect of our approach. These errors will lead to malfunctions with consequences of varying degrees of seriousness, depending on the context of the system's use. System safety, including its human component, becomes a main preoccupation, one that the systemic approach allows us to understand better in order to construct systems whose architecture guarantees that they are both robust and resilient to inevitable hazards. This is an intrinsic and necessary dimension of the overall complexity, required to give systems the best possible autonomy, given the technologies and the methods used to develop and implement them.
Chapter 1. The Legacy of Norbert Wiener and the Birth of Cybernetics
Without intending to be an exhaustive historical account, this chapter provides some specific details regarding the concerns of the founders of what is now the science of systems, or systemics, in order to understand the stakes, which remain extremely relevant today. Its objective is to avoid the anachronisms that are often the origin of serious misinterpretations. In this chapter, we propose to revisit the key event from which system sciences, or systemics, originated, and in which the mathematician N. Wiener, professor at MIT, was an emblematic actor1. The summary that we propose does not claim to provide a chronological history of the facts, insofar as such a history could even be reconstructed2 – which would in any case not be of great interest. It attempts, rather, to recreate the dramatic climate of the era, in which all available researchers found themselves caught up in the Anglo-American war effort against totalitarian regimes. Some did so with conviction and without scruples, such as J. von Neumann; others out of a moral duty, such as N. Wiener, who was a pacifist and felt disgusted by violence. It was a period of intense interactions and exchanges between exceptional scientific personalities, often of European origin, for whom urgent action against totalitarian regimes was the common good. To make sense of this situation, it is necessary to imagine putting ourselves in their place in order to judge the potential importance of the contributions made by each one, and to do this, it is essential to have a thorough understanding of the problem.

1 But not the only actor! J. von Neumann played a major role, if not a more important one, on both the practical and theoretical levels. For a complete overview, refer to G. Dyson, Turing's Cathedral – The Origins of the Digital Universe, Penguin, 2012; W. Aspray, John von Neumann and the Origins of Modern Computing, MIT Press, 1990, in particular Chapter 8, "A theory of information processing".

2 One of the essential works that best describes this very particular atmosphere is the work by S.J. Heims, John von Neumann and Norbert Wiener, From Mathematics to the Technologies of Life and Death, MIT Press, 1980; and more recently, P. Kennedy, Le grand tournant: pourquoi les alliés ont gagné la guerre, 1943–45, Perrin, 2012.
REMARK.– We have voluntarily ignored the contribution made by the biologist L. von Bertalanffy and his General System Theory, which today is only of historical interest3. Its influence in the field of engineering has been small, even zero, given the level of generality at which it was situated; in any case, engineering was not his field of concern, contrary to N. Wiener, C. Shannon, J. von Neumann or A. Turing, who "got their hands dirty" with machines and/or real systems.

1.1. The birth of systemics: the facts

The problem presented to N. Wiener4 and his group at MIT5 was to study methods to increase the number of shots hitting the target and so improve anti-aircraft defense, whose effectiveness could be measured by the ratio of the number of shots fired per downed airplane. The overall objective was to improve the management of "shell" resources, inflict the greatest damage on the enemy (airplane pilots are a rare resource) and, above all, save human lives.

At the time when N. Wiener began his reflection, a summary of the technological environment would read as follows.

Gunners had perfect knowledge of ballistics and used firing charts to make adjustments to land-based cannons, for straight shots and for parabolic shots, over distances of up to 20–30 km (with a correction required for the Coriolis force caused by the rotation of the Earth). The final targeting was carried out by observers close to the points of impact, who gave correction orders by telephone in 1914 and then by radio in 1939–1940.

Shots were fired by the DCA (air defense) in a completely new environment, onto moving targets – airplanes with speeds of up to 600 km/h, in other words approximately 170 m/sec, at altitudes of 5,000–6,000 m. With shells of high initial speed, 800–1,000 m/sec, 5–6 seconds passed before the explosion capable of destroying the target was triggered, without taking into account the deceleration that erodes the initial speed.
3 For an exhaustive study, refer to the thesis by D. Pouvreau, Une histoire de la “systémologie générale” de Ludwig Bertalanffy – Généalogie, genèse, actualisation et postérité d’un projet herméneutique, EHESS, March 2013. 4 Refer to Chapter 12, “The war years 1940–1945”, from his autobiography, I Am a Mathematician, MIT Press, 1964. 5 Among others J. Bigelow; refer to article “Behavior, purpose and teleology”, cowritten by A. Rosenblueth, N. Wiener, J. Bigelow, which can be downloaded from the JSTOR website.
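A back-of-the-envelope check makes clear why a direct hit was so improbable. The figures below are purely illustrative, taken from the orders of magnitude quoted above:

```python
# Rough numbers behind the gunnery problem described above (illustrative only).
target_speed = 600 / 3.6   # bomber at 600 km/h, i.e. ~167 m/s
flight_time = 5.5          # ~5-6 s of shell flight before the fuse triggers

# Distance the target covers while the shell is in flight:
lead = target_speed * flight_time   # ~900 m

print(f"the gun must aim ~{lead:.0f} m ahead of the observed position")
```

Close to a kilometer of anticipation, against a pilot free to maneuver: hence the need, described below, to "spray" a very wide area.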
REMARK.– In 10 seconds of freefall, a heavy object travels approximately 500 m and reaches a speed of 100 m/sec (360 km/h). Combined with the speed of the airplane and with the initiatives taken by the pilot, who can change course, we can easily understand that a "shot hitting the target" is something highly improbable. It is also the reason why it is necessary to "spray" a very wide area, with many cannons and a great quantity of munitions, to have any hope of making an impact; it is best to forget about efficiency.

The first radar (radio detection and ranging) equipment made its appearance in England, where it played an essential role and allowed the RAF to contain the attacks of the Luftwaffe. Radar allowed regrouping bomber formations to be "seen" far in advance – something that no observer would have been able to detect by visual means alone, given the distances, in situ information being an impossibility. Thanks to radar, the command center for aerial operations understands what is probably going to happen and can therefore position its retaliation in advance, with maximum efficiency for the "attack" airplanes. In other words, radar provides all the information about the trajectory followed by the target – position, speed and possible acceleration – and integrates all of it to calculate the probable trajectories of the bombers.

MIT (Massachusetts Institute of Technology) had, at the time, one of the very first electronics laboratories for the study of servomechanisms – the Lincoln Laboratory. It later proved its worth in the construction of the SAGE system6, the first anti-aircraft defense system that can be described as modern. Servomechanisms were becoming essential devices for the automatic adjustment of mobile mechanical parts, in such a way as to control their movements as minutely as possible and absorb the shocks that generate destructive vibrations. This is fundamental for moving the chassis of anti-aircraft guns and following the trajectory of the target airplanes. Servo-control of the movement requires precise measurements, in particular of the mass to be moved, in order to activate the effectors with the right adjustment data and make cyclic corrections. These developing automatisms led to all the studies concerned with transfer functions, which have the fundamental property of reincorporating as inputs some of the results that they produce as outputs. Knowledge of the transfer function – obtained empirically by experimentation, theoretically by calculation, or by a combination of both – is therefore fundamental data for the correct control of the mechanisms in question.
6 Semi Automatic Ground Environment; refer to the book by K.C. Redmond, From Whirlwind to MITRE: The R&D Story of The SAGE Air Defense Computer, MIT Press.
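As a minimal sketch of the feedback principle just described – part of the output is re-injected as an input and compared with the command – one might write the following (the names, gain and setpoint are hypothetical, not from the book):

```python
def servo_track(setpoint, steps=60, gain=0.4, dt=0.1):
    """First-order servo loop: the measured output is fed back, compared
    with the command, and the error drives the next correction."""
    output = 0.0
    trace = []
    for k in range(steps):
        command = setpoint(k * dt)
        error = command - output   # feedback: the output re-enters as input
        output += gain * error     # proportional actuator correction
        trace.append((k * dt, command, output))
    return trace

# Example: a gun mount chasing a target bearing drifting at a constant rate.
history = servo_track(lambda t: 2.0 * t)   # degrees per second, illustrative
```

The pair (gain, dt) plays the role of the transfer function here: too low a gain and the mount lags the target; too high and it overshoots and oscillates.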
Using these mechanisms, amplification became possible in ratios of 1:100, 1:1,000 or much more, on the principle of the control stick or of the rudder. They use a low-energy signal (an electric current, the shearing of a hydraulic circuit) which, at a smaller scale, closely reproduces a phenomenon of much greater energy, such as the movement of a gun carriage that may weigh several hundred kilos, or even a few metric tons for the turrets of marine artillery, for which the erratic movements of the ship must be compensated. Obtaining "true" signals, uncontaminated by the random "noise" of the environment, is therefore absolutely essential.

It was also known how to "calculate" some of the fundamental operations of differential and integral calculus, in a rapid and unexpected manner, using analog calculators that were quite difficult to manipulate, such as those that equipped the gunsights of B29 bombers – another celebrated airplane among those used during World War II. The first computers arrived a few years later, but it is certain that N. Wiener already had a relatively precise intuition of them, given the exceptional environment that surrounded him and to which he was one of the star contributors. He knew J. von Neumann well and had held discussions with him on many occasions7. Rather than making long and fastidious calculations by hand, consulting the tables and charts familiar to engineers, it was becoming simpler to carry out the calculations directly, with functions "programmed" in analog form. And since, in addition, it was known how to "clean" a signal carrying information of its background noise, it was possible to calculate the adjustment parameters required by the various devices on demand.

1.1.1. The idea of integration

N. Wiener's stroke of genius was to have understood that by integrating all these technologies appropriately, and by managing to ensure that all those working on their development worked together, it would be possible to create machines capable of imitating behaviors such as the gesture of a baseball player. A popular sport among students at Ivy League universities on the East Coast, baseball requires precisely intercepting balls arriving with strange trajectories ("curveballs") thrown by the "pitcher". In a certain way, we can say that the machine has, in the memory of its electronic circuits, a "model" of the behavior of the objects on which it is supposed to act – to be precise, the airplane/pilot couple.

7 Refer to John von Neumann: Selected Letters, History of Mathematics, vol. 27, London Mathematical Society, American Mathematical Society, Miklos Rédei.
The dedicated term to designate this type of capacity, which only appeared with the first IT applications 30 years later, is "metamodel", because the rules that define it are independent of the medium. They are beyond, or above (the meaning of the Greek prefix μετα), the material format. Since programming is itself a model, the model of this programming is described as a metamodel to avoid confusion; it is a logical entity that is capable of instructing itself with its own rules, such as the universal Turing machine, which can instruct itself, or the grammar of written French, which is in this case its own metalanguage. When computers began their exponential development, in the 1970s–1980s, it was quickly understood that these fundamental rules for understanding systems can, and even must, be specified in a language that is independent of the various possible targets. They can be expressed in the language of the universal Turing machine, independent of all technology8 and therefore adaptable to all physical formats that allow their logic to be expressed in a representable way (analog circuitry, electronic tubes and soon transistors). This is a pure abstraction, totally immaterial, but without which it is impossible to act. The model obtained in this way concentrates the information required for the action of the system (what it does, in operational terms) and for action on the system (how it is done) in its physical reality; it defines the valid means of interaction and, to this end, it operates as a grammar of the actions to be carried out.

REMARK.– We note that any abstraction level in a system is a model of the entities that are thus abstracted. In a modern computer, there are easily 20 of them! Rather than piling up the "meta", it is simpler and clearer to say that layer N has a model of the layer immediately below it in the hierarchy of system entities, which allows services to be programmed on this layer using only the interface defined by the model.

Now let us return to 1942–1943, when one of the first laws of the new systemics could already be formulated: To interact with an environment, any system must have an abstract model of this environment – meaning of the information – independently of the physical/organic nature of this environment and of the system (law no. 2 of the compendium, section C.2.2). In the case of the firing system studied by N. Wiener, the model comprises the equations of rational mechanics, Lagrange/Hamilton version, applied to the ballistics of atmospheric trajectories whose parameters must be known: temperature, pressure, humidity, winds, etc.
8 In theoretical computer science, the Church–Turing thesis states that everything that is “calculable”, in the logical sense of the word, can be expressed by a Turing machine which becomes, due to this, a universal modeling language; this is the case today where computer science has diffused into all disciplines.
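As a hedged illustration – standard exterior ballistics, not the book's own equations – such a model might take the Newtonian form

$$m\,\ddot{\mathbf{x}} \;=\; m\,\mathbf{g} \;-\; \tfrac{1}{2}\,\rho(h)\,C_d\,A\,\lVert \dot{\mathbf{x}} - \mathbf{w} \rVert\,\bigl(\dot{\mathbf{x}} - \mathbf{w}\bigr)$$

where $\rho(h)$ is the air density at altitude $h$ (a function of temperature, pressure and humidity), $C_d$ and $A$ are the drag coefficient and cross-section of the shell, and $\mathbf{w}$ is the wind velocity. This is exactly the kind of abstract model of the environment that law no. 2 requires the system to embed.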
Machines designed on these principles – imitating certain behaviors that until then were characteristic of living organisms – were thus attributed an entirely mechanical "intelligence". Later, in the 1950s, this was known as "artificial intelligence"9, though completely constructed by human intelligence, thanks to knowledge of the laws of nature and to the capacity for rapid and precise retroactions, for second- or third-order corrections (these are infinitesimal quantities) – meaning the magnitude at instant t and its first and second derivatives. More prudently, Norbert Wiener used the term "machine learning" as early as 1946 – less misleading and more precise than the media-friendly "artificial intelligence", made to create dreams! It is a term that is still used in the United States, in preference to artificial intelligence.

In fact, the airplane's trajectory is not random, because it is subject to different types of determinisms that the pilot must take into account in their decisions, even though these may be impossible to predict, although probable. The airplane is subject to the laws of rational mechanics and to the mechanics of its structures, which can break – the strength of materials being one of the pillars of the engineering sciences. Nor can the pilot execute maneuvers that are dangerous, even fatal, to their own physiological safety (blackout, increase in apparent weight and difficulty moving, loss of consciousness due to the centrifugal force, etc.), and they must take into account the decisions of the other pilots in their squadron. Despite all these constraints, the trajectory can be extrapolated and calculated thanks to the data collected by the radars. N. Wiener was a master of the calculation of apparently random trajectories, such as those of particles subject to Brownian motion, which he was the first to know how to calculate, using Lebesgue integrals, of which this was one of the first concrete applications. He considered statistical mechanics, and its creators J.C. Maxwell, L. Boltzmann and J. Willard Gibbs, as an absolutely essential contribution to the new physics that was taking shape, and as a more correct understanding of the operation of nature and of its representation.

REMARK.– Let us remember that for a curveball, at least four points are required. With a single point, we are in "a ball"; two points determine a direction (the speed); with three we have the plane of acceleration (the curvature); and with four we have two planes and therefore the torsion of the curve – in other words, a position and three vectors. With these four parameters, the movement can be located in a narrower and narrower cone.

This data can be the basis for calculating the exact values to provide to the servomechanisms, to point the guns in the right direction, follow the movement of the airplanes and adjust the shells' fuses so that they explode at the right moment, in the most probable region of presence of the airplanes. All of this takes place while correcting the margin with the latest data coming from the radars, as long as the calculations are made fast enough and the data is transferred without error10. Effectively, between the instant T of the situation and the instant when the transmitted orders become effective, there is a time difference ΔT; if ΔT is too big, given the evolution of the situation, the orders will no longer be suited to it. This interval ΔT, the response time and/or latency of the phenomena, is a fundamental piece of information regarding the transfer function.

9 Term used by J. McCarthy, in 1955–1957, then at Dartmouth, with a preference for the austere "mathematical logic".

10 In his book dated 1951, Reliable Organisms From Unreliable Components, von Neumann wrote: "Error is viewed, therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process." Also refer to Chapter 15, "Theory of self-reproducing automata", in the book by G. Dyson, Turing's Cathedral, already cited, which is essential regarding the central role played by errors in the engineering process.
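The four-point argument of the remark above translates directly into computation. Here is a hypothetical sketch (the sampling period and radar fixes are invented for illustration) that estimates speed, acceleration and the third-order term by finite differences, then extrapolates the probable position ΔT seconds ahead:

```python
import numpy as np

dt = 1.0                                  # radar sampling period (s), assumed
# Four successive 3D radar fixes of the target, in meters (illustrative):
p = np.array([[0.0,   0.0, 5500.0],
              [168.0, 4.0, 5496.0],
              [338.0, 16.0, 5488.0],
              [510.0, 36.0, 5476.0]])

v = np.diff(p, axis=0) / dt               # speeds (needs two points each)
a = np.diff(v, axis=0) / dt               # accelerations (needs three points)
j = np.diff(a, axis=0)[0] / dt            # third-order term (needs all four)

def predict(tau):
    """Taylor extrapolation from the latest fix: the 'narrowing cone'."""
    return p[-1] + v[-1] * tau + 0.5 * a[-1] * tau**2 + j * tau**3 / 6.0

# The firing orders must anticipate the latency between measurement and
# effect: aim at the predicted position, not at the observed one.
aim_point = predict(5.0)                  # position expected 5 s ahead
```

If ΔT grows, the cone of probable positions widens as ΔT³ through the last term, which is exactly why the latency of the loop is fundamental data.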
N. Wiener very quickly understood that he had at hand all the ingredients for a new method of analyzing problems, allowing new solutions to situations that had remained inaccessible to analysis because they were too complex – far beyond the "simple" machines that engineers had dealt with until then. And with a correct interaction of what is "simple", something "complex" could be created that is controlled, useful and organized, and which respects the laws of engineering: his own way of creating order in a world subject to undifferentiated disorder. He christened this "new science" cybernetics, the art of steering in the manner of a helmsman, by adapting the Greek word (κυβερνητική) used to designate the pilots of the ships with which the ancient Greeks had conquered the Mediterranean – an unstable sea if ever there was one – a term whose root is found in many words such as government, to govern, etc. The very essence of this new science pertains to time-related actions carried out in an unstable environment – going from a point A to a point B in a nonlinear fashion, exactly like a helmsman who knows that a straight line is never the right way to set a course when the currents, the tides, the winds, the movements of the boat (roll, pitch, yaw), the waves, etc. need to be taken into account to maneuver the sails and the rudder.

The actions carried out are finalized11 – meaning organized and controlled as a function of the objective to be reached, the goal – and, this being the new fundamental point, this goal is a moving target resulting from an environment that is itself structured but whose laws are largely unknown, although they do exist (much later, deterministic chaos would be used to characterize them). It must therefore be possible to continuously adapt the action plan, which is difficult with analog calculators, but which is intrinsic to the von Neumann architecture, where the information recorded in the computer memory (computing instrument, in his terminology) is sometimes program and sometimes data (but another 10 years were required before this logical dream became reality).

11 Before the very official Macy Conferences, from 1942 to 1953, there had been an ephemeral Teleological Society, which met informally; refer to S.J. Heims, The Cybernetics Group, MIT Press. In French, refer to the book by J.-P. Dupuy, Aux origines des sciences cognitives, which gives a good overview of this, but from a point of view that is open to discussion – interesting, in the same way as everything else he has done and written.
How can the environment be put to good use to reach the objectives? This is the ambition of cybernetics. To do this, it is necessary to be "intelligent", in N. Wiener's meaning of the term – that is, as a mathematician12.

Taking time to think about it, we see that we are dealing with a double trajectory13, a double dynamic. On the one hand, there is the aspect imposed by the environment (meaning what is considered as not part of the system), what biologists like C. Waddington named the "epigenetic landscape" (which René Thom rechristened substrate space, or even phenomenological deployment); on the other hand, there is the mobile aspect that the pilot can direct, taking into account the intrinsic properties of this mobility and the experience they have of the behaviors observed in the environment and of the evolving dynamic that characterizes it. The steering therefore consists of servo-controlling the trajectory to go from A to B, using the environmental dynamics to advantage in order to optimize the journey. When the pilot has navigated sufficiently and acquired experience – either empirically or theoretically, through knowledge of the phenomenological laws that structure the transformations of the environment that they are also part of – they know how to put together the sequences of actions that will allow them to reach their goal, a scheduling that they alone can judge because they know the local situation. The pilot must therefore extract information from their environment; to do this, they must have suitable sensors and translate this information into the behavioral "language" of the effectors of the mobile that the designer is master of – an operation that physicists and electronics engineers are very familiar with, for which the dedicated term is "transduction": for example, transforming an electric current into a hydraulic pressure servo-controlled on the signal conveyed by the current (see Chapter 6).

In N. Wiener's approach, whoever says interactions implicitly says elements that interact in a certain order via energy exchanges. These elements are the constituents of the system (the building blocks, in today's computer science jargon – a much better term than the "black box" often used at the time), to the exclusion of others that are therefore simply ignored, but which exist all the same. For N elements, there are N×(N−1) monodirectional exchange links; but if the system organizes itself, if certain elements have a "memory" (possibly including human operators), it will be necessary to consider all the subsets of these elements: theoretically 2^N combinations, and sequences of temporal actions that are possible in an even greater number – sequences that will be investigated thanks to the theory of automata, for which J. von Neumann provided the basis at a later date14.

12 Refer to his autobiography, I Am a Mathematician, MIT Press, 1956, which leaves no doubt about his way of seeing the world.

13 In geometry, these are curves known as tractrices; refer to https://en.wikipedia.org/wiki/Tractrix.
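The combinatorial claim above is easy to check numerically (an illustrative sketch; the counts, not the code, are the book's):

```python
import math

N = 10
links = N * (N - 1)     # monodirectional exchange links: 90 for N = 10
subsets = 2 ** N        # subsets ("parts") of the elements: 1,024 for N = 10

# The same explosion reappears in Box 1.1 below for assembly plans:
print(f"{links} links, {subsets} subsets, 100! ~ {math.factorial(100):.3e}")
```

The directed-link count grows quadratically, the subset count exponentially: this gap is what separates "simple" interconnection from organized complexity.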
The beginnings of the theory – or theories – of complexity therefore took root right at the start of the science of systems, but it was J. von Neumann who was the true initiator15. Curiously, N. Wiener never talks about this, nor does he talk about the theory of automata, which does not mean that he did not know it16! Organizing this complexity – meaning specifying the architecture of the system using rigorous and precise logico-mathematical models – quickly became the organizing center of projects such as the SAGE system, with J. Forrester, who carried out the first living demonstration of this in the 1950s. He was the first to become fully aware of the practical problems of interdisciplinarity, and he attempted to resolve them thanks to models that allow exchanges between the concerned parties – including military personnel in the case of SAGE17.

Concerning the very object of cybernetics, N. Wiener's point of view could not be clearer. He tells us18: "From the point of view of cybernetics, the world is an organism, neither so tightly joined that it cannot be changed in some aspects without losing all of its identity in all aspects nor so loosely jointed that any one thing can happen as readily as any other thing. It is a world which lacks both the rigidity of the Newtonian model of physics and the detail-less flexibility of a state of maximum entropy or heat death, in which nothing really new can happen. It is a world of Process, neither one of a final equilibrium to which Process leads nor one determined in advance of all happenings, by a pre-established harmony such as that of Leibniz… Life is the continual interplay between the individual and his environment, rather than a way of existing under the form of eternity."

14 Concerning this very fundamental subject, we can re-read the memo written by C. Shannon, Von Neumann's Contributions to Automata Theory (readily available on the Internet); also refer to the recent book by G. Dyson, Turing's Cathedral – The Origins of the Digital Universe, already cited.

15 Refer to the transcriptions of his conferences at the Hixon symposium, September 1948, in Theory of Self-Reproducing Automata, University of Illinois Press, 1966, published posthumously by A.W. Burks; and volume V of the complete works: Design of Computers, Theory of Automata and Numerical Analysis, Pergamon Press, 1961. This is irrefutable for anyone willing to take the trouble to read them. S. Ulam developed certain ideas provided by his colleague and friend about self-reproducing automata.

16 He certainly uses the term "automaton", but not in the rigorous mathematical sense that von Neumann gave it in his various writings.

17 On all these aspects, refer to K. Redmond, T. Smith, From Whirlwind to MITRE, The R&D Story of the SAGE Air Defense Computer, MIT Press, 2000, already cited.

18 Autobiography, p. 327.
This is indeed about the science of systems, without any doubt, even though his words could be discussed further.

1.1.2. Implementation and the first applications

Cybernetics has had a strange fate on each side of the Atlantic. In the United States, it has remained, grosso modo, an engineering science, where it has prospered under the auspices of MIT – with, in particular, J. Forrester, SAGE project director and future professor at the Sloan School. He published, among others, two books that marked important milestones in the systems sciences: Industrial Dynamics, in 1961, and Principles of Systems, in 1971, recently republished by Pegasus Communications. N. Wiener's cybernetics, in its Forrester version, has been successfully included in the engineering cycles of complex systems, sometimes without even being mentioned by name – a true implementation of the "wou wei" of the emperors of China19, the art of governing without action, without effort and spontaneously. This holds particularly in the engineering of quality20, which is fundamentally an integrated control exercised throughout the system lifecycle, in compliance with the service contract requested by the system users. Quality is a "trajectory", a pathway, which is constructed by taking into account the hazards of the project environment, including those of organizations. For this, it must be kept in mind by all the actors and concerned parties, and not be a quality for show, exhibited like a dancer to demonstrate that something is being done. The vocation of quality is to be, not merely to seem – pervasive rather than salient, acting like a force field. The entropic nature of human activity means that if quality is not maintained over time, it degenerates and undifferentiated disorder takes over again.

Recently, systemics has even enjoyed a true media success in the United States with a cult book, The Fifth Discipline, which presents System Thinking for a wide-ranging public, in the simple language that English speakers enjoy so much21.
19 In the Forbidden City in Beijing, in the meeting room of the Palace of Tranquil Longevity, all visitors can see the two characters “wou wei” written in traditional Chinese above the throne of the Emperor: 無 爲. 20 Refer to the works of G. Weinberg, in particular Quality Software Management, System Thinking, Dorset House Publishing, 1991. 21 The Fifth Discipline: The Art and Practice of the Learning Organization (first edition 1990, latest edition 2006); book by Peter Senge, professor at MIT. P. Senge explains in this book the integrator role of System Thinking in what is known as Core Disciplines: Personal mastery, Mental Models, Building Shared Vision, Team Learning which are also great classics in the field of quality management and project management.
In France, things have been different, because cybernetics very quickly sought fields of application that went far beyond what it was capable of explaining – for want of predictive power – in the sociology of organizations or in biology22. Norbert Wiener himself probably contributed indirectly to this state of affairs through his book aimed at the wider public, The Human Use of Human Beings, which was very quickly released in French under the simple title Cybernétique et société23 (meaning "Cybernetics and Society"). This is a more accessible book than his Cybernetics: or Control and Communication in the Animal and the Machine, which appeared simultaneously in 1948 from MIT Press and from Hermann in France. In the rest of this volume, we will denote that book by the shorthand C/CCAM.

Unavailable for a long time in French, this seminal book was translated by Éditions du Seuil, in 2014, in the Sources du savoir collection, with a presentation of the context by the science historian Ronan Le Roux, based on his PhD thesis, La cybernétique en France (1948–1970) (cybernetics in France), defended in 201024. The French title is La cybernétique – information et régulation dans le vivant et la machine (meaning: Cybernetics – Information and regulation in living beings and machines). We note that control has been rendered by régulation, which is perfectly correct, but that communication and animal have respectively been rendered by information and vivant (living) – two translations that can be contested. As Wiener himself says, "Traduttore, traditore" (C/CCAM, Chapter VIII, "Information, language, and society").

In the introduction to their book The Mathematical Theory of Communication, Shannon and Weaver are, however, particularly clear. They tell us: "The fundamental problem of communication is the reproduction, either exactly or approximately at one point, of a message that was selected at another point. Frequently the messages have meaning; […] These semantic aspects of communication are irrelevant to the engineering problem." In communication, we are only interested in the structure, in the syntax and the coding – and of course in the errors. The distinction between communication and information is therefore very radical. When we talk about ICT, we note the same distinction; among academic disciplinary fields, we distinguish the sciences and technologies of information and communication (STIC) from other fields, while maintaining the distinction between information and communication. Mathematics, in the sense that it constitutes the "language of nature", is part of the STIC, but mathematicians would certainly be offended if we told them that their science came from cybernetics – irrelevant, they would say! See the short text by Grothendieck, found below. This choice of translation is therefore not the most judicious, to say the least, because it maintains a confusion, which puts the proposed translation at the limit of a mistranslation. "Information" is much wider than "communication", and includes the sciences of language, natural and/or artificial, including mathematics.

22 We can re-read the remarkable review of this work produced by L. de Broglie in no. 7 of the NRF review, July 1953, Sens philosophique et portée pratique de la cybernétique, unfortunately too quickly forgotten.

23 Several editions, including the most comprehensive by UGE/10-18 in 1971.

24 Available from Garnier under the title Une histoire de la cybernétique en France, 2018.
mathematics. Information is intentional by nature, performative as linguists say, because it conveys meaning, which is something that communication in Shannon's sense refuses to do and which, just like the laws of physics, needs to remain objective. The same can be said for the term "living". Animals are unequivocally part of the living world in general; with viruses, the frontier with chemistry is perhaps more blurred. Animals are living organisms that are born and die; their main objective is to survive because, as P. Kourilsky states in his book Le jeu du hasard et de la complexité, "to live, you first need to survive". To survive, eat and reproduce, animals have projects; they develop strategies, with intermediate objectives and a final goal. Where there is no purpose, there is no control, because in order to control it is necessary to know with respect to what we are controlling; this is perfectly clear in C/CCAM, as well as in the article "Behavior, purpose and teleology", previously cited. Why was this choice of translation made? Is it the very notion of teleology that bothers the translators of the book? We know that the "politically incorrect" term purpose/teleology has been banned from biological discourse, let us say since Jacques Monod's book Le hasard et la nécessité (Chance and Necessity), replaced by "chance", which in this case is supposed to do things very well. This is a little strange in the engineering sciences, because what is more teleological than engineering? Even if we use the term "project", it is indeed teleology that we are talking about. The entire quality approach, up to the most recent developments such as lean management, is teleological in its very essence. Project management is a perfect example of the application of cybernetics to an engineering organization, as practiced in systems engineering. Norbert Wiener was a professor in one of the most prestigious engineering institutions and he trained engineers (see section 1.3). Manufacturing machines or systems presumes that there is an objective to be reached, an objective that will materialize in a construction plan, with the detail of the parts and of the assembly method, without which integration is impossible once the number of parts exceeds a few dozen. With 100 parts, there are factorial 100 (written 100!) possible orderings, in other words approximately 9.3 × 10^157; given that there are only about 10^80 hydrogen atoms in the observable universe, we can see that, without a plan, chance cannot do the job! In an engineering project, the importance attributed to chance must be zero; this is what we call risk management, a risk that must be compensated for by suitable counter-measures for the engineering to be sound. I have not gone through the entire translation with a fine-tooth comb, but we can all the same remark that in the introduction to the 1948 version, the article by Rosenblueth, Wiener and Bigelow, cited above, appears as a footnote, whereas the note disappears in the translation (made from the 2nd edition of 1961 with MIT Press, which I have not consulted). To be perfectly honest, R. Le Roux does mention this article in his presentation and gives a brief summary of it. These few reservations aside, we welcome this translation; but, as always, it is necessary to consult the sources themselves, which nothing can replace.

Box 1.1. The translation of Cybernetics by N. Wiener
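To make the order of magnitude in Box 1.1 concrete, here is a minimal sketch – plain Python standard library, our own illustration rather than anything from Wiener or his translators – that computes 100! exactly and compares it with the usual 10^80 estimate for the number of atoms in the observable universe:

```python
import math

# Number of distinct assembly orderings of 100 parts: 100!
orderings = math.factorial(100)

# Commonly cited order of magnitude for atoms in the observable universe
atoms = 10**80

print(f"100! has {len(str(orderings))} digits")     # 158 digits
print(f"100! ~ {float(orderings):.3e}")             # ~9.333e+157
print(f"100! / 10^80 ~ {orderings / atoms:.3e}")    # ~9.333e+77
```

Even at one assembly attempt per atom in the universe, fewer than one in 10^77 of the possible orderings could ever be explored – hence the need for a construction plan.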
Thanks to cybernetics, N. Wiener intended to explain the complexity of human societies and of history, which in positivist minds like A. Comte's awakened old memories of the social "engineering" and "physics" of which É. Durkheim positioned himself as a master in his founding works. In the atmosphere of the 1950s, in the middle of a Cold War that could have turned hot at any moment, some dreamed of "machines to govern the world" managed by "scientists", considered more reliable than the politicians and ideologists who had plunged the world into chaos. A utopian dream, of course, for as von Neumann had asked a short time before his death: "Who is guarding the guardians?" This is the entire problem: centralized and/or distributed, you must choose, and interact, avoiding both the gulags and re-education camps of a "new humanity" and the individualistic disorder of "every man for himself". It is still the case that cybernetics never really entered the territory of the engineering sciences, except at the edges, thanks to individual initiatives25, with a small exception in the CNAM syllabus, where the action of professors J. Lesourne26 and J.-P. Meinadier27 must be acknowledged; in any case not in the large engineering school programs, where it had a reputation as rather scandalous and not serious. It is true that when we examine the jargon introduced by Edgar Morin in a cold light and without searching for controversy, we have a sickening feeling due to the excessive verbal inflation and the combinations of hollow words that flow freely28. "Nebulous", our teachers would have said! Creating neologisms each time we believe we have discovered something original has never explained anything, nor resolved the smallest problem. We have known this at least since William of Ockham and his principle of parsimony! Without doing injustice to the past, and reconstructing a plausible situation, perhaps, in the mind of N. Wiener, this was a way of responding to J. von Neumann, who had just released his very famous Theory of Games and Economic Behavior, which was going to revolutionize the economic sciences – except that Wiener did not have the mathematical language that would have allowed him to express his ideas, which were shooting off in all directions. For anyone who has read in detail his latest works, in particular God & Golem, Inc., published in 1964, the year of his death, it is very
25 Such as the one by G. Donnadieu at the IAE; refer to his book, written with M. Karski, La systémique, Penser et agir dans la complexité, 2002.
26 His book Les systèmes du destin, Dalloz, 1976, and a few others, formed a basis for his lessons in forecasting.
27 His two books, Ingénierie et intégration des systèmes, 1998, and Le métier d'intégration de systèmes, 2002, both with Hermès, remain essential reference works.
28 This remark only relates to the part of his work that concerns La méthode and similar texts; his sociological analyses are generally interesting. Refer to the point of view, which I share, expressed by J.-P. Dupuy in Ordres et désordres, Chapter 7, "La simplicité de la complexité".
clear. He says to us (p. 88): "cybernetics is nothing if not mathematical", and a little further: "I have found mathematical sociology and mathematical economics or econometrics suffering under a misapprehension of what the proper use of mathematics is in social sciences and of what is to be expected from mathematical techniques, and I have deliberately refrained from giving advice that, as I was convinced, would of course lead to a flood of superficial and ill-considered work." In addition, he was quite critical of J. von Neumann's game theory, because he considered it impossible to assimilate the economy to a static game in a world where everything changes, including the rules of the game – which is obvious; unfortunately, J. von Neumann, who died in 1957, was no longer there to provide an answer. No doubt N. Wiener would have reacted strongly to the jargon and conceptual clutter that inundated his theory and made cybernetics inoperable, at least for engineers – he who had been a professor at one of the most prestigious universities in the world, his beloved MIT. Concerning "financial mathematics", an oxymoron, and the "systemic crisis" that we have been going through since 2008, he would very probably have choked with anger: "Ill-considered work!" Now, the language required for a good description of the processes at work in systems came about in the 1950s–1960s. Today we are well aware of this: it is the language of computer science, and more particularly languages such as IDEF, UML/SysML and BPML/BPMN29. R. Thom, who in his typically undiplomatic fashion was very strongly opposed to the "dictatorship of fuzzy concepts"30, had attempted an answer with his catastrophe theory – a name introduced by his English colleague C. Zeeman, and one that Thom himself did not claim – but this was not a great success either, for the totally opposite reason: a high level of mathematics was required to understand his geometric language, involving differential manifolds and singularities. One of his best commentators, the great Russian mathematician V. Arnold31, said that between two lines of Thom, 99 more lines were needed to produce something at least comprehensible! His work Stabilité structurelle et morphogenèse (1972) could therefore have rivalled the thousands of pages of the seminar on algebraic geometry given by his colleague at the IHES, A. Grothendieck. Communication was decidedly difficult!
29 Refer to https://en.wikipedia.org/wiki/IDEF; https://en.wikipedia.org/wiki/Systems_Modeling_Language; in addition to https://en.wikipedia.org/wiki/Business_Process_Modeling_Language.
30 Refer to his article published in Le Débat, "Halte au hasard, silence au bruit", 1980.
31 Refer to his book Catastrophe Theory, 3rd edition, Springer, 1992.
This was therefore another lost opportunity, and yet the need had been created by the significant increase in the complexity of the systems designed by engineers, with computer science replicated almost endlessly, at all levels, as had become obvious in the Apollo space program and especially the Space Shuttle. The complexity of systems whose risks are beginning to be recognized, if not truly understood, such as "high-frequency" trading – but can we still talk about engineering in the context of financial computer science32? Risks that also, understandably, give rise to irrational fears. The obligation to provide explanations is therefore more pressing than it has ever been in the history of sciences and techniques. For someone who does not understand, knowledge very easily turns into magic, which opens the door to irrational ideas, all the more so when they are dressed up in mathematics that lack rigor, as in finance. Having a general method for considering the complexity of the system of the world in the 21st Century, and extracting from it all that can reasonably be automated (without losing grip on the pedals, and while remaining in control of events), is the essence of the problem that we need to resolve. To do this, we must consider three areas of complexity, in a symbiotic relationship, at the very center of systems in interaction, distinct but indissociable. The complexity of the technical objects/systems33 that we are capable of constructing and maintaining is the traditional complexity that engineers have always had to overcome, ever since the era of cathedrals. To this basic complexity have been added:

– the complexity of the corresponding engineering projects, which force all the parties interested in the system to work in a coordinated manner; that is, in the case of a large project, several thousand actors are required to work in phase with each other over the long term (in quantum mechanics, we would say they are "correlated");

– the complexity of uses – meaning the semantics, the "what is this used for" – induced by the technical objects that we have available and the new requirements that these objects provoke, objects that are now accessible to all; with the Internet and the World Wide Web, we are talking about millions of users, even billions!

Organizing and integrating these three complexities (see Figure 1.2) into a coherent whole, finding the right compromise, or at the very least one that is

32 Refer to the book by J.-F. Gayraud, Le Nouveau Capitalisme Criminel, Odile Jacob, 2014.
33 Concerning this double notion, refer to G. Simondon, Du mode d'existence des objets techniques, Aubier, and B. Gille, Histoire des techniques, Encyclopédie Pléiade.
acceptable, without taking inconsiderate risks: this is the challenge that will be examined in Chapters 8 and 9. One of the most significant characteristics of complexity34 is the emergence of properties and/or new behaviors that are impossible to anticipate, in particular everything that relates to the uncertainties of all kinds that are present everywhere, known in information theory as "noise". Learning to compensate for these hazards, to live intelligently with "noise", or even to put it to use, as error-correcting codes do (see the sketch below), is the key to complex systems engineering.

We can summarize the sequence that began in the 1940s by the representation in Figure 1.1. This is the result of the work of two generations of engineers and researchers. The third, which is beginning to position itself at the controls, has immense prospects ahead of it, if it is able to rise to the challenge and demonstrate that it is up to the task. In this sequence, France has proved itself worthy, with systems such as STRIDA or SENIT created by the Ministry of Defense. It has a data transmission network that is among the best in the world, thanks to its historical operator, France Télécom, and its historical research center, the CNET (now FT R&D, or what remains of it). Thanks to EDF's energy transport network (now EDF/RTE), all French households and companies have a high-quality, low-cost supply, a system that has shown many times how robust it is, and that we will come back to in section 8.2. We could mention aeronautics, the space industry, the automotive industry, and information systems with the MERISE method, which introduced on a large scale the logical concept of computer science metamodels, etc. All this demonstrates, over and above the more or less pessimistic speeches, a great capacity to manage highly complex environments, one of the legacies of the generation of the 30-year post-war boom in France, which we need to make productive and improve. The diagram in Figure 1.1 is a simplification, but it demonstrates a dynamism and a stage of advancement that cannot be ignored. In the 1970s–1980s there was a profusion of projects that did not all produce the expected results but which obviously demonstrate a capacity that will be useful. For this, students at reputable scientific universities and business/management schools must reinvest in the field of systems and industry, abandoned in the 1990s–2000s, as the various reports on competitiveness have widely demonstrated.
34 Refer to the book by the physicist P. Bak, How Nature Works – The Science of Self-Organized Criticality, Springer, 1996.
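The allusion to error-correcting codes deserves a concrete illustration: redundancy deliberately added at the sender's end lets the receiver cancel most of the channel's random noise. Here is a minimal sketch – a triple-repetition code with majority vote, written in Python as our own toy example, far simpler than the codes used in practice:

```python
import random

def encode(bits, r=3):
    """Repetition code: send each bit r times (redundancy fights noise)."""
    return [b for bit in bits for b in [bit] * r]

def noisy_channel(bits, p=0.05):
    """Flip each bit with probability p, modeling random channel noise."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits, r=3):
    """Majority vote over each group of r received bits."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

message = [random.randint(0, 1) for _ in range(1000)]
received = decode(noisy_channel(encode(message)))
errors = sum(m != d for m, d in zip(message, received))
print(f"residual errors: {errors}/1000")  # typically ~7 instead of ~50
```

With a 5% bit-flip probability, unprotected transmission corrupts about 50 bits per 1,000, while the repetition code leaves only about 7 residual errors: most of the noise has been absorbed, at the cost of a threefold bandwidth overhead.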
Figure 1.1. From the generation of pioneers to the digital economy generation
It is certain that the system culture that today culminates in C4ISTAR (Command, Control, Communications, Computers, Intelligence, Surveillance, Target Acquisition and Reconnaissance) systems – this fifth discipline, to reuse the title of the American best-seller by P. Senge, a professor at MIT who is very familiar with the works of J. Forrester – played a critical role.

REMARK.– It is not unreasonable to believe that the general culture (both scientific and managerial) of French engineers, familiar with abstraction including in its humanist dimension, has a lot to contribute on the subject, given our accomplishments and our expertise, such as the electronuclear program implemented by EDF, which allows each one of us to have cheap and high-quality energy.

1.2. Modeling for understanding: the computer science singularity

In the 1990s–2000s, it became obvious that what the pioneers N. Wiener, J. von Neumann and A. Turing had imagined was beginning to turn into reality. Of the three, only N. Wiener perhaps knew about transistors (he never talked about them). None of them could have foreseen that one day, thanks to the magic of microelectronics and the integration of components, machines would be available (von Neumann referred to a computing instrument) with performance a million
times better than those that they had contributed to creating, and means of communication allowing: (a) images to be transmitted in real time from one end of the planet to the other, and (b) machines to work in networks on problems of a size that goes far beyond what any human would be capable of mastering. This capacity was going to allow programming "en masse" – in the large, as our American colleagues say – that was almost nonexistent in the 1950s. Some 40 years later, however, there were billions of lines of code, programmed by the millions of programmers who make up the profession today. Each second, hundreds of billions of programmed instructions faithfully execute, without error or fatigue, what the programmers recorded in the memory of the machines, measured in billions of characters. But as we will see in this book, in an echo of a strange symmetry, this immense success simultaneously generated its own set of new problems, in particular concerning safety. An instrument like the LHC35 at CERN produces billions of pieces of information for each experiment, which only correctly programmed computers are capable of analyzing and "understanding". What the wider public knows as Big Data is only accessible thanks to programs aiding the mathematical processing of information, which alone are able to identify, in the apparently shapeless "mass", what carries interesting information. Today, at the start of the 21st Century, several billion people, via their smartphones and/or tablets, have a capacity for interaction and access to information, via the Internet and the World Wide Web, on a scale that has never been seen in the known history of humanity. They can form communities of interest via social networks, capable of mobilizing almost instantaneously for a cause and of giving feedback in real time or nearly so. No need to be a prophet to understand that our modes of governance, authority and education, the constitution of our common good, our consumer habits, etc., are going to be profoundly changed, for better or for worse. Our terrestrial or aerial vehicles integrate dozens of microprocessors and carry onboard millions of lines of code, directly and indirectly, to ensure our safety and optimize resources. Originally descriptive and qualitative, modeling has become quantitative and massively computational, a precious aid to the non-trivial exploration of complex situations in which the space of states generated by the combinatorics is immense (for a memory of 1 GB, this gives the colossal number 256^(10^9), a number with about 2.4 billion decimal digits). A universal usage that reinforces the veracity of the Church–Turing thesis.

REMARK.– We are faced with what mathematicians call a singularity, a bifurcation, and what physicists call a phase change – a restructuring of matter, or even a metamorphosis, but one from which we do not know whether a bee or a hornet will emerge. We are on the edge of a watershed where, once we cross over, nothing will be as it

35 Refer to the book The Large Hadron Collider: a Marvel of Technology, EPFL Press, 2009.
was, where a new passage will appear and take shape, where the transformation is very quickly going to become irreversible, taking into account the colossal energies that are at stake, carried by the 9–10 billion inhabitants that the Earth will soon have. Today, we have all the technology needed for good systemics. We have, above all, the right concepts and the right languages, such as those of abstract machines, which we know how to define with all the rigor and precision required by the problem to be resolved, and which can lead to the creation of DSLs (Domain-Specific Languages). We can interact and give feedback, in a certain manner, at a scale that Norbert Wiener would never have imagined, with an MIT that has significantly developed its leadership36. Individuals and community interests have become "active units", for the first time in social history37, to borrow a good expression used by the economist F. Perroux. Intelligence – meaning our capacity for adaptation to the environment – can be distributed and controlled by exchanges of information, and in doing so defuse conflicts thanks to models such as game theory, to which J. von Neumann opened the door, in particular those that model cooperation. Without cooperation, without feedback, there is no solution to the conflicts that are unavoidably going to appear. Models of game theory perhaps prevented the Cold War from turning into a global disaster, but we will never be certain of this38. What we can be sure of, however, is that they added a little rationality where ideologies/ideologists had cast doubts on the capacity of mankind to surpass fratricidal rivalries. Models such as the "prisoner's dilemma" would perhaps have removed N. Wiener's doubts about game theory39. Programming – in fact, modeling and calculating – has become a new way of reasoning, one that it has taken us several decades to understand correctly after Alan Turing and a few others laid down the foundations40. We can imagine machines keeping pace with humans, but what gives the machine its logical capacity is its programming, which itself remains dependent on human intelligence and under the control of programmers. It was not Deep Blue that defeated Kasparov, but rather the programmers who programmed the IBM machine. This is something that must never be forgotten. Learning programming, in a broad sense and for this

36 Refer to the website www.cesames.net of the authors for more information.
37 F. Perroux, Unités actives et mathématiques nouvelles, Dunod, 1975.
38 Refer to the book The Strategy of Conflict, reprinted in 1981, by the 2005 Nobel prizewinner, T. Schelling.
39 Refer to R. Axelrod, The Evolution of Cooperation, 1984, available in French from Odile Jacob, Donnant Donnant, 1992, and M. Nowak, Super Cooperators – Why Cooperation, Not Competition, is the Key to Life, Canongate Books Ltd., 2011.
40 Refer to G. Dowek, Métamorphoses du calcul, Le Pommier, 2007.
reason, must be at the heart of the training of the future citizens of the so-called "digital" society, in the same way as reading and multiplication tables were after the invention of the printing press. Our future and the condition of our freedom are contained herein. The act of programming, its "performative" aspect as linguists say, is at the heart of the systems approach. We therefore have the technology to create good systemics, but we are also, by necessity, going to have to use it to organize the complexity of the world of the 21st Century – to put order, a minimum of rationality and good sense, into what could otherwise become a gigantic chaos of information, this time at the scale of the planet. One would have to be blind and deaf to imagine that the current socioeconomic disorders can continue "business as usual" for long, with the deluge to follow. Systems science teaches us that, sooner or later, any imbalance that violates the structural invariants of the system leads to a correction proportional to the amplitude of the imbalance, taking into account the energies at stake; the return shock, like a "tsunami", can be violent if it is neither anticipated nor controlled. The warning signs are always second-order weak signals41, undetectable by statistical surveys. Without the theoretical model of Brout, Englert, Higgs et al., the experimental physicists at CERN would never have been able to discover the Higgs boson. This is what we need to convince ourselves of: it is not sufficient to have a lamp or a map; it is still necessary to know where to look, which will determine how. This is why we have the moral obligation to obtain the consent of the wider public. What we need is a grammar for understanding the world in which we are now one of the interested parties, both subject and object, spectator and actor – a grammar whose language, in other words our programs, is the language of computer science, which it is now essential to master. In addition, systemics can greatly help us to see a little more clearly. With the current state of technology and engineering methods, there is no alternative.

1.3. Engineering in the 21st Century

In the engineering problems created by modern, globalized and now massively computerized societies, a distinction needs to be made between the necessary, which remains stable, and the contingent, which can fluctuate as a function of the environment and of external constraints (see PESTEL, section 8.3.3). Delivering technology that will in the end be rejected by users for whatever reason, or that will turn out to be impossible to maintain in the long term, or dangerous, must from now on be considered an inexcusable mistake because it is potentially fatal. Phenomena caused by the ageing of elements, by definition unknown, must at least be observable, even if they cannot be understood. It is therefore necessary to think

41 Refer to the case of the electric system.
about them right from the start of the system lifecycle, when the requirements are expressed, and in the architecture phases, by associating all the interested parties. For this, the system must be instrumented in such a way that it can be observed from inside, taking its invariants into account; a minimal sketch of such an invariant monitor is given after Figure 1.2. The complexity that this new engineering must organize emerged progressively from work produced in the 1980s–1990s. It is three-fold, as indicated in Figure 1.2.
Figure 1.2. The three complexities of new engineering
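To illustrate what "instrumented in such a way that it can be observed from inside" can mean in software terms, here is a minimal sketch in Python; the reservoir and its bounds are a hypothetical example of a structural invariant, not a system taken from the book:

```python
from dataclasses import dataclass

@dataclass
class Reservoir:
    """A system instrumented from inside: every state change re-checks
    the structural invariant instead of trusting external observation."""
    level: float
    low: float = 0.0
    high: float = 100.0

    def check_invariant(self) -> None:
        # The invariant is part of the system itself, observable from inside.
        if not (self.low <= self.level <= self.high):
            raise RuntimeError(f"invariant violated: level={self.level}")

    def transfer(self, delta: float) -> None:
        self.level += delta
        self.check_invariant()  # observe after every transformation

r = Reservoir(level=50.0)
r.transfer(+30.0)  # OK: 80.0 is within bounds
r.transfer(+30.0)  # raises: the violation is detected at once, from inside
```

The point of the sketch is architectural rather than algorithmic: the invariant check is built into the system from the start of its lifecycle, not bolted on afterwards.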
Besides the complexity of technologies, which has continued to grow incessantly – to see this, it is sufficient to look at a latest-generation microprocessor, with its billions of components, or at a robot assembly line in the automotive industry, with its hundreds of machines – two other complexities, described as "human" in the diagram, have been added: one concerning engineering and its development processes, and one concerning the uses – meaning the semantics – made by users who are no longer engineers, as they were in the first systems; hence uncertainties and human errors. For the complexity of technologies, we have available a wide range of modeling and simulation tools that have undergone almost exponential development, thanks to our scientific expertise, some of it old, but above all thanks to the formidable development of information and communication technologies (ICT). This has no precedent in the history of techniques. The duration
of the design period of a modern airplane, like the A380 or the Boeing Dreamliner, has thus been divided by two – from 10 years to less than 5 – thanks to the "virtual" design offices set up by aircraft manufacturers! Human complexity is of an entirely different order. Human rationality is "bounded", to paraphrase the title of a book by H. Simon42, and deviant behaviors must not be excluded, as underlined in the studies carried out after the Challenger43 space shuttle disaster, not to mention the misguided ways of finance that have made the financial system work for the richest 1%44. To err is human, as we often say, but error is also all-pervasive in nature. What we need to find are rules of engineering and architecture that compensate for it, as J. von Neumann had anticipated in his studies on reliable automata. The systems under development to which we have dedicated several books are fundamentally human systems, even though they are now relatively computerized. The engineering of the interface between users and systems, now based on a graphical screen, has become a major theme of systemics. These three sources of complexity interact with each other, creating new combinations that need to be taken into account, taking them two by two, then three by three, for a total of seven possibilities. The corresponding interactions can be organized with the help of the PESTEL45 factors (political, economic, social, technological, ecological, legal; see Chapter 8), which we will come back to. Ignoring these factors, allowing complexity to metastasize, is equivalent to a death sentence for the system, whose development cycle will not run to its usual term. Hence, the analysis method proposed in this book, which is intended as a general method and framework for the engineering of systems of systems, must allow us to: – discover what must necessarily be included, and imperatively identify the hard core of the system, its irreducible complexity, which will be its organizing center and its spinal column; this vital information is the model of the system, its "map", to reuse the famous and often misunderstood expression of A. Korzybski: "The map is
42 Refer to Models of Bounded Rationality, vol. 3, MIT Press; also refer to D. Kahneman, A. Tversky, Choices, Values and Frames, Cambridge UP; or even G. Allison, the famous Essence of Decision, 1999. 43 Refer to D. Vaughan, The Challenger Launch Decision, Chicago UP; or even the series of books by C. Morel, Les décisions absurdes. 44 Thesis defended by the Nobel prize winner J. Stiglitz in his book, Le prix de l’inégalité, 2012; also refer to the works of G. Giraud, for example Illusion financière, 2012. 45 Refer to https://en.wikipedia.org/wiki/PEST_analysis.
not the territory"46. This is a metaphor that tells us that reality must never be reduced to a model, however sophisticated it may be; – express and communicate what has been understood to all the parties concerned, in order to obtain the agreement of each of the actors in a language that they understand, according to their point of view – therefore a plurality of models whose coherence must be managed, which is the subject of socio-dynamics. No question must remain unanswered, while avoiding arguments from authority, thereby creating the confidence required to "live together", based on reciprocal goodwill and contradictory debate. This is the ABC of the correct organization of interactions, in some way a "geometry of cognitive entities"47 that takes the world into account as it is. This double requirement, construction and explanation, must allow us to face the complexity of the situations that already confront us, such as global warming or the correct management of nonrenewable resources, with a good chance of success. The somber predictions of the Meadows report, "The Limits to Growth", violently contested by many economists (including, in France, by the former Prime Minister R. Barre) when it was published by the Club of Rome in 1972, are now almost a reality48, above all concerning the pollution of ecosystems, because for the moment we do not repurpose the waste we produce. Its main results could be reworked using simulation capacities beyond comparison with those that J. Forrester implemented at MIT in the 1970s. Uncontrolled development leads to harm such as pollution; the inconsiderate waste of fossil energies creates imbalances that can have a global effect on the Earth's ecosystem. There is nothing truly original to invent here, but the problem and the questions still need to be taken in the right order. Anyone who has been confronted with the engineering of complex systems knows that the order in which the questions are tackled is imperative, and dictates the dimensioning. At this scale, any problem that is badly posed, with uncertain data, rapidly becomes an inextricable combinatorial nightmare. Maturity cannot be decreed; it is constructed step by step, through a long learning process. Cognitive psychology – the stages by which our intelligence develops, the need for cooperation between humans49 – has explained this very well. It is necessary to convince ourselves of this, without, however, surrendering our critical thinking and our vigilance, which need to be exercised at the right moment to validate or disprove the hypotheses of the model.

46 In Chapter IV, "On structure", of his book Science and Sanity, 5th edition, 1994.
47 I have borrowed this term from the logician J.-Y. Girard; refer to his website.
48 Refer to D. Meadows et al., Beyond the Limits, 1992; and also R. Kurzweil, The Singularity is Near, Penguin, 2005, to return to a degree of optimism, but to be taken with a pinch of salt because he is the Chief Scientist at Google and, besides this, a hardened militant for transhumanism and H+ humanity.
49 Refer to the book by M. Nowak, Harvard professor, Super Cooperators – Beyond the Survival of the Fittest; Why Cooperation, Not Competition, is the Key to Life, Canongate Books, 2011.
Nothing is ever definitively acquired: structures are subject to "wear"; entropy and laissez-faire mechanically fabricate undifferentiated disorder, no matter what happens. No natural or artificial process escapes this, and everything must be reformulated because the situation itself evolves. Fate and chance are not involved – nor is this a game of poker – but the contingency that reshuffles the cards does indeed have a role to play. It must be possible to teach the method of resolving problems in complex situations, this fifth discipline, both in the major engineering schools and in business schools, and in a general manner wherever decision-makers are educated, reflecting on scenarios like those evoked in this introduction. All will sooner or later be faced with situations where only collective intelligence will be able to give answers acceptable to the greatest number. For this, the method must be "decontextualized" (in computer science, we would say context-free), meaning generalized in order to simplify it, so as to bring out the concepts that underlie it, but without ever losing the connection with reality, which would immediately neutralize it. This collective intelligence is more than a simple addition; it is an integration of skills that emerges thanks to the organized interactions of the various disciplinary fields, human sciences included. The sociodynamic aspects are fundamental to the human "energy" of large projects, clearly brought to light by J.W. Forrester. "More is different", said the Nobel prizewinner P. Anderson in an article that has remained famous, and this is what we need to prepare ourselves for in the engineering sciences of the open systems of the 21st Century, in which human beings, more than ever, have a large role to play. In the autobiography of A. Grothendieck – a true genius of mathematics, who died in 2014, and a re-founder of modern algebraic geometry – on page 48 of Récoltes et semailles50, there is a small text entirely illustrative of the method to put into practice for whoever wants to engage seriously with the systems that surround us on all sides: those that we create and those that are given to us by nature. We can take it as it comes, without changing a single word, to make systemics a tool for analysis and communication that is useful to everyone, for the common good: "The structure of a thing is not at all something that we can 'invent'. We can only patiently update it, humbly find out about it, 'discover' it. If there is inventiveness in this work, and if it so happens that we do the work of a blacksmith or a tireless constructor, this is not at all to 'shape' or to 'build structures'. These have not waited for us before coming into being, and to be exactly what they are! But it is in order to express, as loyally as we can, these things that we are in the process of discovering and finding out about, and this structure that is reticent to reveal itself, that we are trying to sound out and to grasp with a

50 Only available on the Internet.
language that is perhaps still stuttering. Thus we are led to constantly 'invent' the suitable language to express, more and more finely, the detailed structure of mathematical things, and to 'construct', with the help of this language, gradually and entirely, the 'theories' that are supposed to recount what has been understood and seen. Here there is a continuous, uninterrupted movement of coming and going, between the apprehension of things and the expression of what is apprehended, by a language that is refined and recreated as the work goes on, under the constant pressure of immediate need." What more can we say? We must create the grammar of this new multi-faceted language. Systemics – the science of the control and equilibrium of systems, the science of the finality and integration of the processes that nature provides and in which we play the role of interested parties, as a function of a clearly specified and assumed objective (the heart of the model, its fundamental invariant, which makes it exist as an individualized, distinct system) in a contingent environment that is locally unpredictable but globally deterministic, where there are rules and order like those physics teaches us – must at last find the place that Norbert Wiener and his colleagues had assigned to it: a place in the front row, to resolve the global problems with which our globalized and open society is now confronted. The solutions can only be local and cooperative. There can be no single center, because a single center is a point of fragility, incompatible with a human society that must be robust in the face of internal and/or external hazards, resilient, capable of evolving without destroying itself – meaning truly durable. Cooperate or perish, continuously adapt: this is the question, and also the message of hope of the solution brought by systemics and its new language, which provides us with the key to how to proceed. In many ways, this beginning of the 21st Century resembles the 1950s, full of risks and promises. If we do not organize complexity, complexity will destroy us, slowly but surely.

1.4. Education: systemics at MIT

As an example, we give a brief view of the chronology and the main changes in the teaching of systems science at MIT, not out of open-mouthed admiration for this famous institution, but to encourage those who have the heavy responsibility of organizing engineering teaching in France to reflect. The continuity of effort is impressive, with no equivalent in France unless we go back to the beginning of the Industrial Revolution and the foundation of the large engineering schools – outside the university, which did not want these disciplines, considered ancillary (to remain polite!).
Officially founded in 1998, the ESD (Engineering Systems Division) is rooted in decades of development of an interdisciplinary approach to systems engineering problems:

1948 – MIT professor N. Wiener publishes Cybernetics
1954 – H.M. Paynter implements the first classes in systemics
1961 – At the MIT Sloan School, J. Forrester publishes Industrial Dynamics
1971 – A.H. Keil implements the Center for Policy Alternatives
1973 – Foundation of the Center for Transportation Studies
1975 – Foundation of the Technology and Policy Program
1985 – Creation of the Center for Technology, Policy, and Industrial Development
1988 – Launch of the leadership program for Manufacturing
1989 – The MIT Commission on Industrial Productivity publishes its report "Made in America"
1991 – Implementation of the doctoral school Technology, Management, and Policy (TMP)
1993 – The MIT engineering school publishes Engineering with a Big E
1996 – The Eagar committee recommends creation of the Engineering Systems Division (ESD)
1996 – The MIT program System Design and Management is founded
1998 – Implementation of a Master of Engineering in Logistics
1998 – Foundation of the Engineering Systems Division (ESD)
2000 – First double-capacity chair recruited by the ESD
2004 – First international symposium on systems engineering (Engineering Systems) at MIT
2004 – Introduction of the ESD Doctoral Program, incorporating the TMP
2004 – Implementation of an interuniversity council for engineering systems
2008 – Recruitment of two new double-expertise chairs at the ESD
2009 – Merger at the ESD of the architecture and planning schools; the ESD division is now present in all five MIT schools
2009 – The Center for Biomedical Innovation at MIT joins the ESD
2009 – Second international symposium on systems engineering (Engineering Systems) at MIT
Box 1.2. The long continuous history of teaching systemics at MIT (the full original version of this box is available on the authors’ website51)
In France, to have an equivalent critical mass, it would be necessary to imagine a coordinated merging of the teaching at the Ecole Polytechnique and its application schools – in broad terms ParisTech, the Ecole Centrale, including the intergroup, and SupÉlec, which have also just merged together – and business schools such as INSEAD, HEC or ESSEC; not forgetting the CNAM and "life-long" learning. In the past, the situation was different, due to the dispersion of strengths and to "Gallic" rivalries, but the accomplishments were impressive, at least during the 30-year post-war boom in France, which is what matters most, though at the cost of a considerable waste of energy. The digital revolution, the revolution of en masse computerization and the omnipresence of MOOCs are going to require revisions of the foundations of engineering teaching, in which the science of systems will occupy a principal position. It is therefore time to "get down to it" before it is too late, to avoid relegation. What has made MIT what it is, among other factors, is a desire to stay awake and alert.
51 Available at http://www.cesames.net/.
2 At the Origins of System Sciences: Communication and Control
2.1. A little systemic epistemology

In his works dedicated to the philosophy of science, in particular Science et méthode, H. Poincaré dedicated an entire chapter to explaining what a definition should mean, asking the question: "What is a good definition?". He caused a scandal by saying: "A good definition is one that is understood (by pupils, by those who must use it, by engineers…)", because, in effect, if this is not the case, we will not be able to communicate, nor a fortiori to act. For H. Poincaré, intuition was something entirely fundamental, hence his constant reminder: "What is the point of this?", a phrase echoed by L. Wittgenstein in his lectures at Cambridge: "Don't ask for the meaning, ask for the use", in reference to aphorism 3.328 of the Tractatus: "If a sign is not necessary then it is meaningless." A better expression could not be found to avoid any sterile scholasticism, in particular concerning systemics, with its shifting boundaries, where circular reasoning permanently lies in wait. By intuition, said H. Poincaré, "the mathematical world enters into contact with reality…". And later: "An engineer must receive a complete mathematical education, but what use should it be to them? Different aspects need to be examined, and quickly; engineers have no time to be nitpicking. It is necessary that, in the complex physical objects available to them, they promptly recognize the areas where the mathematical tools that we have placed in their hands can be applied." Prudent and pragmatic, he recommended: "We cannot demonstrate everything and we cannot define everything; and it will always be necessary to apply a little intuition; it does not matter whether it is done a little earlier or a little later, or even whether a little more or a little less of it is applied, provided that, with correct use of the assumptions it provides us with, we learn to reason correctly".
Concerning the subject at hand, we must distinguish between the definition of systems sciences – in this case, systemics – on the one hand, and the definition of systems, meaning their construction processes, in what they share, on the other hand, in the same way that we distinguish between geometry, the science of figures, with its constitutive parts {types → point, straight line, plane, etc.} and their relationships, and the construction of the figures that are familiar to us – a similar approach to that taken by D. Hilbert in his work The Foundations of Geometry. We have known, ever since Euclid and his Elements (around 300 BCE), that in all well-constructed theories there are some terms that are primary elements of the theory and that we consider to be obvious truths. On the basis of these, we are able to construct theorems and the other, derived elements of the theory. In geometry, the point ("which has no part", as Euclid stated), the straight line, the plane, etc. are in this category. From here, we are able to construct all the figures that are familiar to us – triangles, quadrilaterals, circles, parallelepipeds, spheres, etc. – and study their properties. By adding a metric, we will be able to define what an inequality is, and establish correspondences and geometrical transformations, culminating in the group theory placed in the foreground by F. Klein in the 19th Century (refer to his Programme d'Erlangen) and in the organization of the axioms of geometry proposed by D. Hilbert in his famous The Foundations of Geometry (1899). This is also what D. Hilbert recommended in order to "instill order" into the physics of his era (refer to the sixth problem in his list of 23, formulated at the international congress of mathematicians in Paris in 1900, whose grand finale was the mathematization of quantum mechanics in the 1920s, with, amongst others, von Neumann at the helm). If we adhere to the good practices of the scientific method1 in the way that we present physical theories – that is to say, concerning systems sciences (a "physics of information" according to certain physicists) – we need three things: (1) one or several descriptive formalisms; (2) a phenomenology of systems and of their constitutive elements, including measurement instruments, in such a way as to connect the formalisms to the reality of the systems that are thus observed and represented; and (3) a validation procedure to ensure the coherence of the whole that is constituted in this way, that is to say, the equivalent of experiments in physics. In the case of the systems that we are studying in this book, this will be either real experimentation or, since we have significant calculation power, simulations of physical processes or, in very specific cases, demonstrations in the mathematical sense of the word – which will never be sufficient, an obligation imposed by the principle of reality.
1 Refer to von Neumann's lecture, "Method in the physical sciences", Collected Works, vol. VI.
We also know that, in the physical world in which we live, none of these constructions exist. They are abstractions at the "limit", like the dodecahedron, one of the Platonic solids, which idealizes a football but which, for Plato, was the symbol of ether, the strange medium in which the world was bathed. In our world, only solids exist: a line is a tube, which can be formed from points that our sensory organs cannot separate; a point is a small ball, a quantum of energy with which we have a capacity for interaction. But even then we should not look at surfaces at high resolution because, at small scales, everything becomes discrete and fleeting; at the quantum level, there is only vacuum and, if we look very closely, the fractals whose first images we all saw towards the end of the 1980s mechanically appear. H. Poincaré was a mathematician and philosopher of science of an exceptional level. His work sits at the transition between two worlds: classical physics, whose mathematical tools he perfectly mastered, and the new physics, which began with the kinetic theory of gases. He severely condemned the methods used by Boltzmann, which he believed to be seriously lacking in rigor, and dealt with the theory of special relativity and the structure of space, to which he was a major contributor and which some consider he even invented, before Einstein2. He anticipated the problem of chaos in his studies of the instability of the solar system, with the problem known as the "three-body" problem. When he died in 1912, at the age of 58, mathematics was in "crisis", following the discovery of the paradoxes of set theory, which arose from definitions that were too vague and which allowed contradictory or senseless mathematical configurations to be fabricated, such as "the set of all sets that do not contain themselves as elements". The rest is well known: the push for axiomatization by D. Hilbert, then the demonstration by K. Gödel of his incompleteness theorem, in 1931, which ruined the somewhat "totalitarian" objective of D. Hilbert and of his proof theory. As soon as theories become a little complex, there are statements that can be neither proved nor disproved: neither true nor false! But the final blow to the customary application of so-called "classical" logic was dealt by the observation of the totally strange behavior of atoms – on the one hand particles, on the other hand waves – in this new world that was progressively being unveiled. One of the pillars of classical logic, the law of the excluded middle, no longer works, in particular when infinity is manipulated. The world is not logical in the Aristotelian sense, but it is perhaps mathematical, which gives mathematics the role of a language3. Hence the attempts to re-found mathematics without calling on the excluded middle, using only so-called constructive demonstrations. Only mathematical configurations that can

2 Refer to Poincaré's opuscule La mécanique nouvelle (The New Mechanics), reprinted by Gabay.
3 This was the thesis of Galileo; E. Wigner spoke of the "unreasonable effectiveness of mathematics". The linguistic aspect of this construction would be underlined by Grothendieck, as we saw in the historical introduction, and by many other mathematicians, such as L. Lafforgue, another recipient of a Fields Medal.
be explicitly constructed, step by step, without ever calling on infinity exist. The pioneer of this approach was the mathematician and logician L.E.J. Brouwer, which earned him the eternal enmity of D. Hilbert. What we believed to be simple objects or concepts, independent of each other – Newton's universal time; Euclidean geometrical space, where physical phenomena are deployed; matter that we can touch and feel; the waves that appear to diffuse everywhere, even to the ends of the observable universe – appear to be interdependent. Mathematicians such as H. Weyl, in the wake of Einstein and his general theory of relativity, talk of an indissociable complex {Space, Time, Matter}4; more recently, with information, we talk about a complex {Matter (meaning bodies + waves), Energy, Information}. After 250 years of total reign, the handsome edifice of Newtonian physics – where time, geometrical space and the matter that space contains exist separately, a deterministic and reversible edifice regulated like a clock – collapses on the spot. What we believed to be universal is only a specific case at our scale, and even then not always, if we include thermodynamics and its second law. What passed for simple common sense must be reviewed. For those who wish to find out more, it is useful to look at the evolution of the ideas and representations of nature since the 1920s in a few reference works, starting with the standard particle model set up in the 1960s and leading up to the discovery of the Higgs boson in 2012. In chronological order, we can make a short list: H. Weyl, Philosophy of Mathematics and Natural Science; A. Einstein and L. Infeld, The Evolution of Physics: The Growth of Ideas from Early Concepts to Relativity and Quanta; W. Heisenberg, Physics and Beyond; N. Bohr, Atomic Theory and the Description of Nature; R. Feynman, The Character of Physical Law; G. Cohen-Tannoudji and M. Spiro, La Matière-Espace-Temps – La logique des particules élémentaires; É. Klein, Sous l'atome, les particules; and, the latest, G. Cohen-Tannoudji and M. Spiro, Le boson et le chapeau mexicain, a small masterpiece of pedagogical clarity, along with the work by É. Klein, where information enters into play. In the meantime, throughout the 1930s, in response to the "crisis", logic was entirely reworked around the fertile notion of calculability and the thesis known as the "Church–Turing" thesis: everything that can be thought and understood by humans, and that can be communicated to third parties (this is a significant constraint), can take the form of a "calculable" function – for Turing, with the paper machine that bears his name (which L. Wittgenstein described as a "human who calculates"), and for Church
4 The title of one of his works, written in the 1920s in German, conceived as he was leaving Göttingen, where he was one of the most brilliant pupils of Hilbert, to go to Princeton where he joined Einstein and von Neumann; a work translated into French in 1922, re-published and completed in the 1950s.
thanks to a formalism for the representation of functions (the lambda-calculus5, ancestor of programming languages), equivalent to the Turing machine but without the "workings" of tape heads and endless tapes. The notion of an algorithm, which is at the heart of this approach, is in essence a constructive notion. If we accept the Church–Turing hypothesis, everything that we can think and exchange between ourselves can therefore be represented by calculable functions – at least in theory but, with the invention of computers, effectively calculable6, which does not mean decidable, given Gödel's theorem. And this is what happens! If we attempt to understand quantum mechanics in Aristotelian terms, we immediately encounter insurmountable contradictions, against which even von Neumann "hit a brick wall"7 and gave up – hence R. Feynman's jest that whoever proclaims to have understood quantum mechanics has in fact not understood it. But everything can be calculated with extraordinary precision; hence E. Wigner's observation, in the form of a paradox, concerning "the unreasonable effectiveness of mathematics". Creating a constructive definition therefore comes down to exhibiting a construction program, a rule or a calculation model – in other words, quite simply, an algorithm; "constructive algorithm" is almost a pleonasm. All this history is told remarkably well in the small publication by G. Dowek8, Les métamorphoses du calcul, recently awarded a prize by the Académie Française. At the time when systems "science" was taking shape, in the 1940s–1950s, we knew how to manipulate complex objects like matter/space/time, particle/wave duality, energy and its various forms, and even take into account the observer, who in the quantum world is part of the experiment because they interact, via the measurement instruments, with the observed phenomenon, etc. We have at hand an arsenal of logic that perhaps does not allow us to "understand", but that in many practical cases does allow us to predict9, because we can calculate, possibly approximately, as is demonstrated every day through the resolution of problems by simulation. The only limit is the representation of the calculation elements, for example the very large numbers that combinatorial analysis easily fabricates; a number such as 1,000! (factorial 1,000, which is the number of permutations of 1,000 elements) is so

5 Abundant information on the Internet; the introduction on the website of the Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/entries/lambda-calculus/) is extremely well constructed.
6 On this particular point, the comments made by J.-Y. Girard that accompany the translations of the two works Le théorème de Gödel, E. Nagel, J. Newman, and La machine de Turing, A. Turing, published by Le Seuil, are interesting.
7 G. Birkhoff and J. von Neumann, The Logic of Quantum Mechanics, 1936.
8 Le Pommier, 2011.
9 To paraphrase the title of a collection of interviews by R. Thom, Prédire n'est pas expliquer (Predicting is not Explaining), Flammarion.
large, around 4 × 10^2567 in engineering notation, that there will not be enough atoms in the known universe to be able to write it10!

We are able to make use of all this fundamental work produced in the 1950s–1990s, boosted in particular by the almost exponential development of information and communication technologies and sciences (ICTS) during this period, to revisit systems sciences and provide them with the solid foundations that they lacked. What more could we ask for, so that engineers can work calmly and the public can profit from these extraordinary machines, whose behavior we know how to analyze, including in the event of breakdowns? And what can be better understood than a program, written in a suitable system language that is "comfortable" for engineers, as H. Poincaré would certainly have advised us, a program all of whose aspects can be dissected and instrumented, at the different levels of abstraction of its hierarchical organization, including by simulation? Each written instruction is the result of a programming act, set down by a programmer who is conscious of what they are doing. Programming falls within the constructive logic of the intuitionists, the ultimate restriction on understanding, in a strictly predicative framework such as that required by H. Poincaré to avoid paradoxes.

2.2. Systems sciences: elements of systemic phenomenology

Returning to systemics/cybernetics and to Norbert Wiener, what are the primary elements, those that can be understood intuitively and for which we can content ourselves with descriptions that create a consensus among those who manipulate them; elements from which a comprehensible and usable object, described as a "system", can be constructed by humans? In N. Wiener's era, the term system already had a rich polysemy, which made precise use of it problematic11; it was applied all over the place. For this reason, with his friends, he preferred to opt for a new term: cybernetics, as we mentioned in the introduction describing this history.

10 For example, refer to G. Chaitin, The Limits of Mathematics, Springer, 1997. Concerning the Church–Turing thesis, refer to the work by D. Deutsch, professor at Oxford, L'étoffe de la réalité (The Fabric of Reality), Penguin, 1997.
11 For a better understanding of this, refer to the Vocabulaire technique et critique de la philosophie, by A. Lalande, PUF, 1926 or even the Vocabulaire technique et analytique de l'épistémologie by R. Nadeau, PUF, 1999. The Dictionnaire des sciences, by M. Serres and N. Farouki, Flammarion, 1997, is not much more convincing but it provides input that is specific to the operating system, information system, expert system, dynamic system (this is a mathematical object).
The confusion is palpable. In the United States, the term cybernetics has almost disappeared, except in composites such as cyber-security and cyber-war, but we readily encounter names such as systems thinking or systems approach. On the other hand, the term systems engineering is totally unambiguous, as we can verify by looking at reference documents such as the NASA Systems Engineering Handbook, the Systems Engineering Handbook produced by INCOSE (International Council on Systems Engineering, an institution close to MIT and the IEEE) and the summary of all this in the ISO-15288 Systems Engineering standard (available in French from AFNOR, 2008). Rather than engaging in sterile quibbles, practitioners concentrated on the engineering methods, in the broad sense, to be put in place to manufacture the "objects/systems" to which the methods apply. The approach is therefore totally constructivist. In the end, N. Wiener had been on the right track in attempting to impose a new term.

However, the most pertinent aspect was what he and his friends considered to be the fundamental characteristics of these new technical objects, to use G. Simondon's term12. Two characteristics play a central role in this new systems science:

– control, an essential notion: the feedback loop that allows decisions to be adjusted to take into account the objective that has been set, by making + or – corrections;

– communication between the various organs of the cybernetic system, which exchange information about their respective states via messages that are sent and/or received; in other words, a suitable language that must integrate the "noise" of the environment and the random errors.

To these two characteristics it was necessary to add the capacity of the first servomechanisms, in other words "organs", this "mechanico-electrical system which was designed to usurp a specifically human function" (N. Wiener in Cybernetics or Control and Communication in the Animal and the Machine (C/CCAM), p. 13). Let us note that in this phrase the term function means the capacity to do or to transform, in other words a discretized time process, in the sense of an action, or a series of regulated actions such as those that began to appear, for example, in firing systems. We recall that, in mathematics, a function is simply a general rule of correspondence between elements of sets (a rule unrelated to time); but, in the real world, calculation requires energy and time to carry out transformations, via organs known as transducers (see Chapter 6), whose linguistic nature must be pointed out. The term process was already used, for example, in chemistry and in biology, but its use in this new field of knowledge would only take hold progressively, to become essential in the 1970s, with the

12 Refer to his thesis, Du mode d'existence des objets techniques, 1958, republished in 2012 by Aubier.
arrival of computer science13 in all systems14. N. Wiener, a cultivated man who enjoyed philosophical references, fell in line with G.W. Leibniz and his Calculus ratiocinator, or with R. Descartes and his "machine animals", improved by the analog calculating machines that were then in use. But the main machines for systemics were computing instruments, in J. von Neumann's sense of computation; in other words, our computers, which were much easier to program and thanks to which we are able to define two other aspects of information: (1) the amount of calculation required to carry out a transformation, taking this term to mean the process, which gave rise to the theory of algorithmic complexity (C. Bennett's logical depth), and (2) the size of the program text describing the transformation that needs to be carried out, which is the textual complexity described by Kolmogorov–Chaitin, and which depends on the underlying abstract machine.

In this sharing of roles, the role of J. von Neumann was absolutely critical. At the end of 1945, he had been named project director of the ECP/Electronic Computer Project that would later produce the IAS15 machine, which became fully operational in 1952 and is considered to be the "mother" of all modern computers. It was an electromechanical assembly weighing around 30 metric tons and consuming around 200 kW (or the equivalent of 200 modern irons!), probably the most complex machine that had ever been created by mankind up to that date. In his lectures, he talked about "ultra-high complexity". Our quibbles about the distinctions between complexity and complication would probably have surprised him or made him smile (no one knows this for sure, but according to his friends, he said: "if people do not believe in the simplicity of mathematics, this is certainly because they don't realize how complicated life is"), because the terrifying problem that he had to resolve was the uncertain reliability of such a vast and varied set of elements, not the problem of making puns. How could something reliable be constructed with unreliable components? How could a failing element be compensated for by replacing it with another one stored in reserve16, without interrupting the service, which is the fundamental problem of control? These were

13 The foundation text in computer science is Cooperating Sequential Processes, by E. Dijkstra, dated 1968.
14 A recent work, often cited in the "agile" methods community, for example, B. Ogunnaike, W.H. Ray, Process Dynamics, Modeling and Control, Oxford University Press, 1994; very comprehensive, on the basis of processes from chemical engineering.
15 For details, refer to the sites: http://en.wikipedia.org/wiki/IAS_machine or http://www.ias.edu/people/vonneumann/ecp, and above all to the book by H. Goldstine, the main collaborator of von Neumann, The Computer from Pascal to von Neumann, Princeton University Press.
16 In his posthumous works, these are Self-Reproducing Automata, published by his collaborator A. Burks. Also refer to The Complete Works, vol. 6 and The Computer and the Brain, Yale University, 1957.
his concerns and those of his team. Hence the genesis of his founding works on the theory of automata17, intended to create some order out of the apparent chaos, because the automata of this already asynchronous machine were the logical support of the semantic invariant of the system, in his terminology its "logical model/design". This was already a way of focusing, in a language that was still being formed, on the linguistic and/or grammatical aspects, in the formal sense of the term, in order to organize the overall problem and allow the engineering actors to communicate and resolve the immense problems that they had to face, each at their own level, and without interference.

Before returning in more detail, in Chapter 3, to the operational/logical aspect of communication, we immediately note that all acts of communication presuppose that the communicating elements have a name that identifies them in a sorting space where they can be found. In addition, the transport of information requires a conveying environment, a "substrate space", continuous or discontinuous, whose role is to supply a physical support whose energy modifications can be interpreted as a signal that can be distinguished from the ambient noise that is particular to the environment. In the artificial systems created by humans, a great variety of conveying environments have been or are still used, for example devices that are:

– mechanical, with cables, pulleys, levers, suspension bars, gears, projectiles;

– hydraulic, with various incompressible liquids, which allow pressures to be transmitted, thanks to piping, ducts and reservoirs;

– visual, with optical signals such as flags, semaphores, signs in the highway code;

– sound, with conventional acoustic signals such as those that have been used by armies since ancient times, with drums and bugles;

– electrical and electronic, with the first wire telegraphs in the 19th Century, then with electromagnetic waves, radars, lasers, etc.

In brief, any production/creation of an observable energy configuration, of whatever kind, can be exploited as an environment for conveying information, as long as it is detectable and measurable. Nature, for its part, has invented a great number of them, such as spider webs, the chemical pheromones used by numerous animal species such as insects, bodily fluids like blood and lymph that carry hormones, the nervous system, which integrates chemical and electrical signals, etc. In Chapter 6, we give a few details of the energy converters named "transducers", which play a fundamental role in all systems.

17 See the memo by C. Shannon, Von Neumann's Contributions to Automata Theory, already cited.
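Von Neumann's question of how to construct something reliable from unreliable components can be given a minimal illustration. The following sketch, which is ours and not the book's, with illustrative reliability figures, replicates a component three times and masks a single failure by majority vote, the principle at work in his writings on automata:

import random

def component(reliability: float) -> bool:
    # One unreliable component: True if it works on this trial.
    return random.random() < reliability

def tmr_system(reliability: float) -> bool:
    # Three replicated components; the majority voter masks one failure.
    votes = sum(component(reliability) for _ in range(3))
    return votes >= 2

def estimate(reliability: float, trials: int = 100_000) -> float:
    return sum(tmr_system(reliability) for _ in range(trials)) / trials

r = 0.95
# Closed form: probability that at least 2 of 3 work is 3r^2 - 2r^3.
print(f"single component: {r:.4f}")
print(f"majority of three (closed form): {3*r**2 - 2*r**3:.4f}")
print(f"majority of three (simulated):   {estimate(r):.4f}")

A component of reliability 0.95 yields an assembly of reliability of around 0.993, on the condition, and this was precisely von Neumann's concern, that the voting organ itself is reliable.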
2.2.1. Control/regulation

The term "control" is to be taken in its engineering sense: servo-control, servomechanism, as in the theory of optimal control. In N. Wiener's opinion, it is a distinctive characteristic. When we talk about control, we talk ipso facto about control with respect to an objective or a goal to be reached, an "end purpose"18 in his words, as in the system of anti-aircraft batteries, which presupposes a theory of measurement. Hence the importance of the notion of feedback; in Cybernetics/CCAM, Chapter IV is entirely dedicated to the problems posed by the implementation of this feedback. In the introduction, he says (p. 13): "[…] when we desire a motion to follow a given pattern, the difference between the pattern and the actually performed motion is used as a new input to cause the controlled part to move in such a way as to bring its motion closer to that given by the pattern". In this phrase, each word is important.

For this to have a chance of working, two conditions must both be met: (1) perfect knowledge of the movement of the target object, if there is one; for this, observations and measurements are necessary; (2) correct dosage of the positive or negative corrections of the movement made by the pursuit organ, in order to avoid oscillations and/or "coming off the track". This requires working on the speeds and accelerations of the magnitude to be regulated, but, above all, knowing how to measure all of this. Without reliable measurements, it is impossible to correctly regulate anything. The feedback mechanism must therefore precisely measure the speeds (first-order magnitudes) and the accelerations (second-order magnitudes) that it is going to apply to the movement of the mobile object, in order to apply the correct retroactive action. Being the good mathematician that he was, N. Wiener was obviously perfectly aware of all this. The practical problem was having measurements and actuators/amplifiers capable of faithfully executing the prescribed orders and of measuring their effects, and then the possibility of carrying out statistical processing to improve the precision of the orders. There was a fundamental difficulty in this, which led him to state (p. 16): "[…] the good prediction of a smooth wave seemed to require a more delicate and sensitive apparatus than the best possible prediction of a rough curve; and the choice of the particular apparatus to be used in a specific case was dependent on the statistical nature of the phenomenon to be predicted".
18 Refer to his “program” text, Behavior, Purpose and Teleology, written with A. Rosenblueth and J. Bigelow, 1943; refer, in insert 1.1, to the remarks made about the translation of the C/CCAM work.
Faced with this double difficulty, intrinsic to the device itself and whose two aspects mutually induce each other, Wiener saw a kind of resurgence of Heisenberg's uncertainty principle, but this time at a macroscopic level, in the engineers' own constructions. To measure is, first of all, to disturb, which means sending an energy stimulus in a suitable form and recording what comes back, which constitutes the system's response to the initial stimulus. Knowledge comes from understanding the response to the disturbance caused by the stimulus19. Here it is a case of observing weak signals, sullied by errors, labeled "noise" by C. Shannon, which can be exploited only with statistical processing and a constant comparison with reality, but according to a time step that is intrinsic to the instruments (due to their inertia) and that can introduce bias. This exploitation will allow a decision to be made, itself on its own specific time step; hence the two clocks in Figure 2.1.
Figure 2.1. The fundamental feedback loop and its clocks
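To fix ideas about the loop in Figure 2.1, here is a minimal sketch, ours and not the book's, of a discrete-time corrector tracking a setpoint from noisy measurements; the gains, the noise level and the time step are illustrative assumptions:

import random

dt = 0.1           # time step of the decision clock
kp, kd = 0.8, 0.3  # dosage of the + or - correction (proportional, derivative)
noise = 0.05       # measurement noise, "noise" in Shannon's sense

position, target = 0.0, 1.0
previous_error = target - position

for step in range(50):
    measured = position + random.gauss(0.0, noise)  # measurement clock output
    error = target - measured
    d_error = (error - previous_error) / dt         # first-order magnitude
    correction = kp * error + kd * d_error          # the retroactive action
    position += correction * dt                     # the actuator executes
    previous_error = error

print(f"final position ~ {position:.3f} (target {target})")

With gains that are too strong, the loop oscillates and "comes off the track"; with measurements that are too noisy, the correction amplifies the noise instead of the signal, which is why Wiener insists on the statistical processing of the measurements.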
Going right to the heart of the subject, we see that the main problem with all these devices is not the elimination of errors but their compensation and control, which is very well summarized in R.W. Hamming's words20: "Most bodies of knowledge give errors a secondary role, and recognize their existence only in the later stages of design. Both coding and information theory, however, give a central

19 This is the method used by physicists to construct the standard particle model, where the results of the impacts caused by collisions such as those in the LHC at CERN are analyzed.
20 Refer to his "legacy" work, Coding and Information Theory, Prentice-Hall, 1980. This is also a central aspect of the theory of automata, as C. Shannon recalls in his memorandum Von Neumann's Contributions to Automata Theory, already cited.
role to errors (noise) and are therefore of special interest, since in real-life noise is everywhere." We could not express this better ourselves! Errors, and the control of their effects, are the central problem of systems sciences. All this is perfectly transposable to human systems, such as an engineering project, whose systemic nature we have had the opportunity of highlighting (refer to the works Écosystème des projets informatiques and Estimation des projets de l'entreprise numérique).

On the feedback loop itself, we can use the representation that J. Forrester gave of it a few years later in Principles of Systems, in notations that were the forerunners of SADT/SART, IDEF, etc. (see Figure 2.1). The decision-maker, whether human or machine, makes its decision based on the information collected and on its objectives; a decision that is materialized by a sequence of orders to be carried out (continuous line), on the basis of information captured and then transmitted, possibly with errors and/or noise (dotted line). The time lag is all the greater when the information required for the decision demands a high energy expenditure. In this, we recognize the ancestor of the 1980s quality system loops {plan, do, check, act}. This loop will be revisited and completed in Chapter 6, concerning real-time/C2 systems, which are ubiquitous in all interfaces.

To have a good understanding of the radical change brought about by these new electronic devices, it is necessary to remind ourselves of the first control equipment, for example the centrifugal governor in the steam engines designed by J. Watt, or the carburetors in automobiles invented by K. Benz in the 1900s. In the first case, it is necessary to prevent excess heat from causing an explosion of the boiler, by dosing the arrival of the fuel and/or by opening the safety valve. The second is a case of feeding the engine with a mix of vaporized fuel and air that allows optimal combustion, at least in theory. In both cases, these are mechanical devices that have their own latency and their own margins of error; these devices apply antagonistic actions to make the corrections that are necessary for control. In the case of the carburetor, we know that the quantity of unburned fuel remained significant and that we were far from an optimal thermodynamic yield. In the case of the centrifugal governor, the energy required to set the governor in motion was not negligible, which degraded the yield while introducing a new source of error (vibrations, resonances, etc.). The implementation of analog electronic devices (in N. Wiener's era), then digital ones (in the era of J. Forrester and of the first Whirlwind computers, in the SAGE project), would make all these mechanisms much more efficient, consuming little energy with a much shorter latency time,
hence much better signal processing and a sufficiently small time step, compatible with the dynamics of the phenomenon to be controlled, in order to avoid destructive vibrations. New controls, which could not be envisaged with purely electromechanical devices, then became possible.

This being the case, it has been very clear from the beginning that a feedback loop carries, in itself, its own limitation, which depends on the "size" of the system (a term that needs to be carefully defined) and on the flow of messages that the controller must process; the energy required for processing this flow is another limiting factor that needs to be taken into account. Right from the start, we have known that it is necessary to create a hierarchy and to place the controllers as close as possible to the elements that need to be controlled. Inverting the proposition, we can therefore state what is almost a "law" of systemics (refer to the compendium, section C.2.1): the processing capacity of the fundamental feedback loop determines the size of the system that this loop can control. If, at a certain instant in the life of the system, for a given internal or external reason, the flow of messages to be processed exceeds the processing capacity of the loop, this does not mean that the system will stop working. It means that the state of the system in the configuration space enters a zone where it can no longer be controlled with certainty. Survival of the system is then at the mercy of the smallest incident, of the smallest perturbation, which cannot be compensated for because there are no longer any resources available to make the correction (see Chapter 4).

2.2.2. Communication/information

In the 1950s, a very clear idea of the coding processes was already in place for communicating information between an emitter and a receiver in a reliable manner, taking into account the hazards of communication: errors, "noise", etc. Coding theory dates back to this era. N. Wiener worked closely with C. Shannon, and he knew all the actors of this new theory that took shape between MIT and Bell Labs, not forgetting A. Turing. For N. Wiener, measurement was a fundamental aspect of the new theory of cybernetics, including its probabilistic processing. His book Cybernetics/CCAM mentions contact with A. Kolmogorov21, a young Russian
21 To understand the extent of the contribution made by Kolmogorov to systems sciences, refer to the two volumes The Kolmogorov Legacy in Physics, 2003, and Kolmogorov’s Heritage in Mathematics, Belin, 2004. Also refer to https://en.wikipedia.org/wiki/Andrey_Kolmogorov.
mathematician famous for his axiomatization of the calculus of probabilities and whose work led, in the 1960s, to a new measure of information called the quantity of textual information, or textual complexity (TC), introduced by Kolmogorov–Chaitin. No one knows the nature of the exchanges that they had, and we know how prudently C. Shannon and W. Weaver expressed themselves concerning semantic and pragmatic information22. Here, we include the fundamental communication diagram created by C. Shannon.
Figure 2.2. Claude Shannon’s communication model
The great merit of C. Shannon's communication model is to demonstrate particularly clearly: (a) the coding/decoding aspects of information, which are logical, abstract operations, founded on a mathematical theory destined for a great future, used to discretize the useful signal so that it can be better distinguished from the "noise"; (b) the transmission/reception aspects, purely physical and conjunctural, which depend on the technologies of the era and on the communication system; and (c) the purely random disturbance aspects, which depend on the "noise" and on the errors that can affect all the physical elements of the system. By definition, this "noise" is not known; it is a lack of knowledge whose effects can be observed only through suitable autonomic management mechanisms. "Noise" processing belongs to a theoryless world, to reuse the term forged by L. Valiant in PAC (see section 4.2.1), which can only be implemented progressively, through trial and error, through learning, intuition and the creativity of the designer. From this same perspective, the coding/decoding is, itself, in a theoryful world.
22 Refer to their book, The Mathematical Theory of Communication, 1949.
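To make the coding/decoding side of the model concrete, here is a minimal sketch, ours and not the book's, of the simplest redundancy code: each bit is repeated three times, the channel flips bits at random, and the decoder takes a majority vote; the error probability is an illustrative assumption:

import random

def encode(bits):                  # coding: a logical, "theoryful" operation
    return [b for b in bits for _ in range(3)]

def channel(bits, p_flip=0.05):    # noise: a random, "theoryless" disturbance
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits):                  # majority vote over each triplet
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

message = [random.randint(0, 1) for _ in range(1000)]
received = decode(channel(encode(message)))
errors = sum(m != r for m, r in zip(message, received))
print(f"residual errors: {errors} / {len(message)}")

The price of this gain in reliability is a message three times longer, exactly the trade-off between redundancy and the flow rate of the communication channel discussed here.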
In passing, we can observe the predominance of intuition and creativity over purely rational knowledge of a more mechanical nature23; the latter only exerts its power if it is integrated within the former, which is fundamental from an epistemological point of view. We can also note the analogy with the work of K. Gödel on the subject of logic.

As we know, the quantity of information in C. Shannon's sense (Shannon himself preferred the term "entropy"24, denoted H in his writings, because it measures a level of organization) is purely syntactic in nature and is only interested in the structure of the message and not in its meaning (in other words, in the conveyor of meaning and not in what is meant). The message must, however, be minimized in size so as to place the least possible demands on the transmission equipment and to maximize the flow rate of the communication channel, taking account of the errors, known as "noise" in the theory of communication; this is a statistical measure. "Economical" coding, in terms of code size, reduces the inertia of the system, which "works" less, and therefore improves the response time and diminishes the latency. The intention of N. Wiener is, however, perfectly clear when he tells us: "The notion of the amount of information attaches itself to a classical notion in statistical mechanics: that of entropy. […] The amount of information in a system is a measure of its degree of organization." Hence the importance of A. Kolmogorov's measure, which is an intrinsic measure, relative only to the organ of calculation, in this case a Turing machine; but he did not know about the dissipative structures that I. Prigogine25 advocated and which, 20 years later, would shed completely new light on the thermodynamics of irreversible processes. Taking up J. Maxwell's metaphor of the demon, we can illustrate this entire problem as we have done in Figure 2.3.
23 This was the thesis of H. Poincaré and the constructivist mathematicians, also that of the mathematician and Swiss epistemologist F. Gonseth, in addition to that of A. Grothendieck, who had a lot to say about the subject in his autobiographical texts, Récoltes et semailles and La clé des songes.
24 Based on a suggestion made by J. von Neumann who said to him, as reported in the Scientific American, vol. 225, no. 3, 1971: "You should call it entropy, for two reasons. In the first place, your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage."
25 His work Introduction to Thermodynamics of Irreversible Processes was released in 1962 in English, in 1968 in French; his works earned him the Nobel Prize in 1977.
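Kolmogorov's measure is not computable in general, but a compressor gives a computable upper bound on it. The following sketch, ours and not the book's, compares a highly organized text with a noise-like one; the sizes chosen are illustrative:

import random
import zlib

ordered = b"01" * 5000  # highly organized: describable by a very short program
random.seed(0)
disordered = bytes(random.getrandbits(8) for _ in range(10000))  # noise-like

for label, text in (("ordered", ordered), ("random", disordered)):
    ratio = len(zlib.compress(text)) / len(text)
    print(f"{label}: {len(text)} bytes, compressed ratio {ratio:.3f}")

The organized text compresses to almost nothing, the random one not at all: a low textual complexity is precisely the existence of a short program that generates the text.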
Figure 2.3. Maxwell’s demon for a system
An architect is a person who considers the state of the world with the means that they have available to them, in other words, the tools and methods in the sense of Aristotle's Organon, which determine the extent of their knowledge, according to a terminology introduced by F. Gonseth and taken up by G. Cohen-Tannoudji. From the requirements formulated by the actors (this is the expression of the problem to be resolved), as a function of their capacities, uses and engineering, and of the equipment and technology that are available, the architect creates the system, which then appears as a new type of organism (this is the solution selected from among many possibilities, taking into account the constraints). We could therefore say that cybernetics is a theory of organisms which places the emphasis on control: (a) of the structural invariant of the organism in question, in other words, its means of existence and its lifecycle (a series of changes of state/transitions) in the environment that accommodates it and of which it is, moreover, a part, bringing errors and the compensation for them to the forefront (refer to the remark by R. Hamming, in the footnotes for section 2.2.1); and (b) of the information required for its survival, information that is as much internal as external and that is necessary as much for current and/or future uses as for engineering project teams.
The methods that are specific to organized interactions (see Figure 2.3, right-hand side) can be based on the definition of the language that is specific to the system, that of the organism that has just been created, distinguishing, as the logicians of the Vienna Circle26 taught us, between the external language that relates to users and the internal language(s) that relate(s) to engineering. The linguistic nature of information is therefore a fundamental aspect of it.

As we see in Figure 2.3, the architect's action consists of sorting between the processes that can be totally or partly automated and the others, which will remain managed by human actors. To do this, they need information. This separation has a cost, which corresponds to the cost of engineering in the broad sense. By reasoning in this way, we place ourselves, from the point of view of thermodynamics, in a situation that is very close to what is sometimes known as the third principle, referred to as Maximum Entropy Production (MEP; an important notion according to the physicists who are interested in the physics of information), because the automated part is a dissipative structure that is highly organized, "hot", therefore of lower entropy, costly to design, develop and maintain, and with a high energy consumption, as we can see with green computing, where the production of heat is in the process of becoming the main problem for computing infrastructure. In the terminology of systems engineering, this distinguishing phase corresponds to the expression of requirements and to the specification of demands that lead to the contract for the development of the system, and to the justification of this development in response to users' requests.

In conclusion, we can say that, from the 1970s onwards, we have had a relatively comprehensive theoretical corpus which allows information to be referred to in a rigorous and precise manner:

– the quantity of information QI, in Wiener–Shannon's27 notation, associated with the messages that the system is going to process. This is a statistical measure of the degree of organization of the source that is at the basis of information coding. Repetitive and frequent information will carry a low quantity of information, in compliance with Shannon's formula QI_s = log2(1/p), where p designates the
26 In particular, refer to R. Carnap, The Logical Syntax of Language, Routledge and Kegan Paul Ltd., 1937; original in German. 27 Discovered independently by N. Wiener; for a succinct, rigorous and comprehensive presentation, refer to A. Renyi, Probability Theory (with an appendix relating to the theory of information), Dover, 2007. The reference remains C. Shannon, W. Weaver, The Mathematical Theory of Communication, UCP, 1949; and also L. Brillouin, Science and Information Theory, Academic Press, 1962. The small work by J.-P. Delahaye Complexité aléatoire et complexité organisée provides a clear and pedagogical summary.
frequency of the symbol to be coded; for a message of N symbols, we will have QI_N = N × QI_s. This quantity of information is closely associated with the learning process that the system operators need to undergo in order to correctly operate the system with an error rate that is as low as human capacities allow. Rare messages, using C. Shannon's measure, have a low probability and therefore a high QI, which reliably expresses the intuition that learning these messages will be costly. We recall that, for C. Shannon, the average statistical entropy H of the symbols used for economical coding (the alphabet of the code) is given by the formula H = Σ_{x=1}^{N} p_x × log2(1/p_x), which is a weighted average given the frequency of use of the symbols. Let us note in passing that the structure of the code determines the size of the text: around 3.3 times longer in binary than in decimal;
– the amount of calculation required to process and to calculate the transformations to be carried out on the input/output data of the system, to adjust the various equipment of its constitutive elements and to analyze their feedback, as in the case of a firing system or the SAGE system. This measure is an intrinsic characteristic; it is known as algorithmic complexity (AC) or logical depth; it measures the performance in a broad sense: the calculation capacity (number of calculation steps) + the capacity for interactions (number of inputs/outputs) with the exterior. This capacity is doubly linked: (a) to the state of ICT technologies, in short, of processors, as well as of external networks and memories; (b) to the skills and experience of "programmers" in the broad sense, to their knowledge of useful algorithms and programming tools. The higher this AC, normalized to the time step of the control loop (it is then a power, a quantity of computational energy per unit of time, which is a mix between the capacity for interaction with the exterior and the transformation capacity of the interior), the more costly the infrastructure will be;

– the length of the texts to be written by the various actors to carry out all types of calculation and/or adjustment of the equipment that constitutes the elements of the system, in other words, "programming" in the widest sense of the term, which, in linguistic terminology, corresponds to the performative aspect of the system, in other words, its capacity to transform, to execute orders, to perform. This text, the size of which measures the textual complexity (TC), is closely correlated to the quantity of human work to be carried out to produce it and to maintain it in operational condition. It is also a measure of the degree of organization, of the order it creates, in compliance with the MEP principle that completes the second principle of thermodynamics, already mentioned. This is also an intrinsic measure whose standard is, by convention, a Turing machine.
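As a minimal numerical sketch, ours and not the book's, here is the first of these measures computed on an illustrative stream of symbols, with QI_s = log2(1/p) per symbol and H as their weighted average:

from collections import Counter
from math import log2

message = "AAAABBBCCD"  # illustrative symbol stream
counts = Counter(message)
n = len(message)

H = 0.0
for symbol, count in sorted(counts.items()):
    p = count / n              # frequency of the symbol
    qi = log2(1 / p)           # a rare symbol carries a high QI
    H += p * qi                # weighted average over the alphabet
    print(f"{symbol}: p = {p:.2f}, QI_s = {qi:.3f} bits")

print(f"H = {H:.3f} bits/symbol; QI_N for N = {n} symbols: {n * H:.1f} bits")

The rare symbol D carries the highest QI_s, which matches the remark above: rare messages are the most costly to learn.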
In summary, three measures are fundamental for the analysis of the complexity of systems: QI for messages, AC for calculations/transformations and TC to produce program texts and create order.

REMARK.– This entire problem configuration is absent from the book by J. Segal, Le zéro et le un – Histoire de la notion d'information au XXe siècle, 2003, a book that is nevertheless very frequently read, with its 900 pages.

2.3. The means of existence of technical objects

We want to conclude this brief epistemological sequence by saying a few words about a remarkable philosophical work completed in the 1950s–1960s by G. Simondon, which has remained totally separate from the discussions of cybernetics and systemics, where his name is never cited. He is, however, the author of original thinking about the nature of technical objects and, more generally, about information, in his thesis Du mode d'existence des objets techniques, 1958, recently republished by Aubier, in 2012. The bibliography of his thesis is interesting because it shows that he had knowledge of the work carried out in N. Wiener's circle, but his approach was different, more sociological; a point of view that had captured the attention of J. Forrester (who is not cited by G. Simondon) when he began his tenure at the Sloan School, that is to say, everything concerning the human aspect and the sociology of projects28, at the margin of the technical part itself.

G. Simondon considers that the technical object taken as a whole cannot be separated from two human components/communities that he carefully distinguishes: the designers, those who create the object (in simple terms, the engineers in charge of how it is done, meaning engineering in the broad sense of the term, including maintenance), and the users who use it, also sometimes known as end-users, meaning those who can answer the questions: what is it for, and what use is it to me (see Figure 2.4)? The terms user and end-user have given rise to a great deal of regrettable confusion, all the more so since the user and/or designer can also, but not always, be end-users (it is possible for the designer of an airplane never to be in the position of a pilot!).
28 In systems engineering, this is the socio-dynamic component, highly important to understand the power relationships between the parties taking part in large projects. Refer, for example, to the GALILEO European project, where the power games and rivalries between the concerned parties have slowed the project down considerably.
Figure 2.4. The technical object according to Gilbert Simondon: the triplet {U, S, E}
To clarify and be precise, a notion of role will be introduced, stating that a physical person can be in the role of a user as much as in that of an end-user or, in the maintenance sense, of the designer, with all the biases that this can raise. We have often reproached French engineers for confusing the two roles and for putting unusable objects on the market for average users who are not engineers29; we still remember the incomprehensible screens of the SNCF SOCRATE reservation system, and users' despair, in the 1990s.

Here, we are dealing with a symbiotic relationship, in the biological sense of the term, where, at a given instant, the available technologies "S" will be organized and integrated by the engineering "E" in such a way as to allow the users "U" to improve their performance and/or their well-being in their own specific situations. The entire problem of strategic alignment is tackled here, 30 years before it made the fortune of the American consultants to whom French groups became very partial in the 1980s and especially in the 1990s, an era when the management of these groups was becoming fundamentally financial, with the success that we know in terms of employment on a national level. "Refocus on your core business", they said; in doing so, these groups lost all capacity for industrial adaptation, liquidated their R&D departments (which ruined many large industrial companies, such as the CGE and its laboratory at Marcoussis, which now no longer exists) and therefore asphyxiated engineering, except for mergers and acquisitions and financial engineering, which allowed the consultants to be paid a second time.

Technical objects evolve under the double constraint of uses and of the technological innovations that result from R&D, which are today called product lines and which are themselves assembled to create technical systems, according to
29 Refer, for example, to the book by C. Morel, L’enfer de l’information ordinaire, Gallimard, 2007.
B. Gille’s30 interpretation, for example, electrical systems. Technical objects are renewable, their lifetime is limited by their technology, whereas a system that is all-encompassing due to its organization can potentially last forever because it can renew its parts without damaging the service contract of the whole. For all these reasons, and in homage to the founding work by G. Simondon, we propose calling the triplet {U, S, E}, the “Simondon triplet”.
30 Refer to the volume of the Encyclopédie Pléiade, Histoire des techniques.
3 The Definitions of Systemics: Integration and Interoperability of Systems
3.1. A few common definitions

The most commonly used definitions, found in all books that deal either closely or loosely with systemics or the theory of systems, are as follows.

DEFINITION 3.1.– J. Forrester, in Principles of Systems and Industrial Dynamics: "A system means a grouping of parts that operate together for a common purpose […] a system may include people as well as physical parts."1

DEFINITION 3.2.– P. Delattre, in Système, structure, fonction, évolution – Essai d'analyse épistémologique: "The notion of a system can be defined, in a completely general way, as corresponding to a set of interacting elements…"2

DEFINITION 3.3.– J.-L. Le Moigne, in La théorie du système général – Théorie de la modélisation3, is more eloquent. Simplifying his words, he tells us that the general system is the description of an artificial object "which, in an environment, given end purposes, exerts an activity and sees its internal structure evolve as time passes, without however losing its unique identity". He insists on the constructive aspect of the exercise, with its modeling dimension, articulated around three poles that he calls a triangulation: operational (what the object does), ontological (what the object is) and genetic (what the object becomes).
1 Pegasus Communications Inc., 1990 and 1999 (originals in 1961–1968). 2 Maloine, 1971. 3 PUF, 1977.
Along the same lines, we can also mention Le macroscope by J. de Rosnay, with its "10 commandments", which are in fact obvious truths, as we will see later on. Saying in commandment no. 1 that it is necessary to "conserve variety" comes down to saying that redundancies are necessary for adaptation: Shannon's second theorem. Saying in commandment no. 2, "do not open the centrifugal governors", is pure tautology, because an open "governor" can obviously no longer control anything; the rest is in the same vein. The book is qualitative, and therefore cannot be used by engineers or by anyone who wants to model in order to understand, measure, etc. Wanting to explain systemics using metaphors from biology (he trained at the Institut Pasteur, then at MIT) is delicate, in the sense that we do not really know very well how the biological machinery works, and even less everything that touches on embryogenesis and the engineering of living things4. It is better to do the opposite, as von Neumann recommended, and before him Descartes: start with perfectly understood simple mechanisms5 and increase in complexity, step by step, by progressive integration.

R. Thom, the famous mathematician, 1958 Fields Medal laureate and well known for his catastrophe theory, provides us, with his caustic turn of mind, with a "spatialized" morphological definition in his pamphlet La boîte de Pandore des concepts flous6:
4 To convince ourselves of this, refer to the book by J.-C. Ameisen, La sculpture du vivant, Le Seuil, or P. Kourilsky, Le jeu du hasard et de la complexité, Odile Jacob.
5 In his biography of Turing, A. Hodges tells us that Turing said: "Keep the hardware as simple as possible"; a rule of wisdom that machine architects have somewhat overlooked with the number of transistors integrated into a chip, several billion in 2010, but the engineering process remains perfectly controlled.
6 Refer to a recension of his articles in Apologie du logos, Hachette, 1990.
Figure 3.1. The metaphor of the black box
Taking the point of view of an engineer, which is by definition constructive, we can understand the reticence to follow modi operandi that do not really allow action to be taken, because in engineering we must never rely on chance. At this level of generality, we are often led to say only banal things. The use of models is proclaimed everywhere, except that, at the time these books were released, computer science, whose importance had been sensed in advance, was not sufficiently developed; it was still centralized at that time, but this changed radically in the 1990s with distributed computing.

The "black box" was simply a metaphor for making it understood that a careful distinction is necessary between what is "outside" the system, that with which it interacts (in other words, the external flows), and the "inside" of the box, therefore the "how" of its construction. As R. Thom makes sure to say, the walls of the box can be purely conventional, but in all cases they must be well defined. From the point of view of an engineer, the transformation F: (Input) → (Output) needs to be specified and constructed, taking care not to use any information from contexts other than that carried by the flows, failing which the transformation function F has no meaning. For N. Wiener, this was absolutely obvious, and he said7: "Suppose that we have a machine in the form of a 'black box', that is a machine performing a definite stable operation (one that does not go into spontaneous oscillation) but with an internal structure that is inaccessible to us and that we do not have knowledge of".

In a future supporting volume, we will return to these definitions on the basis of the work of D. Krob for the Complex Systems Engineering Chair at the École Polytechnique8, where, to return to the words of R. Thom, we will give them real mathematical content that allows the construction of computerized models. Traditionally, in the systems engineering literature9, we distinguish the problem space (including the facts to which it relates) from the solution space, which always contains multiple solutions. In effect, there are always several functions F that can correspond to the constraints that need to be satisfied; constraints whose nature we characterize using the acronym PESTEL (political, economic, social, technical,

7 Refer to God & Golem Inc., p. 42; cited previously.
8 Refer to Éléments de systémique – Architecture des systèmes available at: http://cesam.community/ and on the author's website.
9 Refer to the books by J.-P. Meinadier, cited previously, published by Hermès.
ecological and legal; see Chapter 8). In more mathematical language, it can be said that the transformation made by the black box must be described/specified in a "context-free" language, a notion introduced by N. Chomsky in his founding article "On Certain Formal Properties of Grammars"10. All these notions provided the starting point for theoretical computer science in France, with the professors M.-P. Schützenberger and M. Nivat, in the 1960s. Everything that is contextual is subject to interpretation and must therefore be "considered with mistrust", according to R. Thom, for the simple reason that it damages human communication.

What has cemented the success and effectiveness of European mathematics11 is precisely that it is context-free. Its logical forms, a system of disembodied relationships, are thus transposable into any culture, as was the case when the Jesuit fathers translated Euclid's Elements into Chinese in the 1600s12. In the history of science, it is necessary to remember that, originally, numerology and astrology, for example, were intimately associated with mathematics and with the birth of astronomy. The book Mysterium Cosmographicum, in which Kepler formulated his famous laws, was an astrology book. Newton was an alchemist. But it was the separation that made mathematics so effective, and Newton's Principia Mathematica remains a model of the genre, one actually based on Euclid's Elements, a work that Newton knew perfectly.

Even with such general definitions (definitions 3.1, 3.2 and 3.3, cited previously), we can nevertheless establish a certain number of properties that all systems, whatever their nature, will share. We will now establish this for three of them:

– the constitutive element (building block, or simply module);

– interaction/communication;

– invariant/control regulation.

In some way, we need to look for the equivalent of the "points", "straight lines" and "planes" which constitute the physical and societal spatio-temporal object that we will call a system, and for the instruments that allow them to be organized and manipulated, the equivalent of the ruler and compass used by surveyors.
10 Refer to the journal Information and Control, vol. 2, no. 2, 1959; the editorial board consisted of all the actors in this new approach, including M.-P. Schützenberger.
11 Refer to the famous text by E. Wigner, The Unreasonable Effectiveness of Mathematics in the Natural Sciences.
12 Refer to P.M. Engelfriet, Euclid in China – The Genesis of the First Translation of Euclid's Elements in 1607 & its Reception Up to 1723, Brill, 1998.
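As a hedged illustration, ours and not the book's, of the requirement that the transformation F use no information other than that carried by the flows crossing the walls of the box, compare a context-free transformation with one that silently depends on its environment:

exchange_rate = 1.08  # hidden context, outside the walls of the box

def F_pure(amount: float, rate: float) -> float:
    # Everything F needs crosses the walls of the box as an explicit flow.
    return amount * rate

def F_contextual(amount: float) -> float:
    # Depends on a global variable: the inside/outside border is blurred.
    return amount * exchange_rate

print(F_pure(100.0, 1.08))   # always 108.0 for these input flows
print(F_contextual(100.0))   # 108.0 today; anything tomorrow

Only the first can be specified by a service contract bearing solely on its inputs and outputs, which is what the assessment of the flows passing through the walls, in R. Thom's definition, requires.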
3.2. Elements of the system

For the definition to work, we need to identify "elements" that are not themselves systems, in other words, to name them, failing which the definition becomes circular and is no longer of any use. The definition presumes a finalized integration process that must be controlled. Since the 18th Century, a distinction has been made between the elements of a clockwork mechanism and the final integrated object; moreover, at that time, a clock was not called a system. But an astronomical clock, or a watch that includes complications, is a good metaphor for talking about systems and complexity, as used by H. Simon in his book The Sciences of the Artificial13, whose Chapter 8 is entitled "The Architecture of Complexity".

For simplicity, a clock is made of gears, cams, axes and pivots, an escapement to beat out the seconds (meaning a well-determined and stable time step), and an energy storage mechanism, for example a spiral spring (which stores potential energy), which will be gradually released to maintain the movement, displayed on different dials by hands: the hours of the day, a stopwatch, the phases of the moon, the tides, the planets, etc., depending on the complications of the timepiece, the basic features being hours, minutes and seconds. All of this is mounted on a frame, known in horology as a caliber, which defines the structure of the space (this is a topology) in which the various constituent parts will be integrated. In systems engineering, we talk about a host structure or an architecture framework14.

Each of these elements has a very precise function. The correct organization of their layout (which needs to be controlled/regulated) according to an explicit construction plan, that is, a program of the assembly, results in an extremely precise reproduction of the movement of the stars, depending on the size and number of teeth on the gears. The "result" that emerges from this layout, visible to the owner of the clock, will be, for example, the time and date of the day, which constitute the interface between the system and the owner, meaning the man/machine interface. "More is different", said the physicist P. Anderson, to which we could add "and open", because it generates new freedoms. This is a more subtle statement than the very classic and well-worn "the whole is more than the sum of its parts", because the "more" is not at all guaranteed, whereas the difference is certain. P. Anderson was directly targeting the phenomena related to scale, well known in solid-state physics, which were not yet called "emergence"; this is exactly what we are talking about here.
13 At MIT Press, numerous editions, the first in 1969; translated into French by J.-L. Le Moigne.
14 In the jargon used in the 1990s–2000s: workbench, integration framework and architecture framework; in reference to the frame theory of R. Schank. Refer, for example, to the DoD Architecture Framework, the TOGAF, by the OPEN Group, or even the NAF by NATO.
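To make the assembly plan concrete, here is a minimal sketch, ours and not the book's, in which an explicit plan composes gear ratios; the teeth counts are illustrative. The exact 1:60 reduction is a property of the layout, present in no individual gear:

from fractions import Fraction
from functools import reduce

# Each stage of the plan: (driving pinion teeth, driven wheel teeth).
assembly_plan = [(8, 60), (8, 64)]  # (60/8) * (64/8) = 60 exactly

ratio = reduce(lambda acc, stage: acc * Fraction(stage[1], stage[0]),
               assembly_plan, Fraction(1))

print(f"composed reduction: {ratio}")  # 60: one turn of the minute wheel
assert ratio == 60                     # per 60 turns of the seconds wheel

Change one tooth count and the whole no longer tells the time: the property belongs to the integration operator, not to the parts.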
Using the escapement, we will be able to create an initial rotation mechanism, using gears, that reproduces the movement of the circles of Ptolemy's model, rolling over one another, as in the theory of epicycles. The clock is never more than an easily transportable mechanical model of the apparent movement of the stars around the Earth. The precision of the sizing of the gears and the rotation axes, and the quality of the pivots (sometimes made of diamond), determine how quickly the clock will consume the potential energy stored in its springs in order to compensate for the inevitable friction. And all clockmakers know that, once a certain number of moving gears and parts is surpassed, a clock seizes up irrevocably15, thus limiting the capacities of the system and its lifetime, because the state of the gears is modified over time by oxidation and wear to the teeth, which increases the friction, etc.

Saying that a given object is an element of the system is the same as saying that there is a "collectivizing"16 relationship that rigorously defines belonging or non-belonging to the system that we are attempting to characterize (see section 3.1, definition 3.4 of the black box). Without this criterion, we risk incorporating "foreign" elements into the system that will falsify all reasoning about the system as a whole. In the case of a clock, as with all mechanical objects, a parts list, the assembly drawings and the tools required for assembly are all needed. In biology, the role of the immune system is to ruthlessly destroy any element present in the organism that does not bear the signature identifying it as a member of the community17.

The elements are subject to wear or degeneration according to laws that are specific to them, including some that are not well known, or are completely unknown, and which, due to this, generate "black swans"18 if we are not careful. Over time, the elements will be altered and will no longer adhere to their service contract; it then becomes necessary to replace them, which can cause maintenance downtime. Certain systems, such as electrical and telephone systems, can no longer be shut down globally, as this would cause a major economic disaster. Thanks to

15 Refer to the Calibre 89 designed by Patek Philippe, in 1989, which numbers 1,728 parts, a record! We recall that the combinatorial analysis of the assembly is the number 1728!, that is, ≈ 1.07 × 10^4846 obtained via Stirling's formula; in other words, such a large number that we cannot write it, even using all the atoms of hydrogen that the visible universe contains, that is, 10^80 (according to Christian Magnan; refer to https://en.wikipedia.org/wiki/Observable_universe).
16 Refer to the definition of sets in N. Bourbaki, Théorie des ensembles, Hermann, 1970.
17 Refer to the work by P. Kourilsky, Le jeu du hasard et de la complexité, Odile Jacob, 2014.
18 Refer to the best-seller by N. Taleb, The Black Swan – The Impact of the Highly Improbable, Penguin, 2007. Using Poincaré's language, this is a property known as "predicative"; there is therefore nothing new about it.
conveniently arranged redundancies, we can stop certain parts without significantly affecting the overall service contract of the whole, and then carry out maintenance work. In the case of a company "extended" to its clients and/or its suppliers, there is an interpenetration of information systems, which means that the relationship of belonging to such and such a system becomes problematic, with serious consequences in the event of incidents/accidents, because then, whose fault is it? The entire problem of systems of systems is thereby posed, along with the conditions for a "healthy" interoperability that must respect the rights and obligations of the systems that come together in this way. Cooperation pays, as long as it gives a competitive advantage, but it always has a cost.

A second important characteristic involving the entire system corresponds to the concept of a predicative set, a notion that comes from H. Poincaré, following his discussions with B. Russell concerning the paradoxes of the set theory that was gathering pace at that time, at the beginning of the 20th Century19. A set is known as predicative if the list of its constituent elements is immutable. In sets of this kind, we can reason in complete security about what belongs or does not belong to the set; negation has a meaning. But in real-life situations, which are subject to a future, such sets are rare. In a non-predicative set, the list of elements is open and its composition can vary over time. The negation of a sub-set of a list of this kind is therefore open and we do not really know what is in it; we therefore cannot apply operations planned in advance to the elements of this sub-set. Reasoning that invokes the law of the excluded middle no longer functions, because the inside/outside border is blurred. We note that the concept of a context-free language is a predicative notion, because the language is sufficient in itself and does not require external input to interpret its semantics.

In a company, there may be people on the premises who are not employees. They must be identified as such and be monitored by all means, because their behavior, or quite simply their security clearance, is not the same as that of the employees, who are supposed to know the rules of the company for which they work. In the case of an electric system, the set of elements/equipment that ensures production is predicative; we can therefore plan the production. In the case of renewable energies, or of the installation of photovoltaic panels that turns each one

19 Refer to H. Poincaré, Dernières pensées, Chapter IV, "La logique de l'infini"; and Science et méthode, Chapter III and the following, "Les mathématiques et la logique". And, in more depth, refer to the Archives Henri Poincaré, published by A. Blanchard.
of us into a potential producer, the set of means of production loses the property of predicativity. This is a significant difficulty for the network regulator and for EDF (Électricité de France), hence the need to collect information via "intelligent" meters such as Linky.
REMARK.– The incorporation of "foreign" elements makes the system non-predicative, which is an annoyance but is not prohibitive. It can even be a strategic advantage, but it needs to be taken into account in engineering and architecture to control its potentially damaging effects. In companies that apply security rules, each person who is an outsider to the department must be accompanied by a qualified person who is responsible for their actions. The pair that is thus created remains predicative. The accompanied person temporarily inherits certain properties that characterize the employees.
What we need to remember is that the elements of the system are identified as such by an explicit membership criterion, which must be controlled at all times in the "life" of the system, and by the function that they carry out within the system "community". The "whole" obtained in this way, according to the assembly plan of the system (this is the integration operator), is different from the sum of its parts: it is a space of new freedoms. The overall function of the system is an emergent property which is not present as such in any of the elements constituting the system, but all of these elements must comply with the rules of the assembly plan (in logical terms, in theoretical computer science, this is an assembly operation) and with the means of the authorized interactions (a grammar of interactions). This defines an integration operator, in the mathematical sense of the term, which from a set of elements creates/manufactures a new entity that will be known as "integrated" and whose properties cannot be deduced from any of its constituent elements.
3.3. Interactions between the elements of the system
When we talk about interactions we talk ipso facto about two things: (1) the need for a physical means for exchanges and communication between the interacting elements, either a mechanical connection, as in the case of centrifugal governors, or an electrical connection with conducting wires/cables, or a connection by electromagnetic or optoelectronic waves through laser beams, as is now habitual; and (2) conventions so that the interactions are operational and contribute to the system's mission, where each of the elements exchanges information, or meaning, with the others with which it interacts, using appropriate linguistic means. In passing, we note that there are intended interactions, in some way operational, and unintended interactions, generally damaging, such as those created simply by physical proximity (heating of a given zone causes a breakdown in certain unprotected and/or
exposed equipment, etc.) that need to be monitored; this is, for example, the role of the centralized management system (CMS) for the automated management of buildings. This means that in the description of the system interactions20, we will necessarily find:
– the interconnection matrix of the elements between themselves, in the form of directed relationships which specify the physical nature of the interconnection and the capacities required for the link. This is the notion of "communication channel", which plays a fundamental role in the Shannon–Weaver theory of information. The elements in interaction with the exterior (the outside) must be listed;
– the classification of the messages that convey the exchanged information. In the world of telecommunications, which is above all a world of messages and signals, a conventional notation known as the ASN.1 language is laid down as a basis. The physical representation of the message is a separate problem, related to the format: when the first computers appeared, engineers used delay lines with mercury baths, which propagated conventional signals at a certain speed. In the world of the Internet, there is the XML language, known to a good number of Internet users;
– the procedures and rules of exchange which allow elements to synchronize themselves and to act in a coherent and finalized manner, thanks to protocols and conventions. In armies, from Roman times until the Napoleonic period, the fighting units synchronized themselves with drums, bugles and flags (the word "standard" initially meant a coat of arms, a sign by which to recognize friends/enemies; a device that is still in effect in modern ground–air combat via transponders and IFF identification21, so that anti-aircraft defense does not destroy friendly aircraft). Procedures and rules of exchange operate like the grammar of a language, which ensures the coherence of all the behaviors of the elements in the system.
The mechanics of interactions can be centralized or decentralized, but its dynamic will still be transactional – that is, quantified – both in energy and in time/duration, as in the case of the electric system: what is consumed by one can no longer be consumed by another, and the global energy resources must be diminished by the same amount. Ideally, decentralization may appear to be the better solution, because there is no center; yet each element is a potential center, which means that each one is linked to all the others. Intuitively, we sense that there are indeed limits to decentralization, for example when we need to react rapidly in response to dangerous situations, or to sudden changes in the environment. Inversely, an excess
20 All this is described in the systems engineering literature: NASA Systems Engineering Handbook and INCOSE Systems Engineering Handbook.
21 Refer to https://en.wikipedia.org/wiki/Identification_friend_or_foe.
of centralization will inundate the messaging center if the system is large, which can lead to significant latency of varying duration, depending on communication channel congestion, and to response times that are incompatible with the survival of the elements awaiting answers. This will lead to randomized malfunctions of the feedback loop (Figure 2.1), hence the phenomena related to what the physicist P. Bak calls self-organized criticality22. We will come back to this in Chapters 8 and 9.
Living organisms propose a wide range of centralized/decentralized mechanisms. The nervous system is relatively centralized. The neurovegetative and endocrine systems can be considered as mixed. The immune system is extremely decentralized, because the immune response must be immediate in the case of an attack on the organism, an injury, etc. In human societies, language is a decentralized mechanism23, and each time we have ventured to unilaterally legislate language, this has generally not ended well. The military and the police, who in some way play the role of an immune system, are highly centralized systems both in principle and in their organization, able to act locally on request, or to be "projected" outside national territory, as in the case of the armed forces.
In the defense and security systems24 that all modern states have, we are accustomed to distinguishing what is conventionally known as: (1) thinking time, aptly named, because in order to think and make decisions, the capacity to process large amounts of information is necessary and therefore large memory storage capacities are required; and (2) real time, which implies an instantaneous answer, or at least the fastest possible one, as in the problem of the "anti-missile shields" that were so dear to President Reagan at the time of the Strategic Defense Initiative, whose most immediate effect was to bring down what was left of the Soviet economy.
Using the metaphor of the clock, mentioned previously, we can say that a system architect is not someone who knows how to cut gears, although they do need to be able to judge the fitness of the gears for their purpose; rather, it is someone who provides the assembly plan of the elements, possibly organized into sub-assemblies, until the final technical object is obtained, in G. Simondon's meaning of the term, while preserving the ability to check the consistency of the {U, S, E} triplet, hence the existence of thresholds that should not be exceeded.
22 Refer to his book How Nature Works – The Science of Self-Organized Criticality, Copernicus/Springer.
23 In his work Aux sources de la parole – Auto-organisation et évolution, Odile Jacob, 2013, P.-Y. Oudeyer explains how, by using models, a clear consensus can emerge from interactions within a human group modeled by robots; the emergence is in the interaction.
24 These are C4ISTAR systems, already mentioned.
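The centralization/decentralization trade-off sketched above can be quantified very simply. The following minimal sketch (ours, not the author's; Python is an assumption) counts the links each scheme requires: full decentralization needs one link per pair of elements, whereas a C2 center needs only one link per element, at the price of concentrating all the traffic, and hence the risk of saturation, in one place.

# Minimal sketch: link counts for fully decentralized vs centralized schemes.
# In the decentralized case every element is a potential center, linked to
# all the others; in the centralized case the C2 center absorbs all traffic.
def link_counts(n: int) -> tuple[int, int]:
    decentralized = n * (n - 1) // 2  # one link per pair of elements
    centralized = n                   # one link per element, to the center
    return decentralized, centralized

for n in (10, 100, 1000):
    full, star = link_counts(n)
    print(f"N={n:>5}: {full:>7} pairwise links vs {star:>5} links to a center")

The quadratic growth of the pairwise count is one reason why purely decentralized schemes hit their limits; the single loaded center is why purely centralized ones do.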
What we need to remember (see section 2.2) is that the interactions between the elements of the system translate naturally into grammar-based operations; the grammar of interactions is the custodian of this fundamental principle. This amounts to specifying the coherent "sentences", which are all the valid behaviors of the system with respect to its environment, that is, its semantics, itself characterized by a certain sequence of actions that the system effectively carries out.
3.4. Organization of the system: layered architectures
As soon as they cross a certain threshold of complexity – even if this is only in terms of the size of the system, measured by the number of basic mechanisms/equipment integrated – artificial systems are organized hierarchically, in line with the capacity of the engineering teams who ensure the development and maintenance of the system to understand what is required of them. This point has been theorized by H. Simon in Chapter 8 of his work The Sciences of the Artificial, "The Architecture of Complexity". From a strictly epistemological point of view, we wish in this chapter to recall a few fundamental aspects of the hierarchical organization of systems, illustrating them with examples from the everyday life of human-created systems.
3.4.1. Classification trees
First, we note that the notion of hierarchy, meaning classification, is deeply rooted in the cognitive capacities that appear to be specific to the human species25, with language – which to a certain extent is also a classification26 – allowing individuals who have mastered it to interact. Every human has the capacity to learn their mother tongue, but instilling modifications to this language requires collective work among the speakers concerning the grammar of the language, its metamodel, that is, all the rules that are applied to guarantee the coherence of the model.
All classification proceeds from the principle of economy of thinking. At level 0, everything that is perceived in situations that have been lived is listed as a primitive element, a factual description of what exists, and of the essential memorization mechanisms that allow us to survive and to adapt; then very quickly, as soon as the stock of memorized situations increases, classification levels appear that are based on abstractions shared by certain elements. Hierarchy is a means of organization of
25 Refer to C. Lévi-Strauss, La pensée sauvage, Plon, 1962.
26 Refer to the founding text by J. Piaget, Équilibration des structures cognitives, PUF, 1975. The organization of a database is entirely founded on this principle. For a more philosophical approach, refer to the famous text by J.L. Borges, La bibliothèque de Babel, Œuvres complètes, vol. I, Pléiade, p. 491.
knowledge which – using a few general criteria resulting from observation, criteria that a specific individual object satisfies and which in fact mean that it can be identified as such27 – allows that object to be found in the classification, and therefore all its known characteristics to be used. It is a system for addressing and/or locating knowledge, which is the basis of the mechanisms that have been used in all computers since von Neumann's model to organize memory content, and even, long before that, to organize a library as soon as the number of works exceeds a few hundred. When we ascend the classification tree towards the root, we proceed by generalization, and when we descend towards the leaves, we specialize more and more until we reach the individuals themselves. Once the key elements of the classification have been integrated, which was the objective of the learning, it is easy to find one's way. For these same reasons, classification of the entities is at the heart of the engineering of artificial systems, under a more down-to-earth name: system classification, as H. Simon has shown us with the clock metaphor (see Figures 3.2, 3.3 and 3.5).
The essential point to understand perfectly is that in all classifications a principle of exclusion applies: an individual cannot feature in two leaves of the tree, because it would then have two names; without this principle, the classification would be inconsistent, which is obviously a prohibitive fault, because for our hunting/gathering ancestors, classification was a tool for survival. Mistaking a cow for a lion meant running the risk of death, except when seen from very far away, as in the classification by J.L. Borges28. The leaves of the tree have properties that are mutually exclusive; only the properties that they have in common, meaning the abstract properties, feature higher in the hierarchy. Classification is a strong semantic act.
In terms of a system, this principle of exclusion that avoids ambiguities must be part of the construction procedure, and once the system is constructed, it is necessary to ensure that this is indeed the case, in other words that the elements designed as isolated are effectively isolated in the physical reality of the physical system. This is what we will illustrate with two examples of systems that have had a profound influence on the history of technology: (1) the internal combustion engine, which equips automobiles and/or airplanes, a 19th-Century invention; and (2) the computer, which is without doubt the greatest machine ever invented by mankind, in the second half of the 20th Century.
27 In the files and/or databases, long before the invention of computers, there were indices, or "unique" identifiers, that distinguish the individuals from each other. In cladistics, we refer to the phylogenetic classification of living things.
28 Refer to the essay "La langue analytique de John Wilkins", in the collection Autres inquisitions, Œuvres complètes, Pléiade, vol. I, p. 747.
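As a minimal illustration of the exclusion principle (our sketch, not taken from the original text; the leaf and individual names are hypothetical), the following check verifies that no individual is classified under two leaves; the duplicated name "leo" makes the second classification inconsistent.

# Exclusion principle: an individual must appear in exactly one leaf of the
# classification tree, otherwise it would have two names and any reasoning
# based on the tree becomes inconsistent.
def respects_exclusion(leaves: dict[str, set[str]]) -> bool:
    seen: set[str] = set()
    for members in leaves.values():
        if seen & members:        # individual already classified elsewhere
            return False
        seen |= members
    return True

assert respects_exclusion({"cow": {"marguerite"}, "lion": {"leo"}})
assert not respects_exclusion({"cow": {"leo"}, "lion": {"leo"}})  # "leo" has two names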
3.4.1.1. The internal combustion engine
As its name suggests, the internal combustion engine29 is a system that regulates the energy produced by explosions of an air/petrol (or diesel) mixture – in other words, a discontinuous phenomenon – so that it becomes a motor torque distributed over the driving wheels, which allows the vehicle to advance continuously and not in stops and starts. The engine is integrated into an equipment hierarchy, as shown in Figure 3.2.
Figure 3.2. Hierarchical organization of an automobile. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In a four-stroke engine with four cylinders, turning at 3,000 revolutions per minute, there are 3,000 explosions (in the order 1, 3, 2, 4, 1, etc.) per minute, or 50 per second, knowing that an explosion is a rapid chemical phenomenon (less than a millisecond). The energy produced by the engine results from a quick succession of shocks in the back-and-forth motion of the pistons, which will have to be transformed into continuous rotational energy so that the car does not lurch forwards!
29 Refer to the well-written Wikipedia entry https://en.wikipedia.org/wiki/Reciprocating_engine, or alternatively: http://jl.cervella.free.fr/ressources/moteur.html.
From a purely operational point of view, after the energy is produced in the engine cylinders and before it is made available at the driving wheels, the discrete energy signal must be transformed into a continuous signal. That means an operational chain:
Raw energy → [discontinuous → continuous transduction] → Organized energy
The resultant of the forces created by the movement of the various parts of the engine must be zero, otherwise the operation of the engine will lead to an awkward drift of the vehicle, in the form of a parasitic force that is moreover variable and made up of vibrations. The mechanical energy that is dissipated in the form of impacts and/or vibrations, and not recovered at the driving wheels, must be cushioned to avoid creating destructive and/or unpleasant parasitic phenomena: displacement of certain parts, unbearable noise, resonance, etc. This is because energy that is not used at the driving wheels is transmitted throughout the structure of the vehicle and is released however it can be, wherever it can be; and if it accumulates, it is worse! This example highlights the general problem of mechanical coupling because, in addition to the functional couplings, non-functional couplings are systematically going to appear which are generally damaging, related to the production of energy itself and to the geometry of the constituent parts of the vehicle. These non-functional couplings must also be checked, via the information that they generate, because whether we want it or not they are part of the system. It is up to the entire vehicle, as a system, to ensure that the global energy balance is effectively compensated – in other words, a fundamental invariant:
Energy produced [Supply] ⇔ Organized energy [Demand] + Wasted energy
The hierarchical structure of the vehicle is obviously going to play an important role in maintaining this invariant, given that compensation for wasted energy must be carried out as closely as possible to its source, taking into account the known transmission modes of energy via mechanical assemblies. The geometry of a part like the crankshaft is determined by the requirement to cancel out the resultant of the forces caused by the movements of the pistons, connecting rods and the crankshaft itself; hence its very specific form, obtained empirically through trial and error.
REMARK.– We note the similarity with the supply/demand equilibrium of the electrical system. This is formally the same problem; the only difference is the scale of the energies.
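The invariant can be read as a simple conservation check. A trivial sketch (ours, with purely illustrative figures) makes the bookkeeping explicit:

# Fundamental invariant: Energy produced == Organized energy + Wasted energy.
def balance_ok(produced: float, organized: float, wasted: float,
               tol: float = 1e-9) -> bool:
    return abs(produced - (organized + wasted)) < tol

assert balance_ok(100.0, 72.0, 28.0)   # illustrative figures, not measurements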
3.4.1.2. The computer and its interface stack
The computer is probably the greatest hierarchical machine that has ever been invented by mankind. Figure 3.3 gives some idea of this.
Figure 3.3. Hierarchical organization of a computer in layers. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Within the computer – which is first of all an immense electrical circuit including hundreds of kilometers of invisible cables buried in a mass of silicon30 – coexist physicochemical structures that are completely dependent on the laws of nature with their hazards, and structures that are purely logical, completely abstract and immaterial: the programs created by the various categories of programmers, which mean that, from the point of view of the user, it will be possible to provide a service, that is, that a single cause (in this case the activation of a command of the system) will always produce the same effect in reality. In the software, there are more than 20 levels, depending on the organizations used (besides, the term "layered architecture" comes from the world of computers, and for good reason!), and it is necessary to add at least as many from a hardware point of view.
30 Although it is highly abundant in nature, the silicon in our computers does not exist in a natural state; it needs to be transformed by a complex physicochemical process to give it the property of being a semiconductor.
The computer's lesson in systemics teaches us how to go from a world that is dependent on the laws of nature, in particular those of the world of quantum physics, to an idealized world of algorithms and cooperating processes, whose control methods are perfectly subject to the directives given by the machine's programmer. The computer does absolutely nothing that has not been decided by a programmer; it has strictly no initiative, nor any autonomy that was not foreseen in its design. This is the very condition for its programming – more exactly, its capacity to be programmed – and for its utility, as we briefly mentioned in Chapter 2 with the Church–Turing thesis.
From the point of view of its circuitry, a machine instruction is never more than a diagram of interconnections of a functional element, such as an adder circuit, with other elements where the "numerical" information that needs to be taken into account is registered – incorrectly called "data", because at this level of the machine there are only configurations of energy states. The resulting energy state will be interpreted as a number at the next level up. In von Neumann's architecture, the interconnection diagrams, meaning the commands that the computer must execute, and the "data" are registered in the same organ: the memory. The memory is a neutral organ whose function is to maintain the energy state of its constituents for as long as it is fed electrically. In the upper stages of the hierarchy, its content will be interpreted as interconnection diagrams or as data, with possible permutations of these two roles. This allows us to virtually introduce "abstract machines", which are purely logical, on top of the only real physical machine; for example, the bootstrap stage in Figure 3.3, where a protosystem, invisible to the user, constructs in memory the structures of the final system that will be visible to the user (for those who use Microsoft Windows, this is the role performed by the BIOS), hence the name bootstrap.
As we all know, the quantum world, at an atomic level, is a random world. Totally strange phenomena take place there, such as allowing an electric current to pass or not, depending on certain conditions: in one case, we have a semiconductor, and in the other case, a perfect insulator. It is thanks to semiconductor physics, and to the extreme purification of materials31, that the computer has really been able to develop. Bodies that are naturally semiconductors, like galena (lead sulfide mineral), are rarely present in nature. However, the ultra-pure silicon crystals with which all integrated circuits are made are perfect insulators; they are glass! By disturbing the crystal very slightly by introducing certain impurities, the semiconductor phenomena will emerge.
31 This is the mechanism of zone melting, on which all semiconductor physics is based.
Transistors are today technical objects of the nano world. With etching distances of 10–20 nanometers, a transistor is made up of rows of a few hundred silicon atoms, that is, a small block of matter containing a few million atoms (a block of 100 × 100 × 100 = 10^6 = 1 million), which means that a complete circuit can contain, or will between 2010–2020 contain, 10 billion transistors (i.e. 10^10) for a few cm² of silicon. This can appear enormous, but it does however remain small at the scale of Avogadro's number, 6.02 × 10^23 atoms per mole. But these nanometric transistors are sufficiently small to be subject to the influence of alpha particles (2 protons + 2 neutrons, like helium) which are produced by natural radioactivity. These particles are sufficiently energetic to be able to change the state of a transistor. This is without counting other random phenomena like diffusion (at the atomic scale and at ambient temperature, everything is moving! It is only at absolute zero, –273°C, that matter is at rest), or the electrical and magnetic fields induced by the electrical charges in movement, which can lead to parasitic currents, etc. When we say that the circuit commutes at the speed of 1 gigahertz, this is as if we were switching the light on and off a billion times per second; and this can create overpotentials at the scale of the circuit. Since it is a current that is circulating, this will produce heat, and it will be necessary to dissipate the heat produced, otherwise there will be a breakdown.
From a physics point of view, the circuit is a machine that makes information circulate, information that is represented by a certain energy state temporarily or permanently memorized in its circuits. When we talk about the circulation of information, we ipso facto refer to the phenomena of communications, such as telephone lines, which are sensitive to all kinds of "noise" whose effects must be cancelled by suitable compensations. The same can be said for circuits that contain tens of kilometers of nanometric cables and where "noise" must be controlled and inhibited. In short, if nothing is done to organize the interactions, "it is not going to work!" All the reliability of a circuit, its physical capacity to do exactly what the programmer has logically decided, is based on collective phenomena, not those of one atom, but of a few million atoms. How far down we can go, no one is today able to say32. It is necessary to test and observe what happens, without forgetting slow phenomena such as aging. This is all the more so since the circuit must remain operational for at least as long as the lifetime of the system that hosts it, although the diffusion phenomena of microscopic conductors of a few nanometers, in aluminum or in copper, are correlated to thermal agitation. Only large atoms of gold,
32 In quantum information, we hope to be able to control the states of a single atom by "cooling" them to the extreme, hence the new possibilities of coding by playing with the two states: (1) the ground state of minimal energy; and (2) an excited state following an energy input; however, as stated by S. Haroche, Nobel prize winner, who works on the subject, "this is not for tomorrow".
used for a long time in these technologies, do not move much. Since there are many transistors, we will be able to use some to correct the others, with error correction codes whose pertinence is based on a statistical property related to the intrinsically random nature of the quantum world itself. For the circuit to break down, two alpha particles, perfectly synchronized to the billionth of a second (meaning less than one clock cycle), would have to disturb both the energy profile of the information and that of its corrector code, two physical zones of the machine that are distinct from a topological point of view. However, this is a probability that we know how to calculate. All the reliability of the structure will be organized in this way, with compensation mechanisms placed in a suitable manner, since the structure is itself constantly under surveillance, at least as regards its electrical state. If this state is declared to be good for service, then we can verify that the operational logic of the circuits, meaning the finite-state automata (this is an invariant that is independent of the materials that constitute it, materialized by suitably calculated test sets), remains nominal and in compliance with the expected service. If this is not the case, the circuit is no longer anything more than a simple resistance that has lost the meaning that its designers had attributed to it.
It remains the case that, at a certain level of the hardware stack, random phenomena, whether from the quantum world or from the classical world, must be perfectly compensated for, not at 99.999% but at exactly 100%, because at the commutation speed of the machine, with N large, (0.9999…)^N → 0: breakdown would be certain. This means that the layer above can completely abstract away the layer below it, which then operates like a perfect black box, providing the expected service. If the electrical system constituted by the circuit is not guaranteed at 100%, it is then strictly impossible to validate the operational logic of the circuit, because we no longer know whether the cause of a breakdown is "physical" or "logical"; and since we are in the nano world, we can no longer even observe what happens from the outside using intrusive devices. The observer must therefore be integrated into the layer that they observe (this is the Maxwell's demon of the layer, filtering what is good (see Figure 2.3); but in this case, the demon is the architecture!). Herein lies the magic of layers. A "layer" that does not have this property is not a layer; it is just a thing, an unformed magma! Only the interface that separates the sender from the receiver counts, as shown in the diagram in Figure 3.4. The sender only sees the part of the receiver that is strictly necessary for its use, meaning its external language. All the rest is and must remain hidden. One can refer to the authors' website concerning the organization of a computer stack, which is an excellent illustration of this fundamental principle.
Figure 3.4. Hierarchical functional logic of layers. For a color version of this figure, see www.iste.co.uk/printz/system.zip
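The arithmetic behind this requirement of 100% compensation can be sketched in a few lines (an illustrative calculation of ours, not from the original text): even a per-commutation success probability of 99.999% collapses to zero over the billions of cycles that a 1 GHz circuit performs every second.

# Survival probability of N successive commutations, each succeeding with
# probability p: p**N tends to 0 as N grows, however close p is to 1.
p = 0.99999
for n in (10**3, 10**6, 10**9):   # a 1 GHz circuit runs ~10**9 cycles/second
    print(f"N = {n:>13,}: probability that all succeed = {p**n:.3e}")
# Only a layer whose hazards are compensated to exactly 100% can serve as a
# perfect black box for the layer above it.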
Without perfect mastery of layer engineering, which is the very essence of the constructive modular approach of systems engineering, there would be neither computers, nor communications systems, nor systems at all! This is because the logic of layers has allowed the propagation of abnormal situations to be controlled, situations which are non-functional and whose accumulation would be fatal for the system. The decoupling, materialized by the interface, radically separates two worlds, each of which respects a logic that is particular to it. Each level has its Simondon triplet {U, S, E}. This is controlled emergence, something that nature also does very well with its conservation laws, by creating more and more diversified structures33 at all scales.
By analogy with mathematics34, we can say that a layer is a theorem or a set of theorems that are seen as axioms, which can be used within their limits of validity to
33 On this subject, one can refer to the work by the Nobel prize winner R. Laughlin, A Different Universe – Reinventing Physics from the Bottom Down, Basic Books, 2005; he is a specialist in semiconductors and a professor at Stanford.
34 In his famous work, The Foundations of Geometry, D. Hilbert thus places the axioms of geometry into a hierarchy, leading to the various geometries that we know of; geometry is historically, as we know, one of the pillars of engineering science.
construct other theorems. If necessary, on certain levels, new axioms related to that level are added, in order to semantically enrich the foundations of the system and satisfy the functional requirements of the applicative layers at the top of the stack in Figure 3.3. This is what J. von Neumann, right from the start, called the logical model of the machine, described by him as a computing instrument.
Engineering of layer N+1 is independent of engineering of layer N, notwithstanding compliance with the rules of the interface, meaning the external language that is associated with the layer. This independence is applied via naming mechanisms – the act of giving a name – for the entities that are specific to the layer, and via cross-referencing to keep track of who has done what in the system, from one layer to another. If the layers are not decoupled, or are badly decoupled, the specific hazards of each of the layers will combine and, because of this, become unintelligible to their immediate environment. The corresponding engineering teams will be able to work independently of each other, as long as the interface between each of them ensures the transduction from one domain to another, which requires the cooperation of each team. Hence, traceability matrices that memorize past states are essential engineering tools; without them, reconstructing the situation that has led to a mistake is impossible.
The lesson given by these two examples is that all "living" systems, meaning those that provide a service to their users, possess a logical model that keeps them functionally "alive", meaning coherent from the point of view of their mission, that is, their maintenance in operational condition. Understanding a system means explaining its logical model, and above all guaranteeing the logical model as an invariant of fundamental comprehension – in other words, deterministic and with mastery of the construction process, level by level. This is a text that is associated with a certain abstract machine (in fact, a stack of machines organized in a hierarchical manner) which operationally defines its semantics. Whatever happens, we are always able to extract the history of the successive transformations that have been carried out, and to compensate for the hazards encountered, which is a prerequisite for the healthy engineering of errors, thanks to traceability matrices.
3.4.2. Meaning and notation: properties of classification trees
These two examples show, each in their own way, the importance of the act of "naming". Naming things, choosing a name, is never a trivial act, something that we have known for a long time; in the Bible, it is even an attribute that God confers on humankind (Genesis 2:18). If the elements of reality do not have the same name, the
act of communication is impossible, confusion takes over and that is Babel, another biblical metaphor (Genesis 11:1). Logicians have taught us that it is necessary to be careful with names. Two different names do not necessarily mean two different things; G. Frege distinguished what he called the meaning, what we now call a name, from what the name designates: hence, in his terminology, the meaning and the notation. The evening star and the morning star are two names for the same object, in this case the planet Venus. The number π has multiple meanings: other than the usual definition π = Circumference / Diameter, we also find π = 4 ∫₀¹ √(1 − x²) dx, which is the calculation of the surface area of a circle with radius 1, or even the formula used by Machin to calculate the first 100 decimals, π = 16 arctan(1/5) − 4 arctan(1/239), and other, more exotic ones.
More in depth, the objective of algebraic geometry, refounded in the 1960s–1970s by A. Grothendieck, was to replace the geometrical language based on figures by algebraic expressions, which is a way of studying the correspondence between the continuous universe of geometrical beings and the countable universe of expressions with which we can approximate the continuous. As we know, the rational numbers have the same cardinality as the whole numbers, but the continuum contains an infinity of numbers that are not rational. A computer, by definition, only allows manipulation of the countable, which does not prevent us from making it perform magnificent approximations of the continuous, for example caustic surfaces in optics or even fractal curves, and from making it control fundamentally continuous physicochemical processes, or at least processes modeled by differential equations; and all this thanks to the inertia of the real world where, beyond a certain threshold, the two universes are at a tangent. This last point is fundamental, because it comes down to defining a quantum of action thanks to which the constructed system remains intelligible. The quantum depends on the expertise of the architect who designed the system, because it is a decision made by the architect (refer to the numerous additions on the authors' website).
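Frege's distinction can be made tangible with a short numerical sketch (ours, not the author's): three different "meanings" – the library constant, Machin's arctangent relation and the quadrature of the unit circle – all denote the same object.

import math

# Three "meanings" with a single denotation: the number pi.
pi_library = math.pi                                        # circumference/diameter
pi_machin = 16 * math.atan(1 / 5) - 4 * math.atan(1 / 239)  # Machin, 1706

# Area of the unit circle: 4 * integral of sqrt(1 - x^2) over [0, 1],
# approximated by the midpoint rule with one million slices.
n = 1_000_000
pi_area = 4 * sum(math.sqrt(1 - ((k + 0.5) / n) ** 2) for k in range(n)) / n

print(pi_library, pi_machin, pi_area)  # three notations, one object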
In the closed world of the constituent parts of an automobile (although it could just as easily be a large carrier like an A380), or of the billions of components of a computer today, etc., the description of the system is a finite set of references where everything is listed; this is what mechanics have called, since the beginning of the
automobile industry, a classification: the bill of materials35. This classification reflects exactly the hierarchical structure as it is visualized in Figure 3.2.
Quite different is the situation created by the integration of computerized communication mechanisms within the systems themselves, which allows the user, whether human or artificial, to interact with the system, wherever it is. To understand the consequences of this, we need to return to Simondon's triplet {U, S, E}. Any interaction by the user will give rise to structures stored in the memories of the computers/processors (a calculation organ in J. von Neumann's sense of the term) which are now interested parties in all systems. We recall once again that, for computerized systems and silicon, all this is transformed at one time or another into energy profiles. This point will be taken up in Chapter 5.
Each time the user interacts with the system, an active memorization structure, referred to as a process, must be created and therefore named. The user can interrupt their interaction, temporarily or definitively, with the consequence that it must remain possible to find what has been created. When an interaction is declared finished from the point of view of the user, it does however leave an imprint that is required for the traceability of the operations carried out, a trace that must be conserved in a long-term archive, in compliance with the legislation in effect; this is known as forensic data36. In the event of a breakdown, thanks to these traces, the history of the interactions that have led to an abnormal situation can be reconstructed; this is therefore an essential device for system safety.
In terms of naming, the consequence is that it will be necessary to create new names throughout the life of the system. In a computerized system – and they are now all like this – the set of constituent elements of the system is no longer a finite set with a known extent; it is, in the literal sense, infinite – in other words, not finite. The set is no longer predicative; as a consequence, the law of the excluded middle no longer works when we reason about a "living" system that interacts with its environment. The "parts" of the system are, on the one hand, physical parts, visible in the usual meaning of the term, but on the other hand also invisible "parts", which cannot be touched but only "seen". There are therefore real parts in all computerized systems, of a finite and known number, as well as – and this is new – virtual parts of a potentially infinite number. This is shown in the diagram in Figure 3.5, onto which we could project the description of the vehicle in Figure 3.2.
35 In systems engineering, these are the matrices known as N2; refer to NASA Systems Engineering Handbook, paragraph 4.3 "Logical decomposition", in the 2007-6105 version.
36 Refer to https://en.wikipedia.org/wiki/Forensic_data_analysis and https://en.wikipedia.org/wiki/Computer_forensics.
Figure 3.5. Static and dynamic classification. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The figure shows the static part and the dynamic part of the system, of which some of the constituent equipment integrates software to a greater or lesser extent. When this software "works" on behalf of users, human or non-human (e.g. "connected objects" in interaction), workspaces are created, known as "sessions" in systems engineering jargon, to which a name must be given so that they can be managed (see section 5.1, the language of interaction and raw orders), because they are likely to interact with the equipment, and also from user to user. In terms of complexity, this "opening" will have serious consequences. The opening creates new opportunities, but it also creates architectural obligations, because it will be necessary to organize it rigorously in terms of engineering. It is not enough to "open"; it is also necessary to control, and for this to trace, to conserve histories in order to validate the transformations that have been carried out. These dynamic entities related to the evolution of the system over time of course have a relationship with the static element that created them; but they can also have relationships between themselves, 2 by 2, 3 by 3, etc., which means that the potential combinatorics is of the same order as the set of all the parts, that is, if N entities are created:
O(2^N). But there is worse, because the context thus created can itself enter into a relationship with the set that created it, and this time we are in O(2^(2^N)), and so on and so forth as the operations carried out by the users continue – in other words, O(2^(2^(2^N))). We are heading at full speed into what physicists who are interested in information call the third infinite, that of the combinatorics conveyed by information: the infinity of complexity, or the infinitely complex37. Beyond the errors that we have already mentioned, the architect designer must take care to control this combinatorial aspect so that the system remains testable at all: this is the role of the integration process, one of the fundamental processes of systems engineering, which is nothing more than a concatenation operator, in the language of theoretical computer science.
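A small numerical sketch (ours, not from the original text) shows how brutally this combinatorics grows, and why the integration process must keep it under control if the system is to remain testable:

# Combinatorial growth of interaction contexts for N created entities:
# O(2^N) groupings, then O(2^(2^N)) once contexts interact with the groupings.
for n in (3, 4, 5):
    level1 = 2**n          # subsets of the N entities
    level2 = 2**level1     # subsets of those subsets
    print(f"N={n}: 2^N = {level1}, 2^(2^N) = {level2:,}")
# Already for N = 5, the second level reaches 2^32 = 4,294,967,296.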
With the telephone numbers or IP addresses required for the proper functioning of the Internet, all users are familiar with naming problems. A telephone number identifies a specific telephone, thanks to a code of 10 digits that is known to everyone, accompanied if necessary by the international dialing code. Ten digits makes 10 billion potential telephone numbers, for a population of 65 million, plus companies, plus users who sometimes have several telephone numbers. Any computer connected to the Internet and, in general, all "connected objects", which we call the Internet of Things (IoT), are provided with an IP address that identifies them as communicating entities. The IP address was first coded on 32 bits, which is a potential for identification of 4,294,967,296 objects – largely insufficient given the development of the IoT. With version 6 of the protocol (IPv6), the IP address has gone to 128 bits, meaning a potential for identification of 2^128, which in engineering notation would be written ≈ 3.40 × 10^38. The number can seem immense but, when we think about it correctly, it is not all that big, for at least two reasons:
1) it is necessary to recall that what makes the combinatorics of infinitely complex things are the relationships or the interactions, whatever they are. Maintaining a trace of the incoming and outgoing calls on a telephone over the legal period of two or three years (refer to the L of PESTEL) is quickly going to generate thousands upon thousands of combinations that will need to be archived, but above all to be found again, just in case! The identifier of the interaction must be unique because, if it is not, we are no longer sure of the sustainability of the incoming/outgoing relationship: there is a risk of confusion about identity;
2) moreover, it is humanly impossible to memorize numbers consisting of 39–40 digits, unless they are written in places that are highly visible to the user, which is contrary to all security regulations. The solution consists of associating a familiar name "in plain English" with the code, which the user can then easily memorize.
37 Refer to J.-P. Baton, G. Cohen-Tannoudji, L'horizon des particules – Complexité et élémentarité dans l'univers quantique, NRF Essais, Gallimard, 1989.
Taking the Latin alphabet, lower case and upper case, in addition to punctuation and some current symbols, we rapidly reach approximately 70–80 characters. If the designer authorizes names in plain English of 30 characters, the combinatorics is this time 70^30 = 10^(30 × log 70) ≈ 2.25 × 10^55, to be compared with the 10^38 of the IP address! Therefore, it overflows.
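These orders of magnitude are easy to reproduce (a quick check of ours, not part of the original text):

import math

# Orders of magnitude quoted in the text.
ipv4 = 2**32        # 32-bit IP addresses
ipv6 = 2**128       # 128-bit IPv6 addresses
names = 70**30      # 30-character names over a ~70-symbol alphabet

print(f"IPv4 : {ipv4:,}")                       # 4,294,967,296
print(f"IPv6 : ~10^{math.log10(ipv6):.1f}")     # ~10^38.5
print(f"70^30: ~10^{math.log10(names):.2f}")    # ~10^55.35, i.e. ~2.25e55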
Thirty characters is very short and not very user-friendly; we recall that SMS and Twitter messages are made up of 160 and 140 characters respectively. If we redo the calculation with, for example, 100 characters (this is a line of text, in font size 10–11), the space of names thus created will count 70^100 ≈ 3 × 10^184 possibilities. The moral: the clarity required for considerations of user-friendliness in human/machine interfaces – this is the external language of the user – will cause a difficult problem of name management, in order to comply with the coding possibilities given the internal names that are authorized by the technology. By means of these naming problems, we touch on one of the difficulties of correctly managing the inside/outside interaction, meaning the interaction of the user U with the two other elements of the triplet {U, S, E}. We will return to this in Chapters 8 and 9.
Here we give a few technical elements that allow us to understand the possibilities, but also the new problems, brought by classification trees. Every classification tree has a unique root, intermediate nodes and terminal leaves. The root is a node that only has sons, an intermediate node has sons and a single father, and a terminal leaf has a single father and no sons (see Figure 3.6).
Figure 3.6. Construction of classification trees
Each element of the tree, whether it is a leaf or an intermediate node, can be located by a series of numbers starting at the root, {R, i1, i2, …, ik}, which constitutes its name; the order, or rank, of each son is conventional. The length of the series of numbers depends on the depth (or height) of the tree.
If all the nodes have the same number of connections, say n, we say that the tree is balanced; it is n-ary. A tree of this kind, counting the root node, of depth/height h, will have a total number of nodes equal to:
1 + n + n^2 + n^3 + … + n^h = (n^(h+1) − 1) / (n − 1)
The number of terminal leaves is sometimes known as the "width" of the tree; we see that the depth/height of the tree varies as the logarithm to base n of the number of leaves (refer to an application of these properties in Chapter 8).
In ICT, binary trees, with two connections per node, play an important role due to the technology, because many physical devices that are easy to manufacture present two stable energy states, which we then know how to compose (meaning concatenate). Any binary number can be interpreted as a path in a binary tree: it is simply a case of interpreting 0 as going to the left and 1 as going to the right. In Figure 3.7, the number 010 therefore denotes one of the eight leaves of a binary tree of depth 3 (i.e. 2^3). A binary number with 32 bits will have a designation capacity equal to 2^32, that is 4,294,967,296, a little more than 4 billion leaves. This interpretation of a number, which works with any numbering base and not just binary, will be used as a storage system.
Figure 3.7. Binary tree to code the names of leaves
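Both properties, the path reading of a binary number and the node count of a balanced n-ary tree, fit in a few lines (our sketch, following the text's own conventions):

# A binary string read as a path in a binary tree: 0 = left, 1 = right.
def leaf_index(path: str) -> int:
    """Index of the leaf reached, among the 2**len(path) leaves."""
    return int(path, 2)

assert leaf_index("010") == 2      # one of the 8 leaves of a depth-3 tree

# Total node count of a balanced n-ary tree of depth/height h:
# 1 + n + n^2 + ... + n^h = (n^(h+1) - 1) / (n - 1)
def total_nodes(n: int, h: int) -> int:
    return (n ** (h + 1) - 1) // (n - 1)

assert total_nodes(2, 3) == 15     # 8 leaves + 7 internal nodes
assert leaf_index("1" * 32) == 2**32 - 1   # 32 bits address ~4 billion leaves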
In a computer, a binary number can therefore designate a memory compartment – this is the addressing mechanism, meaning a memory reference – where the compartment can contain a value that is also represented by a binary number. This choice of "all binary" was made at the origin of the first machines by von Neumann himself. But the reference/value distinction is absolutely fundamental, because we do not "calculate" the same thing by operating on references as by operating on values; in the language of G. Frege, the meaning (names) and the notation must not be confused if we want to
avoid serious logical errors! In linguistics, this is the distinction between signifier and signified made by F. de Saussure to organize the signs of language.
Another fundamental property of binary trees is that they allow any type of tree to be represented, as shown in Figure 3.8. We note that an intermediate node with its n2 sons has led to a composite node formed of n2 − 1 binary nodes, which inherit the properties of the initial node.
Figure 3.8. Binary trees representing any tree. For a color version of this figure, see www.iste.co.uk/printz/system.zip
To add to what has already been said about communication between equipment, it is fundamental to understand that every act of communication between an emitter and a receiver presupposes that both are located in classification trees particular to the systems that host them. They are therefore perfectly identified in their respective hierarchies (a little like the URLs that allow resources to be identified on the Internet), which allows us to associate specific processing with them, allowing the exchanges to be controlled. For example, a recipient can refuse to receive messages if the emitter is not registered as having the right to send messages to that recipient. This exchange can be represented as in Figure 3.9.
Figure 3.9. Communication in a space of names
The emitter/receiver equipment each has a local tree, positioned in the general tree of its system; this means that any element has a local name, and a complete name starting from the root of the system tree that the element is part of. As long as the exchange is carried out inside the system, we can be content with local names, but as soon as the exchange is made between systems, complete names must be used. We again encounter the fundamental inside/outside dichotomy which structures the identification of the constituent elements and the organization of the exchanges within and between systems. Every local name must have a corresponding complete name, a unique identifier that can exist in two forms: an internal form, generally in binary language, and an external form, in "plain English" in the user's language, whether the user is human or machine. This is the price of coherence and universal exchange, a fundamental aspect of a communication architecture.
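A minimal sketch of this local/complete naming convention (the system names and the "/" separator are our hypothetical choices, not the book's):

# Complete name = path from the root of the system tree; the local name alone
# is only unambiguous inside its own system.
def complete_name(path_from_root: list[str]) -> str:
    return "/".join(path_from_root)

# Hypothetical example: two sensors sharing the local name "T1".
a = complete_name(["systemA", "engine", "T1"])
b = complete_name(["systemB", "cooling", "T1"])
assert a != b   # across systems, only the complete names discriminate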
4 The System and its Invariants
4.1. Models
The "system" object itself can undergo evolutions, large or small, under pressure originating from three sources: (a) users who require new functions and/or other modes of operation, for new uses; (b) technological innovations that will allow the system to increase its capacities and provide the users with competitive advantages; and (c) the environment, with its uncontrollable hazards. The objective of these evolutions is to improve the system's service contract with regard to its users, or simply to survive. For evolutions of this kind to be possible, while avoiding re-doing everything, the system must have certain abilities that are independent of the desired evolutions, so that these can be easily integrated and constitute an operational evolution of the system. A little like the axioms of a mathematical theory, these abilities constitute the invariants of the system, which must not change under any circumstances, for fear of compromising the general equilibrium.
These invariance properties can be immaterial, applying to the entire system, such as compliance with a procedure, or can be conveyed by a particular piece of equipment. For example, the equilibrium between supply and demand for the electrical system is an invariant property. Each piece of production equipment is supposed to guarantee its production by contract. Compliance with a 220/380 volt potential, or frequency and synchronization at 50 Hertz, can be guaranteed by certain pieces of equipment, such as UPSs, which thus become the carriers of the invariance property. The circulation of information relating to the state of the system, which moves upwards in the form of messages to the operations center, must be standardized on a national or European scale. The reliability of the communication
system must be guaranteed, meaning that nothing must be lost or added, by an invariance property implemented by error correction codes (ECCs), which are simultaneously procedures that come directly from the theory of codes (Shannon, Hamming, Huffman, etc.) and tangible material entities, widely distributed at the very core of the system that they protect from hazards, for example the alpha particles produced by cosmic radiation. Without these redundancies, no network would work for more than a few minutes. The integrity of the information transmitted is the invariant of all communication systems, hence the fundamental importance of Shannon's second theorem1.
In an automobile, the chassis is a geometrical invariant common to a whole range of vehicles. It is now designed to be impact resistant. In a computer, the "logical design", as von Neumann named it, is an architectural invariant shared by all computers, from any constructor: all of them have a shared memory that can contain data and instructions, one or several processors2, and asynchronous input–output ports. J. von Neumann's architecture model materializes the fundamental property of the universality of modern computers, not in a theoretical sense like Turing's "paper" machine, but in very real technical objects that arise from human intelligence, initially with electronic tubes, triodes and pentodes, and today entirely transistorized in silicon.
Programming languages, when they are well designed, are invariants that allow the programs written in a language L1 to be executed and to provide the same results on any executing machine. The compiler guarantees the semantic invariance of what the programmer wrote when it transforms/translates the external language L1 into the internal language of a particular machine LMx. Compliance with the Internet HTTP protocol allows any client to connect to the Internet (each does have a unique identifier) and to interact with other clients and/or to access the resources on the network unambiguously; name engineering is a fundamental problem. With the rise of "connected objects"3, billions of elements will now find themselves interacting with each other. In the GPS constellation, or the future GALILEO, each satellite must be perfectly synchronized with all the others thanks to high-precision atomic clocks, to which the relativistic correction does, however, need to be applied to guarantee the precision of the emitted signal.
1 Much more so than the "law" known as that of requisite variety, coined by W.R. Ashby, because the theory of the noisy channel allows a constructive approach to the coding problem, whose solution's existence it guarantees.
2 The Tera-100 machine delivered by Bull to the CEA for digital simulations has approximately 240,000 Intel processors. Latest-generation video game consoles have several tens of processors.
3 Commonly called the Internet of Things, IoT.
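As an illustration of such a code (a classic Hamming(7,4) sketch of our choosing, not the book's own example), three parity bits protect four data bits and allow any single flipped bit, for example one hit by an alpha particle, to be located and corrected:

# Hamming(7,4): 4 data bits + 3 parity bits; corrects any single-bit error.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4            # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # the four data bits

codeword = encode(1, 0, 1, 1)
codeword[5] ^= 1                      # simulate an alpha-particle bit flip
assert correct(codeword) == [1, 0, 1, 1]   # the invariant is restored

The redundancy is what carries the invariance property: the transmitted information survives a hazard that would otherwise have falsified it.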
The list could easily be extended. The stability of certain aspects of the system is a necessary condition for its engineering. If everything moves, it is impossible to stabilize anything, and this will immediately affect the engineering teams, which will become permanently destabilized. In ICT, the stabilization of the hardware/software interface, a state of affairs acquired in the 1960s4, has allowed two opposing universes to be integrated: on the one hand, a material system subject to hazards and to the uncertainties of physics, including at the quantum level, and, on the other hand, a logical, deterministic universe, protected from the hazards of the environment, thanks to which the construction work of a programmer becomes simply possible (see Figures 3.3–3.5).
What is important, from the point of view of the system's life and of its potential for evolution, is to be able to distinguish between what is essential, what we cannot touch recklessly for fear of damaging it, and what is conjunctural, likely to be modified as a function of the events that will appear throughout the life of the system. In living organisms, the constitutive matter, atoms and molecules, is constantly being renewed, but the structure (the form, in the interpretation provided by Aristotle) remains stable, or grows back in the case of amputation in certain batrachians, for example. In the systemic approach, there must therefore be, at the heart of the approach, identification and a clear distinction of the essential aspects and of the conjunctural or non-essential aspects. Since a specific engineering corresponds to each of these two aspects, their organization and relations will necessarily be different; the disappearance of an essential element is an irremediable loss, whereas it is always possible to reconstruct an inessential element.
REMARK.– In biology, the genotype is separated from the phenotype, even if there are, obviously, interactions between these two processes; the phenotype is what results from the interactions between the organism's genotype and the environment of this organism. In a company, what we call the "company model" must at the very least structure the interactions with the environment (in PESTEL terms) in which the company evolves.
The point of connection between what is essential and what is inessential is made of physical–chemical mechanisms known as transducers (e.g. the conversion of a pressure or of a concentration into an electrical current; refer to the authors' website for more information about transducers). They provide the conversion from one world to another. Thanks to these transducers, well situated in the "machine" space, we can replace the electromechanical and/or hydraulic devices of the first
4 The interface of IBM 360 systems, and of the associated machine code, is an important date in the history of ICT; it has served as a model for many others, such as, for example, the byte code of the Java virtual machine in the 1990s.
servo-controls by devices composed purely of information, initially analog, then digital. These principles were also the basis for analog machines, because certain electronic devices carry out operations (such as differentiation or integration) physically, much more easily than digital machines. These transducers are critical devices for everything that falls under the category of sensors or effectors. In section 3.3, we saw how the mechanism that provides the correct coupling between the interacting elements is necessarily a fundamental invariant of the system. It must be defined a priori so that everyone can comply with its rules. As we have seen, these elements are more or less independent processes that each have their own dynamic, given the phenomena that take place within them. The methods of interaction between the processes, Inter-Process Communication (IPC) to use the correct term, must comply with the time schedules of the various processes. In the first steam machines, a worker (generally a child) opened the valves for steam to enter the piston (a part called the “slide valve”) at the right moment, until someone realized that this opening simply needed to be servo-controlled by the movement of the connecting rod to make this manual operation automatic. For around a century, all this machinery was mechanical, hydraulic and then electromechanical, before being progressively replaced, from the 1950s to 1960s, by a wide variety of devices known as command–control (CC, or C2) which are the basis of all modern systems. These devices make both intensive (a large quantity of calculations) and extensive (ubiquitous, wherever needed) use of computers insofar as the command–control information is now completely digitized, with suitable configuration. We can represent a modern system in a diagram, as in Figure 4.1.
Figure 4.1. Diagram showing the principle of modern systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
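As a toy illustration of the slide-valve servo-control just mentioned (our sketch, with invented figures), deriving the valve state from the crank angle removes the manual timing operation entirely: the coupling can no longer drift out of phase with the process it controls.

```python
import math

# Toy sketch: instead of a worker opening the steam valve "at the right
# moment", the valve state is computed from the position of the connecting
# rod, so admission is automatically in phase with the piston's motion.

def valve_open(crank_angle_rad):
    # Admit steam during the first half of each revolution.
    return math.sin(crank_angle_rad) > 0.0

for step in range(8):
    angle = step * math.pi / 4
    state = "open" if valve_open(angle) else "closed"
    print(f"crank angle = {angle:4.2f} rad  valve = {state}")
```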
In line with the definition, the system is plunged into the heart of an environment, characterized by a certain PESTEL profile which can vary over the time and space where the system is deployed. Each item of equipment is identified as an element of the system, with its own temporality (symbolized by a clock in the diagram), essential for successful control. Each element is linked to the C2 center by some information link (mechanical, hydraulic, electronic) whose service contract must comply with the equipment constraints, in particular those concerning possible saturations. In the case of war drones, this link can be a satellite link with a center located thousands of kilometers from the targets. The equipment can have a certain autonomy authorized by the center. The center must comply with the time schedules of the various pieces of equipment, which is usually described as “real time”, which simply means “absolute compliance with the schedule”, whatever that is: the millisecond or less (the cadence of integrated circuits is currently of the order of the nanosecond, 10⁻⁹ s), several hours, and so on. The diagram highlights the critical role played by the C2 center, which is the true director of the coherence of the whole. The center is the carrier of the system’s mission and the heart of the active unit (AU), in François Perroux’s interpretation, which integrates all the energy, human and information resources that the system requires to accomplish its task and avoid saturation in all circumstances. The equipment are resources attributed to the AU, depending on the capacity desired to obtain a given effect. In the sciences of action and decision, this is called “capability logic”, in other words, a logic based on an optimal use of the resources that define the capacity, for maximum effect, left up to the appreciation of the human operator who interacts with the system via the interaction language CRUDE, to which we will return in section 5.2.1. This is obviously a time-related, non-reversible logic (in mathematics, we would say it is neither commutative nor associative), because once such or such a resource has been consumed, it is no longer available, and the order in which resources are consumed is significant.

REMARK.– If a vehicle comes back to its starting point, for any reason, the fuel has been consumed irreversibly, and the stock is reduced by the same amount.

In the most advanced systems, the C2 center has a real “intelligence”, which is reflected in the acronym C4ISTAR (Command, Control, Communications, Computers, Intelligence, Surveillance, Target Acquisition and Reconnaissance) that characterizes it5; this point will be examined in more detail in section 8.1. It is itself structured like a system, in fact, a system of systems, because it is a coherent federation that must respect the specific rules of the various pieces of equipment that it coordinates. The center must only manage the information that is essential for the coherence of the whole and for the maintenance of this coherence
5 Refer to our two latest works: Architecture logicielle, 3rd edition, Dunod, 2011 and Estimation des projets de l’entreprise numérique, Hermes-Lavoisier, 2012.
over time, and nothing else. What is circumstantial to a piece of equipment must be considered inessential for the center, and should above all not be duplicated towards the center, even though it is essential from the point of view of the equipment. This is the entire problem of interoperability thus posed. We see the language aspect of the system taking shape, because each piece of equipment is characterized, from the point of view of the center, by states, associated with commands and messages, such as: equipment shutdown, ready, active, suspended (waiting for an event x, y or z), interrupted (waiting for repairs), etc. (a minimal sketch of such a state vocabulary is given after the footnotes below). The adjustment parameters, now numerous, allow adaptation to local conditions but add complexity, in other words, a large combinatorics that must also be organized in order to be controlled. In defense and security systems, the “language” of the system is embodied, for human operators, by a dictionary of graphic symbols, a language found in certain video games, an ideography6 that allows operators to understand each other independently of their natural languages, which become mere commentary, and to command–control the situation in which the system is operating. These aspects are detailed in Part 2 of this book, “A World of Systems of Systems”.

To conclude with a metaphor: biology certainly provides us with the most spectacular example of invariance with the structure of the genome, written in a chemical language of four nucleotide bases symbolized by four letters {A, T, C, G}, common to all living beings on the planet ever since the origin of life, billions of years ago. This being the case, the structure of the genome remains an immense question mark when we know that the largest known genome belongs to a small Japanese flower, Paris japonica, which contains 150 billion bases, in other words, 50 times the human genome. As for understanding the operational architecture, and its genesis, we will have to wait a little longer, given the immensity of the challenge and its complexity. Talking about a program (in the computer science sense) regarding DNA is extremely daring insofar as we do not know its operational semantics, and biologists do not really know what it codes for, hence expressions that are strange for a computer scientist, like “non-coding DNA”, that cause real unease7.
6 Refer to the Military Standard, Common Warfighting Symbology; also used by the NATO community; refer to http://en.wikipedia.org/wiki/NATO_Military_Symbols_for_Land_Based_Systems.
7 What biologists have called “the genetic whole” after the publication of Jacques Monod’s work is in the process of being revised; refer among others to the works by H. Atlan, M. Morange and J.-C. Ameisen.
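To illustrate the equipment-state “language” described above, here is a minimal sketch; the state names follow the text, but the transition table is our assumption, not the book’s, of the external vocabulary a C2 center might enforce.

```python
from enum import Enum

# Minimal sketch of the external states of a piece of equipment as seen by
# the C2 center, and of the transitions the architect chooses to allow.

class State(Enum):
    SHUTDOWN = "shutdown"
    READY = "ready"
    ACTIVE = "active"
    SUSPENDED = "suspended"       # waiting for an event x, y or z
    INTERRUPTED = "interrupted"   # waiting for repairs

ALLOWED = {
    State.SHUTDOWN: {State.READY},
    State.READY: {State.ACTIVE, State.SHUTDOWN},
    State.ACTIVE: {State.SUSPENDED, State.INTERRUPTED, State.READY},
    State.SUSPENDED: {State.ACTIVE, State.INTERRUPTED},
    State.INTERRUPTED: {State.READY, State.SHUTDOWN},
}

def command(current, target):
    """The center only accepts transitions defined by the architect."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = State.SHUTDOWN
state = command(state, State.READY)
state = command(state, State.ACTIVE)
```

Whatever the internal complexity of the equipment, the center reasons only over this small, stable vocabulary, which is precisely what keeps the combinatorics controllable.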
But practices in the engineering of very large systems can certainly enlighten us, and perhaps even provide some ideas8. There is a strong analogy between biological engineering, which is still poorly understood, and complex systems: to organize complexity, the starting point must be simple objects that are elementary, but not simplistic, with properties that are perfectly known and mastered by the engineering actors, and then to integrate them in a certain manner as a function of the end purpose of the system. The computing stack mentioned previously is today the best example of the way in which this fundamental composition operator/operation of artificial systems must be organized. What is truly extraordinary in engineering terms is the reliability of the construction process of living beings which, from a single cell, produces an organized set of 75–100 thousand billion cells that renew themselves at the speed of a thousand billion cells per day – an absolutely unreasonable efficiency in the eyes of the physicists who spent serious time considering the question, such as E. Schrödinger in his short work What is Life?, M.-P. Schutzenberger, a computer scientist who was also a doctor, or more recently R. Laughlin in A Different Universe, previously cited in Chapter 3. It is a highly organized process, perfectly Newtonian, in which chance, in the mathematical sense, no longer has a place.

8 H. Atlan expresses himself in a slightly similar way in his last work, Le vivant post-génomique, Odile Jacob, 2011; refer to Chapter 4, p. 134. Refer especially to P. Kourilsky, Le jeu du hasard et de la complexité, Odile Jacob, 2015.

4.2. Laws of conservation

To continue the analysis of the invariant properties of systems, it is a good idea to look at the laws that are at the very foundation of the immense success of European physics, from Copernicus, Galileo and Newton onwards, if we compare it to what advanced civilizations such as classical Islam or China achieved. The most fundamental laws of physics, sometimes called super laws or phenomenological laws, are the conservation laws that emerged progressively throughout the 18th century. They were something radically new, whose observation was made possible thanks to the precision of measurement instruments, such as Roberval’s balance, the Vernier scale to measure distances or angles, or the high-precision clocks used to measure time and, above all, longitude for transoceanic navigation. Without the precision of the measurements carried out by T. Brahe, Kepler would never have discovered the elliptical movement of the planets, in this case, Mars; a fortiori, the abnormal advance of Mercury’s perihelion,
and even less so the invariance of the speed of light posed as an axiom of the theory of relativity. One of the first laws is the conservation of the quantity of movement (momentum), observed in the laws of impact, which strongly encouraged very clear ideas to be established, from the point of view of the description of phenomena, about the notion of a vector, with the parallelogram of forces, etc.9. There would be others, like the Principe de moindre action of Pierre Fermat, Christiaan Huygens and Pierre-Louis Maupertuis; the Conservation de la masse of Antoine de Lavoisier, with his famous “nothing is lost, nothing is created, everything is transformed”; then that of energy with Carnot, Clausius and Gibbs, and especially its formulation as an equation by Lagrange and Hamilton in differential form (the sum “kinetic energy + potential energy” is constant). The discovery of atoms and of nuclear reactions clearly shows fundamental principles at work, such as Einstein’s famous E = M × c², which relates the gravitational mass (meaning the atoms themselves, and not the “inertial” mass, which depends on the speed of the body in movement) and energy, followed by the no less famous W = h × ν of Planck, which this time relates energy, and therefore mass, to the frequency of a wave; hence the wave mechanics of Louis de Broglie, based on the equivalence M × c² = h × ν. These are discoveries that totally overthrew our view of the world, with the literally “incomprehensible” wave/particle duality, to paraphrase R. Feynman. Indeed, there are not many of these laws, but without them, nature is literally incomprehensible10. The recent discovery of the Higgs boson results from the application of these conservation laws at the atomic/subatomic scale, which requires means of observation that are both enormous and perfectly controlled, such as the LHC at CERN, and absolutely massive data processing, because an experiment at the LHC creates hundreds of billions of events to analyze, inaccessible using simple human means. The mathematical analysis of these conservation laws, established by the German mathematician E. Noether11, has shown the importance of the notions of symmetry that are associated with them, a symmetry which cannot be “broken” without an energy counterpart. By reasoning in this way, Paul Dirac demonstrated the existence of antimatter, even before it was effectively discovered, by applying the conservation principles in such a way as to maintain a coherent mathematical formulation of the laws of physics.

9 For more information, refer to E. Mach, La mécanique, and R. Dugas, Histoire de la mécanique, reprints with J. Gabay.
10 For a non-technical presentation of these laws, refer to the work by C. Gruber and P.-A. Martin, De l’atome antique à l’atome quantique, Presses polytechniques et universitaires romandes, 2013, in particular, Chapter 5, “Les grands principes de conservation, symétrie et invariance”.
11 Refer to the work by Y. Kosmann-Schwarzbach, Les théorèmes de Noether : invariance et lois de conservation au XXe siècle, Éditions de l’École polytechnique, 2011.
These principles have played a fundamental role in the discovery of the famous boson12, which “explains” the cohesion of matter: why matter “holds together” and forms structures that are stable over time. What can we draw from all this that will be useful for a deeper understanding of the notions of system and information? To do this, we need to point to an important school of thought, initiated by physicists, biologists and computer scientists associated with the work of the Santa Fe Institute, founded in 1984. The theme relating the various domains is the “physics of information”, meaning the study of information, in the broad sense, from a physical point of view, like that of the conservation laws. See, for example, the proceedings of the “Complexity, entropy, and the physics of information” workshop of June 1989, published by Wojciech Zurek in 1990, or those of the symposium of October 27–31, 1992, of the Académie Pontificale, “The emergence of complexity in mathematics, physics, chemistry and biology”, published in 1996. Above the physical part of our material or immaterial objects (such as fields), there is the information, in the broad sense, that is necessary for them to work correctly, and which will determine the action of the various communities. Hence the slightly provocative formulation given by the physicist John Archibald Wheeler13 (famous for popularizing the term “black hole”), “It from bit”, which should be understood as:

“[…] every it – every particle, every field of force, even the spacetime continuum itself – derives its function, its meaning, its very existence entirely – even if in some contexts indirectly – from the apparatus-elicited answers to yes-or-no questions14, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom – at a very deep bottom, in most instances – an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.”

12 Refer to J. Baggott, Higgs: The Invention and Discovery of the God Particle, Oxford University Press, 2012; translated into French as La particule de Dieu, published by Dunod; with homage to Emmy Noether.
13 See his text, Information, Physics, Quantum: The Search for Links; refer also, for those who are not afraid of quantum physics, to Quantum Theory and Measurement, Princeton University Press, 1983, and his article “Law without law”.
14 A formulation with which John von Neumann would probably not have agreed, because in a complex universe, uncertainty or ignorance (refer to the Black Swans) is everywhere; logic is modal, at least. From the 1980s onwards, computer scientists have developed so-called “temporal” logics to take this more complex and more unstable reality into account.
We have known, since the work of C. Shannon and N. Wiener, that there is a deep analogy between information and organization, between entropy and disorder, highlighted by the similarity of L. Boltzmann’s and C. Shannon’s formulae, an analogy that is reinforced by the MEP (Maximum Entropy Production) principle, sometimes called the 3rd principle, which is a principle of organization; this being the case, analogy does not mean causality, and it is therefore wise to remain prudent. The physicist P. Bak has provided quite a captivating vision of the relations that appear to exist between these various notions, through what he calls self-organized criticality (SOC), in his work How Nature Works – The Science of Self-Organized Criticality. Without information on, and control of, the level of saturation of the feedback loop, the behavior of systems is incomprehensible; and, as will be seen in section 4.2.2, without a repair procedure, without information about the state of its constitutive elements, the lifetime of the system, measured by an availability, is limited. We will return later to this aspect of complexity. But we can already say that a semantic invariant is necessarily preserved, which formalizes the intentional actions of the user of a system (we could say, by linguistic analogy, its “performative” aspect15), until they are implemented in transducers which in fine “execute” the user’s intention in the real world; this invariant is carried through a succession of languages, integrated with each other, of which the stack of interfaces present in all computing equipment provides an excellent illustration. To progress, we need to return to the definition of the system as a technical object, as G. Simondon explained it, with the triplet {U, S, E} (Figure 4.2).
Figure 4.2. The system and its invariants. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The technical object can be of any size: either like a mobile phone, which manages a power of a few watts, or like the electric system, which manages a power of the order of 100–110 gigawatts (in other words, the equivalent of approximately 5,000 Hiroshima atomic bombs per hour, that is, a little more than a bomb per second, or, more prosaically, the simultaneous supply of 100 million electric irons).

15 Refer to J.L. Austin’s work How To Do Things with Words, 1962; French translation from Le Seuil, Quand dire, c’est faire, 1970.
In order to guarantee the coherence and the use of the technical object, the system must simultaneously remain (a) usable and attractive to its users, to whom it provides a competitive advantage; and (b) understandable for the engineering teams who ensure its development, maintenance and operation, in other words, three activities with very different skills and experience profiles that are executed in parallel. Here, we see N. Wiener’s problem in its entirety, at the very beginning of his reflection on what would become cybernetics, because it is necessary to control the coherence of the various underlying flows. The quality system is implemented precisely so that this coherence and this utility are guaranteed, as far as possible, throughout the system’s life. The quality system is in some way the immune system of artificial systems; its challenge is to fabricate something reliable from non-reliable elements, by playing on well-positioned redundancies thanks to the review mechanisms that are said to be “independent” (see our previous works on the engineering and integration of projects). The set {technical object; U, S, E} is plunged into a socioeconomic environment that we have characterized by the acronym PESTEL, an environment of which it is itself a part. To correctly use the system, the users must have a sufficient level of knowledge, known as the “grade” in English-speaking countries. In France, approximately 80% of a given age range has been educated to Baccalaureate level, and therefore, at least in theory, has a certain level and amount of skills. It is quite simply a permit to perform actions x, y or z that require a certain level of knowledge, to avoid damaging one’s environment or oneself. To correctly carry out engineering assignments, the corresponding community, which is threefold (see Figure 4.2), must have sufficient maturity, in skills and in experience; for engineering, this is Baccalaureate + 5 years. Engineering makes available to the community of users a product/system which satisfies a service contract that we have characterized by the acronym FURPSE, under acceptable economic conditions, because it is necessary to be able to pay for it, characterized by the acronym CQFD (the basic “energy quantum” of the project) and by the TCO (the integral of these project “quanta” over the entire lifeline). By reasoning in the same way as H. Simon in The Sciences of the Artificial, Chapter 8, “The architecture of complexity”, we can refine Figure 4.2 as follows (see Figure 4.3). In the diagram, the users ask “what is this used for, in the environment in which I am acting?”, whereas the engineers must ask “how is it made, how can it be repaired in the event of a breakdown, etc.?”, in their environment of constraints and hazards, in order to respond to the requirements of the users, meaning they must seek to control all of this while preserving the semantic invariant.
Figure 4.3. Architecture of complexities
In “energy” terms, three types of energies must be considered, with which three types of complexities will be associated, and three architectures will organize these complexities:

– that required for learning by the actors, so that they become able to use the system without putting their lives and/or their environment in danger. The corresponding effort is correlated with the quantity of information associated with the flow of messages that must be managed by the user: this is Shannon’s QI measure associated with these messages (a minimal illustration follows this list).

REMARK.– With this measure, an infrequent message, with a low probability of occurrence, carries a large quantity of information; it will require a long learning time, because its low occurrence means that it will rapidly be forgotten;

– that required for the correct operation of the material part of the system, and more particularly for the management of the interactions at the very heart of the system. We will correlate it with the processing power required to ensure that all the transformations carried out by the “machine” are correct, in other words, the calculation capacity that the system must have, in the broad sense, via its transducers, to accomplish its mission. This is the measure AC, for algorithmic complexity, used for the first time by J. von Neumann to count the number of operations required to solve a problem;

– that required for the production of the text that defines the set of procedures to be carried out, in other words, programming in the broad sense, either automatically or manually, by the engineering actors. We will correlate it with the Kolmogorov–Chaitin measure of textual complexity, TC.
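A minimal numerical illustration of the first measure (the message probabilities are invented for the example): the self-information QI = −log₂(p) of a message grows as its probability falls, which is why rare messages weigh so heavily on learning.

```python
import math

# Shannon's quantity of information QI = -log2(p): the rarer the message,
# the more bits it carries, and the harder it is for an operator to retain.

messages = {"routine status": 0.90, "minor alarm": 0.09, "major alarm": 0.01}

for msg, p in messages.items():
    print(f"{msg:15s} p = {p:.2f}  QI = {-math.log2(p):5.2f} bits")
# routine status: ~0.15 bits; minor alarm: ~3.47 bits; major alarm: ~6.64 bits
```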
Regarding more particularly the interactions that will be organized and managed by the information system, we can complete the general situation presented in Figure 4.1 with Figure 4.4.
Figure 4.4. Control of flows and command–control. MMI stands for man-machine interface. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In this diagram, the information system only sees what the system architect has decided to let it see, so that the control and command–control are carried out optimally, taking into account the service contract. Energy comes from diverse origins: wave, heat, fluid flow, etc. If we wanted to be perfectly rigorous, and consistent with the energy logic of the standard model of particles, the material “matter” part would itself consist of fermions, meaning the protons, neutrons and electrons that are the constituents of matter at our scale, and the “energy” part would consist of bosons, massless particles such as the photons of electromagnetic energy, which ensure their cohesion. In a chemical reactor, like a combustion engine, matter is recomposed at constant mass, liberating heat energy; in a nuclear reactor, matter is transformed by the fission of atoms of uranium/plutonium, which, as we all know, liberates a great deal of energy per unit of mass, recovered essentially in the form of heat radiation. The error rate and/or the breakdown rate are added to each of the three complexities, which is logical. A man–machine interface will be all the more difficult for users to master the more complex it is, in the usual sense of the term, that is, with many messages and many relationships between the messages. The same can be said for the machine and for engineering. The user will make all the more errors because they will
only master the “grammar” of the correct use of these messages more or less effectively16, and in particular that of rare messages, which are by definition difficult to memorize. To understand the magnitude of the phenomena related to the scale of the processes at work that we want to servo-control, and to measure the potential consequences of errors, let us look at the case of the electric system (ES), which produces a peak power of approximately 100–110 gigawatts. If the material part of the transport system has a yield of 99%, this means that the ES must be capable of compensating, through suitable mechanisms, for a power of approximately 1 gigawatt (one unit of a nuclear power station!) which is wandering around the network, in thermal and/or mechanical form, and which may accumulate! If, for any reason, control is not implemented, or is implemented badly, we very quickly have a potential Chernobyl given the energies at stake. Hence the critical importance of the partitioning implemented by the architecture of the transport network. At the least incident, at the least imbalance, the operator in charge of control may no longer be in a position to act on physical reality, even if the information system continues to function correctly from its own point of view. In the diagram in Figure 4.4, the correct interpretation is that the energies used by the information system, the command–control (ICS) in the wider sense of C4ISTAR systems, have no common measure with those released in the controlled material and/or human system, which can give the incorrect impression that there is no risk, if the operators of the ICS have unfortunately forgotten reality. There is a real lesson to be learnt here for the daredevils of the financial system who circulate flows of hundreds of billions of euros per millisecond (see the excesses of “high-frequency trading”, HFT17), a time scale that totally escapes all regulatory control by human actors, in addition to rigging the rules of the game practiced on the markets, symbolized by Adam Smith’s famous “invisible hand”! Unfortunately, we know what the result is. This is what we will specify in sections 4.2.1 and 4.2.2.

4.2.1. Invariance

As we have already said, without going into details, control of any kind implies an energy expenditure that has a detrimental effect on the effectiveness of the system.

16 By way of an example, refer to NATO message systems; you can find many references by typing in: “Adat-P3 military messaging”.
17 Refer to http://en.wikipedia.org/wiki/High-frequency_trading for an introduction to this new means of speculation; also refer to J.-F. Gayraud, Le nouveau capitalisme criminel, Odile Jacob, 2014.
James Watt’s centrifugal governor for the very first steam engines consumed energy which, as a result, was no longer available to make the machine move, but which prevented the machine from exploding due to overheating. The centrifugal governor therefore plays the role of an insurance which guarantees that, in the event of an imbalance, we will have sufficient resources to return to the nominal situation: remaining in a state of balance. The relevant consideration in systemics is to obtain a deep understanding of the nature of the relationship between the energy that the governor is able to mobilize in order to correct any imbalance that may arise (one which could prove damaging or destructive to all or part of the system, or pose a risk to its users and/or to the engineering teams) and the counter-measures most suited to the situation. We all experience this type of situation when driving a vehicle. When a driver is faced with an unforeseen obstacle, they have a choice of two strategies: (1) stopping before the obstacle, if possible; or (2) deviating around the obstacle, as indicated in Figure 4.5. In case (1), the vehicle’s brakes need to be able to absorb the kinetic energy of the vehicle over a distance less than or equal to the distance to the obstacle. The energy that can be mobilized comes from its brakes, which are only effective if the wheels are not blocked. In case (2), the driver deviates from the initial trajectory by modifying the direction and grip of the tires, possibly by slowing down, using the brakes to prevent the car from skidding or swerving; in this last hypothesis, the driver loses control of the vehicle.
Figure 4.5. Control of a trajectory with deviation around an obstacle. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Figure 4.6. Control of a trajectory which avoids the obstacle. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Several general lessons can be drawn from this situation:

– From the moment when the obstacle is seen to the moment when the effect of the decisions made is felt, there is always a period of latency, during which the system (here, the vehicle) follows its nominal trajectory. If this latency time, which depends both on the governor and on the inertia of the system in movement, is not carefully controlled, the shock can become unavoidable.

– The energy injected into the system to carry out the correction must be compatible with the structural constraints of the system. In the case of the vehicle, if the braking is too sudden, the wheels become blocked and therefore lose their grip, which is almost the same thing as cancelling out the braking; hence the advantage of ABS devices. If the change of trajectory is too sudden, the centrifugal force, which depends on the radius of the new trajectory, can be enough to destabilize the vehicle, which will also translate into a loss of grip, with potentially fatal consequences. If this control energy is not suited to the dynamic context of the system at the instant T, loss of control is inevitable. The system must always have resources in reserve that can be mobilized without delay to guarantee a latency ΔT that is compatible with the phenomenon that needs to be controlled. For this, it is necessary to measure and quantify the capacities18 (see the numerical sketch after Figure 4.7).

– The system situation, that is, its state, is eminently variable. In the case of a vehicle, the dynamic is a function of the mass, and therefore of the number of passengers and/or the weight of the luggage on board. This means that no plans can be made in advance and that it is necessary to adapt to the spatio-temporal situation of the system in real time, integrating the latency time. The capacity for adaptation of the system depends not only on the knowledge of its internal state at the instant T but also on the environment in which it evolves, in order to prepare alternative plans that will always need to be adapted to the context of the situation.

– In the event of an incident/accident, a new obstacle appears, which can set off a cascade phenomenon that amplifies the initial problem (take the example of pileups on motorways) and potentially leads to a general blockage of the system; or what economists, highly inappropriately, call a systemic breakdown, since, the controls having been removed, the system is destined to become uncontrollable. The spatial organization of the system must be such that the zone contaminated by the accidental phenomenon is confined and temporarily isolated from the rest of the system. In the case of the electric system, the organization in autonomous regions provides this guarantee. We therefore see that there is a direct relationship between the latency of the control loop and the “size” of the “energetic” region whose hazards must be controlled.
18 Hence, the terminology, a little strange, of “capability logic” that is used in the environment of systems where this problem exists; the invariant to guarantee is the capacity of the system, in terms of resources, to interact to carry out its assignment.
A region of this kind determines an autonomous or autonomic loop (to use a term introduced by autonomic computing approaches), whose size is determined by the latency (or the inertia) of the loop.
Figure 4.7. Loss of control of the trajectory. For a color version of this figure, see www.iste.co.uk/printz/system.zip
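To make strategy (1) concrete, here is a hedged numerical sketch (the figures are invented): stopping is feasible only if the deceleration required to absorb the kinetic energy within the available distance stays below what the tire grip can transmit without locking the wheels.

```python
# Stopping before the obstacle: the kinetic energy m*v^2/2 must be absorbed
# within the available distance d, i.e. the required deceleration v^2/(2*d)
# must stay below the maximum the tires can transmit, roughly mu*g.

MU, G = 0.7, 9.81          # grip coefficient (dry road, assumed) and gravity

def can_stop(speed_ms, distance_m):
    required_decel = speed_ms ** 2 / (2 * distance_m)
    return required_decel <= MU * G

# At 90 km/h (25 m/s):
print(can_stop(25.0, 50.0))   # True: 6.25 m/s^2 needed, ~6.87 available
print(can_stop(25.0, 40.0))   # False: 7.81 m/s^2 needed -> deviate instead
```

When the test fails, the only remaining strategy is deviation, with the grip budget now shared between braking and cornering, which is exactly the trade-off discussed below.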
The metaphor of the trajectory can greatly help the system architect-designer to plan for the types of situations that they will have to face, without, of course, knowing either the place or the date of the expected events. Two types of trajectory can be envisaged, in increasing order of difficulty of adaptation: (1) trajectories for avoidance and/or deviation, possibly involving stopping; and (2) pursuit trajectories, when the difficulty encountered obliges a change of course from plan A to plan B, whose dynamic will, by definition, be different. This can be schematized as in Figure 4.8.
Figure 4.8. Control with the change of trajectory and avoidance of an obstacle. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In a situation where one has to avoid an obstacle, the state of the fixed or mobile obstacle needs to be evaluated before selecting the most appropriate strategy, either stopping or deviation, while ensuring that we have the resources required to carry out the selected action. In the case of stopping, an evaluation needs to be made of the distance to the obstacle and of the energy that can be mobilized for braking. In the case of deviation, it is necessary to choose where the deviation will, in fact, pass, depending on the dynamic parameters of the deviation trajectory, namely the speed, the radius of curvature and possibly the torsion (in 3D), taking into account the constraints specific to the system, as well as the possibilities offered by the environment. Once the obstacle has been bypassed, all the trajectory parameters need to be recalculated to see whether we can return to the nominal situation, as if there had been no obstacle, or one must announce the consequences of this deviation on the lifeline of the system and on the service contract. In the context of a pursuit, the situation is more difficult to analyze, although it is intuitively quite obvious, because this time it is necessary to control the movements of two trajectories, that of stopping at A and that of starting off at B, in such a way that the connection is made smoothly. For this, it is necessary to calculate a transitional trajectory, in such a way as to progressively organize the move from plan A to plan B. “Plan B” cannot start instantly. It is therefore necessary to plan for an increase in power whose “energy” will come from the deceleration of A, at least in theory. At the moment of contact, the parameters of the two trajectories, the transition A→B, and B, must be correctly aligned to avoid impacts. One of the most famous examples of this type of realignment is the case of the Apollo 13 mission, following the incident that caused the cancellation of the initial mission19. This type of trajectography is exactly what is encountered in completely general and relatively frequent situations, which is what makes the metaphor useful, because it allows us to “see” the complexity of the situations that need to be managed. For example, when the manager of the RTE transport network must shut down a power station for a given reason, or when one stops by itself (e.g. a drop of the control rods following an incident), while maintaining the equilibrium between supply and demand, he is exactly in this situation. Another example: a company that must deeply rethink its offering of products and services, here again for a given reason (e.g. globalization, merger and acquisition, digital impact (transform or perish)), is again in a configuration of trajectories, except that in this case the general management of the company must manage an organizational and human problem that we do not know how to describe with clear formulae, as we did in the example of the ES. The metaphor of trajectography can provide one last service: an understanding of the profound nature of control phenomena. There are two types of control energies.

19 Refer to https://en.wikipedia.org/wiki/Apollo_13.
An initial energy corresponds to the control of a nominal trajectory, with no specific obstacle. However, there are operations that need to be carried out depending on the position of the system on its lifeline. The road can rise or fall, turn right or left; there can be signs giving instructions to the driver, who must consequently be educated. In a modern car, we have classic devices such as an automatic clutch, a speed regulator/limiter, an ABS braking system, etc. The control energy is used to maintain the engine at its level of optimal yield, in compliance with the highway code. A second energy allows us to react to risky situations in which the nominal trajectory must be abandoned in favor of another trajectory which allows the risk (which is, by definition, unforeseeable) to be compensated for. For example, a complete stop of the vehicle requires a device that must completely absorb the kinetic energy of the vehicle, without killing the occupants (from 10 g and above, the lungs and internal organs explode). In case (1), we can always remain in control with suitable servo-control; in case (2), this is not certain. In any system, we will find energy bands of different levels:

– Nominal/controllable: the system lives its life, in its environment, and all is for the best in the best of all worlds.

– Non-nominal/controllable (NNC): we have enough resources to execute the appropriate action, via suitable procedures planned in advance, and to maintain the system in its current working state, with no catastrophic consequences on the service contract. This NNC case corresponds to a situation for which we have a theory, a certain model, and we can act accordingly.

– Non-nominal/non-controllable (NNNC): we do not have enough resources to execute the appropriate action which, moreover, does not feature in the risk management plan. The system will be damaged, or destroyed in extreme cases (Air France flight 447 Rio–Paris in June 2009, Chernobyl, Fukushima, etc.). The NNNC case corresponds to a situation where we have no theory, but we can at least hope to learn lessons in order to avoid being surprised a second time, so long as we have deployed the correct means of autonomic management.

According to the theory of dynamic systems, this type of situation can be represented using “potential” functions. Here, we will only give an intuitive representation, because in the world of real systems, very few are sufficiently well-defined for it to be possible to model them with this theory, in particular as soon as there are human operators in the loop, and, more generally, a degree of hazard, for example, the intensity of an earthquake, the resonance modes of a machine, etc. However, some very interesting qualitative reasoning can be carried out.
The idea is to represent the “potential” of the system through a landscape of valleys and hills, as shown in Figure 4.10. The valley bottoms and the summits are the singularities of the potential. The points of equilibrium and instability are represented by balls. When the ball is at the bottom of a valley, it stays there. When it is on a summit or on a flat, the slightest disturbance is likely to make it fall into the neighboring valleys. In this representation, what is important is the number of singularities, and their forms, which can be expressed using polynomials. In case no. 2 of Figure 4.9, we have a polynomial of degree 4, in other words, a maximum of three singularities, which corresponds to the number of zeros of its derivative (the extrema).
Figure 4.9. Different types of potential energies
The most interesting case for our subject of interest here is the potential known as “Mexican hat” because it allows us to qualitatively reason about the above situations. The ball in B1 (Figure 4.10) corresponds to an optimal yield situation that we know how to control. If the ball falls to the right or to the left, we always have enough energy to bring it back to position B1, compensating for the fall h1.
Figure 4.10. Potential known as “Mexican hat”. For a color version of this figure, see www.iste.co.uk/printz/system.zip
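This qualitative reasoning can be mimicked on a simplified one-dimensional stand-in for such a potential (our sketch, not the book’s model): the equilibria are the zeros of the derivative, and the sign of the second derivative separates the stable valley bottoms (B1-type positions) from the unstable crests (B2-type positions).

```python
import numpy as np

# A quartic "hat" potential V(x) = x^4/4 - x^2/2. Its equilibria are the
# zeros of dV/dx = x^3 - x; the second derivative classifies each one.

V = np.poly1d([0.25, 0, -0.5, 0, 0])     # x^4/4 - x^2/2
dV, d2V = V.deriv(), V.deriv(2)

for x in sorted(dV.roots.real):
    kind = "stable (valley bottom)" if d2V(x) > 0 else "unstable (crest)"
    print(f"x = {x:+.2f}  V = {V(x):+.3f}  {kind}")
# x = -1.00 stable, x = +0.00 unstable, x = +1.00 stable
```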
A position such as B2 is unstable, because if the ball that moves due to an imbalance is not caught in time (the problem of the latency period), it will reach a threshold beyond which the system is no longer capable of mobilizing sufficient energy to compensate for the fall h2. The situation then becomes catastrophic. In the case of the electric system, there are optimal positions of equilibrium in which all the energy produced is consumed as closely as possible to the production points, and the surplus (e.g. nuclear electricity) is sold to our neighbors by the RTE trading system. If the supply exceeds the demand, the system is no longer at its optimal level but the service contract is satisfied. If the demand exceeds the supply, the service must be degraded to satisfy the equilibrium between supply and demand, or external energy must be bought via the trading system. In situations such as the equilibrium between supply and demand of the electric system, there is a dissymmetry between supply and demand. If supply > demand, there is always a position where the service contract is respected, even without resorting to trading. If, on the contrary, supply < demand, there is necessarily a degradation of the service contract. The equilibrium can only be re-established by putting users “in the dark” and/or by buying energy from outside the system. In this case, the potential must reflect this dissymmetry, with a potential function of odd degree. Figure 4.11 shows different aspects of a potential of degree 3, including the C3 case, which corresponds to the situation of the electric system with its intrinsic dissymmetry.
Figure 4.11. Evolution of a C3 stable potential towards an intrinsically unstable C1
In reality, the form of the potential can change over time and as the situation progresses, taking into account the hazards, which comes down to making the coefficients of the polynomial functions of time or, more generally, of an evolution parameter t that takes into account the internal state of the system resulting from the lifeline.
Namely, in the case of a potential of degree 3, the function would be of the following type (we can always make the x² term disappear, which does not change the morphology):

P(x, t) = a(t)x³ + b(t)x + c(t)
We can also change the degree of the polynomial that describes the morphology, which amounts to analyzing the dynamic parameters of the singularity at the time of the change of morphology. A potential that is intrinsically stable (a potential of even degree) can evolve towards an unstable potential (a potential of odd degree), or an intrinsically unstable potential (case C1), as shown in Figure 4.11. If the dynamic parameters of the trajectory (speed, curvature, torsion, in other words, the derivatives of first and second order) are not continuous, there will be potentially destructive “jolts”, impacts, clicks and/or vibrations. In the most common case, sought out by architects of complex systems, there will be evolutions of the potential, as indicated in Figure 4.12, where, from a single intrinsically stable valley, we find ourselves in a system with two valleys (if the potential is of degree 4), each of which reveals a different stable optimum, depending on the selected strategy. In systems engineering, preference can be given to the duration of the construction, which we wish to be as short as possible, or, on the contrary, to an optimum expenditure, which amounts to giving preference to the TCO (Total Cost of Ownership). Here again, qualitative reasoning can reveal thresholds at which the strategy can be changed given the observed situation. Currently, release onto the market as early as possible is considered to be the best strategy, despite being the most risky, which means that, starting from a prudent strategy where expenditure is strictly controlled, we can allow ourselves to shift to a more offensive strategy in terms of time period, once we are sure that the expenditure will not spiral out of control.
Figure 4.12. Doubling of a stable potential valley into two stable valleys
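The “doubling” of Figure 4.12 can be sketched in the same spirit (parameters invented): in a degree-4 potential, letting one coefficient cross zero splits the single stable valley into two, without changing the degree of the polynomial.

```python
import numpy as np

# In V(x) = x^4/4 + b*x^2/2 the single stable valley at x = 0 splits into
# two stable valleys at x = +/-sqrt(-b) when the coefficient b(t) crosses
# zero: a change of morphology of the potential, not of its degree.

def stable_valleys(b):
    dV = np.poly1d([1, 0, b, 0])        # dV/dx = x^3 + b*x
    d2V = dV.deriv()                    # 3*x^2 + b
    real = [r.real for r in dV.roots if abs(r.imag) < 1e-9]
    return sorted(x for x in real if d2V(x) > 0)

print(stable_valleys(+1.0))   # [0.0]          one valley
print(stable_valleys(-1.0))   # [-1.0, 1.0]    two valleys
```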
What we have referred to as plan A and plan B correspond, in fact, to morphological situations that are prepared in advance. This corresponds in the real world to situations where the management of a company permanently maintains various types of strategic plans in order to react in the best possible way to the hazards of the economic “war” arising from the globalization of the economy, where predatory behavior is seen in its pure and simple state. In this case, the work of operational directors consists, when it is decided to go from one plan to another, of managing pursuit trajectories, known in the jargon as “strategic alignment”, avoiding jolts if there is a discontinuity in the dynamic parameters of speed, curvature and torsion. In passing, this demonstrates why weak signals are of critical importance (these are magnitudes of second order, often hidden in the background noise, and therefore by definition difficult to perceive without suitable measurement and/or observation instruments, including human ones) in order to manage the competitive advantages of any system in the best possible way. This also shows why, in the case of an electric system, making every user an energy producer is not necessarily a good strategy, given the current dynamic of our system. Causing a latest-generation giant oil tanker or container ship (400 meters long, 40–50 meters wide, with thousands of containers piled up over 8–10 levels) to change course too abruptly can completely disrupt the structure and cause the ship to sink. The metaphor of the trajectory allows us to take stock of the control energies that are applied and to better represent the dynamic phenomena, and the hazards, of which systems are the site.

REMARK.– The theory of dynamic systems in its current state20 is not easily applicable to the systemics envisaged in this book, in the way that it is to the digital calculation codes used in the simulation of certain physical phenomena that can be approximated using continuous functions. All the systems envisaged here, due to the importance of ICT, are discrete systems whose state changes are carried out by discrete transitions, modeled by atomic transactions (in other words, a quantum of action), with the theory of automata. A non-trivial adaptation is necessary, and it requires profound knowledge of the domain of systems engineering before any reasonable attempt at mathematization; the fundamental question is the approximation of phenomena that are intrinsically discrete by continuous functions. However, a qualitative analysis by simulation, despite its limitations, remains possible, on the condition that the required skills and knowledge are present.

20 For an initial introduction, refer to R. Thom, Stabilité structurelle et morphogenèse (difficult); V.I. Arnold, Catastrophe Theory; a classic: J.-M. Souriau, Structure des systèmes dynamiques (and his website); P. Bak, How Nature Works. Especially refer to the work of J.-L. Lions and his numerous pupils. Also refer to the work of J.-J. Slotine at MIT, Applied Nonlinear Control.
4.2.2. System safety: risks

To end our search for the fundamental invariants of systems, it becomes obvious that this invariance also needs to be sought among the human actors themselves, the users and the engineering teams that give “life” to the system. In fact, the human actor shares their time between, on the one hand, (a) the time dedicated to learning and to maintaining the knowledge required for their specific action, or to correcting errors and/or misunderstandings, and, on the other hand, (b) the time dedicated to interactions with the members of their community to ensure that their individual action remains coherent from the global point of view of the system, and in compliance with the service contract. Quite obviously, they must also rest and relax, to avoid the stress phenomena whose devastating effects on the rate of human errors are well known (see Figure 4.13). If there is divergence from this, sooner or later there will be a disaster, as was seen at Chernobyl.
Figure 4.13. Rodin’s thinker studying Quantum Physics for Dummies
Knowing that human actors make mistakes, individually as well as collectively21, and that the “machine” itself will encounter hazards related to the physics of its constituents (wear of parts, fatigue, etc.) and to the physics of the environment, we see very well that the fundamental invariant will be based on the ability to compensate for the errors of all kinds that will unavoidably appear during the life of the system. Here, we come across the point of view of R. Hamming, who underlines the attention that the architect must pay to their mistakes. A little stress over a limited time duration reinforces vigilance and lowers the error rate (this is a pure system safety concept, well known in fault-tolerant architectures, which tolerate errors up to a certain threshold where the system saturates, which is equivalent to a “breaking of symmetry”).

21 Refer, in particular, to the impressive analysis of the Challenger space shuttle disaster: D. Vaughan, The Challenger Launch Decision – Risky Technology, Culture, and Deviance at NASA, The University of Chicago Press, 1996.
Errors are by definition unknown, and the consequences of their occurrence are of a magnitude that we cannot measure, while remaining certain that they exist. At best, we can compile statistics about the risks we run, but only after the event. As for the risk itself, it is defined as a mathematical expectation, in other words:
R = p × Cost_of_Damage

in which p is the probability of the feared event. In industries that involve risks, like EDF’s nuclear installations, the scale in Table 4.1 is used (the INES scale22).
| Category  | Level | Description              | Examples                               |
|-----------|-------|--------------------------|----------------------------------------|
| Deviation | 0     | No importance            | Not classified because no consequence  |
| Incident  | 1     | Anomaly                  | Outside authorized limits              |
| Incident  | 2     | Simple                   | Low level of irradiation               |
| Incident  | 3     | Serious                  | Irradiation of a few people            |
| Accident  | 4     | Confined within the site | A fire                                 |
| Accident  | 5     | Off-site                 | Three Mile Island                      |
| Accident  | 6     | Serious                  | High level of irradiation + death      |
| Accident  | 7     | Major                    | Chernobyl; Fukushima-Daiichi           |
Table 4.1. The international classification scale for nuclear events, known as the INES scale (International Nuclear Event Scale)23
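As a minimal numerical illustration of the definition R = p × Cost_of_Damage (the probabilities and costs are invented, not taken from EDF), the expected loss is simply cumulated over the feared events retained in a risk management plan.

```python
# Risk as a mathematical expectation, summed over the feared events.

feared_events = [
    # (event, probability per year, damage cost in EUR) -- assumed values
    ("anomaly outside authorized limits", 1e-1, 1e4),
    ("accident confined within the site", 1e-4, 1e8),
    ("major accident",                    1e-7, 1e11),
]

total = sum(p * cost for _, p, cost in feared_events)
for name, p, cost in feared_events:
    print(f"{name:35s} R = {p * cost:,.0f} EUR/year")
print(f"{'total expected loss':35s} R = {total:,.0f} EUR/year")
```

Note how, in this toy example, the rare catastrophic event weighs as much as the frequent benign one: this is exactly why low-probability events cannot be neglected.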
This classification is only an example. But it is clear that the notion of risk now needs to be understood broadly, as we can see with social networks that can destroy a reputation and/or push people to the brink of suicide, because private life no longer exists. In fact, every system, depending on its operating methods, the maturity of its engineering teams, etc., must have a risk scale, correlated with its internal organization/architecture and with the environment. With this notion of risks associated with errors, or with ill-will, we are at the heart of the problem of complexity. One of the actors in the system makes an error (or violates a rule), but depending on the context in which the error is revealed, the consequences can be either catastrophic or nonexistent. A good example of this kind of situation is the crash of the first Ariane 5 flight, Ariane 50124. A software fault that was already present on the Ariane 4 launchers, but masked due to their dynamic, was revealed in the context of Ariane 5, where the dynamic was different due to the power of the motors of the new launcher.
22 Refer to https://en.wikipedia.org/wiki/International_Nuclear_Event_Scale.
23 Refer to https://en.wikipedia.org/wiki/International_Nuclear_Event_Scale.
24 There has been abundant documentation on this. Refer, for example, to http://sunnyday.mit.edu/nasa-class/Ariane5-report.html.
REMARK.– The error rate of human actors implicated in risk situations has been the subject of a great number of ergonomic studies. It can reach 5–10 errors per hour, depending on the context and the degree of fatigue/stress of the actor. What must be integrated, and it is not easy to admit, is that this rate is never zero: there are always residual faults!

In the language of the physics of unstable phenomena, this is what is known as a “critical” situation, hence the term criticality used by P. Bak25 and many others. Critical situations are often associated with a change of phase. For example, water that normally boils at 100°C can conserve its liquid state at 105°C or 110°C, but if the least disturbance appears, like an impurity or a weak impact, it immediately vaporizes, which can lead to a destructive explosion. Equilibrium of this kind is known as metastable, meaning beyond the normal point of stability. Correctly controlled, as in explosives or combustion engines, with a low activation energy, these phenomena are, however, very useful to humanity. In a system, when an error manifests itself, and the system sees it, it changes operating mode ipso facto, because it becomes necessary to correct the error. This is the equivalent of a phase change. The aptitude of the system to control its availability more or less well, in other words, its aptitude to provide the service we expect it to provide, is known as system safety, which is measured by the availability, a statistical magnitude. It is represented in Figure 4.14.
Figure 4.14. The chain of errors. For a color version of this figure, see www.iste.co.uk/printz/system.zip
25 Refer to his work, cited previously, How Nature Works.
The critical period of time runs from the moment T1 when the fault is executed, the date at which the state of the system begins to degrade, to the date TFin, when the system recovers its nominal operating mode after repairs and when users can again use it without risk. The latency time is the time that passes between the execution of the fault and the detection of the failure. The magnitudes MTTF (Mean Time To Failure) and MTTR (Mean Time To Repair) are statistical; they measure the average correct operation time and the average repair time, which allows the availability of the system to be calculated:
MTTF MTTF + MTTR
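This definition lends itself to direct calculation. The following minimal sketch (in Python; the numerical values are illustrative, not taken from the text) computes D and shows the limiting case discussed below, where an infinitely effective repair mechanism (MTTR = 0) yields a perceived availability of 100%:

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Availability D = MTTF / (MTTF + MTTR), both in the same unit."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system failing on average every 1,000 hours and repaired in 2 hours:
print(availability(1000.0, 2.0))   # ~0.998, i.e. 99.8% availability

# The limiting case discussed in the text: an infinitely fast repair
# mechanism (MTTR -> 0) drives the perceived availability towards 1,
# without ever making the residual faults disappear.
print(availability(1000.0, 0.0))   # 1.0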
With this definition, we see that if we had an infinitely effective repair mechanism, that is, MTTR = 0, or an MTTR below the latency threshold of the feedback loop, the availability would always be equal (or perceived as equal) to 100%. Hence the importance of detecting saturation thresholds, whose effect on the availability can be devastating, because the system becomes randomly uncontrollable after having been controllable.

One of the most fundamental invariants of systems, since, as Hamming states, "in real-life noise is everywhere" and we will have to put up with errors, is the presence of a sub-system in charge of general autonomic management and repairs, in such a way that the availability of the system is as close as possible to 1; it will never be strictly equal to 1 over a given time period. This had already been identified by N. Wiener and, in particular, by J. von Neumann, who made the first theoretical studies of it. This sub-system must not only monitor the internal state of the equipment and the interactions between the various pieces of equipment, and manage the redundancies, but also monitor the environment to ensure that the conditions for correct operation are fulfilled; for this, the system may cooperate with other systems that are plunged into this environment, via procedures of interoperability that will be specified in Part 2 of this book. The mission of this sub-system is to avoid disasters and to extend the system lifetime as far as possible.

One of the most counter-intuitive aspects of error engineering is the ability of errors to metastasize during the latency period. An error that manifests itself in an element of the system will propagate, generally via the manipulated data, towards the elements with which this element is coupled, which therefore inherit an incoherent state. The more active the system and the more significant the sharing of resources, the faster the propagation. The potentially faulty element must be architectured to detect its own errors as quickly as possible, a property known as Fail-Fast, and to confine them in such a way as to inform its environment of its failure, but to inform it in a language that can be understood by the environment, in other words, not that of the faulty element.
It is necessary to reformulate the information in the "language" of the receiver, in other words, to carry out a transduction (see Chapter 6). Without going into useless details at this stage, we will add that this latency time is closely related to the structure of the confinement spaces (see the regional systems in the case of the electrical system); the "size" of a confinement space must be such that it allows this part of the system to isolate itself in the event of a catastrophe, for the survival of the whole. This size is therefore an increasing function of the latency of the feedback loop, hence the importance of the Fail-Fast property and of autonomic management.

Here, we again see the fundamental dichotomy, pointed out by J. von Neumann, between the internal language and the external language. There are tens of ways for a moderately complex element to break down, but from the point of view of the element's environment, this can be translated into only two or three states, for example: done, not done but recoverable, and not done and irrecoverable, depending on the means agreed on by the architect; it being understood that the details should be recorded in an onboard journal that is specific to the element. We will remark in passing that the decision logic cannot be a simple yes/no logic and that the law of the excluded middle will obviously not "work" here. A modal logic is required, and a temporal one, because an alert must generally be processed within a given time period, a resource must be available at date T1, not before, and not after, when it is too late, etc. Failing this, the system is inadaptable.

Here, we have a precise illustration of the "law" known as the law of requisite variety, formulated by W.R. Ashby26 in a trivial manner (it is a false law), because we know, at least since Shannon's theory of coding, that a saturated code has no resilience, by definition, since all the possible states of the code have a meaning.

REMARK.– The "law" of requisite variety stipulates that only the variety of the control system can reduce that which results from the process to be controlled; in other words, the control system must never be saturated. This is the profound reason why, in nature, and a fortiori in artificial systems, a correct dose of redundancy is required, in suitable places in the system; a redundancy that is calculated by the architect and that depends on the environmental "noise" to which the element is exposed.

Returning to the diagram showing the principle of systems (Figure 4.1), each of the elements constituting the equipment needs to be monitored in a suitable manner, taking into account (a) its function in the system and (b) its spatio-temporal and human environment. The functional structure is doubled, in some way, in such a way as to make the autonomic management system explicit, with dedicated elements and resources, as indicated in Figure 4.15.
26 Refer to his work An Introduction to Cybernetics, 1st edition, 1956. Ashby was a biologist and a psychiatrist.
Figure 4.15. Elements of the autonomic system. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The functional blocks that constitute the system, known as building blocks27 in systems engineering, all have a dual image in the autonomic management system. Each ITG-F gives rise to an ITG-FS, to which it is indissociably related, like Socrates and his daimon. The ITG-FS supervises the correct operation of the ITG-F by collecting information via sensors, and possibly acts back on it via the effectors, for example by executing certain tests to confirm or reject a probable breakdown hypothesis, etc. The objective of these ITG-FS is to maximize the availability D of the system, and we see that here again there is an equilibrium to be found, so that the autonomic management system does not itself become too complex, which would require its own autonomic management, in contradiction with the target objective of simplicity (of "simplexity", to use the neologism introduced by A. Berthoz in his work Simplexity; refer to the authors' website). To this end, autonomic management is therefore a participant in the information system, as we can see in Figures 4.4 and 4.15. In the simplest case, this ITG-FS is an MMI (man–machine interface), and it is up to the human operator to make the decisions that are required, given their knowledge of the context of the monitored function, via the control post (see Figure 5.1). If the human operator is not correctly trained and not themselves supervised by their peers, disaster is certain in high-risk systems.

27 In his work The Logic of Life, the biologist and Nobel prize-winner François Jacob used the term "integron" to explain the hierarchical organization of living things; this has the same meaning as the building blocks, but applied to living things.
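To make the preceding discussion more concrete, here is a minimal sketch, in Python, of a functional block and its autonomic twin. The ITGF/ITGFS names and the three externally visible states (done, not done but recoverable, not done and irrecoverable) echo the vocabulary of the text; the RecoverableError class, the flaky_task example and the single-retry repair policy are illustrative assumptions, not the author's design:

from enum import Enum

class RecoverableError(Exception):
    """Hypothetical error class: a failure the block can recover from."""

class ExternalStatus(Enum):
    # The only states visible from outside the block, expressed in the
    # environment's "language"; the detail stays in the onboard journal.
    DONE = "done"
    NOT_DONE_RECOVERABLE = "not done, recoverable"
    NOT_DONE_IRRECOVERABLE = "not done, irrecoverable"

class ITGF:
    """A functional block applying the Fail-Fast property: it detects
    and confines its own errors, reporting outwards only one of the
    two or three agreed external states."""
    def __init__(self) -> None:
        self.journal = []  # onboard journal, specific to the element

    def execute(self, task) -> ExternalStatus:
        try:
            task()
            return ExternalStatus.DONE
        except RecoverableError as err:
            self.journal.append(f"recoverable: {err}")
            return ExternalStatus.NOT_DONE_RECOVERABLE
        except Exception as err:
            self.journal.append(f"irrecoverable: {err}")
            return ExternalStatus.NOT_DONE_IRRECOVERABLE

class ITGFS:
    """The autonomic twin of an ITG-F: it supervises the status reported
    by its block and reacts (retry, repair, alert) to maximize D."""
    def __init__(self, block: ITGF) -> None:
        self.block = block

    def supervise(self, task) -> ExternalStatus:
        status = self.block.execute(task)
        if status is ExternalStatus.NOT_DONE_RECOVERABLE:
            status = self.block.execute(task)  # one retry as a repair action
        if status is ExternalStatus.NOT_DONE_IRRECOVERABLE:
            print("alert: confine the block and inform the C2 level")
        return status

# Usage: supervise a task that fails once, recoverably.
attempts = {"n": 0}
def flaky_task():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RecoverableError("transient saturation")

twin = ITGFS(ITGF())
print(twin.supervise(flaky_task))   # ExternalStatus.DONE, after one retry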
REMARK.– We will remark that the couple ITG-F ⇔ ITG-FS forms a learning loop that will give the operators a better understanding of the system's behavior. New autonomic management models can be extracted from this new knowledge, or existing models can be adapted, possibly by adapting the functional part itself, to optimize this fundamental capacity28, which contributes to the robustness of the whole. This is a good example of the application of PAC logic.
28 Concerning this aspect, refer to the remarkable work by L. Valiant, Probably Approximately Correct – Nature’s Algorithms for Learning and Prospering in a Complex World, Basic Books, 2013.
5 Generations of Systems and the System in the System
A system is never formed from nothing at all. There are always antecedents; it is part of a genealogy, which means that we can talk about generations and/or successive adaptations, those of generation N drawing lessons from the previous generations. ICT has made us familiar with this notion of generation, but it goes much further back1. Without going all the way back to antiquity (a Greek or Roman sailing ship has all the characteristics of a system, starting with the pilot!), and looking only within the contemporary era, we can cite machine tools, now robots; the generations of computers, today including Systems on Chip (SoC) with circuits of 2–3 cm² which weigh a few grams and contain 10 billion transistors; or even the stages of the electronuclear program, where we went from turbo-alternator sets with a power level of 75 MW to the current groups with a power level of approximately 1,400 MW (the EPR can reach 1,650 MW), in five to six stages over approximately 20 years.

What we should note specifically, from a systemics point of view, is that machines and/or systems can be used to make new machines, more powerful and more reliable. This is true for the three examples cited above, but the most interesting is perhaps the computer itself, taking into account the simulation capabilities that it places in the hands of the engineers and scientists who develop the systems. This was totally explicit for J. von Neumann, who saw in it an exploration tool for non-linear systems, essential for understanding complex systems2.
1 For example, see M. Dumas, L'Histoire des techniques, 5 vol., PUF; B. Gille, Histoire des techniques, Encyclopédie Pléiade (cited previously); or the monumental Science and Civilisation in China, J. Needham, Cambridge University Press.
2 Refer to the recension of J. von Neumann's work by his colleague and friend S. Ulam; and also, in American Scientist, "Fermi, Pasta, Ulam and the Birth of Experimental Mathematics", vol. 97, 2009.
The change from analog simulation capabilities to the digital simulation that began in the 1960s is a fundamental rupture in the engineering sciences. There is a before and an after, a true singularity. Without enormous calculation capacities and simulations, objects/systems such as the energy transport network, air traffic control, the Airbus A380 or the LHC at CERN are quite simply unimaginable. This rupture is characterized by the fact that the quality of the technical system that is constructed lies in the capability of the construction system, meaning engineering, to validate, stage by stage, from the elementary construction elements (the building blocks) up to the final, totally integrated system. Through a strange change of perspective, the integration process thus becomes the main reason for using G. Simondon's triplet {U, S, E}, which allows the system to exist.

The successive generations of related systems are nothing other than the application of the fundamental loop (Figure 2.1) to the technical object itself. The three entities in the symbiotic relationship will evolve in a coherent and finalized manner, while creating an acceleration effect that some have named the "Red Queen" effect, in reference to Alice3, who must run faster and faster simply in order to stay in one place, until when, we do not really know. Maintenance of the coherence of the three entities must be managed like a system invariant; it must remain under control, as in real-time systems, which require the latency of the loop to be compatible with that of the entities, failing which the system is no longer controlled or, in other words, its coherence with regard to the environment is no longer guaranteed (we have seen that, with high-frequency transactions, this is no longer the case). Each of the entities then finds an autonomy of its own, but the system seen as a whole degrades and finally "dies". What is certain, however, is that there is a limit, somewhat similar to the situation in flows where the fluid, beyond a certain threshold, changes from a laminar flow to a turbulent flow, expressed in mechanics by abrupt braking. This is a well-known phenomenon in the pipes that supply hydroelectric dams. In hydrodynamics, these phenomena are governed by the Reynolds number, characteristic of fluids; another example is the "sound barrier", a well-known threshold speed.

The generic diagram from Figure 4.1 is shown again in Figure 5.1, this time with its tables of command clearly visible. Today, in all modern machines and/or systems, these tables, even if they still exist physically as a last resort, have been virtualized. They are now computer screens with a user-friendly interface, or supposedly so, for the operators. Due to this, they can be moved and operated remotely. The operator is often an automaton carrying out simple and repetitive adjustments, supervised by a human operator, as in modern airplanes. In company information systems, one of the important tasks is precisely to construct dashboards representative of the company's activity from one day to the next, or in real time in the case of supply chains.

3 Refer to Through the Looking-Glass, L. Carroll (he was a logician); see also https://en.wikipedia.org/wiki/Red_Queen_hypothesis.
Figure 5.1. Autonomic management and remote operations. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The initial system has undergone three additions: (1) a local autonomic management and supervision mechanism, provided for each piece of equipment and for the complete system; (2) a complete remote supervision system, which can operate all the equipment remotely and totally substitute for the C2 center of the initial system; (3) an information storage mechanism, required for the management of the equipment: initial state of the configuration, its history, the onboard journal, etc. (in the world of telecoms equipment, this is what we call a Management Information Base, MIB4).

From a system point of view, MIBs must be standardized, at a minimum as regards the semantics of the information that is managed. Taking into account the history of the progressive development of these fundamental devices, this is expressed in practice by MIBs at two levels: (a) a level that is local to the equipment; (b) a system level shared by all the equipment, which again requires the implementation of translation/transduction mechanisms and of a language that is specific to the system (for the Internet and the World Wide Web, this language is called XML, the heir of SGML, associated with the protocol HTTP, without which the current World Wide Web would not exist).

The remote C2 system therefore becomes an integral part of the system perimeter, because everything that can affect the remote C2 affects ipso facto the entire system. The links must be monitored and managed because they are a point of weakness, which is the price to pay for the facilities provided by remote operation. Here again, we see a fundamental invariant, a symmetry which stipulates that all new facilities mechanically lead to new areas of risk, which must imperatively be compensated for in order to maintain the invariants of the structure.

4 There is a vast amount of literature about this fundamental subject, which constitutes an important industrial stake; for an introductory basis, see http://en.wikipedia.org/wiki/Management_information_base.
But the sum of the two must remain humanly manageable, which reminds us of the ancient motto of the Greek philosophers: "man is the measure of all things"; this remains true even, and especially, in engineering! In the case of the electrical system5, a complete communication system, the ARTERE system, was developed in the 1980s, with reinforced system safety criteria that the networks of the time did not provide, to further improve the availability of the energy transport network.

5.1. System as a language

At different times, we have used the notion of "external language", the language with which the system interacts with and acts on its environment, and the notion of "internal language", the language that defines the capabilities of the system and the methods of implementation of these capabilities via the equipment that makes the system up. As we have already stated, these two notions are totally fundamental. They define structural invariants that we will find in all systems. On the basis of the outline diagrams in Figures 4.1, 4.4 and 5.1, we can represent these invariants as follows (Figure 5.2), using two abstract structures organized like grammars.
Figure 5.2. System languages. For a color version of this figure, see www.iste.co.uk/printz/system.zip

5 Information on the Réseau de Transport d'Électricité (RTE) website (the name of the French power provider in charge of the grid). See also http://en.wikipedia.org/wiki/Réseau_de_Transport_d%27Électricite.
The grammar of the external language, GLExt, represents the knowledge that we have of the environment, knowledge materialized (reified) in different models, via the CRUDE interface that defines the language of the possible actions, meaning the performative language of the system (see Figure 5.2). We have seen that, in the case of the firing system studied by N. Wiener, the laws of dynamics are part of this grammar. As we all know, this knowledge is limited and uncertain, and, in our complex universe, there can always be unexpected events. It must be possible to revise, correct and adapt the GLExt and the models it is based on, in order to gain a better understanding of the reality of the situations of the environment in which the system operates. Initially, this can be a simple stimulus → response association, as in a reflex arc, in an extensive manner, case by case; then, as the life of the system progresses, in an intensive manner, with self-reflection about the cases (thanks, perhaps, to "mirror neurons", whose role is just beginning to be discovered6), giving rise to higher-level abstractions7, using different linguistic mechanisms such as generative grammars, which means, in effect, reducing the size of the corresponding text (TC complexity).

The grammar of the internal language, GLInt, represents the capabilities of the system as it is now, at instant T of its life, and their implementation methods, methods that can be technical, human, environmental, etc. in nature. The active units, AU, that we have mentioned at various times are the organs of this implementation; they define the performative aspect of the system, in other words, what it is capable of, given the specific limitations of these AU. An AU that has a human actor in its control loop will have a latency greater than 1 second, in the best-case scenario; therefore, any control that requires shorter response times cannot be carried out by humans alone. The MIBs that we mentioned previously (Figure 5.1) are part of the models that enter into the definition of the GLInt. As opposed to the GLExt, the internal universe of the system is a closed world; behaviors are known, at least in theory, but this does not mean that everything that happens there is predictable, because here too there are hazards, uncertainties and, sometimes, ignorance. The general behavior of the system, notwithstanding its architecture, is then related to self-organized "criticality", in P. Bak's sense, because it is precisely the chosen architecture that determines whether the consequences of a hazard will be catastrophic or, on the contrary, constrained to a small area of the system perimeter. Hazards and uncertainties simply arise from another logic, as we saw briefly via Figure 4.14.

These two grammars constitute the logical design, or logical model, in the J. von Neumann sense of the term, of any system. Whatever happens, whatever the way in which the system is constructed, they still exist, in one form or another.

6 Refer to the works by S. Dehaene, Les neurones de la lecture and Le code de la conscience, Odile Jacob.
7 In his work Équilibration des structures cognitives, PUF, 1975, J. Piaget provides a qualitative explanation of the psycho-cognitive development of the child, from the first stages of development to the highly elaborate cognitive structures seen in adults, on the basis of what he calls progressive assimilation schemes, implementing systemic feedback loops that optimize memory space.
Their underlying logic is temporal and modal, but this "raw" logic can remain informal, in the non-formalized sense, which does not prevent it from existing empirically and in a widespread manner. But already we can determine what is at stake: this logic exists either (a) implicitly, in the minds of user and engineer actors, in a form that is not very communicable, or (b) explicitly, via documentary systems and suitable formalisms, taking into account the engineering of the underlying models. It can be centralized, or distributed in the equipment and/or the C2 control center of the system. This "grammatical engineering" must live as long as the system itself, and even beyond that if we consider its dismantling: more than 100 years, if we take the example of the EPR. Reasoning in terms of grammars and formal or semi-formal systems allows for the correct formulation of the problem, which is a necessary but insufficient condition for attempting to resolve it.

Translating the external language into the internal language is equivalent to expressing the processes of the real world in those that are implemented by the system. The phrases that are valid in one are translated, via transductors if necessary, into phrases that are valid in the other. The possibility, in the phenomenological sense, of this translation/transduction is based on a representation of the world, in general using processes that encapsulate the semantics in a unique definition. Let us note that, in theory, there can be several external languages, if only those used by user and/or engineer operators, including ICT languages, and several internal languages, which can be organized into hierarchies to facilitate their application, with a learning time (grade, in the English sense, meaning skill or qualification).

The most fundamental invariant of the entire system is that these external and internal languages, and their grammars, must share the same semantics, which guarantees, by its unicity, that the technical object/system and its symbiotic communities act and interact in a coherent manner. This shared semantics results from the set of functional and non-functional requirements and demands requested by the users. This fundamental semantic problem is thus reduced to a pure problem of interoperability of systems, which will be detailed in Chapter 8, as found in C4ISTAR systems engineering. This is a problem of grammatical equivalence, simple in its formulation, but not at all simple in its constructive aspect, that of the engineers, because it is about finding a class of solutions that is compatible with the various limitations and engineering constraints, so that the system "holds" (which implies the resilience and robustness defined in the previous chapter) in an acceptable manner, in compliance with a scale of acceptable risks (refer to the INES scale) and in compliance with the system invariants. To do this, we have at our disposal a vast array of tools resulting from more than 40 years of good practice and theoretical studies in systems engineering (see section 1.3).
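As an illustration of this translation between grammars, here is a minimal sketch of a transductor in Python. The external phrases and internal commands are invented for the example; the point is that every valid external phrase maps onto a valid internal phrase sharing the same semantics, and that anything outside the shared vocabulary is rejected rather than guessed at (a saturated code would have no resilience):

# Hypothetical vocabularies: phrases valid in the external language (GLExt)
# are mapped onto phrases valid in the internal language (GLInt).
EXTERNAL_TO_INTERNAL = {
    "open the valve":  ("actuator_07", "SET", 1),
    "close the valve": ("actuator_07", "SET", 0),
    "read pressure":   ("sensor_12", "GET", None),
}

def transduce(external_phrase: str):
    """Translate an external-language request into an internal command."""
    try:
        return EXTERNAL_TO_INTERNAL[external_phrase]
    except KeyError:
        # Unknown phrase: refuse explicitly instead of acting incoherently.
        raise ValueError(f"untranslatable request: {external_phrase!r}")

print(transduce("open the valve"))   # ('actuator_07', 'SET', 1)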
5.2. The company as an integrated system

Companies and administrations are, firstly, human communities, artificial organisms in H. Simon's sense of the term, whose objective is to produce merchant goods and services for some, and services to citizens for others: security, education, health, etc. J. Lesourne, one of the French pioneers of systemics (although he does not use the term) and of foresight studies8, dedicated one of his books, Les systèmes du destin9, to showing how this new approach could shed useful light on how the operation of large collective organizations, such as administrations, could be better structured, by emphasizing their end purpose and their control, and explaining the need for both. Reading his conclusion, with nearly 40 years of hindsight, is interesting, because he saw in systemics an action methodology adapted to future problems. He also saw in it a means of compensating for the division of scientific disciplines and knowledge that isolates the corresponding communities, turning them into sects that are incapable of communicating; not removing the division, but surpassing it, since models are there to establish "bridges" between disciplines, because a model requires agreement between the interested parties, as in an equilibrium. This is therefore not a change of paradigm10, because specialists will always be essential, but a problem of generalization and an increase in abstraction, avoiding jargon and empty words. Complex systems will always require highly qualified experts, today more than ever, but for the action to be coherent, each of their initiatives must be placed in a global context, which is that of the system taken as a whole. It is the model that then provides the transduction, the change from one domain to another, and it is therefore a "technical object" of high added value.

For J. Lesourne, control can only function correctly through a prospective vision of the actions to be carried out, in other words, of the end purpose of our companies. This is obvious, because a system without an end purpose of its own is not a system. If the elements each have their own end purpose and there is no command-control, divergence and progressive dissolution of the system will take hold, because each element will optimize itself on its own criteria and there will be no overall optimum11.
8 He was the first to hold the Prospective chair at Cnam, in the 1990s.
9 See Dalloz, 1976.
10 This notion, introduced by the sociologist T. Kuhn, was contested by high-flying scientists who remarked that it is absurd to say, for example, that relativity replaces Newtonian mechanics. At our scale, Newton is an entirely satisfactory approximation, easy to teach and useful to know about in order to live and survive; at the scale of the universe and at speeds close to the speed of light, it is no longer relevant.
11 This is what the Bellman theorem states: "The sum of local optimums is not the overall optimum."
5.2.1. The computer, driving force behind the information system

The year 1976 is an interesting date in the history of systems and digitalization. G. Moore had created Intel in 1968 because he believed in what later became known as "Moore's law", on the basis of an article written in 1965 in which he announced that the density of transistors in circuits would double every 18 months, meaning an improvement in performance of the order of a few million over 40 years. The company Digital Equipment Corporation (DEC), founded a few years previously by K. Olsen on the technical hypothesis of "all transistors", triumphed with its ranges of PDP-8, -10 and -11 mini-computers, of which more than 600,000 were sold, in addition to the central computers, where IBM, with its 360 range, began the domination that became all-encompassing in the 1980s. There were two computer construction companies in France: Bull, the historic pioneer of computing in France with its Gamma-60 machine (the Concorde of national computing12), at the time an American company with close links to Honeywell and MIT; and CII, which arose from the Plan Calcul government program and General de Gaulle's will for independence, which federated various French constructors, such as SAE, except Bull, and was supported by the creation of a research institute, IRIA, which became a national laboratory a little later.

S. Nora and A. Minc published their report "The Computerization of Society" in 1978; it became a best-seller with sales of more than 100,000 copies. Large companies, directly implicated in this first computerization phase, created CIGREF (Club informatique des grandes entreprises françaises) in 1970, presided over by its founder P. Lhermitte, himself the author of another best-seller, Le pari informatique, released in 1968. France Télécom, which at the time was an administration that depended on the PTT (Postes, Télégraphes et Téléphones, the French national telecommunications operator of the time), renovated the telephone network from top to bottom by completely digitizing it. The electronuclear program was in full development, the first French nuclear submarine was launched in 1971, and 60 Mirage IV had been on permanent alert since 1974; in other words, a set of systems totally unimaginable without computing being involved on a massive scale, which was the occasion for creating a true systems industry spearheaded by companies such as SESA or ECA-Automation (which later became SYSECA), among others. The supersonic Concorde, an accumulation of impressive innovations developed by the company Aérospatiale, received its airworthiness certificate in 1975, but its calculators were still analog. The strategic end purpose of all this is very clear, and it was the result of 15 years of efforts that mobilized the best engineers and scientists so that all this would "work".

12 Refer to the historical statement by J. Bouboulon, L'aventure Gamma 60, Hermes-Lavoisier, 2005.
In broad terms, all the experts agreed that computing would profoundly change society and all sociotechnical systems, but how this change would take place and what consequences it would induce, for example in terms of education, was another question. At the time, almost no one had imagined the birth of Personal Computers (PCs), nor the enthusiasm that they would create. Computers were perceived as very technical objects whose use required a high-level education, which was beginning to be dispensed in top universities and colleges alike, but not at all as consumer products. The change took place 15 years later, with personal computers.

From the epistemological perspective that interests us here, it is essential to say a few words about the use that was made of these new machines in organisms such as companies and administrations. At the time, in 1976, programmers, who often came from top universities, were a "rare breed" with an incomprehensible language; theirs was a non-job that did not figure in any classification. Users saw computing in a slightly mysterious and magical way, because of the containers of punched cards serving as an information medium and the kilometers of printed lists of processing results. The computing industry was first and foremost an industry of paper and card! The communities associated with the technical object, in the G. Simondon sense of the term, were, at best, tribes with their totems, relating more to anthropology than to a scientific project. Twenty years later, the technical object had become a true symbiotic association in the biological sense of the term, and had caused a massive phenomenon within a time frame without precedent in the history of technology, a phenomenon that is now global. There are now millions of programmers, many of whom are not engineers, and nothing escapes computing any more.

In a first phase, which lasted until the 1970s–1980s, computing took charge of functions that, until then, were done by hand: pay, accounting, stock management, invoicing, etc. This first phase of computerization was the occasion for the development of files and databases, and of the first networks, using the telephone PSTNs and then Transpac with the X.25 protocol created by France Télécom, with Minitel as its star. The first Internet protocols were published by the IEEE in 1974. Companies and administrations adapted to these new tools, and a process view of the company developed progressively and very clearly out of a systemic view of the company, with methods and modeling tools such as MERISE, promoted by its inventors (Coletti, Rochefeld and Tardieu). It was widely used for the first significant applications on mainframes.
In the 1980s–1990s, two new evolutions, made possible by progress in the integration of components (Moore's law again), completely overturned this "epigenetic" landscape, which had only just begun to take shape: on the one hand, distributed computing and the first PCs; on the other hand, the unrelenting growth of company networks and, above all, of the Internet and the World Wide Web, the network of networks. Computing in companies then underwent a second, even more profound mutation; a true metamorphosis: (a) from being a domain reserved for a certain mainframe "elite", it became more personal and spread throughout the company, with each collaborator and each organization wanting their own machine and their own IT, and, above all, replacing the "center" of computing (that "thought police" that curtailed initiatives) with computing on a more libertarian basis; (b) it was distributed as closely as possible to the user, with everything interconnected at higher and higher bandwidths, approaching the megabit per second, which allowed information flows that used to be carried weekly or daily to be carried almost in real time. Today, with 4G, we are much further ahead than this. The information stored in files and databases kept increasing, and the transfer of files gave way to messaging services in real time. Interaction became the focal point of attention for computing departments and, in particular, for the business departments, ready to take risks in order to interact with the company's clients.

In the 1980s, the term information system13 appeared in the environment of computing professionals. The good old monolithic applications on mainframes became a composite set, with the man–machine interface displaced onto the individual workstation, an initial level of concentration on intermediate machines, known for a certain period as department servers, and the central systems ensuring the overall consolidation of information. The information system became the nervous system of the companies that assimilated and integrated the immense potential of these new technologies; for the others, and for many administrations, with the complexification of platforms and the badly controlled growth of applications, it only created an immense mess.

By means of a new mirror effect, the company itself became the new subject of study, in order to align value chains, organizations and information systems. The profession of organizer, traditional in all large companies, became a profession of architects, whose objective is to optimize the processes of the various entities that make up the value chains, and their interactions.

13 To be rigorous and honest, the terms information and control were in use from the 1950s; in a speech given for the National Planning Association in December 1955, J. von Neumann stated: "The best we can do is to divide all processes into those things which can be better done by machines and those which can be better done by humans and then invent methods by which to pursue the two. We are still at the very beginning of this process." We are, in fact, no longer really at the beginning, and the science of integration has progressed well, but much remains to be done, hence this book.
We can summarize the current situation using Figure 4.4, by replacing the "matter" bubble with a bubble that represents the value chains of the company and their processes; in other words, a developed version of Michael Porter's model, shown in Figure 5.3.
Figure 5.3. The value chain and digital impact. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The capillarity of information, its fluid and smooth circulation in the value chains, requires a relatively fine division of the processes into elementary tasks, known in computing terminology as transactions, which function as the quanta of "computing matter", quanta that have an energy and a human cost. This division is the very essence of the profession of the systems architect acting for the company (and even for its partners, if the extended company is considered). To do this in an effective manner, detailed knowledge of the "matter" to be organized is required. In this particular context, we talk about enterprise architecture.

Take the example of France Télécom/Orange14: an IT infrastructure for approximately 185,000 employees and more than 200 million clients; in other words, approximately 200,000 standardized workstations (fixed and mobile), 40,000 physical servers (and twice as many logical servers, with virtualization), a processing power in the tens of thousands of MIPS, a storage capacity of more than 50 petabytes in 2014, a software heritage of several hundred million lines of code programmed by the development teams, tens of thousands of TPS/requests to process per second, and an energy expenditure in the tens of MW to supply the infrastructure.

14 Numbers communicated at the Intégration & Complexité seminar; see the website www.cesames.net to download the white paper that resulted from the seminar.
To interact coherently, all the actors, human and/or technical, and all the resources required for the life of the system, including those outside the perimeter of the company, must have a coherent model of the interactions authorized with the information system, each according to their functions and their missions. In the case of large companies, such as FT/Orange, or large administrations, this model can be used by hundreds of thousands of actors, and we understand the extreme difficulty of maintaining its coherence because, quite evidently, given the hazards of the environment, it can drift. This massive distribution is obviously a risk, but it is also a factor of equilibrium (an invariant, to use our terminology), because we can hope that, in the event of an incoherent transformation, the first-level surroundings of the deviating actor will detect the incoherence and block its propagation. For this to take place, the actors must trust each other, because at this scale the centralized, hierarchical management of coherence is quite simply impossible. Hence the very important role of quality in all projects, as much from the technical point of view, that of system safety, as from the organizational and human point of view. The quality system, at this stage of complexity, acts like the immune system of the company. It must be relatively autonomous to avoid hierarchical bias, because there will always be hierarchies in systems or organizations of this kind, to use the language of the ISO-9000 standards. We can express all this in diagrams such as those in Figures 4.2 and 4.3, so as to highlight the complexities that the collective organization constituting the associated technical object must organize and manage, as must the symbiotic organization, if we reason on the basis of the extended company.

Lastly, and to conclude this chapter, we can conceive the information and material part of the technical object as a "machine", in other words, a "logical design" in J. von Neumann's sense of the term, which carries out the "calculations" that the users wish to perform to satisfy their computing requirements, in compliance with the requests emitted. Despite our reservations about the terminology and the approach proposed by E. Morin, this is an aspect of systemics about which he had a correct intuition (see his Introduction à la pensée complexe, Chapter 6, "Épistémologie de la complexité", pp. 144–145). In the imagery-laden language that he uses as a sociologist, he says: "What is important? It is not the information, it is computing that processes, and, I would even say, extracts information from the universe. We transform the elements and events into signs, we extract information from noise. The information supposes live computation. Life is a computational organization, etc.".

In English-speaking universities, we find undergraduate courses entitled Business Computing Systems that integrate the business aspect and the computer science aspect. Regarding everything that concerns the autonomy of infrastructures, in the 1990s IBM proposed the notion of autonomic computing,
widely accepted in the profession today, and, more recently, we have seen green computing appear, to calculate, within these same infrastructures, how to optimize the available resources in order to economize energy, whose consumption has grown to enormous levels as companies have become increasingly computerized.

Without going into details, which will be given in Chapters 8 and 9, we can say that for users at their workstation, whether it is a PC, a tablet or a smartphone, everything happens as if they were in front of a giant information machine available to them, to which they give orders and from which they receive answers; in other words, Figure 5.4.
Figure 5.4. System and abstract machines
The interaction language has a few basic commands that give the machine a universal capacity; in other words, the acronym CRUDE15. The user can therefore:

– create entities specific to the machine; for example, create an entity that allows a piece of equipment to be added, and thus organize the growth of the system;

– retrieve, that is, look for the entities that have been created, which implies a memory structured like a library, with its multiple indices and cross-references, in which the various information entities (the quanta of information) are arranged, along with the addressing mechanisms (such as Internet URLs), and edit the search results;
15 A terminology that has been in use since the 1970s, at the time of the first databases, and which takes its elementary commands from Turing machines; it is still relevant (refer to Wikipedia).
– update entities and modify configurations, in doing so transforming them by "calculations", in the wider sense of the term; by way of an example, a radar calculates abstract entities known as "tracks", the temporal trajectories of the objects monitored by the radar, trajectories that can be extrapolated if we know the dynamics of the target;

– delete useless entities from the memory, or temporarily withdraw entities that are in error, under maintenance, etc. This is a non-trivial operation because, if the entity has been duplicated, all its occurrences will need to be found to guarantee the coherence of the system;

– execute an action/task, that is, sequences of pre-recorded orders, organized into processes in this same CRUDE language, which materializes the property of universality.

Requests and responses use input/output "ports" to the machinery, either the user's workstation or any other organ available to it: printers, television screen, video space, etc. Internally, the machinery must have (a) a management mechanism to manage the requests that are submitted to it, attribute and manage resources, and organize the scheduling and synchronization of tasks, and (b) an autonomic management mechanism that ensures the system safety of the whole and prevents one user from damaging another, which constitutes the code of ethics of the machine. In brief, we encounter all the mechanisms that constitute the "logical design" of J. von Neumann's architecture. The highest-level machine can itself be decomposed into second- or third-level machines, which interact and exchange information between each other in their own languages, until material equipment and/or human operators are reached that play the role of calculators, with the classic recursive mechanisms with which computing specialists are familiar (see Chapter 7). Users who go home and activate their home automation, either manually or automatically, awaken processes in the electrical system or in technical building management that will execute millions of lines of code, and perhaps, in the end, activate a power station to maintain the equilibrium between supply and demand that we mentioned in Chapter 4. The same is true when they use their telephone or consult their bank account to make a transaction.
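To fix ideas, here is a minimal sketch of such a CRUDE machine in Python. The entity names and the representation of a process as a list of pre-recorded CRUDE orders are illustrative assumptions; only the five commands themselves come from the text:

class CrudeMachine:
    """Toy 'information machine' exposing the five CRUDE commands.
    Entities live in a memory indexed by name, as in the library
    analogy of the text; execute replays a pre-recorded sequence of
    CRUDE orders, which is the universality property."""
    def __init__(self) -> None:
        self.memory = {}

    def create(self, name, value) -> None:
        self.memory[name] = value

    def retrieve(self, name):
        return self.memory[name]

    def update(self, name, value) -> None:
        if name not in self.memory:
            raise KeyError(name)
        self.memory[name] = value

    def delete(self, name) -> None:
        # All occurrences of a duplicated entity would need the same
        # treatment to keep the system coherent (see the text).
        del self.memory[name]

    def execute(self, process) -> None:
        """A process is a pre-recorded list of (command, *args) orders,
        expressed in the same CRUDE language."""
        for command, *args in process:
            getattr(self, command)(*args)

machine = CrudeMachine()
machine.execute([("create", "equipment_A", {"state": "off"}),
                 ("update", "equipment_A", {"state": "on"})])
print(machine.retrieve("equipment_A"))   # {'state': 'on'}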
5.2.2. Digital companies

The process of computerization of companies, which was the site of the first applications of systems sciences in the organization of companies, has been through several stages that are all metamorphoses.
The most significant of these transformations, the true singularity, took place in the 1990s when, thanks to progress in microelectronics and the integration of components, each actor in the company and, today, each individual, has been able to have:

– calculation capabilities that allow ICT data to be processed on the go, including unstructured texts, images and voice;

– capabilities for interaction with their environment, thanks to the bandwidths of the merged {telephone + data} networks on a global scale, which themselves result from the implementation of data compression and multiplexing algorithms.

From a simple qualitative concept, systemic modeling has become operational. It has allowed complexity to be organized, and has brought out possibilities for interaction that could not be envisaged without this contribution from ICT. As we saw in Chapter 2, these two capabilities constitute the AC complexity measure, which we will therefore be able to quantify. We could adapt the diagram in Figure 4.4, this time featuring the human actors in their public roles, in companies and/or as private entities, where each of them has at their disposal one or more technical objects or systems that function in the same way as an assistant, and which can interact either with the information systems of the company where they work, or with the systems available to them as private persons, such as access to energy, their bank, the Internet, GPS, etc. Figure 5.5 shows the other side of the scenery: what happens behind a connected object like a smartphone, with the thousands of processes that interact and cooperate on behalf of the users.
Figure 5.5. A world of cooperating processes. For a color version of this figure, see www.iste.co.uk/printz/system.zip
A situation must be imagined where this set of processes, which creates a system, meaning that it is in one way or another controlled as a function of an objective to be reached (end purpose), is projected onto structures such as those in Figures 4.1, 5.1 and 5.2.

In scarcely more than a generation, companies and administrations, which were essentially human organisms where everything that was a "calculation", in the wider sense of the term, was done "by hand", or nearly so, have become mixed technical objects or systems, in which information has become a "raw material" arising from the interactions between human and/or machine actors. All information is digitized; this is what we now call Big Data, stored on clouds that include thousands of servers. Companies and organizations have self-organized themselves so that the flow rate and the interactivity of the value chains provide a better service contract in order to satisfy user requirements (the case of lean management; consider, for example, the company Amazon and its impeccable logistics). This world of massively digitized processes (or perhaps it is the other way round, with well-managed technological progress opening up new perspectives) is disrupting the economic16 order and dealing us a new hand of cards. Information systems, in the wider sense, are now the central nervous systems, the "governor systems", to reuse an expression from the 1950s, with their essentially graphic languages, languages that are now part of the common "digital" culture, as we say… They are at the heart of the active units that constitute the system and of the symbiotic communities that accompany it, and they are, moreover, indissociable from them. Without its information system, the system object, in the material sense of the term, is of no use!

There is no doubt that neither N. Wiener nor J. von Neumann17 would have been surprised by these evolutions, of which they had, in their own way, been the initiators. This will be the subject of Part 2, "A world of systems of systems".
16 Some hesitate to talk about the "Iconomy", the Internet economy, to reuse the word created by the Institute of the Iconomy.
17 We can read his prophetic article "Can we survive technology", Collected Works, Pergamon Press, vol. VI, p. 504, where he finishes with these words: "We can specify only the human qualities required: patience, flexibility, intelligence."
PART 2
A World of Systems of Systems
Introduction to Part 2
Originally, at the time of the SAGE (Semi-Automatic Ground Environment) and NTDS (Naval Tactical Data System) projects, the first systems were entities isolated from each other, with no connections other than those created by the human operators who could switch from one to the other as required. This situation lasted until the 1980s and the appearance of the first specific data networks, developed by computer constructors and by public organizations such as the CCITT/ITU, to set up the first communication standards such as the layered ISO–OSI model. Information was then able to circulate in increasingly massive quantities, in real time, between systems.

The 1990s was a decade of disruption caused by the VLSI revolution, which allowed small, powerful, energy-efficient machines to be released onto the market; these progressively but rapidly became substitutes for the mainframes that were the symbols of centralized systems, and have since blended into what we call clouds or data centers. This processing power allowed real-time data compression algorithms to be implemented, which unexpectedly improved the throughput of networks that were already undergoing great improvements due to the implementation of new technologies such as fiber optics. Everything then came together, all interconnected; "passive" terminals gained properties of "intelligence", in other words they became true computers, autonomous, with software suited to MMIs but linked to networks of servers in architectures known as client–server, where a specialization of the machines was observed, obtained this time through the software, whereas the equipment became general-purpose, with slogans like Intel Inside. Specialized "real-time" machines disappeared, because the performance that accompanied the rise in power of VLSI, the famous "Moore's law", meant that there was no longer a requirement for any specialization from an equipment point of view; the same is true for operating systems. Sensors and effectors also became
autonomous systems in interaction with other systems, as we saw with SCADA (Supervisory Control And Data Acquisition) systems.

All this technological development meant that the notion of a system progressively emerged for what it has always been: a finalized operational capacity. Systems can now have processing power where and when it is needed, freeing themselves from the constraint generated by the requirement to centralize information processing in order to optimize calculation resources that have, in the meantime, become abundant. Systemic modeling can liberate itself completely from the centralistic vision initially imposed by technology (from certain points of view, a simplification) to become a more human, more cooperative vision between entities that have themselves become systems, which may or may not be specialized, but which have the obligation to cooperate between themselves and with the humans that use them, if they are not to be neutralized or even self-destroyed, drowned in a flood of errors that have become uncontrollable. The complexity of systems increases objectively as a result, which is a risk when it is approached incorrectly, but an opportunity when approached in the right way.

This evolution is very clear in defense and security systems, where a notion that later became very important came to light at the end of the 1980s and the beginning of the 1990s: interoperability. This notion is at the heart of systemics as presented in Part 2 of this book, and its structuring range needs to be appreciated. The model for the exchange of information between systems is the main motive for this.
6 The Problem of Control
6.1. An open world: the transition from analog to all-digital

As briefly mentioned in the recap of historical developments, the 1990s were marked by a profound disruption in the engineering of systems, which from then on were described as "complex", without reference to what "complex" truly means. The polysemy of the word "complex" was, at the time, even vaguer than that of the word "system" in the 1940s–1950s. In fact, as soon as there is feedback, meaning a non-linearity, there is complexity, so we can say that all systems are complex; consequently, the phrase "complex system" is almost a pleonasm.

The control mechanisms whose importance we have stressed, in other words, the feedback loops, were initially purely electromechanical, hydraulic and/or pneumatic, with motors and pumps. The analog devices that provide the command (we call these analog "calculators", but this is almost an incorrect use of language!) have a latency in the form of a time period specific to electronic circuits and generally very small (the propagation of signals is, at our scale, almost instantaneous), which means that the synchronization between the magnitude to be controlled and the controller is excellent by construction. There is only one disadvantage: these devices cost a lot to develop, and the transfer function that they carry out is "hard-wired", as we say in common language, into the circuit itself. In short, they are not programmable, impossible to maintain without a total overhaul and, most significantly, not very reliable, so the devices must be made redundant, which multiplies the cost proportionately and also increases the complexity.

The appearance of the first computers, this time true calculators, in the 1960s, transformed the economic situation, because it became possible to program the transfer function, in the modern sense of the term, and to modify it depending on the requirements of the context of use; the transfer function becomes independent of the tangible substrate, which can, as a result, be more easily reconfigured.
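To illustrate what "programming the transfer function" changes with respect to a hard-wired analog device, here is a minimal sketch of a discrete proportional–integral regulator in Python. The gains and the loop period are illustrative assumptions; the point is that the control law can be modified without touching the tangible substrate:

def make_pi_controller(kp: float, ki: float, dt: float):
    """Return a programmable transfer function: a discrete PI regulator.
    Changing kp/ki reprograms the control law without any change of
    hardware, which is exactly what analog circuitry could not offer."""
    integral = 0.0
    def control(setpoint: float, measured: float) -> float:
        nonlocal integral
        error = setpoint - measured
        integral += error * dt
        return kp * error + ki * integral
    return control

# Latency of the feedback loop: one command every 20 ms (50 Hz), the
# figure the text associates with a motor running at 3,000 rpm.
controller = make_pi_controller(kp=2.0, ki=0.5, dt=0.020)
print(controller(setpoint=100.0, measured=90.0))   # first command: 20.1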
The control requirements (strict compliance with the latency time of the feedback loop) and the processing capabilities of the calculators available at the time were such that the control mechanism was designed around these constraints. The priority requirements were performance and availability. Programming is certainly simpler than with analog "calculators", but it remains difficult, requiring engineers qualified in automatisms and electronics, which means that, in total, the systems in question are limited in terms of both size and number. The "intelligence" of a system designed in this way is reduced to a strict minimum, in the sense that the action carried out is itself a type of reflex, with no room for adaptation to the context of use. For the record, these systems are known as C2: command–control. They were progressively installed and/or substituted for the analog devices already in place. They are described as "real time", which is another incorrect use of language, instead of "fast", because stricto sensu "real time" simply means compliance with a schedule, of whatever kind. One example would be the Normandy landings, D-Day on June 6, 1944, a massive operation on the human time scale, with a schedule that absolutely had to be complied with, on penalty of creating total chaos. It is true, however, that in a motor running at 3,000 rpm, that is, 50 revolutions per second (the frequency of the alternating current in our houses), there are only 20 ms of calculation time per revolution.

Nevertheless, many systems remain in use in which this strict control requirement does not apply; in particular, systems that include human actors in the control loop, or systems where the magnitude to be controlled is not so much the response time as an equitable distribution of the available resources over more or less variable time periods, in order to optimize the progress of the flows of requests input to the system, flows that are measured in the number of transactions made per second (this is part of the AC (algorithmic complexity) measure), with priorities that depend on the contracts established between the users and the system owner.

In the 1980s, progress in microelectronics was so significant, with the arrival of the first VLSI circuits (very large-scale integration, with 100,000–200,000 components per chip1), that the performance requirements of C2 systems were fulfilled ipso facto; the circuits were both faster and more reliable. The engineering constraints, with their breakdown of the cycle times required for control and their in-depth knowledge of the automaton, became less constraining: synchronization with a clock is all that is necessary. The calculation capabilities are overdimensioned, at least when everything is going well, but beware of avalanche effects.
1 One of the major chips of this era, the Motorola 68000, contains 68,000 transistors.
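To make the analog-to-programmable transition concrete, here is a minimal sketch – with invented gains and a trivial first-order plant model, not taken from the book – of what “programming the transfer function” means in practice: the regulation law that an analog device casts into its circuits becomes a few lines of code that can be modified without rebuilding the device.

```python
# A hedged sketch of a programmable feedback loop (all values illustrative).
dt = 0.020            # loop period: 20 ms, e.g. one revolution at 3,000 rpm
kp, ki = 2.0, 0.5     # controller gains: now mere parameters, not circuitry
setpoint = 50.0       # magnitude to be controlled (e.g. 50 rps)

state, integral = 0.0, 0.0
for _ in range(500):
    error = setpoint - state
    integral += error * dt
    command = kp * error + ki * integral   # the "transfer function", in code
    state += (command - state) * dt        # crude stand-in for the physical process

print(round(state, 2))  # converges towards the setpoint
```

Changing the regulation law is now an edit-and-reload operation, which is exactly the economic transformation described above.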
REMARK.– With an automaton capable of carrying out 1–10 million operations per second, we can execute 1,000 to 10,000 operations in 1 ms, which is more than enough in many cases; we will even be able to integrate some “intelligence”. We note that in devices with high dynamics, such as anti-missile missiles – let us say Mach 10, about 3.4 km/s – 1 ms corresponds to a distance of some 3 meters, sufficient to ensure the destruction of the target missile. For precision to the nearest centimeter, atomic clocks are required, as in the GPS and GALILEO systems.

Due to this, the technology of what has become known as C3I or C4I systems (command, control, communications, computers, intelligence) is becoming more widespread. It is becoming accessible to less well-trained programmers, available in far greater numbers, who no longer have the added constraint of mastering in detail the system requirements that nevertheless still exist today. This is a risk that needs to be managed as such, in terms of advantages and disadvantages. However, in a situation of potential or declared breakdown – non-compliance with the service contract, problem of system safety – strict control of the system becomes necessary again. In fact, the two modes exist in parallel: in a C3I/C4I system, an element of C2 will always remain.

The final outcome of this evolution that began in the 1990s is a new family of systems, named systems of systems for want of a better term (refer to the definition given by M. Maier2, on the author’s website), whose dominant characteristic is interoperability (we refer to the definition given by ISO, in summary: “Interoperability is a characteristic of a product or system, whose interfaces are completely understood, to work with other products or systems, present or future, in either implementation or access, without any restrictions”). Or even, to reuse the classic acronyms, C4ISR or C4ISTAR systems (command, control, communications, computers, intelligence, surveillance, target acquisition and reconnaissance). We observe in passing that interoperability between systems is a means of growth for organisms/organizations that enter, by virtue of integration, into a symbiotic relationship to increase their operational scope and their survival ability (resilience), up to a certain size threshold: if this is not the case, the system of systems has no competitive advantage.

There are numerous examples of systems of this kind. We can cite air traffic control in airport zones, with thousands of daily flights and millions of users; rail services across the English Channel and the thousands of ships and boats that pass through it; urban transport networks and cities in general; or, in the field of state security, crisis management and defense/security systems, etc.

In this chapter, we will provide a few fundamental systemic characteristics of these types of systems, because they are obviously systems in their own right that moreover fulfill the “definitions” recalled in Chapters 2 and 8; only the scale has changed. It is not a question of going into technical details here, nor even into the more or less generic architectures of these systems, except when the “detail” is significant from the systemic point of view. On all these systems, there is abundant, good-quality research literature (be wary all the same, because a selection needs to be made), some of it excellent, such as the successive editions of the book by E. Rechtin, Systems Architecting (the 2nd edition in collaboration with M. Maier), concerning C2, C3, C4ISR, etc. And of course, information systems, for which there are also excellent references (see, for example, the various books by Y. Caseau, all published by Dunod). Regarding software alone, but in a system context, refer to the book Architecture logicielle3.

2 An aerospace fellow, and an author and practitioner of systems architecture.
3 J. Printz, Architecture logicielle, 3rd edition, Dunod.

The fundamental question thrown up by this development is whether the systems constructed in this way remain controllable, in the cybernetics sense (see Chapters 1 and 2). We can – and must – go a step further in our questioning, and ask whether we know how to relax the heavy constraint of controllability, insofar as we still know how to guarantee an acceptable service contract for the user, taking into account a context of use that can itself vary. This opens up the “system safety” dimension, in the widest sense of the term, which leads ipso facto to the questions (1) of the acceptable risk and (2) of the ethical problems that become insurmountable given the more or less foreseeable, or even damaging, behaviors of human actors, users and/or engineers. These will not be considered in this book, but the problems still need to be laid out correctly for there to be any hope of finding at least one acceptable answer.

We give just one example to provide a basis for a detailed explanation. The user of a mobile telephone network expects, in “normal” times, an SMS sent at instant T to reach its recipient within a few minutes, at T + Δt. If this takes place on December 31 at midnight, they will not be shocked that, in these exceptional circumstances, the same SMS takes 1 or 2 hours. In this situation, their only requirement will be a guarantee that it will be correctly conveyed to its destination and, if this is not the case, to be warned in advance so that a new attempt can be made.

However, there is a limit to this type of logic. Here and there, users are advised to turn on their household appliances at night because electricity is cheaper. Taken to the limit, this is equivalent to making each of us a controller of our own behavior and of hazards that we do not control, which is contradictory to the very idea of control, because it will be very difficult to set out the rules of the game for 30 million users when everyone is involved in playing it. Even worse would be that, at the end of the day, the system would
impose its decision on users. If the rules are not complied with, what will the users and/or the central controller do? We do indeed feel that the reference situation can rapidly degrade towards the unacceptable, from an ethical point of view. This will end up resembling the highway code, with insurance, fines, privileges, legislation and standards, a court… in short, cheating! In the event of a disaster, where does responsibility lie? This is the same question as for self-driving cars. In short, Hell at the corner of the street!

Relaxing the rules a little, there are two levels of control:

1) Hard/strong control, which requires actions of great precision, and therefore a quantitative measure of the parameters to be controlled (including the behaviors of actors, users and/or equipment, hence an increase in skill) and of the dynamic along which these parameters evolve. The system is deterministic, and it is always possible to go back in time for post mortem analyses that allow the system to be improved if a malfunction is found. The system is constructed in a modular fashion, using autonomic blocks for which the outline block diagram has been provided. It is also hierarchized into layers, in the strict sense of the term.

2) Low/weak control, more qualitative whenever possible, with variables grouped together and with varying tolerance margins for the parameters that are controlled. The system is no longer necessarily hierarchical. The requirement of reversibility and determinism at all scales is abandoned; in the event of a breakdown, it is now sufficient to restore a coherent state of the system, insofar as it is possible to warn the users who will be subject to an interruption of services.

An approach of this kind is equivalent, on the one hand, to overdimensioning the system’s capabilities in such a way as to keep it away from risky situations/zones – without really knowing the corresponding geometry – as far as all this is acceptable from the point of view of PESTEL; and, on the other hand, to permanently monitoring the saturation level and the correct operational state of the critical resources that determine and condition the capacity. If the saturation threshold is reached, the system must be reconfigured; to do so, it must have a central core of functions, to be defined, from which it will be possible to reconstruct the system. These are what are known as bootstrap mechanisms, many of which exist in operating systems and communications systems (refer to the stack of network protocols such as TCP/IP; also refer to the IT stack, Chapter 3 and Figure 3.3).

We can of course imagine situations that are even less controlled4, but in that case, we would be leaving the engineering domain to enter the realm of games of chance.

4 In their key article, already cited, Behavior, Purpose and Teleology, A. Rosenblueth et al. give a certain number of them.

Experience teaches us that situations that are considered improbable often
turn out to be certain, as was demonstrated by the tragedy of the Challenger5 space shuttle, or by the 2008 financial and economic crisis (which we are still stuck in), arising from one-sided, systematic deregulation with no other safeguard than the supposed “wisdom” of the market “laws” and the honesty of the users, all presumed to be saints.

5 Refer to the comprehensive work by D. Vaughan, The Challenger Launch Decision, 1996.

In principle, the problem can be presented as follows (Figure 6.1):

a) The inside/outside frontier is removed. The criterion of belonging depends on the levels of interoperability and coupling that are sought, depending on user requirements. The set created from the union of several systems defines a certain processing capacity – this is the AC measure of complexity – in such a way as to satisfy the service contracts of possibly large numbers of users (30 million in the case of the French electricity system; 15–20 million for a mobile telephone operator, etc.).

b) Flows are eminently variable, defined by statistical laws that are more or less well known, or by the situation at instant T (unforeseen events, crises, etc.). For example, EDF has consumption statistics per day, per week, per month and per year, which allow demand to be anticipated and the electrical system to be configured accordingly, and which allow consumption to be correlated with certain events in the environment (the weather, or the football match that must not be missed and that will cause a surge in energy consumption).

REMARK.– If we return to the case of an SMS sent on December 31 at midnight, reasoning like an engineer (see the sketch after Figure 6.1), we see that with 10 million users who want to send their good wishes to 10 friends and receive an answer, this will lead to something of the order of 200 million exchanges. Simply to manage the memory capacity, counting 1,000 bytes per exchange and without considering the back-ups required by the service contract, the system must mobilize 200 gigabytes of memory in a few minutes just to memorize the load. Knowing that an exchange can easily generate a dozen internal requests (delivery receipts, various notifications, back-ups, etc.), the system will have to process approximately 2 billion requests without incident, whereas the peak flow is at the very most 50,000 requests per second. We see that, in the best case, on the order of 40,000 seconds – approximately 10 hours – are required for the load to finish flowing. We understand data center operators’ and directors’ obsessive fear of a breakdown.

c) The access points, the system’s inputs and outputs, are numerous, but not ordinary. The system interacts with the environment thanks to a set of sensors and effectors that are all transducers, in other words, energy converters. This equipment has a hybrid status, being both inside and outside: inside, because they
contribute to providing the system with the information it needs on the various flows that pass through it; outside, because they can be dispersed over the entire spatio-temporal region where the system is present, and are thereby exposed to the hazards of this environment. With SCADA, we will see a concrete example of this type of equipment, which plays the role – everything else being equal – of “sense organs”. This is particularly noticeable in the evolution of the latest generation of radars, or in robotics.

d) Concerning the technical part of the system, system safety engineering needs to be taken into account, in order to maintain the integrity of the constituent equipment and its interactions, in a manner coherent with the overall service contract built from the service contracts negotiated with users (including rates), and in order to find the best compromise – never simple when there are millions of users and thousands of heavy pieces of equipment like those in large data centers.
Figure 6.1. System of capability logic. SLA stands for service-level agreement
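The arithmetic of the REMARK in point b) can be checked line by line; the sketch below merely restates the order-of-magnitude figures quoted in the text (illustrative assumptions, not measurements).

```python
# Back-of-envelope check of the New Year's Eve SMS load (figures from the text).
users, friends = 10_000_000, 10
exchanges = users * friends * 2           # each wish gets an answer: 200 million
bytes_per_exchange = 1_000
memory_gb = exchanges * bytes_per_exchange / 1e9
internal_requests = exchanges * 10        # receipts, notifications, back-ups...
peak_rate = 50_000                        # requests per second, at the very most
drain_hours = internal_requests / peak_rate / 3600

print(memory_gb, "GB;", round(drain_hours, 1), "hours")
# -> 200.0 GB; ~11.1 hours: the "approximately 10 hours" of the remark
```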
These systems never operate on an all-or-nothing basis; it is therefore necessary to manage deterioration modes, here again optimizing the service contract and the survivability (system safety). Lastly, certain legislative stipulations must be taken into account to comply with the law – the L in PESTEL – which is not necessarily in phase with the specific engineering constraints.
We can give the example of the public switched telephone network (PSTN), which is a good illustration of this problem, quite different from that of a system like the energy transport network, where we are truly in a pure control mode. In the PSTN, any user can potentially be connected to any other user. Since there are 30–40 million users with a landline (many more if we include mobile telephones), a “colossal” resource would in theory be required, equivalent to 30 × 30 × 10^12, i.e. of the order of 10^15, physical lines. Given that not everyone telephones everyone else all the time, we can establish a hierarchy of connections to the automatic switches and telephone exchanges – massively digitalized since the 1970s – and ensure that the stock of resources is compatible with an economic equilibrium between the cost of communications and the cost of infrastructure (see the Erlang sketch at the end of this section). Let us recall that until the 1920s, all this was done manually, via human operators6.

6 For more information on the first automatic exchange in Paris, in 1928, refer to the Wikipedia entry “Histoire du téléphone en France”: https://fr.wikipedia.org/wiki/Histoire_du_téléphone_en_France.

The role of the PSTN system is to manage this resource as best possible, ensuring that a line that, at instant T, connects two users is never cut. If there are not enough available resources, the user requesting a service is placed in a queue, or asked to call back later, possibly with automatic recall functions.

Systems of this kind follow a construction/operational logic that is sometimes referred to as “capability”, which means that the system has, at instant T, a certain capacity for action (refer to the notion of an active unit, fundamental in this context) that must be managed as best possible, taking into account the mission attributed to the system. To do this, the system must first be created in the form of an initial germ (an embryo), then grow until it reaches the capacity required to carry out its mission. Lastly, when everything is finished, it must retreat in the right order. In the world of IT systems, this is what is known as “continuous” integration, but it is an entirely general notion that conditions the controlled growth of the system and the means of this growth.

In order to correctly understand the nature of this logic, let us start with the notion of a technical object/system as presented in Chapters 2–4. In this approach (see Figures 2.4, 4.2 and 4.3), we are dealing with three categories of entities, Simondon’s triplet {U, E, S}, or three evolving trajectories: the system and its equipment itself (in other words, TS), and the two communities, users and engineering (in other words, TU and TE), each with its specific complexity, as we have seen. Each of these entities has its own growth dynamic, its own constraints and its interactions with the two others – in total, seven fundamental interactions (see Figure 6.2). At any given moment, the coherence of these various trajectories and of their interactions is more or less guaranteed, which poses the problem of the coherent evolution of the whole, and therefore of its control. We do indeed note
that what is denoted as “present” in the figure is not an “instant” in the mechanical or Newtonian sense of the term, but a quantified spatio-temporal “zone” in which an action/interaction is possible/feasible, taking into account the timing characteristics of the presences, in the active unit sense. This is a quantum, or an elementary semantic transformation “step”, which can require several atomic transformations in the transactional sense.
Figure 6.2. Trajectories of {U,S,E} capability logic
For example, it is pointless to increase the number of users and/or their level of interaction with the system if the system does not have the resources to carry the load. If these two constraints are complied with, then, from an engineering point of view, the support and maintenance functions will need to be correctly provided, etc. We see that the “truth” of one or the other will depend on the overall context (holistic, as some say) and on the interactions that organize the whole in the long term, in such a way as to optimize the flow of flows depending on the available capacities, which can vary over time. In the event of saturation, it is best to limit the input flow, creating zones or waiting areas, and minimizing interactions. In the system context, we therefore encounter a notion analogous to the principle of least action discovered by the mathematical physicists of the 18th century, such as Maupertuis, and especially of the 19th century, with Lagrange and Hamilton; and also analogous to the new MEP principle in energetics. In systemics, it is always necessary to “conserve” the interaction capacity of the system; a system that no longer interacts is a “dead” system.
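The PSTN dimensioning argument above (“not everyone telephones everyone else all the time”) is classically quantified with the Erlang-B formula of teletraffic theory – a standard result, not specific to this book. A sketch with an invented offered load shows why a hierarchy of exchanges needs only thousands of circuits, and not one line per pair of subscribers:

```python
# Erlang-B: probability that a call finds all trunks busy (standard recursion).
def erlang_b(offered_erlangs, trunks):
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

offered = 2_000.0   # assumed load at one exchange (erlangs); illustrative only
trunks = 1
while erlang_b(offered, trunks) > 0.01:   # target: fewer than 1% blocked calls
    trunks += 1
print(trunks)   # a little over 2,000 circuits, nowhere near the ~10^15
                # physical lines a full user-to-user mesh would demand
```

This is the economic equilibrium between the cost of communications and the cost of infrastructure in numerical form: accept a small, contractual blocking probability, and the required resource shrinks by orders of magnitude.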
6.2. The world of real time systems

We will not retrace the history and/or the theory of real time C2 systems (communication and control); there are already excellent books and reports for that7.

7 For example, refer to Understanding Command and Control by D. Alberts and R. Hayes, of the DoD/CCRP Command and Control Research Program; easily downloadable.

However, from our systemics viewpoint, there is a fundamental point in these systems which allows the problem of the control loop to be examined and one of its essential aspects to be understood. To act, or react, it is first necessary to be informed: (a) about the evolution of the environment; (b) about the internal state of the system because, as we have said, everything changes. In the very first systems – and this was made perfectly obvious by N. Wiener, with the help of C. Shannon and their colleagues at MIT – the communication dimension is essential. On the one hand, it is necessary to acquire the information where it is and transmit it to the command “center” that draws up the orders; on the other hand, the orders must be transmitted to the relevant actors. Communication is therefore needed at all levels. The element of the system in charge of this function, in systems known as “real time” at least, has been and is still known as SCADA (supervisory control and data acquisition). But the underlying concept of data/information acquisition is entirely general: any system that interacts with its environment necessarily has a function of this kind.

Whatever the nature of the supervised processes, it is necessary to collect relevant, high-quality information, at a pace that is compatible with the dynamic of these processes. Rapid acquisition in “real time” corresponds to the case of an engine or of a chemical reactor, for highly dynamic processes; acquisition in “reflection time” corresponds to organizational and/or human processes, which take place at the pace of human actions, with a variety of MMIs almost without technical limitations. For example, refer to BYOD (bring your own device) approaches, a growing phenomenon in the world of information systems, and not only in civil ISs.

In all configurations, it is a matter of constructing information – using signals observed either by human means or by measurement instruments, generally a combination of the two – meaning something that has a meaning for the decision/command center. This acquisition obviously has a close relationship with, on the one hand, the system’s end purpose, the one being the reciprocal of the other, even if the acquisition field is sometimes larger than would strictly be necessary; and, on the other hand, the nature of the internal parameters of the system, which are conditioned by external data. In the informational “noise”, which will, in fact, probably have a meaning, it is necessary to know what to look for to have any hope of taking action. In the firing systems studied by N. Wiener, the role of the radars is not simply to collect the echoes of electromagnetic waves on airplanes; it is, in particular, to
construct the most probable trajectories of the target airplanes. In some ways, the radar “knows” that it is looking for airplanes and not a flight of birds, which would not at all have the same dynamic. In an “intelligent” building, a data acquisition system monitors a certain number of parameters which characterize the real or supposed “intelligence” of the building, for what is known as the building management system (BMS). This is how smart and/or energy-positive buildings are designed, in compliance with the latest environmental standards (HQE standards). In an online sales system, typical client profiles can be determined and behaviors can be targeted thanks to the autonomic management of the purchases made by clients; the system can thus be proactive with respect to clients and suggest such or such a purchase, which can quickly become very unpleasant. All these examples demonstrate that, in terms of form, we are dealing with the same problem: giving form to the raw data collected by the various sensors, and correlating the data obtained with the end purpose of the system – an end purpose that predetermines the form to be identified, given the mission and the service contract. The block diagram of an acquisition system is given in Figure 6.3. The data acquired by the sensors can be processed in “push” mode, in which case they are immediately transmitted towards the pilot, which will do the same with regard to the acquisition and/or the concentration (“fusion” of data). They can also be processed in “pull” mode, which means that the data remain with the sensor/pilot until an explicit reading action is carried out.
Figure 6.3. Block diagram of data acquisition. For a color version of this figure, see www.iste.co.uk/printz/system.zip
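As a toy illustration of the two acquisition modes (the sensor API below is invented for the sketch, not taken from the book):

```python
# Push vs pull acquisition, reduced to its simplest expression.
import queue

class Sensor:
    def __init__(self):
        self.buffer = queue.Queue(maxsize=64)  # pull mode needs local memory

    def sample(self):
        return 42.0                            # stand-in for a real measurement

    def push(self, pilot):
        pilot(self.sample())    # push: transmitted at once; the network must
                                # be able to carry everything, even bad data

    def store(self):
        self.buffer.put(self.sample())   # pull: data stay with the sensor...

    def pull(self):
        return self.buffer.get()         # ...until an explicit reading action

readings = []
s = Sensor()
s.push(readings.append)     # pushed sample arrives immediately
s.store()                   # pulled sample waits to be read
readings.append(s.pull())
```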
In the pushed configuration, the communication network needs to have a capacity that allows it to carry all the data, even data that are possibly incorrect. In the pulled configuration, a minimum of memory is required in the sensor/pilot to avoid losing information. The essential parameter of a data acquisition element/system is the pathway of the data, generally a communication network, which itself has its own logic, given the nature of the data transmitted and the spatio-temporal environment in which the material elements are deployed. A clock is generally essential – like the high-precision atomic clocks in a GPS system – to synchronize the signals so that they have a meaning, for example in order to calculate an actual position. In current systems, there can be thousands of sensors, which can/must possibly be remotely operated, for example, in remote surveillance networks on public roads or in the urban transport systems of large agglomerations. In systems that operate 24/7, it is necessary to carry out maintenance operations without interrupting the service, which requires suitable architectures whose details we will not give here. But these architectures are essential for building the autonomous or semi-autonomous systems which now accompany users in their daily lives. C2 systems are purely reflex systems, with no capacity for memory of past actions – except, in certain cases, of the last action carried out – and therefore without “intelligence”. They follow as closely as possible the logic of the fundamental feedback loop described in Chapter 2, as shown in Figure 6.4.
Figure 6.4. Block diagram of a C2 system
For systems of this kind, the latency time of the loop must be compatible with the dynamic of the processes on which the actions are carried out. If the gap between the decided action and the evolution of the processes is too large, the corrections carried out to compensate for the drift will be too energetic, which will cause vibrations and/or oscillations.
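This effect is easy to reproduce numerically. The sketch below (invented gains and time steps; a crude stand-in for a real process) applies the same corrective action to an increasingly stale measurement; beyond a threshold, the loop no longer converges but oscillates with growing amplitude:

```python
# Latency in the feedback loop: the controller acts on an old measurement.
def simulate(delay_steps, gain=1.5, dt=0.1, steps=300):
    history = [0.0] * (delay_steps + 1)   # measurements as the controller sees them
    x, peak = 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - history[0]          # error computed on stale data
        x += gain * error * dt            # "too energetic" correction when late
        history.append(x)
        history.pop(0)
        peak = max(peak, abs(x))
    return peak

for delay in (0, 5, 15):
    print(delay, round(simulate(delay), 2))
# The peak overshoot grows with the latency; for the largest delay the loop
# oscillates ever more strongly instead of settling on the target value.
```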
This was a serious point of concern for N. Wiener, who saw in this mechanism the basis of diseases of the nervous system such as Parkinson’s disease. But the reasoning must not be inverted, because the symptom is intrinsic to the control mechanism, independent of any reference to biological systems, some of whose complexities are still a long way out of our reach8.

In a “real time” system, synchronization is generally done using a clock, but there are a large number of situations where it can be done on a system status. For example, in air traffic control, taking off or landing is only possible if the runway is free. In systems that manage the sharing of resources between users, synchronization is carried out based on the rules of sharing of these resources, hence the importance of signaling equipment, such as the traffic lights that we see almost everywhere: in towns, in ports, in airports, etc.

6.3. Enterprise architectures: the digital firm

This section further develops the subject that was briefly touched on in Part 1 of this book. Since the 1990s, we have, in fact, been observing a fusion/integration of fields that were until then considered to be distinct (see Figure 6.5).
Figure 6.5. Digital information and the company

8 For further persuasion, refer to the book by J.-C. Ameisen, La sculpture du vivant, Le Seuil; and to the book by P. Kourilsky, Le jeu du hasard et de la complexité, Odile Jacob, 2014, concerning the immune system.
This fusion is accompanied by an integration of various time-related logics which must be harmonized to allow the invariants of the systems integrated in this way to be maintained (see Figure 6.6).
Figure 6.6. Time-related logics and the digital company. For a color version of this figure, see www.iste.co.uk/printz/system.zip
We thus see three logics come to light: (1) a logic of human time, given our ergonomic, physiological and sociodynamic constraints – a “human” time; (2) a logic relating to the equipment that is integrated and supervised by the overall system to maintain its integrity – a “real” time; (3) an intermediate logic, denoted “constraint time” in Figure 6.6, which incorporates the two previous logics in such a way as to ensure the overall coherence of the processes implemented in the actions carried out by the system in its environment. There is abundant literature on all these subjects; for information systems in particular, we recommend the works by Y. Caseau, published by Dunod.
What we are interested in here are the lessons to be learned, in systemic terms, from all these evolutions: for example, the temporality, the various representations of information, the general organization of exchanges (interoperability), controllability, and errors.

6.4. Systems of systems

As we have seen previously, the 1990s were an era of disruption in artificial systems engineering. From the point of view of systemics, these new developments were an opportunity to better understand the mechanics of systems, as much in terms of architectures as in terms of their dynamic behavior and their evolution. Deep structures would also progressively emerge – typically “language-focused”, analogous to grammars – and of primary importance.

In the field of defense and security systems, like those referred to in the various white papers that define the general requirements and missions, the aim is to integrate the systems in use in the various professions so as to improve overall efficiency, and to make this new set more coherent without, however, damaging the necessary autonomy of action. It is in this environment that the notions that now form a structural basis were able to surface: the notion of systems of systems from the French MINDEF, and that of “architecture framework”, such as the US DoD’s DoDAF, the UK MoD’s MoDAF and the NAF for NATO, all of which resulted in a civilian version promoted by the Open Group: TOGAF (The Open Group Architecture Framework).

Over and above the opaque acronyms, the objective of all these efforts is to master C4ISTAR systems, more specifically the STAR part (Surveillance, Target Acquisition and Reconnaissance, together with Intelligence). The system monitors its spatio-temporal environment, and therefore itself, to ensure that it is not threatened in the broad sense of the term, including from a security point of view. This is a BMS pushed to the best that technology has to offer. This autonomic management implies the acquisition of all types of information (via SCADAs, as mentioned above) that are likely to be of interest to the system, and the establishment of correlations between events: hence the intelligence-gathering dimension that targets the actions in progress, and the reconnaissance dimension that is the equivalent of giving a form, a meaning (via correlations), to what is monitored. Since the system’s resources are by definition limited, it is necessary to conserve capacities, to target the actions to be implemented, and to evaluate them before deciding on an action. Targeting is therefore the implementation of the logic known as capability.
Using the block diagram in Figure 6.1, we now envisage systems whose objective is to optimize and control as best possible what happens in a given spatio-temporal zone, so as to avoid certain emerging phenomena that are undesirable for the users of the zone. Here, we are very close to the geometrical definition of systems given by R. Thom and to that of D. Krob (see Chapter 3 and the author’s website). The problem to be solved is not so much the control, which does remain a fundamental characteristic, as ensuring that what happens in the zone in question, of varying perimeter, maintains a certain number of properties that constitute invariants of the system (in systems engineering, we speak of essential requirements) – in other words, what it must preserve, conserve, in all circumstances (a toy sketch of such invariants appears after the lists below). This implies the management of deterioration modes, with rules of priority which can themselves vary as a function of the situations encountered, situations which are largely unpredictable.

If, for example, we take the case of an airport zone, like Roissy or Heathrow, the requirement is that, on a permanent basis:

– airplanes can safely take off and land;

– passengers can access the sectors where they board and/or disembark, drop off/pick up their bags, etc.;

– the zone has sufficient resources (energy, trained personnel, equipment, etc.) to function smoothly and to preserve the security of the people and goods located in the zone, etc.

In this type of system, which has a very broad spectrum, the actions to be carried out by active units (AUs) and the processes to be implemented will require the cooperation of a certain number of systems, each of which is already in itself a complex system, but of a lower rank in the hierarchy of the system of systems. Each one of them serves a particular community, one or several professions, and has its own engineering teams. Taking Simondon’s technical object/system approach as a governing principle, we must envisage a triplet of three trajectories that correspond to Simondon’s triplet, in other words, {TU, TS, TE} (see Figure 6.2). Operationally, this will be expressed as:

– cooperation between the user communities TU1, TU2, etc.;

– cooperation between the engineering teams TE1, TE2, etc.;

– technical coherence of the hardware/software components of the equipment that constitute the system.
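As a toy reading of these “essential requirements” (names and thresholds below are invented for the illustration), the invariants can be expressed as predicates that the system of systems must keep true, switching to a deterioration mode when one of them fails:

```python
# Invariants of an airport-like zone, checked against the current situation.
zone_state = {"runway_free": True, "energy_reserve_pct": 12, "staff_on_duty": 140}

invariants = {
    "takeoff/landing possible": lambda s: s["runway_free"],
    "sufficient resources":     lambda s: s["energy_reserve_pct"] > 20,
    "security of people/goods": lambda s: s["staff_on_duty"] >= 100,
}

violated = [name for name, holds in invariants.items() if not holds(zone_state)]
if violated:
    # deterioration mode: apply priority rules rather than stop everything
    print("degraded operation; reconfigure for:", violated)
```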
All of this, moreover, will change over time – hence the diagram in Figure 6.7, which summarizes the fundamental problem of systems of systems.
Figure 6.7. Establishing coherence between models
As can be seen in Figure 6.7, the fundamental recurring theme of the coherence of a system of systems is the structure consisting of the set of three families of models which, if they have been correctly formalized in terms of language and grammar, can be analyzed and studied with suitable tools and methods – in particular, all those relating to the theory of languages, including those that incorporate the various types of control (synchronous, asynchronous, shared resources, capacities, events, priorities, etc.) and the temporalities specific to each one. These three models refer to the same thing: they share the same information about the situation that justifies the system’s existence, with the same meaning, but each with abstractions and/or mechanisms that are specific to it. A magnificent example of a model of uses, for C4ISTAR crisis management systems, is the DoD document Common Warfighting Symbology, which is a basis for the human/machine interfaces of these systems; it is even used by certain strategy video games. This document defines a graphics language, an ideography, containing several thousand symbols, which allows all those who apply it to interact in a
coherent manner in a crisis zone, with extension facilities that allow new symbols to be created (see Figure 8.9). In use, this type of MMI, based on 2D/3D vision, turned out to be much more effective than the automatic translation of natural languages that was initially envisaged. In information systems, this type of configuration also exists, for the same reasons, although the context is completely different: the architectural mechanisms distinguish three structures (and one could say the same of models, and even metamodels): external (the outside of the system), logical (or conceptual), and physical (or organic). The latter two schemas characterize the inside, where the logical structure is an abstraction of the physical structure.
7 Dynamics of Processes
Thanks to his famous article “On computable numbers, with an application to the Entscheidungsproblem”, then his involvement in deciphering the codes of the German army’s Enigma machine (using an electromechanical machine: the Bombe), and his work on the “other” Turing machine – the ACE (Automatic Computing Engine), very similar to what von Neumann was doing at the same time on the other side of the Atlantic – we know that A. Turing was interested in two problems:

1) With the machines that were then in the process of being invented, could we reasonably simulate certain behaviors typical of human intelligence, such as the resolution of well-defined problems (deciphering a message, playing chess, analyzing human language, etc.)? In other words, do we know how to record knowledge in machines in the form of logical facts – the sky is blue; snow is white; if I drop a stone, it falls according to the laws of falling bodies and gravitational forces [x = ½ g·t^2, v = g·t, E = ½ m·v^2, …] – etc.? And do we know how to write programs that can simulate human questioning? To design machines that are capable of learning? Or machines that will allow us, using requests and/or questions formulated by an operator, to deduce an acceptable answer from the recorded knowledge? Turing summarized this problem in an article published in the journal Mind, in 1950, “Computing machinery and intelligence” – the origin of “artificial intelligence” and of machine learning. This article also sets out the test known as the Turing test, which attempts to distinguish between a man and a machine1;
2) given the newly available calculation capacities which allow non-linear problems to be tackled, would it also now be possible to return to the problem of morphogenesis that all biological processes demonstrate, from the very “simplest” 1 Refer to the translation and the commentary by J.-Y. Girard, La machine de Turing, Le Seuil.
such as cellular division to the most complex such as living species, or even to super-organisms like social insects or the symbiotic relationships between different cooperating species? Like all good scientists, Turing left problems that he considered too complex to one side in order to tackle them when there was a real chance of success, given that the correct formulation of the problem was not even within the reach of scientists at the time. Instead, he chose to take an interest in certain recently discovered periodic chemical reactions2, a necessary part of the chemistry of living things. These were much more basic, but it was essential to understand their interactions; their results are in a visible dynamic, oscillating form. Hence in his article from 1952 “The chemical basis of morphogenesis” which is still regularly cited in the literature of theoretical biology, there is the following conclusion: “It must be admitted that the biological examples which it has been possible to give in the present paper are very limited. This can be ascribed quite simply to the fact that biological phenomena are usually very complicated. Taking this in combination with the relatively elementary mathematics used in this paper one could hardly expect to find that many observed biological phenomena would be covered. It is thought, however, that the imaginary biological systems which have been treated, and the principles which have been discussed, should be of some help in interpreting real biological forms.” The article’s bibliography is quite short, but cites two extremely interesting works concerning morphogenesis: the work by D’Arcy Thompson, On Growth and Form, and the work by C. Waddington, Organizers and Genes, in which he began to talk about “epigenetic landscapes” – precursors of the 1957 “Strategy of genes” that Turing never knew about due to his premature death in 1954. We can justifiably acknowledge that Turing was capable of generalizing in order to make simplifications, thus helping us to discover the right conceptualizations. Totally different in appearance, Turing’s two problems have a notion of shared processes: cognitive processes in (1) and physical–chemical processes in (2). Without referring to it by name, Turing applied the thesis that carried his name, a thesis known as Church-Turing, according to the terminology of the 1990s3. Without wishing to bring Turing back to life, his working hypothesis is as follows: “If I understand what is happening, I must be able to display a ‘machine’ – in this case a Turing machine and its programming, which materializes what I think I have understood. I would then be able to carry out comparative tests between reality and what is done by the coded processes in the machine. Depending on the observed results, I will modify the machine in consequence, for a new cycle of tests.” All this was transposable and programmable on machines of that era, with a few difficulties 2 Refer to B. Zhabotinski’s reactions, among others; see Wikipedia, for basic information. 3 As an introduction to this problem, see the work by G. Dowek, Les métamorphoses du calcul, cited previously. For more comprehensive reading, see the articles and works by D. Deutsch, and, in particular, L’étoffe de la réalité, in English, The Fabric of Reality, Penguin, 1997.
given the limitations and the performance of the equipment of the time. A fortiori, it is therefore also possible on today’s equipment, with performance approximately a million times better and programming facilities that allow the cooperation of thousands of processes to be put in place, notwithstanding a certain “calculation” architecture that is also known as software architecture. This holds for any process, particularly those of the real world, that we attempt to decipher, insofar as we understand it. It gives rise to calculation processes which, if interpreted correctly, are an abstract image of reality and/or nature, and which are consistent and complete so long as they remain within the domain of validity of the theory in question (never forget Korzybski’s rule: “the map is not the territory”4). A logical form has therefore been constructed – or more exactly a set of coherent logical forms, because the machine can vary, thanks to programming, almost to infinity (in other words, the very large, “immense” numbers associated with information can be reached); these forms are in a dual relationship (in correspondence) with the real world that we are seeking to understand.

REMARK.– Let us recall that a piece of software such as a word processing program represents approximately 250,000 lines of source text, in other words, approximately 2 million characters, and hence a combinatorial of 50^2,000,000 (with 50 typographical signs), that is, a number of configurations of the order of 10^3,400,000, since log10(50) ≈ 1.7 (see our previous studies on complexity).

This is obviously an absolutely fundamental point for founding systems sciences – systemics – on a serious mathematical basis that will allow engineers to work with safeguards in place and allow decision-makers to advance in total serenity, or at least a little more serenely, knowing how to anticipate the consequences of the decisions that they make. This is therefore the moment to specify precisely what we understand by process, a term used intuitively in Chapter 2.

4 See A. Korzybski, Science and Sanity, from which this quote, used endlessly in certain fields of systemics, has been taken. The comprehensive work is worth reading; see https://en.wikipedia.org/wiki/Alfred_Korzybski.

7.1. Processes
Processes are the primary material and the basic motivation of the language used for the natural transformation phenomena that we can all observe, like a chemist monitoring their reactors or an agriculturalist monitoring the correct growth of the plants that they cultivate. Any transformation is first of all a spatio-temporal energy
phenomenon, meaning a point in 3D space and a duration – whether at the scale of the chemist’s molecules and atoms, or at the scale of the farmer, the engineer or the architect who adapts and transforms reality. The quest for a “correct” transformation language has been present in the human mindset for millennia. The language of astrology in the era of Kepler – Mathematicus to Rudolf II, emperor and alchemist – had the ambition of demonstrating how the configurations of the stars could transform the course of the life of mankind and of human societies. This is why it was necessary to measure the movement of the planets as precisely as possible, which resulted in considering elliptical movements rather than circular ones, so as to “stick” to the measurements made for the planet Mars by T. Brahe. The alchemists, for their part, believed in a mysterious analogy, a prefiguration, between what happened in their furnaces and what could possibly happen, by a strange sympathy, in their own minds; a little like an aid to Socrates’s “know thyself”, a maxim that they all knew. Newton was one of them5, and his writings on the subject of alchemy are much more abundant than his mathematical writings.

Contemplation of mathematical truths was compulsory for the members of Plato’s Academy. At Aristotle’s Lyceum, it was replaced by logic and nature, in a kind of continuum from physics to “metaphysics” – a term that Aristotle never used, because the books about “metaphysics” are simply those that follow “after” (the literal meaning of the Greek meta) the books on physics, just as the Organon is where his logic is described. All these languages were inconsistent and incomplete; they were open to varied interpretations, hence the appearance of numerous schools of philosophy and of just as many supposed “masters” who gave lessons. But these languages would later herald those of modern science, and the most famous of them has survived the centuries: the geometry of Euclid’s Elements, employed by Newton in the same fashion as by Galileo – Newton was perfectly acquainted with the Elements and used it as the model for his Principia Mathematica. The “modern” version of the Elements is D. Hilbert’s treatise, Foundations of Geometry, whose first edition dates from 1899.

In chemistry, the turning point, the watershed, was crossed by Lavoisier. By introducing precise measurements and reproducible experimental protocols, he highlighted the first major phenomenological law of chemistry – “nothing is lost, nothing is created, everything is transformed”, without loss of matter – from which all the others later arose. In mechanics, the counterpart is the principle of conservation of energy – “the sum of kinetic energy + potential energy is invariable” – written in mathematics by Lagrange and Hamilton, among others.
5 Refer to his reference biography, Never at Rest, by R. Westfall.
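In modern notation (a standard textbook derivation, not specific to this book), the mechanical conservation principle follows in one line from Newton’s law when the force derives from a potential $V$:

\[
m\ddot{x} = -\frac{dV}{dx}
\quad\Longrightarrow\quad
\frac{d}{dt}\left(\frac{1}{2}m\dot{x}^{2} + V(x)\right)
= \dot{x}\left(m\ddot{x} + \frac{dV}{dx}\right) = 0,
\]

so the total energy $E = \tfrac{1}{2}m\dot{x}^{2} + V(x)$ is an invariant of the motion – precisely the kind of quantity that the formalisms of Lagrange and Hamilton systematize.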
When, in chemistry, we write:

2H2 + O2 ↔ 2H2O (modulo an energy K(T,P) of approximately 136 kcal)

we simply mean that 4 g of hydrogen combined with 32 g of oxygen will produce 36 g of water and a quantity of energy equal to K – in the case of combustion, for example, in the Ariane V engines – or, inversely, that decomposing the water (electrolysis) will require an equivalent energy contribution. In the case of the Fukushima disaster, the energy produced by the reactor was so great that water decomposed, hence the explosion. In Figure 7.1, we can see the simplified propulsion system of an engine like the Vulcain engines fitted to the Ariane launchers, with its inputs and outputs.
Figure 7.1. Diagram of a turbopump-fed engine. For a color version of this figure, see www.iste.co.uk/printz/system.zip
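The quoted figures can be cross-checked with a few lines of arithmetic (only the unit conversion is added; the 136 kcal and the masses come from the text):

```python
# Energy balance of 2H2 + O2 <-> 2H2O, as written above.
KCAL_TO_MJ = 4.184e-3                  # 1 kcal = 4,184 J
k_kcal = 136.0                         # energy per reaction, as quoted
m_h2, m_o2, m_h2o = 4.0, 32.0, 36.0    # grams per reaction, as written

assert m_h2 + m_o2 == m_h2o            # Lavoisier: matter is conserved
mj_per_kg_h2 = k_kcal / m_h2 * 1000 * KCAL_TO_MJ
print(round(mj_per_kg_h2))             # -> 142 MJ per kg of hydrogen burned
# This matches the tabulated higher heating value of hydrogen, and explains
# why the H2/O2 couple is favored for engines such as Vulcain.
```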
The constant K is characteristic of the transformation carried out, whose temperature and pressure conditions must, however, still be specified – the equivalent of PESTEL for the physics of the engine. For example, the water produced can be in liquid form (the case of fuel cells) or in the form of gas, which gives different values of K. In the specialized literature, there are tables of K values for different conditions of use, like the calorific value of fuels. Figure 7.1 is a good illustration of the black box approach, in which we temporarily “forget” the physical/organic structure of the engine to concentrate on what goes in and what comes out. Here, the engine plays the role of a transducer, carrying out a movement from the chemical world to the mechanical world.

In nuclear physics, with the discovery of radioactivity and of the transmutation of atoms – the old dream nurtured by the alchemists, one that Newton would certainly have found captivating – physicists initially used notation like that of the chemists; but the most interesting part is what happened when they began to understand the nature of the forces that link atoms together, accompanied by a crowd of more or less bizarre new particles. Richard Feynman was the first to use the diagrams that now bear his name, in his work on quantum electrodynamics6. From the first discoveries in the 1920s to the work that led to the atomic bomb, all kinds of strange particles had come to light, rendering the system of chemical symbols inadequate. The most recent of these particles is the BEH boson, known as the Higgs boson. In the standard model of particles7, a distinction is made, in simple terms, between the particles that constitute matter in the macroscopic sense of the term, the “fermions”, and those that carry the nuclear forces, represented by interactions, the “bosons”. Figure 7.2 shows both a very simple Feynman diagram and the interpretation of force-mediating particles proposed by the Japanese physicist H. Yukawa, winner of the 1949 Nobel Prize, who predicted the existence of mesons. Yukawa’s explanation is as follows: two fermions subject to a repulsive force are in the end pushed apart by a “massive exchange of fire”, the bosons – exactly like two boats that advance towards each other and whose rowers exchange massive balls that deviate their trajectories under the effect of action and reaction forces, while maintaining an invariant level of energy.
6 Refer to his work intended for the wider public, QED, The Strange Theory of Light and Matter, Princeton University Press.
7 Refer to the short work by É. Klein, Sous l’atome, les particules, and a more detailed version, J.-P. Baton and G. Cohen-Tannoudji, L’horizon des particules, 1989, and also Le boson et le chapeau mexicain, by G. Cohen-Tannoudji and M. Spiro, Gallimard, 2013.
Figure 7.2. Block diagram of a repulsive force between two atomic entities. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The X axis represents the space dimension in 3D, covering a few nanometers or much less, and the T axis represents the time dimension, in pico- (10^-12) or femtoseconds (10^-15), or even much less (10^-22 s for the Higgs boson). This is why its detection involves enormous difficulties (refer to the ATLAS, CMS, etc. detectors of the LHC8 at CERN). This new boson is very heavy, as well as highly elusive, because the distance traveled is very small, even at the speed of light.

8 Refer to the book published by CERN, The Large Hadron Collider: A Marvel of Technology.

In the symbols used in Feynman diagrams, a wavy line ≈≈ represents the black box where the interaction takes place. In quantum detail, we do not know how this occurs, physically speaking, but we know how to construct a mathematical representation that matches the observed events to a high degree of precision; we can simply observe the energy balance, given the laws of conservation and symmetry, either in the case of fusion (hydrogen bombs, megajoule laser, ITER instrument, etc.) or in the case of fission (natural radioactivity, nuclear power stations, etc.).

The moral of the story is that to “stick” or “unstick” particles, a lot of energy is required, which can be represented by the diagram in Figure 7.3. Astrophysics teaches us that the heavy atoms that make up our world, quite young with respect to
the estimated age of the visible universe, can only be created in extreme energy configurations like supernovas or gamma-ray bursts.
Figure 7.3. Transformation by integration/removal of elements. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The fusion/fission processes are in a certain way symmetrical, notwithstanding an energy expenditure – a property that we come across in everything concerning integration in general. One is the dual of the other. In systems engineering and in theoretical computer science, these operations bear the names: (a) integration, or concatenation, which corresponds to a fusion; and (b) retro-engineering, or reverse engineering, when it is necessary to take things apart in order to operate at an elementary level and adapt the system (error correction, evolution, etc.). Both operations have an energy cost.

7.2. Description of processes
Following these digressions on the physics of phenomena that we use every day in our technologies, let us now turn to the theory of systems and to systemics. It was quickly understood, in the 1970s, that what made a heterogeneous set into a system was precisely the information shared by all the elements of the set – information that allows all the elements to act/interact in a coherent manner in their spatio-temporal environment. This information can be represented either in an abstract manner (the Church–Turing thesis) or in a concrete manner, by various “calculation” devices involving transducers, such as the circuits of our modern computers.
Throughout this time, a multitude of languages and notations have been devised to describe processes and their interactions. These include the IDEF protocol9, popularized by the US DoD, which can be applied both to the material part and to the computing part of the system. The interaction model constructed using this language and its variants (StateChart, UML, SysML, BPMN/BPML, SGML/XML for the data, etc.) is the fundamental invariant of the system, which must be conserved throughout the “life” of the system. We can therefore state that in all systems, whether artificial or not, there is necessarily a model, or a computing structure, common to all the elements, which organizes and structures the interactions between them. This structure is an invariant of the system.

REMARK.– For example, an adder circuit made of transistors and/or diodes is a physical model, with its internal limitations, of the corresponding abstract arithmetic operation. The system becomes effective when the circuit is powered, something that our modern computers now do at a pace of a billion times a second (clock calibrated at 1 or 2 gigahertz). There is a complementarity between the static vision of the model and its dynamic effect in the real world in which it interacts.

Generally speaking, the expression of this language will always be possible in two ways: (a) either concretely, using the material mechanisms of the system, in which case we talk about internal language; or (b) in an abstract manner, with computing mechanisms – in this case, Turing machines, which have the virtue of appearing in the form of the computers that we use every day – in which case we talk about external language. Here, we encounter the fundamental internal/external dichotomy of the Vienna Circle logicians, articulated with the fundamental inside/outside dichotomy of systemics. Using transducers, whose importance we re-stated in Chapter 6, we can go from one world to the other while conserving the meaning of the fundamental invariant of the system. In the current symbolisms used to describe processes, there will always be minimal representations such as the one in Figure 7.4, in which the inputs and outputs of the process appear, in addition to control in the cybernetics sense that we have specified, including the errors and/or faults and the resources required to make the process “work”.
9 Refer to the standard IEEE 1320.1, Functional Modeling Language – Syntax and Semantics for IDEF0.
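The REMARK above can be read the other way around: the abstract operation that the circuit “casts” in transistors can just as well be written as an external, programmed description. A minimal sketch (our own toy code: a ripple-carry adder on an 8-bit word):

```python
# One full-adder cell: two XORs, two ANDs and an OR, as in the circuit.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_word(x, y, width=8):
    carry, total = 0, 0
    for i in range(width):                       # ripple the carry bit by bit
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= bit << i
    return total, carry     # carry = 1 signals the model's internal limitation

assert add_word(200, 55) == (255, 0)
assert add_word(200, 56) == (0, 1)   # overflow: the physical model's limit
```

The physical circuit and this text are two expressions, internal and external, of the same invariant arithmetic structure.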
The process, in the time dimension, is a succession of states that are all included in the diagram in Figure 7.5, a representation among many others which highlights some important points. The process, if it has a memory capacity, can register the corresponding past events for use later on. This memory capacity is operationally structuring; it is an important aspect of complexity.
Figure 7.4. Representation of the processes
Figure 7.5. Diagram of states of a process. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The diagram is organized in macro states: defined, created, in progress/active, interrupted, suspended, finished, dismantled, linked by arrows that represent the transitions from one state to another; a transition is an energy contribution that permits a change of state. For example: – water changes from the liquid state to the gaseous state (steam) if a large quantity of energy is provided to the thermodynamic system{ice/water/steam}; – in systems engineering processes, the transition from defined (as a requirement) → created (as a technical object/system, with its communities) requires implementation of an engineering project than can represent millions of hours of work. The current/active state is the “normal” state of a process that takes place when the system is in operation. The interrupted, suspended states correspond to situations that require an intervention from the operator. In the event of an incident (refer to the INES scale of incident severity used by EDF, section 4.2.2) the degree of severity can require various types of actions; the system is no longer in its nominal state of absence of risk. The diagram caters for two types of situations. It can be interrupted, meaning the necessary duration, generally short, for the system to be returned to a nominal state and when the incident has been considered of low severity. Alternatively if it is a real accident which is more serious with damage, then the process is suspended, in which case the system stops for a significant duration of time and service is no longer provided to the users for a long period of time. In the real world, many variants can be imagined, which, however, will all need to be represented by state/transition diagrams10 that it is possible to study with all the logical resources of the theory of automatons. The diagram also shows two specific states, beginning and end, which mark the beginning and the effective end of the process within its period of activity (we also say “lifeline”). Once it has been created, the process is not supposed to begin immediately. At the very least, we want to distinguish this state from the effective start-up, a little like a train that is ready and waiting for the signal from the stationmaster to start off. Idem for the transition finished → end, where the train would be at the station having reached its destination, end marking the opening of the doors. Between the two, there are intermediate states that are not represented on the diagram and that will give a certain number of details about the operations that have been carried out. These will constitute the “onboard journal” of the process. All the states that carry information about the “life” of the process and of the system are listed and hierarchized. If there are a large number of them, it is because they demonstrate the understanding that we have of the system processes. In most 10 One of the most frequently used symbolisms is the state chart introduced by D. Harel.
In most notations, these states are represented by rectangles with rounded corners, with a name and information characterizing the state in question (refer to the activity diagrams and the sequence diagrams in systems engineering notations such as SysML). The “life” of the system and its processes will therefore be characterized by a pathway, called a “lifeline”, through the possible states and the interactions authorized by the system designers via the IPC language11 (see Figure 7.6).
Figure 7.6. Lifelines and language of interaction. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Maintenance of these chronologies is fundamental for system safety engineering, and it comes at a cost: through them, an understanding of what has happened in the event of anomalies and/or accidents/incidents can be gained, and the source of the problem found by reverse engineering. Without these chronological lists, there is a loss of information and therefore a theoretical impossibility of reconstructing the context of the breakdown, because the combinatorics to be analyzed quickly become “immense” and inaccessible to our calculation methods. Cooperation between elementary sequential processes is only possible if they have a common language, such as the signals and/or traffic lights represented in the figure, which allows them to exchange information. This in turn means that they can synchronize themselves and control their synchronization. This language is the same as that of the fundamental feedback loop seen in Chapter 2. We will return to this in Chapters 8 and 9, in an even more general context of cooperation.
11 Inter Process Communication; see E. Dijkstra, Cooperating Sequential Processes, 1968.
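A minimal sketch of such cooperation, in the spirit of Dijkstra’s cooperating sequential processes: two processes synchronize through semaphores, the programmatic analogue of the traffic lights in the figure. The producer/consumer framing is our illustrative choice, not taken from the book.

```python
import threading

item_ready = threading.Semaphore(0)   # "traffic light": an item is available
slot_free = threading.Semaphore(1)    # "traffic light": the shared slot is free
shared = []

def producer():
    for i in range(3):
        slot_free.acquire()           # P operation: wait for a free slot
        shared.append(i)              # deposit the message
        item_ready.release()          # V operation: signal the consumer

def consumer():
    for _ in range(3):
        item_ready.acquire()          # wait for the producer's signal
        print("consumed", shared.pop())
        slot_free.release()           # hand the slot back

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```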
The type of symbolism used means that processes can be represented equally easily in nanometric spatio-temporal domains – such as the latest generation of chips, or quantum states with Feynman diagrams – and as macro processes on a very large scale, like a city or the airport system that we will simply summarize by way of an example (see Figure 7.8). The diagrams of Figures 7.5 and 7.6 can be combined into one single time diagram (Figure 7.7), which shows the energy costs and gains of the implementation of all the elementary processes that constitute a complete system (known as a system lifecycle in the language of systems engineering).
Figure 7.7. System lifecycle. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Talking about “birth” and “death” may seem anthropomorphic, but this is all the same the subject at hand, because everything is born from something – here, the creative process of a project that follows quite precise rules and whose outline and limits we can begin to master correctly. The terminology used is to be considered in relation to Simondon’s triplet {U, S, E}: the system S that is “born” results from the requirements of users U, acknowledged by engineering teams E who will transform this latent requirement into a real, concrete system, using and integrating the latest technologies – which can moreover vary throughout the lifecycle – in order to provide a better service to users. This is a little like Aristotle’s sculptor who makes the statue emerge from his own idea, taking into account the wishes expressed by the commissioner. The system S dies when it stops providing a service, for whatever reason; but in all cases, it is necessary to dismantle it, for ecological reasons, in order to avoid polluting the environment and rendering it progressively improper for all future creation.
In Figure 7.8, we present the example of an airport zone. The primary mission of an airport zone, its reason for existence, is (1) for airplanes to take off/land and (2) for passengers and/or freight to board/disembark, in total security, so as to transport people and goods in compliance with regulations and safety requirements. This mission determines the service contract of the system with regard to the principal users of the zone: the passengers.
Figure 7.8. Macroprocesses in an airport zone
There is a double inside/outside perimeter: (1) 2D on the one hand, concerning installations on the ground, that is, a zone of several tens of km² (Roissy, Heathrow, etc.), with its roads, runways, etc.; (2) 3D on the other hand, concerning management of the air space for the arrival and departure of airplanes – in other words, a spatial volume several hundred km across, with an altitude of 0 to 10–15 km. This requires radar equipment suited to all situations: runways, take-off/landing, approach, in flight, etc. In order to operate, the zone requires various resources, hence the tens of additional flows that condition the service contract of the system. Lastly, an airport system is not isolated from the rest of the world. It interacts with other external systems, for example, the information systems of the airlines present in the zone, the European regulator Eurocontrol if the zone is in Europe, and the
defense and security systems, which in France means the aerial defense of the Armée de l’Air and/or NATO12. In systems engineering terminology, this first and absolutely fundamental level is known as a “context diagram”, from which all the systems of the zone and their constitutive elements can then be derived. This is the specific implementation of the principle of organization in layers, with various levels depending on the nature of the systems, so as to find the right equilibrium, but always in compliance with the highest-level service contract and security requirements. In systems of this kind, there are many unpredictable hazards. Incidents/accidents can take place at any time and can endanger the organization that was initially established. The system operates like a giant project under management, permanently planning and re-planning all its activities to take account of an evolving situation, but without ever losing sight of the global service contract, which is the invariant of the system.
7.2.1. Generalizing to simplify
With the concept of a process specified in this way, we have a fundamental framework for describing any kind of system, by expressing a distinction that is essential for the correct understanding of the profound nature of the system, whether artificial or not. We will distinguish between:
– the abstract representation, of the “map” type in Korzybski’s sense of the term when he says “the map is not the territory”: this is a set of coherent models that are necessary but not sufficient;
– the specific representation, with physical, energetic organs that are suited to the environmental constraints of the spatio-temporal zone where the system operates, meaning with transducers and/or suitable equipment.
To organize their interactions, the processes that constitute a system must have a shared “space” that is common to all of them. This space, which is itself organized (it is an abstract machine), can be centralized in a specific position that plays the role of an orchestral conductor, or distributed in each of the processes. This space is the guarantor (the “guardian”, as von Neumann would have said) of the exchanges and of the permitted behaviors/interactions – meaning a game, in the game theory sense.
12 Refer to the European project SESAR; https://ec.europa.eu/transport/modes/air/sesar_en.
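A minimal sketch of this shared “space” acting as guardian of the permitted interactions: a centralized mediator that checks every exchange against the rules of the “game” and keeps the chronology. The rule set and process names are illustrative assumptions.

```python
# Permitted exchanges: the "rules of the game" between named processes.
RULES = {("producer", "consumer"), ("consumer", "producer")}

class SharedSpace:
    def __init__(self, rules):
        self.rules = rules
        self.log = []                 # chronology of exchanges, kept by the guardian

    def exchange(self, sender: str, receiver: str, message: str) -> bool:
        if (sender, receiver) not in self.rules:
            return False              # behavior not permitted by the game
        self.log.append((sender, receiver, message))
        return True

space = SharedSpace(RULES)
print(space.exchange("producer", "consumer", "item ready"))   # True: permitted
print(space.exchange("producer", "producer", "loopback"))     # False: not in the rules
```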
This architecture of exchanges and of the information they carry materializes (in the jargon used by computer scientists, we sometimes say “reifies”, in reference to a thing) the fundamental invariant of the system, which is thus integrated. The cost of this architecture can moreover, in certain contexts, be estimated (refer to our book Estimation des projets de l’entreprise numérique, applicable to the IS context).
Example 1: the highway code
This is an example of a completely decentralized system that everyone must comply with, after making the effort to acquire it and maintain it. In the energy sense, this is a learning cost, and suitable signaling must cover all the points of the zone that are considered dangerous, or which at least require a reminder. In order for the system to be effective, a police body is necessary, playing the role of the “keeper of the keepers”, since we are all custodians of the code once it has been learnt. This example highlights the wear and/or degeneration of the code which, if it is not constantly maintained, ends up losing its regulatory function, which progressively “disintegrates” the system. One of the natural phenomena associated with aging of any kind is a proliferation of errors that the fundamental control-command loop of the system can no longer compensate for. Here, we are faced with a phenomenon of entropic degeneration. The order created by regulation has a maintenance cost; if the maintenance is not carried out, there is a return to the situation of disorder (in fact, an absence of order, characterized by an absence of energy linking the elements, which are then free to move around as they like) that was prevalent before its implementation.
Example 2: the ideographic language of defense and security systems
This could be written up as a case study using the document released by the DoD, MIL-STD-2525C, Common Warfighting Symbology, and the NATO document APP-6, both available on the Internet. This language perfectly illustrates the essential duality between the EL/external language (of the MMI) and the IL/internal language (of the DB and associated processing). We have already used it in previous works, Architecture logicielle and Estimation des projets de l’entreprise numérique (see Figure 8.9).
7.2.2. Constructing and construction pathways
Independent of the details, which are obviously numerous for real systems, the level of abstraction on which we are positioned, along with the abstract/specific duality, highlights several possible construction routes:
– an operational representation based on the relation that exists between the inputs and the outputs of the process/system. This is the representation known as in
extension, which is one of the ways of representing the usual mathematical functions in the form of a correspondence between two sets, meaning a table or a dictionary; compare the correspondences which are more complex but which provide a simplification, such as the correspondence between the groups + and × via logarithms, or the correspondence grammars ↔ automatons in the theory of languages. In the terminology of systems engineering, this is a representation in terms of use cases, which is equivalent to giving preference to the observable states of the system;
– an operational representation based on prior knowledge of the functions, in the usual sense of the term, with which the processes will be constructed. This is a representation said to be in intension, using the mathematical terminology of set theory (see the sketch at the end of this section). There are in effect numerous situations where the functions are known; in such cases, it is of no use to resort to a sterile abstraction exercise which would not contribute any new knowledge of the system;
– a representation by the events to which the system must respond. In the examples that we have given, this is equivalent to taking the regulatory aspects as a construction principle. In the electric system, regulations are two-fold: on the one hand, from a French point of view, with the CRE, and, on the other, at a European level; the two must not be contradictory;
– a representation by the resources that the system and its processes have available to them in order to carry out the mission. In this case, the “capability” logic will organize the construction of the system, focusing on the management of resources (the STAR part of C4ISTAR systems, as we have seen);
– a physical or semi-physical representation, for systems that integrate equipment seen as black boxes. In this case, the physical element appears as a constraint that must be taken into account by the other elements to be defined. In order to avoid disseminating the constraints that are specific to these black boxes more or less everywhere, the architect can choose to abstract them via an access interface, which is equivalent to disassociating the constraints from the way they are taken into account, via the suitable interfaces that are at the basis of layered architectures.
In the end, the representation will be unique, taking these various aspects into account, and will take the form of an abstract machine. This is required to assure ourselves that the service contract for users is fully complied with. Let us recall that the service contract is a global property of the system which involves all its processes, given their frequency of use and their role in the system. A process that is frequently used will need to have an availability corresponding to its frequency of use. A process that is rarely used, but which is essential for the survival of the system – for example, in the event of incidents/accidents that are by definition
rare – should also have a high level of reliability, possibly with suitable technologies such as fault tolerance, self-repair, etc. All this preventative management is a condition of the survivability of the system; it is the objective of system safety engineering, which is itself supported by the requirements that determine the service contract. The most natural representation of the system and its processes – in other words, of its information content – is an abstract machine that the architect must construct step by step, given the system requirements. This machine can be materialized, via a suitable translator, on any computing platform, meaning a set of real, specific sequential machines that cooperate. We will return to this fundamental point in Chapter 9, which relies on the same constructive logic. There can be thousands of physical machines in a modern platform, and even more logical machines with virtualization.
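To make the in extension / in intension duality concrete, here is a minimal sketch; the choice of the base-10 logarithm simply echoes the correspondence between + and × mentioned above.

```python
import math

# Representation "in intension": the function is known analytically.
def in_intension(x: float) -> float:
    return math.log10(x)

# Representation "in extension": the same correspondence given as a table
# (a dictionary input -> output), the way a use case enumerates observable states.
in_extension = {x: math.log10(x) for x in (1, 10, 100, 1000)}

print(in_intension(100))     # 2.0, computed from the known function
print(in_extension[100])     # 2.0, looked up in the table
```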
7.2.3. Evolution of processes
We have seen (Figures 4.14, 4.15, 7.4 and 7.5) that processes/systems have a form of “life” that is particular to them, with a beginning and an end, framed by a “birth” and a “death”. Like all phenomena in nature, processes are subject to the laws of entropy, which mean that their service level (measured by an availability) is necessarily degenerative, with relatively slow dynamics; the quality of service can only decrease. If nothing is done, the elements of the system “wear down”: breakdowns become more frequent and more and more difficult and costly to repair, initial redundancies are exhausted, users tire or go elsewhere, and the engineering teams disband, beginning with the R&D teams who work in the mid- to long term. The “order” that was initially constructed in the transition defined → created is progressively destroyed, up until the moment when Simondon’s triplet {U, E, S} disintegrates, to be compensated for elsewhere. In systems engineering, the laws of wear and/or aging, otherwise known as maturity curves, are well known; they follow the dynamics of the Lotka–Volterra model, illustrated in Figure 7.9.
Figure 7.9. Dynamics of maturity
The Lotka–Volterra mathematical model of dynamics13 is well represented by the following integro-differential equation:

$$\Delta M(t) = \left[\, c_0 - c_1\, M(t) - \int_0^{t} M(\tau)\, f(t-\tau)\, d\tau \,\right] \times M(t) \times \Delta t$$
M(t) is the function that characterizes the maturity as a function of the use of the system (an abstract time). At first, errors and initial faults are corrected faster than the number of users increases, hence the exponential shape of the curve at the beginning. When everything is going well, the number of residual faults stabilizes and the maturity reaches its threshold, represented in the equation by the term $c_1 \times M(t)$, which expresses the fact that the more the maturity increases, the fewer errors remain to be discovered. The term under the integral sign $\int_0^t$ is known as hereditary, because it is a cumulation that expresses the progressive degradation of the system as soon as its operation begins. Initially, its influence is masked by the effects of the coefficients $c_0$ and $c_1$, but as soon as the threshold is reached, the hereditary term becomes progressively more important, leading to the
fall in the curve. The hereditary function is unknown, but its general shape is indeed observed. Under Volterra’s working hypothesis, which has been correctly corroborated by the observed facts, it depends on the maturity M and on an unknown function f, about which hypotheses must be made in order to solve the corresponding second-order differential equation.
13 The historical reference is V. Volterra, Théorie mathématique de la lutte pour la vie, reprinted by J. Gabay; also refer to the more modern version by M. Nowak, Evolutionary Dynamics, Harvard University Press, 2006.
Example of hypotheses
The simplest function f is a constant, a simple coefficient $h_0$. In real life, the function f is rather monotonic and increasing, either linearly or with convex/concave behavior: $f(\tau) = h_0 + h_1 \tau^{\alpha}$, with $\alpha = 1$ [linear], $\alpha > 1$ [convex], $\alpha < 1$ [concave]. The overall problem faced by system designers is to ensure that, in the transition defined → created, everything is implemented so that the system and its processes have as long a life as possible, notwithstanding certain hypotheses about the PESTEL environment and its evolutions. In the terminology used by Waddington, one of Turing’s favorite authors on the subject of morphogenesis, this is the “epigenetic landscape” that R. Thom – also a great admirer of D’Arcy Thompson and Waddington – renamed in catastrophe theory the “substrate space”, in which the phenomenology in question is deployed; in our case, the maturity. For this to work, a certain number of new processes must be implemented, with the sole objective of extending the lifetime of the system, making it easier to maintain, and avoiding any system shutdown in the event of a serious problem. This is what we will tackle with the processes known as “antagonistic”.
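A hedged numerical sketch of the maturity dynamics above, using an explicit Euler discretization of the integro-differential equation; all the coefficients ($c_0$, $c_1$, $h_0$, $h_1$, $\alpha$) are illustrative assumptions, not values from the book. The run reproduces the characteristic rise, plateau and late fall of Figure 7.9.

```python
import numpy as np

c0, c1 = 0.5, 0.4                      # growth and saturation coefficients (assumed)
h0, h1, alpha = 0.001, 0.0005, 1.0     # hereditary kernel f(tau) = h0 + h1 * tau**alpha
dt, steps = 0.1, 2000

M = np.zeros(steps)
M[0] = 0.01                            # small initial maturity
tau = np.arange(steps) * dt
f = h0 + h1 * tau**alpha

for k in range(steps - 1):
    # hereditary term: discrete convolution of past maturity with the kernel f
    hereditary = np.sum(M[:k + 1] * f[k::-1]) * dt
    dM = (c0 - c1 * M[k] - hereditary) * M[k] * dt
    M[k + 1] = max(M[k] + dM, 0.0)

print(f"peak maturity {M.max():.3f} at t = {M.argmax() * dt:.1f}")
print(f"final maturity {M[-1]:.3f}")   # the late fall driven by the hereditary term
```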
7.2.4. Antagonistic processes: forms of invariants
In Chapter 4, regarding control, we encountered situations where antagonistic processes are implicitly at work. They have been used right from the first systems, such as steam engines or railways, for quite obvious engineering reasons that have now been forgotten, or at least have lost their significance in our eyes. These processes are fundamental because, without them, control is quite simply impossible.
Figure 7.10. Block diagram of the coupling of antagonistic processes. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In a steam engine, overpressure caused by overheating must be avoided at all costs – hence devices such as the safety valve and, a little later on, James Watt’s centrifugal governor. If the valve opens unexpectedly for any reason (failure, breakdown, etc.), the boiler loses its pressure and the turbine stops. The heating process (thermal, nuclear, etc.) has the function of providing the energy to carry out the water → steam transformation, which requires a lot of energy (540 kcal per liter of water, that is, approximately 2,300 kJ). On the other hand, once it has vaporized, the steam behaves like a gas that obeys Mariotte’s law, $P \times V = R \times T$: in a confined space, the pressure rises very quickly and can be destructive. On the one hand, we have a heating process that manufactures the steam at the right pressure, with the risk of destructive overheating. On the other hand, we have a process that controls the pressure by opening the valve, or not, and by supplying the boiler with water (but not too much, because the water would then have to be heated too much and the thermodynamic yield of the system would fall). We therefore clearly have two antagonistic processes, which must exchange information in order to operate coherently. A stationary train must have its brakes on to remain immobile on the tracks, even if it is on a slope: this is another example of an antagonistic process.
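As a toy illustration of this exchange of information between the heating process and the valve process, the following sketch couples the two around a pressure band; all coefficients are illustrative, not engineering data.

```python
SETPOINT, MARGIN = 100.0, 10.0        # nominal pressure and tolerance (arbitrary units)

pressure, valve_open = 95.0, False
for step in range(20):
    pressure += 3.0                   # heating process: steady pressure increase
    if pressure > SETPOINT + MARGIN:  # the valve process reads the pressure...
        valve_open = True             # ...and opens to prevent destructive overpressure
    elif pressure < SETPOINT - MARGIN:
        valve_open = False            # close again to preserve the thermodynamic yield
    if valve_open:
        pressure -= 6.0               # the valve bleeds steam faster than heating adds it
    print(f"t={step:2d}  pressure={pressure:6.1f}  valve={'open' if valve_open else 'shut'}")
```

The two antagonistic processes keep the pressure oscillating inside a controllable band; remove the valve branch and the pressure grows without limit.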
In an automobile, the acceleration process is antagonistic with the braking process, although the two initiating pedals are next to each other; this can cause accidents if both feet are used. If braking blocks the wheels, the vehicle loses its adhesion and becomes uncontrollable, as on black ice, as we saw briefly in Chapter 4. The two processes are necessary for driving the vehicle. In a launcher like Ariane, there is a self-destruction process: if the launcher leaves its nominal trajectory (this is part of the service contract) for any reason, it can fall anywhere and cause a disaster, and it is better to take the risk of destroying it while it is in the air. The self-destruction process is antagonistic with the thrust/steering processes for reaching orbit. In addition, it must be completely autonomous and have its own energy resources, expecting nothing from what can happen elsewhere in the launcher, except an interaction capacity required for autonomic management. Once again, we are faced with von Neumann’s dilemma “Quis custodiet ipsos custodes? (Who guards the guardians?)”, which is at the heart of system safety engineering. Chemical engineering14 provides numerous examples of antagonistic processes. Many reactions are autocatalytic, which means that certain products of the reaction amplify the speed of the transformation, to the point of possibly making it explosive; if there is no opposite process that inhibits the autocatalysis, the reaction is of no interest to industry because it is dangerous (see the sketch below). We also encounter the inverse situation, where the products destroy the catalyst, and the reaction stops if they are not evacuated as and when they are produced. The fission reaction is to a certain extent autocatalytic because it produces more neutrons than it consumes. This leads, on the one hand, to atomic bombs, and, on the other, to nuclear power stations, where the reaction is controlled (the reactor absorbs the excess neutrons). We could cite an infinite number of examples, especially if we take into account biological processes which, at a certain level, are only ultra-complex chemical processes whose consequences and results we do not always understand. Faithful to Turing, and also to von Neumann, we will not use them, for methodological reasons: explaining the complex by the more complex is not a good method for resolving problems. Let us remain constructive; we will first tackle what is complex by using something simple, something that we understand perfectly, and we will proceed in successive stages of integration, which has always been the “route” taken by engineers.
14 Refer to the work by B. Ogunaike and W. Ray, Process Dynamics, Modeling and Control, Oxford University Press, 1994.
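The runaway-versus-inhibition behavior just described can be sketched with a one-line rate model; the rate constants are assumptions chosen purely for illustration.

```python
def simulate(inhibition: float, dt: float = 0.01, steps: int = 1000) -> float:
    x = 0.01                        # product concentration; it catalyzes its own production
    for _ in range(steps):
        rate = 1.0 * x              # autocatalysis: production speed grows with x
        rate -= inhibition * x * x  # antagonistic process damping the runaway
        x += rate * dt
    return x

print(simulate(inhibition=0.0))     # runaway growth: the "explosive" reaction
print(simulate(inhibition=0.5))     # inhibited: settles near the equilibrium 1.0 / 0.5 = 2.0
```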
7.3. Degenerative processes: faults, errors and “noise”
All systems, and the processes that constitute them, are subject to laws of “wear”. These are simply an expression of the entropic nature of all natural phenomena, from which nothing escapes: this applies to everything from the silicon crystals used to make integrated circuits to the most advanced societies of animals and/or humans arising from evolution. From the outset, all processes contain a specific set of faults, which manifest themselves in the form of errors and/or “noise” (in other words, an information signal that cannot be identified by the system, but one that requires/immobilizes resources all the same) as soon as the system becomes active (see Figures 4.14, 7.5 and 7.7). The creation process – in this case, engineering – is itself subject to this same entropy law, leading to faults. If they are not detected, then corrected, the errors will propagate in the system as a function of the interactions, like those in Figure 7.6, and amplify disorders which will subsequently become fatal to the system. This means that, if nothing is done to remedy the situation thus created, the system will have a dynamic similar to that shown in Figure 7.9, with an accelerating decline, in other words, Figure 7.11. This caused von Neumann to say, according to his biographer F. Dyson, “what is stable is predictable, what is unstable is controllable”.
Figure 7.11. Dynamics of errors and/or of “noise”
The normal situation of the system corresponds to case no. 4. The cumulative effect of the errors and of the corrections that have followed means that, little by little,
the system deteriorates, which is always expressed by in-service support costs that increase over time. The system has provided the services that we expected of it, and the level of maturity has been such that all the costs of engineering, development and operations have been compensated for. Cases no. 1–3 correspond to pathological situations that will significantly shorten the lifetime of the system. Case no. 3 is a system whose design means that the maturity limit is brief, corresponding to a short “life”. For example, a missile guidance system will, by definition, have a short life of the order of 15–20 minutes, depending on the orbit altitude of the satellites; it is of no use to design such a system as if it were a space shuttle or the ISS, and doing the inverse guarantees disaster. Premature wear to the heat shield of the shuttles meant that the program was virtually stopped after the Columbia shuttle disaster. Cases no. 1 and 2 result from design errors which mean that the maturity level will never be reached – nor even, in case no. 1, the growth zone. The simulation means available to systems engineering teams are employed to simulate, and therefore assess, the robustness of architectures, taking into account the aging models suitable for the equipment and for the environment. The quality of these studies depends on the relevance of the aging models with respect to physical reality – models that can only be validated through experience, in the current state of our knowledge and of the calculation capacities that we can mobilize. The occurrence of errors generates a flow of anomalies internal to the system, which adds to the flow caused by the PESTEL environment and which will disturb its operation if its effects are not compensated for by corrections. We are therefore clearly in a situation of antagonistic processes, which we will consider further below. In parallel with the processes that guarantee the mission assigned to the system, the architect designing the system must construct a sub-system made of processes whose reason for being is to observe what happens, in order to detect all situations deemed abnormal. Once a detection is made, a diagnostic analysis should be carried out, allowing counter-measures to be put in place to compensate for the errors before they do too much damage to the system. Figure 7.12 shows the shape of this flow, taking into account the pathological situations previously described.
Figure 7.12. Dynamic of flows of errors and/or of “noise”
The flow can be represented by a number of errors per quantum of time or of action, or by a cumulation that would be the integral of the curve characterizing the intensity of the flow; this is a maturity curve in the usual sense of the term, compliant with the general Lotka–Volterra model. Cases no. 3 and 4 correspond to the curves known as “bathtub” curves encountered in the systems engineering literature, except that here we have an analytical integro-differential representation that allows all their aspects to be studied by simulation. The general logic behind the SDR mechanisms (compensation/autonomic management, diagnosis, repair) is described by the diagram in Figure 7.13, analogous to the organization of an autonomic element such as that in Figure 4.15.
Figure 7.13. Compensation for errors and/or “noise”. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The role of the SDR sub-system is to compensate for the errors/anomalies that are “natural” – insofar as we have mentioned the entropic nature of everything that exists in our physical world, evolving from a situation of order towards a situation of disorder, to which we will return in section 7.5. Returning to the dynamics of systems and their processes presented in the diagram in Figure 7.9, we clearly see that the effect of the anomalies and their corrections is a cumulative process that will “wear down” the system to the point of making it inoperable. The structure of the SDR sub-system determines the function f of the Lotka–Volterra model. The effectiveness of this function is in principle fixed during the period of creation of the system, but it can be designed in such a way that it improves as and when the system “lives”. System safety engineering therefore depends on two fundamental parameters: one arising from the design of the system, which the SDR sub-system “guarantees” (this is by definition a game with complete information), and the other depending on environmental conditions (PESTEL in the wider sense; a game with limited, even intrinsically incomplete, information), for which we can only observe the effects on the system. If the environment undergoes significant changes, beyond the learning capacities of the SDR sub-system, this will reduce the “normal” lifetime of the system by the same amount, it being understood that in any case it will “die” in the end. The SDR sub-system determines the capacity for evolution of the system S. It is therefore a major element of all systemic analysis, whether for artificial systems like those that we create, or for natural systems that are given to us by nature15. Understanding a system first requires identifying and understanding the structure of its autonomic management/diagnosis/repair sub-system (in other words, assembly and dismantling) and its reason for being with respect to the service contract, without which the system cannot “live” for long.
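By way of illustration of this surveillance → diagnosis → repair loop, a minimal sketch; the drift model and the thresholds are assumptions, not taken from the book.

```python
import random

random.seed(1)
NOMINAL, ALARM = 1.0, 1.5             # nominal state and detection threshold (assumed)

quality = NOMINAL
for t in range(15):
    quality += random.uniform(0.0, 0.2)          # entropic drift: errors accumulate
    if quality > ALARM:                          # surveillance: detect the anomaly
        diagnosis = "drift beyond tolerance"     # diagnosis: identify the cause
        quality = NOMINAL                        # repair: the compensating counter-measure
        print(f"t={t:2d}  SDR fired ({diagnosis}), state restored")
    else:
        print(f"t={t:2d}  nominal, deviation={quality - NOMINAL:.2f}")
```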
7.4. Composition of processes
Perhaps the most meaningful analogy in terms of interaction/composition of processes is that of automobile assembly lines, especially if we compare the current situation, with its robotized lines, to the first production lines in the Ford factories at the beginning of the 20th century (see Figure 7.14).
15 Refer to the analyses of the immune system in the work by P. Kourilsky, Le jeu du hasard et de la complexité, previously cited; an example of antagonistic processes that allow us to survive.
Figure 7.14. Block diagram of an assembly line
Systemic modeling of this type of industrial system does not pose a problem, in particular since the effective beginnings of J. Forrester’s modeling, with suitable notations, thanks to which more or less detailed simulations are possible. His various works are totally explicit and, although relatively old, are worth reading regularly, because ideas do not all come at once. In today’s automobile industry, the material and physical structure of the assembly line – with its shops, its machines, its robots, etc. – is coupled with an abstract informational structure which, in a certain manner, virtualizes it from the point of view of the decisions that the operators will/can make. These are the well-known lean management approaches and, more generally, those relating to quality and/or methods, which have historically been pioneers in the use of systemics in organizations. Each industrial process has a virtual image, in an abstract process space with known laws (established by the designers of the production line, at the same time as the vehicles which will be manufactured by this same line), that allows management of the line. There is therefore a fundamental relationship between the real, specific spatio-temporal space where vehicles are manufactured and the abstract space of virtualized processes that allows management of the whole as a function of the manufacturing requests for such and such a model of vehicle. This is a situation entirely similar to that of C4ISTAR systems, one
of whose primary functions is to manufacture the common vision shared by all the actors/operators and overall operators of the system; in C4ISTAR jargon, this is what is called the Common Operational Picture (COP), which is of primary importance in synchronizing the active units in the daily reality of all these systems.
7.4.1. Antagonistic interactions
However, there is a type of interaction which deserves particular focus, namely the interaction concerning processes described as antagonistic. We have seen that these processes play a critical role in control, because they contribute to maintaining the system in an equilibrium situation that is optimal, or close to optimal, given the various constraints with which the system must comply. The corresponding space has N dimensions and can be studied with all the mathematical resources that operational research provides. Two or more antagonistic processes can be represented as in Figure 7.15.
Figure 7.15. Antagonistic processes. For a color version of this figure, see www.iste.co.uk/printz/system.zip
As we have seen in the examples, the dynamics of process P1 can, depending on the situation, lead the overall process P into a situation of disequilibrium which can be fatal for the system. Before this happens, it is therefore necessary to activate a compensation process, denoted P1Sym on the diagram, to give effect to the symmetry that the architect must put in place. The SDR sub-system that we
mentioned in section 7.3 regarding the compensation of errors is an excellent example of antagonistic processes whose objective is to improve the general availability of the system. It relates both to the technical part of the system and to the communities associated with it – communities bound to certain skills/experiences, corresponding to the term “graduation”, in the graduate sense. In the equilibrium diagrams that we use in systems dynamics, these situations are represented by energy “potentials”, as in Figure 7.16. If the system moves away from its zone of equilibrium, it will be necessary to mobilize significant “energy” resources to bring it back into its stability zone, so long as the general dynamics allow this, as mentioned in Chapter 4. If the threshold is surpassed – and it must therefore be monitored – the system becomes unstable.
Figure 7.16. Equilibrium–disequilibrium
The further we move away from the equilibrium position, which is the fundamental invariant of the system (in general, defined by its service contract), the more difficult the counter-measure will be to implement. Knowledge of the two dynamics, disequilibrium and the return to equilibrium, is a fundamental aspect of systemics. Without wanting to push the analogy too far (which would cause it to lose its relevance), in the general context evoked here we see laws of symmetry entirely analogous to those found, in a general manner, in physical and/or chemical, energy and/or thermodynamic systems16. The overall invariant of the system, represented here by the zone of stability, results from the service contract, as we have already specified. No matter what happens, no matter what the P processes in the system are and whatever the PESTEL external constraints taken in by the system, it must have a set of “nominal” processes P_xn
for when everything goes well, and a set of associated processes PSym_ym to carry out compensations when things do not go well, with the fundamental rule that any composition/interaction of P_x and PSym_y leaves the “point” of the system in its zone of stability. The zone of stability therefore plays the role of a neutral element, if we wish to digress to the “algebraic” aspect of these composition mechanisms concerning the organization of the space of abstract processes P. Figure 7.17 visualizes the situation (a monitoring sketch follows the figure). We have three zones:
– a green zone that corresponds to the nominal operation of the system, in compliance with the service contract, perfectly controlled;
– a yellow zone where there is a potential danger if no PSym processes are in place to return the system to the green zone, and therefore if the resources necessary for them (resources that must be specified) are missing;
– a red/orange zone which, if the red frontier is not monitored, leads to loss of control of the system – which does not, however, mean that the system is lost.
16 For those who are not put off by physics, refer to Noether’s theorems, which play a fundamental role in particle physics; see How Mathematician Emmy Noether’s Theorem Changed Physics or Emmy Noether’s Wonderful Theorem by Dwight E. Neuenschwander.
Figure 7.17. Block diagram of controllability zones of unstable processes. For a color version of this figure, see www.iste.co.uk/printz/system.zip
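As announced above, a minimal monitoring sketch of the three zones, assuming a single scalar “deviation from the service-contract invariant”; the thresholds are illustrative.

```python
GREEN_LIMIT, RED_LIMIT = 1.0, 2.0     # illustrative thresholds on the deviation

def zone(deviation: float) -> str:
    if deviation <= GREEN_LIMIT:
        return "green: nominal operation, service contract met"
    if deviation <= RED_LIMIT:
        return "yellow: activate the PSym compensation processes"
    return "red: control lost; any minor incident can now trigger disaster"

for d in (0.4, 1.3, 2.5):
    print(f"deviation={d}: {zone(d)}")
```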
In the case of the accident at Three Mile Island, an incident was indicated by the control system of the power station, but the operators panicked and took unsuitable, inopportune actions, causing the incident to be transformed into a serious accident (though nothing like the Chernobyl accident, which resulted from a deliberate violation of security rules).
In systems involving risk, using scales of risk, we can structure the controllable zone by distinguishing incidents from accidents or, more precisely, by defining for each sub-zone the suitable PSym processes. Once the red line has been crossed, it is simply a case of crossing fingers and trusting that the engineers designed the system correctly. However, it is necessary to note that moving into the red zone does not automatically signify a disaster. It simply means – and this is the real danger – that any incident, even a minor one that would be benign in another context, can trigger the disaster; for those who are not familiar with the problem of system safety, this can generate a feeling of false security and then lead to risky behaviors, as was the case at Chernobyl, where nothing was controlled any longer at all. This type of situation relates to the self-organized criticality of the physicist P. Bak, which we mentioned in Chapters 2 and 3. A last remark concerning the relationships between the specific physical space and its abstract, digitalized virtual image. In a digitalized system, it is always possible to go backwards because, when they are well designed, computer science and the software that materializes it are deterministic and reversible, by definition and by the very construction of computers – which makes digitalized systems very amenable to post-mortem analyses in the event of a breakdown. Unfortunately, the real world is not reversible, at least not at zero cost (energy + duration): a resource consumed is indeed consumed; it is no longer available, it has disappeared from the stock. What is outlined here is the very articulation between the “capability” logic relating to resources in real physical space and the computing logic, which also has its own capability constraints, of an entirely different order. The distinction between these various logics – a difficult exercise which requires solid knowledge of logic, including temporal logic, and a lot of experience – is one of the essential tasks of a system architect. It is up to them to understand correctly, and in depth, the capacities expected by the users (the spatio-temporal dimension), what can be done with technology at instant T (and in the foreseeable future), and what the engineering teams can do given their levels of maturity. A view of the system that is too focused on computer science – since computer science is now all around us – can mask the architect’s sense of the physical, which will automatically lead to serious disappointments, if not disaster. These risks have been clearly identified by our colleagues at EDF/RTE. To conclude this chapter, we will say a few words about the energetics of processes.
7.5. Energetics of processes and systems
All systems are plunged into a spatio-temporal physical environment and, as such, cannot escape the laws of nature, in particular those of thermodynamics. In order to transform, even in a computer, and in order to communicate and interact, even
in fiber optics, energy and physical media are required for these operations. Here, transducers play a fundamental role (refer to the authors’ website). All the systems that we mention in this work are “open” systems in the thermodynamic sense, meaning they constantly exchange with their environment. They are, in addition, “dissipative” (refer to the works of I. Prigogine17 on the subject, which earned him his Nobel Prize in 1977) and not conservative, because their “equilibrium” is only stable thanks to the energy flows that pass through them. They are all excellent examples of “dissipative structures” in the sense that I. Prigogine gave to this term, created entirely by human intelligence. To illustrate this fundamental concept in the energetics of systems, we can return to the image of the set {boiler, turbine, alternator} that we saw previously, as found in all EDF power stations (Figure 7.18), which demonstrates the various flows.
Figure 7.18. Specific dissipation system. For a color version of this figure, see www.iste.co.uk/printz/system.zip
17 Refer to his small work, Introduction à la thermodynamique des processus irréversibles, Dunod, 1968.
The installations have giant proportions: several tens of meters in length for the turbine/alternator groups, several dozen metric tons in weight, with paddlewheels of the order of 3 meters in diameter turning at 1,500 rotations/minute18. Each small blade requires machining that is perfect at the scale of microns, because the blades function like the wings of airplanes: the flow of steam must be correctly channeled, without turbulence, in order to produce the stated energy – in total, a maximum power of 1.6 gigawatts. In all artificial systems – and a fortiori for the “natural” systems that living beings constitute and that we have decided not to discuss, for reasons already mentioned – we can still reason in terms of an energy balance. In a dissipative system, the system is traversed by flows of energy which give the system its structure, with two complementary results: on the one hand, the creation of order, which has a cost; on the other hand, the production of disorder, which also has a cost, but which in total compensates for the creation of order – a creation which, taken in isolation, would be equivalent to violating the second principle of thermodynamics. This is the paradox of Maxwell’s demon (see Figure 2.3), which was not resolved until the works of L. Szilard19, in 1929, and of L. Brillouin20; the physics of information is the extension of these considerations. Today, with dissipative structures and the theory of information, the energy balance is perfectly clear. Figure 7.18 gives an interpretation of this balance in the macroscopic structure of an electrical energy production plant, such as those operated by EDF. An abstract representation of this device – which is, in fact, an energy transducer – is given in Figure 7.19. In this diagram, the disorganized/disordered thermal energy produced by the boiler – whether from fossil fuel (fuel oil/gas) or nuclear – is partially converted into structured mechanical/electrical energy that can then be converted and transmitted, as we explained in the EDF/RTE case study. The resources immobilized by the structure of the power station itself – the bricks and mortar, the machines and those who operate them – contribute to the transduction at their own scale, for the specific installations are enormous, as we can see first-hand when visiting a nuclear site.
18 At this speed, the outer blade tips move at around 850 km/hour and are subject to a significant centrifugal force that is easy to calculate; if there are vibrations, accidents will be frequent.
19 Refer to the Wikipedia entry “Leo Szilard”.
20 Refer to L. Brillouin, Science and Information Theory, Academic Press, 1956; and an interesting analysis by L. de Broglie, Sens philosophique et portée pratique de la cybernétique, NRF, 1953.
Figure 7.19. Abstract dissipation system. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The “organized/structured energy” (whose informational image is a power, a voltage, a current, etc.) has a production cost, but it is above all well-conditioned “order” that can then be sold to user consumers, therefore leading to gains for the producer. The emissions, assimilated in this abstraction to disorder, are returned to the environment, which will take charge of “digesting” them (at least, we hope). In practice, this is pollution, which can take various forms and which in any case needs to be removed – with the fundamental question of who will, in the end, pay the bill for the total cost. The order created benefits everyone, and this is largely preferable to open burning, as practiced individually by disadvantaged African populations, which creates widespread pollution while devastating the meagre forests of sub-Saharan Africa and accelerating the desertification of the territory (refer to the “tragedy of the commons”21). Concerning disorder, it is as yet unknown whether it must be fully charged to the production cost (which is equivalent to mechanically increasing the price of the order), or whether, on the contrary, it is part of the common “goods” (in this case, badly named) to be paid for by everyone – including nature, which procures the corresponding fossil energies for us, including fissile uranium, which is produced by natural processes22. Energetics applied to systems sciences tells us that, in any case, it is necessary to compensate in some way to guarantee equilibrium.
21 To begin with, see https://en.wikipedia.org/wiki/Tragedy_of_the_commons.
22 Refer to the Wikipedia entry “Le réacteur nucléaire naturel d’Oklo”, a uranium mine located in Gabon; B. Barré, L’énergie nucléaire, published by Hirlé, 2007, a work intended for the general public.
As we are all aware, the energy demand is showing no signs of decreasing: to begin with, there are China and India, which between them will very soon have 3 billion inhabitants; then there is Africa, for whom it is not clear why they should tighten their belt if Westerners are not doing the same. To find the right equilibrium, the PESTEL factors must be managed at the scale of the planet – which itself operates like an immense solar energy transducer – and of an internal geothermal energy that undergoes periodic convulsions: volcanoes, earthquakes, tsunamis, hurricanes, etc. All this energy micromechanics does, however, comply with the latest results of thermodynamics, the law of maximum entropy production (MEP), which certain physicists do not hesitate to place on the same level as the first two principles, namely: (1) the law of conservation of energy; (2) the Carnot/Clausius principle, which stipulates that heat moves spontaneously, in an irreversible manner, from hot to cold, and which is the basis for all our thermal engines. Entropy (from the Greek ἐντροπία, entropia, which means transformation or evolution) is a state quantity discovered by R. Clausius in the second half of the 19th century, defined as $S = Q/T$, where Q is a certain quantity of heat that we are able to measure with suitable instruments, and T is the absolute temperature (0°C corresponds to a temperature of 273 K). A certain quantity of energy Q does not have the same “value” on a hot source as on a cold source. Clausius’s formula shows that a cold universe has a very high level of entropy, hence the formulation of the law in its evolutionary form – “entropy can only increase” – which expresses the fact that the universe has been cooling down since the initial Big Bang, 13.7 billion years ago. But from this flow, organized structures can emerge, like living organisms or our digitalized systems. In our physical world, the lowest known temperature is found in the furthest reaches of the visible universe, measured by the COBE (Cosmic Background Explorer) and Planck satellites, at approximately 4 K.
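As a quick worked instance of Clausius’s formula (the numbers are chosen purely for illustration): let a quantity of heat $Q = 1000\ \mathrm{J}$ pass spontaneously from a hot source at $T_{\mathrm{hot}} = 600\ \mathrm{K}$ to a cold source at $T_{\mathrm{cold}} = 300\ \mathrm{K}$:

$$\Delta S = \frac{Q}{T_{\mathrm{cold}}} - \frac{Q}{T_{\mathrm{hot}}} = \frac{1000}{300} - \frac{1000}{600} \approx 1.67\ \mathrm{J/K} > 0$$

The same Q thus carries more entropy on the cold source than on the hot one – the sense in which it does not have the same “value” – and the positive balance is precisely the irreversibility stipulated by the Carnot/Clausius principle.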
Dissipative phenomena have been well known to physicists and chemists for nearly two centuries, and there are multiple manifestations of them at our scale. The phenomena of tides and the currents that accompany them are periodic oscillations of billions of metric tons of ocean water, caused by the relative movements of the Earth, the Moon and the Sun over a lunar cycle. Newton was the first to analyze this phenomenon correctly, in which gravitational forces are at play. It is possible to recover part of this energy using tidal power stations and water turbines.
At the mouth of certain rivers, depending on the geometry of the zone, the antagonistic movements of the river and the tide can lead to the spectacular phenomenon of the tidal bore: a single wave, a quantum of energy measuring several meters, which can ascend the river for many kilometers (to Rouen, on the Seine) before melting away into the river and onto the banks. The Earth’s crust (Figure 7.20) is the seat of giant convection movements whose visible effects are volcanoes, earthquakes, plate tectonics and the continental drift that was not finally accepted until the 1950s. Some time was required to understand that the convection cells caused by the thermal gradient – approximately 5,000 K – are analogous to the Bénard eddies that can be observed in the laboratory in a large number of phenomena related to turbulence. The heat transmitted in this way escapes the Earth, moving towards the sidereal void, which is at 4 K.
Figure 7.20. The Earth as a thermodynamic system. For a color version of this figure, see www.iste.co.uk/printz/system.zip
This flow leads to a geometric structure of eddies whose effect is to accelerate transmission by convection, which is much more efficient than conduction. The “emerging” structure – namely these eddies, which according to the prior theories should/could not exist – is the dissipative structure that facilitates the re-establishment of equilibrium, transferring heat from the hot source towards the interplanetary cold and thereby complying, in this context, with the second principle of thermodynamics. What this third law says is that, in a situation of this kind, the intermediary physical “system” between the hot source and the cold source, which assures the transmission of heat, will organize itself spontaneously so as to maximize the speed of transfer from the hot source to the cold source, in such a way as to reach an equilibrium situation as quickly as possible (time therefore plays an important role in the phenomenon). This is why the law states that the transfer of entropy is maximal, where cold plays the attractor role, since entropy tends to infinity as T tends towards 0 K. This law was formulated in 1988 by the American physicist R. Swenson, who provided a justification in the language of statistical thermodynamics23. This is essentially new knowledge because, if this law is true (to date, it has not been unanimously accepted), it means that the black box treasured by R. Thom is the site of dissipative phenomena where it is possible for structure to appear; the black box therefore becomes less black.
23 This thesis is still a subject of debate in the community of physicists: see https://en.wikipedia.org/wiki/Principle_of_maximum_entropy.
With respect to the phenomena we observe, dissipative structures are improbable and rare phenomena – like tidal bores, which only appear under certain conditions (in particular, the geometry of the river mouth); even near the Oklo natural nuclear reactor in Africa, it cannot have been pleasant to live 1 or 2 billion years ago. They are therefore rich in information, in the sense of Shannon’s theory of information, and if we understand them properly, they enrich our stock of knowledge. It is even better if, thanks to our knowledge of engineering, we know how to construct them (or reconstruct them) – the organization of the engineering team being itself a dissipative phenomenon – to the great benefit of user consumers, who will have more energy available to them (order has value, disorder does not). Our ancestors who constructed the cathedrals equipped all the rivers of France with watermills, thus economizing much human effort previously expended in milling, hammering, sawing, forging, etc., and manufactured all kinds of objects that made their lives more pleasant. These watermills are the ancestors of our modern power plants, in which the paddlewheels still play the role of the organizer
of the energy transducer, but now with a greater efficiency whose geometry would have intrigued them without surprising them, because they knew their Euclid by heart. A modern energy generation plant, from the point of view of the theory of information, is a highly improbable structure – one which has not resulted from chance, but has instead been devised by human intelligence, which in engineering projects has been able to organize and integrate the knowledge essential for the implementation of physical phenomena, patiently observed and understood. The expertise of technicians and engineers has also played its part; under the impetus of the great educational reformers of the Convention – such as Abbé Grégoire, founder of the CNAM, or G. Monge, founder of the École polytechnique – they have in the meantime become knowledgeable engineers. Thanks to this knowledge, correctly assimilated and rigorously transmitted, we are able to construct devices that implement dissipative phenomena with an extraordinary effectiveness from the point of view of this third law. The silicon circuits in our computers are a magnificent example of the concentration and integration of knowledge that allows them to exist and to provide the services that we know. Silicon is abundant in nature, but not in the usable form required for manufacturing semiconductors. Galena crystals – in fact, lead sulfide with a few impurities – have this useful property, but lead sulfide is not abundant enough to support the explosive development of the digital industry. To become a semiconductor, natural silicon must undergo a double transformation: first, crystals of a certain size must be obtained (currently, cylindrical bars 30–40 cm in diameter); these must then undergo an extreme purification by the method known as zone refining, which conserves the structure of the crystal while concentrating the impurities in order to eliminate them. Without this method, which was discovered almost by accident while studying diffusion phenomena, there would be no silicon chips, and perhaps no digital industry. But the most remarkable part is still to come, because the transistor effect is a collective phenomenon. In current technologies, a transistor gathers a few million atoms of silicon in dimensions of 20–30 nanometers. Each atom taken individually is subject to the hazards of the quantum world, and without quantum mechanics it is impossible to understand how “it works”. But taken collectively, our transistor becomes a “Newtonian” object with perfectly deterministic properties. We therefore all have within reach an extraordinary phenomenon of emergence: for individual atoms, the interesting phenomenon not only does not exist – quantum mechanics tells us that it cannot exist. Collectively, however, the quantum hazards are compensated for by a statistical phenomenon, whose explanation is found in good works on the physics of condensed matter, in such a way as to produce a perfectly stable and repetitive collective phenomenon that makes the circuits constituted in this way programmable, knowing that a single circuit today contains 5–10 billion transistors. By correctly arranging the transistors,
By correctly arranging the transistors, we will also be able to guard against the hazards due, this time, to the cosmic radiation that permanently passes through us, or to the alpha particles of natural radioactivity. Intimate knowledge of matter – in this case correctly “dosed” silicon – and the layered architecture of the components and of the entire computer, including its software systems, allow what can almost be called a miracle to exist, so improbable is it (see Figure 3.3). All these structures are dissipative, and the circuit only functions if it is correctly supplied with energy, which heats it up. As with any body that conducts electricity, this heat must be evacuated so that the circuit maintains its useful properties. Its internal, abstract structure is described by suitable languages well known to the relevant specialists, such as the VHDL24 language; we have even succeeded in separating the silicon foundry itself from the logical design of the circuit that attributes useful calculation properties to it (thanks to that new type of transducer known as a silicon compiler, which appeared in the 1980s). Here we have a perfect example of the notion of logical design implemented by von Neumann and/or Turing in the design of the first true computers, which demonstrates beyond any possible argument that we can sometimes obtain a perfect separation of the logical world from the physical world, while conserving the interactions that will materialize this logic in real materials requiring some structural organization. Thus we can fully generate new properties, which only exist by virtue of “more is different”25, like the artificial calculations that only exist in our computers. We can state that all this reinforces the veracity and solid foundation of the Church–Turing thesis, of which the computer itself is an existential “proof”. We thus have a general methodology for computing machinery, based on the implementation of abstract machines with their specialized languages (a new form of energy, according to the physics of information), required for the fabrication of technical objects/systems that are useful to users and usable by the engineering teams; learning these new approaches is a linguistic exercise – it is de facto a new language. The logical design exists independently of the material/physical substrate, but without this substrate it is “dead” intelligence or “dead” information; conversely, without this integrated intelligence, the substrate is simply an inanimate body! Without the modified silicon of our transistors, our algorithms would be nothing more than mathematical curiosities.
24 See https://en.wikipedia.org/wiki/VHDL for an initial insight.
25 See http://en.wikipedia.org/wiki/Philip_Warren_Anderson, title of a famous article by the physicist and Nobel Prize winner P.W. Anderson; also refer to the work by R. Laughlin, also a Nobel Prize winner, A Different Universe – Reinventing Physics from the Bottom Down.
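The statistical compensation invoked above is, at heart, the law of large numbers: the fluctuation of an average over N independent microscopic events shrinks as 1/√N. The following minimal sketch (Python; the ±1 “quantum” events and the atom counts are purely illustrative, and nothing here models real silicon physics) shows why a few million atoms suffice to turn individual randomness into a practically deterministic collective quantity:

```python
import random

def fluctuation_of_mean(n_atoms: int, trials: int = 100) -> float:
    """Standard deviation of the mean of n_atoms independent +/-1 'quantum' events."""
    means = []
    for _ in range(trials):
        total = sum(random.choice((-1, 1)) for _ in range(n_atoms))
        means.append(total / n_atoms)
    avg = sum(means) / trials
    return (sum((m - avg) ** 2 for m in means) / trials) ** 0.5

for n in (10, 1_000, 100_000):
    # The collective mean stabilizes as 1/sqrt(n): emergence of determinism.
    print(f"{n:>7} atoms -> fluctuation of the collective mean ~ {fluctuation_of_mean(n):.4f}")
```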
The fundamental invariant in this approach is the maintenance of the symbiotic relationship between the users, the engineering and the hardware/software part of the system (as we have defined it), using the three corresponding models which are, in fact, three abstract machines that must operate together in perfect coherence. This is at least true in theory, given the limitations inherent to their energy/material representation and to the communication phenomena that we know to contain “noise”.

Complexity is a disorder that can only be organized by the injection of sufficient energy flows, at the right time and in the right places; this is what the laws of thermodynamics state. This flow must be distributed across the three components of Simondon’s triplet, on the trajectories {TU, TS, TE} and their combinations (see Figure 6.2 and section 6.1), in such a way as to “align” them – to use a term from enterprise architecture – meaning to organize their interactions so as to maximize the creation of order between them while conserving their capacity to evolve as a function of the environmental hazards (PESTEL factors). An increase in complexity without an energy counterpart to create the compensatory order that is essential to maintain the service contract will be mechanically fatal to the durability of the system. Unorganized, or poorly organized, complexity is always expressed by an increase in errors, in particular in the interaction part of the trajectories {TU, TS, TE}; the errors that are not compensated for progressively “poison” the system, which degenerates slowly or rapidly, in compliance with the Lotka–Volterra model, of which they are the hereditary term. As long as we know where and how to compensate, the system lasts at its maturity limit (its “life” is made longer), supposing that it can still be modified without being completely reworked; here we see a variant of the “law” known as the Requisite Variety Law, which is itself, at heart, simply a variant of Shannon’s second theorem, the “noisy channel” theorem – the most fundamental one, because in order to evolve, a code must never be saturated. Certain physicists consider this model to be a primitive version of the third phenomenological law of thermodynamics mentioned above, whose importance we are progressively becoming aware of because it explains and reinforces the general notion of energy equilibrium by including an information aspect. However, it is important to note that the formal analogy between Shannon’s formula, which defines the “entropy” of sources, and Boltzmann’s formula does not signify causality. This is a step that some take perhaps a little too quickly. Boltzmann’s formula is “energetic” – it refers explicitly to an energy concept – whereas Shannon’s is an expression purely in terms of numbers!
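To make the last point concrete: Shannon’s H = −Σ p·log₂ p is a dimensionless count of bits per symbol, whatever the source happens to encode. A minimal sketch (Python; the probability distributions are arbitrary illustrative values):

```python
from math import log2

def shannon_entropy(probs):
    """H = -sum(p * log2 p): a pure number (bits per symbol), no physical unit."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
print(shannon_entropy([0.25] * 4))                 # 2.0: maximal for 4 symbols
# Boltzmann's S = k_B * ln(W) carries joules per kelvin through the constant k_B;
# the analogy with Shannon's H is formal, not causal.
```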
8 Interoperability
The concept of interoperability is at the very center of the justification for writing this work. The emergence of this new concept dates back some time: in a timeframe of scarcely 10 years, in the 1980s, we witnessed the progressive though rapid replacement of “large” mainframes and their environment of thousands of “dumb” terminals (dumb in the sense that the user cannot program them; they can only be configured) by groups of machines of varying power, organized to cooperate in architectures described as client/server. Communications networks are at the center of these new groups – an architecture that for a while was described as network-centric, to really emphasize their importance. The processing power that until then was concentrated in the central system, with up to 8 or 16 processors but not many more, for performance reasons, was from then on able to distribute itself across all equipment containing a programmable processor, thanks to microelectronics and the VLSI (very large-scale integration [of components]) revolution. Terminals were replaced by workstations and/or the PCs that came into use at that time: individual workstations that are all programmable. The central system was split up over potentially tens or hundreds of servers, which can specialize as data servers, application servers, safety and autonomic manager/back-up servers, directory and authentication servers. The power provided, as long as the software architectures allow it, is virtually unlimited – or “on demand”, as they say – as a function of the requirements of users, who could also choose to obtain their equipment from a different supplier. The computing platform becomes heterogeneous; a heterogeneity that will have to be managed in order to guarantee coherence and reliability from end to end, and to facilitate migrations.

For both system users and designers, this is a conceptual revolution. Applications that were initially designed for a given type of hardware must increase in abstraction in order to become more general and install themselves on the resources available on the platform. In applications, a separation is required between what is specific to the platform and what can be detached from it because it is specific to the application, which has its own separate operation.
Interconnection capacities become generalized or more specialized, as a function of the requested information flows and of safety problems. Transactional systems, initially designed to interconnect the terminals and the databases of the central systems, have undergone a major change and have become generalized exchange systems, with suitable protocols constructed above the ISO and/or TCP/IP layers, which allow a massive interconnection in asynchronous real time: this is middleware, or even the ESB (enterprise service bus), including numerous services such as those that guarantee system safety and autonomy. This is, as they say, the moment for architects, who have a twofold role: organizing this new set by identifying abstractions that are independent of the platform’s physical structures – the semantics, in other words the what of the functions that fulfill the users’ requirements – and organizing the platform so that it can evolve and adapt to the offers on the market while complying with the service contract negotiated with the users – in other words the how. In order to satisfy these requirements, the architects will have to invent new tools and new methods to organize the complexity caused by these evolutions, including those of the exchange model and the associated languages, which become the central structuring element of this new state of affairs – a state of affairs with virtually limitless possibilities, restricted only by our own capacity to organize ourselves to ensure that correct use is made of them. We will outline the details of this in the two fundamental chapters, 8 and 9.

The revolution of the interoperability of systems: to begin with, a diagram (Figure 8.1) and the image of a control mechanism which implements J. Watt’s governor (Figure 8.2).
Figure 8.1. Mechanical and human interoperability1

1 Many more realistic images are available on Wikipedia, in the CNAM archives, etc. Refer to: https://en.wikipedia.org/wiki/Watt_steam_engine, or https://en.wikipedia.org/wiki/Centrifugal_governor.
Figure 8.2. Interoperability with an end purpose
The interoperability of systems is a new term, forged in the 1980s and 1990s in systems engineering circles. It refers to things that are old and already known, though without necessarily knowing in detail how they worked, nor how to master their engineering. The problem of interoperability is the organization of cooperation between organisms and/or equipment of all kinds with an objective in sight, focused on action, that none of them could reach if they remained isolated. This is fundamentally a problem of semantics. Living organisms cooperate and provide each other with mutual services which improve their capacity for survival. In biology, these are the symbiosis mechanisms, which play an essential role – for example, the symbiotic relationship between pollinating insects and flowering plants. We ourselves depend on one: there is a symbiotic relationship between our digestive metabolism and the billions of bacteria that we house in our intestines, without which full digestion would be impossible. In human systems, extended by the tools of all kinds invented over the course of the ages by our ancestors, there are also all kinds of symbiotic relationships and cooperations with which we are all familiar. J. von Neumann, as we recalled in Chapter 1, invented game theory specifically in order to model certain fundamental behaviors that structure the interactions between economic agents. This theory, despite its limitations, has spectacularly advanced our understanding of the basic mechanisms that structure the exchanges between organizational actors and/or humans, as demonstrated by the number of Nobel Prizes in Economics attributed to economists who have used it, such as the two Nobel Prizes awarded to French economists – M. Allais, more than 30 years ago, and, in 2014, J. Tirole – both of whom were experts in this theory.
For the US, this concerns at least 10 to 15 economists, such as Thomas Schelling of RAND, and many others. In the artificial systems that are at the center of this work – because here we have total control over the engineering – the cooperation between the constitutive equipment and/or the human operators that command and control it is relatively old, as shown in Figures 8.1 and 8.2, but it long suffered very serious limitations due to the technology available. Figure 8.1 represents the principle of manufacturing at the end of the 19th Century. The true rupture, after the generalization of electricity in the 1920s–1930s, came in the 1990s, when each piece of equipment was fitted with one or several computers; these items of equipment were then able to interact with each other and exchange information, via wired and/or Hertzian (radio) electronic connections, thus turning into reality one of the dreams maintained by N. Wiener and his colleagues in early cybernetics. The transfer function is completely dematerialized and becomes a pure abstraction carried out, or reified, by a “program” in the computing sense of the term. Awareness of this was wide-reaching, but we can say without insult to the past that it was focused on defense and security systems, like a reminiscence of the firing systems that had been the source of N. Wiener’s reflections and of the continuation of the SAGE and NTDS2 systems. All these systems operate in coordination with each other and constitute what has been called, since the 1990s–2000s, a system of systems3: systems interlinked in such a way as to constitute a new organized collective entity, capable of fulfilling assignments that none of the systems taken individually could carry out separately, and where the interaction can, as in the case of the centrifugal governor, also be operational, but this time totally dematerialized. We are in a case of emergence in the strictest sense of the term – emergence organized by the architect of the overall system, who must control all its aspects, including, very obviously, the undesirable ones: human errors or otherwise, breakdowns, environmental hazards, malice, all of which will most certainly occur. With these systems as a common theme, we are going to present the problem of interoperability – a problem that is today widespread, having become significant given the massive computerization of equipment of all kinds, of companies and of society, that we have been experiencing since the 2000s.

2 Refer to SAGE: https://en.wikipedia.org/wiki/Semi-Automatic_Ground_Environment and NTDS: https://en.wikipedia.org/wiki/Naval_Tactical_Data_System.
3 Refer to the definition of this type of conglomerate given by M. Maier in his paper on architecting principles for systems-of-systems (SoS), available on the author’s website.
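To fix ideas on this dematerialized transfer function, here is a deliberately naive sketch (Python; the plant model, gain and numbers are invented for illustration) of a Watt-style regulator reduced to a few lines of code. Like the real centrifugal governor, a purely proportional correction stabilizes the speed but leaves a small steady-state offset (the “droop” well known to governor designers):

```python
def governor(speed: float, setpoint: float, gain: float = 0.5) -> float:
    """The transfer function, reified as a program: proportional correction."""
    return gain * (setpoint - speed)

speed, setpoint = 90.0, 100.0
for tick in range(12):
    speed += 2.0                        # constant disturbance (load torque)
    speed += governor(speed, setpoint)  # dematerialized centrifugal governor
print(round(speed, 1))  # ~102.0: regulated, with a residual droop above 100
```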
8.1. Means of systemic growth

A good way to present the problem of the interoperability of systems is to start with the notions introduced in Chapters 5 and 9, in the logic that belonged to G. Simondon (in particular, see Figure 2.4). The technical system/object is made up of its communities of users/end-users and of its technical/physical component, itself comprising a “hard” material part – also known as brick and mortar – and a “soft” programming part, in other words the software, which made its first appearance in systems from the 1950s onwards, with suitable human/machine interfaces; the whole forms a coherent, or supposedly coherent, ensemble at a given instant of its history. As we have briefly discussed in Chapters 6 and 7, all this moves and is transformed at a slower or faster pace, given the evolution of the spatio-temporal environment which surrounds all systems. We have characterized this environment using the acronym PESTEL (see section 8.3.3), which allows us to grasp the main constraints of evolution to which the systems we are interested in here are subject, independently of the “fight for life” between technical systems themselves, rendered remarkably well by M. Porter’s well-known diagram describing the forces4 that structure this evolving dynamic and the competitive advantages that result from it when everything is neatly organized. In many ways, this evolving dynamic presents numerous analogies with what we can observe in the living world; but by methodological rigor, we will not use metaphors taken from something more complex, namely the living world, to “explain” something that is less complex5 – the world of our artificial systems – even though these are now indissociable from their human components, as pointed out by G. Simondon. What is remarkable in the current evolution of these systems is the “real time” component, which has become important and is present everywhere thanks to the technical capacities of the means of communication and to the intermediary software known as “bus” software or middleware. The adaptive cycle – with respect to the systems known to the “founding fathers” and their successors, including J. Forrester with his model describing the means of industrial growth6 in the 1960s–1970s – has become sufficiently short/flexible (in marketing jargon the word “agility” is used, which from the point of view of engineering means nothing; what matters is the how) that we can properly talk about real time.

4 For an introduction, refer to: http://en.wikipedia.org/wiki/Porter_five_forces_analysis.
5 This was recommended by J. von Neumann, as we have seen, but it was also recommended by R. Descartes in his Discourse on the Method, precept no. 3 (see the edition with comments by E. Gilson, published by Vrin). In a certain way, we can say that R. Descartes anticipated or foresaw the theory of logical types that came to light three centuries later with B. Russell, at the beginning of the 20th Century; a theory whose point of culmination was the programming languages of the 1980s and 1990s (PASCAL, Ada, then the object languages Java, C# and Python), of which the theory of types is one of the foundations.
6 Refer to his work Industrial Dynamics, MIT Press, 1961; reprinted by Pegasus, 1999.
Real time must be taken here in its true sense, meaning compliance with a time schedule, determined by the adaptation capacities of the entities present and by their situation in the system’s environment. A community of users will react as a function of the interaction capacities specific to that community and of the motivations of its members, quality requirements included, which thus defines a kind of quantum, in duration and/or in transformation capacity, that characterizes it (a human “energy” expressed in hours worked, for lack of a better measure of human action in projects7). Figure 8.3 demonstrates the symbiotic aspects that are specific to the analysis made by G. Simondon. If, for whatever reason, the connections that maintain this symbiosis – in other words the system reference framework – break apart, then the system progressively disappears from the landscape. The system and its constitutive communities are plunged into an environment where, on the one hand, the uses evolve and, on the other hand, technological innovation creates new opportunities, therefore leading to competition between existing systems and/or systems to be created, according to the classical diagrams set out by M. Porter in his works8.
Figure 8.3. Dynamics of a technical system. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In a certain way, we can say that the system is “fed” by its environment: either by new users who see an advantage for themselves in using the system, or by new functions made available to the users by the community that provides the engineering, which must therefore be equipped to ensure that what it does complies with the expressed, or latent, requirements of its communities of users.

7 Refer to our work Estimation des projets de l’entreprise numérique, Hermes-Lavoisier.
8 Among others: Competitive Advantage, Competitive Strategy (see footnote 4).
Whatever the service contract, the ternary relationship {users, hard/soft physical system/component, engineering/end-users} – Simondon’s triplet {U, S, E} – is indeed symbiotic. Understanding the dynamics of the whole, quality included, and of its organization is essential to guarantee the stability and longevity of the system considered as a whole, including its sociodynamics. We will now analyze this in detail.

8.2. Dynamics of the growth of systems

If we take the example of an electrical system, we see that the growth of the physical component of the system is firstly induced by the concomitant increase in the number of users and in the requirements created by this new technology, revolutionary for its time, it being understood that the engineering and technology put in place will be able to – indeed will have to – accompany, or even anticipate, this growth. In the case of an electrical system there will simultaneously be, regarding the physical component: (a) an increase in the intrinsic capacity of production plants, up to the latest 1,630 megawatt EPR nuclear reactors (at Flamanville); (b) an increase in the number of production plants; (c) a complete, progressive interconnection of the transport network. To adapt supply to demand, a whole range of equipment with a variety of powers and capacities must be available, so that supply can be adjusted as finely as possible, as was briefly explained in the case study of the electrical system. The variety of the equipment also induces a form of redundancy, which plays a fundamental role in guaranteeing the system’s service contract, via the quality system – considered here to be indissociable from the system itself – and the associated engineering, with, among others, the aspect of system safety that is essential for the survival of the overall system and its parts.

From this example, we can infer a general law that we will come across again in the physical component of all systems. Physical growth takes place in three ways: (1) growth in the size/power of the items of equipment that constitute the system; (2) growth in the number of items of equipment; (3) growth in the capacity of the equipment to interact so as to optimize the available resources.

The no. 1 factor of growth comes firstly from the users’ demand rather than from the supply. A new offer, resulting from some innovation, which at one time or another is not matched by a requirement from the users, cannot cause a growth dynamic by itself.
Once this initial dynamic has set in, the engineering and technology will trigger their own related dynamic of development, exploitation/operation and maintenance, to support both the supply and the demand, notwithstanding the intrinsic limitations that we will analyze in section 8.4. We can therefore represent the phenomenology of the growth of systems as in Figure 8.4.
Figure 8.4. Phenomenology of the growth of systems
Studying the “growth” of a system therefore comes down to studying the interactions and reciprocal influences between the three components of the symbiotic relationship that defines the system. Each of the components, according to its own specific logic, is subject to ageing/wear phenomena which generally follow the S-shaped Lotka–Volterra dynamic that we mentioned in Chapter 7. In each of these components there is an aspect that, for lack of a better term, can be described as “mass”, with which a characteristic “inertia” is associated. A community with many members, with a weak structure and lacking in coherence will, all other things being equal, be difficult to set in motion in a given direction; the forces and energies needed to correct the trajectory will be more intense and, because of this, more difficult to control. Intuitively, it appears obvious that any system must comply with a structural invariant which defines an equilibrium between the three components, thus allowing it to “hold sway” when faced with PESTEL environmental hazards – in particular everything related to system safety. This invariant defines, in a certain way, a conservation law that maintains the system in an operational condition as long as this fundamental invariant is complied with; in other words, as long as the constraints which define it are respected.
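The S-shaped dynamic invoked above is, in its simplest form, logistic growth – the one-species member of the Lotka–Volterra family. A minimal sketch (Python; the rate and capacity are arbitrary illustrative values):

```python
def logistic(x0: float, rate: float, capacity: float, steps: int):
    """Discrete logistic ('S-curve') growth: dx = rate * x * (1 - x / capacity)."""
    xs, x = [], x0
    for _ in range(steps):
        x += rate * x * (1 - x / capacity)
        xs.append(x)
    return xs

# Slow start, rapid growth, then saturation at the capacity ("maturity") limit.
for t, x in enumerate(logistic(x0=1.0, rate=0.5, capacity=100.0, steps=24)):
    if t % 4 == 3:
        print(f"t={t + 1:2d}  x={x:6.1f}")
```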
To complete what has already been written in Chapter 4, in the case of an electrical system we have a C3 type potential that must be interpreted as indicated in Figure 8.5.
Figure 8.5. Interpretation of the lines of potential. For a color version of this figure, see www.iste.co.uk/printz/system.zip
If the energy requirements of users cannot, or can no longer, be fulfilled, for any reason whatsoever (loss of equipment, fluctuations in climate which affect the “green” component of the system), the supply/demand point of equilibrium will move to the left of the risk diagram and out of the optimal zone. If nothing is done by the operators running the system, who are part of the engineering teams, the autonomous security component will abruptly, and without warning, shed the users’ load in order to preserve the equipment, which would otherwise risk destruction in the ensuing disequilibrium. If the physical component and the engineering teams are overdimensioned, there is no longer a security risk (even though the complexity related to sheer numbers is in itself a security risk if it is not properly organized), but an economic risk can appear, because the cost of production becomes too heavy.
If the users are in an “open” energy market, they can then go and look elsewhere, which further increases the disequilibrium. Taking into account the inertia of these various components (10 years of construction works for a nuclear production plant such as Flamanville, or for an EHV line of up to 700 kilovolts), we understand why we must not get ahead of ourselves by making inopportune decisions which, given the inertia involved, will turn out to be fatal 10 or 15 years later – therefore too late, when equilibrium can only be re-established with difficulty (see section 7.2.3). In large-scale systems, it is generally very difficult to obtain an analytical expression of the optimum of a system and of its invariant, but well-constructed systemic modeling can provide a qualitative expression, which is already better than nothing. Failing this, it will be intuition or worse – political-economic constraints, indecision and/or the wishful thinking of the “communicants” focusing on short-term constraints – that will prevail, with the risks that are known (refer, among many other possible examples, to the subprime mortgage crisis and the economic and social consequences that followed; or to Chernobyl, a power station managed by a geologist who was academically gifted but incompetent on the subject of nuclear risk).

8.2.1. The nature of interactions between systems

When two systems come close to each other, for any reason (mutualization and specialization of systems; pooling of resources that can be shared; fusion of companies; integration of subcontractors into the “extended” company; multinational combined operations in the case of C4ISTAR defense and security systems; etc.), the systemic situation of their relationships must be represented as in Figure 8.6. Situations of this kind occurred towards the end of the 1980s, then became generalized in the 1990s, when they changed scale with the large-scale spread of processing of all kinds arising from the development of information and communication technologies (ICTs – and soon NBICs, with “intelligent connected objects” and 5G).
Figure 8.6. Interoperability of systems
Making systems interact with each other obviously requires organizing the interactions of the physical components over the networks, as some had already correctly understood in the 1970s, but it especially involves making the communities of users and the engineering teams interact – in other words the behaviors, uses, methods, expertise, etc. – which brought the whole semantic problem, until then not distinguished as such, to the forefront of concerns. This was a true shock for many, and a true rupture for all. If we only consider the combinatorial aspect, each of the three components of a system S1 can address each of the three components of S2, and inversely – in other words a total of 18 arrows when two systems come close to each other, each arrow materializing a type of flow which can itself be expressed in several physically different flows. In the case of three systems or more, all the subsets must be considered, which means that the combinatorics “explodes” exponentially and becomes totally unmanageable if the architect of the whole set does not manage to organize the interactions by means of a suitable global architecture. However, the architect must agree to take on this role.

REMARK.– If a set includes N elements, the set of its parts grows as 2^N; if this new set is reinserted as a part, the combinatorics becomes 2^(2^N), and so on at each iteration.
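A quick numerical illustration of this explosion (Python; the three components per system are Simondon’s {U, S, E}, and the arrow count reproduces the 18 arrows of the two-system case):

```python
from math import comb, log10

def arrows(n_systems: int, components: int = 3) -> int:
    """Directed component-to-component flows between every pair of systems:
    each of the 3 components of one system can address each of the 3
    components of the other, in both directions (3 * 3 * 2 = 18 per pair)."""
    return comb(n_systems, 2) * components * components * 2

for n in (2, 3, 5, 10):
    print(f"{n:2d} systems: {arrows(n):4d} arrows, 2**N = {2**n:5d} subsets, "
          f"2**(2**N) ~ 10**{int((2**n) * log10(2))}")
```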
In the example of the extended company, automobile constructors have developed an interaction with their equipment manufacturers in such a way as to operate “just-in-time”, which allows costly inventories to be eliminated on each side, but which ties the company, in every sense of the word, to its equipment manufacturers and makes it dependent on the correct operation of the transport system for its logistics flows. Quality of service becomes essential. The block diagram of this interaction is represented in Figure 8.7.
Figure 8.7. Interactions in the extended company. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Logistics flows and the financial flows for payments are added to the traditional information flows of paper order/billing forms, all of which progressively become totally electronic, or are doubled by electronics, which allows the status of the logistics flow to be visualized in real time. This is what integrated industrial management software (ERP – in French, PGI – and MRP, manufacturing resource planning) does with high precision: the progression of orders/deliveries is monitored in real time by the transporter thanks to geolocation systems. The logistics flows associated with the payment flows constitute transactional flows, which guarantee that for every equipment delivery there is a corresponding payment that balances out the transaction. The coherence of these flows is fundamental because it guarantees that the company balances its books and makes a profit.

In the case of C4ISTAR defense and security systems, the highly simplified block diagram of an air–land operation is constructed as in Figure 8.8.
Each of the represented communities – the French land army and the air force – is itself composed of sub-communities representing the whole range of professions required for highly complex operations. One can refer to the various scenarios of projection for the armed forces described in the last two white papers, available on the government websites of the French Ministry of Defense.
Figure 8.8. Interactions in C4ISTAR systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In the real case of the French land army and air force as physical components (PC), the systems specific to each of the components can be counted in dozens, totalling millions of lines of code. These are the systems that equip all active units – from the individual infantryman with the FELIN system, to land vehicles, Rafale airplanes and others still – up to the systems that equip the decision-making centers and provide the link with political power. Since France is a senior nation in NATO, the system must be able to talk to the specific systems of the various nations participating in an operation. The capability logic guarantees the coherence of resource management, in the sense that a resource that has been attributed to an AU, or consumed by an AU, is no longer available to the other AUs until it is returned, in the case of a temporary loan, or replaced by an equivalent resource. Capability logic therefore necessarily integrates a transactional logic, which takes into account the problem of re-supply9.
9 For more information about these types of systems, refer to the authors’ web page on CESAMES; concerning aspects relating to software, refer to J. Printz, Architecture logicielle, Dunod 3rd edition, and J. Printz, Estimation des projets de l’entreprise numérique, HermesLavoisier.
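As a toy illustration of this transactional capability logic – attribution is all-or-nothing, and a resource held by one AU is invisible to the others until returned – here is a minimal sketch (Python; the class and the AU names are invented for the example):

```python
class CapabilityLedger:
    """Minimal sketch: a resource attributed to an active unit (AU) is
    unavailable to the other AUs until it is given back or re-supplied."""
    def __init__(self, stock: int):
        self.available = stock
        self.attributed = {}  # AU name -> quantity currently held

    def attribute(self, au: str, qty: int) -> bool:
        if qty > self.available:
            return False  # transactional refusal: no partial attribution
        self.available -= qty
        self.attributed[au] = self.attributed.get(au, 0) + qty
        return True

    def give_back(self, au: str, qty: int) -> None:
        assert self.attributed.get(au, 0) >= qty, "AU cannot return what it does not hold"
        self.attributed[au] -= qty
        self.available += qty

ledger = CapabilityLedger(stock=10)
print(ledger.attribute("AU-1", 7))  # True
print(ledger.attribute("AU-2", 5))  # False: only 3 left, all-or-nothing
ledger.give_back("AU-1", 4)
print(ledger.attribute("AU-2", 5))  # True after the return
```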
8.2.2. Pre-eminence of the interaction

The remarkable thing about the recent evolution of systems, due to the importance of ICT, is that the interactions that implement flows of materials and/or energy (see Figure 4.4) are doubled by a set of information flows that need to be managed as such, and which constitute a “virtual image” of this physical reality. This virtual image only has meaning if all the elements that make it up are “signs” of something in reality; speaking like a linguist, they are signifiers whose signified is a very specific physical reality. Operating on the one (compare the linguists’ “performative”) is equivalent to operating on the other, and reciprocally, on the condition that the signs of the virtual image remain perfectly coherent with physical reality. The interaction takes precedence over the action undertaken (a change of state, an adaptation/transformation of reality) – in other words, what matters is the way in which the actions carried out connect and balance each other out, just as in linguistics it is the relationships between the terms of the language that provide a framework for the meaning of the phrase, at least in Indo-European languages. As explained in Chapter 5, this grammar of actions – here, in the context of a system of systems – is essential for the stability of the whole set constituted in this way. This is exactly what Wittgenstein said in aphorism 3.328 of the Tractatus Logico-Philosophicus: “If a sign is useless [in other words, if it represents nothing], it is meaningless”; we note that this aphorism is an excellent definition of semantics.

The systemic analysis of a situation must therefore indicate as clearly as possible the rules of correspondence between the signs and reality, because these rules guarantee that reasoning on the basis of the virtual image is equivalent to reasoning on the basis of the real situation. This is the sine qua non condition of the effectiveness of action. If the coherence is broken for any reason, we no longer know on what basis we are reasoning. We then risk reaching absurd conclusions – to use the phrase of the medieval philosophers, “ex falso sequitur quodlibet” (from falsehood, anything follows); hence the importance of security/safety for all these systems, because the real is only understood via its virtual image, which can be manipulated/altered either voluntarily or involuntarily.

In the case of C4ISTAR, this virtual image has even received a name: the COP, the common operational picture in NATO jargon. Figure 8.9 gives an example of a symbolic representation – a virtual image – using signs taken from the situational representation standards10 that are typical of defense and security systems, as well as of certain strategy video games.

10 Refer in particular to the standard defined by the DoD, Common Warfighting Symbology, which explains this graphical language in detail. For NATO, it is the APP6 document; for an initial idea, refer to: http://en.wikipedia.org/wiki/NATO_Military_Symbols_for_Land_Based_Systems.
The classifications used to code the situations comprise thousands of symbols which constitute a graphical language (literally, an ideography) used in all crisis management groups. The image can be visualized at different scales and with different levels of synthesis, down to the ultimate level which corresponds to individual “atomic” actions (atomic in the non-divisible sense, or even in the done/not-done sense of transactions; note that here it is a matter of pure convention between individuals). Without the development of ICT, all this would obviously be quite impossible.
Figure 8.9. Pictograms of a situation in C4ISTAR systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
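The coherence requirement between signs and reality can be caricatured in a few lines. In the sketch below (Python; the class, the entity names and the states are all invented for illustration), reasoning on the image is only declared legitimate while every sign still denotes the real state it claims to denote:

```python
class VirtualImage:
    """Sketch of a 'common operational picture' as a dictionary of signs.
    Each sign must denote exactly one entity of the real situation."""
    def __init__(self):
        self.signs = {}  # sign -> state according to the image

    def update(self, sign: str, state: str) -> None:
        self.signs[sign] = state

    def coherent_with(self, reality: dict) -> bool:
        # The correspondence rule: every sign denotes something real, and the
        # denoted state matches; otherwise 'ex falso sequitur quodlibet'.
        return all(reality.get(s) == st for s, st in self.signs.items())

reality = {"unit-42": "moving", "bridge-7": "intact"}
image = VirtualImage()
image.update("unit-42", "moving")
image.update("bridge-7", "destroyed")  # stale or altered report
print(image.coherent_with(reality))    # False: do not act on this image
```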
A very general principle of the construction of abstractions can be drawn from these considerations, which allows the architect of the system of systems to know exactly what they are doing and what they are not doing. This is shown in Figure 8.10. The diagram shows a construction with three aspects:

– The aspect of the real world and of the phenomenology of particular interest to us: a sub-set of phenomena that must be characterized in order to separate out the elements to abstract; the rest, which is eliminated, constitutes the “environment” or the outside, reusing our own terminology.
– The abstract aspect of the phenomenology, which is a world of signs11 independent of all material media and all representations, although it is necessary to select one if only to communicate. Signs constructed in this way correspond to those of the theory of abstract types, which can be composed and hierarchized in the same way as in the theory of languages – something we have known how to do for a long time (in computing, this is known as an “object”). These are disembodied, logical entities, “pure” in the sense used by mathematicians to refer to pure mathematics. The system of signs thus formed functions like a theory of the phenomenology that it denotes; we also say “symbolic system”. At this level, we can represent the time of the phenomenology, the transformations, the consumption of resources and, more generally, the finite nature of the physical world, using the logics known as temporal logics12.

– The virtual aspect, which is a projection of the abstract level into a logically mastered computing structure – either a Turing machine, if we wish to remain independent of real machines, or a logical abstract machine in J. von Neumann’s sense of the term, that can be compiled and/or interpreted on real machines. In this aspect, the abstractions are executable, which authorizes simulations even before the system truly exists, as the sketch below illustrates.
Figure 8.10. Equilibration of cognitive structures
11 Concerning this semiological aspect, for a philosophical point of view, see the subtle analyses of U. Eco, in several of his works, such as: Semiotics and the Philosophy of Language, and Kant and the Platypus, an unclassable creature as we are well aware.
12 For initial reading, refer to: https://en.wikipedia.org/wiki/Temporal_logic.
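To make the abstract/virtual distinction tangible, here is a minimal sketch (Python; the reservoir phenomenology and its numbers are invented). The class declaration is the abstract aspect – a pure sign system, an abstract type; executing it on a machine is the virtual aspect, which permits simulation before any physical system exists:

```python
from dataclasses import dataclass

@dataclass
class Tank:
    """Abstract aspect: a sign system for a reservoir phenomenology."""
    level: float     # abstract state, no material medium implied
    capacity: float  # the finite nature of the physical world, as a constraint

    def step(self, inflow: float, outflow: float) -> None:
        # Virtual aspect: the abstraction made executable on a real machine.
        self.level = min(self.capacity, max(0.0, self.level + inflow - outflow))

# Simulation before the physical system exists:
t = Tank(level=10.0, capacity=50.0)
for _ in range(5):
    t.step(inflow=8.0, outflow=3.0)
print(t.level)  # 35.0 -- bounded by the correspondence rules, never > capacity
```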
The rules of correspondence between these various constructions must be explicit and reversible – a fundamental requirement of the engineering of real systems, which must be validated before implementation. The notion of process seen in Chapter 7 will allow a link to be made between these various aspects, because it is common to all three, thereby allowing translations/transductions between them. The systems of systems constructed according to these rules are themselves systems, ready to enter a new construction cycle with mastered engineering. Step by step, we can therefore organize what J. von Neumann designated as ultrahigh complexity in his Hixon Lectures. We will return to this subject in Chapter 9. However, we must not believe that this type of organization emerges from the architect’s brain at the wave of a magic wand. Take the example of the genesis of the “computing stack” (see Figure 3.3 and Chapter 7 for more detail): almost 20–30 years of R&D, beginning at MIT with the MULTICS system towards the end of the 1960s, and finishing in the 1990s–2000s with transactional middleware in distributed environments, as described, for the transactional aspects, in the classic works of J. Gray and A. Reuter (Transaction Processing: Concepts and Techniques) and of P. Bernstein and E. Newcomer (Principles of Transaction Processing). The procedure to be undertaken complies exactly with what was said by A. Grothendieck (see section 1.3): for a correct abstraction, a high level of familiarity with the underlying phenomenologies is required, which takes a lot of time and a lot of effort. Without this, modeling takes place in a void, and this time it is Wittgenstein who administers the correction (refer to aphorism 3.328 of the Tractatus, already cited).

8.3. Limits of the growth of systems

In the real world, limits are everywhere. For decades, the major engineering science was, and still is, the strength of materials13. In-depth knowledge of the “matter” they manipulate is a requirement for the architects and design engineers of a system, including when that matter is information. To avoid breaking a part, there must be a deep understanding of how and why a part breaks. If a part heats up due to bad dissipation of the heat produced by friction, its mechanical characteristics will change, possibly causing the part to break. For all these reasons, the size of machines cannot exceed certain dimensions. In the case of the turbines used by the nuclear industry, which can weigh several dozen metric tons, the rotation speed (1,500 rotations/minute for the most powerful) means that the extremities of the blades of the wheels (around 3 meters in diameter) approach the speed of sound (3 m × 3.14 per revolution × 1,500 rpm × 60 ≈ 848 km/h, in other words about 235 m/s).

13 One of the two new sciences identified by Galileo in his last work, Two New Sciences, published in 1638. The other is dynamics.
This is a threshold speed that must not be approached too closely, due to the vibrations induced near this speed (weak signals acting as warnings). The moral of the story: the size of the machine, and therefore its power, is intrinsically limited by the internal constraints that it must fulfill.

An integrated circuit, which can have billions of transistors and hundreds of km of nanometric wires buried in its mass, is primarily an electric circuit subject to Ohm’s law and Kirchhoff’s laws; therefore “it heats up”! If the heat is not dissipated by a suitable device – in general a fan mounted on the chip – the thermal noise will end up disturbing the state of the circuit, making it unusable because it generates random errors that the error-correcting codes will no longer be able to compensate for. We know that finer engraving will allow chips to be made that integrate 10 or 20 billion transistors, on condition of slowing down the clock, and thus of abandoning the sequential programming style for which programmers have always been trained in favor of a parallel style involving a complete overhaul of all the engineering – at least if we want to benefit from the integration of components to increase the processing capacity of the chip. We remark in passing that the “size” of equipment refers not only to its geometric dimension or its mass, as in the case of the turbine, but also to the number of elementary constituents and/or of the links/couplings between the constituents. By this measure, software such as an operating system is more complex (in terms of “size”) than an Airbus A380, although it can be saved on a USB key. With the development of nanotechnologies, these constituents can be on a nanometric scale, therefore very small, but it must nevertheless remain possible to observe their behaviors. The instrumentation required to engineer them effectively becomes in itself problematic, because it must be possible to test the device that has been created. This is a major engineering problem for high-level integration; a problem made more serious by the interactions that are possible between the components of the same chip. The nanoworld is a complex world where we can no longer see what we are doing! Correct operation becomes a problem which can only be tackled by probabilistic considerations related to the occurrence of inevitable random breakdowns, as we have managed to do with error-correcting codes, except that this time engineers are also faced with a problem of scale. This type of approach can make a technology impossible to use in very high reliability devices if we do not know how to, or cannot, implement design for test. The construction methodology – in other words, the integration – prevails. For all these reasons, and for many others, there is always an intrinsic limit to every technology, which means that the physical capacity (denoted here by the “size”) of the equipment is necessarily restricted and limited.
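The probabilistic compensation of random faults mentioned above can be illustrated with the oldest trick in the book: triple modular redundancy with majority voting, in the spirit of von Neumann’s reliable organisms built from unreliable components (see footnote 16 below). A minimal sketch (Python; the fault probability is an arbitrary illustrative value):

```python
import random

def noisy(bit: int, p_flip: float) -> int:
    """An unreliable component: flips the bit with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def vote(bits) -> int:
    """Majority vote over three redundant copies."""
    return 1 if sum(bits) >= 2 else 0

p, trials, errors = 0.01, 200_000, 0
for _ in range(trials):
    if vote([noisy(1, p) for _ in range(3)]) != 1:
        errors += 1
# Voting fails only when at least 2 of 3 copies flip: ~3*p**2 = 0.0003.
print(f"raw error rate {p}, voted error rate ~ {errors / trials:.6f}")
```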
There are more subtle limitations, induced by the human components. Operationally very rich equipment, such as the latest generation of smartphones, requires learning and an MMI suited to the user. A large dose of naivety would be required to believe that all this takes place without difficulty, a little like breathing the air around us. Even M. Serres’ character, Thumbelina, eventually had to learn how to use with confidence the technology put in her hands. A smartphone equipped with payment methods is, if it is pirated, a godsend for the crooks that roam the Internet, without even mentioning the adverse effects of geolocation. An unmastered technology is a factor of societal instability. It is also necessary to take into account the ageing of the population and the reduction of cognitive capacities with age. The more complex society is, the more users will have to be on their guard and comply with the protocols or rules that guarantee their security, in the broad sense of the term. Better still, they must be trained. A human–machine interface (MMI) must absolutely comply with the ergonomic constraints of the user, which can vary from one individual to another, with age, or from one culture to another.

The limits which affect the engineering teams have been the subject of many organizational evaluations, including and above all in the field of ICT. Today, there is no longer any equipment used in everyday life which does not integrate ICT to a significant degree. Automobile constructors estimate that the added value resulting from the use of ICT in a vehicle accounts for around 30% of its cost of production. The smart grid dreamed of by the proponents of the energy transition will require the massive use of ICT and more particularly of software, because each time the word smart is used, the word “software” must be read instead. We are going to live in a world where the software “material” dominates, in particular in everything that affects relational and intermediary aspects, whether these are: (a) the relationships of individuals between themselves (consider GAFA – Google, Apple, Facebook, Amazon), within organizations and/or communities (the inside); or (b) their relationships with their daily environment (the outside). In our two most recent works, Écosystèmes des projets informatiques and Estimation des projets de l’entreprise numérique, we looked in detail at the maximum production capacities of the software engineering teams, which are the most at-risk teams in the world of ICT and of equipment using ICT. These evaluations show that there are two thresholds: a first threshold of around 80 people ± 20%, and a second threshold of around 700 people ± 20%, which appears unsurpassable for the sociodynamic reasons explained in those two books. The basic active unit in the management of computing projects – which have the characteristic of being the most interactive of projects – is a team of 7 people ± 2, the size required to implement the methods known as “agile”. Beyond this, communication problems within the team are such that it is preferable to divide it in half.
For projects that require more members, teams of teams must be set up, in line with the architecture of the software that needs to be created, itself dependent on the system context, as we have demonstrated in our book Architecture logicielle. To create a market-standard video game (2015), a team from the first category must be deployed – a maximum of 80 to 100 people for a duration of two to three years. For software such as is required by an electrical system or a C4ISTAR-type system, one or several teams from the second category must be deployed, for durations of three to five years. These numbers are obviously no more than statistical averages, but they have been corroborated so many times that they can be considered to be truths14, and it must also be taken into account that the teams in charge of maintenance often do not have the same skill level as the initial development teams. Knowing that the average productivity of a programmer within a team of this type is around 4,000 lines of source code actually written and validated, with residual error rates of the order of 1 to 2 per thousand lines of source code, it is easy to calculate the average production (see the sketch below). The size of the “program” component, in the broad sense, required by any system is thus an intrinsic limit on the size of the system, in addition to the others. But the real limiting factor is socio-dynamic in nature, because beyond a certain size we no longer know how to organize the project team required for the deployment of the system and its software in such a way that it remains controllable. In particular, we no longer know how to compensate for the human errors naturally made by the users/end-users (engineering) by means of suitable quality procedures (rereading, pair reviews, more or less formal reviews, etc.) (refer to the project problem, already mentioned).

In conclusion, we can say – and this is no surprise for engineers and scientists who understand what they are doing – that there is always a limit to the size of the parts that can be constructed by making intelligent use of the laws of nature, for internal reasons related to the physics of materials, or for external reasons related to the socio-dynamics of human interactions. The silicon in our computers does not exist in nature, although its raw material, sand, is very abundant on Earth. In order for it to be of interest to us, it must be purified using a technique discovered at the end of the 19th Century, zone melting15, which only allows a few “useful” faults to remain – because if the silicon crystal were perfect, faultless, it would be of no interest: simply an insulator, like glass. The artificial “matter” (informational in nature) that constitutes our programs does not exist in nature either. Only our brains, individually and/or in teams, are capable of producing it, but with faults which this time are entirely harmful and which we do not know how to eliminate completely.

14 For example, refer to the results of the CESAMES working group Intégration & Complexité, in which numerous witness statements from industrialists were grouped together and all pointed in the same direction.
15 Refer to the article on silicon, available at: https://en.wikipedia.org/wiki/Silicon, and on zone melting, without which ICT would not exist, available at: http://en.wikipedia.org/wiki/Zone_melting.
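The back-of-envelope calculation suggested above (Python; we assume the 4,000-line productivity figure is per programmer-year, an assumption not stated explicitly in the text):

```python
loc_per_programmer = 4_000   # validated source lines (assumed per year)
errors_per_kloc = (1, 2)     # residual faults per thousand lines

for name, size in {"first threshold": 80, "second threshold": 700}.items():
    production = size * loc_per_programmer
    lo, hi = (production // 1_000 * r for r in errors_per_kloc)
    print(f"{name:>16}: {size:3d} people -> ~{production:,} lines, "
          f"{lo:,} to {hi:,} residual errors")
# first threshold:  80 people -> ~320,000 lines, 320 to 640 residual errors
# second threshold: 700 people -> ~2,800,000 lines, 2,800 to 5,600 residual errors
```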
The information “parts” thus produced, like their material counterparts, have a limited size that must not be exceeded, and they still contain faults. But by the “magic” of architecture and human intelligence, we are able to arrange and organize them in such a way as to produce constructions that would be inconceivable if it were necessary to “machine” them as a single indivisible block16. This is what we are going to examine in Chapter 9: implementing cooperation.

8.3.1. Limits and limitations regarding energy

Interoperability leads us back to the relationship/dependence between all systems and the sources of energy which allow them, in fine, to operate. Without energy, a system cannot function – including the systems that produce energy, such as an electrical system which, in order to remain autonomous, must have a source that is independent of its own production. Conservation of the energy source which supplies the system is very obviously part of the system invariant, which must be conserved at all costs17! Consequently, the system has the top priority of completely controlling the energy that makes it “live”. With interoperability, and the capability logic that characterizes systems of systems, this property fundamentally disappears. The situation can be drawn up as shown in Figure 8.11.
Figure 8.11. Energy autonomy of systems of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
16 This refers us back to J. von Neumann’s fundamental studies, in particular: Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components.
17 We recall that in the Fukushima tragedy, the disaster was actually caused by the destruction of the power station’s generators by the tsunami, not by the earthquake.
This figure shows that the AUs are now dependent on the correct operation of the exchange and coordination mechanisms, which by definition are not controlled by any one of them. If this system of exchanges is repositioned in a diagram in Simondon’s terms, we see: (a) that its community of users is the sum of the communities of each of the systems taken individually; (b) that it has its own specific engineering community, which must however absolutely not act independently of the engineering communities of each of the systems; and lastly (c) that it has its own material and virtual resources, but that the service contract for the whole (it is a system in itself, whose invariant is the set of rules given in the model of exchanges) must under no circumstances contradict the specific service contracts of each of the systems. This creates a set of constraints/requirements that makes the engineering of these devices particularly problematic, because they are necessarily abstract with respect to the various businesses whose common elements must be identified. The corresponding semantic coherence of the models is a logic problem known to be difficult, and one which requires the deployment of temporal logic.

As in all systems based on cooperation, everyone – in other words every AU – plays the game and, we could say, the same game. If one of the systems does not comply, or no longer complies, with the rules – in other words if it has behaviors that are incompatible with the survival of the whole, for any reason – it must immediately, and without any means of avoidance, be removed from the federation, or at least strictly confined. The operations carried out by the deviant system must, if necessary, be taken over by other systems of the federation. If this is not possible, the capacity of the SoS (system of systems) is necessarily reduced and the users must accept a deterioration in services. In terms of energy, it becomes almost impossible to estimate what is necessary for the correct operation of the “bus” without overdimensioning its capacities, which is generally contradictory to the very objective of the federation. In the same way, whereas each of the systems responsible for its own autonomy can adopt, and ensure the application of, a strategy thanks to its unique C2 command–control center (see Figures 4.1–4.4), this is no longer possible for the bus, because the bus must manage a complexity which is the “sum”, or a combination, of the complexities of each of the systems participating in the federation. The only way to prevent the “bus” from transforming into a sort of crystal ball with unpredictable operation – contrary to all the rules of engineering, when the latter is mastered and not endured – is to have a real-time prediction/measurement of the load on the bus for each of the flows, together with load-shedding rules for when the predicted and/or authorized load is exceeded. These are part of the exchange models which set up the rules of use. These rules “of the game” necessarily constitute the language of the authorized interactions – and, reciprocally, of those that are prohibited – which can be analyzed with all the methods available in the theory of languages and abstract machines (see Chapter 5).
SoS, we have crossed a level of abstraction within the systemic stack, but by using the same methods to organize the complexity. The situation is then as follows (Figure 8.12): each of the systems in the federation induces a flow that varies with time; it is up to the bus to ensure that their sum does not exceed its capacity, and/or to ensure in real time that there are sufficient resources to guarantee its service contract.
Figure 8.12. Control of energy load of systems of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
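To make this regulation rule concrete, here is a minimal sketch in Python (the names, priorities and thresholds are hypothetical; the book prescribes no API): each federated system declares a predicted flow for the next interval, and the bus admits flows in decreasing order of criticality, shedding the rest when the exchange-model limit would otherwise be exceeded.

```python
# Minimal sketch of the bus load-control rule described above (hypothetical
# names and thresholds). Each federated system declares a predicted flow;
# the bus sheds the lowest-priority flows first whenever the sum of the
# predictions exceeds its capacity.

from dataclasses import dataclass

@dataclass
class Flow:
    system_id: str
    predicted_load: float  # e.g. messages/s or MB/s predicted for next interval
    priority: int          # higher value = more critical to the federation

def regulate_bus(flows: list[Flow], capacity: float) -> tuple[list[Flow], list[Flow]]:
    """Admit flows in decreasing priority until capacity is reached;
    the rest are shed (or confined), per the exchange-model rules."""
    admitted, shed = [], []
    remaining = capacity
    for flow in sorted(flows, key=lambda f: f.priority, reverse=True):
        if flow.predicted_load <= remaining:
            admitted.append(flow)
            remaining -= flow.predicted_load
        else:
            shed.append(flow)  # deviant/excess flow: confine or reject
    return admitted, shed

# Example: three systems whose predicted sum (130) exceeds capacity (100),
# forcing the shedding of the least critical flow.
flows = [Flow("S1", 50, priority=3), Flow("S2", 40, priority=2), Flow("S3", 40, priority=1)]
admitted, shed = regulate_bus(flows, capacity=100.0)
print([f.system_id for f in admitted], [f.system_id for f in shed])
# -> ['S1', 'S2'] ['S3']
```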
In many ways, this situation is entirely similar to that faced by the clockmakers of the 18th and 19th Centuries, who manufactured the first watches and chronometers with complications18; these were renowned as the most complex systems produced by the human brain until the beginning of the 20th Century, with the beginnings of aviation, the first telephone switchboards and the electromechanical weaving looms. In a timepiece with complications, as in all mechanical astronomical watches and/or clocks, there is only one source of energy: either a spring, or a weight. Each of the complication mechanisms extracts its energy from the equivalent of a mechanical "bus", a set of gears and cams from which energy is released at a regular rate thanks to the escapement (a time base which ensures the precision of the chronometer), until the resource is exhausted and it becomes necessary to "wind up" the chronometer without stopping it. If the energy drawn exceeds the capacity of the mechanism transporting the mechanical energy (significant problems with friction, hence the diamond or ruby pivots, which are part of the rules of the model of exchanges), the watch stops and/or the escapement is irremediably slowed, which means that the watch will never be able to show the correct time. In systems of systems such as the electrical system (see Figure 4.4), the control function is now totally virtualized thanks to ICT capabilities, but it is still necessary to have sufficient energy to supply the various organs of the corresponding information system, an energy known as control energy which, for obvious reasons of autonomy, must absolutely have its own energy source. This regulation energy is critical for the system of systems, as it also is for each of the constituent systems.
18 Refer to https://en.wikipedia.org/wiki/Complication_(horology).
If the organ carrying this energy becomes saturated, meaning that the system no longer has enough control energy, the controlled system and its environment can be exposed to a significant amount of risk. What happens if, for example, in the case of an electrical system, a quantity of energy (strictly speaking, a power) of the order of 100 gigawatts (the equivalent of an atomic bomb of average power, every minute) is no longer controlled? An Ariane 5 launcher builds up a power of around 17 gigawatts on take-off to escape the Earth's gravity!
8.3.2. Information energy
With the rise of ICT, which today is indissociable from any system that uses it, and which has therefore become its most critical resource, a new form of energy has appeared. Independent of the fact that any electronic system, whatever its function, is primarily an electrical system which as such must have a suitable source of electrical energy, an ICT system must have sufficient information processing capacity to carry out what is expected of it at the right speed. For lack of a better term, we can describe this new form of energy as information energy. Since the 1980s–1990s it has held the attention of physicists, as demonstrated by the works and symposiums published under the label "information physics", which we briefly mentioned in Chapters 2 and 4. This physics was already in development in A. Turing's works on logic, and in the work of C. Shannon on communications, as we have also mentioned. The corresponding information energy capacity has three indissociable aspects: (1) a calculation capacity, expressed in millions/billions of operations carried out per second, which the wider public has seen in the imprecise form of "Moore's Law"19; (2) a capacity for interaction of the calculation organ with the "exterior" to supply its working memory with useful information, via, for example, the SCADA systems that we will mention in Chapter 9, a capacity determined by the bandwidth of the information transport network (today in gigabytes per second); and (3) a capacity to store information, to memorize everything that needs to be memorized given the system's mission. For example, for an instrument like the LHC at CERN20, this capacity can be measured in hundreds of terabytes of data coming from the measurement appliances that detect trajectories when an experiment is carried out, in other words of the order of 10^12 to 10^15 bytes (petabytes). The same is true of the storage required for the calculation of rates for telephone operators, or for the calculation of the state of the energy transport network for the smart grids that are required for the "intelligent" management of renewable energies.
19 Refer to http://en.wikipedia.org/wiki/Moore's_law.
20 Refer to The Large Hadron Collider: A Marvel of Technology, EPFL Press, 2009; Chapters 5 and 6.
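As a toy illustration of these three capacities, the following sketch (all figures invented for the example, not taken from the book) checks whether a mission's requirements in computation, bandwidth and storage are all simultaneously met; a deficit in any one of the three cannot be compensated by a surplus in the other two.

```python
# Back-of-envelope check of the three aspects of "information energy"
# (computation, interaction bandwidth, storage) against a mission's needs.
# All figures are illustrative only.

def has_information_energy(required: dict, available: dict) -> bool:
    """Each capacity must be sufficient; a deficit in any one of the three
    cannot be compensated by the others."""
    return all(available[k] >= required[k]
               for k in ("ops_per_s", "bw_bytes_per_s", "storage_bytes"))

# Hypothetical LHC-like acquisition chain: petabyte-scale storage (1e15 bytes),
# gigabytes-per-second links, billions of operations per second.
required  = {"ops_per_s": 5e9, "bw_bytes_per_s": 2e9, "storage_bytes": 1e15}
available = {"ops_per_s": 8e9, "bw_bytes_per_s": 4e9, "storage_bytes": 5e14}

print(has_information_energy(required, available))  # False: storage is the bottleneck
```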
To conduct the automatic control of any physical device, it is therefore necessary to ensure that the corresponding information processing system has enough "information energy", whatever the circumstances, to accomplish its mission while complying with the temporal constraints of the phenomena to be regulated. In a high-speed train system such as the French TGV, the signaling system on the tracks and in the trains in circulation must be able to react and make the necessary decisions given the speeds of the trains and the braking distances. With trains that circulate at more than 300 km/h, every 15–20 minutes during rush hour, any break in service can lead to a catastrophic collision. Hence the backing-up of the control of the induced physical load, which makes it necessary to make the induced information load explicit (see Figure 8.13). We note that this back-up obliges us to look in detail at the coexistence of fundamentally continuous phenomena with discrete, and therefore asynchronous, phenomena, via transactional mechanisms that are specific to computing.
Figure 8.13. Control of the information load of systems of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
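An order-of-magnitude calculation makes the temporal constraint tangible. The figures below are round, illustrative numbers, not official TGV data; the point is only that the information system's decision margin is bounded by the physics of the trains.

```python
# Illustrative order-of-magnitude check of the temporal constraint on the
# signaling system (round numbers, not official TGV data).

speed_kmh = 300.0
speed_ms = speed_kmh / 3.6               # ≈ 83.3 m/s
braking_distance_m = 3000.0              # assumed emergency braking distance
headway_s = 15 * 60                      # one train every 15 minutes
separation_m = speed_ms * headway_s      # ≈ 75 km between successive trains

# Margin available to detect a problem and command braking before the
# following train enters the braking zone of the stopped one:
margin_s = (separation_m - braking_distance_m) / speed_ms
print(f"{speed_ms:.1f} m/s, separation {separation_m/1000:.0f} km, "
      f"decision margin ≈ {margin_s/60:.1f} min")
# Any break in service longer than this margin risks a collision.
```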
The two mechanisms, which can appear separate or independent, in fact form a single one, which must be analyzed and integrated as a single block. These are correlated mechanisms, because one cannot be designed/organized without reference to the other, including in the non-nominal situations which are always at the origin of service breakdowns or disasters. This mechanism is transactional, in the strongest sense of the term, because coherence requires that what happens in reality is the exact reflection of what happens in the virtual information world, and inversely. The fact that the physical–organic mechanisms are backed up by an information mechanism allows a trace to be generated which is a true image of all the interactions between the federated systems, and which will allow post mortem analyses in the event of failure. We note in passing that the spatio-temporal synchronization of these traces requires the availability of a unique universal time for all the federated systems, analogous to what
exists for the constellations of GPS and/or Galileo satellites fitted with atomic clocks that are themselves synchronized. In the case of large breakdowns suffered by the electrical system, for example the breakage of a high-voltage transmission line, an event that affects physical resources, in this case the loss of an EHV line, the energy transported is instantaneously transferred to the other available physical connections, in application of the laws of electromagnetism: the laws of Ohm and Kirchhoff. The information system which controls everything must analyze the capacities of the remaining available physical resources and decide in a few seconds whether the transport system will "retain" the load or whether unloading (load shedding) must be organized. The information system must therefore be perfectly informed of the situation, in real time, and must itself have the capacity to process all the information that comes from the organic/physical resources that constitute the transport network, in order to make the right decision with the help of the human operators that use the network. This feedback loop, which manages considerable amounts of energy, is the critical element of the system and in a certain manner determines its size.
8.3.3. Limitations of external origin: PESTEL factors
Many external factors, the "outside", are going to constrain the system, and therefore limit its "normal" growth, up to the intrinsic limits presented above. For convenience, we use the PESTEL acronym (political, economic, social, technical, ecological, legal), which has already been mentioned21, to describe the environment. An entire book would be required to present these factors in detail. Here, we are going to provide a few analysis clues to show how to go about figuring out this type of external limitation, because for SoS, these factors become fundamental.
Politics, in the Greek sense of the term, meaning the organization of the City, and more generally of the State and the Nation, intervenes directly in the development and the "life" of a great number of systems, in particular those that provide a service to the public (police, army, energy, post and telecommunications, health, education, etc.). For example, the electrical system in the case study is completely constrained by the energy policy decided on by successive governments, and by the European directives which have deregulated the energy market. The energy transition that we find ourselves at the center of is political in nature rather than technical, at least for the moment. EDF and RTE are not allowed to make their own decisions about the prices they are going to apply for distribution of the energy
21 Do a search using the keywords PESTEL framework and/or PESTEL analysis; there will be many references.
resource to their clients, in other words all of us. The decision made in 1988 to stop experiments at the prototype power plant at Creys-Malville22 in France was not EDF's decision, and the dismantling of the facility will be finished in 2027. As a regulator, the State decided that this type of reactor, cooled with molten sodium as a heat transfer fluid, was dangerous and that its engineering was not mastered to a sufficient degree, in particular regarding the impossibility of putting out a sodium fire.
REMARK.– Water cannot be used, as it forms an explosive mixture with sodium, and sodium burns spontaneously in air.
In the United States, the American government has decided to dismantle certain monopolistic companies considered to be dangerous, in violation of the American constitution and the liberty of American citizens, hence the anti-trust laws; the most recent case is AT&T, parts of which are to be found in Alcatel-Lucent, Alcatel being a distant heir of the CGE, itself a result of the various restructurings of French industry, through both nationalization and privatization. A company such as France Télécom was for a long time a national company under the supervision of the ministry for Post and Telecommunications, with a national-level laboratory, the CNET, to guarantee its mission of service to the public, and whose research fed the entire telecommunications industry. Privatization of the company led ipso facto to the disappearance of the CNET, replaced by a classic R&D structure typical of large companies but which, because of the privatization, no longer needed to play a national role. This poses a real question of systemics to the regulator State: who is then going to take on the mission at a national scale? The CNRS? The INRIA? Or a new structure that would need to be created from different entities? How can the transition be guaranteed without a loss of skill, when we know that the direction of a research team of 40–50 people often relies on the leadership of two or three people?
Economic policies described as liberal have the role, at least in theory, of organizing economic competition between companies in compliance with the rules, which is another way of limiting the growth of certain systems. Here, we refer back to the previously cited works of M. Porter. The economic factor characterizes the capacity of a country, of its companies and its banks, and of its State structure, to mobilize resources to create the infrastructures, goods and services required for the well-being of its citizens. In the United States, since World War II, the Department of Defense (the DoD) has fulfilled the mission of ensuring American technological leadership. As everyone knows, the United States does not have a research ministry, because it is primarily the DoD that
22 Refer to https://en.wikipedia.org/wiki/Superphénix, for an initial insight (Superphoenix).
takes care of it, in close collaboration with the large American universities and with a few large agencies such as NASA and research organisms such as RAND or MITRE, which are FFRDCs23. President Reagan's Strategic Defense Initiative (SDI), better known as "Star Wars", channeled funding to American universities and created many research projects which fully benefited American industry, which now enjoys an obvious competitive advantage.
In France, a country with a centralizing tradition and a strong State, the large projects that have given rise to large systems, such as the electronuclear program and the "Force de Frappe" or strike force, the plan for the modernization of telecommunications, the motorways, the TGV high-speed train, the space program, etc., all arose from the voluntarist action of public powers supported by large organizations such as the CEA, or by national companies like the SNCF or France Télécom, which themselves played the role of project managers or project owners for an entire network of industry in charge of "the action". An English colleague pointed out that a system like the TGV high-speed train would be almost impossible to set up in Great Britain, because the necessary financing capacity is virtually inaccessible to private companies, even large ones, which in addition do not have the same degree of guarantee as the State.
The dissolution of the CNET due to privatization was a tragic loss for the telecommunications industry in France and for the industrialists who did not see the situation looming on the horizon, in particular Alcatel. A research center is not simply about patents; it is also a way of training the high-level professors, engineers and scientists who are required for the development of sectors of industry. It is a center of excellence which radiates out to its environment, hence the importance of the social factor, about which we are now going to say a few words.
The social factor, in the widest sense of the term, is everything that relates to the human development of the citizens of a country and, first and foremost, education. The analysis of technical objects/systems using Simondon's method has shown the importance of the communities of users and of engineering for the "life" of systems. The human factor, namely the average level of education of these communities (the "grade" in the graduate sense), obviously plays a major role. A well-educated, responsible, motivated and entrepreneurial population is an essential competitive advantage; this is the condition required for the appearance of socio-economic phenomena such as the development of Silicon Valley in California since the 1980s, and prior to that around "Route 128", which surrounds Boston and its large
23 In other words, Federally Funded Research and Development Center; refer to the websites, for the FFRDC, http://en.wikipedia.org/wiki/Federally_funded_research_and_development_centers and, in general, http://en.wikipedia.org/wiki/Science_policy_of_the_United_States, for an initial introduction.
universities (MIT, Harvard, etc.), the birthplace of the computing industry. The implementation of sensitive technologies such as nuclear power demands a high level of maturity, both at the user level (in the case of the nuclear sector, this includes the service industries involved) and at the level of the engineering teams who develop the technology (in France, the CEA, the constructor AREVA, also in charge of fuels, and the operator EDF, in addition to their direct sub-contractors), in other words the entire nuclear sector and all the workers, technicians, engineers and directors who are involved as actors, without forgetting the other interested parties.
In the case of the systems that we focus on in this book, training in systems sciences at the level of an engineer (Baccalaureate + 5 years, or more) is essential, on the one hand to avoid leaving each of the engineering teams to reinvent the wheel several times, each in their own way, and on the other hand to create transversality, a common language from which all the sectors that implement systems will benefit (here we refer back to section 1.3); the "Babel" effect, as we have seen, is fatal to interoperability. In all artificial systems created by humankind, there is always knowledge and expertise specific to the phenomenology of the system, such as electricity or nuclear physics, but also general knowledge and expertise which characterize a class of systems. This category of knowledge must come from teaching at a higher level, but the difficulty is to present it in a manner that is simultaneously specific and abstract.
Concerning the technical factor, we have highlighted throughout the book the importance and all-pervasive nature of ICT in the design, development and operation of systems, integrated into the rather more vast context of the NBICs. The "digital" sector has been recognized as a great national cause in many countries, including France, slightly late if we compare it to the United States or Israel, to take two extremes on the spectrum of States. The information component which is now present in all systems needs to be properly assessed and prepared for. It is essential to understand that the "physics of information", to use the terminology introduced by the physicists themselves, will play the same role for ICT/NBIC that electronics played in its era, or that the physics of condensed matter and particle physics played for the entire energy sector.
If the engineering teams are "not up to standard", this does not mean that they cannot be effective. It just means that productivity will be lower, that the rate of errors will be higher, that breakdowns will be more frequent with a higher level of risk (e.g. nuclear power plants such as Chernobyl, without a containment enclosure!), and that quality in general will not be what users should expect, etc. Quality is a fundamental component of the technical factor and, in France at least, it is not taken as seriously as it should be, in particular by the education system and by many industrialists, including among the greats of the CAC 40.
The financialization of industrial activities has been accompanied by uncontrolled (and uncontrollable?) risk-taking whose deleterious effects are beginning to appear. The technical capacity to be effective depends on the capacity to carry out system development projects of all kinds. Here again, we should avoid delighting in hollow words, which are never a substitute for competence. Since the 2000s we have talked a lot about "agility" in systems engineering environments, a hollow word selected from a good collection of them, because what should be done is not only to name and/or define, but to explain specifically what it is for and how to go about it, in other words the what and the what-for, or the objective, taking into account the phenomenology of the project. This was pointed out to us a very long time ago by Wittgenstein in his Tractatus Logico-Philosophicus (Routledge & Kegan Paul Editions, with an introduction by B. Russell), when he said: "If a sign is useless, it is meaningless" (aphorism 3.328). Here again, we see that the technical factor can only develop if the individual and collective actors (that we have called "active units") are humanly up to standard, as much in terms of the knowledge required and/or to be acquired and of collective expertise within projects, as of interpersonal skills, in other words the ethics that in fine determine the cooperation capacities within the teams themselves24. This truly is a symbiosis, and it cannot be bought like a pocket watch or a hamburger.
Concerning the ecological factor, we will only say a few words, remaining at a very general level, because for any system that is constructed, its dismantling must also be included. Dismantling is the normal terminal stage of the life of a system (it is the equivalent of programmed "death", or biological apoptosis), but in order for it to be carried out in good conditions, it must be integrated as a requirement right from the start, in the design phase of the system. In the case of nuclear energy, the reactors under construction will have an estimated lifetime of one hundred years; in other words, the teams in charge of the dismantling will know those who designed them only through the documents that have been left to them. All systems are traversed by flows of energy and materials that are transformed by the system and delivered to the users. As for the system itself, it must be possible for the objects manufactured in this way to be recycled once they are no longer being used. These flows are, moreover, never totally transformed; there is always waste of all kinds: unused materials, thermal, sound and electromagnetic pollution, etc. Ideally, it must be possible to recycle everything in order to prevent the hereditary term of the growth model of the Lotka–Volterra equation from becoming dominant. This is the necessary condition for the general equilibrium of the terrestrial system in the long term.
24 We have processed some of the points evoked here in greater detail in our previous works; refer to Écosystème des projets informatiques – agilité et discipline, Hermes-Lavoisier.
The ecological component is a vital requirement for future generations and a true technological challenge for the engineers of the 21st Century, who must now reason in a world of finite resources. All waste that is not compensated for, and is left to the goodwill of natural processes, is a mortal risk whose importance we are beginning to measure. These additional devices need to be taken with the same level of seriousness as the operational devices and, in truth, the one never comes without the other.
Concerning the legal factor, all advanced countries have a diversified arsenal of legislation and standards to be complied with, with agencies to control them, and finally tribunals to judge those who violate the law. This is not without its difficulties, as we all know, with multinational companies able to offshore certain production activities to countries which pay less attention to this aspect.
In conclusion, all these external constraints play a full role when it is necessary to cooperate in order to grow, as has become the rule in globalized economies; the PESTEL factors then become deciding factors in the choice of a country in which to organize growth and invest, whether for the quality of its workforce or of its infrastructure, or for its taxation, or for its lack of legislation, etc.
8.4. Growth by cooperation
Whether we take the example of the electrical system, that of a computing stack, or that of the hierarchical composition of an automobile – we could take many others – we see that through a progressive, methodical approach, we arrive at very large constructions that were originally inconceivable, but whose complexity is organized by the architecture and mastered by the project director. For this, we must not hurry into the complexity of a problem that is initially too vast to be understood, but follow the good advice given by J. von Neumann, or Descartes: construct step by step from what is correctly understood and mastered. The LHC25 at CERN, which allowed the Higgs boson to be discovered and elucidated the mystery of mass, is a fantastic machine of colossal size, the result of a history which began in the 1940s in the United States, and at CERN in the 1950s. Without cooperation at all levels and an organization of complexity, systems of this kind would be entirely unrealistic. The problem therefore relates to understanding how to organize the cooperation so that the efforts of some are not ruined by the failures of others when the entire set is integrated, which implies a good
25 Refer to The Large Hadron Collider, already cited.
understanding of the sociodynamics of projects. The effort made must be cumulative at the level of the system of systems. For this, progression in stages is essential. Prior to reactors of 1.3 and 1.6 gigawatts like the EPR, there were intermediate stages which began with machines of 200–300 megawatts, stages that were essential to the development of learning and technological maturity. Before the computing stack of abstractions that we are now familiar with via the MMI, in the form of our smartphones, tablets, etc., there were also numerous intermediary stages, and thousands of man-years of effort in R&D were built up. Cooperation is an emergence, and for the phenomenon to be cumulative, it is imperative that what emerges can rely on stable foundations, as shown in Figure 8.14. This figure simply systematizes what was presented in Chapter 3. The fundamental point is that the stacking of layers is not expressed as a combination of the underlying complexities. At whatever level we place ourselves in the hierarchy, the next level down must appear as a perfect "black box", in the sense that it lets nothing of its internal complexity show through other than via its access interfaces, in other words the dashboard of the black box. However, it must be possible for the whole to be observed, hence the importance of the components/equipment that we have described as "autonomic" in Chapters 4 and 5, in particular in Figure 4.15, and of the generalized mechanisms of traces associated with these modular components.
Figure 8.14. Cooperation and growth of systems of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
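The sketch below (a hypothetical design, not the book's notation) illustrates this rule in Python: a layer exposes only its access interfaces, keeps its internal complexity invisible, and nevertheless remains observable through a generalized trace, in the spirit of the autonomic components just mentioned.

```python
# Sketch of the "perfect black box" rule of Figure 8.14 (hypothetical design):
# a layer exposes only its access interfaces; its internal complexity is
# invisible, but it remains observable through an autonomic trace.

class BlackBoxLayer:
    def __init__(self, name: str):
        self._name = name
        self._trace: list[str] = []   # generalized trace for observability

    # --- external language: the only thing upper layers may use -----------
    def request(self, operation: str, payload):
        self._trace.append(f"{self._name}: {operation}")
        return self._do(operation, payload)

    def dashboard(self) -> list[str]:
        """Autonomic observation point: the 'dashboard' of the black box."""
        return list(self._trace)

    # --- internal language: never visible from outside --------------------
    def _do(self, operation: str, payload):
        # internal complexity (algorithms, sub-modules) lives here
        return f"{operation} done on {payload}"

layer = BlackBoxLayer("storage")
layer.request("write", {"key": 1})
print(layer.dashboard())  # -> ['storage: write']
```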
The diagram of the autonomic component/equipment is an abstract model which applies at all levels; the interface, for its part, materializes the distinction between the internal language that is specific to the component/equipment and the external language, which only allows the user of the component to see what is functionally required. The cost of "infinite" growth, in the sense that it has no limit except that of our own turpitude, is a logic cost, a pure product of human intelligence which does not exist in the natural state; a cost that we know how to master thanks to the progress of constructive logics and their associated languages, as much from the most abstract point of view as from the point of view of concrete realizations, with transducers that allow a change from one world to the other using the perfectly mastered technologies of compilation and/or interpretation. We note the deep analogy that exists between this potential capacity and C. Shannon's second theorem, concerning the "noisy" channel26. The "raw" information that comes from the human brain passes and is refined from brain to brain, thanks to the implementation of quality systems (refer to the references by the author for project management, given in the bibliography), until it becomes, via organized interactions, a common good which is then "ripe" for transformation into programs and rules understood by all the communities and interested parties, and which can therefore be reused. The example of the electrical system provides us with a completely general model of this evolution. We find it again in the evolution of company systems or in C4ISTAR strategic systems for defense and security. It can be explicit or implicit, but it is always there. We are now going to specify this general model.
8.4.1. The individuation stage
Initially, systems are born from the conjunctural requirements expressed by the users, both individually and collectively, or by anticipation from the R&D teams internal and/or external to the company or to the administration. At this stage, it is often technology, a source of new opportunities, that takes control. Defense and security, and strategic leadership, have been at the origin of a great number of systems in the United States and in Europe. The development of the electronuclear sector in France in the immediate post-war period is typical of this situation. Progressively, the company comes to host a whole array of systems, each of which has its own individual dynamic and organizes its growth as a function of the specific requirements and demands of the community that led to its creation.
26 On this point, as for many others, it is recommended to go back to the source and to avoid paraphrases which deteriorate the signal; refer to the book by C. Shannon and W. Weaver, Mathematical Theory of Communication.
In Figure 8.15, we see systems that are born and develop at dates T1, T2, …, and mono-directional connections that are established case by case, at dates compatible with the dates of birth of the systems and/or of their stages of development. All this is initially perfectly contingent and almost impossible to plan globally. Individual dynamics mean that at an instant Tx, a function will be added in the system Si even though this is not its logical place, because the system which should host it has simply not been created, or because, as is frequently the case, the director of system Si wishes to keep his technical and decisional autonomy. If, on the contrary, the function to be developed exists elsewhere and the two directors agree to cooperate because they see a reciprocal win-win interest, then an explicit connection is born which will allow an interaction.
Figure 8.15. Birth of systems and their connections. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The logical functions which answer a user's semantic requirements will take root wherever possible, with possible redundancies in different systems, in which they will exist in forms that are more or less compatible. At a certain stage of this anarchical development, without a principle of direction, but anarchical
by necessity because it is unplannable, the situation will become progressively unmanageable, which was the case for the electrical system in the 1940s–1950s. After the time of action and of disordered development without cooperation or coordination comes the time of a return to an energy-economical optimum, where the interactions will be considered a priority, as the first data of a global system that will have to optimize itself on the network of interactions (see Figure 8.5 and the comments). This is an illustration of the implementation of the MEP principle.
REMARK.– In the world of strategic systems, this was known in the 1990s–2000s as network-centric warfare, accompanied by many publications from the institutes and think tanks working in the world of defense and security. The SDI project, "Star Wars", is an emblematic example of this.
A few words now about the evolution of the nature of the connections, represented by arrows in Figure 8.15. At the very beginning of the industrial revolution, the connections that existed between the various pieces of equipment in a factory were purely mechanical. With gears, cams, belts/chains, steel cables, etc., it was possible to organize workshops such as those in Figures 8.1 and 8.2 for machines; but this was very rigid, extremely fragile and not very reliable. Mechanics was the queen of the engineering sciences, along with the strength of materials27. Airplanes from World War I had flight controls that were entirely mechanical, with cables more or less everywhere. Very quickly, in particular thanks to the implementation of electrical energy on a large scale, it became possible to distribute the energy using cables conducting electricity towards electric motors and/or hydraulic circuits distributing the pressure to the valves. During World War II, the first flexible hydraulic circuits appeared, together with electrical cables carrying electronic signals that could be processed/transformed by suitable analog devices; cables for the telegraph had already been in existence for a while, with the first transatlantic cable in 1858 (4,300 km, 7,000 metric tons, communications in Morse code). Transducers came into use at that time. The end purpose of all these mechanisms, whose function is to operate a coupling between the pieces of equipment of a system in such a way as to inform some of what the others are doing, and to transmit commands, was generalized in the 1970s–1980s: the computer progressively took charge of all the analog coupling functions, whether electronic, mechanical or human. The computer thus became the central machine of the integration operator. The connections become virtual and are carried by information transmission networks made up of wires, with fiber optics whenever this is possible, or electromagnetic waves of various wavelengths
27 Refer to Galileo, in his last work, already cited.
(Bluetooth, Wi-Fi, 4G, 5G, etc.); the information is no longer transmitted in the form of analog signals, but in coded form, with robust codes chosen as a function of the specific noises of a given environment, including space and the interstellar environment for the very long-distance links of satellites and interplanetary robots/probes. The entire "information" function that is specific to all systems, which the inventors of these technologies had perfectly noted (Wiener, von Neumann, Shannon, etc.) but for which they did not have the technical means, given the technologies of the time, is today entirely "digested" in the form of digitalized multimedia connections, to such a point that it is the digitalized connections device that provides "intelligence" to the equipment and in fact to the systems. Hence the all-pervasive nature of what we have called information energy (see section 8.3.2) and the central role that information plays today in all systems, whether small in size (like our smartphones) but complex, or enormous (like electrical systems), where the energies that need to be controlled are colossal.
8.4.2. The cooperation/integration stage
The dynamic of interconnections, which began with the apparent ease and tranquility offered by ICT from the 1980s onwards, quickly transformed into a digital nightmare, at least for those who did not see the concomitant rise in complexity of the "chains of connections", as they were known in the 1980s–1990s, and of industrial networks. Potentially, if we interconnect N systems point to point, the number of physical connections increases like the square of the number of systems, in other words: Number_of_connections = N × (N − 1), because the flows carried by these connections can be bidirectional. In terms of engineering, this means that validation of the connections will require an effort which increases, at a minimum, as the square of the number of systems, hence the material impossibility of mechanical solutions. If, in addition, since the connections are digital, the systems store details of past interactions (which is both very easy and very tempting, because it opens up a whole set of new possibilities and optimizations), the work of validating a physical link is accompanied ipso facto by the consideration of the possible states related to the history of the interactions that have taken place in the systems. The combinatorial of these states therefore evolves in the following way: Combinatorial_
of_states = E[S1] × E[S2] × … × E[SN], in other words an exponential28, the worst kind of situation for an engineer. The integration effort gets out of our control, much worse than the quadratic growth to which we had initially believed we could restrict it. To extract ourselves from this predicament, intelligence will be required and, as David Hilbert recommended, we will generalize in order to simplify. The structure that emerged progressively in the 1990s–2000s was named a pivot structure, shown as a flow chart in Figure 8.16.
Figure 8.16. Pivot structure
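The arithmetic is easy to check; the short sketch below contrasts the N × (N − 1) directed point-to-point connections given in the text with the N bidirectional links (2N directed flows) required by a pivot.

```python
# Connection counts: N systems interconnected point to point need
# N × (N − 1) directed connections; through a pivot, N bidirectional
# links (2N directed flows) suffice, so growth becomes linear.

def point_to_point(n: int) -> int:
    return n * (n - 1)        # every ordered pair (flows are bidirectional)

def with_pivot(n: int) -> int:
    return 2 * n              # one bidirectional link per system

for n in (5, 10, 50):
    print(f"N={n:2d}  point-to-point={point_to_point(n):4d}  pivot={with_pivot(n):3d}")
# N= 5  point-to-point=  20  pivot= 10
# N=10  point-to-point=  90  pivot= 20
# N=50  point-to-point=2450  pivot=100
```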
The pivot structure, the organizing center of the interoperability of systems, is fundamentally a model of exchanges where all the flows have been virtualized. This abstract center is linked to the elements of the system by exactly N bidirectional connections. The global system is perfectly symmetrical and none of its elements plays a particular role, which only makes the whole set more fragile from the point of
28 More exactly, this is at a minimum an exponential of exponentials, of the type (k^n)^m with n and m variables! It is the same for factorial functions; refer to Stirling's formula n! ≅ (n/e)^n √(2πn). We have seen that in the event of memorization the order of growth increases as 2^(2^N).
view of the exchange model. A connection between two systems Si and Sj necessarily passes via the pivot, which has the useful additional property of physically uncoupling the communication between Si and Sj, in other words Si ↔ Pivot ↔ Sj, which has led some to say that the pivot plays the role of an "expansion joint", well-recognized in mechanics (gimbals, differentials, enveloping worms, etc.) and in civil engineering in large engineering structures; a rather pertinent metaphor. Others have talked about orchestration or choreography, doubtful metaphors chosen for media purposes. For the detail of this logic and its implementation, refer to the document "Technologies pour la mise en oeuvre des NCS" on the authors' website. We are going to concentrate on the fundamental properties of this new abstraction, which radically simplifies the engineering of connections and provides systems with a virtually unlimited growth capacity; unlimited, that is, provided a few simple rules are respected. In order for the pivot to have a meaning, it must: (1) concentrate all of, and nothing but, the information that is strictly necessary for the coherence of the whole, including in the event of an error and/or abnormal behavior; and (2) have a representation, in other words a language that expresses this coherence, which is independent of the representation modes adopted in each of the systems. In Chapter 5, we formally established the requirement for the distinction between the internal language and the external language, and the respective grammars of these two languages, GLInt and GLExt. The interoperability pivot is necessarily manipulated and expressed by a well-formed sub-set of the LExt and GLExt of each of the systems that wish to cooperate, in other words LPvt and GLPvt. The semantic coherence of these various sub-sets must therefore be guaranteed, possibly along with any re-adjustments required in each of them to improve this coherence. But the existence of a pivot must not tie the systems to each other (spatio-temporally), which would be expressed mechanically as: any modification of one leads to a synchronized modification of the others. This would be equivalent to simultaneously synchronizing all the elements of the SoS, and therefore to a "fatal" rigidity for the evolution of the system, because everything would need to be stopped. Figure 8.17 shows how things should be done progressively so that everything is done correctly without destabilizing the entire system. This is a modification of Figure 5.2.
Figure 8.17. Detail of the pivot architecture. For a color version of this figure, see www.iste.co.uk/printz/system.zip
In the diagram, the Ext ↔ pivot model translators/transducers take the place of the "expansion joints"; they are indispensable for the autonomy of the system S, even though S is a member of a federation with which it cooperates. As a consequence, it is not even necessary for the descriptive formalisms of each of the systems to be the same, even though the semantic of the exchanges is, itself, perfectly invariant. The semantic of the exchanges is a fundamental invariant, but the representations of this invariant can be numerous; in other words, all the representations used can be expressed reversibly, with no loss or addition of information, into each other. The pivot representation, known as the "pivot language" or LPiv, plays the role of a neutral element in this whole, in such a way that LPiv ⇔ LExt (strict equivalence) is always true, meaning that the translations Trad: LPiv → LExt and, inversely, Trad⁻¹: LExt → LPiv are explicitly stated. The existence of the pivot, and the conformity of the representations used in relation to the pivot, guarantee that in all cases the external languages of any couple of systems Si and Sj are semantically equivalent, in other words LExt(Si) ⇔ LExt(Sj) via the pivot.
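As a toy illustration (the representations and names are invented for the example), the sketch below gives each system a Trad/Trad⁻¹ pair to and from a canonical pivot form, and checks the round-trip reversibility, with no loss or addition of information, that the text requires.

```python
# Sketch of the pivot principle (hypothetical representations): each system
# supplies a pair of translators Trad / Trad⁻¹ between its external language
# and the pivot language; round-tripping must lose and add nothing.

# System S1 speaks (x, y) tuples; system S2 speaks {"lat":..., "lon":...}
# dicts; the pivot representation is a canonical ("pos", x, y) triple.

to_pivot   = {"S1": lambda m: ("pos", m[0], m[1]),
              "S2": lambda m: ("pos", m["lat"], m["lon"])}
from_pivot = {"S1": lambda p: (p[1], p[2]),
              "S2": lambda p: {"lat": p[1], "lon": p[2]}}

def exchange(msg, src: str, dst: str):
    """Si -> Pivot -> Sj : every exchange passes through the pivot."""
    pivot_form = to_pivot[src](msg)
    assert from_pivot[src](pivot_form) == msg   # reversibility: Trad⁻¹∘Trad = id
    return from_pivot[dst](pivot_form)

print(exchange((48.85, 2.35), "S1", "S2"))   # -> {'lat': 48.85, 'lon': 2.35}
```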
We have seen that command–control systems all evolve towards remote-control and/or remote-operation devices, which involves moving the ergonomic engineering functions (operation, exploitation, maintenance) off the site where the system is located. In at-risk technologies such as the nuclear or petrochemical industries, we thus improve the safety of people by minimizing presence on site, which will only be necessary if the physical elements of the system are affected. This means that the geographical sites where the systems are located must be linked to each other by a communications infrastructure, either wired, or Hertzian, or optical, or satellite-based, or a mix of these for reliability, to manage and organize all the interactions from a distance; an infrastructure which was first seen in the 1970s for the remote diagnosis and remote maintenance of the first mainframe29 computers. This is also the date of the discovery, thanks to optical fibers, of a method for isolating the disks required for mass storage in secure rooms at a distance of several kilometers from the central unit. In the field of C4ISTAR, automatic vehicles such as drones are completely remote-controlled, at distances of thousands of kilometers, thanks to satellite connections. In the 1990s–2000s, it became clear that all these communications infrastructures were destined to merge into a single one, in charge of all the exchanges, whether for remote control, for MMI in a wider sense (remote operation), or for the mutualization of services/resources shared among the various pieces of equipment in the system of systems. Hence the final diagram of this evolution, which took place over around 30 years (Figure 8.18).
Figure 8.18. Virtual and/or real generalized exchanges. For a color version of this figure, see www.iste.co.uk/printz/system.zip
29 For the same reason, EDF developed at that time a suitable network for remote control of their installations, which is the ARTERE network of which some evidence still remains 30 years later.
In an architecture of exchanges of this type, all sharing is potentially possible. Only external constraints, such as safety or performance, are going to determine the choice of the organization that is best able to fulfill the service contract. In systems engineering jargon, these are known as the FURPSE constraints (function, use/ease of use, robustness/reliability, performance, serviceability, evolutivity), in line with the ISO-912630 standard. Infrastructure constructed in this way is a common good of the communities that have decided to federate in order to be more effective overall in their respective missions, mutualizing all or part of their resources. Such an infrastructure can therefore not only guarantee the exchanges with a quality of service that can be measured by an overall average availability (this is a statistical measure) weighted to reflect uses (this is a flow), but it can also provide transverse functions such as those guaranteeing the autonomy of the systems, each taken individually. Functions known as "autonomic"31 (see Figure 4.15) can thus be moved up a level in the hierarchy of the federated systems, and manage/have available specific equipment: measurement probes, various sensors and effectors, devices to monitor the environment (refer to the building management systems of secure buildings32), decision aids, etc. In doing so, we have defined two categories of systems within the federation of systems that have decided to cooperate because this presents an obvious competitive advantage:
1) Functionally dominant systems: they carry out the actions/transformations required by the users (performance aspect; this is the "action" of the system). In Figure 8.18, these are the specific functions of the various system blocks.
2) Service dominant systems: they operate for the benefit of the former, either as suppliers of mutualized resources (such as the infrastructure of exchange networks), or as functions that are transverse to the federation (e.g. safety, autonomic management of the environment, etc.). These are the common functions of the system blocks, dedicated to providing a service common to all. They are seen as black boxes by the specific functions.
30 Refer to http://en.wikipedia.org/wiki/ISO/IEC_9126; unless you are able to access the ISO/CEI standard in its entirety, then refer to ISO/IEC 25010:2011 Systems and Software Engineering – Systems and Software Quality Requirements and Evaluation (SQuaRE) – System and Software Quality Models. 31 In systems engineering jargon, this is known as autonomic computing. 32 Refer to the corresponding Wikipedia entries; http://en.wikipedia.org/wiki/Building_ automation, for example.
At this stage of their evolution, it is perfectly possible for some of the systems in the federation of the system of systems to relinquish some of their functions, with full knowledge of the consequences, if they have an absolute guarantee that the function is available elsewhere in another system of the SoS, with the same guarantee of service as if this function had remained within the system. The savings made in this way will allow investment in the transverse functions/services which improve the robustness and the resilience of the federation. With the development of the electrical system, we have seen an example of the implementation of this capability logic, which globally improves the guarantee of service, because breakdowns become extremely rare. This leads us to a specialization of the systems taken individually, an individuation in G. Simondon's terminology, hence the rule: each of the systems is more efficient when it is hosted in a well-constructed and well-organized federation than if it were left to itself, facing a "hostile" or risky environment whose hazards it would not be able to, or would not know how to, compensate for (in the language of physicists, this corresponds to a break in symmetry). Complexity organized in this way is the price to pay for better efficiency. It therefore becomes imperative to filter the inside/outside frontier of the SoS federation in order to prevent the individual systems from being subjected to hazards that they can no longer control. Engineering is based on cooperation. The diagram in Figure 8.18 gives an overview of this new logic, known as capability logic. In all cases, what is essential is the maintenance of the coherence between the virtual digital world and the real world of the physical–organic component of the system. On the one hand, the coherence of actions, which is a fundamental transactional coherence, must comply with the classic properties of transactions (the properties known as ACID), typical of the informational world and of transformation quanta; on the other hand, temporal coherence must be maintained, because the clocks of the two worlds are set to different time scales, it being understood that the scale of the real world sets the semantic, in other words the "real time" in the strict sense of the term, complying with a schedule no matter what the duration of the operations carried out in the real world. When all this is in place, we can envisage the last stage of growth: the opening of the system or of the system of systems.
8.4.3. The opening stage
It is firstly necessary to specify exactly what is in question when we refer to "opening", because interoperability is in itself a form of opening, perfectly controlled by the acceptance of common rules. Opening is something different, the need for which we observe in a certain number of systems. We give a few examples:
– The electrical system, for its own requirements, keeps statistics on the filling of the various holding lakes of the hydraulic part of the system, which plays an important role because hydraulic energy can be mobilized immediately: in terms of control, it is simply a case of opening the sluice gates. No other energy has this capacity. These statistics are of interest to various ministries or territorial bodies, for example, the municipalities that have these installations within their perimeters; a holding lake can be used as a leisure base. This is an example of opening, and in this case of what we call, or would call, open data. Inopportune use of this data can create dependent relationships which in the end will be detrimental to the capability logic of the electrical system. The user of the "opening" must accept certain rules. This is necessarily the case for a user of the system who has an energy source (solar panels, etc.) and who wishes to interface with the electrical system.
– In air traffic control, not only in an airport zone but also "en route", the airlines, which also have their own systems, request the hosting within the air traffic control system of certain functions, to stay in contact with their airplanes or with the personnel on board. In this case, there is an interpenetration of functions, but this must not under any circumstances translate into a risk in terms of safety. All the more so as the various airline companies can ask for services that are not necessarily coherent, and can even be contradictory. Here again, there are rules.
– In C4ISTAR defense and security systems, each of the nations participating in an operation arrives with its own systems, and the responsibility falls to the "governing nation" (in NATO jargon) to host everyone. Here again, rules are required, and this is the reason why, during the first Gulf War, French airplanes were not able to participate in the night operations orchestrated by the American air force, for lack of suitable equipment. This was not the case for the naval forces.
In all these configurations, a function, in the broad sense, of a system outside the SoS federation is deported into the hosting SoS, without being part of it. The block diagram of the opening of an SoS to another SoS is represented in Figure 8.19.
Figure 8.19. Opening of systems of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
For an architecture of this kind to be viable, a reciprocal level of confidence between the host and the hostee is required, which will guarantee the effectiveness of the common action without damaging the host, including in the event of anomalies. However, this level of confidence does not extend to making the exchange models of the host SoS available, which would amount purely and simply to an integration of the two SoS. This is not what is sought in the "opening", in which each one maintains its own full and total liberty, while accepting a few minimal constraints; here, ethics enters at the heart of the engineering process, via the PESTEL factors. The minimum level of safety required for the opening is the non-repudiation of the actions of the hostee, which it must be possible to trace in order to guarantee, if needed, that both the host and the hostee have done the expected work. The connections are fundamentally asynchronous.
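A minimal sketch of such a trace is given below (the key handling and all names are purely illustrative, not a prescribed mechanism): every action of the hostee is recorded in a log whose entries are authenticated, so that neither party can later deny what was done; the recording itself is asynchronous with respect to the action.

```python
# Sketch of the minimum safety property for "opening": every action of the
# hosted function is traced in an append-only, authenticated log, so that
# neither host nor hostee can later deny what was done (non-repudiation).
# Names and key handling are illustrative only.

import hmac, hashlib, json, time

class ActionTrace:
    def __init__(self, shared_key: bytes):
        self._key = shared_key
        self._log: list[dict] = []   # append-only in a real implementation

    def record(self, actor: str, action: str) -> None:
        entry = {"t": time.time(), "actor": actor, "action": action}
        body = json.dumps(entry, sort_keys=True).encode()
        entry["mac"] = hmac.new(self._key, body, hashlib.sha256).hexdigest()
        self._log.append(entry)

    def verify(self) -> bool:
        """Recompute each MAC; any tampered or repudiated entry fails."""
        for entry in self._log:
            body = json.dumps({k: entry[k] for k in ("t", "actor", "action")},
                              sort_keys=True).encode()
            mac = hmac.new(self._key, body, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(mac, entry["mac"]):
                return False
        return True

trace = ActionTrace(b"key-agreed-between-host-and-hostee")
trace.record("hostee", "query_radar_track")   # asynchronous, logged action
print(trace.verify())  # -> True
```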
9 Fundamental Properties of Systems of Systems
9.1. Semantic invariance: notion of a semantic map
The interoperability of systems, and the general context of the organized growth of systems that is demonstrated by this fundamental notion, allow us to review and specify the initial definitions discussed in Chapters 2 and 3. When we say "a set of elements in interaction", questions arise concerning (a) the quantum of elementary information, the smallest element below which the concept of an element disappears, and (b) the quantum of elementary interaction between the elements, in other words, a transaction. Everything else being equal, this is equivalent to the notion of atoms found in particle physics and in the theory of the standard model of particles. This basic element of a system is generally known as a "module", a process or even a building block, but it is, fundamentally, the same thing: it is the element that conveys the transformation that is carried out, a quantum of information, something that gives meaning to the use.
REMARK.– This is L. Wittgenstein's "ask for the use".
The definition of the inside/outside of a module is purely conventional in nature; it is a contract. There is no module "in itself". To give an example, the concepts of a bolt or of gears, on the one hand, do not have particular meanings, whereas an automobile, on the other hand, has many specific meanings. However, a piece of equipment such as a gear box or a clutch is something that has a meaning in the context of a motor vehicle, but which does not have meaning in the context of an airplane. A gear box, clutch, steering column, and a speed limiter or cruise control are semantic (or functional) modules of a vehicle system. However, from the viewpoint of a manufacturer of gears for gear boxes, a gear
tooth is a high-technology object that must be perfectly machined, because if there is friction there will be heat, which can cause serious damage to the gear box. Therefore, each tooth is a module with regard to the quality system of the gear manufacturer. The functional modules, and the way in which these modules interact (in other words, the inter-module communication), are the conveyors of the semantic of the system of systems, in the same way as they are for the constituent systems. "Function" and "interaction" are two inseparable aspects of the semantic.
REMARK.– Communication between an emitter and a receiver has been the subject of intense research work, first introduced by C. Shannon to code the exchange signals and compensate for the "noise" generated by the environment, and later enhanced by the development of communication protocols in telephone systems and, particularly, in the transmission of data created by computing from the 1960s onwards1.
In Figure 8.14, we can see the construction levels of modules, given that a module can itself be constructed from modules of a lower rank, down to level 0 of the construction, seen from the viewpoint of the system, which, as we have already said, is pure convention. In this figure, the notion of an elementary system is, in itself, a construction of modules that results from an integration of modules of rank 0, which constitute the true "atomic elements" of the system. The integration of the elementary system from its constitutive modules is a recursive mechanism, where each level can have its own communication/interaction logic. This notion of module/process was presented in Chapter 7, but here we see its capital importance in the SoS (systems of systems) context. When we see things in their wider context, such as in the problem of the interoperability of systems, we gain a better understanding of the requirement to precisely define the quantum of elementary action (there can be several, provided they are all perfectly defined) from which all system construction logic arises. From elementary systems, we will be able to construct what is denoted as a "sub-system" in the diagram. For example, in the aerospace industry, we sometimes call this level a "segment", but this is simply to give it a name, which allows us to talk about the "ground segment" of a satellite system (in this case, an SoS, which integrates all the ground equipment), such as in the GPS/Galileo constellations.
1 Refer to the work of the CCITT, replaced by the ITU-T (see http://en.wikipedia.org/wiki/ITU-T).
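The recursive construction logic just described can be made concrete with a short sketch. The following Python fragment is purely illustrative – the names Module, integrate and rank are ours, not the author’s notation – and simply shows that an element is either an atomic module of rank 0 or an integration of lower-rank modules, each construction level carrying its own interaction logic:

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Module:
    """An element of the system: atomic (rank 0) or an integration of
    lower-rank modules, each level with its own interaction logic."""
    name: str
    parts: List["Module"] = field(default_factory=list)
    # Communication/interaction logic specific to this construction level.
    interaction: Callable[[List["Module"]], None] = lambda parts: None

    @property
    def rank(self) -> int:
        # Rank 0 is an "atomic element"; each integration adds one level.
        return 0 if not self.parts else 1 + max(p.rank for p in self.parts)

def integrate(name: str, parts: List[Module],
              interaction: Callable[[List[Module]], None]) -> Module:
    """The same recursive operator builds elementary systems, sub-systems
    ("segments"), systems and, finally, systems of systems."""
    return Module(name, parts, interaction)

# A "ground segment" built from two rank-0 modules:
antenna = Module("antenna")
scheduler = Module("scheduler")
ground_segment = integrate("ground segment", [antenna, scheduler],
                           interaction=lambda parts: None)
assert ground_segment.rank == 1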
This recursive mechanism continues up to the highest level of construction, described in the diagram as “systems of systems”, the understanding being that an SoS is first and above all a system, because it has all the properties of one, if we exclude the engineering problems created by the size of the SoS. The inside/outside frontiers specific to each level, whose conventional nature we clearly see here, must coincide with the frontiers of the projects and/or of the companies that are interested parties in the system which, in fine, will be made available to its users. In this type of set, the architecture must comply both with the specific technical capacities and specializations of the participant companies, and with the size characteristics of the engineering projects that must be put in place for the creation, the understanding being, as already stated, that beyond a certain size, projects themselves can no longer be controlled. It should be possible to divide a very large system, such as the electrical system, into entities – systems of systems, systems, sub-systems, etc. – none of which requires the implementation of projects that would themselves be uncontrollable.

In order to correctly understand the problem of the semantic of systems, it is necessary to start from G. Simondon’s triplet (Figure 9.1).
Figure 9.1. Semantic of systems of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
It is what users do with the system that, in fine, determines its semantic, known as the “semantic map” shown in Figure 9.1. This map integrates all aspects of the semantic, summarized in the acronym FURPSE mentioned in Chapter 8. This map is translated and taken into account, in part or in total, by the elements
(equipment and/or systems) of the SoS operating on the zone. Here, we see the importance of the spatio-temporal definition of the system, a definition attributed to René Thom, as mentioned in Chapter 3, because it is the actual land area that is in question at this stage of the definition. From this land area, we can define various semantic maps, depending on the area of interest of the targeted community of users; these maps are already abstractions. Then, using these abstract semantic maps, we can derive, through reversible “translations” carried out by the engineering teams, system models in the usual sense of the term. The element common to all these models (an equivalence class, in the logical sense of the term) is the concept of the module/process, whose importance we now fully understand, because processes and their interactions are, in fine, the elements that convey the semantic of the system. This being the case, we have constructed a hierarchy of models, all derived from reality by successive abstractions and kept coherent with it, in other words:

Reality (unique, by definition) → abstract semantic models (one or several maps, depending on the points of view) → functional models (one or several, for each of the abstract models) → physical–organic models (one or several hard+soft implementations per functional model, depending on the technological constraints)

When a user acts in reality, activating one or several of the physical–organic systems, they must ensure the coherence of their actions by suitable means, and a fortiori when there are several users. Here, we see a notion of “transaction” that is entirely general and which guarantees the coherence between the virtual world of the models, on the one hand, and reality, unique by definition, on the other hand. In our two examples, this notion is at the heart of the coherence of the actions carried out by the active units:

– for the C4ISTAR systems that combine land–air actions with families of various systems, the actions of troops on the ground (infantry, machine guns, etc.) must be made coherent with the actions carried out by aerial means (airplanes, helicopters, drones, etc.) to prevent losses to friendly fire;

– for an electrical system that has tens of thousands of pieces of equipment, each with its own life and hazards, management via the information system of the electrical system must balance supply and demand, notwithstanding maintenance, scheduled and/or unexpected shutdowns, and breakdowns due to hazards of the environment and/or malicious acts, all without shutting down the entire
system, which would plunge all users into “darkness”. Systems of this kind must function 24 hours a day, 365 days a year, without interruption to the service.

This “transactional” coherence for modules/processes determines the semantic invariant that must be maintained at all costs, so that the system conserves the meaning that its designers gave it, given the requirements and demands expressed by the users (known as the “service contract”/service-level agreement). The underlying capability logic is equivalent to defining a quantification.

9.2. Recursive organization of the semantic

The semantic perimeter of an SoS, given its size, calls for an organizing principle, expressed as a general structure encountered at all abstraction levels, which allows the global nature of the system to be understood without getting lost in the details and without confusing the levels – of which the computing and/or telecoms stack is an excellent example. Figure 9.2 shows the topological structure of this fundamental pattern. It illustrates the role of three elements: (1) the FS block, which characterizes the functions specific to the system for the user in the environment; (2) the FC block, which characterizes the common functions that all the FS blocks can use without restriction; and (3) the ECM (Exchange and Coordination Mechanism), which provides communication between all the blocks in order to ensure the coherence of the whole set of interactions.
Figure 9.2. Principle of recursive organization. For a color version of this figure, see www.iste.co.uk/printz/system.zip
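To fix ideas, here is a minimal Python sketch of the {ECM, FS, FC} pattern of Figure 9.2; the class and method names are hypothetical, not the author’s. Blocks never address each other directly: every exchange transits the ECM, where a “guardian” can veto non-compliant messages, and the same pattern can be nested inside each block down to the atomic elements:

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Pattern:
    """The recursive {ECM, FS, FC} pattern: FS blocks hold the functions
    specific to the system, FC blocks the common functions, and every
    exchange transits the ECM under the eye of a "guardian"."""
    name: str
    fs: Dict[str, "Pattern"] = field(default_factory=dict)  # specific blocks
    fc: Dict[str, "Pattern"] = field(default_factory=dict)  # common blocks
    forbidden: List[str] = field(default_factory=list)      # guardian rules

    def exchange(self, src: str, dst: str, message: str) -> Optional[str]:
        """The ECM: the only communication path between blocks."""
        blocks = {**self.fs, **self.fc}
        if src not in blocks or dst not in blocks:
            return None  # unknown block: excluded from the cooperation
        if any(word in message for word in self.forbidden):
            return None  # the guardian vetoes a non-compliant exchange
        return f"{src} -> {dst}: {message}"

# The pattern repeats inside each block, down to the atomic elements:
system = Pattern("system S1", fs={"FS1": Pattern("atomic element")})
sos = Pattern("SoS", fs={"S1": system}, fc={"FC": Pattern("directory")})
print(sos.exchange("S1", "FC", "request"))  # S1 -> FC: request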
In Figure 9.2, the access points have been mutualized and moved outside the FS and FC system blocks, because network technology allows this with no particular difficulty. However, this is not an obligation; it is a simplification factor, a simple convenience, subject to access control and knowledge of the users’ access rights. Access is carried out via the general exchange mechanism and is then redistributed towards the system blocks that execute the user’s requests.

The fundamental property of this recursive diagram is that the central pattern {ECM, FS, FC}, constructed around the ECM, is repeated recursively inside each of the system blocks, until the atomic elements constituting the SoS are reached. Each of the patterns is monitored by a “guardian”, in compliance with the specific rules of the block (without forgetting J. von Neumann’s advice: “Who guards the guardians?”). The factor that determines the size of the architectural pattern is its controllability, in the sense described in Chapter 2.

We can focus the global vision of the SoS on the hierarchical structure of the network (a bus hierarchy, in computing language), where each network has an autonomic management device. This network-centric tree structure, which doubles and/or substitutes for the physical–organic material connections, allows the various categories of blocks to be organized by finding the best possible placements, given the general semantic of the SoS and the specific constraints of the equipment. The semantic map that we previously mentioned is thus projected onto the operational topology created in this way. The outline obtained, and the incorporation of this semantic into the various system blocks, ensure that the “transactional” coherence, whose importance has been demonstrated, is maintained.

9.3. Laws of interoperability: control of errors

Having become one of the major problems of systems engineering and information systems from the 1990s onwards, as well as a strategic stake, as we saw with President Reagan’s Strategic Defense Initiative (SDI, also known as “Star Wars”), interoperability has been the subject of a very large number of studies in France, in Europe and, above all, in the United States. In France, the driving role of the French Ministry of Defense and the DGA, of the AFIS, and particularly of the CESAMES Academy, founded by Daniel Krob, must be acknowledged (see the Preface).
Of all the studies carried out, only a few laws and/or fundamental principles can be extracted, analogous to the phenomenological laws of physics (energy conservation, least action, etc.), which must not under any circumstances be infringed. Interoperability is fundamentally a transduction in which languages play an essential role, and whose principles we have set out in Chapters 8 and 9.

9.3.1. Models and metamodels of exchanges

Throughout Chapters 5, 8 and 9, we have focused heavily on the exchanges of information between the elements of a system or of an SoS, exchanges that are quantified, in terms of information energy, by transactions, as we have seen. For two elements to cooperate, one must give the other what has been agreed between them, and vice versa. Cooperation is therefore based on what each element must expose of what it knows how to do, in the language of the element, denoted LExt, expressed in compliance with a grammar GLExt, and this for all the elements participating in the cooperation. It should be possible for all these languages to be translated from one to another, which defines an equivalence class that we call the “pivot”, in compliance with the most widespread terminological use. The language associated with this equivalence class is the pivot language, with its corresponding grammar that conveys the semantic, in other words LPiv and GLPiv. All elements that do not, or no longer, comply with the rules of the pivot are ipso facto excluded from the cooperation. The “guardian” has the task of ensuring that the rules are complied with. The pivot plays the role of an organizing center of interoperability, around which the elements are organized.

The equivalence class LPiv/GLPiv determines the level of semantic power expressed by the languages associated with that system layer, in terms of interactions and cooperation. The power of a language is characterized by its expressive capacity, in terms of both CRUDE operations (Create, Retrieve, Update, Delete, Execute) and organization (architecture). To give an example taken from general programming languages, languages such as Ada or Java use the concept of abstract data types2, which, when used correctly, allow the capture of elements related to the architecture of programs and systems.

2 Refer to the website https://en.wikipedia.org/wiki/Abstract_data_type for an introductory basis.

This capacity does not exist in languages of the previous generation, such as
ALGOL, PL/1 or C, where there is nevertheless a notion of modularity and recursion, itself absent from the very first languages, such as FORTRAN and COBOL. In specific terms, this means that we can compile/translate from Ada/Java to C, with the loss of all architectural information, but that the inverse translation is impossible, as the inverse compiler is incapable of reconstructing the architecture – a fortiori if the target is the hardware/software interface, where organization and modularity are lost. To interface with sub-systems such as the management of permanent data (via DBMSs), networks for external interactions, or man/machine interfaces for ease of use, general languages also use abstractions that have evolved over time and are themselves organized – for example, the network stack, the most well known of these stacks, which resulted from the work of the CCITT/ITU-T.
Figure 9.3. Equivalence class for the pivot language
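The practical benefit of the pivot can be shown with a small sketch (illustrative only; the Pivot class and its methods are not from the book). With n element languages, 2n translators to and from LPiv suffice, where n(n − 1) direct translators would otherwise be needed, and an element that supplies no translation pair is, ipso facto, excluded from the cooperation:

from typing import Callable, Dict

Translator = Callable[[str], str]

class Pivot:
    """Toy equivalence class around a pivot language LPiv."""
    def __init__(self) -> None:
        self.to_pivot: Dict[str, Translator] = {}
        self.from_pivot: Dict[str, Translator] = {}

    def register(self, lang: str, up: Translator, down: Translator) -> None:
        """An element joins the cooperation by complying with LPiv/GLPiv."""
        self.to_pivot[lang] = up
        self.from_pivot[lang] = down

    def translate(self, src: str, dst: str, text: str) -> str:
        if src not in self.to_pivot or dst not in self.from_pivot:
            raise KeyError("element excluded from the cooperation")
        # Any pair of languages communicates through the pivot:
        return self.from_pivot[dst](self.to_pivot[src](text))

pivot = Pivot()
pivot.register("A", up=str.upper, down=str.lower)   # language A <-> LPiv
pivot.register("B", up=str.strip, down=lambda s: f"[{s}]")
print(pivot.translate("A", "B", "fire mission"))    # [FIRE MISSION]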
The same can be said for more specialized libraries of functions, such as the scientific calculation libraries that target a particular business. From the 2000s onwards, languages specialized for specific businesses have appeared, known as DSLs (Domain Specific Languages), which allow an application programmer to reason on the basis of abstract elements arising directly from the business; a magnificent example is the language of the standard Common Warfighting Symbology, in the field of C4ISR (defense and security; refer to the successive white papers). A specialized DSL can always be translated into a general language, with some loss of information, but the inverse is not possible, for the reasons outlined previously.

Within a single semantic domain, various languages can coexist, but if we go about it in the right way – which presumes a reasonable understanding of the theory of languages and of compilation techniques – it will always be possible to translate from one to another, hence the equivalence class that unites them all. This is the very essence of the notion of a pivot language. If the languages that arise within a single field of business do not comply with these hierarchies of abstractions, which come from the deepest part of our
cognitive structures3, chaos is guaranteed, similar to what we saw in the first 30 years of ICT, with the multiplication of invented languages4. Since the actions undertaken must be coordinated in time and space, as is generally the case, suitable languages must be available to express the cooperation of the system processes that represent reality, with suitable temporal logics taking account of the situations encountered.

Through a strange change of perspective, the pivot language, which initially appeared as an abstraction and a generalization of the various representations used in the elements that interoperate, has become a central structure of the construction of systems, around which engineering will be able to, or will have to, organize itself – the pivot appearing as a general model, a template or a pattern, in the sewing sense, from which what is necessary and sufficient for the interoperability of the elements will be specialized and organized. The pivot language must never be extended beyond what is necessary, because this makes structures visible that perhaps do not need to be, thereby increasing the complexity beyond what is required. In the field of interoperability, the principle of parsimony (refer to Ockham and his razor, but also to L. Wittgenstein and his aphorisms) must be applied to the letter. At this stage, we cannot prevent ourselves from thinking of the theory of schemes and patterns ultimately produced by A. Grothendieck, in his attempt to replace the static view imposed by set theory with a much more dynamic theory, better suited to representing the ever-changing world in which we live.

REMARK.– Refer again to the citation given in the historical reminder, taken from A. Grothendieck’s book Récoltes et semailles.

9.3.2. Organization “in layers” of the models and systems

We have seen that the increase in the size of a system, if it is not organized by a suitable architecture, leads to an unavoidable degeneration of the system, through a loss of control of the integrity of the fundamental feedback loop, due to the increase in latency and in the number of errors: there are too many events to manage within the time frame available. To avoid this, we have introduced an initial fundamental distinction between the internal language LInt and the external language LExt, in such a way

3 Refer to the numerous studies by J. Piaget, most particularly Équilibration des structures cognitives, PUF.
4 Refer to R. Wexelblat, History of Programming Languages, ACM series.
as to define the means of use of the system seen from the outside, excluding its actual physical–organic representation – hence the importance of transducers, which make it possible to go from one world to the other without losing information, on condition that track is kept of the transformations carried out, as is the case with transformational grammars. The grammars of these two languages, GLExt and GLInt, set up the rules of correct use, rules that need to be verified and enforced by the “guardian” of the site. The outside, materialized by the couple LExt/GLExt, is simpler because it is more general; it hides the complexity of the internal language. In addition, the correspondence established between the two traces – the external trace, on the one hand, generated from what the user does with the external language, and the internal trace, on the other hand, obtained from the autonomic management of the transducers – means that the construction can be validated both from the point of view of the user and from the point of view of the internal structures, while avoiding merging the points of view.

A paradigmatic example of constructive logic defined in this way is given by the organization of the computing stack, which allows us to progress in successive stages from the random and quantum world of silicon atoms integrated into “chips” to the perfectly deterministic world of application programmers. It is what makes the real-time cooperation of pieces of equipment possible on as large a scale as that of electrical systems. This recursive pattern allows the autonomic management of the system to be organized, level by level, in compliance with the capability logic specific to each. Without this separation, the traces would remain forever incomprehensible: without these levels that materialize the recursive succession of inside/outside, everything would be mixed in an unstructured, undifferentiated trace.

We will observe, in passing, that the level of abstraction materialized by the external language can be activated not only by systems higher up the hierarchy, but also by external operators, via suitable interfaces that play the role of control panels, which can be located on the element to be activated or moved to the command–control centers, as in the case of the teleoperation of the electrical system.

Abstraction therefore means simpler and more certain. Generalizing in order to simplify: this is the price of “simplexity”5. The corresponding engineering relies on the theory of languages and on compilation techniques, which allow strict control of the size of the constitutive elements and of the recovery mechanisms.

5 The term was introduced by A. Berthoz, professor at the Collège de France, in his work Simplexity, Yale University Press, 2012.
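The double trace mentioned above can be illustrated by a toy transducer, written in Python under assumed names (Transducer, execute): a simple lookup table stands in for the grammars GLExt/GLInt, each sentence of LExt is mapped to internal operations of LInt, and both traces are kept so that the construction can be validated from the two points of view without merging them:

from typing import Dict, List

class Transducer:
    """Toy LExt -> LInt transducer (illustrative names): a lookup table
    stands in for the grammars GLExt/GLInt."""
    def __init__(self, rules: Dict[str, List[str]]) -> None:
        self.rules = rules
        self.external_trace: List[str] = []   # what the user did (LExt)
        self.internal_trace: List[str] = []   # what the machine did (LInt)

    def execute(self, request: str) -> List[str]:
        if request not in self.rules:         # the "guardian" rejects misuse
            raise ValueError(f"not a sentence of LExt: {request!r}")
        self.external_trace.append(request)
        ops = self.rules[request]
        self.internal_trace.extend(ops)       # keep track of transformations
        return ops

t = Transducer({"open_valve": ["check_pressure", "actuate", "confirm"]})
t.execute("open_valve")
# The correspondence between the two traces validates the construction
# from both points of view, without merging them:
assert t.external_trace == ["open_valve"]
assert t.internal_trace == ["check_pressure", "actuate", "confirm"]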
9.3.3. Energy performance of the interaction between systems

To guarantee the informational coherence of a system, two aspects must be ensured: 1) each element must have the resources needed to carry out the transformations/actions described by the functions, in compliance with the requests made by the users and independently of the context of use; 2) the interactions that are essential for maintaining the coherence of the transformations/actions – including those generated by errors, which are otherwise inevitable and whose number cannot be anticipated because they depend on the hazards of the environment – must also have enough resources to comply with the time schedule.

This information energy therefore has two characteristics: (1) it is “salient”, on the one hand, because it is precisely located on the functional elements of the system; and (2) it is “prevalent”, on the other hand, because it is diffuse among all the elements with which the activated element can interact and which, in fine, determine the behavior of the transformation carried out. We may note the analogy with what exists in the physical world, in terms of the mechanical energy related to matter and the mass of material bodies, and the binding energy, essentially electromagnetic in nature, which ensures that the form of material bodies is coherent and maintained. This is, moreover, not just an analogy: in the standard model of particle physics, two categories of particles are distinguished – fermions, which are the support of matter in the usual sense of the term (protons, electrons, etc.), and bosons, which convey the interactions associated with the forces in quantum field theory (the photons of electromagnetic waves, and the Higgs boson recently discovered at CERN, which explains “mass”).

The quantification of the available information energy will therefore be distributed both over the salient aspect of the functions carried out and over the prevalent aspect of the interactions between the processes that host the functions. It is understood that the sum of the two cannot exceed the total available energy, and that what is consumed is no longer available; the transformation of information energy is irreversible, but the trace of the transformations carried out can be kept if the support languages are constructed according to standard practice. We will detail this in what follows.

As in the quantum world, the information transformations operated by the users of a system are “discrete”. An action is done or not done, whether it concerns the updating of a bank account following a purchase or the production of a power plant in the electrical system. The action cannot be divided into intervals; there is no continuity. The power plant functions or does not function; it is
connected or not connected to the network. On the one hand, this is the property of atomicity – the A in ACID (Atomicity, Consistency, Isolation, Durability) – as seen from the outside, because there is no visible intermediate state. On the other hand, inside the element, there can be intermediate states, defined recursively, as indicated previously. Here, the metaphor of Maxwell’s demon is entirely significant (see Chapters 6 and 7 and Figures 7.13 and 7.15).

When an action is triggered by a request to the system, it is imperative for the system to have the resources necessary to fulfill the user’s requirement. Before triggering an action, the system must ensure in advance that the necessary resources exist, which has a cost (known as the “cost of quality” (COQ) in quality systems). If, for any reason, the action needs to be interrupted, the system must be returned to a coherent state and, if possible, the resources that had been reserved must be returned, available to other users. This repairing action also has a cost (known as the “cost of non-quality” (CONQ) in quality systems), which means that, in order to comply with its service contract, the system must at all times have an energy reserve (described as “powerful” because time is a fundamental parameter) allowing all the actions undertaken at instant T to be repaired. Other than the intrinsic information energy required to carry out a series of actions a1, a2, …, ai, the system must have sufficient resources to:

– validate the instruction that guarantees that the action is executable;

– carry out the repair, in real or deferred time, if, for any reason, the action is interrupted.

In all cases, what has been consumed cannot be restituted; in the same way as entropy, the transformation energy is irreversible.

FIRST INEQUALITY.– We must therefore, permanently and for all systems, be able to verify that in all possible configurations:

information energy for transformation + energy for orders (COQ) + energy for repairs (CONQ) + security margin < available information energy

We have already seen that all systems imply a control, in such a way that the actions undertaken are constantly directed with respect to the end purpose of the system, where this end purpose is materialized by an invariant (see Figure 2.1). We have also come across another type of loop (see Chapter 4, Figure 4.15, and Chapter 7), whose role is to guarantee a level of autonomy of the element, independent of its context of use.

SECOND INEQUALITY.– In all cases of use, the functional loops require a certain quantity of information energy, among other things, to function correctly. If this is
not the case, the system is no longer controlled (problem of latency), or is no longer autonomous, which is a crippling fault. Hence the additional inequality:

information energy of the functional loop (a series of transformations) + control energy (a function of the loop itself and of its complexity) + security margin on control < available information energy

All the actions carried out are due to functions that are located precisely within the spatio-temporal structure of the system processes, visible to users; hence the term “salient” used to describe them.

9.3.4. Systemic approach to system safety

9.3.4.1. Law of the quantum of information energy: interactions

We will begin with an example of the characteristic properties of the importance of interactions. Safety covers a whole range of properties that are addressed, on the one hand, (a) to system designers in the wider sense (availability, integrity, confidentiality) and, on the other hand, (b) to the users and uses (traceability of access, authentication, non-repudiation), using the terminology defined in the safety standards6. These properties perfectly illustrate what we call “prevalence”. In general, any system immersed in an environment is exposed to various categories of risk, which can only be countered by compensatory mechanisms integrated into the system – for those risks whose occurrence can be analyzed (reflex-type action) – or by mechanisms that allow an appropriate response to be established for a new, hitherto unidentified risk (thinking-type action), before the system becomes unavailable.

The engineering of SoS specifically and unavoidably raises the fundamental problem of the integration of complex systems and, more particularly, that of transverse properties, such as safety, which are diffuse in all the elements of the system, with suitable compensatory and/or repair mechanisms. These diffuse properties are global, emergent properties of the system, which can only be observed when all the constituent elements are assembled: for this reason, we say that they are “prevalent”. We will clarify the problem of complexity later, but the fundamental question tackled here is the following: how can the interactions between the various systems and elements that constitute the SoS be organized, so that the integration of a new element into the SoS does not translate into reduced safety of this element once it is integrated, or into an increased risk for the SoS?

6 Refer to the family of standards ISO/IEC 27000 and the standard ISO/IEC 15408, which deals with the “Common Criteria”.
This is only possible if the SoS has suitable compensatory and/or repair mechanisms, given its mission as defined in the service contract (robustness + resilience; see also “system safety”). These mechanisms materialize the understanding and knowledge that the system has of its environment and of its constituent parts (e.g. their failure modes, the acceptable error rate). If it does not have them, the SoS is doomed to fail at some point in the future, because the slightest anomaly will be fatal to it. It is therefore necessary to ensure that the integration not only guarantees maintenance of the safety level of the integrated element, but even improves it. The organization of the computing stack and/or the telecom stack, and ICT engineering in general, provide existential proof that this is possible.

Another characteristic is complexity, which, when correctly organized, can be used to improve the survival of the SoS. Reconciliation with complexity is therefore necessary. This is demonstrated every day by a certain number of SoS, such as the electrical systems, some of the largest currently known, on which our energy well-being and the operation of our economies depend. Complexity allows mechanisms to be developed which, without it, could not exist – for example, the energy compensation of one region for another, as explained in the case study of the electrical system.

We will briefly summarize the fundamental characteristics that determine the demand for information energy, in such a way as to guarantee the correct consideration of the transverse properties described as prevalent. Each element of a system is likely to be subject to the influence and the unpredictable hazards of the spatio-temporal domain in which it operates, which characterizes its exposure to risk. Once it is damaged or deregulated, in the absence of a diagnosis, the faulty element will contaminate its neighbors via the interactions that it has with its environment. In this very particular, but entirely general, case, it is necessary to monitor both the element, with respect to the nominal conditions of its correct operation, and the interactions, in such a way as to detect everything that would be likely to indicate the failure of an element that has not itself detected its own malfunction (induced failure).

REMARK.– Refer to the author’s website7 for the document “Ingénierie des exigences non fonctionnelles dans le contexte SDS – Quelques exemples : fiabilité, performance, maintenabilité et évolutivité” (Non-functional requirements engineering in the SDS context – Some examples: reliability, performance, maintainability and scalability), which goes into detail about certain aspects of safety.
7 http://cesam.community/en/
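As a minimal illustration of this “prevalent” surveillance (hypothetical names; nothing here comes from the cited document), the sketch below watches an element against its nominal conditions and, when it fails, flags every neighbor it interacts with as a candidate for an induced failure:

from typing import Dict, List, Set

class Monitor:
    """Sketch of 'prevalent' surveillance: watch each element against its
    nominal conditions AND watch the interactions, so that an undetected
    failure can still be inferred from the element's neighbors."""
    def __init__(self, interactions: Dict[str, Set[str]]) -> None:
        self.interactions = interactions          # who talks to whom
        self.suspected: Set[str] = set()

    def report(self, element: str, nominal: bool) -> List[str]:
        if nominal:
            return []
        # The faulty element may contaminate every neighbor it talks to:
        self.suspected.add(element)
        exposed = sorted(self.interactions.get(element, set()))
        self.suspected.update(exposed)
        return exposed  # candidates for an induced failure

m = Monitor({"pump": {"valve", "controller"}})
print(m.report("pump", nominal=False))  # ['controller', 'valve']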
9.3.4.2. Law of errors: centralization of the phenomenon of errors and controllability

Everything that is created by nature is subject to error: this is one of the great laws of nature. No construction is perfect. The silicon monocrystals in our computers do not exist in nature, although silicon is one of the most abundant bodies on Earth; they have to be manufactured. Artificial systems, like all human creations, do not escape this law. It is necessary to learn to live with errors because, if we do not manage errors, the errors will manage us and will reduce all our creations to nothing. This is an unavoidable evolution that reflects the entropic nature (in Greek, entropy signifies evolution) of our universe: what is “natural” is disorder. If nothing is done, disorder sets in and undifferentiated chaos progressively prevails. Interoperability and SoS are a powerful indicator of the all-pervasive nature of this fundamental problem, and they show, echoing the words of R.W. Hamming (see Chapter 2), and before him J. von Neumann, why error engineering needs to be given a central place in systems and information sciences, in the same way as the strength of materials has a central role in the engineering of material systems.

Figure 9.4 is an adaptation of Figures 8.12–8.14, in which the focus is now on the possible sources of errors and the malfunctions they can cause. The SoS provides a perfect illustration of the fact that simply bringing together systems that share the same spatio-temporal semantic zone can have devastating effects, due to the very fact that they have been brought closer together and to the management of resources that this implies. The SoS requires all the functional and non-functional couplings that the systems maintain with the environment to be made explicit, because these couplings can lead to contagion effects. This is one of the objectives of the modeling of systems, which goes hand in hand with architecture.
Figure 9.4. Hazards, errors, faults. For a color version of this figure, see www.iste.co.uk/printz/system.zip
Without going into details, which would require an entire chapter on system safety, it is obvious that one of the fundamental roles of the architect designer of the SoS is to take care of all the hazards that can occur on one of the systems due to the presence of the others, and of the new modes of operation that will necessarily emerge from the new uses that the SoS makes possible. Otherwise, it is impossible for the SoS to provide the expected new services, unless we believe in miracles: the availability of each system, taken individually, must not decrease due to the presence of the others, for otherwise the overall availability is out of control – and this is no longer engineering. The engineering of the SoS needs to be re-centered on errors, starting with the initial phases of the design of the new whole.

By transitivity, what is applicable at the SoS level is a fortiori applicable to each of the systems, including the elements constituting level 0 (“black boxes”, in the cybernetic sense of the term, the elementary “atoms” of construction of the SoS), which determine the classification and the extension of the whole formed in this way. On each of the elements and/or the spatio-temporal spaces occupied by the elements, it will be necessary to have “observers” – the aforementioned “guardians” of J. von Neumann – which will ensure that everything goes well, taking into account the autonomic management mission attributed to them and their knowledge of the hazards of the zone in which they operate (hence the diagram in Figure 9.5).
Figure 9.5. Account of the hazards encountered. For a color version of this figure, see www.iste.co.uk/printz/system.zip
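The autonomic loop discussed in the following paragraph can be sketched as follows (an illustration under assumed names, not the author’s design): each manager compensates for the errors within its own field of error and escalates the rest to the next level up, loosely mirroring the reflex-type/thinking-type distinction made earlier:

from typing import Callable, Dict, Optional

class AutonomicManager:
    """Toy autonomic loop (detect -> diagnose -> compensate), in the
    spirit of the 'observers'/'guardians' above; all names illustrative.
    Errors outside the manager's field are escalated to the next level."""
    def __init__(self, field_of_error: Dict[str, Callable[[], None]],
                 parent: Optional["AutonomicManager"] = None) -> None:
        self.field_of_error = field_of_error   # error -> compensation action
        self.parent = parent

    def handle(self, error: str) -> str:
        repair = self.field_of_error.get(error)
        if repair is not None:
            repair()                      # reflex-type compensation
            return f"compensated: {error}"
        if self.parent is not None:       # escalate towards a wider view
            return self.parent.handle(error)
        return f"unhandled: {error}"      # fatal for the element

root = AutonomicManager({"overload": lambda: None})
local = AutonomicManager({"sensor_drift": lambda: None}, parent=root)
print(local.handle("sensor_drift"))  # compensated locally
print(local.handle("overload"))      # escalated to the upper level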
The planned autonomic management depends on the field of error that the autonomic manager is required to detect and compensate for, in an operating logic which is that of the autonomic loop (see Chapters 4, 5 and 7). The autonomic management logic therefore needs to be constructed by the architect designer, because it is not
given by magic; it is totally interlinked – more exactly, intertwined – with the functional logic, of which a computing stack is an excellent example. Each of the “observers” is made up of one or several systems with the task of “autonomic management”, so as to detect everything that can or could harm the functional mission of the monitored element, whether this is a level 0 element, a higher-level element or the SoS overall. We therefore see, in symbiosis with the SoS and its functional elements, a parallel “non-functional” organization of sub-systems being drawn up, with the task of the autonomic management of both the SoS and the spatio-temporal environment of the SoS, in order to increase the survival capacity of the SoS in the wider sense of the term. In this specific case, we talk about non-functional system requirements, which are also constraints to be taken into account in the service contract, so that the SoS and its constitutive elements live and survive for as long as possible in compliance with the requirements of the various communities of users.

Seen from the users’ point of view, the systemic vision of the whole is summarized by the diagram in Figure 9.6. Ideally, SoS_NF must be invisible to the users, but its absence would be synonymous with a very rapid “death” of the SoS. Survival engineering operates in close symbiosis with the engineering of the SoS, while being completely autonomous, in order to avoid cognitive bias, because its logic is survival, and nothing but survival.
Figure 9.6. Systemic vision of the survival of a system of systems. For a color version of this figure, see www.iste.co.uk/printz/system.zip
The survival loop is in the position of the most fundamental loop of the SoS, because it guarantees the general equilibrium of the system and protects the system from the hazards that may “break” its symmetry. This is what will preserve and
conserve the system invariant, at a certain energy cost, which means that, from a thermodynamic point of view, the SoS is a dissipative structure. This fundamental loop must be synchronized with the various functional feedback loops of the SoS_F and of the constitutive systems, and it must have autonomous resources, even though, circumstantially, it is able to use the general resources of the SoS. Hence the clocks placed at the various observers, and the associated temporal logics.

REMARK.– Constellations of satellites such as GPS or Galileo carry very high precision atomic clocks, essential both for the quality of the signal and for the fine synchronization of the systems that use this signal.

For the general organization and architecture, two broad options are available: either an external autonomic manager – like the service systems or processors that have existed in computers since the 1980s – or an internal autonomic manager, completely integrated within the SoS_F and its elements; the two options are not exclusive, given the requirements of the communities of users and the engineering feasibility constraints, since the component SoS_NF must be approved. SoS_F and SoS_NF are reciprocal constructions of each other or, we might say, symmetrical ones. By a strange inversion of viewpoint, systems engineering initially focused on the functional; with the development of complexity, it is now focused on the non-functional, in the capability logics known as design to test or design to build, well known to engineers and to the classic engineering sciences – thus reaching the objective set out, initially by J. von Neumann and then by R.W. Hamming, for the information sciences: to give pride of place to errors, in other words to the way in which they are detected, so as to correct them and/or control their effects. As for all living organisms, in order to live and provide the expected service, it is first necessary to survive the hazards.

9.4. Genealogy of systems

The logic of the growth of SoS points to quite a general engineering law, which was already identified a few decades ago by historians of technology8.

8 For example, refer to M. Daumas, Histoire générale des techniques, PUF, or B. Gille, Histoire des techniques, Gallimard.

Technical progress advances in a series of successive generations. It is more a continuum of adaptations – a slow evolution that reuses what has worked – than a permanent revolution; from time to time there are ruptures, which reshuffle the cards and give everyone an opportunity again. ICT is such a rupture. Technical knowledge is cumulative, and we must never skip steps; at least, this has been the case until now, and certainly for the foreseeable future in human terms. What was true for machine tools, with more and more precise watchmakers’ lathes, where
generation N allowed a new generation N + 1 to be manufactured with better performance, is also true for the turbines of nuclear power stations or for particle accelerators. This is also perfectly verified by the successive generations of computers since the 1950s, except that, in the case of ICT, dematerialization has made it possible to accelerate the movement, but has not removed the stages – despite what is said by some who do not understand the constraints of engineering, where the law known as “All we need to do is… We’ve got to…” never works. The development of engineering does not function in the way described by T. Kuhn in his work La structure des révolutions scientifiques, contrary to what many people believe9.

A single exception to this rule concerns the tools that are used to manufacture the purely informational part of systems. This new kind of “machine tool” is an integration of various tools that are, themselves, information. These tools are also fully dematerialized systems of systems, perfect examples of the “information energy” that we are slowly learning to domesticate and to shape. These are primarily programming languages, and everything that accompanies them: compilers, programming environments, software “workshops”, modeling and/or simulation tools, test equipment and environments, etc. These tools can themselves be modified by self-reference, insofar as the abstract model of the tool is itself modifiable (the case of compilers, for example, where there are generic tools known as compiler compilers), which is quite similar to the grammar of a language written in its own language. The purely logical aspects of this language construction were the subject of intense work, even before computers existed, from the 1930s onwards. A. Turing himself, relayed by C. Shannon, established the notion of the universal Turing machine (refer to the Church–Turing thesis, which is the starting point of reflections on the “physics of information”10), whereas J. von Neumann devoted his last strength to the theory of self-reproducing automata11, published posthumously, 10 years after his death. All these works have allowed a rapid development of information engineering.

Large systems, such as the electrical system, cannot suddenly arise at the wave of a magic wand (even with many billions of euros and/or dollars). There are too many problems to be resolved simultaneously, which makes the construction of
9 This is also the opinion of renowned scientists such as the Nobel Prize winner S. Weinberg, who rejected T. Kuhn’s thesis in his article “The Revolution That Didn’t Happen”, and also of D. Deutsch, professor at Oxford, in his work The Fabric of Reality, Chapter 13, and even of our colleague, G. Cohen-Tannoudji.
10 Refer to the work by D. Deutsch, The Fabric of Reality, Penguin Books, 1997.
11 Refer to Theory of Self-Reproducing Automata, University of Illinois Press, 1966, edited and completed by A.W. Burks. The first part of the work, comprising a series of five lectures, sets down the basis for future theories of complexity.
systems of this kind impossible, as engineers say, from scratch. Engineering is always a long journey with many pitfalls. If we start off on the wrong foot – something we never know in advance – or if we skip steps, we can be almost certain that we will not reach the end; there are numerous examples of this. Modularity remains the fundamental rule for controlled progressive integration, a true precautionary principle, known as “continuous integration” in project jargon. It is a necessary condition, but not a sufficient one. If the rule is violated, failure is certain. In all cases, intuition and creativity remain essential in engineering.

The common factor of the two great technologies that revolutionized the 20th century, nuclear energy and information energy, is that, in both cases, remarkable scientific and engineering personalities, all extremely well educated, were present right from the outset. The extraordinary talent of J. von Neumann, a pure product of the European culture of the 1920s and 1930s, allowed him to be present in both, and in a few others besides. In the field of systems that concerns us here, men such as A. Turing, N. Wiener, C. Shannon and, of course, J. von Neumann, to cite only four, were each exceptional personalities in their field, who were able to and knew how to work together without neutralizing each other. Perhaps the tragedy of World War II and the mortal risks induced by the totalitarian regimes had something to do with this. The development of a system such as SAGE, with J. Forrester at MIT, then the space program orchestrated by NASA, following the Manhattan Project12, co-directed by the unusual team of the physicist R. Oppenheimer and General L. Groves13 – opposed in many things, but who turned out, in the end, to be complementary – were the cauldron in which systems science was constructed, relayed by renowned academic institutions such as MIT, which acted as the catalyst, as explained in section 1.3.

Interoperability, to use the “modern” term, is present right from the beginning of the history of all these systems. Capability logic progresses step by step, which means the stages can be controlled, and we can learn how to conquer our ignorance without taking too many risks. As J. von Neumann said in one of his very last articles, a short time before his death at the Walter Reed hospital in Washington: “We can specify only the human qualities required: patience, flexibility, intelligence.”14 If we do not organize complexity, it will destroy us. The laws of interoperability presented previously provide us with the key to this organization, and also with the method with which the problem must be approached. These are the required conditions. For the rest, we must trust the human qualities mentioned by

12 Refer to R. Rhodes, The Making of the Atomic Bomb.
13 The military engineer who constructed the Pentagon before World War II.
14 “Can We Survive Technology?”, Fortune, June 1955 (refer to Collected Works, Pergamon, volume VI).
J. von Neumann, knowing that none of them is merchandise that can be bought like pocket watches or tablets. When faced with complexity, there is no escape from competence and conscientious work. The double scientific and technical culture, and the architect’s finesse, are the precious products of free and open societies and of a humanism that respects individuals – a humanism which, itself, is not a good to be traded either, and that is a fact.

“To live, we must survive”, biologists tell us. The laws of interoperability, which are in fact rather simple, allow us to survive complexity and even to make it an ally for new emergences, as we have seen with the computing stack. This is the price to pay for “more is different”.
Conclusion The Three Principles of Systemics
This compendium assembles the three principal results presented in this book, in the form of three phenomenological principles, in analogy with the phenomenological laws and/or principles of thermodynamics that determine all energy phenomena at all scales, whether micro-, meso- or macroscopic. The phenomena that we consider here relate to intra- and inter-system exchanges of information, and to the laws that govern these exchanges.

Systems as we consider them here have two components: a physical component, on the one hand, obliged to comply with all the known constraints of the physical world, and an immaterial information component, on the other hand, which allows all the energy processes implemented by the system, including human processes, to be coordinated and controlled, via the various languages that cover these exchanges. These two components are in a complementary relationship, like the one that exists in quantum mechanics. This complementarity was already taking shape in N. Wiener’s first works, with what he called communication and control. It has been, since the 1990s, an obvious truth that emerges from the engineering of systems of systems.

These two components also have the fundamental characteristic of having faults that are part of their very nature. Those of the physical world have been known to engineers for a long time, whereas those of the information world, a new world, have the particularity of resulting from the erroneous actions of the human operators who intervene at all levels of the creation of this immaterial component, which is itself expressed by means of the physical component, in this case a simple circumstantial support. It is essential to control the effects of these faults in order to guarantee the service contract, which defines the invariant of the system, to be conserved at all times.
In her famous theorems concerning invariants, E. Noether, a student of D. Hilbert in Göttingen, showed that every conservation law is associated with a group structure that takes into account the physical fact of maintaining the energy equilibrium corresponding to this invariant – hence the phenomenon known as compensation, or correction, which is the subject of the third principle of systemics, applied to information interpreted in this context as an “energy”.

We will also revisit some epistemological consequences of this new way of approaching human knowledge, which emerges from the interactions between the various processes, whether natural or artificial or, more and more often, a coherent mix of the two, of which we will state very precisely that they are integrated, by virtue of an integration operator that we will define formally and concretely. This is inexact knowledge, with limits of convergence that can be modeled and thereby controlled.

C.1. A universe of processes

We live in a world of energy processes, starting with the initial process from which the universe was born, the Big Bang, 13.7 billion years ago, if we believe the standard cosmological model set out by astrophysicists following the discovery of the fossil radiation. This is the ultimate residue of this gigantic phenomenon, in other words the universe’s cosmic microwave background, whose temperature, measured by the satellites COBE and Planck, is 2.7 K.

We are ourselves made up of hundreds of billions of elementary physical–chemical processes that are extremely stable, such as the replication of DNA in our chromosomes, or even all life as it exists on our planet. In addition to the infinitely small and the infinitely large that were discovered during the Renaissance, after the Copernican Revolution, physicists have added a third infinity, namely that of complexity and information, shown in an image that I have borrowed from G. Cohen-Tannoudji’s book, L’horizon des particules (Figure C.1). Under the influence of the elementary forces that were progressively unveiled over the course of the 20th century, we see at play an aggregation of fundamental energy in a triple format {particle, wave, information}: first, corpuscular matter in the ordinary sense, which we perceive at our scale; then energy in wave form, which we know to be equivalent to mass, according to Einstein’s famous E = mc², first indicated by Maxwell and his equations and then by de Broglie’s wave–particle duality; and lastly an information format, an atemporal organizing force which shapes the atoms and molecules of the mineral and physical worlds and leads to Life, of which we are ourselves elements, at the end of an evolution lasting more than two billion years that has led to the emergence of
consciousness and free will in hominids. Cellular life, which began more than 1 billion years ago, appears as an extraordinarily stable process, one which has resisted several mass extinctions.
Figure C.1. The three infinities in science. Architectures in stages1
We are ourselves also great creators of processes, whose still-visible results would not have been possible without human action and intelligence, ranging from the sculpted flints of the very first tools to the integrated circuits of the latest generation of chips, present at the core of the most complex systems that make our lives a little easier than those of our ancient hunter-gatherer ancestors. We are also great organizers of processes and projects, including from a cultural point of view in our economic and social systems, using the elementary processes that nature abundantly provides us with and that we know how to observe and perfect – and sometimes destroy, when we lack a complete understanding of them – on a scale that now covers 45 orders of magnitude. It is, to use a term coined by F. Gonseth in his epistemological work and reused by G. Cohen-Tannoudji in several of his books, our “horizon of perception” (Figure C.2), a horizon that determines our field of view and evolves in symbiosis with the tools and techniques that we have progressively constructed over the course of our evolution.
1 http://en.wikipedia.org/wiki/Water_splitting.
Figure C.2. Our perception horizon
C.2. The three principles of systemics

The three principles presented in this compendium relate to the life and survival of the artificial systems that we have created from the elementary processes of nature, considered as wholes, in other words in interaction/equilibrium with their environment (consumption of resources, rejections, disturbances, withdrawal, etc.). The phenomenology relating to them is materialized by Simondon’s triplet {U,S,E}, in homage to his founding work on information; these principles are inseparable from the human component of systems, which radically distinguishes them from the analogous laws of physics, from which humans are eliminated.

Life itself relates, on the one hand, to the individual life of systems and, on the other hand, to a form of collective life in which the systems help each other and cooperate, which means they can have a heightened specialization and therefore, in fine, a greater robustness/resilience from a collective point of view. The waste and emissions of one are recycled by others, in a universe where, it would appear, everything is connected from the outset.

The first two principles correspond to this double reality, concerning: (1) systems as individuals/monads; and (2) systems of systems as collective entities that form an organized “society”, in compliance with the reference framework that defines the means of belonging and the semantic. These are the principles of conservation and of transformation/action that organize informational reality. Systems of systems formed in this way can themselves be organized in a hierarchy, in
successive levels of abstraction, and constitute new individual/monad entities which will, in turn, be integrated.

The third principle concerns the survival of the two ways of life, individual and collective, determined by the first two principles. This third principle takes into account the all-pervasive nature of errors and faults among elementary systems and their organs, both from a material point of view and from an information point of view (their “programming” in the wider sense, to state things simply), which is also a fundamental fact of nature. All processes, whether natural or artificial, include structural faults and produce waste. They are subject to laws of wear that are specific to them, to the hazards of the environment, to internal and/or external failures, or to modifications of their environment, due to emissions, that sooner or later make them unsuitable for this environment. These failures can randomly disturb an initially correct function and falsify its results. These principles are unavoidable, and taking them into account is a fundamental requirement for engineering, which must comply with them under all circumstances.

C.2.1. First fundamental principle of systemics: conservation of the logical form – structural invariant of the information which defines the system

This principle relates to the “size” of the system (hence the term “allometry”2) and to its logical form, in the architectural and/or organizational sense, relating mainly to the semantic. In other words: the “size” of the system relates to the maximum number of constituent elements that the feedback loop of the elements is able to control, so as to ensure that the system invariant – also known as its logical form, functional model or logical design (J. von Neumann’s terminology) – is complied with, given the internal and external hazards and the physical and programming particularities of the system, independently of the service contract owed to the users (also known as the “quality of service contract”/service-level agreement, or “safety” in the broad sense). This invariant, in other words the end purpose of the
2 The term allometry was created in 1936 by J. Huxley and G. Teissier as a conventional designation, in biology, of the differential growth phenomena of organs, tissues or activities, insofar as these growth phenomena are determined by a law in a specified mathematical form, generally power laws. Huxley and Teissier took inspiration from the principle of allometric growth outlined by the biologist D’Arcy Thompson in 1917 – estimation of this growth by grids of geometrical deformation – presented in his book On Growth and Form. In a mechanical system, the energy involved and the strength of materials used will dictate the dimensions of the parts. In systems science, we have notions that are analogous to what is called load leveling, control of the acceptable level of load, and scalability, in other words the capacity of the system to grow linearly, given its organization, at the same time as the level of load generated by the users.
The maximum size of the system in its physical dimension is therefore determined by the processing capacity (the "performance", in the sense of Aristotle's efficient cause) of the general feedback loop, centralized and/or hierarchically distributed, which must therefore not be saturated with useless or irrelevant information. Between this loop and what must remain the responsibility of the elements/equipment, the principle of subsidiarity must be strictly applied; this is an acceptable compromise in the PAC sense3.

Engineering of this loop, in its "system project" dimension, is itself a function of: (a) the skill of the engineering team that develops it, maintains it and operates it throughout the life cycle of the system; and (b) the behavior of the users (refer to Simondon's triplet {U, S, E}), so as to define the modular architecture best adapted to the overall situation. This architecture is the materialization of the system invariant. The size of the system in its physical component is therefore itself determined by the control capacity of the project system loop, and of the resources managed by this loop, which develops and maintains the technical system in a state of correct operation.

REMARK.– System safety results from a set of collective, diffuse or "non-functional" properties, known as such because they cannot be reduced to a function, which concern: (a) engineering itself, in other words availability, integrity, confidentiality; and (b) users and authorized uses, in other words the traceability of access points, authentication and non-repudiation. It is implied that the users have the skill necessary to use the system in an acceptable manner (their "grade", in other words their level of maturity in the CMMI sense), which determines the number of errors acceptable to the users – again a compromise governed by PAC logic. Safety groups together the characteristics of reliability, availability and serviceability (RAS).
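By way of illustration, the following minimal sketch (in Python; the element names, thresholds and capacity figure are hypothetical, not drawn from the book or from any real system) gives a concrete form to this subsidiarity compromise: each element absorbs its routine deviations locally and escalates only what exceeds its own authority, so that the central feedback loop processes few events; beyond its processing capacity, the system has exceeded its controllable "size".

```python
# A minimal sketch of the subsidiarity principle: local regulation first,
# escalation to the central loop only when local authority is exceeded.
from dataclasses import dataclass, field


@dataclass
class Element:
    name: str
    local_threshold: float  # deviations below this are absorbed locally

    def handle(self, deviation: float) -> str | None:
        if abs(deviation) <= self.local_threshold:
            return None  # handled locally: subsidiarity
        return f"{self.name}: deviation {deviation} exceeds local authority"


@dataclass
class CentralLoop:
    capacity_per_cycle: int  # maximum escalations the loop can process
    elements: list[Element] = field(default_factory=list)

    def cycle(self, deviations: dict[str, float]) -> list[str]:
        escalated = [msg for e in self.elements
                     if (msg := e.handle(deviations.get(e.name, 0.0)))]
        if len(escalated) > self.capacity_per_cycle:
            # The loop is saturated: the system exceeds its controllable size.
            raise RuntimeError("central loop saturated")
        return escalated


loop = CentralLoop(capacity_per_cycle=2,
                   elements=[Element("pump", 0.1), Element("valve", 0.1)])
print(loop.cycle({"pump": 0.05, "valve": 0.4}))  # only the valve escalates
```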
3 PAC, in other words "probably approximately correct", in reference to the work of L. Valiant. The notion of the availability of a system, defined as D = MTBF/(MTBF + MTTR), where MTBF is the mean time between failures and MTTR the mean time to repair, is a good illustration of this logic: if the MTTR is lower than the threshold of perception that is specific to the system – in other words its latency – then everything happens as if strictly D = 1, because it is then legitimate to write MTTR ≤ ε = 0, as is done in non-standard arithmetic. The system will appear to be perfect, because the repair speed of the feedback loop makes it appear so with respect to the latency times of the users, who are external actors.
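The following minimal sketch (Python; the numerical figures are purely illustrative) computes D and the availability as perceived under the latency threshold described in this footnote:

```python
# D = MTBF / (MTBF + MTTR); a repair faster than the users' perception
# threshold (latency) makes the system appear perfect: MTTR <= eps ~ 0.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability D = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


def perceived_availability(mtbf_hours: float, mttr_hours: float,
                           latency_hours: float) -> float:
    """Availability as seen by users who cannot perceive short outages."""
    if mttr_hours <= latency_hours:
        return 1.0  # repair is invisible: D is treated as exactly 1
    return availability(mtbf_hours, mttr_hours)


# Example: a 0.5 h repair is invisible to users who only perceive
# outages longer than 1 h.
print(availability(1000.0, 0.5))                 # ~0.9995
print(perceived_availability(1000.0, 0.5, 1.0))  # 1.0
```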
C.2.2. Second fundamental principle of systemics: cooperation between systems – capability logic, subsidiarity, autonomy

This principle relates to the conditions required for interaction and cooperation between the elements that constitute the system (elementary processes and/or modules that carry the semantics), taking into account its end purpose; this implies communication protocols organized like a language, in the linguistic sense of the term. The principle has two main aspects:

1) A system is necessarily in interaction with its environment, starting with its users; its internal structures must therefore contain a model of this environment, including the behavior of the users. Without such a model the system cannot live, in other words maintain the structural invariant which carries the semantics, because it cannot distinguish between what is beneficial and what is harmful to it, or even fatal (it is an informed "demon" in Maxwell's sense of the term, able to distinguish the "hot", beneficial factors from the "cold", harmful ones). Symmetrically, it must also have a model of its own operations in order to diagnose itself, repair itself and survive its own errors, wear and hazards, using suitable means such as functional redundancies, or by calling on an external operator. This immaterial structure is known as a metamodel, a reference framework, or an "architecture framework" with its multiple languages, in systems engineering terminology (in Aristotle's language, the formal cause materializes the objective defined by the final cause, set down by the users – in other words the system semantics), to distinguish it from its specific physical manifestation (the material cause and the efficient cause, according to Aristotle).

2) When systems cooperate with a view to the wider objective of constituting a system of systems, the necessary and sufficient condition for cooperation is that each of them has a generic model of what is expected of the other, expressed in its own language and known as an exchange model (or a metamodel, in systems engineering jargon): a set of rules that place the reference frameworks of the constitutive systems in two-way correspondence, as in the case of the computing stack. If this is not the case, each system loses autonomy. Cooperation allows each one to be more effective within its field and to specialize, and the community of the system of systems thus formed, taken as a whole, or as a new monad, can increase its global semantic capacities while improving its own robustness and resilience and those of the systems that make it up. The exchange model, and the associated stack of interfaces, is the fundamental mediator of these highly organized interactions, which preserve the integrity and coherence of the system while guaranteeing cooperation between its elements and an improvement of the overall yield in energy terms. This model defines the languages of interaction; it is a grammar.
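As an illustration of point 2, here is a minimal sketch (Python; the two systems and their correspondence table are hypothetical) of an exchange model that places two reference frameworks in two-way correspondence and rejects any term outside the shared grammar, thereby preserving the autonomy of each side:

```python
# A minimal exchange model: each system keeps its own vocabulary; the
# mediator holds the two-way correspondence between reference frameworks.

class ExchangeModel:
    def __init__(self, correspondence: dict[str, str]):
        self.a_to_b = dict(correspondence)
        self.b_to_a = {v: k for k, v in correspondence.items()}

    def translate(self, term: str, direction: str) -> str:
        table = self.a_to_b if direction == "a->b" else self.b_to_a
        if term not in table:
            # Outside the shared grammar: refusing the message is what
            # keeps the receiving system's reference framework intact.
            raise ValueError(f"term {term!r} is not in the exchange model")
        return table[term]


# Hypothetical correspondence between an air-traffic system (A) and a
# defense system (B) that describe the same reality in different terms.
model = ExchangeModel({"track": "contact", "flight_plan": "mission"})
print(model.translate("track", "a->b"))    # 'contact'
print(model.translate("contact", "b->a"))  # 'track'
```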
We should note that cooperation requires a set of standards to be taken into account, materialized by various models (this is the information) and shared by all the elementary systems that will form the society, including with regard to system safety. Acceptance of these standards, in other words of the system reference framework, defines the criteria of belonging to the system of systems, as well as of exclusion from it, because non-compliance with the reference framework, for whatever reason, puts the entire society formed in this way in danger. Acceptance of these standards, which represents a cost, is justified by an improvement in global survival; this must be verified in particular for everything relating to system safety, because what is acceptable for one of the systems may not be so for another, and in any case the erroneous states of some must not metastasize to the others. New states that arise from the cooperation must remain controllable at their own level, in compliance with a correct implementation of the exchange model. For this reason, an element that does not comply with these standards, for whatever reason, must be neutralized to avoid putting the survival of the whole in danger – hence the absolute necessity of autonomic management. We note that at this stage there is no limit to the size of a system of systems other than compliance with an informational constraint, in terms of complexity, that we know how to organize: the SLA of level N with respect to the service expected at level N + 1, which must in any case comply with principle no. 1.
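A minimal sketch (Python; the reference framework keys are invented for the example) of this membership rule: admission is conditional on acceptance of the shared standards, and a member whose observed state no longer complies is neutralized (quarantined) so that its erroneous states cannot metastasize to the others:

```python
# Membership in a system of systems is conditional on compliance with
# the shared reference framework; non-compliant members are neutralized.

REFERENCE_FRAMEWORK = {"protocol": "v2", "safety_level": 3}  # hypothetical


class SystemOfSystems:
    def __init__(self):
        self.members: dict[str, dict] = {}
        self.quarantined: set[str] = set()

    def admit(self, name: str, declared: dict) -> bool:
        """Admit a candidate only if it accepts the shared standards."""
        compliant = all(declared.get(k) == v
                        for k, v in REFERENCE_FRAMEWORK.items())
        if compliant:
            self.members[name] = declared
        return compliant

    def audit(self, name: str, observed: dict) -> None:
        """Neutralize a member whose observed state drifts out of compliance."""
        if any(observed.get(k) != v for k, v in REFERENCE_FRAMEWORK.items()):
            self.members.pop(name, None)
            self.quarantined.add(name)  # survival of the whole comes first


sos = SystemOfSystems()
sos.admit("radar_A", {"protocol": "v2", "safety_level": 3})
sos.audit("radar_A", {"protocol": "v1", "safety_level": 3})
print(sos.quarantined)  # {'radar_A'}
```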
C.2.3. Third fundamental principle of systemics: autonomy – surviving hazards and errors – capability logic

This principle relates to the evolution and survival of the system over its life cycle: from its initial integration, its "birth" (or ontogenesis) when it is first commissioned, through its maintenance, up to its removal from service. Throughout this process, the system consumes the resources that it requires and emits waste that must be eliminated, at the risk of irreversibly poisoning the environment. It must resist the hazards of the environment and the errors that result from the engineering processes that keep it in existence.

During its life cycle, every system is subject to external and/or internal hazards that are unpredictable and that can degrade its service contract and destroy its structural invariant. The system then leaves its nominal, stable and controlled state and evolves towards an unstable situation of total shutdown and general breakdown, which becomes irreversible if no action is taken, once the system no longer has enough mobilizable resources to compensate for or correct the hazards.

To survive, the system must have mechanisms for:

1) autonomic management, which takes its architecture into account (and conversely), and which can detect situations/states that do not comply with the rules and regulations defining its structural invariant – in other words its reference framework – and can propose recovery strategies whose cost can be evaluated, taking into account the time periods and resources compatible with the mission (this is "capability logic");

2) functional redundancies and capacity reserves, repairs and/or countermeasures, and choices of strategy within the array of proposed options, so as to restore the service contract and return the system to a nominal state using its resources and redundancies in the best possible way (capability logic again); in other words, a controlled return to a state of stable equilibrium.

It must be possible to observe the system at various abstraction levels, in compliance with the controls and regulations defined by the first principle. These various mechanisms – the system safety, in systems engineering terminology – are themselves organized as a system that lives in symbiosis with the monitored system and evolves along with it, and of which it is an element of specificity. The possibility of autonomic management is a functional requirement (safety, in the broad sense) of the system to be monitored, which integrates histories and feedback from users and/or engineering teams. The autonomic loop of systems engineering guarantees survival.

The costs and delays, in energy terms, of the strategies proposed for living and survival result from compromises based on observation and knowledge of the system's real environment; these compromises are established using PAC logic, for which we have both empirical and theoretical models. "Capability" logic (in reference to the capacity planning4 of system management, where this terminology comes from) takes into account the planning of the real resources existing at instant T in the life of the system, in order to improve the global availability of the system and its chances of survival, if necessary to the detriment of some of its equipment. It can involve irreversible allocation choices, in compliance with the management of priorities, which can themselves vary with the situation of the system and its environment at instant T. It is a logic of irreversibility: it is neither associative nor commutative. Every operation is described in terms of the resources required for its execution; any consumed resource disappears from the stock, which must be reconstituted to guarantee survival.
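A minimal sketch (Python; the resources and costs are invented for the example) of this capability logic, in which executing a recovery strategy irreversibly consumes the stock, so that the order of decisions matters:

```python
# Capability logic: strategies are described by the resources they consume;
# a consumed resource disappears from the stock, so choices are irreversible
# and the order of decisions matters (neither associative nor commutative).

class CapabilityPlanner:
    def __init__(self, stock: dict[str, int]):
        self.stock = dict(stock)  # real resources existing at instant T

    def feasible(self, cost: dict[str, int]) -> bool:
        return all(self.stock.get(r, 0) >= n for r, n in cost.items())

    def execute(self, strategy: str, cost: dict[str, int]) -> bool:
        if not self.feasible(cost):
            return False  # strategy rejected, stock left intact
        for r, n in cost.items():
            self.stock[r] -= n  # irreversible consumption
        print(f"executed {strategy}; remaining stock: {self.stock}")
        return True


planner = CapabilityPlanner({"spare_pump": 1, "energy": 10})
planner.execute("switch to redundant pump", {"spare_pump": 1, "energy": 2})
# The same strategy is no longer available: the stock has been consumed
# and must be reconstituted to guarantee survival.
print(planner.feasible({"spare_pump": 1}))  # False
```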
4 Refer to https://en.wikipedia.org/wiki/Capacity_planning and https://en.wikipedia.org/wiki/Autonomic_computing.
C.3. Epistemological consequences

From its creation by N. Wiener, cybernetics was conceived as a general architectonic science allowing the exploration of the complexity of nature and of human relationships, integrating the contribution of the technical systems available at a given instant, in particular those at work in the engineering of the large projects that arose during World War II and the Cold War. The principles of systemics that we have extracted, after more than 50 years of good and loyal service punctuated by the production of systems that have revolutionized the relationships we maintain with our environment and with ourselves, lead us to a better understanding of the initial intention of its creators, J. von Neumann and N. Wiener.

When we isolate by abstraction a portion of the infinitely complex reality that G. Cohen-Tannoudji describes – in other words, when we trace a rigorous inside/outside frontier that distinguishes the controlled system from the processes of its environment – we are now in a position to ask three fundamental questions in a precise manner, and to answer them so as to ensure that our constructions have a meaning, including in terms of survival:

1) Who controls what we consider to be a system – in other words its reference framework? What is the nature of the invariant that must be controlled so that the system exists as a system? What forces and energies are at work for the system to live and survive, in particular those that provide the essential resources and eliminate waste – "garbage", as we say in STIC/information sciences jargon? This being the case, we must identify what is considered to be inside the system and what remains outside, in other words the environment, and decide how to organize the fundamental inside/outside interface. The capability logic of this structuring loop must be made explicit, because it is this that will in fine determine the lifeline of the system and its capacity for survival.

2) Who communicates with whom in order to act and transform reality in successive coherent stages, stable state after stable state – first within individual equipment, then, in the case of cooperation, between the systems that constitute it? What is the nature of the language that allows cooperation to be established, while respecting the languages specific to each of the cooperating systems? What is the model that organizes these exchanges – in other terms, what is the grammar of these exchanges, in other words, here again, the reference framework? And which languages are compatible with this grammar?

3) Who detects and corrects the faults of all kinds that are inherent to the world of processes that we organize for our benefit, and that can compromise the survival of the system defined by its reference framework? Who organizes the road networks and recycles waste? And, first of all, what is the nature and phenomenology of these faults that we want to compensate for, and of this waste? How do we organize waste recycling
and the organized withdrawal of end-of-life processes and of the elements arising from their dismantling? Can we benefit from the random nature of failures by optimizing the use of redundancies?

Once these three questions have been settled in a highly specific and explicit context – what we have called a system framework, with its grammar and system language – it becomes much easier to organize the subsequent concerns, as good practice in systems engineering has been teaching us over the course of the last 60 years, and to approach the specific aspects of each of the elements without getting lost in the combinatorics of possible evolution scenarios, most of which have no meaning if the previous questions have not been answered. This framework determines a sort of "self" and functions like a filter which distinguishes what is part of the system, the inside, from what comes from the environment, the outside. It is essential for the conservation of the informational semantic invariant which gives the system its "form". The three infinites in Figure C.1 can then be organized and integrated into a unique logical form which materializes the self-interlocking of all the structures present on Earth – an image of "everything is linked" – as shown in the diagram in Figure C.3.
Figure C.3. Relationship between the system and its environment
Over the course of its history, the environment has produced and constructed organized structures – initially elementary processes, then processes integrated into systems and systems of systems within this same environment – from the energy flows which circulate in it, in compliance with the second principle of thermodynamics. Processes and systems follow life cycles, and must cope, for their survival and adaptation, with the hazards of the environment and
with those that are specific to them. The inside/outside systemic loop defined in this way is initiated "as best as possible", to use an expression of the mathematician and epistemologist Ferdinand Gonseth, until it reaches a regime of dynamic equilibrium in which the information produced is memorized and integrated – repaired, when hazards lead to errors – in ever larger atemporal information structures, not governed by the second principle of thermodynamics, in which humankind, and what we as a species have created, are interested parties; an image of the cumulative "more is different" of the physicist Philip Anderson.

The systemic "truth" proposed by this systems science, which we know how to construct and maintain in operational condition5 because we have mastered all of its stages, will allow us to tackle with a certain serenity the complex problems that we must resolve in the 21st Century – the sharing of resources, the management of waste, crisis management, etc. – in other words, as N. Wiener said, to put it to "human use by human beings", provided we know how to teach it and pass it on. As we have already said: if we do not organize complexity, it will destroy us. In other words, it will render the world irrational – something it is not at the moment – an unpredictable world with no purpose, potentially unstable and chaotic, something that it could become if we give in to excess. On this subject, J. von Neumann remains an effective guide. He imparted this advice, two years before his death: "we can specify only the human qualities required: patience, flexibility, intelligence"6.

With the results presented in this conclusion, we are a little closer to the general architectonic science of systems conceived by N. Wiener and J. von Neumann.
5 Refer to the prospective document published by INCOSE, SE Vision 2025: http://www.incose.org/AboutSE/sevision.
6 "Can We Survive Technology?", Fortune, June 1955.
List of Acronyms
AC/TC: Algorithmic Complexity/Textual Complexity
AF: Architecture Framework, found in many acronyms such as TOGAF (The Open Group Architecture Framework)
AU: Active Unit; the indivisible core of a project team composed of 5–10 people
BPML/BPMN: Business Process Modeling Language/Notation
C/CCAM: Cybernetics or Control and Communication in the Animal and the Machine (founding work by N. Wiener)
C2/C3/C4ISR/C4ISTAR: Acronyms used to represent the group of military functions designated by C4 (Command, Control, Communications, Computers), I (Military Intelligence) and STAR (Surveillance, Target Acquisition and Reconnaissance), intended to allow coordination of operations
CCITT: Consultative Committee for International Telephony and Telegraphy, until 1993; it dealt with technical subjects and standardization, and has been replaced by the ITU-T
CERN: European Organization for Nuclear Research
CMMI: Capability Maturity Model Integration; a reference model, i.e. a structured set of good engineering practices, from the work of W. Humphrey at the Software Engineering Institute created by the US DoD
CMS: Centralized Management System
COP: Common Operational Picture, NATO terminology; virtual maps shared by C4ISTAR operators
COQ/CONQ: Cost of Quality/Cost of Non-Quality
CQFD: Cost, quality, reliability, lead time
CRUD/CRUDE: Create, Retrieve, Update, Delete; the basic operations carried out by all data models (Data Retrieval), with E for Execute
DNA/ATCG: Deoxyribonucleic acid and its four chemical bases
ECC: Error Correction Code
EHV: Extra High Voltage lines in the electricity grid, from 400 to 700,000 volts
ERA: Entity Relationship Attribute, data modeling language
ESB: Enterprise Service Bus, middleware for interconnection of all the IS in a company
FELIN: French integrated infantryman equipment and communications; individual combat system for French infantrymen, developed by the French land army
FFRDC: Federally Funded Research and Development Centers; network of research agencies directed by the US government
FURPSE: Functionality, Usability, Reliability, Performance, Serviceability, Evolutivity; ISO/IEC terminology
GAFA/GAFAM: Acronym grouping together the large Internet companies in the USA: Google, Apple, Facebook, Amazon, Microsoft
GPS: Global Positioning System, constellation of satellites put in place by the US military
HFT: High-Frequency Trading, of the order of a millisecond, developed by the finance and banking sector
IDEF: Integration Definition; modeling language standardized by the US DoD
IEEE: Institute of Electrical and Electronics Engineers; with more than 400,000 members
INCOSE: International Council on Systems Engineering
INES: International Nuclear Event Scale, international classification scale of nuclear events; it includes seven risk levels
IoT: The Internet of Things
IPC: Inter Process Communication, layer of interfaces on which languages such as BPML are built
IS/ICS/OIS: Information System/Information and Communication System/Operational Information System (in the field of defense and security)
ITER: International Thermonuclear Experimental Reactor
ITU: International Telecommunication Union; at UN level
LHC: Large Hadron Collider; machine at CERN where the Higgs boson was first proven to exist and measured
MEP: Maximum Entropy Production
MERISE: Method for evaluation and modeling by sub-sets used in business computing or for company systems; analysis method for the design and constitution of information systems, very widely used in France in the 1980s
MIB: Management Information Base
MIPS/MFLOPS: Millions of Instructions Per Second; FLOPS for scientific instructions known as "floating-point operations"; characterizes calculation power
MMI: Man–machine interface
MTTF/MTTR: Mean Time To Failure/Mean Time To Repair
NBIC/ICTS/ICT: Nanotechnology, Biology, Information Technology, Cognitive science/Information and Communication Technologies and Sciences/Information and Communication Technologies
NCS: Network Centric Systems, systems whose architectural core is the network
NTDS: Navy Tactical Data System; first naval tactical system of the US Navy
PAC: Probably Approximately Correct (title of the book by L. Valiant)
PESTEL: Political, Economic, Social, Technological, Ecological, Legal
PSTN: Public Switched Telephone Network
QI: Quantity of Information, using Shannon's formula
RAS: Reliability, Availability, Serviceability
SADT/SART: Structured Analysis and Design Techniques (Real Time); basis of the IDEF language
SAGE: Semi-Automatic Ground Environment; ancestor of all air traffic control systems
SCADA: Supervisory Control And Data Acquisition; data control and acquisition system; all C4ISTAR systems have a SCADA at their periphery
SCCOA: French surveillance and command system for aerial operations
SENIT: French naval tactical information exploitation system; derived from the US NTDS
SLA: Service-Level Agreement; service contract
SoS: System of Systems
STRIDA: French system for processing and representing aerial defense information; derived from the SAGE system
SysML: Systems Modeling Language; extension based on UML
TCO: Total Cost of Ownership; in project management
TOGAF: Architecture framework advocated by the Open Group
TPS: Transactions Per Second; measures the throughput of a system, in number of interactions per second
UML: Unified Modeling Language; advocated by the Object Management Group
U, S, E: User, System, Engineering; Simondon's triplet
VHDL: VHSIC Hardware Description Language, for the description of VLSI circuits, inspired by Ada, including parallelism
VLSI/LSI: Very Large Scale Integration, from the 1990s onwards, when the number of transistors per chip entered the tens of millions
XML/SGML: Extensible Markup Language, successor of the Standard Generalized Markup Language, which appeared along with the Internet and the World Wide Web
References
ANDERSON P., "More is different", Science, vol. 177, no. 4047, 1972.
BRILLOUIN L., Science and Information Theory, Dover Publications, New York, 1959.
BRILLOUIN L., Vie, matière et observation, Albin Michel, Paris, 1959.
CASEAU Y., Urbanisation et BPM, Dunod, Paris, 2005.
COHEN-TANNOUDJI G., SPIRO M., Le boson et le chapeau mexicain, Gallimard, Paris, 2013.
COPELAND J., The Essential Turing, Clarendon Press, Wotton-under-Edge, 2004.
DIJKSTRA E., Cooperating Sequential Processes, Academic Press, Cambridge, 1968.
FORRESTER J., Industrial Dynamics, MIT Press, Cambridge, 1961.
FORRESTER J., Principles of Systems, Pegasus, Cambridge, 1971.
HAMMING R., Coding and Information Theory, Prentice Hall, Upper Saddle River, 1980.
HUMPHREY W., Managing the Software Process, Addison-Wesley, Boston, 1989.
IEEE, System Engineering Standards Collection, Standard 24748, 2019. Available at: http://ieeexplore.ieee.org/abstract/document/8809977.
INCOSE, Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, Wiley-Blackwell, Hoboken, 2015.
ISO/IEC, Systems Engineering Standard Covering Processes and Lifecycle Stages, Standard 15288, 2015. Available at: http://www.iso.org/standard/63711.html.
KORZYBSKI A., Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics, 5th edition, The Institute of General Semantics, Forest Hills, 1994.
KROB D., CESAMES Systems architecting method, 2017. Available at: http://www.cesames.net/wp-content/uploads/2017/05/CESAM-guide.pdf.
LESOURNE J., Les systèmes du destin, Dalloz, Paris, 1977.
MARTIN J., Information Engineering, Prentice Hall, Upper Saddle River, 1989.
NASA, System Engineering Handbook, 2007. Available at: https://www.nasa.gov/sites/default/files/atoms/files/nasa_systems_engineering_handbook.pdf.
PRINTZ J., Écosystème des projets informatiques, Hermes-Lavoisier, Paris, 2006.
PRINTZ J., Architecture logicielle, Dunod, Paris, 2013.
PRINTZ J., Estimation des projets de l'entreprise numérique, Hermes-Lavoisier, Paris, 2013.
PRINTZ J., Survivrons-nous à la technologie ?, Les acteurs du savoir, 2018.
RECHTIN E., The Art of Systems Architecting, CRC Press, Boca Raton, 2000.
SENGE P., The 5th Discipline, Random House, New York, 1990.
SHANNON C., WEAVER W., The Mathematical Theory of Communication, University of Illinois Press, Champaign, 1959.
SIFAKIS J., Rigorous system design, 2013. Available at: www-verimag.imag.fr/~sifakis/.
SIMON H., The Sciences of the Artificial, MIT Press, Cambridge, 1996.
SIMONDON G., Du mode d'existence des objets techniques, Aubier, Paris, 1958.
VALIANT L., Probably Approximately Correct, Basic Books, New York, 2013.
VAUGHAN D., The Challenger Launch Decision, University of Chicago Press, Chicago, 1997.
VON NEUMANN J., Collected Works, Volume V: Design of Computers, Theory of Automata and Numerical Analysis, Pergamon Press, Oxford, 1961.
WEINBERG G., An Introduction to General Systems Thinking, Dorset House Publishing, New York, 1975.
WIENER N., Cybernetics or Control and Communication in the Animal and the Machine, Hermann and MIT Press, Cambridge, 1948.
Index
A, B, C
abstraction, 9, 21, 38, 65, 119, 150, 166, 167, 184, 191, 194, 213, 228, 238, 239, 243, 244
acquisition, 100, 142–144, 147
architecture, 4, 11, 13, 25, 31, 59, 62, 69, 70, 82, 84, 88, 96, 107, 117, 123, 126, 147, 153, 166, 189, 190, 191, 201, 210, 211, 221, 231, 234, 237, 241, 243, 249, 252
  operational, 88
  software, 153, 191
autonomous, 99, 124, 172, 199, 211, 247, 251
availability, 92, 108–111, 116, 134, 167, 168, 179, 215, 231, 250
balance, see also imbalance, 29, 68, 83, 97, 100, 103, 111, 119, 124, 126, 140, 165, 178, 179, 182, 184, 185, 187, 198, 199, 202, 220, 238, 251
capability, 117, 140, 147, 167, 181, 203, 232
capacity, 9, 10, 20, 22, 23, 35, 39, 45, 50, 52, 64, 65, 70, 71, 76, 80, 87, 94, 98, 106, 109, 112, 114, 123, 125, 132, 135, 137, 138, 140, 141, 144, 160, 172, 190, 192, 193, 196, 197, 208, 212–214, 216–218, 220, 223, 228, 233, 241, 251
causality, 92, 190
centralized, see also decentralized, 17, 64, 165
chance, 16, 18, 28, 56, 57, 60, 89, 137, 145, 176, 188
communication, 15, 25, 28, 33, 38, 39, 41, 45–47, 49, 58, 62, 63, 75, 81–83, 116, 131, 140, 142, 144, 200, 228, 236, 239
complexity
  algorithmic, 40, 50, 134
  textual, 40, 46, 50, 94
contract, 3, 14, 49, 53, 60, 83, 87, 93, 95, 100, 101, 103, 106, 128, 135, 136, 138, 139, 143, 164, 165, 167, 172, 176, 179, 180, 190, 192, 197, 212, 213, 231, 235, 239, 246, 248, 251
control, 23, 33, 39, 40, 42–44, 48, 58, 64, 70, 73, 78, 86, 92, 95–99, 104, 105, 110, 111, 114, 118, 119, 133–136, 145, 159, 166, 170, 171, 180, 181, 212, 213, 215, 216, 227, 233, 240, 243, 244, 250
cooperation, 23, 27, 61, 74, 148, 162, 193, 194, 211, 212, 220–222, 225, 226, 232, 241, 243, 244
cybernetics, see also systemics, 5, 11–15, 17, 18, 38, 39, 45, 48, 51, 93, 136, 159, 183, 250
D, E, F
decentralized, see also centralized, 64, 166
disorder, see also order, 11, 14, 28, 92, 122, 166, 176, 183, 184, 187, 190, 249
effectors, see also sensor, 7, 12, 86, 111, 131, 138
element, 40, 60, 63, 65, 66, 70, 77, 79, 82, 85, 87, 109, 110, 142, 144, 167, 175, 176, 180, 216, 229, 235, 238, 241, 244–248, 251
emergence, 20, 59, 64, 73, 188, 191, 194, 222
energy, 12, 20, 21, 31, 35, 37, 39, 44, 45, 49, 50, 59, 60, 63, 67, 68, 71, 90, 93, 95, 97, 98, 100–103, 105, 108, 114, 116, 125–127, 131, 138, 140, 143, 148, 154, 155, 157, 161, 166, 171, 182–185, 187, 189, 190, 196, 204, 211, 213–216, 219, 225, 226, 233, 241, 245–248, 253, 254
environment, 3, 6, 8, 9, 11–14, 23, 24, 29, 39, 55, 63, 65, 74, 76, 83, 85, 87, 93, 94, 98, 100, 101, 106, 107, 109, 110, 114, 116, 117, 124, 127, 138, 142, 144, 146, 147, 158, 163, 170, 174, 176, 181, 182, 184, 190, 191, 194–196, 198, 207, 209, 214, 216, 218, 226, 231, 232, 236, 238, 239, 245, 247–249, 251
equipment, 82, 83, 87, 88, 92, 115, 125, 194, 199, 208, 209, 223, 235
error, see also failure and fault, 11, 22, 24, 26, 44, 46, 50, 74, 95, 106–109, 126, 228, 248
failure, see also error, 109, 171, 215, 221, 248
fault, see also error, 15, 19, 24, 66, 107, 109, 193, 200, 210, 231, 247
feedback, 23, 39, 42–45, 64, 92, 109, 110, 117, 133, 134, 144, 162, 216, 243
functional, see also physical, 3, 41, 55, 68, 70, 71, 135, 238, 252

G, H, I
grammar, see also transduction, 9, 24, 29, 62, 63, 65, 96, 117, 204, 241, 253
hazards, 4, 14, 20, 29, 45, 69, 74, 83–85, 93, 99, 103, 105, 106, 117, 124, 136, 139, 165, 188, 190, 194, 198, 232, 238, 245, 248, 250–252
imbalance, see also balance, 24, 96, 97, 103, 178, 179, 199, 200
information, 4, 5, 8, 9, 11, 12, 15, 20, 22, 23, 26, 34, 36, 38–41, 43–49, 51, 52, 57, 61–63, 68, 71, 72, 78, 83, 85, 86, 87, 91, 92, 94–96, 110, 111, 114, 115, 120–125, 127, 128, 131, 132, 136, 139, 142, 144, 146, 147, 149, 150, 152, 153, 158, 161, 162, 164, 166, 171, 176, 183, 187–190, 192, 194, 200, 203, 213–216, 219, 223, 225, 226, 228, 229, 235, 238, 240–242, 244, 249, 252, 253
inside, see also outside, 56, 57, 61, 138, 150, 159, 164, 209, 232, 235
integration, see also interoperability, 3, 4, 8, 16, 21, 28, 29, 55, 56, 59, 62, 67, 72, 76, 78, 93, 114, 119, 122, 127, 135, 140, 145, 146, 158, 166, 172, 188, 191, 200, 208, 220, 225–227, 234, 236, 247, 248, 253, 254
interaction, 22, 35, 50, 55, 64, 76–79, 84, 86, 87, 122, 125, 141, 157, 159, 162, 172, 176, 180, 190, 194, 202, 204, 214, 224, 235, 236, 245
interoperability, see also integration, 55, 61, 88, 109, 118, 132, 135, 138, 191–195, 211, 212, 219, 227, 228, 233, 235, 236, 240, 241, 243, 249, 254, 255
invariant, 29, 41, 48, 68, 72, 74, 84, 86, 92, 93, 98, 106, 114, 115, 118, 124, 156, 159, 165, 166, 179, 190, 198, 200, 211, 212, 229, 239, 246, 252
L, M
language, 9, 12, 14, 15, 17, 18, 24, 27, 29, 38, 39, 41, 49, 58, 60, 61, 63–65, 72, 74, 75, 77–80, 82, 84, 87, 88, 108, 109, 115–118, 121, 124–126, 133–135, 149, 151, 153, 159, 162, 163, 166, 187, 189, 204, 206, 212, 219, 223, 228, 232, 240–244
  external, 49, 72, 74, 79, 84, 110, 116–118, 159, 166, 223, 228, 244
  internal, 84, 110, 116–118, 159, 166, 223, 228, 244
latency, 11, 44, 47, 98, 103, 109, 114, 117, 133, 134, 144, 243, 247
layer, see also stack, 9, 72–74, 189, 241
logic, 3, 9, 12, 20, 23, 35–38, 41, 47, 50, 61, 72, 73, 85, 87, 91, 93, 95, 98, 110–112, 117, 118, 136, 139–141, 144, 146, 147, 150, 153, 154, 167, 168, 175, 181, 189, 195, 198, 203, 206, 211, 212, 214, 223, 224, 228, 232, 233, 236, 238, 239, 244, 250–252, 254
  capability, 87, 98, 139, 141, 203, 211, 232, 233, 239, 254
loop, 39, 43–45, 50, 64, 92, 99, 101, 109, 110, 112, 114, 117, 133, 134, 142, 144, 162, 166, 216, 243, 246, 247, 250, 251
material, 3, 9, 85, 96, 128, 131, 134, 189, 191, 206
model, 4, 8, 9, 24, 26, 28, 29, 36, 37, 43, 46, 58, 60, 65, 84, 85, 95, 101, 119, 123, 124, 131, 132, 149, 154, 156, 159, 168, 169, 175–177, 190, 192, 195, 212, 213, 220, 223, 227, 235, 238, 243, 245, 253
  exchange, 4, 132, 192, 212, 213, 227
modular, 73, 137

O, P
order, see also disorder, 10–12, 24, 26, 27, 29, 34, 36, 41, 42, 50, 51, 67, 77, 79, 87, 92, 104, 105, 117, 120, 138, 140, 153, 166, 168, 170, 174, 176, 181, 183, 184, 187, 190, 210, 211, 214, 227
organization, 16, 27, 34, 38, 47, 49, 50, 53, 64–66, 72, 82, 85, 92, 98, 107, 111, 122, 124, 126, 147, 165, 175, 180, 187, 193, 197, 207, 209, 216, 221, 231, 239, 241–244, 248, 251, 252, 254
outside, see also inside, 56, 57, 61, 63, 138, 150, 159, 164, 206, 209, 232, 235, 244
performance, 50, 52, 131, 134
physical, see also functional, 3, 4, 9, 66, 150, 156, 167, 189, 195, 197, 199, 201, 232, 244
potential, 62, 63, 78, 96, 102–104, 122, 199
predicative, 38, 56, 61, 62, 76
processes, 11, 18, 25, 27, 29, 34, 39, 47, 49, 56, 59, 69, 70, 75, 76, 78, 85, 86, 89, 96, 110, 114, 118, 121, 123, 126–128, 142, 144, 146, 148, 151–153, 158–168, 170–174, 176–182, 184, 207, 221, 234–239, 243, 245, 247
  antagonistic, 170–172, 174, 176, 179
project, 14, 16, 19, 40, 44, 51, 93, 121, 161, 163, 165, 210, 220, 221, 223, 225, 254

R, S, T, W
redundancy, 110, 197
regulation, 7, 14–16, 29, 42, 44, 50, 56, 58, 87, 95, 96, 98–100, 117, 133–137, 140, 142, 145, 148, 166, 178, 192, 215, 233, 246, 247
resilience, see also robustness, 110, 118, 135, 232, 248
resource, 6, 45, 63, 87, 97, 110, 140, 181, 203, 213, 214, 216
risk, 16, 60, 66, 78, 96, 97, 101, 107, 109, 115, 124, 132, 135–137, 161, 171, 172, 181, 199, 200, 204, 219–221, 232, 233, 247, 248
robustness, see also resilience, 20, 112, 118, 174, 232, 248
safety, 4, 44, 76, 108, 116, 124, 126, 135, 139, 162, 168, 171, 172, 176, 181, 192, 197, 198, 204, 247, 250
sensor, see also effectors, 143, 144
signal, 8, 12, 41, 45, 46, 68, 84, 161, 173, 223, 252
stack, see also layer, 69, 72, 74, 89, 92, 137, 207, 213, 221, 222, 239, 242, 244, 248, 251, 255
state, 15, 24, 45, 48, 50, 60, 69–72, 83, 85, 92, 98, 100, 101, 103, 105, 108, 109, 111, 115, 137, 142, 145, 161, 174, 185, 202, 204, 208, 214, 223, 246
supervision, 115
surveillance, 46, 61, 63, 101, 109–112, 115, 126, 147, 172, 176, 191, 231, 240, 244, 250, 252
system, 3, 4, 7, 9, 12, 14, 16, 19–21, 24–26, 29, 31, 35, 38, 39, 41–43, 45–53, 55–77, 80, 82, 83, 85–89, 91–122, 124–128, 132–150, 155, 158, 159, 161, 163–171, 173, 174, 176, 178–181, 183, 186, 187, 190, 191, 193–208, 210–216, 218–233, 235–239, 241, 243–248, 251–254
systemics, see also cybernetics, 1, 3–6, 9, 14, 17, 18, 23, 24, 26, 28–31, 33, 34, 38, 40, 44, 45, 51, 55–57, 70, 85, 97, 98, 105, 113, 119, 121, 124, 127, 132, 136, 141, 142, 147, 153, 158, 159, 176, 177, 179, 200, 204, 213, 217, 247, 251
time, 3, 11, 17, 22, 31, 33, 34, 36, 37, 39, 43, 44, 47, 50, 55, 59–61, 63, 64, 77, 86–91, 94, 96, 98, 103, 106, 109, 110, 114, 117, 118, 121, 122, 131–134, 136, 137, 140–142, 144–146, 149, 169, 174, 175, 177, 186–188, 191, 192, 195, 197, 202, 206, 207, 212, 213, 215, 216, 219, 225, 232, 242–244, 246, 252, 254
  real, 22, 98, 114, 122, 131, 142, 145, 192, 195, 202, 212, 213, 216, 232, 244, 246
thinking, 64, 142
transduction, see also grammar, 12, 74, 110, 115, 118, 119, 183, 241
transformation, 23, 40, 50, 51, 57, 58, 124, 141, 153, 156, 171, 172, 185, 188, 196, 204, 232, 235, 243–246
tree, 66, 79–82
triplet, 52, 53, 64, 73, 76, 79, 92, 114, 140, 148, 163, 168, 190, 197
wear, 60, 106, 166, 168, 173, 174, 198
Other titles from ISTE in Systems and Industrial Engineering – Robotics
2019 ANDRÉ Jean-Claude Industry 4.0: Paradoxes and Conflicts BENSALAH Mounir, ELOUADI Abdelmajid, MHARZI Hassan Railway Information Modeling RIM: The Track to Rail Modernization BLUA Philippe, YALAOU Farouk, AMODEO Lionel, DE BLOCK Michaël, LAPLANCHE David Hospital Logistics and e-Management: Digital Transition and Revolution BRIFFAUT Jean-Pierre From Complexity in the Natural Sciences to Complexity in Operations Management Systems (Systems of Systems Complexity Set – Volume 1) BUDINGER Marc, HAZYUK Ion, COÏC Clément Multi-Physics Modeling of Technological Systems FLAUS Jean-Marie Cybersecurity of Industrial Systems
JAULIN Luc Mobile Robotics – Second Edition Revised and Updated KUMAR Kaushik, DAVIM Paulo J. Optimization for Engineering Problems TRIGEASSOU Jean-Claude, MAAMRI Nezha Analysis, Modeling and Stability of Fractional Order Differential Systems 1: The Infinite State Approach Analysis, Modeling and Stability of Fractional Order Differential Systems 2: The Infinite State Approach VANDERHAEGEN Frédéric, MAAOUI Choubeila, SALLAK Mohamed, BERDJAG Denis Automation Challenges of Socio-technical Systems
2018 BERRAH Lamia, CLIVILLÉ Vincent, FOULLOY Laurent Industrial Objectives and Industrial Performance: Concepts and Fuzzy Handling GONZALEZ-FELIU Jesus Sustainable Urban Logistics: Planning and Evaluation GROUS Ammar Applied Mechanical Design LEROY Alain Production Availability and Reliability: Use in the Oil and Gas Industry MARÉ Jean-Charles Aerospace Actuators 3: European Commercial Aircraft and Tiltrotor Aircraft MAXA Jean-Aimé, BEN MAHMOUD Mohamed Slim, LARRIEU Nicolas Model-driven Development for Embedded Software: Application to Communications for Drone Swarm
MBIHI Jean Analog Automation and Digital Feedback Control Techniques Advanced Techniques and Technology of Computer-Aided Feedback Control MORANA Joëlle Logistics SIMON Christophe, WEBER Philippe, SALLAK Mohamed Data Uncertainty and Important Measures (Systems Dependability Assessment Set – Volume 3) TANIGUCHI Eiichi, THOMPSON Russell G. City Logistics 1: New Opportunities and Challenges City Logistics 2: Modeling and Planning Initiatives City Logistics 3: Towards Sustainable and Liveable Cities ZELM Martin, JAEKEL Frank-Walter, DOUMEINGTS Guy, WOLLSCHLAEGER Martin Enterprise Interoperability: Smart Services and Business Impact of Enterprise Interoperability
2017 ANDRÉ Jean-Claude From Additive Manufacturing to 3D/4D Printing 1: From Concepts to Achievements From Additive Manufacturing to 3D/4D Printing 2: Current Techniques, Improvements and their Limitations From Additive Manufacturing to 3D/4D Printing 3: Breakthrough Innovations: Programmable Material, 4D Printing and Bio-printing ARCHIMÈDE Bernard, VALLESPIR Bruno Enterprise Interoperability: INTEROP-PGSO Vision CAMMAN Christelle, FIORE Claude, LIVOLSI Laurent, QUERRO Pascal Supply Chain Management and Business Performance: The VASC Model FEYEL Philippe Robust Control, Optimization with Metaheuristics
MARÉ Jean-Charles Aerospace Actuators 2: Signal-by-Wire and Power-by-Wire POPESCU Dumitru, AMIRA Gharbi, STEFANOIU Dan, BORNE Pierre Process Control Design for Industrial Applications RÉVEILLAC Jean-Michel Modeling and Simulation of Logistics Flows 1: Theory and Fundamentals Modeling and Simulation of Logistics Flows 2: Dashboards, Traffic Planning and Management Modeling and Simulation of Logistics Flows 3: Discrete and Continuous Flows in 2D/3D
2016 ANDRÉ Michel, SAMARAS Zissis Energy and Environment (Research for Innovative Transports Set - Volume 1) AUBRY Jean-François, BRINZEI Nicolae, MAZOUNI Mohammed-Habib Systems Dependability Assessment: Benefits of Petri Net Models (Systems Dependability Assessment Set - Volume 1) BLANQUART Corinne, CLAUSEN Uwe, JACOB Bernard Towards Innovative Freight and Logistics (Research for Innovative Transports Set - Volume 2) COHEN Simon, YANNIS George Traffic Management (Research for Innovative Transports Set - Volume 3) MARÉ Jean-Charles Aerospace Actuators 1: Needs, Reliability and Hydraulic Power Solutions REZG Nidhal, HAJEJ Zied, BOSCHIAN-CAMPANER Valerio Production and Maintenance Optimization Problems: Logistic Constraints and Leasing Warranty Services
TORRENTI Jean-Michel, LA TORRE Francesca Materials and Infrastructures 1 (Research for Innovative Transports Set Volume 5A) Materials and Infrastructures 2 (Research for Innovative Transports Set Volume 5B) WEBER Philippe, SIMON Christophe Benefits of Bayesian Network Models (Systems Dependability Assessment Set – Volume 2) YANNIS George, COHEN Simon Traffic Safety (Research for Innovative Transports Set - Volume 4)
2015 AUBRY Jean-François, BRINZEI Nicolae Systems Dependability Assessment: Modeling with Graphs and Finite State Automata BOULANGER Jean-Louis CENELEC 50128 and IEC 62279 Standards BRIFFAUT Jean-Pierre E-Enabled Operations Management MISSIKOFF Michele, CANDUCCI Massimo, MAIDEN Neil Enterprise Innovation
2014 CHETTO Maryline Real-time Systems Scheduling Volume 1 – Fundamentals Volume 2 – Focuses DAVIM J. Paulo Machinability of Advanced Materials ESTAMPE Dominique Supply Chain Performance and Evaluation Models
FAVRE Bernard Introduction to Sustainable Transports GAUTHIER Michaël, ANDREFF Nicolas, DOMBRE Etienne Intracorporeal Robotics: From Milliscale to Nanoscale MICOUIN Patrice Model Based Systems Engineering: Fundamentals and Methods MILLOT Patrick Designing Human–Machine Cooperation Systems NI Zhenjiang, PACORET Céline, BENOSMAN Ryad, RÉGNIER Stéphane Haptic Feedback Teleoperation of Optical Tweezers OUSTALOUP Alain Diversity and Non-integer Differentiation for System Dynamics REZG Nidhal, DELLAGI Sofien, KHATAD Abdelhakim Joint Optimization of Maintenance and Production Policies STEFANOIU Dan, BORNE Pierre, POPESCU Dumitru, FILIP Florin Gh., EL KAMEL Abdelkader Optimization in Engineering Sciences: Metaheuristics, Stochastic Methods and Decision Support
2013 ALAZARD Daniel Reverse Engineering in Control Design ARIOUI Hichem, NEHAOUA Lamri Driving Simulation CHADLI Mohammed, COPPIER Hervé Command-control for Real-time Systems DAAFOUZ Jamal, TARBOURIECH Sophie, SIGALOTTI Mario Hybrid Systems with Constraints FEYEL Philippe Loop-shaping Robust Control
FLAUS Jean-Marie Risk Analysis: Socio-technical and Industrial Systems FRIBOURG Laurent, SOULAT Romain Control of Switching Systems by Invariance Analysis: Application to Power Electronics GROSSARD Mathieu, REGNIER Stéphane, CHAILLET Nicolas Flexible Robotics: Applications to Multiscale Manipulations GRUNN Emmanuel, PHAM Anh Tuan Modeling of Complex Systems: Application to Aeronautical Dynamics HABIB Maki K., DAVIM J. Paulo Interdisciplinary Mechatronics: Engineering Science and Research Development HAMMADI Slim, KSOURI Mekki Multimodal Transport Systems JARBOUI Bassem, SIARRY Patrick, TEGHEM Jacques Metaheuristics for Production Scheduling KIRILLOV Oleg N., PELINOVSKY Dmitry E. Nonlinear Physical Systems LE Vu Tuan Hieu, STOICA Cristina, ALAMO Teodoro, CAMACHO Eduardo F., DUMUR Didier Zonotopes: From Guaranteed State-estimation to Control MACHADO Carolina, DAVIM J. Paulo Management and Engineering Innovation MORANA Joëlle Sustainable Supply Chain Management SANDOU Guillaume Metaheuristic Optimization for the Design of Automatic Control Laws STOICAN Florin, OLARU Sorin Set-theoretic Fault Detection in Multisensor Systems
2012 AÏT-KADI Daoud, CHOUINARD Marc, MARCOTTE Suzanne, RIOPEL Diane Sustainable Reverse Logistics Network: Engineering and Management BORNE Pierre, POPESCU Dumitru, FILIP Florin G., STEFANOIU Dan Optimization in Engineering Sciences: Exact Methods CHADLI Mohammed, BORNE Pierre Multiple Models Approach in Automation: Takagi-Sugeno Fuzzy Systems DAVIM J. Paulo Lasers in Manufacturing DECLERCK Philippe Discrete Event Systems in Dioid Algebra and Conventional Algebra DOUMIATI Moustapha, CHARARA Ali, VICTORINO Alessandro, LECHNER Daniel Vehicle Dynamics Estimation using Kalman Filtering: Experimental Validation GUERRERO José A, LOZANO Rogelio Flight Formation Control HAMMADI Slim, KSOURI Mekki Advanced Mobility and Transport Engineering MAILLARD Pierre Competitive Quality Strategies MATTA Nada, VANDENBOOMGAERDE Yves, ARLAT Jean Supervision and Safety of Complex Systems POLER Raul et al. Intelligent Non-hierarchical Manufacturing Networks TROCCAZ Jocelyne Medical Robotics YALAOUI Alice, CHEHADE Hicham, YALAOUI Farouk, AMODEO Lionel Optimization of Logistics
ZELM Martin et al. Enterprise Interoperability –I-EASA12 Proceedings
2011 CANTOT Pascal, LUZEAUX Dominique Simulation and Modeling of Systems of Systems DAVIM J. Paulo Mechatronics DAVIM J. Paulo Wood Machining GROUS Ammar Applied Metrology for Manufacturing Engineering KOLSKI Christophe Human–Computer Interactions in Transport LUZEAUX Dominique, RUAULT Jean-René, WIPPLER Jean-Luc Complex Systems and Systems of Systems Engineering ZELM Martin, et al. Enterprise Interoperability: IWEI2011 Proceedings
2010 BOTTA-GENOULAZ Valérie, CAMPAGNE Jean-Pierre, LLERENA Daniel, PELLEGRIN Claude Supply Chain Performance / Collaboration, Alignement and Coordination BOURLÈS Henri, GODFREY K.C. Kwan Linear Systems BOURRIÈRES Jean-Paul Proceedings of CEISIE’09 CHAILLET Nicolas, REGNIER Stéphane Microrobotics for Micromanipulation DAVIM J. Paulo Sustainable Manufacturing
GIORDANO Max, MATHIEU Luc, VILLENEUVE François Product Life-Cycle Management / Geometric Variations LOZANO Rogelio Unmanned Aerial Vehicles / Embedded Control LUZEAUX Dominique, RUAULT Jean-René Systems of Systems VILLENEUVE François, MATHIEU Luc Geometric Tolerancing of Products
2009 DIAZ Michel Petri Nets / Fundamental Models, Verification and Applications OZEL Tugrul, DAVIM J. Paulo Intelligent Machining PITRAT Jacques Artificial Beings
2008 ARTIGUES Christian, DEMASSEY Sophie, NERON Emmanuel Resources–Constrained Project Scheduling BILLAUT Jean-Charles, MOUKRIM Aziz, SANLAVILLE Eric Flexibility and Robustness in Scheduling DOCHAIN Denis Bioprocess Control LOPEZ Pierre, ROUBELLAT François Production Scheduling THIERRY Caroline, THOMAS André, BEL Gérard Supply Chain Simulation and Management
2007
DE LARMINAT Philippe Analysis and Control of Linear Systems
DOMBRE Etienne, KHALIL Wisama Robot Manipulators LAMNABHI Françoise et al. Taming Heterogeneity and Complexity of Embedded Control LIMNIOS Nikolaos Fault Trees
2006 FRENCH COLLEGE OF METROLOGY Metrology in Industry NAJIM Kaddour Control of Continuous Linear Systems