English · Pages 259 [253] · Year 2023
E. W. Udo Küppers
A Transdisciplinary Introduction to the World of Cybernetics Basics, Models, Theories and Practical Examples
E. W. Udo Küppers
Küppers-Systemdenken
Bremen, Germany
ISBN 978-3-658-42116-8    ISBN 978-3-658-42117-5 (eBook)
https://doi.org/10.1007/978-3-658-42117-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH, part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany
Preface
The journey into space remains reserved for a few people. They all agree that, looking from outside at the blue Earth, they see an extremely fragile planet, shielded by only a wafer-thin layer, the ozone layer (O3), which not only protects our lives but also makes further development possible. Only the view from space of the Earth as a whole makes us awestruck by the achievement of life and at the same time fearful of the dangers that destroy it. If we take a position on the Earth's surface, which we inhabit, we recognize its immense diversity of life (biodiversity), which has advanced over billions of years of evolution, from one stage of development to the next. This evolutionary pressure is also the guarantee of life and survival for us humans, who, thanks to acquired abilities, are able to take a cautious look into the future in order to think and act with foresight. If we delve deeper, from macroscopic to microscopic space, we recognize in every single living cell of an organic body a multitude of processes that all – within certain limits – contribute through far-reaching networking to individual and collective progress on our planet. The idea that every biological organism, especially the human one, functions like a mechanism similar to the workings of a clock, as was suggested in the 16th and 17th centuries, for example by the philosopher and mathematician René Descartes (1596–1650), has long been outdated. Today we know of fungal networks in the soil that spread over square kilometers and send signals to each other. Similar information-exchange processes are known among trees, which, for example, exchange chemical or electrical signals in advance about approaching predators. Not all of nature's secrets have been revealed yet.
But one fact overlays all preconceived viewpoints, whatever their perspective on nature or life: our world – more specifically, our Earth – is a networked system, interwoven with infinitely many communications. Communication is one of three fundamental processes, alongside material and energetic ones, without which no life would exist.

• Communication is everything! Without communication, everything is nothing.
The dynamic, adaptively evolving structures and processes of communication of our networked Earth, with its biologically diverse manifestations, are well balanced. They serve the differentiated progress of all and exclude no organism. From this grows the realization:

That which is composed of parts in such a way that it forms a unified whole, not in the manner of a heap, but like a syllable, is evidently more than just the sum of its parts. A syllable is not the sum of its sounds: ba is not the same as b plus a, and flesh is not the same as fire plus earth. (https://de.wikiquote.org/wiki/Ganzes, quote from Aristotle, Metaphysics VII 10, 1041 b. Accessed on 06.02.2018)
In short: the whole is more than the sum of its parts. Communication, as evolution operates it, is no singular event but is holistically networked. The mere fact that our complex, constantly changing environment demands permanent adaptation makes evolutionary communication an unsurpassable model of progress. The mathematician and cybernetician Norbert Wiener, who will be discussed in more detail in Chap. 4, created a communication bridge between nature and technology with his work of 1948/1961 (1st German edition 1963): Cybernetics: or Control and Communication in the Animal and the Machine (German: Kybernetik. Regelung und Nachrichtenübertragung in Lebewesen und in der Maschine).
His research led, among other things, to an insight that can be highlighted as a central feature of communicative processes wherever they occur: feedback. Seventy years after Wiener's first publication, our lives and work are permeated by feedback actions of various kinds. They manifest themselves in personal conversations or discussions, in the planning and implementation of infrastructure measures, or in the networked coordination of many individual work processes in the construction of a building. Without a functioning network of coherent communicative feedback, an increasingly complex environment becomes more prone to disturbances, conflicts, and even catastrophic events such as nuclear power plant accidents. Communication without feedback, which forwards individual pieces of information or signals from A to B via C to ... N in a linearly linked way, has not done and will not do justice to finding solutions in a complex environment. This type of causal or monocausal communication is inherent to us humans because we have difficulty grasping and processing complex relationships in their real entirety. We therefore usually solve complex tasks by reducing complexity, through simple if-then causal chains that generate short-term successful solutions. At the same time, however, we ignore real, effective influences from communication networks. These influences can be delayed – they do not disappear – and become causes of subsequent problems whose impact undoes the short-term successes once achieved. Who does this help? No one! Communication in nature and technology is based on two completely different strategies and pursues two different goals:
Evolution, with its unimaginably diverse, networked, and complex web of information exchange as well as material and energetic processes, strengthens the individual progress of all living beings by unfolding an adaptive effect that, owing to its dynamic development, is the best of all development strategies. Sustainability is ensured, and survivability is strengthened. In contrast, we humans still largely operate with short-sighted and error-accumulating strategies to achieve temporary goals that generate progress but are not free from cascading subsequent problems.

• Biological, organic cybernetics has a communicative advantage over technical cybernetics wherever complexity and dynamics are continuously at play.
• Technical, machine cybernetics – even though the boundaries are blurring as robotics moves toward humanoid robots and thus seems to become more organic – is programmed for largely precise functional processes. A communicative networking of data and information, as associated with the term "Industry 4.0", is still far away; it will be able to partially imitate the communicative capabilities of a biologically sustainable cybernetics, but will hardly achieve its quality.

E. W. Udo Küppers
Contents
1 Introduction and Learning Objectives  1
References  3

Part I Fundamentals

2 A Special Look at the Origin and Mindset of Cybernetics  7
2.1 What Cybernetics Is and What Cybernetics Is Not  8
2.1.1 Two Examples of Cybernetic Perspectives  9
2.1.2 Cybernetics in the Dictionary of Cybernetics  15
2.1.3 Cybernetics  20
2.2 Systemic and Cybernetic Thinking  24
2.2.1 Circular Process in Six Steps  25
2.2.2 System Delimitation  25
2.2.3 Part and Whole  26
2.2.4 System of Effects  27
2.2.5 Structure and Behavior  29
2.2.6 Control and Development  29
2.2.7 Perception or the Cybernetics of Cybernetics  31
2.3 Control Questions  33
References  34
3 Basic Concepts and Language of Cybernetics  35
3.1 System  38
3.2 Control Circuit and Elements  39
3.3 Negative Feedback—Balanced Feedback  46
3.4 Positive Feedback—Reinforced Feedback  47
3.5 Aim resp. Purpose  49
3.6 Self-Regulation  49
3.7 Flow Equilibrium—Steady-State and Other Equilibria  50
3.8 Homeostasis  51
3.9 Variety  52
3.10 Ashby's Law of Requisite Variety  53
3.11 Autopoiesis  54
3.12 System Modeling  54
3.13 Control Questions  61
References  61

Part II Cyberneticians and Cybernetic Models

4 Cybernetics and its Representatives  65
4.1 Norbert Wiener and Julian Bigelow  65
4.2 Arturo Rosenblueth  68
4.3 John von Neumann  68
4.4 Warren Sturgis McCulloch  70
4.5 Walter Pitts  71
4.6 William Ross Ashby  72
4.7 Gregory Bateson  75
4.8 Humberto Maturana and Francisco Varela  76
4.9 Stafford Beer  79
4.10 Karl Wolfgang Deutsch  79
4.11 Ludwig von Bertalanffy  81
4.12 Heinz von Foerster  81
4.13 Jay Wright Forrester  83
4.14 Frederic Vester  87
4.15 Concluding Remark  92
4.16 Control Questions  96
References  96
5 Cybernetic Models and Orders  99
5.1 Cybernetics of Mechanical Systems  100
5.2 Cybernetics of Natural Systems  103
5.3 Cybernetics of Human Social Systems  108
5.4 First-Order Cybernetics  110
5.5 Second-Order Cybernetics  113
5.6 Control Questions  115
References  115

Part III Cybernetic Theories and Practical Examples

6 Cybernetics and Theories  119
6.1 Systems Theory  119
6.1.1 Günther Ropohl and his Systems Theory of Technology  120
6.1.2 Niklas Luhmann and his Theory of Social Systems  123
6.2 Information Theory  127
6.3 Algorithm Theory  128
6.4 Automata Theory  130
6.5 Decision Theory  132
6.6 Game Theory  133
6.7 Learning Theory  137
6.8 Control Questions  143
References  144
7 Cybernetic Systems in Practice  147
7.1 Cybernetic Systems in Nature  150
7.1.1 Blood Sugar Control Loop  151
7.1.2 Pupil Control Loop  152
7.1.3 Cybernetic Model in the Forest Ecosystem  154
7.2 Cybernetic Systems in Technology  156
7.2.1 Control of Image Sharpness of a Camera  156
7.2.2 Position Control of the Read/Write Head in a Computer Hard Disk Drive  157
7.2.3 Control of Power Steering in a Motor Vehicle  158
7.2.4 Control of Room and Heating Water Temperature  159
7.3 Cybernetic Systems in the Economy  160
7.3.1 A Cybernetic Economic Model of Procurement-Induced Disturbances  163
7.3.2 The Cybernetic Control Loop as a Management Tool in the Plant Life Cycle  168
7.3.3 The CyberPractice Method by Dr. Boysen  171
7.4 Cybernetic Systems in Society  174
7.4.1 Sociocybernetics  175
7.4.2 Psychological Cybernetics  177
7.4.3 Cybernetic Governance  184
7.4.4 Cybernetics and Education—Learning Biology as an Opportunity  195
7.4.5 Cybernetics and Military  202
7.5 Control Questions  203
References  204
8 Control Questions (Q N.N) With Sample Answers (A N.N) For Chapters 2 to 7  207
8.1 Chap. 2: A Special Look at the Origin and Way of Thinking of Cybernetics  207
8.2 Chap. 3: Basic Concepts and Language of Cybernetics  212
8.3 Chap. 4: Cybernetics and its Representatives  214
8.4 Chap. 5: Cybernetic Models and Orders  217
8.5 Chap. 6: Cybernetics and Theories  219
8.6 Chap. 7: Cybernetic Systems in Practice  230
References  237
Annex I  239
Reference  247
1 Introduction and Learning Objectives
This textbook on "Cybernetic Worlds" could also be titled "The Power of Negative Feedback". Negative feedback is a process that occurs between at least two subjects or objects, linking an amplifying and a balancing effect. The following example is typical: department head Müller provokes his employee Ms. Meier with the amplifying accusation: "Once again, the production on the machine is incomplete and faulty!" Ms. Meier responds in a balancing manner, stating that she is aware of this, has already started troubleshooting, and has consulted additional experts.

Positive feedback is a process that occurs between at least two subjects or objects, linking two mutually amplifying effects. The following example is typical: department head Müller provokes Ms. Meier with the same amplifying accusation as in the previous example, only now Ms. Meier accuses Mr. Müller just as strongly, stating that his specifications for machine production were imprecise; Mr. Müller responds, again amplifying, that she lacks the qualifications to judge this; Ms. Meier objects to this insinuation and threatens to go to Mr. Müller's superior… and so on.

Negative feedback loops are called "angel circles" (virtuous circles) because of their interconnected stabilizing effect. Positive feedback loops are called "vicious circles" because of their interconnected escalating – often conflict-prone – effect. The more complex a system is, regardless of its nature, and the higher the number of influences in a networked web of mutual effects, the more crucial a well-balanced ratio of positive and negative feedback loops is for system stability, system robustness, and system progress! In the overwhelming diversity of nature – much more so than in technology – feedback loops play a central role in biocybernetic or cybernetic processes.
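The contrast between a balancing ("angel circle") and an escalating ("vicious circle") loop can be sketched numerically. The following minimal Python sketch is not from the book; the first-order update rule and the gain values are illustrative assumptions only:

```python
# Minimal numeric sketch: one and the same update rule produces a balancing
# or an escalating dynamic depending only on the feedback gain.

def simulate(gain, deviation=1.0, steps=10):
    """Iterate x_{t+1} = gain * x_t and return the trajectory of deviations."""
    trajectory = [deviation]
    for _ in range(steps):
        deviation *= gain
        trajectory.append(deviation)
    return trajectory

negative = simulate(gain=0.5)   # balancing loop: deviation shrinks toward 0
positive = simulate(gain=1.5)   # escalating loop: deviation grows step by step

print(round(negative[-1], 4))   # → 0.001
print(round(positive[-1], 1))   # → 57.7
```

With a feedback gain below 1 the deviation dies away, as when Ms. Meier's balancing answer absorbs the provocation; with a gain above 1 it grows from exchange to exchange, like the escalating dispute in the second example.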
• Cybernetics is therefore nothing more than effective communication or low-loss data and information exchange, which strengthens survivability in nature and machine-procedural functionality in technology, and thus helps to avoid errors.

Cybernetics and systems theory are often mentioned in the same context or even equated. Both view problems through the lens of networked, circular relationships. Negative and positive connections or feedback loops are elements of causal relationships that are assigned to systemic or cybernetic strategies. The development of the scientific discipline of cybernetics is generally attributed to Norbert Wiener (1963), as already mentioned in the preface. In the 1940s, his fundamental insight into the value of the process of "negative feedback" for the targeted control of moving objects initiated a major technological breakthrough. Cybernetic control processes operate in a wide range of technical, economic, and social products and processes, from simple thermostat control in coffee machines to heat regulation in residential buildings, autonomous vehicles or robots, and self-optimized flight paths of drones in military operations. Products and processes that involve cybernetic control are therefore in individual cases confronted with ethical principles that people have given themselves and that are becoming a new challenge for the increasing number of human–machine interactions. Cybernetic approaches in the field of politics, which were addressed very early on, in the mid-1960s, by Karl W. Deutsch (1969) and others, should not be neglected either, but must be adapted to the complex dynamic processes of today's political events.

One of the main learning objectives of this book is to adopt a new perspective on things, i.e., objects or processes of everyday life. This is a circular view, a view of relationships rather than of the objects themselves – or, technically speaking, the system elements. It is a perspective that allows us to follow dynamics and draw realistic conclusions for goals, as far as possible.

• However, anyone who wants to create sustainable solutions in reality, which is usually complex, will not be able to avoid taking more than one standpoint on the problem or problems to be solved.

This demonstrates a special but fundamental type of operative communication, which organisms in nature master perfectly and which still poses great challenges for us humans. You will also learn essential features and basic concepts of cybernetic thinking and how to apply them in practice (Sects. 2.1, 2.2, Chap. 3). A variety of applications with integrated cybernetic processes are available as examples; they provide us with useful services every day, yet we do not always understand their hidden control mechanisms. The journey through cybernetics with its representatives (Chap. 4), who have each advanced cybernetics one step further through their special achievements, is exciting to follow. Among other things, it is intended to encourage those interested in cybernetics to explore new perspectives in their own environment, possibly by taking a systemic view of things and embarking on paths that lead to new insights and solutions. Cybernetic models and orders (Chap. 5) as well as theories that are subsumed under cybernetics (Chap. 6) form the transition to a series of practical examples of cybernetics from different fields of application (Chap. 7). Understanding the effects of relationships is a crucial prerequisite for dealing with cybernetic systems and will deepen from example to example. This is particularly true when we explore the complexity of our – not only technology- and economy-centered – environment in order to understand it and derive sustainable solutions from it. In doing so, the following guiding principle of thinking and acting – and thus also of communicating – will always accompany us:

• Long-term prudent thinking and acting outweighs short-term misguided thinking and acting.
• Long-term foresight trumps short-term misdirection.
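The thermostat-style control mentioned in this chapter is the simplest technical instance of a negative feedback loop. The following Python sketch is illustrative only – the toy room model, the function names, and all constants are assumptions for this sketch, not taken from the book or from any real device:

```python
# Hedged sketch of a thermostat as a negative feedback control loop:
# measure, compare with the set point, feed the deviation back.

def thermostat_step(temperature, set_point, heater_on, hysteresis=0.5):
    """Bang-bang controller: switch the heater based on the deviation."""
    if temperature < set_point - hysteresis:
        heater_on = True
    elif temperature > set_point + hysteresis:
        heater_on = False
    return heater_on

# Toy room model: the heater adds heat, the room loses some to the outside.
temperature, heater_on = 15.0, False
for _ in range(60):
    heater_on = thermostat_step(temperature, set_point=21.0, heater_on=heater_on)
    temperature += (1.0 if heater_on else 0.0) - 0.05 * (temperature - 10.0)

print(round(temperature, 1))  # settles near the 21 °C set point
```

The controller never "knows" the physics of the room; it only compares the measured value with the set point and feeds the deviation back – precisely the circular view of relationships, rather than of objects, that this chapter advocates.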
References

Deutsch KW (1969) Politische Kybernetik. Modelle und Perspektiven. Rombach, Freiburg im Breisgau

Wiener N (1963) Kybernetik. Regelung und Nachrichtenübertragung in Lebewesen und in der Maschine (Original: Cybernetics: or control and communication in the animal and the machine, 1948/1961), 2nd expanded edn. Econ, Düsseldorf/Wien
Part I Fundamentals
2 A Special Look at the Origin and Mindset of Cybernetics
Summary
With a special focus on the origin and way of thinking of cybernetics, Chapter 2 introduces the circular thinking inherent in cybernetics. Starting from the central question "What is cybernetics and what is not cybernetics," accompanied by practical examples, you will be confronted with numerous definitions of cybernetics, all derived from its respective fields of application. Finally, special attention is paid to "systemic and cybernetic thinking" in six circular steps.

A number of authors who deal with the topic of cybernetics in books or specialist articles often begin their texts with a look back at the past, toward the origin of cybernetics. This results in temporally differentiated statements about the source – or sources – of cybernetics, which need not all be repeated here. Instead, for this text on cybernetics or cybernetic systems, we orient ourselves toward one of the pioneers of German computer science, Karl Steinbuch (1917–2005). The information theorist, communications engineer, and cybernetician, who was heavily involved in political and social issues in his later years, had a not-to-be-underestimated influence on machine learning and the – today controversially discussed – fields of artificial neural networks and artificial intelligence, and last but not least on cybernetics itself. Terms such as computer science and cybernetic anthropology can be traced back to Karl Steinbuch.

Excursion The independent field of computer science encompasses the science of electronic data processing. It includes data representation, processing, storage, and transmission. From primitive computing systems that filled entire rooms to today's computers in the form of handheld mobile phones, electronic data processing still takes place quite classically according to the Von Neumann Architecture—VNA—, which was first published in 1945 (von Neumann 1993; cf. https://de.wikipedia.org/wiki/Von-Neumann-Architektur. Accessed on 16.01.2018: "The Von Neumann Architecture—VNA—is a reference model for computers, according to which a common memory holds both computer program instructions and data.").
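The quoted principle – one common memory holding both program instructions and data – can be illustrated with a deliberately tiny machine sketch. The three-instruction set below is invented purely for illustration; it is not from von Neumann's report or from this book:

```python
# Illustrative sketch of the von Neumann idea: program and data live in the
# SAME flat memory, traversed by a fetch–decode–execute loop.

def run(memory):
    """Execute a toy program stored in a shared instruction/data memory."""
    pc, acc = 0, 0                      # program counter, accumulator
    while True:
        opcode, operand = memory[pc], memory[pc + 1]
        pc += 2
        if opcode == 0:                 # HALT: return the accumulator
            return acc
        elif opcode == 1:               # LOAD addr: acc = memory[addr]
            acc = memory[operand]
        elif opcode == 2:               # ADD addr: acc += memory[addr]
            acc += memory[operand]

# Cells 0–5 are code, cells 6–7 are data – one memory for both.
memory = [1, 6,   # LOAD cell 6
          2, 7,   # ADD  cell 7
          0, 0,   # HALT
          40, 2]  # data: 40 and 2

print(run(memory))  # → 42
```

Because instructions and data share one memory, a program could in principle even rewrite its own instructions – the property that distinguishes the VNA reference model from fixed-program machines.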
Under cybernetic anthropology (Steinbuch 1971; Rieger 2003) one understands a cognitive-science field that combines anthropology (the science of humans) and cybernetics "with a technology-induced theory formation" (https://de.wikipedia.org/wiki/Kybernetische_Anthropologie. Accessed on 16.01.2018). In "Automat und Mensch" (Steinbuch 1965, pp. 322–323), Steinbuch described the origin of the term cybernetics:

First, let us briefly consider the historical origin of the word "cybernetics": Plato (427 to 347 BC) used the word κυβερνητική (kybernetike) in the sense of the art of control. In Plutarch (50 to 125 AD), the helmsman of a ship is referred to as κυβερνήτης (kybernetes). In Catholic church terminology, κυβέρνησις (kybernesis) refers to the management of the church office. It should also be noted that the French "gouverneur" and the English "to govern" are etymologically related to cybernetics. In 1834, Ampère referred to the science of possible government procedures as "cybernétique" in his "Essai sur la philosophie des sciences". In the last generation, the term has been particularly promoted by Norbert Wiener with his book "Cybernetics" [original 1948, German 1963, d. A.].
We will discuss the representatives of cybernetics and their attributed achievements in detail in Chap. 4.
2.1 What Cybernetics Is and What Cybernetics Is Not

Summary
In brief and concise form, the concept of cybernetics is described, followed by various perspectives or viewpoints on one and the same object. They show how differently the same object – or subject – can be judged, and how necessary it is in a complex environment to make judgments only after adopting multiple viewpoints on the problem to be solved, which is linked to a sustainable goal. Further explanations of cybernetics follow from individuals in different scientific fields, who accordingly hold differentiated views of cybernetics. Finally, a series of definitions of cybernetics is presented, culminating in a comparison of the German term Kybernetik and the English cybernetics.

Anschütz (1967, p. 9) writes in his book "Cybernetics—Short and Sweet":

Cybernetics […] is not a single science, such as geography or physics. It is no more a single science than the theory of evolution that emerged in the middle of the last century (the 19th century). Both the theory of evolution and cybernetics are overarching ideas about many, if not all, sciences. Accordingly, many features of their histories share common characteristics. Their ideas were first discovered in one or a few sciences and have found their way into a large number of specialized sciences over time. Cybernetics, in particular, is a collection of ideas and theories whose coherence was discovered around the middle of this century (the 20th century).
The "overarching ideas" cited here rest on interdisciplinary collaboration, which in turn rests on communicative – interpersonal1 – information exchange and is the driving force of progress. The disagreement prevailing at the time – and partly still today – about the concept of cybernetics is captured by a humorous remark made at a "cyberneticians' congress" in 1963 in Karlsruhe (ibid.):

One actually does not know exactly what a cybernetician is. Of course, only cyberneticians are in this hall. But when the colleagues then drive home, they are all again mathematicians, physicists, linguists, doctors, biologists, etc. – anything else, but certainly no longer cyberneticians.
Up to the present day, the beginning of the 21st century, cybernetics has embedded itself in many other individual scientific disciplines. The humorous remark from 1963 still resonates today, yet it underlines that cybernetic thinking is capable of contributing to progress in the individual disciplines. A first mnemonic for cybernetics can therefore be formulated:

Mnemonic Cybernetics is not a single science. Cybernetics is a communicative
meta-science capable of contributing to progress in the individual and specialized disciplines of the natural, engineering, and social sciences.
2.1.1 Two Examples of Cybernetic Perspectives

What distinguishes the view of a cybernetician, an engineer, and an interested citizen on a robot? See Fig. 2.1.
1 The adjective “interpersonal” is emphasized here because it denotes the evolutionary basis of communication, which in today’s world, with the increasing digitalization of things, is being joined by communication of a different kind. In particular, digital media increasingly shape communication between humans and machines, and between machines and machines, for example in education. As a result, the value-laden interpersonal exchange of information in society is sometimes pushed back massively, often for purely economic (!) reasons. The long-term consequences of this communicative transfer process for our coexistence are not yet foreseeable. At many of these communicative construction sites, however, it is clearly recognizable that short-sighted, and thus risky, thinking still outranks the sustainable, forward-looking thinking that is needed.
2 A Special Look at the Origin and Mindset of Cybernetics
[Figure: the NAO robot at the center of its ENVIRONMENT, surrounded by the viewpoints of a programmer, an interested citizen, collaborating workers, an engineer, and cybernetics, with guiding questions such as “Helpers in everyday life? Ease of work?” and “Energy consumption? Joint technology? Axes of motion?”. © 2018 Dr.-Ing. E. W. Udo Küppers]
Fig. 2.1 Three perspectives on the NAO robot object. (Source: NAO robot: https://www.ald.softbankrobotics.com/en/cool-robots/nao. Accessed on 16.01.2018)
Let us start with the interested citizen. This person may view the robot as a technical machine: as a helper for tasks that are physically difficult, such as lifting heavy loads, or as a replacement for monotonous work steps carried out over hours. Such help from the robot can relieve high physical stress and make working life easier. As a worker, however, this person may also perceive the machine as a threat, the more closely the two are connected in a work process.

The experienced engineer focuses more on the technical and electronic details of the robot: its energy consumption per unit of time, the coordinated stepper-motor control for handling, the circuitry, special materials for gripping techniques, the type of joints, and so on.

And how does the cybernetician view the robot? He analyzes the individual elements of a system, that is, the machine itself and its movements as well as the surrounding environment. He will also take into account the working human(s) in a possible human-machine collaboration. Ultimately, he will explore the relationships (!) among the involved elements of the “robot” system and draw specific conclusions from them. The word relationships already hints at the core process, the cybernetic tool of feedback, which is discussed in more detail below.

So much for this first differentiated view from three perspectives. Another example, familiar to all readers and analogous to Fig. 2.1, is shown in Fig. 2.2. Again, three people bring their specific views to an object,
[Figure: the Tesla Model S at the center of its ENVIRONMENT, viewed by the car driver (“Electricity costs? Driving distance? Price?”), the automotive technician (“Energy capacity? Drive axle? Chassis structure?”), and cybernetics.]
Fig. 2.2 Three perspectives on the object passenger car. (Source: Klaus and Liebscher 1970, pp. 9–22, with changes by the author; car: Tesla Model S: tesla.com/de_DE/about. Accessed on 16.01.2018)
this time the car system. The three people are the driver of the car, the automotive technician, and the cybernetician. The example describes in a simple way the cybernetics of road traffic as it is inherent in every car journey. If the view were extended from the single car considered here to a whole system of road users, with numerous car and truck drivers, motorcyclists, cyclists, pedestrians, special vehicles such as police cars and fire trucks, roadways, rail traffic, traffic jams, signaling systems, environmental influences, and other factors acting on the car system, we would gain a first realistic impression of the highly dynamic, complex cybernetic structures and processes that take place, often invisibly to the people involved, in today’s world of increasing mobility.

Let us start with the driver of the car. The car itself is still an object to be controlled by the driver, although so-called “autonomous” or driverless, cybernetically controlled, vehicles are already rolling through our streets. The driver or buyer is interested in the design of the car, its handling, comfort, color, horsepower, type of fuel, and not least its purchase price. He sees the car as a whole and, after a convincing test drive, will decide to buy it.

The technician, in contrast, is more interested in details of the engine and transmission, the suspension, the materials of the disc brakes, speeds, assemblies, and much more.

For the cybernetician, who examines the car system, it is of little interest which materials were used in the car, which fuel it runs on, or how high the maximum achievable speed
and horsepower are. The cybernetician sees the car as an abstract dynamic system: a moving, function-oriented system. He distinguishes two clearly separable subsystems: the biological-organismic subsystem driver and the technical subsystem car. Both subsystems in turn consist of further subsystems, whose further breakdown makes sense up to a degree that depends on the investigation. The units considered at this level are called system elements.

Driver and car form an open system. There are interactions not only between driver and car but also with the environment, which in this example is characterized by the condition and course of roads, traffic signals, traffic police, pedestrians, and so on. The driver-car system reacts to all influences from the environment with certain behaviors. Up to this point, the cybernetician has focused on the system aspect in his analysis.
Key point The system aspect is a core of cybernetic methodology.
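The system aspect just emphasized can be illustrated with a small code sketch. The following Python snippet is not from the book; it is a deliberately simplified, hypothetical model in which all subsystem, element, and relation names are assumptions chosen for illustration. It represents the driver-car system as subsystems with system elements and explicit relations, including relations that cross the system boundary to the environment, which is what makes it an open system.

```python
# Hypothetical, much-simplified model of the driver-car system.
# All names below are illustrative assumptions, not taken from the book.
system = {
    "name": "driver-car",
    "subsystems": {
        # biological-organismic subsystem and its system elements
        "driver": ["eyes", "neural network", "muscles"],
        # technical subsystem and its system elements
        "car": ["steering", "wheels", "engine"],
    },
    # relations between the subsystems and with the environment
    "relations": [
        ("driver", "car", "steers"),
        ("car", "driver", "feeds back road contact"),
        ("environment", "driver", "traffic signals, obstacles"),
        ("environment", "car", "road condition"),
    ],
}

# An open system has relations that cross its boundary to the environment:
external = [r for r in system["relations"] if "environment" in r]
print(len(external))  # → 2
```

Deciding how far to decompose, for example splitting “neural network” into sensory and motor elements, is exactly the point at which the cybernetician judges how deep a breakdown is still useful for the investigation.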
Let us now turn to the interactions between system elements and the environment, which lead us to another cybernetic characteristic. If, for example, a road bump or obstacle forces the driver into a steering reaction, this is not a simple physical action-reaction event, like two pendulum balls colliding and repelling each other. In reality, a kind of control takes place: after the sudden change of direction, counter-steering brings the car back to its original straight course. Fig. 2.3 shows this principle of control as a circular representation. The steering process just described, followed by further ones as required, constitutes a system with feedback.
Key point The principle of feedback is the basis of all cybernetic control systems.
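The feedback principle of Fig. 2.3 can be sketched in a few lines of code. The following Python snippet is a minimal illustration under assumed values, not the book’s model: the gain kp, the disturbance, and the step count are invented parameters. A road bump deflects the car sideways, and counter-steering proportional to the remaining deviation brings it back on course.

```python
def simulate_feedback(kp=0.5, disturbance=1.0, steps=20):
    """Lateral deviation of the car over time under proportional feedback."""
    setpoint = 0.0            # reference variable: drive straight ahead
    deviation = disturbance   # controlled variable, deflected by a road bump
    history = [deviation]
    for _ in range(steps):
        error = setpoint - deviation   # feedback: compare target with actual value
        steering = kp * error          # manipulated variable: counter-steering
        deviation += steering          # controlled system (steered wheels) reacts
        history.append(deviation)
    return history

track = simulate_feedback()
print(track[0], track[-1])  # initial deflection, then a value close to 0
```

Each pass through the loop closes the circle of Fig. 2.3: the controlled variable is fed back, compared with the reference variable, and the resulting manipulated variable acts on the controlled system.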
The autonomous, unmanned, control-based car is rapidly approaching our road traffic. For now, however, the driver-car system still dominates road traffic. This system, too, strives to achieve a predetermined goal through purposeful reactions to external influences. The central processing site in the driver-car system lies in the biological subsystem, the driver’s neural network. Before a steering reaction is triggered by a specific traffic situation, an information-processing operation runs rapidly through the driver’s neural network and enters the car’s control loop via the steering movement, see Fig. 2.4.

Up to this point, the driver-car system has not been described in terms of material or energetic processes, nor was that necessary. Rather, the driver-car system is characterized by the generation, storage, transmission, and processing of information. In cybernetic language, we speak of encoding and decoding information, or a message. The processing of information leads us to another cybernetic aspect:
[Figure: the control principle as a closed circle: reference variable, CONTROLLER, manipulated variable, controlled system, controlled variable fed back to the controller. In the driving example: direction as controlled variable, deflection due to a disturbance, steering as manipulated variable, the car as regulator, the steered wheels as controlled system.]
Fig. 2.3 A vivid picture of a car journey with the control principle and feedback in a self-regulating car. (Source: Car Tesla S: https://www.adac.de/infotestrat/autodatenbank/default.aspx. Accessed on 16.01.2018)
[Figure: double control loop of driver and car. DRIVER as controller (“Man”): electromagnetic detection of disturbances (hν = energy of a photon), neural sensory processing, pulse conduction to muscle actuators, steering intervention. CAR as controlled system: reference variable, control variable steering angle, steered wheels, controlled variable direction.]
Fig. 2.4 Control system with double feedback: driver-car
Principle Cybernetic systems are also systems that receive, store, and process information and convert it into an effect on their environment.

There are behavioral differences between novice and experienced drivers. The novice tries to absorb as much information as possible from road traffic in order to avoid conflicts. The experienced driver, by contrast, has acquired through long practice a repertoire of solutions for dangerous situations, which he retrieves unconsciously as soon as the traffic situation requires it. These differences in driving behavior between beginners and “professionals” become particularly evident in extreme situations, such as icy roads, heavy rain, or sandstorms. For each extreme situation, the experienced driver has an optimization strategy at hand, usually retrieved unconsciously; in the past, by playing through different traffic situations, he has stored a situation-adapted strategy for each case. This allows the driver, and with him the driver-car system, to continue the journey without losses in most cases. Yet despite all the specific optimization strategies an experienced driver acquires for critical situations, an element of the unexpected remains: in a complex, dynamic environment, surprises can never be completely ruled out. The playful variant in this description leads to another cybernetic aspect:

Principle Finding optimal strategies and processes is a game-theoretical feature of cybernetic systems.

Finally, one aspect remains to be mentioned, one that is connected with all the previous explanations of the traffic example. The behaviors described in road traffic follow certain logical conditions. Steering to the right, continuing straight ahead, braking, accelerating, and so on are linked sequences of logical steps that can also be described mathematically as a solution rule, a calculation rule, an algorithm. Algorithms appear in many variants and specific applications: control algorithms, game algorithms, optimization algorithms, and, not least, algorithms in the context of artificial intelligence development.
Key statement The algorithmic aspect is also unmistakably a characteristic of a cybernetic system.
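The game and algorithm aspects can be sketched together in code. The following Python snippet is an invented toy example, not the book’s method: the proportional steering rule stands in for the algorithm, and playing through candidate gains to keep the most successful one stands in for the trial-and-error search for an optimal strategy.

```python
def final_deviation(kp, disturbance=1.0, steps=10):
    """Run the steering algorithm and return the deviation that remains."""
    x = disturbance
    for _ in range(steps):
        x += kp * (0.0 - x)   # the algorithm: counter-steer proportionally
    return abs(x)

# "Playing" through candidate strategies and keeping the most successful one:
candidates = [0.2, 0.4, 0.6, 0.8]
best = min(candidates, key=final_deviation)
print(best)  # → 0.8
```

Of the assumed candidates, the highest gain returns the car to its course fastest. A richer version of this game would also penalize overshooting and jerky steering, much as an experienced driver weighs comfort against correction speed.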
A brief summary of the second example of a cybernetic perspective is provided by Klaus and Liebscher (1970, p. 22): [Cybernetics or cybernetic systems deal with the investigation of] “processes in dynamic systems under the aspects of the system, control, information, game, and algorithm.”
At this point, we would like to refer again to Chap. 4, in which we will learn more about the systemic and cybernetic thinking of outstanding cyberneticians.
2.1.2 Cybernetics in the Dictionary of Cybernetics

One of the most comprehensive descriptions of cybernetics, if not the most comprehensive, was written by Georg Klaus and Heinz Liebscher (eds., 1976). It is reproduced below as a facsimile of the original text from the “Dictionary of Cybernetics”. It should be noted that the text was written over 40 years ago, but its fundamental technical statements on cybernetics have retained their validity to this day.

Description of the term from the “Dictionary of Cybernetics” (Source: Klaus and Liebscher 1976, pp. 318–326)
Cybernetics: Science of cybernetic systems. Cybernetic systems exhibit general characteristics such as control, information processing and storage, adaptation, self-organization, self-reproduction, strategic behavior, and others. Cybernetics aims to describe and model the structure and function of classes of dynamic systems, whose representatives exhibit such characteristics, with increasing mathematical precision. It derives its knowledge mainly from the study of concrete cybernetic systems in the various forms of motion of matter and from experiments on model systems, by constructing idealized theoretical systems that abstract from all peculiarities of a specific form of motion of matter. In this respect, cybernetics uncovers (cybernetic) laws of dynamic systems that can apply to several forms of motion of matter. This particularly concerns laws of control processes and informational processes.

Cybernetics has developed its own conceptual system, which is partly based on long-known concepts used in various traditional sciences, refined or generalized within the framework of cybernetics. This conceptual system is not yet completely uniform and comprehensively elaborated. The various subfields of this relatively young scientific discipline are evolving rapidly. This is also related to the fact that the definitions of the subject of cybernetics given by various authors seem to differ greatly from one another. In fact, these definitions emphasize or place at the center one or another of the various aspects of this science. From a philosophical perspective, the system aspect appears as the fundamental aspect, as all other features of cybernetic systems (such as information exchange, control processes, strategic behavior, etc.) must be understood as features or behaviors of this class of material dynamic systems. A satisfactory classification of the cybernetic sub-disciplines would have to build on this.
However, the current state of development of cybernetics has not yet led to a uniform conception of the systematic overall structure of this science. Therefore, the development of cybernetics takes place within a series of disciplines, some of which are strongly structured within themselves. These include, among others: control and regulation theory, automata theory, neural network theory, reliability theory, theory of large
or complex systems, information or communication and signal theory, algorithm theory, game theory. Similar to how the conceptual system of cybernetics has partly developed from traditional sciences, cybernetics also uses various methods that have their origin in other scientific disciplines. These include, for example, the model or analogy method, the black-box method, and the trial-and-error method, which have been adapted to the more general subject of cybernetics and its mathematical methodology. One characteristic of the new science is its close connection to the concept formations and methods of mathematics. The mathematical concepts and methods used in the various subfields of cybernetics originate, insofar as they were not newly introduced into mathematics, from probability theory and mathematical statistics, from analysis (especially the theory of differential equations), from mathematical logic (especially the theory of the propositional calculus as well as investigations on decidability and computability), from algebra, and from topology. In cybernetics, as in theoretical physics, it is possible to build certain subfields as deductive disciplines using mathematics. One consequence of this close connection between mathematics and cybernetics is that mathematical methods can be increasingly introduced into some traditional individual sciences, or that these sciences can be opened up to mathematics in the first place. This includes, for example, important subfields of economics, biology and medicine, psychology, and linguistics. However, before these areas can be made accessible to mathematical treatment, a precise conceptual analysis, taking into account cybernetic concepts in the respective individual science, is required. Often, new mathematical concepts must be coined or new methods found. Sometimes it is even necessary to develop new subfields of mathematics.
However, cybernetics and mathematics are not natural or social sciences; they cannot produce any results on their own that would be binding, for example, for social sciences, such as political economy or historical materialism or the entire Marxist-Leninist philosophy. The starting point of the investigations cannot, therefore, lie with mathematics and cybernetics, but must be sought in the respective natural or social scientific discipline. For although cybernetics abstracts precisely from all the facts that are fundamental to traditional individual sciences, this does not mean that cybernetic research could proceed in isolation from individual scientific research. An instructive example of the interaction between specialized science and cybernetics is the research in biocybernetics. The insights gained, for example, from the analysis of biological control systems, do not originate exclusively from purely biological insights that are, so to speak, “cybernetically interpreted,” nor from purely cybernetic insights that are “biologically interpreted.” Rather, the cybernetic and biological approaches in the (biocybernetic) process of gaining
knowledge are inseparably linked. Cybernetics provides, based on certain biological insights, a relatively rough conceptual model for a biological control process in each case. The research then focuses on finding the actual, inherently biological realizations of this cybernetic approach in detail. If successful, the results of the corresponding investigations usually lead to a refinement, supplementation, or further development of the original cybernetic approach in some other sense. Often, however, this is only a small step towards approaching the actual processes in biological systems. The new cybernetic model is then again confronted with further investigations on the concrete biological object, which usually leads to a renewed refinement, supplementation, etc., of the model. In this way, the dialectical interplay between cybernetic and biological concepts leads to a continuous approximation to the often extraordinarily complex control processes in biological systems. It is quite clear that in this dialectical relationship between cybernetic and biological concepts, the primary role is assigned to the biological concept and thus to the objectively real processes in biological systems. Despite significant differences between biocybernetic research and the application of cybernetic methods to social processes, the relationship between these two areas cannot be fundamentally different: Here, too, success will only be achieved through mutual interaction, and the primacy is not given to cybernetics, but to the specialized science. These are dialectical and historical materialism*, Marxist-Leninist political economy**, and—depending on the type of specific investigation—one or another economic or other social science discipline.
*, ** The “Dictionary of Cybernetics”, published in 1976, was politically aligned with the system of the German Democratic Republic (GDR), which in turn was closely connected to the Union of Soviet Socialist Republics (USSR) as a “brother country”. Therefore, at that time, arguments were made using social concepts such as “dialectical and historical materialism” and “Marxist-Leninist political economy” in the sense of Marx, Engels, Lenin, and Stalin. These are, however, irrelevant for the subject-specific information-technology arguments of cybernetics (editor’s note).

The mentioned dialectic of cybernetic and individual scientific research has another significant aspect. Since cybernetics is not an “absolute schema of thought” to be grafted onto the individual sciences, but has its actual source in concrete individual scientific research, the course of cybernetic research advances not only the science of the respective subject of investigation, such as biology or physiology in the first example, but also cybernetics itself. The analysis of biological cybernetic systems leads to new biological insights, e.g., into
the construction and functional mechanism of biological control systems. This refers to concrete biological facts, such as the control centers in the nervous system that govern a specific process, the nerve connections that transmit corresponding signals, the fine structure of the cells involved in the maturation process, their function, etc. An important aspect is the classification of a specific control system within the overall system of the respective organism, as well as insights into relationships with other control processes. It is noteworthy that corresponding new insights are also gained at the level of cybernetics in line with the biological insights. Generally speaking, new insights into the functioning and structure of complex biological systems are reflected here as insights into the functioning and structure of possible dynamic systems. In this way, the dialectic of cybernetic and individual scientific research simultaneously leads to a continuous expansion of the more general knowledge of cybernetics. Of course, what has been said here applies not only to the relationships between cybernetic and biological or social scientific research but also to the relationships between cybernetics and all specialized sciences with which it comes into contact. Particularly worth mentioning here are the relationships of cybernetic research to those in economics as well as in psychology and pedagogy. In these cases, too, it is by no means a matter of building up new (“cybernetic”) sciences alongside or even replacing the aforementioned sciences. It is not even about the constitution of any special disciplines within the mentioned areas. It is all the more necessary to emphasize this, as some designations that have been introduced for the relevant research suggest such a view of the situation.
Just as biocybernetics does not take the place of or even replace traditional biology and does not represent a biological special discipline, economic cybernetics is not a kind of “higher economics” that would have to replace a “lower” one. In the dialectical interrelationship of cybernetic and economic scientific research, the determining side lies in the field of economic science. Since the currently available theoretical and methodological means of cybernetics have mainly been developed in connection with natural scientific research, it is necessary to emphasize this particularly with regard to the social sciences. The investigation of highly complex economic systems with their specific equilibrium, homeostatic, and development conditions is still in its infancy and has not yet reached the stage required to expand the theoretical and methodological arsenal of cybernetics through appropriate generalizations. In comparison, research in the field of psychology appears to be much more advanced, where—mainly based on system and information theory as well as algorithm theory—remarkable results could be achieved. Considerable progress has been made through the application of cybernetic ways of thinking both in
psychology and in pedagogy (here, for example, in the form of various concepts regarding automated teaching systems). The high degree of abstraction of cybernetic concept formations and the related possibility of making the ways of thinking of cybernetics fruitful in various disciplines characterize cybernetics as a cross-disciplinary discipline that requires the cooperation of specialists from various fields. Application possibilities for cybernetics and its methods are currently opening up in automation technology, biology, communication science, linguistics, medicine, pedagogy, philosophy, psychology, sociology, and economics. Since its foundation, cybernetics has been the subject of philosophical debate. Its claims as a science, its sometimes far-reaching results, its methodology, and its conceptual system contribute to this, as do the actual or potentially possible applications (electronic computers with large and variable performance, adaptive and learning-capable robots, etc.) that provide various impulses for philosophical reflection. However, a careful distinction must be made between serious investigations of the consequences of cybernetics and speculation. Cybernetics is essentially materialistic and dialectical. It is one of the most impressive confirmations of dialectical-materialist philosophy in the 20th century and, at the same time—given philosophical generalization—contributes to enriching some categories of dialectical materialism (such as reflection, interaction, contradiction, etc.). The assessments and evaluations of cybernetics made by bourgeois authors are extraordinarily diverse and range from strict rejection to euphoric enthusiasm. In the short history of cybernetics as a science, it has been abused by bourgeois theorists to support the most diverse idealistic worldviews and to justify the most varied political decisions.
However, a clear change becomes apparent when comparing the attitude of representatives of bourgeois ideology towards cybernetics in its first development phase with their attitude towards cybernetics today. While in the early 1950s the prevailing view in bourgeois philosophy was that cybernetics should be rejected because of its alleged mechanism and reductionism (for which theological arguments were usually put forward), later and today predominantly neopositivist conceptions are in the foreground. Among other things, attempts are made to exploit cybernetics and its results for a reconciliation of materialism with idealism. Another, no less absurd variant of neopositivist interpretation sees cybernetics as a kind of “universal philosophy” that leads to a “new unity of science”. In this context, various variants of a “cybernetic idealism” are constructed, which are no less untenable than the physical idealism criticized by LENIN at the time.
History: Cybernetics as a new, independent scientific discipline emerged in connection with the rapid development of productive forces in the first half of our century, particularly from the requirements of military technology (mainly air defense) during the Second World War. Overall, it is the result of the efforts of numerous specialists—especially mathematicians, physiologists, physicists, logicians, communication and control engineers—in various countries. Some principles that are now being developed within the framework of cybernetics have a more or less long prehistory within other disciplines. This applies, for example, to the control principle. A leading role in the formation of the new science was played by the mathematician Norbert Wiener (Cybernetics or Control and Communication in the Animal and the Machine, 1948), who also gave the new field its name. Preliminary work, which was carried out independently of this development, was provided in the Soviet Union by the physiologist P. K. Anochin, the mathematician A. N. Kolmogorov, and others. The further development is no longer the work of just a few specialists from various countries. This is evidenced by a large number of national and international congresses dedicated to the overall complex of cybernetics or the problems of individual application areas, as well as the large number of practical applications in various fields of science, technology, and economy. The text has been adapted to the current German spelling. Reproduction with the kind permission of Karl Dietz Verlag Berlin.
2.1.3 Cybernetics

2.1.3.1 In the Definition Cosmos of Cybernetics

In the cover text of the second, revised and expanded edition of his book on cybernetics (1963), Norbert Wiener describes cybernetics as

Definition Cybernetics “Theory of communication and control and regulation processes in machines and living organisms.”

It continues: This means that cybernetics summarizes the diverse efforts to combine communication-technological, psychological, sociological, biological, and, more recently [1960s, ed.], medical research projects.
Wiener was thus committed from the beginning to uniting previously independent disciplines, each with its own specialized terminology, under cybernetics. Notwithstanding Wiener’s holistic idea, cybernetics developed over the years within various
disciplines, in which, influenced by cybernetically oriented scientists, definitions of cybernetics of their own emerged. Flechtner (1984, pp. 9–11; first published 1970) subsumes some of them under the heading “Preliminary Concept Determination”. He writes: Today [1970s, author’s note] there is already a wealth of definitions and essential determinations of cybernetics that capture everything essential about it, but mostly with an overemphasis on one or a few aspects. According to W. Ross Ashby [British psychiatrist and cybernetician, see Sect. 4.6, author’s note] (1903–1972):
Definition Cybernetics Cybernetics is the “general formal science of machines.”

However, such a determination is meaningless as long as one does not know that here “machines” also include living beings, communities, economies, and the like. If one takes the concept of the “machine” so broadly, one overshoots the mark, even if one narrows it down to the “behavior” of such machines.
Machine behavior can, for example, consist in striving for and achieving a specific goal. From this, according to the French cybernetician Albert Ducrocq (1921–2001), see https://fr.wikipedia.org/wiki/Albert_Ducrocq (accessed on 16.01.2018), the following definition can be derived (Flechtner 1984, p. 9):

Definition Cybernetics “Cybernetics is a science that systematically enables us to achieve any goal, including any political goal.”

This definition builds a bridge back to the year 1834, in which the […] great physicist André Marie Ampère [1775–1836, author’s note] developed the idea of a science he called “cybernétique”, which was to be a doctrine of the procedures of governing. (Flechtner 1984, p. 9)
In “Cybernetic Foundations of Pedagogy” (1964 edition), Helmar Frank (1933–2013) defines, according to Flechtner (ibid.):

Definition Cybernetics “Cybernetics is the theory or technique of messages, of message turnover, or of the systems performing these.”

Governing and communication are brought together by both definitions of cybernetics. The British management scientist Stafford Beer (1926–2002), see Sect. 4.9, formulates it similarly, while moving closer to Wiener’s definition again:

Definition Cybernetics “Cybernetics is the science of communication and control.”

A very general formulation of cybernetics, such as that by Bernhard Hassenstein (1922–2016), co-founder of biocybernetics (1972, p. 123):
22
2 A Special Look at the Origin and Mindset of Cybernetics
Definition Cybernetics Cybernetics is the “science of control”—is not very meaningful. The Russian mathematician Alexey Andreevich Lyapunov (1911–1973) formulates it more clearly, again with reference to the origins in control engineering: Definition Cybernetics “The basic procedure of cybernetics is the algorithmic description of the functional sequence of control systems. The mathematical subject of cybernetics is the study of controlling algorithms.” The German information theorist and cybernetician Karl Steinbuch (1917–2005) also attempted definitions of cybernetics, although he knew that a universally valid definition exists no more for cybernetics than for mathematics or philosophy (Steinbuch 1965, p. 325): Definition Cybernetics “Cybernetics is understood, on the one hand, as a collection of certain thought models (of control, communication, and information processing) and, on the other hand, as their application in technical and non-technical areas.” Furthermore, it says (ibid.): Since, however, control, communication, and information processing are essentially characterized by the investigation of informational structures, one could also briefly formulate as follows:
Definition Cybernetics “Cybernetics is understood as the science of informational structures in technical and non-technical areas.” (ibid.) Flechtner (1984, p. 10) attempted his own definition of cybernetics by linking cybernetics with the concept of systems, as is common today given how closely the concepts of systems theory and cybernetics are interwoven: Definition Cybernetics “Cybernetics is the general, formal science of the structure, relations, and behavior of dynamic systems.” This definition approaches that of Georg Klaus (1912–1974), a philosopher, logician, and cybernetician who worked in the former GDR (German Democratic Republic) and became internationally known with his publications, including “Cybernetics and Society” (1964), and who defined in his book “Cybernetics from a Philosophical Perspective” (Klaus 1961, p. 27): Definition Cybernetics “Cybernetics is the theory of the connection of possible dynamic self-regulating systems with their subsystems.”
2.1 What Cybernetics Is and What Cybernetics Is Not
23
The list of definitions of what cybernetics is could be continued. A series of further definitions of cybernetics, beyond those mentioned here, is listed by Obermair in his compact pocketbook “Man and Cybernetics” (1975, pp. 14–16). We conclude our definition cosmos of cybernetics with a definition from the Brockhaus Encyclopedia (1966–1974): Definition Cybernetics “The specific disciplines of mathematical cybernetics are information theory, control system theory, and automata theory. All three disciplines are not closed but also use other mathematical subfields, such as probability theory, mathematical logic (logistics), number theory, game theory, and others. We cannot yet say that the mathematical theories for cybernetic use are fully developed.” Control and communication, and thus also information processing and communication in living beings, in humans, animals, and plants, obey cybernetic principles or the language of cybernetics, as discussed in Chap. 3. The outstanding cybernetic principle of negative feedback directs a system of high dynamics and complexity, as found in nature, technology, and society in various variations, towards a state of increasing stability, while maintaining the possibility of system progress. Evolution has been successful with this for billions of years. Humans, however, have not yet learned to use the language of cybernetics in such a way that societal progress proceeds in a sustainably fault-tolerant manner, or at least with reduced consequences and follow-up costs. The consistent, adapted, and sensitive application of cybernetic principles, especially negative feedback, in social, economic, political, and other decision-making processes could and can lead to the avoidance and reduction of the problems and subsequent problems that increasingly overwhelm us. An absolute prerequisite for this would be a clear reversal of monocausal thinking and acting into systemic thinking and acting.
Cybernetics and systems theory are suitable means of choice for this purpose.
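The stabilizing effect of negative feedback described above can be sketched in a few lines of code. This is a minimal illustration, not taken from the book; the function name `negative_feedback` and all numeric values are assumptions chosen for this sketch: a controller repeatedly measures the deviation of a system variable from a target value and counteracts it.

```python
# Minimal sketch of negative feedback (illustrative, not from the book):
# a corrective action opposes the measured deviation from a target,
# so a disturbed system variable is driven toward a stable state.

def negative_feedback(value, target, gain=0.5, steps=50):
    """Repeatedly correct `value` toward `target` by a fraction of the deviation."""
    history = [value]
    for _ in range(steps):
        error = target - value   # measured deviation from the goal
        value += gain * error    # corrective action opposes the deviation
        history.append(value)
    return history

run = negative_feedback(value=30.0, target=20.0)
print(round(run[0], 1), round(run[-1], 6))  # starts at 30.0, converges to 20.0
```

Because each step removes a fraction of the remaining deviation, the error shrinks geometrically; the loop stabilizes the system without any central plan, which is exactly the property the text attributes to negative feedback in nature, technology, and society.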
2.1.3.2 Everything “Cyber” or what? It was the 1960s and 1970s that became the heyday of cybernetics. From this period emerged an immense variety of literature on cybernetics, dealing with structures, definitions, fields of application, and a number of other cybernetic features that still guide us through the cybernetic realms today. The leap in time to the present, to the emerging age of digitalization, robotics, and artificial intelligence, shifts the focus once again, and more strongly, to cybernetics and cybernetic systems. Thomas Rid traces the path of cybernetics up to the present day in his book “Maschinendämmerung” (2016). When the German nouns Kybernetik and Kybernetiker are translated into English, we get cybernetics and cyberneticist. But what does the prefix “Cyber” mean? Rid (2016, pp. 7–10) poses this question right at the beginning of his preface. Not only his students, but also officers of the American Air Force, Pentagon strategists, members of Congress, bank employees, secretive British spies, scientists, hackers (people who gain unauthorized access to other people’s computers) and many others are curious about what “Cyber”
means. The increasing networking of electronic systems means that not a few users of these systems are asking about the meaning of “security and freedom” (ibid., p. 7). Space, security, war do not sound exciting at first hearing; but with the prefix “Cyber”—cyberspace, cybersecurity, cyberwar—they acquire a technical and modern, sometimes even frightening, meaning, especially in the digital age. In short succession, Rid highlights a series of people whose understanding of the prefix “Cyber” could not be more different (ibid., pp. 9–10): “Cyber” is a chameleon. Politicians in Washington think of power outages that can plunge entire cities into chaos at any time when they hear the word. Intelligence agents in Maryland, on the other hand, think of conflict and war—and of data stolen by Russian criminals and Chinese spies. Managers in London’s City associate it with massive security breaches, banks that have to bleed financially, and companies whose reputation can be ruined in the blink of an eye. For inventors in Tel Aviv, it conjures up visions of people merging with machines, wired prosthetics with sensitive fingertips, and silicon chips implanted under delicate human skin. Science fiction fans in Tokyo associate it with an escapist [reality-fleeing, ed.] but retro-punk aesthetic, with sunglasses, leather jackets, and worn, battered devices. Romantic internet activists in Boston see it as a new realm of freedom, a space beyond the control of repressive governments and police apparatuses. Engineers in Munich associate it with steel control and factory operation from the computer console. Aging hippies in San Francisco think nostalgically of holism and psychedelics, and how they “turn on” their gray cells. And for the screen-addicted youth in between, “Cyber” simply means sex via video chat. The word, whether as noun or prefix, defies any fixed definition. Its meaning is just as elusive, shadowy, and indistinct.
Whatever it consists of, it is always in motion, always has to do with the future, and at the same time is always already past.
“Cyber” and cybernetics turn out to be two terms that—each in its own way—cannot be grasped concretely. Neither does the definition of cybernetics exist, nor can “Cyber” be assigned to a single clear application. Both are meta-concepts for a multitude of applications across all fields of society. With this final statement on the two concepts, we have come full circle back to the beginning of Sect. 2.1.
2.2 Systemic and Cybernetic Thinking
Summary
This chapter is guided in a concise manner by cybernetic thinking, which, in its diversity and changes of perspective, highlights essential foundations for working with cybernetic systems. These are primarily supported by a cybernetic—communicative—progression in circular processes. Probst describes this strategy in great detail and precision in six process steps. The circular process is complemented or supported
by linked system-specific statements from various authors, all of whom are committed to the cybernetic mindset and to the effective and sustainable treatment of complex interrelationships. All authors share the overriding strategic goal of long-term, far-sighted thinking and acting.
2.2.1 Circular process in six steps “Basically, it is believed that systems theory and cybernetics provide the basis for understanding complex systems and capturing a phenomenon such as self-organization.” This statement by Probst (1987, p. 26) is still valid today. In the context of his consideration of self-organization, which is closely related to the cybernetic process characteristic “feedback”, Probst describes features of systemic and cybernetic thinking in a circular arrangement for capturing and understanding complex systems, as shown in Fig. 2.5 and subsequently explained step by step in an appropriate manner.
2.2.2 System Delimitation Step 1: System Delimitation (all quotes in steps 1 to 5 according to Probst 1987, pp. 26–45): Determining the boundary of a system, especially a complex system, usually
[Fig. 2.5, schematic: the six steps—system delimitation, part and whole, impact structure, structure and behavior, steering and development, perception—arranged in a circle around the core “systemic–cybernetic thinking”. According to Probst 1987; © 2018 Dr.-Ing. E. W. Udo Küppers]
Fig. 2.5 Systemic and cybernetic thinking in a circular representation for capturing and understanding complex systems in six steps. (Source: based on Probst 1987, p. 27, slightly modified by the author.)
presupposes the question: “What is actually the system I am looking at?” What problems does it contain? What processes take place? Etc. Ultimately, the system boundary is influenced by the problem or problems within the system. Probst answers the question of the relevance of the system for specific problems as follows (ibid., p. 29): 1. How and where is the problem, the situation, the system to be delimited? What position, reference frame, premise do I choose as an observer? What other perspectives are possible? 2. The position of the observer is central: the system delimitation is determined by the standpoint, the reference frame, the prior knowledge, the premises, expectations, values, etc. 3. Which elements and subsystems appear relevant to me as an observer? Which “system” produces a “specific situation”? What is part and whole? 4. Various system delimitations and subsystem formations are relevant and possible—however, it is of central importance to capture those elements and relationships that produce a specific problem situation. 5. In which environment is a system located and what environmental relationships exist? What relationships and dependencies to other systems exist? 6. Systems are always embedded in an environment; they are part of a larger whole.
All the following principles according to Probst (1987, pp. 29–45): Principle Systemic and cybernetic thinking is: Holistic thinking in open systems.
2.2.3 Part and Whole Step 2: Part and Whole Probst speaks of the central importance of “how a system is to be modeled and investigated” (ibid., p. 29). Many systems are complex rather than merely complicated. It is not the large number of system parts and their diversity that matters here. What is essential (ibid., p. 30) […] is the dynamics or the degree of predictability of the behavior of the system as a whole. […] Thus, the systems thinker is not simply interested in the parts or components of a system, but above all in the question of how these components are interconnected, i.e., what relationships exist between the parts of a system. The diversity of the parts, but especially the diversity of the relationships between the parts, determines the behavioral possibilities or the possible states that a system can assume and thus the variety of a system. Here, genuine cybernetic questions [text emphasis by the author] arise:
1. How can the variety of a system be kept under control?
2. How can a system with an enormous number of behavioral possibilities be steered?
3. Which variables and relationships can we influence? Which parts or variables can we not influence or are not controllable?
4. In what way can the system be steered?
5. What reactions, side effects, changes, tipping effects, etc. can be expected when intervening in the system? Etc.
The systemically and cybernetically thinking problem solver captures the holistic nature, the full range of variation of a system, and thus allows the possibilities inherent in a system to persist. He does not primarily focus on a handful of detailed causalities between system elements, yet without completely abandoning the analytical detailed view; instead, he tries to recognize the networked fabric of the system’s elements and their interactions in the overall system, and to make them workable (cf. Küppers 2013). For the investigation and modeling of a complex system, its parts and its entirety, Probst (ibid., p. 32) identifies the following six points as guiding principles:
1. What relationships exist between the parts; how are they connected? What behavioral possibilities does a system contain, or which behavioral possibilities are excluded? What boundaries, limitations, and tolerance limits exist for the individual elements, subsystems, and the whole?
2. Which parts (subsystems) in turn form meaningful units? What new properties does a whole, integrated from its parts, have?
3. At which level are we interested in which details? Are (sub-)systems to be further resolved, or is a black-box view sufficient? [Emphasis by the author]
4. Attempt to understand networks (tangles), to do justice to complexity; inclusion of diversity, multitude of dynamics, adaptability; prevention of unnecessary reductionism; acceptance of not being able to know everything.
5. Consciously dissolve and reassemble the system without losing sight of the whole; the whole is something different from the sum of its parts, it belongs to a different category.
6. Constant awareness of the level of thinking and acting is necessary; conscious work on different levels of abstraction.
Maxim Systemic and cybernetic thinking is: analytical and synthetic thinking.
2.2.4 System of Effects Step 3: System of Effects It has not changed over the decades up to the present: people of all stripes still predominantly think in monocausal if-then relationships; figuratively speaking, along an action chain in which an effect is traced back to a single cause. This thinking along a causal chain of causes and effects is pronounced in many scientific disciplines and has its justification there when detailed problem solutions are required. At the same time, however, no system in our complex environment can be considered completely isolated. In this respect, every result of a detailed analysis is simultaneously part of a higher-level entirety of a system. Dealing flawlessly with complex, especially dynamic, systems is possible only to a limited degree; rather, people make many mistakes when dealing with complex systems, so-called “complexity errors.” The vivid example of Dietrich Dörner’s “Tanaland” experiment, in which test subjects were supposed to care for the well-being of the inhabitants of the artificial country Tanaland, clearly shows the
complexity errors due to the test subjects repeatedly falling back into monocausal thinking patterns when searching for solutions (Dörner 1989, pp. 22–46). Tanaland eventually faced a lamentable fate due to the test subjects’ extensive inability to recognize and evaluate the reality of the system elements and their relationships with each other. Regarding central thinking errors that repeatedly occur when complex systems are the subject of investigation, Probst (ibid., p. 33) refers to two earlier works by Dörner (1976, 1981) with three arguments:
1. Humans usually assess the current state of a system without becoming aware of the temporal processes and interdependencies.
2. We lack the ability to deal with phenomena such as side effects, threshold values, tipping effects, or exponential developments. Complex systems usually behave counterintuitively for us in this regard, as Jay W. Forrester has expressed it. [Emphasis by the author; J. W. Forrester was a computer pioneer and is considered the founder of the field of system dynamics, see also Sect. 4.13]
3. We are used to thinking in causalities rather than in networks. We expect that an effect has a cause, that an effect is in turn a cause for another effect, and so on. But not only human thinking, also human action is accustomed to linear causal chains (see above, “thinking in chains of effects”) and expects an effect from a measure, which in turn is the basis for a further effect, and so on.
With the three preceding statements, among other things, we are approaching the foundations of systems theory and cybernetics and their representatives of various cybernetic systems, which are discussed in Chap. 4. Probst likewise supports the structure of effects of a complex system, i.e., the way system parts interact with each other, with six argumentative questions (Probst 1987, p. 35):
1. What is the nature of the relationships between the parts?
• negative/positive effect relationships?
• quantitative effect relationships?
• time aspect of the effect relationships?
2. What feedback effects—even over numerous stages—exist?
• negative feedback loops?
• positive feedback loops?
Have feedback effects been omitted or cut off in the system delimitation?
3. What multiple effects exist due to networked relationships? How would the system behave in other situations? What redundancy, substitutability, vulnerability, or dependency is preserved in the system?
4. Only mutual effects (interdependencies) and temporal sequences allow understanding the dynamics and thus the complexity of a system.
5. Causal thinking in control chains does not do justice to reality; usually, it is about circular systems with feedback loops over several interconnected parts; circularities and networks must not be broken up.
6. The relationships of networked systems require attention to side effects, multiple effects, threshold values, tipping effects, exponential developments, etc.; only in this way can the variety of the system be captured [all emphases by the author].
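The contrast between negative and positive feedback loops asked about above can be made concrete in a few lines. The following is a hypothetical toy model, not from Probst; the function name `iterate` and the parameter values are assumptions for this sketch. The same loop structure stabilizes or runs away depending only on the sign of the feedback.

```python
# Sketch: the sign of a feedback loop decides between damping and runaway
# growth (illustrative toy model, not from the book).

def iterate(x, feedback, steps=20):
    """Feed the state back on itself with the given sign and strength."""
    for _ in range(steps):
        x += feedback * x   # the state's own value drives the next change
    return x

print(iterate(1.0, feedback=-0.5))  # negative feedback: decays toward zero
print(iterate(1.0, feedback=+0.5))  # positive feedback: grows exponentially
```

This is the structural reason why point 2 asks about the sign of each loop: a single positive loop hidden in a network can produce the exponential developments and tipping effects named in point 6.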
Key statement Systemic and cybernetic thinking is: thinking in circular relationships and networks.
2.2.5 Structure and Behavior Step 4: Structure and Behavior To this day, the science of organization is characterized by clearly defined organizational and process structures. Organizational principles are the triggers of these structures. They are supported by organization charts, diagrams that describe the functions of individual areas or persons, define performance standards, and much more. Systemic and cybernetic structures break through these classical, often rigid structures of construction and process, allowing the possibilities and capabilities of both to be expanded, changed, adapted to new situations, and—in the sense of cybernetics—controlled. Control models are always connected with information and information transmission, whereby information also determines the emergence of order, or avoids the emergence of disorder (entropy) (cf. ibid., p. 37). To the question “Which structures determine the behavior of a system?” Probst again provides six argumentative aids (ibid., p. 38):
1. Are behaviors produced by specific structures? What kind of structures determine the behavior of a system?
2. What role do information and communication structures play? What does the absence of information and communication channels mean? What are the central information channels and sources?
3. What patterns can be recognized? Can we determine whether something is missing, added, or changed?
4. Structures determine the behavior of a system and thus adaptation, change, self-organization, learning, development. Structures are formal and informal, conscious and unconscious mechanisms, rules, norms, etc.
5. Information is central; information and communication possibilities are prerequisites for control.
6. Order (as we perceive it) arises from the cycle of structure and behavior [all emphases by the author].
Key Statement Systemic and cybernetic thinking is: thinking in structures and (information-processing) processes.
2.2.6 Control and Development Step 5: Control and Development Steering models and their mechanisms are widespread. They are present in natural, technical, economic, and social systems. Norbert Wiener, in his seminal book on cybernetics (1963, original 1948), described different systems—those of living beings and machines. Only the abstraction of a model, which
maps various systems “in the light of steering mechanisms, makes system theory and cybernetics fully understandable in the sense of Ludwig von Bertalanffy (theoretical biologist and system theorist) and Norbert Wiener (the actual founder of cybernetics).” (Probst 1987, p. 39) Working with abstract systems or models of systems is not always easy; dealing with concrete systems whose steering structures are manageable, clearly delimited, and understandable is far easier. Examples can be found in all areas of work and life, whether a regulated heating system, a technical servo mechanism for mobile automatons, homeostatic blood pressure regulation, or the homeostasis of the brain through the blood-brain barrier, among others. It becomes somewhat more complex when social systems or parts thereof become the subject of steering processes. In Sect. 7.4.3, we will learn about an example from South America concerning the state control of nationwide agricultural yields. Steering mechanisms in different areas such as nature, technology, and society have led to differently pronounced models and to seven specific patterns of cybernetic thinking, listed below (ibid., p. 41):
1. Thinking in models: The goal is to form and explore steering models for specific systems or situations.
2. Thinking in different disciplines: Knowledge about steering mechanisms is drawn from various disciplines.
3. Thinking in analogies: Systems depicted under the steering aspect become comparable and applicable as useful analog models.
4. Thinking in control loops: Instead of linear causality, thinking in circular causalities, in networks, takes place.
5. Thinking in the context of information and communication: Information is placed on an equal footing with energy and matter and forms the basis for steering.
6. Thinking in the context of complexity management: Complexity is not reduced or bypassed, but accepted in the sense of the law of variety.
7. Thinking in ordering processes: Steering structures determine the complexity of an order and vice versa. Organized order can only be of low complexity, while self-organized order can be of high complexity.
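The “law of variety” invoked in the sixth pattern is Ashby’s law of requisite variety: only variety in the steering system can absorb variety in the disturbances acting on it. The following toy model is a hypothetical illustration; the function `regulate` and its values are not from the text.

```python
# Toy illustration of Ashby's law of requisite variety (hypothetical example):
# a regulator can hold the outcome constant only if it commands at least one
# distinct response per possible disturbance.

def regulate(disturbances, responses):
    """Return the set of outcomes the regulator achieves; ideally a single value."""
    outcomes = set()
    for d in disturbances:
        best = min(responses, key=lambda r: abs(d + r))  # best-cancelling response
        outcomes.add(d + best)
    return outcomes

disturbances = [-2, -1, 0, 1, 2]
print(regulate(disturbances, [-2, -1, 0, 1, 2]))  # enough variety: one outcome, {0}
print(len(regulate(disturbances, [0])))           # too little variety: 5 outcomes
```

With a matching response for every disturbance, the outcome collapses to a single stable value; with a single response, every disturbance passes through unregulated. In this sense complexity is “accepted” rather than reduced: the regulator must be at least as varied as its environment.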
Of course, combining various patterns of cybernetic thinking is not ruled out when it comes to solving tasks or exploring the causes of disasters in concrete complex and highly complex systems. Often, approaching a problem from different perspectives is useful, if not necessary. Examples from the energy sector (Fukushima, Chernobyl), the chemical sector (Sandoz, BASF, Bhopal), and increasingly the information and communication sector (global hacker attacks, such as the one in May 2017, when the malware “WannaCry” disabled electronic systems worldwide) involve highly complex systems in which monocausal strategies, even when applied repeatedly, fail to uncover errors and to prevent the breakdown of complex processes.
In conclusion, the question of what knowledge about steering and development processes or models can be helpful is likewise supported with six points (ibid., p. 42):
1. What knowledge from other fields, concerning the phenomena of steering and development, can be useful?
2. Are analogies possible under the aspect of steering and development?
3. What self-determined, autonomous possibilities for steering and development does the system have? To what extent can the system create something new and develop itself?
4. There are universally valid “system laws” and models for “problems” of complexity, steering structures, and system behavior.
5. Cybernetic models from system research at the mechanistic, natural, and social levels can be applied beneficially.
6. Systems can react, respond, and/or act. Depending on this, state-preserving, goal-oriented, or purpose-oriented processes are in the foreground.
Key statement Systemic and cybernetic thinking is: an interdisciplinary, multidimensional thinking in analogy models.
2.2.7 Perception or the Cybernetics of Cybernetics Step 6: Perception (or the Cybernetics of Cybernetics) Regarding this, Probst (ibid., p. 43) writes: There is disciplinary knowledge, but there are no disciplinary contexts. […] “Problems” must be seen or, provocatively expressed: “Problems” must first be invented. We still act as if reality or real problems could be clearly assigned to certain disciplines, or as if complex situations could be designed and controlled in such a way that tasks could be given independently to economists, energy technicians, biologists, water managers, etc. The cybernetic perception curve of a human being does not begin immediately after birth, although from birth we exist as cybernetic, dynamic beings with a complex network of effects. Only in the course of progressive education, through instructive external influence or through our own initiative, do we learn and recognize that the basis of life in nature, and all the instruments and principles developing from it, form a network of relationships for ecological, societal-social, and economic development. Nothing on our planet can be considered in isolation. This fundamental insight is familiar to many people, including leaders and designers in all areas of life and work, not least in politics. The great misunderstanding, not to say the error hostile to sustainable progress, is that people, despite knowing better, more or less refuse to accept the fundamental cybernetic laws of nature and thus, more specifically, of their own existential progress. With their short-term development strategies, they produce catastrophes that have reached, and in some cases already exceeded, the limits of our planet’s viability. It is the often-cited monocausal and shortsighted thinking and acting that opposes sustainable networked thinking and acting and thus the unconditional strengthening of the cybernetic perception and learning curve.
How long the progress-hostile, dominant process of monocausal thinking and acting in a complex, networked—cybernetic—nature and environment will continue to cut its destructive swathes across, on, and above our Earth is uncertain. What is certain is that, if need be, evolution will continue without us.
Probst’s admonition of 30 years ago (see above), not to solve real problems exclusively within specific subject areas, has found little resonance to this day. Exceptions prove the rule, see Ulrich and Probst (1995). Currently, the “subject explosion” is progressing to a much greater extent than could have been expected 30 years ago. New study programs are constantly being created alongside the classics. The number of study programs at German universities totaled 19,011 in the winter semester (WiSe) 2017/2018 (according to https://www.hrk.de/fileadmin/redaktion/hrk/02Dokumente/02-03-Studium/02-03-01-Studium-Studienreform/HRK_Statistik_BA_MA_UEbrige_WiSe_2017_18_Internet.pdf (accessed on 01.02.2018); see also HRK Statistical Data on Study Offers at Universities in Germany, Winter Semester 2017/2018, published by the German Rectors’ Conference (HRK), November 2017, Bonn). Ten years earlier, in WiSe 2007/2008, there were 11,265 study programs. This enormous increase testifies in many cases to a fragmented knowledge transfer, which maintains a strongly ambivalent relationship with solutions to problems in the real complex environment. This also includes the increasing trend towards digitization in education. Frederic Vester (1925–2003), a vehement advocate of systemic, biocybernetic thinking and action, is quoted by Probst—in the author’s opinion, quite rightly—with the words: Reality is not an incoherent subject catalog—the individual developments of which could be added up, even if this is then mistakenly called “system analysis”—but always a network of feedback loops and nested control circuits. A network of effects, in which constellations and their overall dynamics matter far more than visible individual effects. [Also worth reading on the subject are Vester (1980), “Neuland des Denkens. Vom technokratischen zum kybernetischen Zeitalter”, and Vester (1999), “Die Kunst, vernetzt zu denken”, author’s note]
How we perceive or explore systems is described by Probst (1987, p. 45) as follows: 1. What knowledge should reasonably be included in a context? Are there alternative perspectives for meaningfully perceiving a context? 2. How do we perceive structures and behavior? Where are the limits of human perception? What can we not know? Is the system aware of the behavioral possibilities, the systemic connections (self-reflection)? 3. What do we want with our model building/observation? Does the model we construct “fit” our intentions? Does it fulfill its purpose? 4. Depending on how we perceive the model in a particular situation, we act; different constructions of reality are possible; the observer is part of the observed system [second-order observer, author’s note]; we are responsible for our thinking, knowledge, and actions.
5. Perception is holistic, but we do not see the whole; it depends on experiences, expectations, etc.; it is selective; it is structure-determined; a complete explanation of complex phenomena is not possible. 6. The awareness of the purpose of observation and the peculiarities of the observer is essential. Models fit or do not fit, they are not the image of an objective reality.
Key statement Systemic and cybernetic thinking is: transdisciplinary and constructivist (constructing realities).
2.3 Control Questions
Q 2.1 Describe the historical origin of the word cybernetics according to Karl Steinbuch.
Q 2.2 What do you understand by Cybernetic Anthropology?
Q 2.3 What is cybernetics and what is it not?
Q 2.4 Describe the three perspectives of an interested citizen, an engineer, and a cybernetician on a robot.
Q 2.5 Describe the three perspectives of an interested citizen, an engineer, and a cybernetician on a car.
Q 2.6 Describe and sketch in detail the control loop function of an autonomously driving car.
Q 2.7 Describe and sketch in detail the double control loop function of an autonomous driver-car system.
Q 2.8 The definition cosmos of cybernetics provides various explanations for what is understood by cybernetics. 1. Name the 12 explanations listed. 2. To which persons can the explanations be attributed?
Q 2.9 Describe the prefix “Cyber …”. Name its meanings and who they originate from.
Q 2.10 Sketch the six steps of cybernetic thinking in a circular process according to Probst.
Q 2.11 For the investigation and modeling of a complex system, its parts and its entirety, Probst identifies six relevant characteristics as essential. What are they?
Q 2.12 For the control and development of cybernetic systems, Probst mentions seven types of cybernetic thinking. Name them and argue their goals and purposes.
Q 2.13 How do we perceive and explore systems? Six criteria are listed for this. Name and argue them.
Q 2.14 What is the general misunderstanding regarding the “cybernetic perception curve” in humans?
Q 2.15 Describe in your own words what is meant by “cybernetics of cybernetics”. Which term is often used instead of “cybernetics of cybernetics”? To whom does this term go back?
References

Anschütz H (1967) Kybernetik—kurz und bündig. Vogel, Würzburg
Brockhaus Enzyklopädie (1966–1974) F. A. Brockhaus, Wiesbaden
Dörner D (1976) Problemlösen als Informationsverarbeitung. Kohlhammer, Stuttgart
Dörner D (1981) Über die Schwierigkeiten menschlichen Umgangs mit Komplexität. Psychologische Rundschau 7:163–179
Dörner D (1989) Die Logik des Misslingens. Rowohlt, Reinbek bei Hamburg
Flechtner H-J (1984) Grundbegriffe der Kybernetik (first published 1970). dtv, Stuttgart
Hassenstein B (1972) Biologische Kybernetik (first published 1967, Quelle & Meyer, Heidelberg). VEB G. Fischer, Jena
Klaus G (1961) Kybernetik in philosophischer Sicht. Dietz, Berlin
Klaus G (1964) Kybernetik und Gesellschaft. VEB Deutscher Verlag der Wissenschaften, Berlin
Klaus G, Liebscher H (1970) Was ist—Was soll Kybernetik. H. Freistühler, Schwerte/Ruhr
Klaus G, Liebscher H (1976) Wörterbuch der Kybernetik. Dietz, Berlin
Küppers EWU (2013) Denken in Wirkungsnetzen. Nachhaltiges Problemlösen in Politik und Gesellschaft. Tectum, Marburg
von Neumann J (1993) First draft of a report on the EDVAC. IEEE Ann Hist Comput 15(4):27–75; additional source: http://www.di.ens.fr/~pouzet/cours/systeme/bib/edvac.pdf. Accessed 16 Jan 2018
Obermair G (1975) Mensch und Kybernetik. Heyne, München
Probst GJB (1987) Selbstorganisation. Ordnungsprozesse in sozialen Systemen aus ganzheitlicher Sicht. Parey, Berlin/Hamburg
Rid T (2016) Maschinendämmerung. Eine kurze Geschichte der Kybernetik. Propyläen/Ullstein, Berlin
Rieger S (2003) Kybernetische Anthropologie. Eine Geschichte der Virtualität. Suhrkamp TB, Berlin
Steinbuch K (1965) Automat und Mensch. Kybernetische Tatsachen und Hypothesen, 3rd rev. and exp. edn. Springer, Berlin/Heidelberg/New York
Steinbuch K (1971) Automat und Mensch. Auf dem Weg zu einer kybernetischen Anthropologie, 4th rev. edn. Springer, Berlin/Heidelberg/New York
Ulrich H, Probst GJB (1995) Anleitung zum ganzheitlichen Denken und Handeln. Haupt, Bern/Stuttgart
Vester F (1980) Neuland des Denkens. Vom technokratischen zum kybernetischen Zeitalter. DVA, Stuttgart
Vester F (1999) Die Kunst, vernetzt zu denken. Ideen und Werkzeuge für einen neuen Umgang mit Komplexität. DVA, Stuttgart
3 Basic Concepts and Language of Cybernetics
Summary
In the chapter “Basic Concepts and Language of Cybernetics,” various basic concepts of cybernetics are discussed in the necessary detail, contributing to a fundamental understanding of complex cybernetic relationships. Graphical representations with practical process examples of cybernetic applications complement the texts and familiarize the reader with the various social, technical, and economic areas that are permeated with cybernetic systems of various kinds. How the language of cybernetics influenced society, culture, and technology in its early days is described by Rid (2016, pp. 197–199) as follows: The computer was something so new in the post-World War II era that it seemed to promise limitless progress. The new “thinking machines” could calculate the construction of skyscrapers, the operation of stock exchanges, and the course of moon missions. The imagination knew no bounds. The “giant brains” were miracles in waiting, which would change everything: war and work would be automated; organisms and machines would merge and create new life forms. Many of these visions of the modern future, however, anticipated the state of technology at the time by decades. […] The comparison with the human brain was most immediate. If the thinking machine was a simplified brain, the reverse question arose almost by itself: Was not the real brain just a complex machine? The mind suddenly transformed into something that could be understood, described, and analyzed in the language of technology. And cybernetics provided the language: input and output, negative feedback, self-regulation, equilibrium, goal, and purpose. […] The myth of cybernetics had significant cultural impacts. In its countercultural and highly symbolic interpretation, Wiener’s work forms one of the oldest and deepest roots of the unshakable belief in technical solutions that would later characterize Silicon Valley culture. 
© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 E. W. U. Küppers, A Transdisciplinary Introduction to the World of Cybernetics, https://doi.org/10.1007/978-3-658-42117-5_3
Without delving more deeply into the undoubtedly exciting and culturally diverse spheres of influence of cybernetics in the 1960s and 1970s, which are also associated with names such as Timothy Leary, who stood for a worldwide counterculture to technology, and L. Ron Hubbard, the later founder of Scientology, two overarching features of cybernetic systems will be pointed out here before a series of specific basic concepts of cybernetics is discussed.

The Concept of Information
Every piece of information has a material or energetic carrier to which it is bound. This concept of information is central to cybernetics. Anschütz (1967, p. 12) formulates this as follows: Cybernetics considers natural processes from the aspect of being information. The material or energetic arrangement in which the carrier of information appears is called the information-processing system (IPS).
From this, it can be deduced:
Key statement The terms “information” and “information-processing system” (IPS) are the basic concepts of cybernetics. Any theory that makes proper use of these concepts is a cybernetic theory.

From a communicative perspective, which connects all chapters of this book, the following also applies:
Key Statement Information-processing systems are communication systems in which information is exchanged and processed through linear and circular transport routes.

Cybernetic Abstraction
Anschütz (1967, p. 14) further states: “By separating information from the accompanying energetic and material processes, cybernetics abstracts from the nature of these carrier processes.” A characteristic of cybernetics as a whole is therefore:
Key Statement “Any mathematical treatment of natural processes that wants to describe properties of energy and matter does not belong to cybernetics.” (Anschütz 1967, p. 14)
“It follows that for cybernetics, a large number of processes are equivalent, which differ only in the carrier of information” (ibid.), from which another key statement can be derived:
Key Statement “Information-processing systems are considered similar for cybernetics if their function is the same.” (Anschütz 1967, p. 14)
Fig. 3.1 Black-box model as an information-processing system (IPS)
A typical representation of an IPS, but also of other systems whose internal structure and function are unknown, is the black-box model, as shown in Fig. 3.1. The information inputs can be considered as cause and the information outputs as effect. Fig. 3.2 shows the black box of a human, with the senses as information inputs, the associated physical carriers of information, and the information outputs. The cause of a human’s effect on the outside world is information originating from the brain within the information-processing system (IPS) of the human. We now turn to some conceptual tools that we take for granted today in dealing with cybernetic systems.
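The black-box view can be made concrete in a few lines of code. The sketch below is illustrative and not from the book; the class and function names are my own. It shows two systems with different internals but identical input-output behavior, which cybernetics, following Anschütz’s key statement, treats as similar information-processing systems.

```python
# Illustrative sketch (not from the book): a black-box IPS is characterized
# only by its input-output behavior; the internal structure stays hidden.

class BlackBoxIPS:
    """An information-processing system observed only through inputs and outputs."""

    def __init__(self, transform):
        self._transform = transform  # hidden internal structure

    def process(self, information_in):
        """Cause (information input) -> effect (information output)."""
        return self._transform(information_in)

# Two systems with different internals but the same function count,
# for cybernetics, as similar information-processing systems.
ips_a = BlackBoxIPS(lambda text: text.upper())
ips_b = BlackBoxIPS(lambda text: "".join(c.upper() for c in text))

print(ips_a.process("signal"))                             # SIGNAL
print(ips_a.process("signal") == ips_b.process("signal"))  # True
```

The observer who only sees inputs and outputs cannot, and for cybernetic purposes need not, distinguish the two systems.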
Fig. 3.2 Black-box model as the IPS of a human, with sensory information inputs (seeing/reading, hearing, smelling, tasting, touch, heat sensation, muscle position, gravity) and information outputs (language, writing, gestures, facial expressions, action on objects). (Source: after Anschütz 1967, p. 16, modified by the author)
3.1 System

According to the Dictionary of Cybernetics (Klaus and Liebscher 1976, pp. 800–816), the term system, in connection with cybernetic systems, is subjected to a broad spectrum of analysis. In general, a “system” is considered as an […] ordered totality of material or intellectual objects [and, in relation to cybernetics,] […] a multitude of interpretations, which are usually associated with a specification for special purposes in cybernetics and other mathematical disciplines. Within the framework of cybernetics, there are system concepts of this kind, especially in automata theory (various concepts of automata, […] of the nervous system), control theory (concept of control systems), and information theory (in particular, concepts of message transmission or communication systems). In addition, there are various approaches to a general cybernetic system theory, within which, among other things, various (cybernetic) concepts of self-organizing, in particular self-optimizing, learning, and self-reproducing systems have been developed. (ibid.)
Key statement “In general terms, cybernetics deals with a special class of
material dynamic systems, whose most important characteristic is their relative stability against influences from the environment.” (ibid.) The most perfectly constructed cybernetic systems in this sense are known to us so far from the world of living organisms. Here, it can also be studied how independence from the environment has developed step by step in a long-lasting process. (ibid.).
Evolution, so to speak, makes use of cybernetic tools over an unimaginably long period of time, enabling it to achieve progress despite significant disturbances, without destroying the system as a whole. At the highest level of development of life known to us (at the level of development of mammals and humans), for example, a large number of parameters are kept relatively constant within certain limits, independent of corresponding changes in the environment. In this way, a relatively stable internal milieu is maintained, with specific homeostatic processes taking place. The simplest cybernetic system in this sense is the (simple) control loop; the homeostatic process is maintained here by a compensating feedback. It can be shown that every such control system is dominated by an (internal) […] contradiction, which constitutes the essence of the respective homeostatic process. As a result, in addition to relative stability against external influences [which we also know as disturbances, author’s note], the relatively independent activity is an essential characteristic of cybernetic systems. In this respect, the concept of the cybernetic system has a relatively extensive meaning; the type of systems investigated by cybernetics is apparently an essential type in all areas of reality. (ibid.)
Key statement The real world is not simply a system of systems, but a system of relatively independent, “self-regulating and self-moving […] (cybernetic) systems” due to internal contradictions. (ibid.)
We will discuss some of the concepts associated with the preceding text in more detail later.
3.2 Control Circuit and Elements

The simplest cybernetic system is a simple control circuit, as can be seen in Fig. 3.4. First, however, we will discuss some related properties of control circuits in more detail.

Control Circuit
A control circuit is characterized as a closed loop of action that, influenced by external disturbances, dynamically moves towards a specific goal given by the reference variable. If the difference between the given reference variable as the goal and the actually measurable controlled variable tends towards zero, a certain stability of the control circuit is achieved. The negative feedback is the decisive component of the control circuit, keeping it in a dynamic or steady-state equilibrium, e.g., in biological control processes, through regulation or adaptation (see also Sect. 3.7).

Stability
Stability is one of the most important properties of a control circuit. Networked control circuits in organisms possess sufficient potential to steer themselves back into stable positions through their dynamically self-developed setpoints and self-regulation processes in the event of disturbances that lead to unstable behavior. However, the danger of oscillation due to overshooting and undershooting around the setpoint (unstable deflection due to disturbance variables) is permanently present. Diseases, for example, that affect a healthy organism, whether through bacteria, viruses, or other external disturbance influences, disrupt the stability of the organism (oscillation of the organism’s control variables outside given tolerances). The organism then tries to counteract this by activating its self-healing powers. If successful, a steady-state equilibrium within normal tolerance limits is reached after some time. If not, additional external manipulated variables (medications, etc.) must be dosed and activated until the equilibrium of the organism is sustainably restored.
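These properties can be made tangible with a toy simulation. The sketch below is illustrative, not from the book; the gains and the disturbance value are assumed. A purely proportional controller drives the controlled variable toward a body-temperature-like setpoint by negative feedback, despite a constant disturbance.

```python
# Illustrative control-loop sketch (assumed constants): negative feedback
# drives the controlled variable x(t) toward the reference variable w(t)
# despite a constant disturbance z.

def simulate_control_loop(w=37.0, x0=30.0, kp=0.5, z=-0.2, steps=60):
    """Return the trajectory of the controlled variable x(t)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        e = w - x          # control difference e(t) = w(t) - x(t)
        y = kp * e         # manipulated variable (proportional controller)
        x = x + y + z      # controlled system: actuation plus disturbance
        trajectory.append(x)
    return trajectory

traj = simulate_control_loop()
print(round(traj[-1], 2))  # 36.6: near w, offset by the disturbance
```

The loop settles where the controller output exactly cancels the disturbance (e = −z/kp = 0.4), illustrating the residual control deviation that remains with purely proportional regulation.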
Already in the early years of cybernetics, Norbert Wiener and his collaborator Julian Bigelow were confronted with the problem of oscillation in control systems. They investigated the question of whether oscillations were present in living beings, specifically in human goal-directed movements, similar to technical control processes. In Chapter IV “Feedback and Oscillation” (pp. 145–170) of his book “Control and Communication in Living Beings and in the Machine,” Wiener describes various examples that are also described in this chapter. Wiener writes (1963, p. 32): Mr. Bigelow and I came to the conclusion that an extraordinarily important factor in volitional action is what control engineers call feedback.
And further on p. 34: However, excessive feedback is as serious an obstacle to organized action as disturbed feedback. In view of these possibilities, Mr. Bigelow and I approached Dr. Rosenblueth [Mexican physiologist, d. A.] with a very specific question: Is there any pathological condition in which the patient, when attempting to perform any volitional act such as picking up a pencil, overshoots the target and falls into uncontrollable oscillation? Dr. Rosenblueth immediately replied that there is such a well-known condition called intention tremor (purpose-tremor), which is often wrongly attributed to the cerebellum.
For Wiener and his colleague Bigelow, this answer was an important confirmation of their assumption of the similarity or equality of control mechanisms in living beings and machines. Fig. 3.3 shows three control systems of varying stability.

Fig. 3.3 Time behavior of control systems with different stabilities or oscillation behavior: (A) stable control system with overshoot, (B) control system at the stability limit, (C) unstable control system. (Source: after Steinbuch 1965, p. 137, modified by the author)
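The three cases of Fig. 3.3 can be imitated with a minimal discrete loop. This is an illustrative sketch with assumed gain values, not Steinbuch’s model: the loop gain k alone decides whether the step response is stable, sits at the stability limit, or is unstable.

```python
# Illustrative discrete model of the three cases in Fig. 3.3 (assumed gains):
# the controlled variable chases a reference step w: 0 -> 1; the loop gain k
# determines the oscillation behavior.

def step_response(k, steps=20, w=1.0):
    x = 0.0
    xs = [x]
    for _ in range(steps):
        x = x + k * (w - x)   # one pass through controller and control path
        xs.append(x)
    return xs

stable   = step_response(k=0.5)   # A: deviation shrinks monotonically
at_limit = step_response(k=2.0)   # B: permanent oscillation around w
unstable = step_response(k=2.5)   # C: oscillation grows without bound

print(round(stable[-1], 3))       # 1.0: control deviation minimized
print(at_limit[:4])               # [0.0, 2.0, 0.0, 2.0]: stability limit
print(abs(unstable[-1]) > 1000)   # True: deflection beyond any bound
```

For k below 2 the deviation decays, at k = 2 it oscillates with constant amplitude around the setpoint, and above 2 each cycle amplifies the previous deviation.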
A significant problem in the construction of controllers is to keep the control deviation from the setpoint or reference variable as low as possible despite changing disturbances. Theoretically and practically, this can be tested by specifying a step function of the reference variable—in Fig. 3.3 this is w(t)—which switches the value w(t) = 0 to the value w(t) = 1. The time delay of the controlled variable x(t) in response to the step function w(t) is coupled with the inevitable transit time in the controller and control path, leading to a “dead time” until the controlled variable can react. Fig. 3.3 shows under A the qualitative time course of a controlled variable for a stable control system, in which the time course of the controlled variable leads to a targeted minimization of the control deviation (xe − w1). Under B, the time course of the controlled variable shows a more unstable behavior of the control system than under A due to multiple overshooting and undershooting around the setpoint. The sketch under C finally represents a completely unstable control system (Steinbuch 1965, pp. 137–138).

Control—Regulation—Adaptation
Fundamentally, control, regulation, and adaptation are three methodical approaches to influencing a process. In control, an open process is based on a control chain of sequentially connected control elements, as shown in Fig. 3.4. In addition to control or steering, feedback (in connection with regulation) and human-machine relationships were the three ideas at the center of the new discipline of cybernetics in the mid-1940s. Rid (2016, p. 70) writes: The basic idea of cybernetics was that of control or steering. The purpose of machines and living beings is to control and steer their environment; not just to observe, but to dominate. The aspect of control is fundamental. The concept of entropy [disorder, ed.] illustrates this. […] By nature, there is a tendency for entropy to increase.
[…] To halt or reverse this trend towards growing disorder requires control. Control means that a system interacts with its environment and can shape it, at least to some extent.
Maintaining this so-called order through control or regulation in an environment of increasing disorder (entropy) is, however, thermodynamically bound to two conditions: (1) the supply of energy and (2) a limited lifespan. For example, every living being maintains its internal order structure only by having sufficient energy (e.g., in the form of food) available, and at the expense of a limited lifespan. Similarly, machines, whose “food” usually consists of electricity, also have a limited lifespan due to unavoidable wear and tear.
Fig. 3.4 Control, regulation, adaptation: control chain and control loops as the simplest cybernetic systems in the form of block diagrams
Regulation, in contrast, takes place in a closed sequence of actions. The reference variable, which specifies the setpoint or the goal of the regulation, is given from the outside, while the control system autonomously changes its behavior in such a way that the setpoint is achieved. Adaptation means a control process that tends towards a balance between the system and the environment, with the setpoint being developed by the adaptation-oriented control process itself and serving as the starting point for subsequent control processes. This type of adapted control is in all likelihood inherent in biological control processes, in far more complex and interconnected forms than can be represented here. Keywords for this are self-regulation and self-organization (see, among others, Flechtner 1970, p. 44). All three types of control or regulation are shown in Fig. 3.4.

Of course, in addition to the basic control loops in Fig. 3.4, there are also a variety of composite variations of control loops in practice, which appear, among other things, as cascade control, control with pre-filter and feedforward, or as a controller for a multivariable system. These three variations can be seen in the sequence of Figs. 3.5, 3.6 and 3.7. The block diagrams are supplemented by practical application examples.

A cascade control includes several controllers, with the associated control processes nested within one another. Cascade controllers are set from the inside out, meaning: first, disturbances in the control path are compensated for in the inner control loop, which is fed an auxiliary control variable, via a so-called follower controller, so that disturbances no longer pass through the entire control path. In addition, the follower controller can provide a limitation of the auxiliary control variable, which can be an electrical current, a mechanical feed, or a hydrodynamic flow, depending on the process.
The outer control process includes the master controller and the outer control path; the reference variable of the follower controller is thus derived from the manipulated variable of the master controller. A typical application area for cascade control systems is thermal processes with a large time delay (e.g., heating workpieces in a furnace). The master controller regulates the workpiece temperature and specifies the setpoint for the faster slave controller, which controls the temperature of the thermocouple (see Eurotherm Regler GmbH, 2604_Kaskadenregelung_HA151069GER.pdf, https://www.eurotherm.de/index.php?route=module/downloads/get&download_id=1248. Accessed on 18.01.2018).

Control designs often face the problem that good reference tracking is not always synonymous with good disturbance behavior. The basic structure of the control device then tries—for example, by connecting an auxiliary control variable (see Fig. 3.5)—to minimize the disturbance variable. If this is not successful, an extension of the basic control loop by a pre-filter and feedforward control can provide additional degrees of freedom to remedy the situation.
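The furnace cascade described above can be sketched as a toy simulation. It is illustrative only; all gains, limits, and thermal constants are assumed and are not taken from the Eurotherm application note.

```python
# Illustrative cascade-control sketch (assumed constants): the master
# controller turns the outer control difference into a setpoint for the
# faster follower (slave) controller, which also limits the auxiliary
# control variable (the heater temperature).

def simulate_cascade(w=200.0, steps=400):
    """Furnace toy model: master regulates the workpiece temperature,
    the slave regulates the heater temperature."""
    heater, workpiece = 20.0, 20.0
    for _ in range(steps):
        # Master: outer control difference -> setpoint for the inner loop,
        # limited to 400 (limitation of the auxiliary control variable).
        heater_set = min(400.0, workpiece + 4.0 * (w - workpiece))
        # Slave: fast inner loop drives the heater temperature and rejects
        # the inner disturbance (here a constant heat loss of 0.5 per step).
        heater += 0.8 * (heater_set - heater) - 0.5
        # Outer control path: the workpiece slowly follows the heater.
        workpiece += 0.05 * (heater - workpiece)
    return workpiece

print(round(simulate_cascade(), 1))  # 199.8: settles just below the setpoint
```

Because the fast inner loop absorbs the heat-loss disturbance before it traverses the slow outer path, the workpiece temperature approaches the setpoint with only a small proportional offset.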
Fig. 3.5 Cascade control system as a cybernetic system variation, with master controller, follower (slave) controller, inner and outer control loops, and heating elements acting on the workpiece. (Source: Adapted from Philippsen 2015, p. 170; practical example: Eurotherm Regler GmbH, 2604_Kaskadenregelung_HA151069GER.pdf, https://www.eurotherm.de/index.php?route=module/downloads/get&download_id=1248. Accessed on 18.01.2018)
Fig. 3.6 Control system with pre-filter and feedforward as a cybernetic system variation. (Source: Adapted from Philippsen 2015, p. 174; practical example: feedforward in the context of room temperature control, symbolized by a current (Vaillant) and a previous (Theben) model of a room temperature controller)
The control loop is set for fast disturbance response. The possibly strong overshooting of the reference step response in the basic control loop is avoided with the help of the pre-filter. […] The extension of the structure with pre-filter by a feedforward control […] [see Fig. 3.6, author’s note] theoretically offers the possibility to design reference and disturbance behavior independently of each other. (Philippsen 2015, p. 174)
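The feedforward principle can be sketched as a toy room-temperature model. This is illustrative; the gains and the loss coefficient are assumed. The feedforward term compensates the measured disturbance (heat loss toward the outdoor air) directly, so the feedback controller no longer carries a steady-state offset.

```python
# Illustrative feedforward sketch (assumed constants): the known disturbance
# (heat loss toward the outdoor temperature) is compensated directly, so the
# feedback part only corrects the residual deviation.

def simulate_room(w=21.0, outdoor=0.0, feedforward=True, steps=200):
    x = 15.0                                # room temperature (controlled variable)
    kp, k_loss, k_ff = 0.3, 0.1, 0.1        # k_ff chosen to match the loss term
    for _ in range(steps):
        heat = kp * (w - x)                 # feedback part
        if feedforward:
            heat += k_ff * (w - outdoor)    # feedforward from the outside temperature
        x += heat - k_loss * (x - outdoor)  # room dynamics with heat loss
    return x

print(round(simulate_room(feedforward=True), 2))   # 21.0: no steady-state offset
print(round(simulate_room(feedforward=False), 2))  # 15.75: offset remains
```

With the feedforward term sized to the loss coefficient, the steady state satisfies x = w exactly; without it, the proportional loop alone settles well below the setpoint.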
A typical application area for the basic control loop extended by pre-filter and feedforward control is drive technology, which is used in various electrical, mechatronic, combustion engineering, and other application fields. “The feedforward control is implemented as a speed or acceleration feedforward control. The symmetry filter [pre-filter, author’s note] ensures that the controller essentially only compensates for the deviation from the course of the reference variable.” (ibid., p. 176). The practical example refers to a room temperature control. Since different heat demands occur in living rooms over the seasons, the flow temperature in the heating circuit is limited by feedforward control via a controller—as a function of the outside temperature. This is intended to prevent excessive overshooting of the room temperature (controlled variable) and consequently heat losses. Multivariable control systems, such as the block diagram in the form of a two-variable control without coupling, e.g., for the supply of cold and hot water to a container, are found in many everyday application areas. Highly topical—but also very critical (!)—is
a multivariable control in connection with the measurement of car exhaust components, which has been a topic of conversation in many industrialized countries in recent years (since 2008). However, multivariable control systems, whether in cars, industrial machines, or everyday appliances, are predominantly very useful.

Fig. 3.7 Multi-variable control system as a cybernetic system variation, here a two-variable control without consideration of coupling. (Source: Adapted from Philippsen 2015, p. 178; practical example: servo drive system iTAS for automated guided vehicles by Wittenstein cyber motor GmbH, https://www.wittenstein.de/download/itas-de.pdf. Accessed on 18.01.2018)
3.3 Negative Feedback—Balanced Feedback

A control loop or control system is controlled by negative feedback. Feedback occurs when the output signal of an information-processing system is fed back to the input, creating a closed loop. It is not surprising that new disciplines in science have their origins in the practical environment of the military. This was the case with the interdisciplinary discipline of bionics (Küppers 2015), and cybernetics also emerged in the military environment, with pilot-aircraft behavior and the targeting accuracy of artillery guns playing a role in connection with the problem of control. Rid (2016, p. 71) states:
As another example, Wiener mentions, unsurprisingly, an artillery gun, in which feedback ensures that the muzzle is actually aimed at the target. The mechanism that controlled the rotation of the gun turret also relied on feedback […]. Its performance could vary: extreme cold thickened the lubricant in the bearings and made the rotation more difficult, which could be further impaired by sand and dirt. It was therefore crucial to check the output, the actual performance, through feedback. Feedback often counteracts what a system is already doing, for example by stopping a motor that is turning a turret or instructing a thermostat to switch off a heater. This is called negative feedback.
Wiener describes the core of the cybernetic worldview as follows (ibid., p. 70): I claim that the physical functioning of the living individual and some of the newer communication machines in their analog attempts to control entropy through feedback are completely parallel.
And Wiener continues (ibid., p. 71):
Key Statement “[Feedback is] […] the property of being able to regulate future behavior based on past performance.”

In 1788, long before the term cybernetics became known and a scientific discipline developed from it, the Scottish inventor James Watt (1736–1819) already implemented the cybernetic feedback principle in his steam engine, and he did so twice, as Anschütz describes (1967, p. 74): James Watt realized this feedback principle in his inventions twice, because the Watt steam engine alone represents a feedback system. The position of the piston determines the position of the steam inlet valve, so that the steam can always act on the correct side of the piston.
The second feedback is generated by the so-called centrifugal governor, in which a throttle valve in the steam supply line is connected to a rotating flyball mechanism. This consists of two ball masses attached to rod joints, rotating around a vertical axis. The higher the steam supply and thus the rotational speed, the higher the ball masses are lifted against gravity by centrifugal force, and the more the throttle valve closes the steam supply—and vice versa. The purpose of this feedback is to achieve a constant working speed (Fig. 3.8).
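The governor’s negative feedback can be captured in a few lines. This is an illustrative physics sketch with assumed constants, not a model of a real engine: rising speed lifts the flyballs, the throttle closes, steam torque drops, and the speed returns toward a constant working value.

```python
# Illustrative centrifugal-governor sketch (assumed constants): speed lifts
# the flyballs, lift closes the throttle, the throttle meters the steam --
# negative feedback toward a constant working speed.

def simulate_governor(steps=500):
    speed = 40.0
    for _ in range(steps):
        lift = max(0.0, min(1.0, speed / 200.0))  # ball height grows with speed
        throttle = 1.0 - lift                     # more lift -> valve closes
        steam_torque = 12.0 * throttle            # steam drives the shaft
        load_torque = 0.06 * speed                # load and friction slow it
        speed += steam_torque - load_torque
    return speed

print(round(simulate_governor(), 1))  # 100.0: constant working speed
```

The equilibrium lies where steam torque and load torque cancel; any deviation in either direction is counteracted, which is exactly the balancing effect the text describes.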
3.4 Positive Feedback—Reinforced Feedback

In contrast to the negative feedback aimed at balance and stability, there is also a so-called positive feedback. Over generally multiple circulation cycles of positive feedback in a control loop, it can lead either to a standstill of the control function or to a one-sided build-up beyond the physical limits of the system, ultimately resulting in the destruction of the system itself.
Fig. 3.8 Feedback systems in Watt’s steam engine: sliding control valve RS for the steam supply and centrifugal governor F for a constant process flow. (Sources: left—steam engine—https://de.wikipedia.org/wiki/Dampfmaschine; right—centrifugal governor—https://de.wikipedia.org/wiki/Fliehkraftregler. Accessed on 20.01.2018; modified by the author)
Beyond technical processes, positive feedback is present in the economic and social environment, and nature also works with it in a well-regulated way. It is mostly quantitative effects that are associated with positive feedback. For example, pioneer plants that colonize a fallow area in nature show explosive growth up to a certain limit, which marks a transition to successor species with a different, less vigorous but adapted growth behavior. Typical for these coupled growth phases is the development towards a forest, which represents the final stage of development as a climax community. Characteristic of the functioning of this type of coupled, biological positive feedback is the so-called logistic curve function, or S-curve. It is characterized by the fact that the growth periods of the organisms alternate at an inflection point, thus ensuring progress without risking crossing the system boundary and thereby having a destructive effect on the overall system.

In technology and the economy, positive feedback shows up, for example, in one-sided, economically driven growth targets (e.g., maximization of sales figures and profits). Their results often lead to immature mass products, due to a lack of suitable controls through negative feedback. As additional networked positive feedback, negative side effects (cartels, product manipulations, planned obsolescence, as known from various industries, e.g., the car, electrical, and energy industries) can further significantly reduce the quality of the products. The economic damage is further fueled by the linking of several positive circulation processes (for the example of the politically desired “energy turnaround 2011” in Germany, see Küppers 2013, pp. 190–209).
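The S-curve described above is easy to reproduce in a short, illustrative sketch; the growth rate and carrying capacity are assumed. The pure positive feedback term r·n is damped by the limiting factor (1 − n/K), producing exponential growth that bends over at the inflection point n = K/2 and saturates at the system boundary K instead of crossing it.

```python
# Illustrative logistic-growth sketch (assumed parameters): positive feedback
# (growth proportional to the population n) coupled with a limiting term,
# yielding the S-curve with an inflection point at K/2.

def logistic_growth(r=0.2, K=1000.0, n0=10.0, steps=80):
    n = n0
    growth = [n]
    for _ in range(steps):
        n += r * n * (1.0 - n / K)   # positive feedback, damped near the limit K
        growth.append(n)
    return growth

curve = logistic_growth()
print(round(curve[-1]))  # 1000: saturates at the carrying capacity K
```

Dropping the limiting factor turns this into unbounded exponential build-up, which is exactly the destructive, uncontrolled positive feedback the text contrasts with nature’s coupled growth phases.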
In the social communicative context, positive feedback plays a dominant role. Almost every type of conflict-laden communication that can lead to non-verbal, verbal, or physical injury of the communication partners is triggered by mutually escalating positive feedback. The conversation gets out of control; everyone insists on their point of view and their rights. Often only a third, external instance can ultimately lead out of this so-called "vicious circle" and offer solutions that, in the best case, include a negative-feedback "virtuous circle" to calm the conflict.
3.5 Aim or Purpose

In control loop processes, the so-called reference variable, whether externally specified or internally determined by the control system, is linked to a goal or purpose of the control function. For example, the reference variable body temperature specifies a certain setpoint for an organism (approx. 37 °C for humans), which has proven itself for further development and survival (purpose) in a dynamic environment.
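A reference variable with a fixed setpoint can be made tangible with a minimal negative-feedback loop. In the following Python sketch, the controller gain and the constant disturbance are invented values for illustration, not physiological constants:

```python
# Minimal proportional (P) control loop around a fixed setpoint.
# GAIN and COOLING are illustrative assumptions, not physiological data.

SETPOINT = 37.0   # reference variable: target "body temperature" in °C
GAIN = 0.3        # proportional controller gain
COOLING = 0.5     # constant environmental disturbance per time step

temperature = 34.0  # disturbed initial state
for _ in range(100):
    error = SETPOINT - temperature      # control deviation
    heating = GAIN * error              # corrective action (negative feedback)
    temperature += heating - COOLING    # plant: correction minus disturbance

# Settles at SETPOINT - COOLING/GAIN ≈ 35.33 °C: a pure P controller
# leaves a steady-state offset under a constant disturbance.
print(round(temperature, 2))
```

The remaining offset is the classic weakness of purely proportional control; an additional integral component would drive the deviation from the setpoint to zero.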
3.6 Self-Regulation

Self-regulation in a control loop is given when the reference variable, which specifies the setpoint, is set as part of the control process itself and continuously adapts "optimally" to new situations as the process unfolds. In the 1970s, the term self-regulation was often used in connection with pedagogy, in the form of self-regulated, self-determined, or self-controlled learning in children (Hentig 1965). Organisms are permeated with self-regulatory functions. One example is respiration, or breathing frequency. When an untrained runner runs, their breathing frequency increases rapidly with increasing running speed because more oxygen is needed, up to a limit that forces the runner to reduce their speed for lack of performance, whereupon the breathing frequency drops again and normal values are re-established. Trained runners also regulate their oxygen supply through the variable breathing frequency. However, they do this much more effectively and adaptively, allowing them to cover longer distances without reaching their absolute performance limit. In the economic environment, the so-called "Invisible Hand of the Market" as a means of self-regulation of markets enjoys great popularity to this day. But why? It states that if all market participants have their own well-being in mind, i.e., think rationally, the self-regulation of economic activity leads to optimal production conditions, that is, to optimal products or product qualities and their distribution in the market. Market realities have been speaking a clearly different language for decades, exposing the underlying economic self-regulation as a chimera. The fact that many people still cling to it despite the clear refutation of this decades-long economic misorientation
(invisible hand, Homo oeconomicus) is surprising and can be explained psychologically by cognitive dissonance (Aronson et al. 2004, p. 226): According to the theory of cognitive dissonance, people experience discomfort (dissonance) whenever they are confronted with cognitions about any aspect of their behavior that do not match their self-concept.
Technical self-regulation processes can be seen in all machine operations when it comes to maintaining a certain characteristic, geometry, or process sequence over time. Follow-up controllers or optimal controllers are examples of this. In technical cutting processes, for example, self-sharpening cutting inserts are used, which maintain workpiece quality over a longer tool life (the time between two tool replacements due to wear and loss of quality in the workpieces being machined) than usual. Incidentally, the biological material-technical model for self-sharpening technical tools is the self-sharpening teeth of rats (Rattus).
3.7 Flow Equilibrium—Steady-State and Other Equilibria

A steady-state, or dynamic equilibrium, is a state of an open system, such as an organism, in which inflows and outflows of energy and substances balance each other over time. In contrast, an open system like an organism is in a so-called transient state when "substance flows are not balanced and functional rates probably depend on rapidly changing concentrations and the interaction of many factors" (Odum 1991, p. 145). It is the famous complex food webs of nature that are significantly involved in the equilibrium conditions, and they need to be understood if we want to grasp how nature works. While the material production rate was decisive for organismic equilibrium conditions during growth periods, with increasing exploitation of space and nutrients it becomes "limited by the rate of decomposition and nutrient regeneration. A climax steady-state develops [e.g., the climax community of a forest, author's note], in which production and respiration balance each other and in which there is little or no net production and no further increase in biomass—growth—to be recorded." (ibid., p. 203)
In the context of his research on the biophysics of open systems and the thermodynamics of living systems, the Austrian biologist and systems theorist Ludwig von Bertalanffy (1901–1972) introduced the concept of the steady-state. According to von Bertalanffy, different types of system equilibria can be distinguished (after: https://de.wikipedia.org/wiki/Ludwig_von_Bertalanffy. Accessed on 22.01.2018):

1. The concept of dynamic equilibrium is seen as an umbrella term for true equilibrium and steady-state.
2. A true equilibrium occurs in "closed systems" that exchange neither energy nor matter with their environment. Entropy is maximal, so no work is possible.
The macroscopic state variables remain constant, while microscopic processes continue, as the example of chemical equilibrium shows: Chemical equilibrium is a state in which the overall reaction appears to be at rest, i.e., no changes are detectable. The externally observable reaction rate is zero. Nevertheless, the chemical reactions (“forward” and “reverse” reactions) continue to occur, at the same rate in both directions. It is therefore not a static equilibrium, as it appears externally, but a dynamic equilibrium in which reactions continue to take place. (https://de.wikipedia.org/ wiki/Chemisches_Gleichgewicht. Accessed on 22.01.2018)
3. A steady-state requires an "open system" that can exchange energy and matter with its environment. The steady-state is a stationary state with temporally constant, persistent system inputs and system outputs, whose net difference is approximately zero.
4. The homeostatic equilibrium is also a steady-state that requires an open system. Secondary regulations linked to an information system are the triggers that lead to a homeostatic system equilibrium through negative feedback.

We began Sect. 3.7 with an example of a flow equilibrium from nature, and we want to end with a technical example familiar to all of us: the inflow and outflow of a water container, such as a bathtub. If the hydrodynamic volume flows of inflow and outflow are in equilibrium, so that the water level remains constant, the system is in a dynamic equilibrium. It transitions immediately to a non-equilibrium state if, owing to a disturbance (e.g., a higher water inflow setting or a blockage in the drain), the water level varies.
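The bathtub example can be simulated in a few lines. Assuming, for illustration, a constant inflow and an outflow proportional to the current water level (both values invented), the level converges to a flow equilibrium in which inflow and outflow cancel:

```python
# Flow equilibrium (steady-state) of a water container:
# constant inflow, outflow proportional to the current level (illustrative model).

INFLOW = 2.0        # liters per time step (assumption)
DRAIN_COEFF = 0.1   # outflow fraction of the level per time step (assumption)

level = 0.0
for _ in range(200):
    outflow = DRAIN_COEFF * level
    level += INFLOW - outflow

# In the steady-state the net flow is (approximately) zero and the level
# settles at INFLOW / DRAIN_COEFF = 20.0.
print(round(level, 2), round(INFLOW - DRAIN_COEFF * level, 4))
```

A disturbance (changing INFLOW or DRAIN_COEFF) shifts the system to a new equilibrium level, exactly as in the bathtub example above.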
3.8 Homeostasis

To explain this term, already mentioned in Sect. 3.7, we let Karl Steinbuch speak, using an organism as an illustrative system (Steinbuch 1965, p. 145): The totality of all control processes that ensure that certain states of the organism (e.g. body posture, body temperature, blood sugar content, blood oxygen content, etc.) remain within the limits permissible for survival is called "homeostasis". The English neurologist W. R. Ashby built a technical model of multiple interconnected control processes, which he calls a "homeostat".
We will discuss Ashby and his homeostat in more detail in Sect. 4.6. Steinbuch also refers to the interesting example of a mixed organic-technical control system, similar to the combined control between a car driver and the car, as shown in Fig. 2.4. It can be concluded that every healthy organism is in a state of homeostasis. The ability of living systems for self-regulation and self-organization, which within certain limits also includes self-healing processes, can compensate for a disturbance of the homeostatic equilibrium. Ashby's technical-electrical homeostat did exactly this: after various disturbances were initiated, it searched for and found the system equilibrium again.
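The search behavior of the homeostat can be caricatured in a few lines. The sketch below is a loose illustration of Ashby's principle of random reconfiguration (here: redrawing a single feedback parameter whenever the essential variable leaves its limits), not a model of his actual electromechanical device; all numbers are invented:

```python
import random

random.seed(42)  # reproducible run

LIMIT = 10.0  # permissible limits of the essential variable (assumption)

def stays_within_limits(a, x0=5.0, steps=100):
    """Essential variable x under feedback x -> a * x; stable if it never leaves the limits."""
    x = x0
    for _ in range(steps):
        x = a * x
        if abs(x) > LIMIT:
            return False
    return True

# Homeostat-like search: whenever the current configuration is unstable,
# a new feedback parameter is drawn at random (Ashby's "step switches").
a = random.uniform(-2.0, 2.0)
reconfigurations = 0
while not stays_within_limits(a):
    a = random.uniform(-2.0, 2.0)
    reconfigurations += 1

print(round(a, 3), reconfigurations)
```

The loop always terminates with a damping configuration (effectively |a| ≤ 1): the system "finds" an equilibrium-restoring setting by trial and error, which is the core idea of the homeostat.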
3.9 Variety

In cybernetics, variety is understood as a multiplicity of "tools" characterized by various actions, types of communication, effect processes, and more. According to W. R. Ashby, "variety serves to measure the complexity of a system" (https://de.wikipedia.org/wiki/Varietät_(Kybernetik). Accessed on 22.01.2018). The related terms variety theorem, variety number, and variety degree belong to this context. The variety theorem (V) states how large the disturbances (S) acting on a system are in relation to the system reactions (R) and what consequences (K) result from this. In formulaic terms, this yields (ibid.):
V(K) > V(S)/V(R)    (3.1)
An example from information technology gives the variety theorem a practical footing. Increasing "hacker attacks" via the Internet (acting disturbances V(S)) on networked digital information systems, such as mobile phones, company servers, public power grids, state computer systems, etc., have led in the past, and continue to lead, to enormous failures and subsequent problems (system reactions V(R)) of the affected information technology systems. The resulting consequences V(K) can be diverse and, under certain circumstances, expensive in nature: remedying the resulting subsequent problems, new redundant security systems, measures for system decentralization, hiring knowledgeable IT specialists, etc. In our increasingly digitized environment, the variety theorem (V) and its related problems and solutions have become a constant companion of our digital activities. Strategies for problem prevention (minimizing V(S)) are in a permanent race with the further development of disturbance attacks from V(S). It is the famous "hare-and-hedgehog race," whose outcome is completely uncertain from today's perspective. The parameters variety number V(Z) and variety degree V(G) for measuring complex project structures in cybernetic systems, introduced by Frahm (2011, p. 25), are given as follows. V(Z) is equal to the sum of all interactions W of a project structure divided by the number of order levels OE:
V(Z) = ΣW / OE    (3.2)
V(G) is equal to the sum of all interactions W of a project structure divided by the number of nodes K:
V(G) = ΣW / K    (3.3)
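Both parameters can be computed directly from a project structure. In the following Python sketch, the project graph with its order levels, nodes, and interactions is entirely invented for illustration:

```python
# Variety number V(Z) = sum of all interactions W / number of order levels OE
# Variety degree V(G) = sum of all interactions W / number of nodes K
# The project structure below is invented for illustration.

# Nodes grouped by order level (e.g., project, subprojects, work packages).
order_levels = {
    1: ["project"],
    2: ["subproject_a", "subproject_b"],
    3: ["wp_1", "wp_2", "wp_3", "wp_4"],
}

# Interactions (edges) between the nodes of the project structure.
interactions = [
    ("project", "subproject_a"), ("project", "subproject_b"),
    ("subproject_a", "wp_1"), ("subproject_a", "wp_2"),
    ("subproject_b", "wp_3"), ("subproject_b", "wp_4"),
    ("wp_2", "wp_3"),  # cross-coupling between work packages
]

W = len(interactions)                                   # sum of all interactions
OE = len(order_levels)                                  # number of order levels
K = sum(len(nodes) for nodes in order_levels.values())  # number of nodes

v_z = W / OE  # variety number, Eq. (3.2)
v_g = W / K   # variety degree, Eq. (3.3)
print(round(v_z, 2), round(v_g, 2))  # 7/3 ≈ 2.33 and 7/7 = 1.0
```

Adding cross-couplings between work packages raises W, and with it both measures, which matches the intuition that more interactions mean more complexity.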
For example, if we consider our animated nature with tens of billions and more interactions between organisms on the one hand and organisms with inanimate nature, as well
as order levels that extend from atomic to biospheric spaces, the sheer number of interactions alone results in an unimaginably high variety number. Social, technical, or economic systems do not reach variety numbers nearly as large, although their variety numbers, e.g., those of a dynamic traffic flow or a stationary power plant, are no less critical to consider owing to their high levels of complexity. For all systems whose complexity is measured by variety according to Ashby, the fundamental communicative prerequisite remains: Principle We must first learn and understand how to deal properly with the
complexity of systems—of whatever kind. The core of this learning and understanding is networked, systemic thinking and action. Without this, any—especially sustainable—solution approach in the environment of complex structures and processes is doomed to fail.
3.10 Ashby's Law of Requisite Variety

The law states that a system controlling another can compensate for more disturbances in the control process the greater its action variety is. Another formulation is: the greater the variety of a system, the more it can reduce the variety of its environment through control. Often, the law is cited in the stronger formulation that the variety of the control system must be at least as large as the variety of the occurring disturbances for it to be able to perform the control. (https://de.wikipedia.org/wiki/Ashbysches_Gesetz. Accessed on 22.01.2018)
Ashby himself speaks in his book “Introduction to Cybernetics” of “difference” as the most important concept of cybernetics (Ashby 2016, p. 25): […] difference(s); i.e., either two things are obviously different, or one thing has changed over time.
Mathematically interested readers can follow the derivation of Ashby's law of requisite variety in Ashby (2016, pp. 293–314), Chapter 11. In it, Ashby considers the move variety of two players, R and D, who must select numbers in turn according to a certain scheme. From this he derives the law of requisite variety, which states (ibid., p. 299): Given and fixed VD (variety of moves by D [author's note]), then VD – VR (VR = variety of moves by R [author's note]) can only become smaller by a corresponding increase in VR. Variety in the outcomes can only be reduced further if a corresponding increase in the variety of R occurs. […]. This is the law of requisite variety. To put it more clearly: only variety in R can lower the variety in D; only variety can destroy variety. This thesis is […] fundamental in general control theory […].
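Ashby's number game can be made concrete. In the sketch below, the outcome table T(d, r) = (d + r) mod V_D is an invented example, not Ashby's own; it shows that even an optimally playing regulator can reduce the outcome variety to at best ceil(V_D / V_R): only variety in R can destroy variety in D.

```python
import math

# Requisite variety in an Ashby-style game (illustrative outcome table):
# for each disturbance d, the regulator picks a response r, and the
# outcome is T(d, r) = (d + r) mod V_D, so each response shifts the disturbance.

V_D = 10  # variety of disturbances
V_R = 3   # variety of regulator responses

def outcome(d, r):
    return (d + r) % V_D

# Best strategy for the regulator: steer each block of V_R consecutive
# disturbances onto one common outcome.
outcomes = set()
for d in range(V_D):
    block = d // V_R                            # block of disturbances d belongs to
    target = min(block * V_R + V_R - 1, V_D - 1)
    r = target - d                              # chosen response, 0 <= r < V_R
    outcomes.add(outcome(d, r))

# The achievable outcome variety equals ceil(V_D / V_R).
print(len(outcomes), math.ceil(V_D / V_R))
```

With V_R = 3 responses against V_D = 10 disturbances, four distinct outcomes remain; only raising the regulator's variety V_R shrinks this number further, exactly as the law states.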
The difficulties in comprehending changes in order to make them more tangible and explainable, especially when they occur in imperceptibly small steps in highly complex environments, are demonstrated not least by our efforts to understand the course of climate on our planet. Even in a small dynamic system, perhaps comprising six to ten interconnected elements in a private or professional environment, we already recognize our limits in reliably capturing and processing differences according to temporal and structural changes of system elements. Another mnemonic is offered: Mnemonic In a cybernetic system, it is not the elements themselves, but
the informational-communicative changes through the coupled or feedback-linked linking processes (transport processes, flows) between the system elements that are decisive.
3.11 Autopoiesis

The Chilean scientists Humberto Maturana, a neurobiologist, and Francisco Varela, a biologist, philosopher, and neuroscientist, coined the theory of autopoiesis. The term autopoiesis, attributed to Maturana, refers to the self-creation and self-maintenance of living systems. Autopoietic systems are recursively (retroactively) constructed or organized, which means that the result of the interaction of their system components leads back to the same organization that produced the components. This characteristic form of internal organization, or self-organization, is a clear distinguishing feature from non-living systems. The product of the organization is the organization itself. System elements and their structure, or structural changes, arise from the circulation of existing system elements. Maturana points out that nervous systems do not have direct interfaces to the environment and must therefore rely on their own processes (see Varela et al. 1974, pp. 187–196). Operational closure and self-referentiality thus seem to be inherent in every autopoietic system. Operational closure refers to the internal organization and order of a system, while the system itself remains open to its environment. If an external observer, a human, were to observe another human, they would not be able to gain any knowledge about the internal structural coupling of the autopoietic system. Consequently, they would be able to analyze inputs and outputs, but the system itself would remain comparable to a black box for them.
3.12 System Modeling

Real structures of systems, be they natural, technical, economic, or social systems, in which negative and positive feedback loops (often non-linear) interact with each other, are hardly fully comprehensible and describable in their entirety. The reason for this is
the extremely high degree of complexity and dynamics inherent in such systems. Natural food webs are the best example of this from the biosphere; highly complex technical processes in the energy sector are another, from the economy; and communicative processes in the municipal, urban environment are a final one, from the social sector. The approximate capture of processes within real complex systems, which are always associated with conflicts and uncertainties, therefore relies on the analysis of system sections within certain limits. At this point, the tool of modeling comes into play. It attempts to capture the real conditions, structures (interconnected system elements), and processes (transport processes between the system elements) as realistically as possible, and to describe their state and development in a model. Without going into detail on all the prerequisites and boundary conditions that modeling requires (that would far exceed the scope of this book), Figs. 3.9, 3.10 and 3.11 show three
Fig. 3.9 Cybernetic system with negative feedback loops in a modeling example from nature. (Source: Bossel 2004, p. 21)
Fig. 3.10 Cybernetics of a municipal/regional budget. (© 2013/2018 Dr.-Ing. E. W. Udo Küppers)
modelable complex systems as sections of much more extensive systems from nature, technology, and society. All models have in common that they reflect a state of high complexity, which includes interactions of control-loop functions with both negative, problem-balancing and positive, problem-amplifying effects. Which of the system-specific feedback loops ultimately tips the balance toward a potentially sustainable development or a system risk, whether negative feedback loops are missing at appropriate points in the system, whether model structures are incorrectly assembled, and much more can be tracked and influenced operationally far more quickly in simulation than in the real system. Modeling can thus help to grasp certain problems more quickly in order to take preventive measures. However, the following applies: Remember Even the most precise simulation cannot replace reality in the
long run! The regional water balance simulation program, as seen in Fig. 3.9, has at its core the system container element “soil-water,” whose capacity is fundamentally influenced by inflow and outflow flows. In addition, there are numerous parameters that influence this
Fig. 3.11 Cybernetic system with negative feedback loops in a modeling example from energy technology. (Source: Küppers 2013, p. 193); Legend: The rectangles indicate political, rounded rectangles economic, and ellipses civic activities. The V symbol stands for process delay, the B symbol for process acceleration, plus signs have a reinforcing effect, minus signs have a weakening effect from … on. “Vicious circles” (plus symbol) lead to system boundaries with the potential for destruction, “virtuous circles” (minus symbol) have a circular balancing effect—contain negative feedback(!)
basic transport process, and their interconnected quantitative influences in the overall analysis provide a realistic representation for the corresponding natural region. Furthermore, very different boundary conditions can be modeled to investigate their effects on the water balance. In Bossel (2004, pp. 20–27), results of this simulation, carried out with the System-Dynamics software, can be found. While Fig. 3.9 and the following Fig. 3.10, which deals with the cybernetics of a municipal/regional budget, simulate quantitative results, Figs. 3.11 and 3.12 show qualitative results in the form of impact networks with corresponding feedback loops between the system elements. For decades, constantly updated in rigid structures of public administrations: an extremely hierarchical communication process. The consequence of this bureaucratic information processing system—bIVS—is an increasing misjudgment of realistic dynamic processes in municipal/regional/state environments. The learned causal or
Fig. 3.12 Cybernetic system with negative feedback loops using a modeling example from the social environment. (Source: Küppers and Küppers 2016, p. 38)
monocausal problem-solving strategies regularly lead to rotten, because misguided, compromises. Process sequences are trivially linearized in ignorance of the interconnected dynamic relationships, although systemic problem-solving strategies would be absolutely necessary. One consequence of this is the permanent misjudgment (in personnel, material, and finances) of individual administrative system departments or players, which in the worst case can lead to veritable organizational crises within the often isolated departments. Public insight into social welfare offices, education senate administrations, environmental departments, and transport departments of various municipalities and cities provides a sometimes shocking picture of public services for citizens, not to mention the billions in wasted costs that responsible leading bureaucracy employees invest in publicly financed projects (prestige buildings such as the Hamburg Elbphilharmonie, Berlin-Brandenburg Airport, or Stuttgart 21 Central Station, and many more). Municipalities, cities, regions, and the federal government in Germany, specifically the responsible persons, are noticeably suffering from two symptoms:

1. their partial, sometimes collective, inability to perceive the municipal environment as a holistic terrain of their design possibilities for sustainable development,
2. their partial, sometimes collective, inability to change their ingrained causal thought patterns in order to open themselves, through different viewpoints and changes of perspective, to new design possibilities for solving problems.
Only when it is possible to defuse both symptoms and channel them in a way that promotes overall municipal viability, while consistently using appropriate system tools, will a path be created to successively transform encrusted and risky administrative or organizational structures into highly attentive administrative or organizational processes that recognize the complex dynamic reality as the perpetual basis of their thinking and actions (Küppers 2011a, b, c). The impact network of a municipal/regional budget outlined in Fig. 3.10 realistically shows the relationships between the individual participating system areas (organizational units such as departments, divisions, etc. of a municipal administration). Cybernetics and impact networks pull in the same direction, because both concepts are characterized by diverse networked and feedback properties in a dynamic environment. The linear and non-linear relationship flows between the municipal system areas, whose functional sequences are determined on the basis of data and information research, are clearly visible. A highly attentive municipality that strengthens viable development will only prevail when the dynamic, realistic overall context outlined in Fig. 3.10 forms the basis of every calculation and simulation. This requires, as mentioned earlier, new system-relevant tools, whose efficient and effective handling (not at the push of a button, but through successive, error-tolerant learning and practicing) promises success. Fig. 3.11 shows an excerpt from a qualitative impact network of an energy policy with consequential vicious circles, which goes back to the political decision of the federal government in 2011 (already mentioned in Sect. 3.4) after the Fukushima nuclear power plant disaster. Three striking characteristics can be seen:

1. The predominance of self-reinforcing feedback processes.
2. The predominance of delayed impact flows.
3. The central system influence variable "demand for subsidies" has a reinforcing and accelerating effect on the system influence variable "consumer burden due to electricity costs."

Taken together, these three features of the "energy transition 2011" alone depict an immature consequential planning strategy that continues to this day. This can be justified, among other things, by the fact that no uniform overall political strategy in the social environment is discernible. In view of the immense macroeconomic importance of energy supply and the inevitable problem-solving in the context of highly complex relationships, it is not surprising that politics and the economy are creating a multitude of repair sites, largely at the expense of the large number of energy consumers, even though a well-founded, sustainability-oriented holistic strategy and risk analysis would be necessary. Political blockades, such as those between the Ministry of the Environment and the Ministry of Economic Affairs, coupled with economic progress through high state risk coverage based on subsidies, as can be seen in the vicious circle for the "construction of new gas-fired power plants," are a disastrous combination that produces more consequential
problems than it tries to avoid. Negative feedback that could lead to sustainable social stability and conflict minimization is scarce. The currently recognizable social system of the Federal Republic shows little of a cybernetic approach to error-tolerant development with regard to the energy supply of the population. To put it in control engineering terms: the activities of politics and the economy on the step function of the control variable "sustainable, nuclear-free energy supply," triggered by the nuclear power plant disaster in Fukushima, Japan, in 2011, are still in a state of highly unstable control today, in 2018, as symbolically shown in Fig. 3.3 C. In Fig. 3.12, an ordering system of social inequality is presented, in which an excess of reinforcing feedback and, at the same time, a lack of corrective measures through negative feedback between the system elements can be recognized. Küppers and Küppers (2016, pp. 37–38) write: If one tries to include the selected impact network influence variable tension field of private wealth and public neglect through […] selected […] touchstones in one's considerations about preventing the consequences of social inequality, it becomes obvious how timidly today's politicians act. To put it bluntly: the polarization of rich and poor is undoubtedly the wrong strategy for crisis resolution (Negt et al. 2015, p. 15). Not seeking a way out of the situation of this polarization is scandalous and a consequential mistake. Historian Tony Judt describes the tension relationship and the absurd societal image of growing private wealth and public neglect presented here as "symptoms of collective impoverishment" (Judt 2011, pp. 20 ff.), which can be seen everywhere and significantly shape the social order. Judt clearly formulated the effects of this disintegrating inequality based on unequal income distribution: the greater the gap between the few rich and the many poor, the greater the social problems.
A brief look at the current fifth "Report on Living Conditions in Germany" from April 2017 (BMAS 2017), from a country that is one of the leading industrial nations, reveals the whole truth of a progressive societal division that has been mutating into a paradox for years, because politics fails to ensure the participation of all societal players in the country's progress despite existing wealth and real possibilities; on the contrary. Maxim "Politics and responsibility" emerges here as an essential dual system element in a cybernetic system of societal interrelations. This puts the mutual communication, the "talking to each other" between citizens and politicians, at the center. In control engineering terms: there is a clear lack of negative feedback between citizens and politicians. This deficiency is further reinforced when politics and the economy enter into harmful alliances at the expense of citizens, as in the publicly well-known area of automotive exhaust regulation by automotive corporations, a technical regulation that, by the way, is characterized less by negative (health-preserving) than by positive (economy-strengthening) feedback effects.
3.13 Control Questions

Q 3.1 What is a black-box model and what is its counterpart?
Q 3.2 Sketch and explain a black box as an information-processing system in general representation and as a black-box human. In the latter case, explicitly describe at least four inputs and four outputs.
Q 3.3 Sketch and explain three different time behaviors of control systems. What is a significant problem in controller design?
Q 3.4 Sketch and explain the difference between control, regulation, and optimization (adjustment).
Q 3.5 Sketch and explain a cascade control. Name three typical use cases.
Q 3.6 What is a multivariable control system? Name three typical use cases. Sketch the cybernetic control engineering process.
Q 3.7 Explain "negative and positive feedback" in a control system.
Q 3.8 What does self-regulation mean?
Q 3.9 What does Ashby's Law state?
Q 3.10 Explain the autopoiesis theory. Who developed it?
Q 3.11 Why are systems modeled?
References

Anschütz H (1967) Kybernetik—kurz und bündig. Vogel, Würzburg
Aronson E, Wilson T, Akert RM (Hrsg) (2004) Sozialpsychologie, 4., ak. Aufl. Pearson, München
Ashby WR (2016) Einführung in die Kybernetik. Suhrkamp, Frankfurt am Main
BMAS (2017) Lebenslagen in Deutschland. Armuts- und Reichtumsberichterstattung der Bundesregierung. Bundesministerium für Arbeit und Soziales, Bonn
Bossel H (2004) Systemzoo 2, Klima, Ökosysteme und Ressourcen. Books on Demand GmbH, Norderstedt
Flechtner H-J (1970) Grundbegriffe der Kybernetik. dtv, Stuttgart
Frahm M (2011) Beschreibung von komplexen Projektstrukturen. PMaktuell, Heft 2/2011
von Hentig H (1965) Die Schule im Regelkreis. Klett, Stuttgart
Judt T (2011) Dem Land geht es schlecht. Carl Hanser, München
Klaus G, Liebscher H (1976) Wörterbuch der Kybernetik. Dietz, Berlin
Küppers EWU (2011a) Systemische Denk- und Handlungsmuster einer neuen nachhaltigen Politik im 3. Jahrtausend. Z Polit 3(3–4):377–398
Küppers EWU (2011b) Wirkungsnetzanalyse des Kommunalhaushaltes. Der Neue Kämmerer, Jahrbuch 2011, S 39–41
Küppers EWU (2011c) Die Wirkungsnetz-Organisation—ein Modell für öffentliche Verwaltung? apf 5(2012):129–136
Küppers EWU (2013) Denken in Wirkungsnetzen. Nachhaltiges Problemlösen in Politik und Gesellschaft. Tectum, Marburg
Küppers EWU (2015) Systemische Bionik. Springer Essential, Springer, Wiesbaden
Küppers J-P, Küppers EWU (2016) Hochachtsamkeit. Über unsere Grenze des Ressortdenkens. Springer Fachmedien, Wiesbaden
Negt O, Ostolski A, Kehrbaum T, Zeuner C (2015) Stimmen für Europa. Ein Buch in sieben Sprachen. Steidl, Göttingen
Odum EP (1991) Prinzipien der Ökologie. Spektrum der Wissenschaft, Heidelberg
Philippsen H-W (2015) Einstieg in die Regelungstechnik. Hanser, München
Rid T (2016) Maschinendämmerung. Eine kurze Geschichte der Kybernetik. Propyläen/Ullstein, Berlin
Steinbuch K (1965) Automat und Mensch. Kybernetische Tatsachen und Hypothesen, 3., neubearb. und erw. Aufl. Springer, Berlin/Heidelberg/New York
Varela FJ, Maturana HR, Uribe R (1974) Autopoiesis: the organization of living systems, its characterization and a model. Biosystems 5:187–196
Wiener N (1963) Kybernetik. Regelung und Nachrichtenübertragung in Lebewesen und in der Maschine (Original: 1948/1961 Cybernetics or control and communication in the animal and the machine), 2., erw. Aufl. Econ, Düsseldorf/Wien
Part II Cyberneticians and Cybernetic Models
4 Cybernetics and its Representatives
Summary
In the context of this chapter, a number of representatives of cybernetics are introduced who have had a significant influence on the development of this interdisciplinary field. The one who undoubtedly provided the greatest impetus for cybernetics as a scientific branch and broad field of application was Norbert Wiener, which is why we begin with him. The number of influential people from various fields who have each made their contribution to cybernetics is too large to be fully assembled here; on the internet alone, 56 people are listed, some of whom are briefly introduced in what follows (see https://de.wikipedia.org/wiki/Liste_bekannter_Kybernetiker. Accessed on 25.01.2018).
4.1 Norbert Wiener and Julian Bigelow

Figures 4.1 and 4.2. Many refer to Norbert Wiener as the “father of cybernetics,” which is largely due to his 1948 book “Cybernetics or Control and Communication in the Animal and the Machine” (German, 1963: “Kybernetik. Regelung und Nachrichtenübertragung im Lebewesen und in der Maschine”), which the American mathematician dedicated to his longtime scientific companion Arturo Rosenblueth (see Sect. 4.2). Wiener’s starting point for his journey of discovery, which is associated with the central concept of cybernetics—“negative feedback”—was the military sector, where so many new research areas—e.g., the already mentioned bionics—began. In the preface to the second English edition of 1961, which is identical to the German first edition, Wiener (1963, preface) wrote: If a new scientific branch is truly alive, the focus of interest must and should shift over the years. When I first wrote about cybernetics, the main difficulty in taking a stand was that the ideas of statistical information theory and control theory were new and even shocking to the mindset of the time. Today, they are common tools for telecommunications engineers and developers of automatic controls, […]. The consideration of information and the technique of measuring and transmitting information has become a regular discipline for the engineer, the physiologist, the psychologist, and the sociologist.

Fig. 4.1 Norbert Wiener, American mathematician (1894–1964). (Source: https://www.flickr.com/photos/tekniskamuseet/6979011285/in/photolist-52hopp-bCHfUM. Accessed on 10.01.2019)

Fig. 4.2 Julian Bigelow, American electrical engineer (1912–2003) and close collaborator of Norbert Wiener. In the picture from left, Julian Bigelow, Herman Goldstine, Robert Oppenheimer, and John von Neumann. (Source: https://de.wikipedia.org/wiki/Julian_Bigelow#/media/File:Julian_Bigelow.jpg. Accessed on 10.01.2019)

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024. E. W. U. Küppers, A Transdisciplinary Introduction to the World of Cybernetics, https://doi.org/10.1007/978-3-658-42117-5_4
As Chaps. 6 and 7 will show, the idea of cybernetics or a cybernetic control and thus Wiener’s liveliness of cybernetics has spread to many disciplines and practical applications.
Everything that would later be connected with the development of Wiener’s cybernetics began in the early 1940s, with an electrical fire control system, a Bell Labs Computer M-9, which, according to Rid (2016, p. 46): […] fed mathematics into a feedback loop […] and enabled the fire control system to calculate simple mathematical functions such as sine and cosine using resistors, potentiometers, servomotors, and sliding contacts. Elaborate—carefully executed—mathematics would thus control a heavy 90-millimeter anti-aircraft gun. […] The aiming process, however, was an open loop: There was no feedback to the shell once it had been fired. […] Warren Weaver, a science manager at the Rockefeller Foundation, headed the project under the designation D-2. From Johns Hopkins University, also funded by the NDRC (National Defence Research Committee), came a brilliant idea to close that feedback loop: the proximity fuse.
In contrast to the conventional time fuse, which required the explosion of the missile to be set before the projectile was launched, missiles with proximity fuses detonated based on information collected during flight. This realized a true feedback loop. However, the new ignition mechanism had to withstand enormous forces of the projectile during launch—20,000 times that of g (g corresponds to the Earth’s gravitational acceleration with an average value of 9.81 m/s2)—and even more stress. “The technical challenge was dizzying” (ibid., p. 47). To improve the air defense fire control system, Wiener submitted an exposé titled “Aircraft Defense Predictor” in November 1940. It involved “investigating the purely mathematical possibilities of prediction by a device and then constructing the device” (ibid., p. 51). Together with electrical engineer and mathematician Julian Bigelow as chief engineer, Wiener began implementing his approved project at the end of 1940. The air defense problem led Wiener to consider that pilots under fire would evade in a zigzag line or through aerobatics, which is more difficult the more weight the aircraft has. Rid (2016, p. 52) states: Wiener gradually recognized that the psychological stress of humans and the physical limitation of the aircraft made the human-machine system predictable. This made it easier to calculate the future trajectory of an aircraft based on its past behavior.
The zigzag flight path that Wiener had initially assumed—and that is not easy to fly—gave way to a newly assumed, slightly undulating flight line that was easier to calculate. After several months of theoretical and experimental investigations, Wiener and Bigelow realized in 1942 that human and machine formed a unit, a system, a coherent mechanism that would function practically like a servo, a device with the property of self-correcting deviations in its operation. (ibid., p. 54)
Even though Wiener’s predictor never came into practical use, the considerations of human-machine system, control, and negative feedback would significantly shape his cybernetic worldview and the emerging field of cybernetics.
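The prediction principle Rid describes—inferring an aircraft’s future position from the regularities of its past flight path—can be illustrated with a deliberately simplified sketch: a straight-line least-squares extrapolation in Python. Wiener’s actual predictor used far more sophisticated statistical filtering; the function name and all numbers below are invented for illustration only.

```python
def predict_position(history, dt_ahead):
    """Extrapolate a future position by fitting a straight line
    (ordinary least squares) to past observations of the target.

    history  -- list of (t, x) observations of the target
    dt_ahead -- how far beyond the last observation to predict
    """
    t = [p[0] for p in history]
    x = [p[1] for p in history]
    n = len(t)
    t_mean = sum(t) / n
    x_mean = sum(x) / n
    # Least-squares fit of x(t) = slope * t + intercept to past behavior.
    slope = sum((ti - t_mean) * (xi - x_mean) for ti, xi in zip(t, x)) \
          / sum((ti - t_mean) ** 2 for ti in t)
    intercept = x_mean - slope * t_mean
    return slope * (t[-1] + dt_ahead) + intercept

# A target moving at roughly constant speed with a small "evasive" wobble:
observations = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
print(predict_position(observations, dt_ahead=2.0))  # roughly 6.0
```

The constrained, slightly wobbling but overall regular motion is exactly what made the human-machine system predictable in Wiener’s sense: the more regular the past behavior, the better the extrapolation.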
4.2 Arturo Rosenblueth

Figure 4.3. The Mexican physiologist Arturo Rosenblueth was a close scientific companion of Norbert Wiener. This connection was based, among other things, on Rosenblueth confirming Wiener’s view that feedback plays a crucial role both in the control technology of machines and in living organisms. Notable works by Rosenblueth and Wiener on “Behavior, Purpose and Teleology” (Rosenblueth and Wiener 1943) and “Purposeful and Non-Purposeful Behavior” (Rosenblueth et al. 1950) addressed negative feedback.
4.3 John von Neumann

Figure 4.4. John von Neumann was an early computer pioneer, mathematician, computer scientist, and cybernetician. Cybernetic mathematics and game theory (see Sect. 6.6) as part of theoretical cybernetics were among his areas of interest. Von Neumann is considered one of the fathers of computer science. In the early 1940s, von Neumann discussed with Wiener the benefits of cybernetic research, analyzing, among other things, similarities between the brain and the computer. In 1943, the two founded the “cybernetic circle,” which eventually led to the influential conference series in Manhattan sponsored by the Macy Foundation. Around 1945, von Neumann participated in the construction of the ENIAC (Electronic Numerical Integrator and Computer), a machine weighing thirty tons and 25 meters long—enormous from today’s perspective—and the predecessor of the EDVAC (Electronic Discrete Variable Automatic Computer).
Fig. 4.3 Arturo Rosenblueth, Mexican physiologist (1900–1970). (Source: http://con-temporanea.inah.gob.mx/node/42, Archivo fotográfico de Arturo Rosenblueth de El Colegio Nacional, México, D.F. Accessed on 25.01.2018)
Fig. 4.4 John von Neumann, Hungarian-American mathematician and computer scientist (1903–1957). (Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/JohnvonNeumann-LosAlamos.jpg. Accessed on 10.01.2019)
[Fig. 4.5 shows a block diagram of the Von Neumann architecture: memory, arithmetic/logic unit (ALU), control unit, and an input/output unit (keyboard, screen, interfaces), connected by a bus system of address bus, data bus, and control bus as the data transport system.]

Fig. 4.5 Von Neumann architecture of a computer. (Source: https://de.wikipedia.org/wiki/Von-Neumann-Architektur. Accessed on 25.01.2018)
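The defining trait of the Von Neumann architecture—instructions and data residing in one shared memory, processed by the control unit and ALU in a fetch-decode-execute cycle—can be sketched in a few lines of Python. This is a toy illustration with an invented four-instruction set, not a model of any real machine:

```python
# Toy stored-program machine: instructions and data share ONE memory,
# the defining trait of the Von Neumann architecture.
# Instruction format: (opcode, operand address).

def run(memory, pc=0):
    acc = 0  # accumulator of the arithmetic/logic unit (ALU)
    while True:
        op, arg = memory[pc]                # fetch instruction from shared memory
        pc += 1
        if op == "LOAD":
            acc = memory[arg]               # read a data cell
        elif op == "ADD":
            acc += memory[arg]              # ALU operation
        elif op == "STORE":
            memory[arg] = acc               # write back into the same memory
        elif op == "HALT":
            return memory

program = [
    ("LOAD", 4),     # addr 0: acc <- mem[4]
    ("ADD", 5),      # addr 1: acc <- acc + mem[5]
    ("STORE", 6),    # addr 2: mem[6] <- acc
    ("HALT", None),  # addr 3
    7, 35, 0,        # addr 4-6: data cells living in the SAME memory
]
print(run(program)[6])  # 42
```

Because code and data share one memory, a program could in principle even rewrite its own instructions—the very property that later made von Neumann’s question of self-reproducing automata a natural one.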
It already had the basic structure of many computers of today’s generation, as shown in Fig. 4.5, which was later called the Von Neumann architecture (see also Chap. 2). It is also worth mentioning that von Neumann dealt with the theory of self-reproducing automata (“Theory of Self-Reproducing Automata”). In his considerations and descriptions of objects, he constantly switched between the technology of machines and natural systems, a deliberate creative device (cf. Rid 2016, pp. 147–149). Furthermore, Rid (ibid., p. 149) writes:
He [John v. Neumann, the author] clothed his theory in cybernetic terms, thus blurring the boundaries between machine and organism. On the one hand, von Neumann studied the mechanisms of natural evolution and found that natural organisms produce something more complicated than themselves. Nature does not simply copy, it changes its offspring beyond mere self-reproduction. In the technical world, however, the opposite was achieved. While organismic self-reproduction was capable of development, mechanical self-reproduction was degenerative. Von Neumann concluded:
Key Statement “An organization that synthesizes something is necessarily more complex, of a higher order, than an organization that is synthesized by it.” (Rid 2016, p. 150)

The construction of a “fruitful” machine by a machine at least as complex as the “mother machine” was a theoretical question, but no less difficult to answer; with it we close these insights into von Neumann’s work within the scope of this contribution. That this topic of artificial machine evolution is once again a current research topic is demonstrated on the one hand by the links between robotics and artificial intelligence, and on the other hand by the fusion of humans and technology into so-called “cyborgs.” The point in time when machines with “machine intelligence” will be able to design and build machine mutations independently and in a self-organized way—with what goal and for what purpose?—still lies in the fog of the future, and it is uncertain.
4.4 Warren Sturgis McCulloch

Figure 4.6. The American neurophysiologist and cybernetician Warren Sturgis McCulloch became known for his foundational work on theories of the brain and his involvement in the cybernetics movement of the 1940s (McCulloch 1955). Together with Walter Pitts (see Sect. 4.5), he created computer models based on mathematical algorithms, the so-called threshold logic. They divided the investigation into two individual approaches, one focusing on biological processes in the brain, the other on applications of artificial neural networks to intelligence (McCulloch and Pitts 1943). The result of this investigation was the model of a McCulloch-Pitts neuron. McCulloch and Pitts were able to show that programs computable by Turing machines can also be computed by a finite neural network. Turing machines, named after the British logician and mathematician Alan Turing (1912–1954), are abstract computing models that capture the operation of a computer in a simple way. They represent a program or an algorithm.
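How a Turing machine captures “the operation of a computer in a simple way” can be made concrete with a minimal simulator. This is a hedged sketch in Python; the rule table below, a binary increment, is a standard textbook example and not taken from Turing’s or McCulloch’s own work:

```python
def run_turing(tape, rules, state="q0", head=0, blank="_"):
    """Minimal Turing machine. `rules` maps (state, symbol) to
    (new_state, write_symbol, move), where move is -1 (left) or +1 (right).
    The machine stops when it enters the state 'halt'."""
    tape = dict(enumerate(tape))          # sparse, unbounded tape
    while state != "halt":
        sym = tape.get(head, blank)
        state, write, move = rules[(state, sym)]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment: walk right to the end of the number, then carry
# 1s into 0s leftward until a 0 (or the blank before the number) becomes 1.
rules = {
    ("q0", "0"): ("q0", "0", +1), ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}
print(run_turing("1011", rules))  # 1100  (binary 11 + 1 = 12)
```

The entire “program” is the finite rule table; tape, head, and state are all the machine has—which is exactly why it serves as a minimal reference model of computation.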
Fig. 4.6 Warren Sturgis McCulloch, American neurophysiologist and cyberneticist (1898–1969). (Source: With Permission from the American Philosophical Society Library, Philadelphia PA, USA)
Fig. 4.7 Walter Pitts, American logician (1923–1969), together with Jerome Ysroael Lettvin, American cognitive scientist, on the left. (Source: http://en.wikipedia.org/wiki/Image:Lettvin_Pitts.jpg and Family album. Accessed on 10.01.2019)
4.5 Walter Pitts

Figure 4.7. Walter Pitts was an American logician whose field of work was cognitive psychology. Pitts became McCulloch’s collaborator in the 1940s, a collaboration that resulted in the well-known McCulloch-Pitts neuron model. In 1943, he took up an assistant position and became a doctoral student at the Massachusetts Institute of Technology (MIT) under Norbert Wiener.
[Fig. 4.8, left: sketches of McCulloch-Pitts neurons with logical connections, drawn during the model’s development. Right: three McCulloch-Pitts neurons with different thresholds, realizing the logic gates And, Or, and Not.]
Fig. 4.8 McCulloch-Pitts neuron models. (Source on the left: McCulloch and Pitts 1943, p. 130)
The McCulloch-Pitts cell, also known as the McCulloch-Pitts neuron, is the simplest model of an artificial neural network; it can only process binary—zero/one—signals. In analogy to biological neural networks, the artificial neuron can also process inhibitory signals. The threshold of a McCulloch-Pitts neuron can be set to any real number. Figure 4.8 on the left shows a series of cells sketched by McCulloch and Pitts during development, while Fig. 4.8 on the right shows three simple McCulloch-Pitts neurons with different thresholds realized as logic gates.
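The behavior of such a cell—binary inputs, absolute inhibition, firing once the threshold is reached—can be reproduced in a few lines of Python. The gate constructions mirror the And/Or/Not examples of Fig. 4.8; realizing Not via a constantly active excitatory input vetoed by an inhibitory one is a common textbook construction, not necessarily the exact wiring McCulloch and Pitts drew:

```python
def mcculloch_pitts(inputs, inhibitory, threshold):
    """McCulloch-Pitts cell with absolute inhibition: it fires (returns 1)
    only if NO inhibitory input is active AND the number of active
    excitatory inputs reaches the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(inputs) >= threshold else 0

# The three gates of Fig. 4.8 (right), built from single cells:
AND = lambda a, b: mcculloch_pitts([a, b], [], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [], threshold=1)
NOT = lambda a:    mcculloch_pitts([1], [a], threshold=1)  # constant excitation, vetoed by a

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

Networks of such cells suffice for any Boolean function, which is the core of McCulloch’s and Pitts’s equivalence result with Turing-computable programs mentioned above.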
4.6 William Ross Ashby

Figure 4.9. The English researcher and inventor William Ross Ashby had a significant influence on the development of cybernetics through research results such as his homeostat and his law of requisite variety (see Sects. 3.8 and 3.9), as well as his book “Design for a Brain” (1952, second edition 1954). Rid describes Ashby’s development as follows (Rid 2016, pp. 77–78): Ross Ashby, a 45-year-old major in the Royal Army Medical Corps, headed the research department at Barnwood House [where traumatized officers were treated, author’s note]. In this remote clinic, Ashby invented the “homeostat,” a strange machine inspired by his work
Fig. 4.9 William Ross Ashby, British psychiatrist (1903– 1972). (Courtesy of Mick Ashby. Image is reproduced with permission of the Estate of W. Ross Ashby)
with mentally disturbed patients. […] It took Ashby fifteen years to design his proto-brain, and two more to build it. It cost him fifty pounds (roughly 299 euros in current value, converted via the first official GBP-to-DM exchange rate of 1:11.702 from 1953; https://de.wikipedia.org/wiki/Deutsche_Mark. Accessed on 25.01.2018, author’s note). The device looked like four old-fashioned car batteries arranged in a square on a large metal plate. […] Ashby and his assistant, David Bannister, had built magnet-driven potentiometers, electrical wires, tubes, switches, and small water containers into their machine. […] The most noticeable visible parts were four small magnets that swung like compass needles in four small water containers mounted on top of each of the boxes. Each of the four boxes had fifteen rotary and toggle switches that could be used to change parameters. […] The goal of the machine was to keep its four electromagnets in a stable position, with the needle centered above each box in the middle of its water container. This was the normal, “comfortable” position of the homeostat. The experiment was to make the machine “uncomfortable” and see how it reacted.
Ways to create “discomfort” or imbalance in the homeostat included: reversing the polarity of the magnets, connecting them with a rigid rod, or restricting the movement of the magnets (ibid.). No matter what Ashby did to throw the homeostat (see Fig. 4.10) off balance, the device ultimately found ways to re-center the compass needles and restore the equilibrium state (cf. ibid., p. 78). Ashby’s lecture at the Macy Conference in 1952 in New York led to controversial debates, including with Wiener’s assistant Bigelow. While Bigelow distinguished between environment and organism, so that Ashby’s machine had to stand either for the environment or for an organism, Ashby himself saw his homeostat as a unity of environment and organism (Ashby 1954). And Rid states (2016, p. 83): “This was Ashby’s fundamental, even historical, insight.”
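The principle behind this behavior, Ashby’s ultrastability—when an essential variable leaves its “comfortable” range, the machine randomly reconfigures its own parameters until it stumbles on a stabilizing, negative-feedback arrangement—can be imitated in a toy simulation. This is a one-unit sketch in Python; the real homeostat coupled four such units, and all constants below are invented:

```python
import random

def homeostat_unit(steps=10000, limit=1.0, seed=2):
    """Toy ultrastable unit in the spirit of Ashby's homeostat. A 'needle'
    position x is driven by a feedback gain w. Whenever x leaves its
    comfortable range, the unit randomly re-wires itself (new w), like the
    homeostat's stepping switches, until it happens upon a stabilizing
    (negative-feedback) configuration."""
    rng = random.Random(seed)
    settings = [-1.0, -0.5, 0.5, 1.0]   # discrete "switch positions" for the gain
    x, w = 0.5, rng.choice(settings)
    rewirings = 0
    for _ in range(steps):
        x += 0.1 * w * x                # feedback dynamics: w < 0 damps, w > 0 amplifies
        if abs(x) > limit:              # "uncomfortable": essential variable out of range
            w = rng.choice(settings)    # random reconfiguration of the machine itself
            x = 0.9 * limit             # needle knocked back just inside the range
            rewirings += 1
    return x, w, rewirings

x, w, n = homeostat_unit()
print(f"final position x = {x:.2e}, gain w = {w}, rewirings = {n}")
```

Positive gains amplify the deviation and quickly trigger another random re-wiring; once a negative gain is drawn, the deviation decays and the configuration persists. That asymmetry, not any explicit goal, is what makes the unit end up in equilibrium.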
Fig. 4.10 Homeostat according to William Ross Ashby. With kind permission of Mick Ashby. Image is reproduced with permission of the Estate of W. Ross Ashby

The American Gregory Bateson (Sect. 4.7), anthropologist, biologist, social scientist, and cybernetician, was also a participant in the conference, as was the English ecologist George Evelyn Hutchinson (1903–1991). When Bateson asked the ecologist Hutchinson whether Ashby’s machine could be compared to nature, whether it exhibited the same learning property as found in an ecosystem, Hutchinson definitely agreed. Wiener, who did not attend the conference but learned of Ashby’s apparatus, called the homeostat not only an experimental machine but a “learning machine” (ibid., p. 86). Ashby himself wrote in 1948 (ibid., p. 87): The brain is not a thinking machine, it is an acting machine. […] It receives information and then does something based on that information.
The brain could therefore also be seen as a black box, with input signals that—however they are processed—lead to output signals. Ashby saw the nervous system as a physical machine, “a physicochemical system,” which constantly works to adapt the organism to the environment. Thus, according to Ashby (ibid., p. 88) [t]he free-living organism and its environment […] together form an absolute system, comparable to the homeostat.
And to finally close the circle from the homeostat to cybernetic control, Rid writes:
Key statement “Like his [Ashby’s, the author] homeostat, the brain simply uses negative feedback to adjust to present disturbances.” (Rid 2016, p. 91)

This brief insight into Ashby’s contributions to cybernetics shows the great influence that his experiments had on Wiener, Bigelow, Bateson, and many other cyberneticians of his time, and continue to have today. Today’s intelligent machines with artificial intelligence, descendants of the early homeostats, are permeated with negative feedback for adaptive control and for the avoidance or reduction of disturbances.
The spontaneous collaboration of mathematicians, computer scientists, electrical engineers, biologists, psychologists, physiologists, physicians, sociologists, and other specialists, who laid the foundation for the field of cybernetics in the 1940s, has become the standard for cybernetic, inter- and intradisciplinary research and development today, without which no humanoid or collaborative robot, no fundamental cybernetic knowledge gain and progress would be possible.
Key statement And let us always remember the “mother” of all negative and positive feedback: evolutionary nature. Over billions of years, under the strictest quality and control criteria, it has achieved individual and collective masterpieces through adaptive progress that benefits countless individuals and populations. Communication through information-processing systems was, alongside energy and material, the driving force of development in the biosphere. Humans, who have created the technosphere, would do well to take a cue from nature’s fundamentally interconnected control processes, in the interest of sustainable and fault-tolerant processes and not least of their own further development.
4.7 Gregory Bateson

Figure 4.11. [Gregory Bateson’s] areas of work included anthropological studies, the field of communication theory and learning theory, as well as questions of epistemology, natural philosophy, ecology, and linguistics. Bateson treated these scientific fields not as separate disciplines, but as different aspects and facets in which his systemic-cybernetic thinking comes into play (https://de.wikipedia.org/wiki/Gregory_Bateson. Accessed on 25.01.2018).
As a 38-year-old participant in the Macy Conference in 1942, the young Bateson benefited from the “fathers” of cybernetics, especially from Ashby’s work on the homeostat. Bateson consistently expanded the ideas of Wiener and Ashby, with Wiener speaking of a bomber pilot who acts as a servo valve and is thus part of a human-machine unit, while Ashby described his homeostat as a unit of machine and environment. Rid writes about Bateson’s considerations (2016, p. 220): If the axe was an extension of the woodcutter’s self, then so was the tree, for without the tree the man could hardly use his axe. It is the connection tree-eye-brain-muscles-axe-stroke-tree; “and it is this whole system that has the characteristics of the immanent mind,” wrote Bateson in [his essay collection first published in 1972 (German: 1981), author’s note] Ecology of the Mind. […] Bateson was aware of how crazy this sounded to most of his American and European readers, who were used to understanding the world individualistically, not in such a radically holistic, comprehensive form. “But this is not the way an average Westerner sees the sequence of events of a falling tree.” […]
Fig. 4.11 Gregory Bateson, Anglo-American anthropologist, biologist, social scientist, cyberneticist, and philosopher (1904–1988). With kind permission from the American Anthropological Association, Arlington, VA, USA
For Bateson, however, the loop tree-eye-brain-muscles-axe-stroke-tree was an elementary cybernetic thought.
Bateson expanded his thoughts on the holistic view of things to society, which he compared with Ashby’s homeostat. The dynamics of society were those of an “ultra-stable system. […] The system was ‘self-correcting’.” (ibid., p. 222). Who does not think of the Gaia hypothesis, developed in the mid-1960s by microbiologist Lynn Margulis (1938–2011) and chemist, biophysicist, and physician James Lovelock (*1919) (Lovelock 1991), when reading Bateson’s last sentence? According to this hypothesis, the Earth’s biosphere is considered a living organism that also includes societies, thus making an even greater holistic claim than Bateson had in mind. If today’s strategies in the global, highly interconnected field were analyzed for solutions to conflicts of all kinds, causal or monocausal thinking would still prove to be the intellectual tool of operational action in the Western Hemisphere and the industrialized nations, an approach that largely produces significant consequential problems while largely excluding negative, system-stabilizing feedback processes. The current examples of societal poverty-wealth division and divisive tendencies (BMAS 2017), not only in the middle of the industrial continent of Europe, speak a clear language. Bateson’s holistic considerations, thinking in cycles and (negative) feedback, are therefore ecologically, economically, and socio-politically more relevant and urgent than ever before.
4.8 Humberto Maturana and Francisco Varela

Figures 4.12 and 4.13.
Fig. 4.12 Humberto Maturana, Chilean biologist and philosopher (*1928). (Source: File:Maturana, Humberto -FILSA 2015 10 25 fRF09.jpg, https://upload.wikimedia.org/wikipedia/commons/0/02/Humberto_Maturana-FILSA2015.jpg. Accessed on 10.01.2019)
Fig. 4.13 Francisco Varela, Chilean biologist, philosopher, and neuroscientist (1946–2001). (Source: Photographer Joan Halifax (Upaya). flickr_url: https://www.flickr.com/photos/upaya/143621045/in/set-72157594148545142/. Accessed on 10.01.2019)
The concept of autopoiesis, already explained in Sect. 3.11 (Varela et al. 1974), is a central component of the biological theory of cognition developed by the two Chileans Humberto Maturana and Francisco Varela, which they presented in their book “The Tree of Knowledge” (1987, first published 1984). An interview (Ludewig and Maturana 2006) highlights the central aphorisms of this theory of cognition, such as (ibid., p. 7): Every action is cognition, and every cognition is action.
And: Everything said is said by someone.
Another explanation describes (https://de.wikipedia.org/wiki/Der_Baum_der_Erkenntnis#Autopoiese. Accessed on 26.01.2018):
The concept of all living things is connected (M/V) [Maturana/Varela, author’s note] with the autopoietic (= self-creating) organization, which they demonstrate using the example of a cell and transfer to multicellular organisms. The goal of evolution is the survival of the species with the help of individual beings. Preconditions for this are both an autonomous organization and an adaptation (structural coupling) to the environment, but not as a one-sided execution of the demands of the outside world: In all these processes, there is not one actor and the target group, but mutually overlapping processes: Already in reproduction, not only the DNA is involved, but an entire network of interactions with, for example, the mitochondria and membranes in their entirety. This interplay for self-preservation consists of give and take, whereby the selected and adopted substances must fit the system and be processed by it. This means: The involved organs are connected in a continuous network of interactions with each other.
Maturana’s treatment of the question about life, the properties of systems, and the possibilities of distinguishing between living and non-living systems led him to the realization that it depends on the “organization of the living.” With this, he linked two traditional features of systems thinking: 1. the organismic biology, which deals with the nature of biological forms, and 2. cybernetics, which—as is well known—is dedicated to goal-oriented control and regulation processes in—biological and technical—systems. Maturana and Varela see in Autopoiesis a necessary and sufficient expression to characterize the organization of living systems. In conclusion, we let Maturana himself speak by explaining (Ludewig and Maturana 2006, p. 22) […] that an autopoietic unit is a closed production network of components in which the components generate the network that produces them in turn. This constitutes the living being as an autopoietic unit. Only when this happens, it is about living units; if not, something other than an autopoietic system is present. Only then does the living being have a being. The word autopoiesis exists only to point out this circumstance.
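Maturana’s “closed production network of components in which the components generate the network that produces them in turn” can be caricatured in code. The following is a deliberately crude sketch in Python, far simpler than the tessellation automaton of Varela, Maturana, and Uribe (1974); the decay and production rules are invented purely for illustration:

```python
def simulate(produced_by, stock, steps=20):
    """Toy production network: every step, each component decays by one
    unit and is replenished (by two units, capped at five) only if the
    component that produces it is itself still present in the network."""
    stock = dict(stock)
    for _ in range(steps):
        stock = {
            c: max(0, min(5, amount - 1
                          + (2 if stock.get(produced_by.get(c), 0) > 0 else 0)))
            for c, amount in stock.items()
        }
    return stock

# A closed loop: A is produced by C, B by A, C by B -- the network
# regenerates every one of its own components.
closed = {"A": "C", "B": "A", "C": "B"}
# Break one link: nothing produces A any more.
broken = {"B": "A", "C": "B"}

print(simulate(closed, {"A": 3, "B": 3, "C": 3}))   # all components persist
print(simulate(broken, {"A": 3, "B": 3, "C": 3}))   # the whole network decays to zero
```

The contrast illustrates Maturana’s point: the closed network maintains itself as a unity, while removing a single production relation lets the entire network, not just one component, disintegrate.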
Maturana’s statement that, for him, the phenomenon of self-organization does not exist seems interesting; his reasoning is that the organization is invariant and coupled to the observer (ibid., p. 39): The observer says […], that if the complex unit is defined by a certain organization, then this organization must necessarily be invariant, otherwise it would become something else when changing. Under these conditions, there can be no self-organization. The “self” refers to a unit that organizes itself, and that cannot be. It does not organize itself because its organization is unchangeable, otherwise it would have become something else.
When asked about the statement that self-organization therefore means that a number of components combine to form a unit, Maturana replied with the concept of spontaneous organization, which he would prefer in this case (cf. ibid., p. 40).
4.9 Stafford Beer

Figure 4.14. “The science of effective organization” was Stafford Beer’s interpretation of cybernetics, which the management scientist first presented in his book “Cybernetics and Management” in 1959 (German: “Kybernetik und Management,” 1970).

Fig. 4.14 Stafford Beer, British management scientist (1926–2002). (Photo source: S. Beer in 2001, with kind permission from Eden Medina, Indiana University, Bloomington, IN, USA, Dep. of History)
Management cybernetics is the branch of management theory founded by Beer (Beer 1981, 1994b, 1995), on which, in particular, the St. Gallen Management Model in Switzerland, based on the work of economist Hans Ulrich (1919–1997), has been built—since 2014 in its 4th generation (Rüegg-Stürm and Grand 2015). Beer’s “Viable System Model” (VSM) for organizations is based on systems thinking: interconnected system elements influence each other. According to Beer, the VSM can be applied to any organization or organism, making it a universally applicable framework tool, with companies as its preferred field of application. The basic structure of the VSM comprises five system elements or subsystems, which are described in detail in Sect. 7.4. Another cybernetic approach by Beer, this time applied to the economic structures of a country—Chile during the Allende government, 1970–1973—was his project “Cybersyn” (Beer 1994a). The central economic administration of the country was to be controlled in real time through a computer and teletype network. The attempt failed early, not least because of the overthrow of the government by the Pinochet regime. Nevertheless, it is interesting to look at the details of this cybernetic social experiment, which is done in Sect. 7.4.
4.10 Karl Wolfgang Deutsch

Figure 4.15. In 1986, the social and political scientist Karl Wolfgang Deutsch—at that time working at the Science Center Berlin for Social Research, Germany—wrote a short article about his 1963 book “The Nerves of Government: Models of Political Communication and Control” (German: “Politische Kybernetik,” 1969). He begins with a brief introduction (Deutsch 1986, p. 18):
4 Cybernetics and its Representatives
Fig. 4.15 Karl Wolfgang Deutsch, American social and political scientist (1912–1992). (Photo courtesy of Peter Rondholz, Berlin)
The Nerves of Government applies concepts of the theory of information, communication, and control to problems of political and social science. Key notions are Norbert Wiener’s use of the concepts of “feedback,” “channel capacity,” and “memory.” From these, concepts of “consciousness,” “will,” and “social learning” are developed by the present author. These ideas have found further application in the development of the computer-based political world model GLOBUS at the Science Center Berlin for Social Research.
Deutsch describes the beginning of his work with models and perspectives on “Political Cybernetics” as follows: This book began in 1943, when the mathematician Norbert Wiener walked into my office at MIT and recruited me at the point of a cigar into a long process of communication. It started with a discussion about my field, international politics, but soon turned to his own work on communication and control in machines, animals, and societies. His was the most powerful and creative mind I have ever encountered. We remained in close intellectual contact until I moved to Yale in 1958, and we remained close friends until his death in 1964. His ideas were just what I needed to develop my own. […] The main obstacles to the wider acceptance and use of a cybernetic approach to politics have been twofold. It seemed too complex to those colleagues habituated to a humanistic and literary approach to politics, and even to those already used to simple analyses of statistics, correlations, and regressions. And secondly, there was a lack of suitable data, particularly on time variables and changes over time.
From today’s perspective, the reservations of some of Deutsch’s colleagues towards a new way of thinking and acting seem understandable, since this intellectual pattern has persisted to this day. Less understandable today is the obstacle of a lack of suitable data in the computer age of Big Data—on the contrary! In his considerations of cybernetic approaches in politics, or the “government process as a control process” (Deutsch 1969, pp. 255–276), Deutsch also takes feedback into account, especially negative feedback. He finds a striking similarity between technical and biological control processes, goal-oriented movements, and autonomous regulations on the one hand, and certain processes in politics on the other. Reference is also made to Sect. 7.4, where more detailed aspects of the application of cybernetics to politics are discussed.
4.11 Ludwig von Bertalanffy

Looking back at Sect. 3.7, the achievements of the Austrian biologist and systems theorist Ludwig von Bertalanffy have already been acknowledged. The term “flow equilibrium” (Fließgleichgewicht) remains associated with his name:

Fig. 4.16 Ludwig von Bertalanffy, Austrian biologist and systems theorist (1901–1972), 1958, Mt Sinai Hospital, LA, USA. (Photo courtesy of BCSSS, Bertalanffy Center for the Study of Systems Science, Vienna)
[Bertalanffy] wrote a General Systems Theory, which attempts to find and formalize common laws in physical, biological, and social systems based on methodological holism. Principles found in one class of systems should also be applicable to other systems. These principles include: complexity, equilibrium, feedback, and self-organization. (https://de.wikipedia.org/wiki/Ludwig_von_Bertalanffy. Accessed on 26.01.2018)
It was a theory of open systems (“The Theory of Open Systems in Physics and Biology”) that Bertalanffy developed further; for the theory of open systems itself, see von Bertalanffy 1950, for General Systems Theory, see von Bertalanffy 1969, and for the biophysics of flow equilibrium, see von Bertalanffy et al. 1977.
4.12 Heinz von Foerster

The Austrian physicist Heinz von Foerster is one of the co-founders of the cybernetic sciences. Inseparably linked to his name are terms such as first-order cybernetics and second-order cybernetics, described in detail in Sects. 5.4 and 5.5.
Fig. 4.17 Heinz von Foerster, Austrian physicist (1911– 2002). (Source: Heinz von Foerster personal file, from University of Illinois publicity department, USA, licensed under the Creative Commons Attribution-Share Alike 4.0 International)
In a short biography about Heinz von Foerster, Albert Müller (2001) describes Foerster’s versatility and—as we would say today—his ability to look beyond the boundaries of his own field into other disciplines. Heinz von Foerster’s versatility is also evident in the fact that he—parallel to his professional careers—developed research interests of an innovative nature: in 1948, he published a book on the problem of memory, Das Gedächtnis. Eine quantenmechanische Untersuchung, with the Viennese Deuticke-Verlag. With this, he not only achieved a quantum-physical interpretation of Ebbinghaus’s measurements of memory performance, but above all, a first opus in which Foerster’s “way of thinking and working” clearly emerges. […] During a visit to America shortly after the publication of his book, Foerster gained the recognition and support of Warren McCulloch, and he was able to present his ideas on memory at a conference of the Josiah Macy Jr. Foundation, which dealt with interdisciplinary problems of cybernetics. In 1949, Foerster received a position at the Electron Tube Lab of the University of Illinois, where he was promoted to Professor of Electrical Engineering in 1951. From 1949 onwards, Foerster was also the secretary of the Macy conferences, whose conference reports he co-edited. Thus, he obtained a central position in the development of the still young science of cybernetics. Foerster’s interest in cybernetics and its further development culminated in 1957 with the founding of the Biological Computer Laboratory (BCL) at the University of Illinois, which would become one of the most important innovation centers for cybernetics and cognitive research over the next twenty years. The founding of the BCL corresponds to a turning point in Foerster’s publications. 
While works in electrical engineering and physics dominated the 1950s, he now turned to topics such as homeostasis, self-organizing systems, system-environment relations, bionics, bio-logic, machine communication, etc. […] Characteristic of Foerster’s working and research style in those years is the recurring “digression” into “foreign”, non-scientific, and non-technical fields: computer music, symbol research, or library sciences are examples here; […]. Finally, the didactic innovations at the BCL, which were primarily aimed at the participation of students, are also significant. Publications such as Cybernetics of Cybernetics or the Control of Control and the Communication of Communication provide an impressive picture of this.
4.13 Jay Wright Forrester
83
See Foerster’s essay “Understanding Understanding” (2003) on this topic:
Key Statement It is not least this concept of Heinz von Foerster’s “Understanding Understanding” or “Grasping Insight” that makes communication in cybernetic worlds so dominant.
In conclusion, we give the last word to a student of the German sociologist and social theorist Niklas Luhmann (1927–1998, see Sect. 6.1), who pointedly describes Foerster’s attitude towards publications (Baecker 1998): True to his insight that most books contain nonsense, even though they never have the courage to write “nonsense” on the cover, he never wrote his own monographs, but instead wrote contributions for conferences and edited conference volumes.
4.13 Jay Wright Forrester

The American computer scientist Jay Wright Forrester is a pioneer of computer technology and systems science. He is the originator of the research field of system dynamics, whose model structure is still used today in many disciplines for the analysis of complex systems through simulations. In 1956, Forrester founded the System Dynamics Group at the MIT Sloan School of Management. “Industrial Dynamics” (1961) was Forrester’s first book, in which he used system dynamics to analyze industrial circular business processes. Years later, a meeting with Boston Mayor John F. Collins prompted Forrester to write the book “Urban Dynamics” (1969), which sparked a debate about the feasibility of models of far-reaching social systems. This approach was adopted by many municipal and urban planners around the world. His encounter with the “Club of Rome”, an association of scientists from various disciplines concerned with a sustainable future for humanity, ultimately resulted in his book “World Dynamics” (1971a), which dealt with the complex interactions of the global economy, population, and ecology, although the conclusions drawn from it were not uncontroversial (see https://en.wikipedia.org/wiki/Jay_Wright_Forrester. Accessed on 28.01.2018).

Fig. 4.18 Jay Wright Forrester, American computer scientist (1918–2016). (Courtesy of the MIT Museum, Cambridge, MA, USA, many thanks to Amy MacMillan Bankson, MIT Sloan School of Management)

Fig. 4.19 Jay Wright Forrester’s world model in hand sketch. (Source: Forrester et al. 1972, pp. 118–119)

Figure 4.19 shows Forrester’s sketch of his world model, including its predominantly nonlinear flow patterns, represented by the function graphs. Figure 4.20 shows a more structured representation of the flow diagram of the world model. In both Figs. 4.19 and 4.20, rectangular symbols represent “stationary containers” that can be filled or emptied. The round symbols represent “dynamic flow variables” that change their strength, speed, and possibly direction over time through their mathematical connections with each other and with the “containers”. The topic also prompted the American environmental scientist and biophysicist Donella Meadows (1941–2001) and her colleagues to publish the book “The Limits to Growth” (German: “Die Grenzen des Wachstums”, 1972), a report for the Club of Rome’s project on the predicament of mankind. Following several interim reports from the Club of Rome, the book “2052. A Global Forecast for the Next 40 Years” by Jørgen Randers (Original: “2052. A Global Forecast for the Next Forty Years”) has been available since 2012 as the new report to the Club of Rome, 40 years after “The Limits to Growth”.
Fig. 4.20 Jay Wright Forrester’s world model in structured form. (Source: Forrester et al. 1972, pp. 34–35)
[Fig. 4.21 shows Randers’ world model as a network of factors: urbanization, healthcare, fertility, life expectancy, population, work productivity, labor force, social tension, injustice, production (GDP), consumption, growth in per capita consumption, energy consumption, investments, CO2 emissions, and resource and climate problems.]
Fig. 4.21 Jørgen Randers’ world model as a deterministic framework. (Source: Randers 2012, p. 81)
Fig. 4.21 shows, in comparison to Fig. 4.20, the current worldview according to Randers, with the most important cause-effect relationships for the 2052 forecast. Details can be found in Randers 2012. Forrester’s influence on our world of thought is unmistakable and immeasurable: he approached real, dynamically complex relationships with systemic, networked perspectives and thus, through circular feedback loops between the influencing factors of a system, with a cybernetic approach. Some of his achievements are documented in books and articles, including: “Counterintuitive Behavior of Social Systems” (1971b); “Grundzüge einer Systemtheorie” (Principles of a System Theory, 1972); “Der teuflische Regelkreis—das Globalmodell der Menschheitskrise” (The Devilish Feedback Loop—The Global Model of the Human Crisis, 1972); “Designing the Future” (1998); “Economic Theory for the New Millennium” (2003), and many more.
Key statement Forrester’s influence on the world of thought is an early indication of the need to train and practice communication through systemic rather than causal (monocausal) perspectives. Without systemic thought patterns, no realistic view and no realistic solution approach can be achieved in the dynamically complex processes around us.

In the application-oriented Chap. 7 we will discuss System Dynamics (SD) further through practical examples. The methodical approach, with the typical SD structure and the flow processes between individual “containers” of the SD model, has found numerous imitators to this day, who work on qualitative and quantitative problems of all kinds of dynamically complex processes with modified software.
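The stock-and-flow mechanics of SD can be illustrated with a minimal sketch in Python. The two stocks (population and pollution) and all rate coefficients below are invented for illustration; they are not Forrester’s actual world-model equations.

```python
# Toy stock-and-flow simulation in the spirit of Forrester's world models.
# Stocks ("containers") are filled and emptied by flow variables whose
# strength depends on the current stocks, closing the feedback loops.
# All coefficients are illustrative assumptions, not Forrester's equations.

def simulate(years=100, dt=1.0):
    population = 100.0   # stock: a "container" that fills or empties
    pollution = 10.0     # stock
    history = []
    for _ in range(int(years / dt)):
        births = 0.03 * population
        # death rate rises with pollution: a circular feedback loop
        deaths = 0.01 * population * (1 + pollution / 100.0)
        emission = 0.02 * population
        absorption = 0.05 * pollution
        population += dt * (births - deaths)
        pollution += dt * (emission - absorption)
        history.append((population, pollution))
    return history

hist = simulate()
```

Each iteration updates the stocks from the current flow values, so rising population raises pollution, which in turn raises the death rate and slows further growth, exactly the kind of circular coupling the flow diagrams of Figs. 4.19 and 4.20 express graphically.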
4.14 Frederic Vester

Frederic Vester was a German biochemist and systems researcher. In 1970, he founded the “Study Group for Biology and Environment,” from which numerous research results, books, and articles on cybernetics and systemic or networked thinking emerged. Systems thinking was probably his greatest motivation: to revive research and application in various societal—not least conflict-ridden—areas and to elevate them to a new level of thinking and acting, in stark contrast to the prevailing, unrealistic causal strategies in complex dynamic environments. At a very early stage of his cybernetic work, Vester was given the opportunity to prepare a study as a guide to important environmental issues for the city of Munich (Vester 1972). From this, he developed a new concept beyond conventional single-discipline analyses, emphasizing the dynamics and interaction between the individual disciplines involved in environmental issues. The resulting study on the “system context in environmental issues” bore the title “The Survival Program”. This “cybernetic study” combined “products” such as water, wastewater, exhaust gases, dust, stress, and noise, “environmental areas” such as water, soil, food, ocean, climate, and spatial planning, as well as links between economy, science, and technology, research gaps, public relations, and much more in a networked impact system. The results of this study, far more realistic than conventional results on the subject, demanded nothing less than (ibid., p. 205): A prophylaxis appropriate to the environmental problem requires—since the temporal shift among the identified control circuits and their respective feedback effects often only occurs years after the cause—apparently the inclusion of much larger time periods in the “political” planning of humans than corresponds to our previous cultural level.

Fig. 4.22 Frederic Vester, German biochemist and systems researcher (1925–2003). (Source: image without specific source reference, kindly provided by Malik MZSG Management St. Gallen AG, Switzerland)
Comparing today’s urban and regional planning approaches for a fictional “Survival Program 2018” with the one from 1972, 46 years later, numerous worldwide examples of short-term thinking and acting allow the sober, unexaggerated conclusion:

• Current “survival programs” of cybernetic design for cities and regions, with their pronounced networked structures and sustainable solution approaches, lead a comfortable niche existence.
• In the last 46 years, there has been no shortage of necessary and sufficient warnings about the local and global consequences that occur when real networking, with its diverse feedback mechanisms—especially the stabilizing “negative” feedback—remains unconsidered, i.e., is ignored in favor of economically driven progress.
• Countless mutually reinforcing vicious circles of destruction and disaster have been insidiously built up by human decision-makers who are blind to their own basis of life, coolly weighing personal profit against societal burden.
• Our evolutionary nature, which we are in the process of destroying, will hardly be impressed. It will certainly find new ways of sustainable progress. Whether the majority of people are still capable of this depends primarily on their handling of nature, environment, and society.

The result of the 1972 Survival Program presents three alternatives for dealing with nature, environment, technology, economy, and people (ibid., p. 207):

1. “Back to nature”—is not an option, because it would also mean a decline in advantageous civilizational achievements.
2. “Continue as before”—perpetuates the conflict-ridden, short-sighted course of human-made destruction, now described by its own epoch, the Anthropocene.
3. “Introduction of cybernetic ways of thinking and technologies”—would be a successful path for problem-preventive, robust, fault-tolerant, and ultimately sustainable
progress. The insight to provide problem prevention now (!), not only for oneself but also for future generations, will be and remain a pivot of thinking and acting in the cybernetic mindset. With “The Cybernetic Age. New Dimensions of Thinking” (1974), Vester laid another foundation for his application-oriented, widely diversified research fields. These ranged from genetics, the brain, health, healing, biotechnology, bionics, cybernetics, and computers to agriculture, food, the ocean, plastics, nuclear fission, energy, transport, traffic, culture, and more. Vester seemed to grasp intuitively the networked effects of each of his socially relevant work areas on neighboring areas. In the traveling exhibition “Our World. A Networked System,” accessible to a broad public, Vester (1978) showed in 27 thematic areas a multitude of cybernetic processes that allow nature, technology, economy, society, and people to be as they are. Vester’s aim was to show which mechanisms—often invisible to us humans—were and are at work when surprising accidents occur. Growth is not simply growth: quantitative growth can destabilize a system, while qualitative growth can stabilize it. Questions like “How do things affect each other?”, “What are feedback loops?”, “How do control circuits work in us humans?”, “What does doping do to athletes, and what does unconnected thinking do in energy policy?”, “What happens when we intervene in ecosystems?”, as well as topics such as the waste carousel, the cybernetic house, and more, are instructive examples of how to deal with nature in a society that thinks cybernetically. With “Urban Areas in Crisis” (1983) and “Exit Future” (1990), Vester addressed the societal problem of traffic early on, which currently, in 2018, has reached an inglorious peak in urban agglomerations in terms of environmental pollution and health burden due to car exhaust emissions and the emission values manipulated by car manufacturers and suppliers.
It is striking evidence of the obvious unwillingness of car manufacturers to abandon their traditional short-sighted strategies of profit maximization and to embark on a more cybernetic development path, one that considers human health, environmental impact, traffic, product sales, and more in an impact network, which would be far more realistic and problem-reducing than the existing one-sided economic orientation. Interest groups and, in particular, politicians hold a key position in this cybernetic traffic and infrastructure game; their activities should also be analyzed in a networked manner and, if faulty, corrected like those of all others. Referring to the automotive industry courted by politicians in Germany, to name just one dominant sector, it can be noted in cybernetic terms that industry and politics act within the framework of a cartel of successively developed, networked vicious circles of positive feedback, in which ecological, economic, and social system boundaries have long been exceeded, as the multitude of subsequent problems with high risk and destructive potential undoubtedly proves. With “Neuland des Denkens. Vom technokratischen zum kybernetischen Zeitalter” (New Territory of Thinking. From the Technocratic to the Cybernetic Age) (1984), “Leitmotiv vernetztes Denken” (Leitmotif of Interconnected Thinking) (1989), and “Die Kunst vernetzt zu denken. Ideen und Werkzeuge für einen neuen Umgang mit Komplexität” (The Art of Interconnected Thinking. Ideas and Tools for a New Approach to Complexity) (1999a; English: “The Art of Interconnected Thinking”, 2007), Vester consistently continued his enlightening path towards a new way of thinking in contexts.

Vester’s Papiercomputer (Paper Computer) (see Fig. 4.23) remains a useful tool, which can also be used as a computer program. Through mathematically simple analyses of various influencing factors in a complex system, their effects on each other are captured and evaluated. The result is a first realistic approximation of a complex system, in which the interconnected influencing factors are interpreted according to their degree of influence on others and their influenceability by others. In this way, the operator gains a sense of the dependencies of the influencing factors in the system and can sharpen their interconnected, cybernetic view of things, approaching reality. An operational analysis of the same system influences without their interconnections would certainly be more remote from reality, because the existing interactions would be completely ignored. In this very manageable model of five interconnected parameters, the factors are evaluated in a first rough approximation, using simple mathematical means, through the columns AS and Q and the rows PS and P. Under AS = Active Sum, the respective matrix rows are added; under PS = Passive Sum, the respective matrix columns are added. The respective Q = Quotient values are calculated as AS : PS; the P = Product values result from AS × PS.

[Fig. 4.23 shows the Paper Computer matrix: the “effect of” each variable on the others, for the variables urban planning, green spaces, air pollution, health, individual traffic, and public opinion, scored 0 = no influence, 1 = weak influence, 2 = medium influence, 3 = strong influence.]

Fig. 4.23 “Paper Computer” according to Frederic Vester. (Source: Vester 1983a, p. 143)
The classification into one of the four quadrant fields, spanned by the ordinate (influenceability) and the abscissa (influence), distributes all five influencing factors according to their activity (highest Q value), passivity (lowest Q value), most critical variable (highest P value), and buffering, reactive variable (lowest P value). From this, a first holistic overview of the positions of the involved variables or system influencing factors can be gained.
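The AS, PS, Q, and P arithmetic of the Paper Computer can be reproduced in a few lines of Python. The sketch below uses five of the variables from Fig. 4.23, but its score values are made up for illustration, since the text does not reproduce Vester’s actual numbers.

```python
# Sketch of Vester's "paper computer": a cross-impact matrix scored 0-3
# (0 = no influence ... 3 = strong influence). The matrix entries below
# are invented illustration values, not Vester's published figures.

variables = ["urban planning", "green spaces", "air pollution",
             "health", "individual traffic"]

# influence[i][j]: how strongly variable i acts on variable j (diagonal unused)
influence = [
    [0, 3, 2, 1, 3],
    [1, 0, 2, 2, 0],
    [0, 2, 0, 3, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 3, 2, 0],
]

results = {}
for i, name in enumerate(variables):
    AS = sum(influence[i])                    # active sum: row total
    PS = sum(row[i] for row in influence)     # passive sum: column total
    results[name] = {"AS": AS, "PS": PS,
                     "Q": AS / PS if PS else float("inf"),  # activity
                     "P": AS * PS}                          # criticality

most_active = max(results, key=lambda v: results[v]["Q"])
most_critical = max(results, key=lambda v: results[v]["P"])
```

With these made-up scores, urban planning comes out as the most active lever (high Q) and air pollution as the most critical variable (high P), which matches the intuition the quadrant classification is meant to train.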
The Paper Computer by itself can only be a rough aid to train interconnected thinking. Its true strength is only revealed as an integral part of the Sensitivity Model Prof. Vester®. Vester’s further outstanding achievement in the field of biocybernetics is the development of the eight basic rules of biocybernetics (Vester 1983a, pp. 66–86). According to Vester, a handful of laws operate in nature that […] have proven themselves within the framework of nature’s evolutionary strategy as the internal guiding variables of viable systems and subsystems. They must therefore also apply to the system of human civilization—as a subsystem of the biosphere—and can guarantee its survival and development-capable design far more than such stupid premises as the single-track compulsion for economic growth. […] The fact that these basic rules of complex systems have hardly interested us [and still hardly interest us today, author’s note] is therefore […] one reason why cybernetic technologies rooted in networked thinking are still in their infancy […]. These eight basic rules, which are actually valid for every open complex system and enable its necessary self-regulation and thus viability, can be directly implemented in practice.
Despite advancing knowledge about interrelationships in societies over the past decades, and despite exemplary approaches to a cybernetic economy through new cybernetic organizational structures (e.g., the St. Gallen Management Model), material recycling, etc., the majority of all problem-solving strategies in complex social environments still remain at the archaic level of causal—monocausal—solution strands of sequentially connected operational steps. The technical product solutions, as well as their consequences and consequential problems, are visible everywhere. Vester’s contributions to cybernetics and biocybernetics would be incomplete without the tools he developed, Ecopolicy and the Sensitivity Model Prof. Vester®, with which networked thinking and action can be practically experienced in modeled complex systems (Vester 1983b, 1975). Ecopolicy was developed as a board game and as a computer program. It puts players in the position of the government of a fictional state in which eight essential influencing factors are interconnected. By distributing points to the directly controllable variables renovation, production, quality of life, and education, the variables politics, environmental pollution, population, and reproduction rate are influenced indirectly through networked flows (see Fig. 4.25). To complicate matters, random events affect the planned point allocation, and the networked relationships between the “state” influences are not always linearly calculable. The goal of the simulation is to keep the fictional country in a stable state over the course of the game. The “Sensitivity Model Prof. Vester®” emerged from a preliminary study in 1976, which dealt with “Urban agglomerations in crisis—a guide to understanding and planning human habitats using biocybernetics.” The study was also the German UNESCO contribution to the international program “Man and the Biosphere” (Vester 1999b, p. 4).
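The Ecopolicy control loop described above can be sketched as a toy model: points go into the directly controllable variables, and networked weights propagate their effect onto the indirectly controlled ones, disturbed by random events. All coupling weights and the random-event model are invented for illustration and do not reproduce the actual game rules.

```python
# Toy, Ecopolicy-style control loop. Factor names follow the text; the
# coupling weights and the random-event model are invented illustrations.
import random

direct = {"renovation": 0.0, "production": 0.0,
          "quality_of_life": 0.0, "education": 0.0}
indirect = {"politics": 50.0, "environmental_pollution": 50.0,
            "population": 50.0, "reproduction_rate": 50.0}

# invented coupling weights: effect of a direct variable on an indirect one
weights = {
    ("production", "environmental_pollution"): +0.8,
    ("renovation", "environmental_pollution"): -0.6,
    ("education", "reproduction_rate"): -0.4,
    ("quality_of_life", "politics"): +0.5,
}

def play_round(allocation, rng):
    for factor, points in allocation.items():
        direct[factor] += points
    # networked flows: allocated points act indirectly via the weights
    for (src, dst), w in weights.items():
        indirect[dst] += w * allocation.get(src, 0)
    # a random event disturbs one indirect factor, as in the game
    hit = rng.choice(list(indirect))
    indirect[hit] += rng.uniform(-5, 5)

rng = random.Random(42)
for _ in range(10):
    play_round({"renovation": 2, "production": 1,
                "quality_of_life": 1, "education": 2}, rng)
```

The player's task, as in Ecopolicy, would be to choose the allocation each round so that the indirect factors stay within a stable band despite the random disturbances.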
This study was followed by the regional planning approach “Ecology in the conurbation. Representation of the overall dynamics and development of a sensitivity model.” Its goal was to focus on a system compatibility test far beyond an environmental impact assessment (Vester and Hesler 1988). It also includes the eight biocybernetic basic rules, as shown in Fig. 4.24. Figure 4.26 shows the recursive structure of the sensitivity model, with solid lines as reinforcing and dashed lines as balancing feedback loops. Subsequently, Fig. 4.27 shows an overview sketch of the refined structural sequence of the sensitivity model according to Vester and Hesler. In retrospect, Frederic Vester deserves great credit in the German-speaking world— more than others—for tirelessly researching and advocating for a real systemic view of the things that happen around us. He has, like only a few after him, opened our eyes to the ingenious cybernetic tricks of nature and how their principles—under technospheric boundary conditions—can be advantageously used.
Key statement Biocybernetic thinking, communicating, and acting are inseparably linked to the name Frederic Vester.
4.15 Concluding Remark

It should be noted that, apart from Maturana, all the representatives of cybernetics mentioned here have already passed away, and that in more recent times, in which the networked Internet of Things, the networking of machines, and the networking of human-machine systems (in a word: cybernetics) are spreading rapidly, no new, groundbreaking insights into cybernetics are being developed, nor are any discernible or already distinguished by successful products and processes. The adherence to tried and tested methods and the lack of courage for cybernetic experiments, found particularly in organizations and thus in information-processing systems, contribute to the lethargy of cybernetic initiatives. Overcoming this is an imperative (an absolute must!) for the present and a viable future. Last but not least, this textbook on cybernetics should also encourage students of relevant disciplines and interested learners to look beyond the confines of narrow disciplines, for these still make up the majority of our educational landscape. Perceiving things from different perspectives or points of view, reflecting on them, and only then embarking on the search for sustainable solutions is a mandate of the hour. It is all the more imperative because the dynamics and complexity of technological progress lead to areas such as the digitalization of industrial processes, humanoids, and the Internet of Things, which were unknown just a few years ago and which will hold many surprises in the future.
The eight basic rules of biocybernetics (Fig. 4.24):

1. Negative feedback must dominate over positive feedback. Positive feedback makes things work through self-amplification; negative feedback then provides stability against disturbances and limit violations.
2. The system function must be independent of quantitative growth. The flow of energy and matter is constant in the long term. This reduces the influence of irreversibilities and the uncontrolled exceeding of limit values.
3. The system must be functionally oriented and not work in a product-oriented manner. Corresponding interchangeability increases flexibility and adaptation; the system survives even when offers change.
4. Use of existing forces according to the jiu-jitsu principle instead of fighting according to the boxer method. External energy is used (energy cascades, energy chains), while the system’s own energy serves mainly as control energy. This profits from existing constellations and promotes self-regulation.
5. Multiple use of products, functions, and organizational structures. Reduces throughput, increases the degree of cross-linking, and reduces the energy, material, and information input.
6. Recycling: use of circular processes for waste and wastewater recycling. Initial and final products merge; material flows run uniformly; irreversibilities and dependencies are mitigated.
7. Symbiosis: mutual use of diversity through coupling and exchange. Favors small processes and short transport distances; reduces throughput and extreme external dependency while increasing internal interdependence; reduces energy consumption.
8. Biological design of products, processes, and organizational forms through feedback planning. Considers endogenous and exogenous rhythms, uses resonance and functional fits, harmonizes system dynamics, and enables the organic integration of new elements according to the eight basic rules.
Fig. 4.24 Self-explanatory “eight basic rules of biocybernetics”. (Source: After Vester 1983a, p. 84; 1999b, p. 6)
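Basic rule 1 can be made tangible with a toy iteration, with coefficients invented purely for illustration: a purely self-amplifying loop diverges, while a dominant negative feedback keeps the quantity stable near an equilibrium.

```python
# Illustration of basic rule 1 with invented coefficients: a self-amplifying
# term (positive feedback) alone diverges, while a limiting term (negative
# feedback) stabilizes the system near the equilibrium x = gain/damping.

def run(gain, damping, steps=100):
    x = 1.0
    for _ in range(steps):
        x += gain * x - damping * x * x  # self-amplification minus limitation
    return x

undamped = run(gain=0.1, damping=0.0)    # positive feedback only: unbounded growth
regulated = run(gain=0.1, damping=0.01)  # negative feedback dominates: settles near 10
```

The positive feedback makes the quantity "work" by amplifying itself; the quadratic damping term plays the role of the stabilizing negative feedback that, per rule 1, must dominate before limit values are exceeded.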
[Fig. 4.25 shows the eight interconnected factors of Ecopolicy: politics, redevelopment, quality of life, population, environmental impact, production, education, and reproduction rate.]
Fig. 4.25 “Ecopolicy”. (Source: After Vester 1997, p. 24)
[Fig. 4.26 shows the recursive structure of the sensitivity model as a loop of steps: system description (factors, data, problems, goals; first system picture); set of variables (recording of the influencing variables); criteria matrix (check for system relevance); influence matrix (questioning the interactions); distribution of roles (role of the variables in the system); impact structure (investigation of networking and control loops); sub-scenarios; simulation (if-then forecasts and policy tests); strategy development; system evaluation according to the basic rules of biocybernetics.]
Fig. 4.26 Recursive structure of the “Sensitivity Model Prof. Vester®”. (Source: After Vester 1999b, p. 11)
Fig. 4.27 Refined structure of the “Sensitivity Model according to Vester and Hesler”. (Source: Vester and Hesler 1988, cover illustration)
4.16 Control Questions

Q 4.1 Describe the special achievements associated with the cybernetics representative Norbert Wiener.
Q 4.2 Describe the special achievements associated with the cybernetics representative Arturo Rosenblueth.
Q 4.3 Describe the special achievements associated with the cybernetics representative John von Neumann.
Q 4.4 Describe the special achievements associated with the cybernetics representative Warren Sturgis McCulloch.
Q 4.5 Describe the special achievements associated with the cybernetics representative Walter Pitts.
Q 4.6 Describe the special achievements associated with the cybernetics representative William Ross Ashby.
Q 4.7 Describe the special achievements associated with the cybernetics representative Gregory Bateson.
Q 4.8 Describe the special achievements associated with the cybernetics representatives Humberto Maturana and Francisco Varela.
Q 4.9 Describe the special achievements associated with the cybernetics representative Stafford Beer.
Q 4.10 Describe the special achievements associated with the cybernetics representative Karl Wolfgang Deutsch.
Q 4.11 Describe the special achievements associated with the cybernetics representative Ludwig von Bertalanffy.
Q 4.12 Describe the special achievements associated with the cybernetics representative Heinz von Foerster.
Q 4.13 Describe the special achievements associated with the cybernetics representative Jay Wright Forrester.
Q 4.14 Describe the special achievements associated with the cybernetics representative Frederic Vester.
References

Ashby WR (1954) Design for a brain, 2. Aufl. Wiley, New York
Baecker D (1998) “Meine Lehre ist, dass man keine Lehre akzeptieren soll”: Der Anfang von Himmel und Erde hat keinen Namen—eine Buchbesprechung. Die Tageszeitung, 16. Juni
Beer S (1970) Kybernetik und Management. Fischer, Frankfurt am Main
Beer S (1981) Brain of the firm—the managerial cybernetics of organization. Wiley, Chichester
Beer S (1994a) Cybernetics of national development (evolved from work in Chile). In: Harnden R, Leonard A (Hrsg) How many grapes went into the wine—Stafford Beer on the art and science of holistic management. Wiley, Chichester
Beer S (1994b) Decision and control: the meaning of operational research and management cybernetics. Wiley, Chichester
Beer S (1995) Diagnosing the system for organizations. Wiley, New York
von Bertalanffy L (1950) The theory of open systems in physics and biology. Science 111:23–29
von Bertalanffy L (1969) General system theory. Foundations, development and applications. George Braziller, New York
von Bertalanffy L, Laue R, Beier W (1977) Biophysik des Fließgleichgewichts, 2. Aufl. Akademie, Berlin
BMAS (2017) Lebenslagen in Deutschland. Armuts- und Reichtumsberichterstattung der Bundesregierung. Bundesministerium für Arbeit und Soziales, Bonn
Deutsch KW (1969) Politische Kybernetik. Modelle und Perspektiven. Rombach, Freiburg im Breisgau
Deutsch KW (1986) The nerves of government: models of political communication and control. In: Current contents, This week’s Citation Classics, Number 19, May 12, 1986
Forrester JW (1961) Industrial dynamics. MIT Press, Cambridge, MA
Forrester JW (1969) Urban dynamics. MIT Press, Cambridge, MA
Forrester JW (1971a) World dynamics. Wright Allen Press, Cambridge
Forrester JW (1971b) Counterintuitive behavior of social systems. Theory and Decision 2(2):109–140
Forrester JW (1972) Grundzüge einer Systemtheorie. Gabler, Wiesbaden
Forrester JW (1998) Designing the future. Presented at Universidad de Sevilla, Sevilla, Spain, December 15, 1998. www.clexchenage.org
Forrester JW (2003) Economic theory for the new millennium. Plenary address at the International system dynamics conference, New York, July 21, 2003; see also Syst Dyn Rev 29(1):26–41 (January–March 2013)
von Foerster H (2003) Understanding understanding: essays on cybernetics and cognition. Springer, New York
Forrester JW, Heck HD, Pestel E (Hrsg) (1972) Der teuflische Regelkreis—das Globalmodell der Menschheitskrise. DVA, Stuttgart
Lovelock J (1991) GAIA: Die Erde ist ein Lebewesen. Anatomie und Physiologie des Organismus Erde. Heyne, München
Ludewig K, Maturana HR (2006; Original 1992) Gespräche mit Humberto Maturana. Fragen zur Biologie, Psychotherapie und den „Baum der Erkenntnis“. © by Kurt Ludewig Cornejo and Humberto Maturana-Romesín, http://www.systemagazin.de/bibliothek/texte/ludewig-maturana.pdf. Zugegriffen am 26.01.2018
Maturana HR, Varela FJ (1987) Der Baum der Erkenntnis. Die biologischen Wurzeln menschlichen Erkennens. Scherz, Bern/München
McCulloch W (1955) Information in the head. Synthese 9(1):233–247
McCulloch W, Pitts W (1943) A logical calculus of ideas immanent in nervous activity. Bull Math Biophys 5(4):115–133
Meadows DH, Meadows DL, Randers J, Behrens WW III (1972) The limits to growth. Universe Books, New York
Müller A (2001) Kurzbiographie Heinz von Förster 90. Edition echoraum, Wien
Randers J (2012) 2052. Eine globale Prognose für die nächsten 40 Jahre. Der neue Bericht an den Club of Rome. oekom, München
Rid T (2016) Maschinendämmerung. Eine kurze Geschichte der Kybernetik. Propyläen/Ullstein, Berlin
Rosenblueth A, Wiener N (1943) Behavior, purpose and teleology. Philos Sci 10(1):18–24
Rosenblueth A, Wiener N, Bigelow J (1950) Purpose and non-purpose behavior. Philos Sci 17(4):318–326
Rüegg-Stürm J, Grand S (2015) Das St. Galler Management-Modell. Haupt, Bern
Varela FJ, Maturana HR, Uribe R (1974) Autopoiesis: the organization of living systems, its characterization and a model. Biosystems 5:187–196
Vester F (1972) Das Überlebensprogramm. Kindler, München
Vester F (1974) Das kybernetische Zeitalter. S. Fischer, Frankfurt am Main
Vester F (1975) Denken Lernen Vergessen. Was geht in unserem Kopf vor, wie lernt das Gehirn, und wann lässt es uns im Stich? DVA, Stuttgart
Vester F (1978) Unsere Welt—Ein vernetztes System. Klett-Cotta, Stuttgart
Vester F (1983a) Ballungsgebiete in der Krise. dtv, München
Vester F (1983b) Neuland des Planens und Wirtschaften. In: Die Krise als Chance, 13. Int. Management-Gespräch an der Hochschule St. Gallen. St. Gallen, Schweiz
Vester F (1984) Neuland des Denkens. dtv, Stuttgart
Vester F (1989) Leitmotiv vernetztes Denken. Heyne, München
Vester F (1990) Ausfahrt Zukunft. Strategien für den Verkehr von morgen. Eine Systemuntersuchung. Heyne, München
Vester F (1997) Ecopolicy. Das Handbuch. Studiengruppe für Biologie und Umwelt GmbH, München
Vester F (1999a) Die Kunst, vernetzt zu denken. Ideen und Werkzeuge für einen neuen Umgang mit Komplexität. DVA, Stuttgart
Vester F (1999b) Sensitivitätsmodell Prof. Vester®. Ergänzende Information. Studiengruppe für Biologie und Umwelt GmbH, München
Vester F (2007) The art of interconnected thinking. MCB, Munich
Vester F, von Hesler A (1988) Sensitivitätsmodell (2. Aufl aus 1980). Umlandverband Frankfurt, Frankfurt am Main
Wiener N (1963) Kybernetik. Regelung und Nachrichtenübertragung in Lebewesen und in der Maschine. Econ, Düsseldorf/Wien (Original: Cybernetics or control and communication in the animal and the machine, 2., erw. Aufl. MIT Press, Cambridge, MA)
5 Cybernetic Models and Orders
Summary
Natural, social, and technical-economic systems are permeated by cybernetic principles or characteristics, as discussed in detail in the chapter “Basic Concepts and Language of Cybernetics.” They are, in the case of natural systems per se, systems open to the environment, with which the three fundamental flows of our life process (energy, matter, and information) are exchanged. The following three sections, “Cybernetics of Mechanical Systems,” “Cybernetics of Natural Systems,” and “Cybernetics of Human Social Systems,” are intended to provide a manageable introduction to their organization and associated principles. The content is largely based on Probst (1987, pp. 46–52: Self-organization. Order processes in social systems from a holistic perspective. Parey, Berlin/Hamburg) and draws on additions to these topic complexes made within the framework of the 2015 conference “Exploring Cybernetics—Cybernetics in Interdisciplinary Discourse” (Jeschke et al. 2015: Exploring Cybernetics. Cybernetics in Interdisciplinary Discourse. Springer, Wiesbaden). The concluding sections “First-Order Cybernetics” and “Second-Order Cybernetics” finally describe two interconnected, well-known order characteristics of cybernetic systems. Through their system character, the presented cybernetic models reflect the three central pillars of sustainability-oriented development: ecology, society, and economy (technology/economics), as developed by the Brundtland Commission in 1987, in whose wake the United Nations Conference on Environment and Development took place in Rio de Janeiro in June 1992.
© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 E. W. U. Küppers, A Transdisciplinary Introduction to the World of Cybernetics, https://doi.org/10.1007/978-3-658-42117-5_5
5.1 Cybernetics of Mechanical Systems

Probst introduces the topic with an excerpt from Chapter 4 of his book “Self-organization—Order Processes in Social Systems from a Holistic Perspective” (1987, pp. 46–47):

In analogy models to mechanical systems, the objects of investigation are considered as regularly operating machines [system elements, author’s note], whose behavior is determined by the internal structure in the sense of a simple causal relationship. This has largely led to so-called control theories. The understanding underlying the analogy object is controllability and predictability. Specialized behavior is firmly predetermined for the parts, and it is changed at most by the intervention of an external “engineer.” If one knows the individual parts and the nature of their interaction, then the whole system is determined. The system can ultimately be fully understood and analyzed. These models [of the cybernetics of mechanical systems, author’s note] are largely oriented towards the analytical, dissecting, reducing-to-last-causes, and exact thinking of physics or mechanics. Disturbances can easily be remedied by simply replacing faulty parts, and the entire system can easily be replicated from the existing blueprint.
The described cybernetic model of mechanical systems, with control functions that distinguish between guiding and guided system elements (the control process of the heating cycle of a heating system is typical here), can be traced back not least to the founders and representatives of cybernetics (Wiener, Ashby, Beer, and others) and their definitions of cybernetics. An example of mechanically controlled systems in the cybernetic sense is shown in Fig. 5.1: a production assembly line with workers processing cars and trucks. The control by an external steering instance, e.g., a technician or engineer (with reference to Fig. 5.1, the person in the center of the picture could take this position), can be imagined as negative feedback when larger problems, which, for example, lead to a standstill of the entire system, require such corrective and controlling intervention.
Fig. 5.1 Classic production assembly line at Mercedes-Benz in Wörth 1966. (Source: http://media.daimler.com/marsMediaSite/de/instance/ko/50-Jahre-Lkw-Werk-Woerth-von-Mercedes-Benz-Mehr-als-36-Millionen-Lkw-in-einem-halben-Jahrhundert-Die-aussergewoehnliche-Geschichte-des-groessten-Lkw-Werks-der-Welt.xhtml?oid=9917765 (Accessed on 01.02.2018))
Similarly, a kind of negative feedback can be given by, for example, wear sensors on tools in a production line indicating a change, so that the quality tolerance of the processed workpiece is not exceeded. The leap from controlled technical mechanical systems to controlled organizational systems is obvious, especially with regard to the beginnings of organizational development. Probst rightly mentions Frederick Winslow Taylor’s concept of “Scientific Management” (Taylor 1911). It is complemented by Frank Bunker Gilbreth’s scientific motion studies (Gilbreth and Gilbreth 1920). Taylor and Gilbreth divided tasks in the company into the smallest units of work processes, which were easier to observe, catalog, measure, and add up to a larger unit. The framework of this division of labor was standardized and strictly controlled. A higher-level manager monitored the organizational process and intervened in it, if necessary, to correct it. The mechanical process in machines was thus extended to the workers in the company and ultimately became a steering tool for the entrepreneur, who knew how to use it for his purposes.

Mnemonic The state of cybernetic mechanical systems is determined externally. They react to environmental influences that cause disturbances to affect the system. Fixed reaction patterns are used to counteract these disturbances (negative feedback). Any change outside a norm is compensated by a counter-reaction until a stable, statically determined state is re-established.

The increasing digitization and networking of objects with one another, known as the “Internet of Things” (IoT; German: “Internet der Dinge,” IdD), is giving the handling of mechanical systems a new, not always accepted, twist. In this context, efforts are increasingly being made to enable machines to communicate with machines, which ultimately also saves labor. For more on this and other consequences in a digitizing work environment, see Küppers 2018. Today’s production and work processes, at least in industrialized nations, have a completely different structure when one considers, for example, emerging work processes with collaborating robots in factory halls or, as shown in Fig. 5.2, digitally networked robot assembly islands. At the same time, however, companies in industrialized countries are outsourcing work to so-called developing countries, where production and working conditions resemble those at the turn of the 20th century.

Excursion Cybernetic negative feedback from increasingly networked and self-organized machines will become a matter of course in the near future as part of a new industrial production technology in industrialized nations such as Germany, Japan, and the USA. It is not yet foreseeable whether this trend towards “Industry 4.0” will lead to increased productivity with the “release” of workers, whether it will strengthen collaborative human-machine productivity, or whether a series of unexpected events will cross the path of the desired progress.
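The fixed reaction pattern of such a mechanical control loop, a counter-reaction proportional to the deviation from the reference variable, can be sketched in a few lines of code. The following is a minimal illustration, not taken from this book; the heating thermostat scenario, all names, and all constants are assumptions chosen for the sketch:

```python
# Minimal sketch of negative feedback in a cybernetic mechanical system,
# here a heating thermostat. All names and constants are illustrative.

def simulate_thermostat(setpoint=21.0, start_temp=15.0, steps=200):
    """Proportional controller: heating power counteracts the deviation
    (negative feedback); heat loss to the environment is the disturbance."""
    temp = start_temp
    gain = 0.5        # fixed reaction pattern: heating power per degree of error
    loss_rate = 0.1   # disturbance variable: heat flowing out to the environment
    for _ in range(steps):
        error = setpoint - temp           # deviation from the reference variable
        heating = max(0.0, gain * error)  # counter-reaction, never negative
        temp += heating - loss_rate * (temp - 10.0)  # 10 °C outside temperature
    return temp

final = simulate_thermostat()
```

Note that the purely proportional counter-reaction stabilizes the system but leaves a small steady-state offset below the setpoint: the fixed reaction pattern compensates disturbances, yet it cannot adapt itself, which is exactly the limitation the later step to self-optimization addresses.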
Fig. 5.2 Digital networked robot assembly islands. (Source: http://www.staufen.ag/de/newsevents/news/article/2017/04/ende-des-fliessbandskomplexitaetszuwachs-stellt-automobil-zulieferer-vor-grosse-herausforderungen/ (Accessed on 01.02.2018))
At the same time, however, the linked supplier operations in countries with low industrial standards and low wages lack even a minimum of negative feedback, which manifests itself, among other things, in catastrophic working conditions, illness, and the death of workers. The corrective of negative feedback in the work and production process could perform valuable work in these developing countries and sustainably strengthen the overall system of industrial and supplier production and its labor force.
In Fig. 5.2, the networked handling devices as stationary robots could characterize a self-regulating process sequence in the production line, which only requires human correction in the case of larger problems, i.e., disturbance-variable influences that, for example, lead to production downtime. Brecher and co-authors describe cybernetic mechanical systems, or “Cybernetic Approaches in Production Engineering” (Brecher et al. 2015), from today’s perspective as follows:

Cybernetic approaches have long been an important part of production engineering. Control engineering—as part of cybernetics—is the prerequisite for state variables in production machines to be guided or kept constant, while disturbance variables are compensated without human intervention. Classical control engineering approaches assume that control paths can be described with a fixed structure of transfer functions. However, this prerequisite no longer applies with regard to automation solutions for customer-specific products. In this context, the term self-optimization is used for systems that “are able to make autonomous (‘endogenous’) changes to their internal state or structure due to changed input conditions or disturbances” (Schmitt et al. 2011, p. 750). The step from classical control engineering to self-optimization thus consists of adapting the target system using model-based or cognitive methods. […] It […] is being researched how self-optimizing production systems can be designed at different levels […] and that IT-supported cybernetics will continue to be an important tool in production engineering in the future.

Fig. 5.3 Development from classical controlled cybernetic control engineering to modern cybernetic systems. The diagram connects the basic system (a modification of VDI 2206) with sensors, actuators, regulation and performance supply, information processing, cognition, adaptivity, self-optimization, CPS capability, networked systems, man, and the environment; the legend distinguishes information, energy, and substance flows. (Source: Brecher et al. 2015, p. 87)
Figure 5.3 illustrates the development of production engineering from classical control engineering (RT) to cyber-physical systems (CPS),1 from which the cybernetics of networked, interacting system components can be recognized.
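The step from a fixed reaction pattern to self-optimization can be made concrete in a toy continuation of the thermostat idea. Here a simple integral term stands in for the model-based or cognitive adaptation methods mentioned by Brecher et al.; the scenario, names, and constants are assumptions for illustration only:

```python
# Sketch of self-optimization: the system makes an autonomous ("endogenous")
# change to its internal state (a bias term) in response to persistent
# disturbances. Purely illustrative; constants are assumptions.

def run(disturbance, steps=300, self_optimizing=True):
    setpoint, temp = 21.0, 15.0
    gain, bias = 0.5, 0.0          # bias is the endogenously adapted state
    for _ in range(steps):
        error = setpoint - temp
        if self_optimizing:
            bias += 0.02 * error   # autonomous adaptation of the internal state
        power = gain * error + bias
        temp += power - disturbance * (temp - 10.0)
    return abs(setpoint - temp)    # residual deviation from the target

residual_fixed = run(0.1, self_optimizing=False)
residual_adaptive = run(0.1, self_optimizing=True)
```

With a fixed reaction pattern the residual deviation remains finite; with the adaptive internal state the system drives the deviation towards zero without any external re-tuning, which is the essence of the self-optimization described above.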
5.2 Cybernetics of Natural Systems

Interconnected natural biotic and abiotic systems, as we know them today in the variety of plants, animals, humans, and geological formations, are without question evolutionary or geophysical products of a developmental history spanning billions of years.
1 Cyber-physical systems are electronic devices (hardware) with embedded or integrated software. They perceive signals from the environment through sensors and pass them on to so-called actuators. An example is an acoustic-optical warning system that, after its sensors detect movement and noises, emits an alarm tone together with light signals via its actuators. Often, a so-called “silent alarm” is also forwarded to responsible watch stations or authorities. CPS are also built into washing machines, refrigerators, and other household appliances.
Evolution, the developmental history of living beings, owes its incredible diversity and distribution on Earth to cybernetic laws, especially the principle of negative feedback. In this respect, it can be claimed that nature is the “mother” of all cybernetic systems. Some examples will illustrate this.

Cybernetics in plants—Example 1: Plants protect themselves and warn fellow species Plants are stationary and thus location-bound living beings. They lack the ability of animals to run away, or at least to attempt to run away, from impending dangers. Their protective mechanisms for species preservation are fragrances: volatile substances that, on the one hand, attract insects for pollination, on the other hand help ward off predators, and ultimately even warn neighboring plants of the same species of approaching enemies through this kind of chemical communication. For biocybernetic communication among trees, see, among others, Wohlleben (2015, pp. 14–20), from which an example of cybernetic plant communication is explained:

Often, however, it does not even necessarily have to be a specific call for help that is required for insect defense. The animal world generally registers the chemical messages of the trees and then knows that some attack is taking place there and that attacking species must be at work. Anyone with an appetite for such small organisms is irresistibly attracted. But the trees can also defend themselves. Oaks, for example, channel bitter and poisonous tanning agents [such as tannins, author’s note] into bark and leaves. These either kill gnawing insects or at least change the taste so much that it turns from delicious salad into biting gall. Willows produce salicin for defense, which has a similar effect.
[…] In addition to the chemical signal transmission in the many interconnected cybernetic control circuits, the trees also help each other in parallel through the more reliable electrical signal transmission via the roots, which connect the organisms largely independent of the weather. Once the alarm signals have been spread, all oak trees around—the same applies to other species—pump tannins through their transport channels into bark and leaves.
An insight into the multifunctional, diverse self-protection of plants is provided by the editorial team of Pflanzenforschung.de (2011):

From their roots, flowers, leaves, and fruits, natural substances evaporate, which are scientifically summarized under the term “Volatile Organic Compounds” (VOCs). They mainly include substances from the class of terpenes, whose smells are known to us as menthol, resin, or limonene obtained from lemon oil. […] The VOCs play an important role in so-called indirect plant defense. In this defense strategy, plants attract the predators of their pests through complex mixtures of fragrances. For example, it was observed that an attack by caterpillars of the moth Spodoptera littoralis on the leaves of corn plants leads to the formation of fragrances that attract the parasitic braconid wasp Cotesia kariyai. The wasp lays its eggs in the butterfly caterpillars, which die as the wasp larvae grow.
The stronger the attack from enemies, the stronger a plant’s attempted counter-defense, and the smaller the plant damage. This causality corresponds to the negative feedback in the complex evolutionary game of survival of the fittest (see Fig. 5.4).
Fig. 5.4 When plant pests bite, the plant seeks help—with volatile scents, it attracts the predators of the pest and simultaneously warns its neighbors. (Source: http://www.pflanzenforschung.de/de/journal/journalbeitrage/wie-pflanzen-ihre-nachbarn-warnen-1540 (Accessed on 01.02.2018))
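The causality just described, in which a stronger attack triggers a stronger defense and thereby limits damage, can be illustrated with a toy model. The dynamics below are not taken from the biological literature; the rates and names are assumptions chosen purely to make the negative feedback visible:

```python
# Toy model of the plant's negative feedback: the defense level (e.g.,
# tannin release) tracks the attack strength, so only the unblocked part
# of the attack causes damage. All rates are illustrative assumptions.

def cumulative_damage(attack, with_feedback=True, steps=100):
    defense, damage = 0.0, 0.0
    for _ in range(steps):
        if with_feedback:
            defense += 0.3 * (attack - defense)  # defense tracks attack strength
        damage += max(0.0, attack - defense)     # only unblocked attack harms
    return damage

defended = cumulative_damage(1.0, with_feedback=True)
undefended = cumulative_damage(1.0, with_feedback=False)
```

Without the feedback, damage accumulates without bound; with it, the defense converges on the attack strength and the cumulative damage stays small, which is the stabilizing effect the text attributes to negative feedback.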
That plants, in anticipatory species protection so to speak, also signal an attack by plant pests to their conspecifics in order to activate protective measures is known from many species, including tobacco plants. Sources on the subject include, among others, Muroi (2011) and Degenhardt (2007).

Cybernetics in plants—Example 2: Thorn protection The habitus of a kapok tree: its bark is covered with thorns. These primarily ensure that the organism strengthens its survival by defending itself against various enemies with this stationary defense mechanism. Figure 5.5 shows a tree of this kind. How the function of this special defense could be integrated into the complex forest-tree network is shown by the superimposed control loop.

Cybernetics between plants and animals—Example 3: Dynamics between plants and animals Characteristic of dynamic cybernetic processes in nature are the so-called predator-prey relationships. Over a certain period of time, the interconnected populations involved cyclically change their number of individuals. Under normal evolutionary habitat conditions (by this we mean the functionalities in a local habitat that lead to naturally adapted growth of organisms without targeted intervention by humans), this ensures that no population can expand beyond a certain growth limit, which would otherwise inevitably lead to an escalation of the system followed by its destruction. With the simple example of three organisms fighting for survival in a local network, nature already demonstrates how elegantly it uses negative feedback to achieve sustainable dynamic growth in cybernetic systems. Scaled up to the entirety of organisms and their
Fig. 5.5 Habitus of the kapok tree (Ceiba pentandra) with superimposed cybernetic function. In the overlaid control loop, the forest acts as the reference variable, influenced by location, population, species diversity, climate change, human interventions, and the reciprocal influence of extraneous organisms; the organism tree acts as growth regulator, whose chemically signaled variables (thorn growth, thorn number, etc.) act together as the manipulated variable; the food intake of the tree, the number of its predators, its inhabitants, etc. act together as the control variable; the thorn function acts as the controlled system; frost, mechanical damage, etc. act together as the disturbance variable; negative feedback closes the loop. Details of a biological (biocybernetic) control loop are thus depicted on a tree that helps itself against food-seeking animals with a special defense mechanism of thorns. Insights from control engineering research are combined here with biological insights in the form of model ideas. However, this snapshot of the biological control loop should not obscure the material, energetic, and communicative processes of the organism in nature that lie behind it and are, in reality, unimaginably complex. (© 2017 Dr.-Ing. E. W. Udo Küppers)
cybernetics, the outstanding performance of nature can only be appreciated in a very modest way, because we are still far from understanding the mechanisms of evolutionary progress even remotely. Figure 5.6 shows four highlighted control loops of far more complex connections between three organisms in nature than can be represented here. The influence of negative feedback in this network of relationships is striking, contributing to keeping the outlined system dynamically stable. This means nothing other than that all involved organisms have the chance to continue developing. The predator-prey model developed by the Austrian/US-American chemist and mathematician Alfred Lotka (1880–1949) and the Italian physicist Vito Volterra (1860–1940) shows the dynamic interrelationship between predator and prey in a growth-time diagram, as can be seen in Fig. 5.7 for two organisms.
Fig. 5.6 Cybernetic predator-prey relationships between two animal species and one plant species. The network couples the amount of plant food, the number of hares, and the number of foxes with the foxes’ speed, weight, amount of hunting prey, and reproduction rate through several negative feedbacks. (Source: template of the sketch: Vester 1978, p. 81; © 2018 Dr.-Ing. E. W. Udo Küppers)

Fig. 5.7 Predator-prey relationships over time: coupled dynamic growth of the two populations (prey, e.g., rabbits; predators, e.g., foxes; plants as food base), with the prey population running ahead of the predator population in accordance with the first Lotka-Volterra rule. (Source: https://de.wikipedia.org/wiki/Räuber-Beute-Beziehung#Das_Lotka-Volterra-Modell (Accessed on 02.02.2018); template extended by the author, © 2018 Dr.-Ing. E. W. Udo Küppers)
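The coupled oscillations of the Lotka-Volterra model can be reproduced with a few lines of code. The sketch below integrates the classical equations with a simple Euler scheme; the coefficients are illustrative choices, not values taken from this book:

```python
# Sketch of the Lotka-Volterra predator-prey dynamics, integrated with a
# simple explicit Euler scheme. Coefficients are illustrative assumptions.

def lotka_volterra(prey0=10.0, pred0=5.0, steps=20000, dt=0.001):
    a, b, c, d = 1.0, 0.1, 1.5, 0.075  # prey growth, predation, predator death, conversion
    prey, pred = prey0, pred0
    history = []
    for _ in range(steps):
        dprey = (a - b * pred) * prey  # prey grows, is eaten (negative feedback)
        dpred = (d * prey - c) * pred  # predators grow by eating, die otherwise
        prey += dprey * dt
        pred += dpred * dt
        history.append((prey, pred))
    return history

hist = lotka_volterra()
```

Plotting the two components of `history` over time reproduces the picture of Fig. 5.7: both populations cycle, neither escapes its growth limit, and the prey curve runs ahead of the predator curve. (For serious work an adaptive solver would be preferable, since the explicit Euler scheme slowly inflates the orbits.)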
Regarding the cybernetics of natural systems, Probst (1987, p. 48) writes: In organic models, it is assumed that the systems being studied pursue their own purpose, in contrast to mechanical systems [see Sect. 5.1, author’s note], which receive their purpose from the outside. The goal of natural systems is survival, for which growth and maintenance are considered crucial. A shrinking is associated with decay and deterioration. The system is seen as open to its environment, from which it obtains vital resources and to which it must adapt. It is therefore capable of maintaining a steady state [see Fig. 5.7, author’s note] by changing the behavior of parts to keep the whole within acceptable limits. This contrasts with mechanical systems, which are concerned with maintaining a static rather than a dynamic equilibrium. The [natural, author’s note] system undergoes development over its lifespan, which is why one can speak of a life cycle and different levels of maturity.
Typical of a coupled life cycle is the development of a forest, from initial pioneer plants through several intermediate vegetation stages to the stage of ecological maturity, the climax community of the forest. The state of cybernetic natural systems is not predetermined. It develops through evolutionary principles or mechanisms, dynamic equilibria (steady states), and adaptive growth. Organismic networks of action are a characteristic of cybernetic natural systems. In them, negative feedbacks in particular act alongside positive ones; the former strengthen the stability of the system.
Key statement It may seem unnecessary to point out, but it is always important to remember the unconditional role model of nature, whose organisms exchange information in sophisticated, cross-species communication networks that serve their own protection and development. That these information-processing systems use all available natural communication channels is a matter of course for plants and animals—less so for humans (!). Disturbances, even of the most trivial kind, in communication between humans often trigger consequential, destructive conflicts. In this field of anticipatory conflict avoidance, evolutionary nature is a master and role model for us humans.
5.3 Cybernetics of Human Social Systems

In models of human social systems, the object of investigation is understood as purpose-oriented and thus distinguished from the state-preserving mechanical systems and the goal-oriented natural systems. What does this mean?

Mechanical systems react, as in the case of a heating thermostat, to changes in order to maintain their predetermined state even under different environmental conditions. In this case, the state to be maintained is externally predetermined [reference variable, ed.] and a fixed reaction pattern [controller–manipulated variable–control loop–controlled variable, ed.] is triggered by deviations [e.g., disturbance variables, ed.].
In contrast, goal-oriented natural systems can respond to changes with different behavioral variants. Only the goal to be pursued—survival—is [system-inherent, ed.] predetermined, but there is freedom in the selection of specific behavior within the repertoire of behaviors. This expresses that a natural system can adapt to different environments, thus demonstrating viability. (Probst 1987, p. 50).
The freedom of behavior in “goal-oriented” natural systems, as Probst calls it, is concretized in Darwin’s theory of evolution by the fundamental mechanism of mutation (change) and selection (choosing the fittest), while molecular biologist and Nobel laureate Jacques Monod (1979) speaks of chance and necessity (see Monod 1979, Chap. VII, Evolution, pp. 110–123). Furthermore, Probst 1987, p. 50, states: A purpose-oriented social system can not only select its specific behavior from a certain repertoire of behaviors, but it can also expand its behavioral potential and, even more importantly, it can determine the goals to be pursued according to its own purposes. The purpose of social systems can be seen in the development of their own possibilities. This expresses the free will, which—although questioned by brain researchers like Roth and Singer [Bauer 2015; Roth 2001, ed.]—is typical for human systems. […] In order to understand a social system, one must therefore not only know the goals of the system, but also those of its parts and its encompassing system, and the influence of their interactions [causalities, feedback loops, ed.] on it. […] The system not only passively adapts to environmental changes, but also actively shapes its environment.
The primary focus for human social systems is therefore on “actions, decisions (and) choices” (Probst 1987, p. 51). As an example of a cybernetic process within a human social system, Fig. 5.8 shows a typical entrepreneurial situation that includes all three previously mentioned criteria of acting, deciding, and choosing.

Cybernetics in the social entrepreneurial environment—Example 4: Control process of a socio-economic entrepreneurial decision The impending loss of sales markets, and thus a drop in profits, prompts the company management to take cost-saving measures (action), which are linked to a specific goal (decision), with various possibilities for implementation available (selection). Figure 5.8 shows this typical approach as a control-oriented cycle in which the negative feedback is recognizably coupled to the manipulated variable.

Cybernetic human social systems are purpose-oriented. They not only orient themselves to, or use, the available behavioral repertoire; they can also expand it and use it for their own purposes. Understanding a cybernetic human social system is promoted by an appropriate scientific approach. For this purpose, the complementary strategy of analysis and synthesis is available; both are necessary to understand the system. With the examples from Sects. 5.1, 5.2, and 5.3, three basic cybernetic systems have already been addressed, which will be expanded upon in Chap. 6.
[Figure 5.8 depicts a closed control loop of a cybernetic socio-economic system: the disturbance variable z (external influences, e.g. a supplier cancels) and a trend towards declining sales threaten profitability; management reacts; the controller “Finance” compares target and actual values; the reference variable prescribes savings measures (reduce costs by x); the controlled variable is costs; the manipulated variable comprises reducing staff, material resources, etc.; the controlled system is profit maximization; negative feedback closes the loop. © 2017 Dr.-Ing. E. W. Udo Küppers]
Fig. 5.8 Cybernetics in the social entrepreneurial environment—self-explanatory
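The control loop of Fig. 5.8 can be sketched as a minimal simulation. All numerical values (target saving, controller gain, disturbance) are illustrative assumptions, not values from the text; the sketch merely shows how the negative feedback counteracts the disturbance.

```python
# Minimal sketch of the cost-control loop in Fig. 5.8 (all numbers are
# illustrative assumptions). The controller "Finance" compares actual
# costs with the reference value and feeds the deviation back negatively
# via the manipulated variable (staff/material reductions).

def simulate_cost_control(costs=100.0, reference=80.0, gain=0.5,
                          disturbance=2.0, steps=20):
    """Proportional controller driving costs toward the reference value."""
    history = [costs]
    for _ in range(steps):
        error = costs - reference                 # target/actual comparison
        adjustment = gain * error                 # manipulated variable
        costs = costs - adjustment + disturbance  # controlled system + disturbance z
        history.append(costs)
    return history

trajectory = simulate_cost_control()
# Costs settle near reference + disturbance/gain = 80 + 2/0.5 = 84:
# the loop compensates most of the disturbance but not all of it
# (the classical steady-state error of a pure proportional controller).
```

The steady-state offset illustrates a general cybernetic point: a purely proportional negative feedback reduces, but does not eliminate, the effect of a persistent disturbance.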
5.4 First-order Cybernetics Cybernetic systems, as described so far in various variations and applications, were given the label “first order” for differentiation after Heinz von Foerster coined the term second-order cybernetics in 1974 (Scott 2004). First-order cybernetic systems are systems that we analyze as observers. We recognize the typical characteristics of such systems, such as their dynamic behavior, their ability to process messages, and the outstanding circular property of negative feedback. No further explanation is needed, as first-order cybernetic systems accompany us all in everyday life, often without our knowing it: thermostats regulating heating systems, coffee machines, or irons; autopilots of every kind, many of which are built into the new “driverless vehicles”; our body’s own multiply cybernetic metabolic system; and so on. Heinz von Foerster described in 1990 at a conference in Paris (von Foerster 1993) what the concept of cybernetics looked like in its beginnings (1940s/1950s) and how it finally led to the transition from first-order to second-order cybernetics (ibid., pp. 60–65): As is generally known, cybernetics is spoken of when effectors, such as a motor, a machine, our muscles, etc., are connected to a sensory organ that reacts with its signals to the effectors. It is this circular organization that distinguishes these cybernetic systems from other organized systems. It was Norbert Wiener who reintroduced the term “cybernetics” into scientific discourse. He noted: “The behavior of such systems could be interpreted as an instruction to achieve a goal.” One could assume that these systems pursue a purpose.
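One of the everyday first-order systems mentioned above, the thermostat, can be sketched as a two-point (on/off) controller with hysteresis. All physical constants here are illustrative assumptions.

```python
# Sketch of a first-order cybernetic system from everyday life: a
# thermostat as a two-point (on/off) controller with hysteresis.
# All physical constants are illustrative assumptions.

def thermostat_step(temp, heater_on, setpoint=21.0, hysteresis=0.5,
                    heat_rate=0.3, loss_rate=0.1):
    """One time step: switch the heater, then update the room temperature."""
    if temp < setpoint - hysteresis:
        heater_on = True                  # too cold: switch on
    elif temp > setpoint + hysteresis:
        heater_on = False                 # too warm: switch off
    gain = heat_rate if heater_on else 0.0
    temp = temp + gain - loss_rate        # heating vs. heat loss to the outside
    return temp, heater_on

temp, heater = 18.0, False
for _ in range(100):
    temp, heater = thermostat_step(temp, heater)
# temp now oscillates inside the hysteresis band around the setpoint
```

We, as observers, can fully describe this system from the outside, which is precisely what makes it first-order in von Foerster’s sense.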
Subsequently, von Foerster quotes further paraphrases of cybernetics, citing the “ideas of the women and men” who can rightfully be called “mothers and fathers of cybernetic thinking and action” (ibid.). • Margaret Mead (anthropologist): As an anthropologist, I have been interested in the effects of cybernetic theories on our society. I am not referring to computers or the electronic revolution […]. In particular, I would like to point out the significance of the interdisciplinary concepts that we initially called “feed-back,” then “teleological mechanisms,” and then “cybernetics,” according to which:
Maxim (Mead) “(Cybernetics) is a form of interdisciplinary thinking that has enabled members of many disciplines to communicate with each other in a language that all could understand.” (ibid.)
• Gregory Bateson (epistemologist, anthropologist, cybernetician and, as some say, the father of family therapy):
Maxim (Bateson) “Cybernetics is a branch of mathematics that deals with the problems of control, recursiveness, and information.” (ibid.)
• Stafford Beer (the philosopher of organization and wizard of management, as von Foerster calls him):
Maxim (Beer) “Cybernetics is the science of effective organization.” (ibid.)
And finally, the poetic reflection of “Mister Cybernetics,” as we affectionately call him, the cybernetician of cyberneticians, • Gordon Pask (English author, inventor, educational theorist, cybernetician, and psychologist)
Maxim (Pask) “Cybernetics is the science of defensible metaphors.” (ibid.)
This small enumeration, supplemented by the words of Heinz von Foerster, complements the corresponding definitions from Sect. 2.1. When the perspective of circular thinking, now taken for granted, emerged in the mid-20th century, it violated “fundamental principles of scientific discourse […], which dictate the separation of observer and observed. This is the principle of objectivity. The properties of the observer must not enter into the description of the observed.” (ibid., p. 64). Heinz von Foerster therefore proposed the definition:
Maxim (von Foerster) “First-order cybernetics is the cybernetics of observed systems.” (ibid.)
Let us briefly recapitulate von Foerster’s remark in the previous paragraph about “the perspective of circular thinking, which is now taken for granted” (reproduced in a contribution from 1990, see von Foerster 1993, pp. 60–65). By “today,” von Foerster means the 1990s. From today’s perspective, circular thinking has by no means taken hold among decision-makers in many areas of our society. In the vast majority of cases, a one-sided focus on realizing economic goals lets monocausal thinking and action prevail. The resulting accumulation of local and global conflicts, some of which assume disaster-like proportions, is evident to everyone. Von Foerster’s “self-evidence” of circular thinking evaporates in the fog of uncertainty as soon as one looks at the effects that humans themselves have caused. Therefore, the following maxims deserve emphasis:
Maxim Monocausal, misguided thinking and action promote conflict-prone communication and set short-term, isolated impulses of blockage against a sustainable strengthening of society’s capability for life and progress.
Maxim Circular, cybernetic, far-sighted thinking and action promote cooperative communication and set skillful systemic impulses for a sustainable strengthening of society’s capability for life and progress.
As a practical example, derived from Fig. 5.9, the observer could be a technical leader in a sociotechnical company who controls the organization, or assumes he does, through his knowledge and experience. Function-oriented causal, often monocausal, thinking and action predominate. Emerging problems are usually tackled with deterministic, plan-driven solution approaches. Incidentally, the adjective sociotechnical reveals that humans and machines interact in the organization, which calls for a probabilistic rather than a rigidly directive control of the organization and thus demonstrates the limits of the classical causal approach. The observer of the organization therefore cannot rely solely on controlling the organization (its machines) to be successful. He must also take into account the probabilities of human thinking and action within the organization, and, as a human observer, he is himself subject to these probabilities in thinking and acting.
[Figure 5.9 shows a first-order observation: an observer standing outside the system observes a perceptible SYSTEM that is energetically open but organizationally closed (autopoietic, self-sustaining), together with the evolutionary and ontogenetic development of systems that observe and communicate. © 2017 Dr.-Ing. E. W. Udo Küppers]
Fig. 5.9 Illustration and explanation of a first-order cybernetic system according to the traditional, causal, deterministic, and objective approach
5.5 Second-Order Cybernetics Von Foerster describes in his own way how the entry of cyberneticians into the circularity of observing and conversing, until then forbidden territory beyond the principle of objectivity, was perceived by its representatives (1993, p. 64): • In the general case of circular inference: A implies B; B implies C; and, to general horror, C implies A. • Or, in the reflexive case: A implies B; and, oh horror, B implies A! • And now the devil’s cloven hoof in its purest form, in the form of self-reference: A implies A! An abomination!
Von Foerster further describes that the […] shift from observing what lies outside to observing the observer […] took place in the course of significant advances in the field of neurophysiology and neuropsychology. […] What is new about all this is the profound insight that it takes a brain to write a theory about the brain. It follows that a theory about the brain that claims to be complete must do justice to the writing of this theory. And what is even more fascinating, the writer of this theory must account for themselves. Applied to the field of cybernetics, this means: As the cybernetician enters his own territory, he must do justice to his own activities: Cybernetics becomes the cybernetics of cybernetics, or second-order cybernetics. […] [T]his realization entails not only a fundamental
change in the field of scientific work but also in how we perceive teaching, learning, the therapeutic process, organizational management, etc.; and—as I believe—how we perceive relationships in our daily lives.
Not least because of this last point, the social relationships among us that largely determine our development, it should be clear that social cybernetics is a second-order cybernetics, a cybernetics of cybernetics, in which the observer, as part of the observed system, determines his own goals. Von Foerster’s proposed definition of second-order cybernetics is consistent with Gordon Pask’s (1969) distinction between two orders of analysis: in the first case, an observer enters a system in order to determine the system’s purpose; in the second case, the observer’s entry into a system leads him to determine his own goals.
Key Point Second-order cybernetics is the cybernetics of observing systems.
The continuation of the practical example from Fig. 5.9 can be seen in Fig. 5.10. The observer as a technical leader might feel prompted to ask whether his own thoughts, plans, and actions are not an integral, inseparable part of the organizational dynamics he controls. In a kind of self-reflection, he could come to the conviction that, before intervening directly in the entrepreneurial organization and controlling it, he might first need to change his own ideas and plans in order to open up new organizational perspectives. This kind of self-reflection is also supported by the fact that the observer changes his spatial position, taking
[Figure 5.10 repeats the first-order observation of Fig. 5.9, a perceptible system that is energetically open and organizationally closed (autopoietic, self-sustaining), and adds a second-order observation in which the observer includes himself in the observation; the evolutionary and ontogenetic development of systems that observe and communicate now encompasses the observer. © 2017 Dr.-Ing. E. W. Udo Küppers]
Fig. 5.10 Epistemological explanation of observer circularity in a second-order cybernetic system; see also Scott 2004, p. 1374
a new standpoint on the system to be observed, thus developing new perspectives that seem very unlikely according to the approach of first-order cybernetics (Fig. 5.9). With the step of questioning oneself and understanding oneself as part of the organization, the observer naturally leaves a seemingly secure position and also becomes more vulnerable. At the same time, however, he embarks on a path of better understanding of real dynamic relationships that have surrounded us for billions of years through evolution and that do not stop at factory gates and corporate walls.
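The self-reflection described above can be caricatured in code: a controller that not only acts on the system (first order) but also observes its own performance and revises its own parameter (observing the observer). The numbers and the tuning rule are illustrative assumptions, not taken from von Foerster or the text; the sketch merely illustrates the shift from acting on a system to revising one’s own way of acting.

```python
# Hedged sketch of the second-order idea: a first-order control loop
# plus a second-order loop in which the controller observes its own
# performance and adjusts its own gain. All numbers are illustrative.

def self_tuning_control(state=10.0, target=0.0, gain=0.1,
                        meta_rate=0.05, steps=50):
    """First-order loop plus a second-order loop that tunes the gain."""
    for _ in range(steps):
        error = state - target
        state = state - gain * error           # first order: act on the system
        if abs(error) > 1.0:                   # still far off the target?
            gain = min(gain + meta_rate, 0.9)  # second order: revise one's own model
    return state, gain

final_state, final_gain = self_tuning_control()
# the loop converges faster than with the fixed initial gain, because
# the "observer" has adapted his own way of intervening
```

The fixed-gain loop of first-order cybernetics corresponds to `meta_rate = 0`; everything beyond that is the observer turning his observation onto himself.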
Key Point Error-tolerant and sustainable organizations are second-order cybernetic systems.
5.6 Control Questions
Q 5.1 How can the state of cybernetic mechanical systems be described?
Q 5.2 What type of signal transmission do trees use among each other in case of danger?
Q 5.3 What means of defense against enemies does the kapok tree use?
Q 5.4 Which cybernetic control process strengthens the protection of the kapok tree against enemies? Sketch and describe the process.
Q 5.5 Sketch and describe the cybernetic predator-prey relationships between foxes, hares, and plants, and highlight the peculiarity of the circular connections.
Q 5.6 What does the Lotka-Volterra predator-prey model state?
Q 5.7 Sketch the course of the predator-prey model according to Q 5.5 in a Lotka-Volterra diagram.
Q 5.8 Sketch and describe the cybernetic course in the social entrepreneurial environment according to Fig. 5.8.
Q 5.9 Briefly explain the term “First-order Cybernetics” and show the process using a sketch.
Q 5.10 Briefly explain the term “Second-order Cybernetics” and show the process using a sketch.
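As an aid to the questions on the Lotka-Volterra predator-prey model, the model can also be explored numerically with a simple Euler integration. The coefficients and starting populations below are illustrative assumptions, not values from the text.

```python
# Sketch of the Lotka-Volterra predator-prey model, integrated with a
# simple Euler scheme. Coefficients and initial populations are
# illustrative assumptions.
#   dx/dt =  a*x - b*x*y   (prey, e.g. hares)
#   dy/dt = -c*y + d*x*y   (predators, e.g. foxes)

def lotka_volterra(x=10.0, y=5.0, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=0.001, steps=20000):
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        xs.append(x)
        ys.append(y)
    return xs, ys

prey, predators = lotka_volterra()
# both populations oscillate, phase-shifted against each other, around
# the equilibrium point (x*, y*) = (c/d, a/b) = (20, 10)
```

Plotting `predators` against `prey` yields the closed orbit of the Lotka-Volterra diagram asked for in Q 5.7.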
References
Bauer J (2015) Selbststeuerung. Die Wiederentdeckung des freien Willens. Blessing, München
Brecher C et al (2015) Kybernetische Ansätze in der Produktionstechnik. In: Jeschke S, Schmitt R, Dröge A (eds) Exploring cybernetics. Springer, Wiesbaden, pp 85–108
Degenhardt J (2007) Die Funktion flüchtiger Stoffe bei der Verteidigung von Pflanzen gegen Schädlinge. Forschungsbericht des Max-Planck-Instituts für chemische Ökologie, Jena
von Foerster H (1993) KybernEthik. Merve, Berlin
Gilbreth FB, Gilbreth LM (1920) Angewandte Bewegungsstudien. Neun Vorträge aus der Praxis der wissenschaftlichen Betriebsführung. VDI, Berlin
Jeschke S, Schmitt R, Dröge A (2015) Exploring Cybernetics. Kybernetik im interdisziplinären Diskurs. Springer, Wiesbaden
Küppers EWU (2018) Die humanoide Herausforderung. Springer, Wiesbaden
Monod J (1979) Zufall und Notwendigkeit. Philosophische Fragen der modernen Biologie, 4th edn. dtv, München
Muroi A (2011) The composite effect of transgenic plant volatiles for acquired immunity to herbivory caused by inter-plant communication. PLoS ONE 6(10). http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0024594. Accessed 5 Feb 2018
Pask G (1969) The meaning of cybernetics in the behavioral sciences (the cybernetics of behavior and cognition: extending the meaning of “goal”). In: Rose J (ed) Progress in cybernetics, vol 1. Gordon and Breach, New York, pp 15–44
Pflanzenforschung.de (2011) Wie Pflanzen ihre Nachbarn warnen. Redaktion Pflanzenforschung.de, 27.10.2011. http://www.pflanzenforschung.de/de/journal/journalbeitrage/wie-pflanzen-ihrenachbarn-warnen-1540. Accessed 5 Feb 2018
Probst GJB (1987) Selbstorganisation. Ordnungsprozesse in sozialen Systemen aus ganzheitlicher Sicht. Parey, Berlin/Hamburg
Roth G (2001) Fühlen, Denken, Handeln. Wie das Gehirn unser Verhalten steuert. Suhrkamp, Frankfurt am Main
Schmitt R et al (2011) Selbstoptimierende Produktionssysteme. In: Brecher C (ed) Integrative Produktionstechnik für Hochlohnländer. Springer, Wiesbaden, pp 747–1057
Scott B (2004) Second-order cybernetics: an historical introduction. Kybernetes 33(9/10):1365–1378
Taylor FW (1911) The principles of scientific management. Harper & Row, New York
Vester F (1978) Unsere Welt – Ein vernetztes System. Klett-Cotta, Stuttgart
Wohlleben P (2015) Das Geheimnis der Bäume. Was sie fühlen, wie sie kommunizieren – die Entdeckung einer verborgenen Welt. Ludwig, München
Part III Cybernetic Theories and Practical Examples
6 Cybernetics and Theories
Summary
With this chapter, “Cybernetics and Theories,” we enter a space full of theories whose common point of reference is cybernetics. It is not our aim, however, to describe all the theories listed (and more) exhaustively; that would fill books, which have in fact already been written on the topics of the individual subchapters. We therefore concentrate on concise statements about the individual theories and begin with systems theory.
6.1 Systems Theory Cybernetic systems are part of a large network of thinking in systems, of systemic, holistic thinking. Jay Wright Forrester holds a prominent position here with his work on cybernetic systems, which was discussed in Sect. 4.13. To understand systems theory itself as a single universal theory would be misguided. Rather, the term “General Systems Theory” indicates that it is not a specific discipline but an interdisciplinary topic. This is also demonstrated by the path of systems theory, which, starting from Ludwig von Bertalanffy’s methodological holism (Sect. 4.11), bears fruit in various fields of society and is applied across disciplines. Two examples will illustrate this.
© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 E. W. U. Küppers, A Transdisciplinary Introduction to the World of Cybernetics, https://doi.org/10.1007/978-3-658-42117-5_6
6.1.1 Günther Ropohl and his Systems Theory of Technology The trained mechanical engineer and philosopher Günther Ropohl (2012) dealt with topics such as “sociotechnical systems,” “work studies,” “technology assessment,” “technology didactics,” as well as “systems theory” and “cybernetics.” Ropohl traces General Systems Theory, which rests on wholeness and diversity, back to Aristotle (384–322 BC) (Ropohl 2009, p. 71) and attributes four roots to “modern systems theory,” the first of which he assigns to Ludwig von Bertalanffy (ibid., p. 72). Ropohl explicitly emphasizes Bertalanffy’s recognition that the rational-holistic approach: […] is not only applicable to the objects of individual scientific disciplines, but also to the interaction of the sciences if one wants to counteract the atomization of scientific knowledge and establish a new unity of the sciences with a “Mathesis universalis” (ibid., p. 72).
Ropohl’s second root is Norbert Wiener’s cybernetics (Sect. 4.1), which covers the entire field of control and regulation technology as well as information theory. Regarding the third root of modern systems theory, Ropohl (ibid., p. 73) writes: I see a third root of current systems thinking in various approaches to making practical problem-solving scientific. In the process, the notorious conflicts in the relationship between theory and practice inevitably had to be reflected upon. Due to their constitutive principle, individual scientific theories, as mentioned, always concern only partial aspects of a complex problem. Since such problems in practice do not follow the subject divisions of the university, they can only be satisfactorily solved if all important partial aspects and the relationships between these partial aspects are taken into account. Practical problem solutions always have to deal with wholes; they therefore require that problem-oriented integration of individual scientific findings which I have already characterized as interdisciplinary. In other words, they require a system formation at the level of scientific statements that corresponds to the complexity of the practical problem.
The educational structures of today’s educational institutions, up to and including universities, are still characterized by a multitude of monodisciplinary subjects, with few exceptions of interdisciplinary cooperation of the kind required and presupposed by, for example, mechatronic processes or robotics. Finally, Ropohl describes the fourth root of modern systems theory, alongside certainly other approaches, as the “structural thinking of modern mathematics.” He writes (ibid., p. 74): If mathematics today is understood as the science of general structures and relations, indeed as the structural science par excellence, it not only offers itself as a tool for systems theory but also proves to be, in a certain sense, systems theory itself. Based on set algebra, the concept of the relational structure has emerged, which is defined by a set of elements and a set of relations and thus precisely clarifies the difference between the mere set and the whole that Aristotle already saw. Thus, I will use this mathematical system concept for the basic definitions of General Systems Theory.
As the basis of his systems theory of technology, Ropohl describes a system as the model of a whole, using three system concepts (Fig. 6.1) that are often treated separately but can be linked with one another. These are: • System model 1: functional concept, relationships between inputs, outputs, states, etc. • System model 2: structural concept, interconnected elements and subsystems • System model 3: hierarchical concept, systems distinguishable from their environment or from a supersystem. Building on this, and in order to fill the formal system model with technological content, Ropohl chooses the concept of the action system as his orientation. He bases his approach on the concept of action in the humanities and social sciences, on philosophical anthropology, and in particular on Arnold Gehlen’s (1904–1976) view of
[Figure 6.1 shows three diagrams: (a) the functional concept, a system with inputs, states, and outputs embedded in its environment; (b) the structural concept, a system of elements and relations; (c) the hierarchical concept, a system between its subsystems and a supersystem.]
Fig. 6.1 Concepts of system theory. (Source: after Ropohl 2009, p. 76)
humans as primarily acting beings. Max Weber (1864–1920) had already established social action as fundamental in sociology, and Jürgen Habermas (*1929) formulated the distinction between “technical” and “social” action, which Ropohl also knows as the dualism of labor and interaction (cf. ibid., pp. 89–91). Ropohl expresses the Western “dilemma” in his own words as follows: It is the distinction between “poiesis” and “praxis”, between production and action, which has been deeply rooted in Western thought since Aristotle.
In contrast stands the Eastern, Buddhist philosophy of a more holistic way of thinking, which seeks to unite the separated aspects along a middle way. “Poiesis” and “praxis” would then be regarded as an inseparable concept of action. Ropohl bases his systems theory of technology as an action system on the general concept of action of the philosopher Jürgen von Kempski: “Action is the transformation of one situation into another” (ibid., p. 93). The term action system is, according to Ropohl, dualistic: on the one hand a “system of actions,” on the other a “system that acts.” Ropohl describes the structure in Fig. 6.2 merely as a theoretical model of functional decomposition into goal setting, information, and execution, with the flows of material, information, and energy. Figure 6.3 shows an example of the more detailed hierarchical structure of the object systems, with the addition, from today’s environmentally oriented perspective, that the system should be supplemented by the lowest hierarchical level of “raw materials,” since these are of extraordinary importance for sustainable processes. In summary, Ropohl describes his systems theory in the following words (ibid., pp. 305–306): Technology has a natural, a human, and a social dimension; each of these dimensions can be viewed from different perspectives of knowledge, which have so far been pursued to varying degrees. None of these perspectives, which are determined analogously to the individual scientific disciplines, can claim to do justice to the problems of technology on its own. The complexity of technology can only be captured with an interdisciplinary approach that combines the heterogeneous strands of description and explanation into a coherent network. Such an undertaking requires a theoretical potential for integration, without which interdisciplinary work would remain a mere accumulation of disparate elements of knowledge.
[…] In contrast to certain social-philosophical system speculations, I consider the general system theory of mathematical-cybernetic origin as an exact model theory that has to be filled with empirical content at several levels of increasing concretization. Systems are fundamentally models of reality that allow organizing the knowledge of complex subject areas holistically without invoking any mysticism of totality. The particular strength of system theory lies in formulating comprehensive, interdisciplinary description models of multilayered problem contexts and thus making at least a heuristic contribution to the construction of explanatory hypotheses. The program of system theory is designed to capture unity in diversity; thus, it is predestined to provide a formal and terminological framework for a general technology.
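Ropohl’s three system concepts can be mirrored in a small data model. The class and attribute names below are my own illustrative choices, not Ropohl’s notation; the sketch only shows that the functional, structural, and hierarchical views can coexist in one description of a system.

```python
# Illustrative sketch (names are my own, not Ropohl's notation) of the
# three system concepts: functional (inputs/outputs/state), structural
# (elements and relations), hierarchical (sub- and supersystems).
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    state: dict = field(default_factory=dict)       # functional concept
    elements: list = field(default_factory=list)    # structural concept
    relations: list = field(default_factory=list)   # pairs of element names
    subsystems: list = field(default_factory=list)  # hierarchical concept

    def transfer(self, inputs: dict) -> dict:
        """Functional view: map inputs to an output while updating the state."""
        self.state.update(inputs)
        return {"output": sum(self.state.values())}

plant = System("plant", elements=["machine", "worker"],
               relations=[("worker", "machine")])
plant.subsystems.append(System("assembly"))
out = plant.transfer({"energy": 3, "material": 4})
# out == {"output": 7}; the same object carries all three system views
```

The point of the sketch is Ropohl’s: the three concepts are not competing models but linked perspectives on one and the same whole.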
[Figure 6.2 decomposes the action system into a goal-setting system (ZS), an information system (IS: receptor, information processing with internal model and storage, effector), and an execution system (AS: intake and delivery of material and energy, energy conversion, guidance, handling), connected by flows of matter, energy, and information.]
Fig. 6.2 Fine structure of the action system as system theory of technology.
6.1.2 Niklas Luhmann and his Theory of Social Systems At the beginning of his treatise on “Social Systems—Outline of a General Theory,” Niklas Luhmann (1991, p. 30, first edition 1984) makes the following statement: The following considerations assume that there are systems. They do not start with an epistemological doubt. They also do not adopt the fallback position of a “merely analytical
Fig. 6.3 Hierarchy of object systems, from plant network (global) and plant network (regional) down through installation, aggregate, machine/device, assembly, and item to material. (Source: after Ropohl 2009, p. 122, supplemented by the author)
relevance” of system theory. Even more so, the narrow interpretation of system theory as a mere method of reality analysis should be avoided. Of course, one must not confuse statements with their own objects; one must be aware that statements are only statements and scientific statements are only scientific statements. But they refer, at least in the case of system theory, to the real world. The concept of the system thus denotes something that is really a system and thus commits itself to a responsibility for the validation of its statements in reality.
After listing various requirements that Luhmann associates with the theory, he writes (ibid., p. 31): These requirements culminate in the necessity of designing system theory as a theory of self-referential systems. The approach just outlined implies self-reference in the sense that system theory must always keep in mind the reference to itself as one of its objects; […].
Here, the connection to Maturana’s and Varela’s autopoiesis theory of biological cognition becomes evident (see Sect. 4.8). Luhmann sees social systems as self-referential systems: they develop the ability to establish a relationship to themselves and to differentiate this relationship from their relationship to the environment. Through this differentiation, systems maintain themselves. For Luhmann, the environment is an “extension of the action sequence to the outside,” “everything else,” and far more complex than the system itself (Dieckmann 2004, p. 21).
Without delving into the extensive details of the theory of social systems, we want to highlight a dominant criterion of Luhmann’s theory: communication. For Luhmann, communication is the smallest unit of social systems. The emergence of social systems, the construction of their structures, and their autopoietic, operationally closed processes are determined by communications. Communications constitute an inner closedness, keep systems viable, and separate them from their environment.
Key statement (Luhmann_1) Communication is the smallest unit in social systems.
Key statement (Luhmann_2) Communication is not the achievement of an acting subject, but a phenomenon of self-organization: it happens (Simon 2009, p. 94).
Luhmann himself describes his view of communication in social systems as follows (Luhmann 1997, p. 81, in Simon 2009): The general theory of autopoietic systems requires a precise specification of those operations that carry out the autopoiesis of the system and thus distinguish a system from its environment. In the case of social systems, this is done through communication. Communication has all the necessary properties: It is a genuinely social (and the only genuinely social) operation. It is genuinely social insofar as it presupposes a plurality of participating consciousness systems, but (precisely for this reason) as a unity cannot be attributed to any individual consciousness.
Communication is therefore not to be understood as the actions of individual actors or as an expression of their individual abilities. Communication can only take place between several actors, whereas action can be attributed to individual actors (cf. Simon 2009, p. 88). In his contribution: “What is Communication?” Luhmann (1988, pp. 19–31) formulates: A communication system is therefore a completely closed system that generates the components from which it is composed through communication itself. In this sense, a communication system is an autopoietic system that produces and reproduces everything that functions as a unity for the system through the system. That this can only happen in an environment and under dependence on restrictions imposed by the environment is self-evident. More concretely formulated, this means that the communication system not only specifies its elements – that which is an indivisible unit of communication – but also its structures. What is not communicated cannot contribute to this. Only communication can influence communication; only communication can decompose units of communication (for example, analyze the selection horizon of information or ask for the reasons for a message) and only communication can control and repair communication.
In order to realize communication as an individual, one must learn (Simon 2009, p. 93)
[…] that meaning can be attributed to a behavior (or must be). This is, in a way, the access criterion for becoming a participant in a social system.
It does not matter how a participant behaves in communication; in any case, a meaning is attributed to their behavior, which takes the form of an “utterance of information.” This expectation must be taken into account, “and this expectation of mutual expectation structures mutual understanding.” (ibid.)
Key statement (Luhmann_3) To realize communication, three components are necessary: information, utterance, and understanding. They come about through their respective selections; no component occurs on its own.
The system-environment relationship of social systems can also be applied to other autopoietic systems, such as biological and psychological systems. In this case, all three systems are considered operationally closed and delimited from one another. For social systems, the other two systems are specific environments that are taken as given and thereby influence the development of social systems. While psychological systems operate through “thoughts and feelings” and biological systems become active through “biochemical reactions,” communication is the modus operandi of social systems (cf. Simon 2009, p. 90). Social systems are therefore, for Luhmann, communication systems. From communication involving at least two participants and a psychological system, contingency or double contingency develops. Simon writes the following about the concept of contingency used in sociological theory (ibid., p. 94): Thus, a triple selection [key statement Luhmann_3, author’s note] must take place for communication to occur. This makes communication an improbable phenomenon. For each of the communication participants could also interpret the perceived signals, the behavior of others, the speech of others, etc., differently, attributing a different or even no meaning to them.
For this, sociological theory uses the concept of contingency, precisely “by excluding necessity and impossibility” (ibid.). However, as soon as a psychological system is involved, the problem of double contingency is virtually always present. It accompanies “all experience, until it encounters another person or a social system to which free choice is attributed.” (ibid., p. 95)
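The improbability that the triple selection imposes on communication can be illustrated with a toy calculation: if each selection succeeds only with some probability, the joint event is their product. The probabilities below are invented for illustration and are not part of Luhmann’s theory.

```python
# Numerical illustration (probabilities are invented) of the "triple
# selection": communication only occurs if information, utterance, and
# understanding are each selected "successfully", which makes the joint
# event improbable even when each single selection is fairly likely.

def communication_probability(p_information, p_utterance, p_understanding):
    """Joint probability of all three selections succeeding."""
    return p_information * p_utterance * p_understanding

p = communication_probability(0.7, 0.7, 0.7)
# p = 0.7 ** 3 = 0.343: three fairly likely single selections already
# make the complete communication event less likely than a coin toss
```

The toy model of course ignores everything that makes Luhmann’s concept rich; it only shows why he can call communication an improbable phenomenon.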
Key statement (Luhmann_4) Social systems are communication systems.
With this brief excursion into the world of Luhmann’s theory of social systems, we will leave it at that. The general difficulty of interpreting his theory and writings in a contemporary way, and of applying them in practice, lies not least in Luhmann’s particular mode of expression and his artful terminology. In any case, the author is not
aware of any entrepreneur who, in his company, which is undoubtedly a social system in which probabilistic action is taken and must be taken, optimizes his strategic and operational company goals according to Luhmann’s theoretical criteria for social systems.
6.2 Information Theory The American mathematician and electrical engineer Claude Elwood Shannon (1916–2001) laid the foundation of information theory with his 1948 work “A Mathematical Theory of Communication”. In the introduction, he writes (Shannon 1948, p. 379; see also Fig. 6.4): The recent development of various methods of modulation such as PCM [Pulse Code Modulation, author’s note] and PPM [Pulse Position Modulation, author’s note] which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist and Hartley on this subject. In the present paper, we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information. The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

[Fig. 6.4 shows Shannon’s general communication system: an information source passes a message to a transmitter, whose signal travels over a channel disturbed by a noise source; the receiver converts the received signal back into a message for the destination.]

Fig. 6.4 Schematic of a general communication system. (Source: Shannon 1948, p. 380, supplemented and modified by the author.)
6 Cybernetics and Theories
Information theory includes concepts such as information, its transmission and data compression, and not least the concept of entropy borrowed from thermodynamics. The information-specific entropy is intended to determine the information content or information density of a message. This means: the more regularly, i.e., predictably, a message is structured, the smaller its specific entropy and thus its information content. Technical, information-processing systems are the major beneficiaries of Shannon’s idea of making the quantity information tangible, or countable, by linking it to the smallest digital unit, the bit (binary digit). It is the unit of measurement for digitally stored and processed data that can take two values with equal probability, usually zero and one. Under the search term “information theory”, it says (https://de.wikipedia.org/wiki/Informationstheorie. Accessed on 10.02.2018): […] This allowed quantitatively exact comparisons of the effort for the technical transmission of information in various forms (sounds, characters, images), determining the efficiency of codes as well as the capacity of information storage and transmission channels. […] A sequence of electrical impulses […] [is] expressed by a binary code […]. In practice, however, the digital revolution in information technology only became possible later – associated with the rapid development of microelectronics in the second half of the 20th century.
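The entropy measure described above can be made concrete in a few lines; this is a minimal sketch (the function name and example messages are chosen for illustration, they do not come from Shannon’s paper):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information content in bits per symbol:
    H = sum over symbols of p_i * log2(1 / p_i),
    where p_i is the relative frequency of symbol i."""
    counts = Counter(message)
    total = len(message)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# A perfectly regular message carries no information per symbol ...
print(shannon_entropy("aaaaaaaa"))  # 0.0
# ... while four equally frequent symbols need 2 bits per symbol.
print(shannon_entropy("abcdabcd"))  # 2.0
```

A message built from a single repeated symbol is fully predictable and has entropy zero; the more evenly the symbol frequencies are spread, the more bits per symbol are needed to encode the message.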
Up to the present time of increasing transformation of digitized technology, economy, and society, which is revealed in our working and living environment through keywords such as “Industry 4.0” and “Internet of Things,” the binary logic introduced by Shannon more than 60 years ago for technical, information-processing machines has not changed. Even modern “mobile phones” still use the basic binary logic, Shannon’s ingenious invention.
6.3 Algorithm Theory Algorithm theory is a (http://universal_lexikon.deacademic.com/204271/Algorithmentheorie. Accessed on 10.02.2018) mathematical theory derived from formal logic [that] deals with the construction, representation, and machine realization of algorithms and provides the foundations of algorithmic languages (algorithm). It gained importance, among other things, for the application of computing machines by developing methods that help to find, for given algorithms, equivalent algorithms of different structure (e.g., with shorter computing time or fewer computing steps) and at the same time to investigate the solvability in principle of mathematical problems.
Algorithms are composed of well-defined individual steps that together form an unambiguous set of instructions for solving a task. In principle, algorithms are not bound to computers. In practice, however, they are processed with mathematical means in information-technology programs for a wide range of applications: from simple mathematical calculations and word-processing programs, through stock-trend analyses in the financial world and special control programs for machine processes, to highly complex control sequences in aerospace or remote-controlled military drone operations.

Particularly noteworthy is the so-called Rete algorithm (network algorithm). It serves in expert systems (https://de.wikipedia.org/wiki/Rete-Algorithmus. Accessed on 10.02.2018) […] for pattern recognition and for mapping system processes through rules. […] The Rete algorithm was developed with the aim of ensuring very efficient rule processing. In addition, even large rule sets can still be handled with good performance. At the time of its development, it was 3000 times superior to existing systems.
Today, the Rete algorithm (Latin rete, “net” or “network”) is present in many control systems. The American computer scientist Charles Forgy (1982) developed it with the involvement of the US Department of Defense. Algorithm theory today provides a rich collection of various classes of algorithms, of which individual class-related algorithms (A.) are mentioned as representatives for many:

• Class problem statement: Optimization A.: linear and non-linear optimization, search for optimal parameters of mostly complex systems.
• Class method: a. Evolutionary A., b. Approximation A.
a. A class of stochastic, metaheuristic optimization methods whose functionality is modeled on evolutionary principles of natural organisms.
b. Solves an optimization problem approximately.
• Class geometry + graphics: a. De Casteljau A., b. Floodfill A.
a. Enables the efficient calculation of an arbitrarily accurate approximate representation of Bézier curves (parametrically modeled curves) by a polyline (the union of the connecting segments of a sequence of points).
b. Aims to capture and fill areas of contiguous pixels of one color in a digital image with a new color.
• Class graph theory: a. Dijkstra A., b. Nearest-neighbor heuristic
a. Solves the shortest-path problem for a given starting node.
b. A heuristic opening procedure, used among other things to approximate a solution to the traveling salesman problem.

Randomized algorithms, encryption algorithms, queue algorithms in production processes, dynamic programming algorithms, and last but not least Big Data algorithms, which in the context of internet search engines and social networks seem to recognize our potential purchase desires better than we do ourselves, are further algorithms with high attention value. Algorithm theory and its products are not only functionally oriented; by current knowledge, some of them are more relevant than ever with regard to ethical issues.
The area of human-machine cooperation or collaboration stands out here in the
development of “Cobots”—collaborative robots—and “humanoid robots” (see Küppers 2018, pp. 305–370). For example, how should algorithms be programmed in the course of spreading digital processes that—linked to the Internet of Things, transport processes, robot services for humans, and much more—take ethical aspects into account, even under the influence of unexpected events? These and other questions about specific existing and upcoming algorithms in the context of human-machine interactions remain exciting. Sources on algorithms include, among others, Sedgewick and Wayne (2014), Ottmann and Widmayer (2012), Kruse et al. (2011).
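One of the class representatives listed above, the Dijkstra algorithm for the shortest-path problem, can be sketched in a few lines (the road network, its node names, and weights are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {start: 0}
    queue = [(0, start)]  # priority queue of (distance, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # outdated queue entry, a shorter path was found earlier
        for neighbor, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(queue, (new_d, neighbor))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1}
```

The direct road A→B of length 4 is beaten by the detour A→C→B of length 3, which is exactly the kind of route choice assumed in the Braess paradox discussed in Sect. 6.6.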
6.4 Automata Theory In the Encyclopedia of Business Informatics, author Stefan Eicker describes the automata theory as follows (http://www.enzyklopaedie-der-wirtschaftsinformatik.de/lexikon/ technologien-methoden/Informatik%2D%2DGrundlagen/Automatentheorie. Accessed on 10.02.2018): Automata theory is an important subject area of theoretical computer science; its findings on abstract computing devices—called automata—are applied in computability and complexity theory, but also in practical computer science (e.g., compiler construction, search engines, protocol specification, software engineering).
And regarding the concept of automata, Eicker explains (ibid.): The starting point of automata theory was Turing’s considerations in the 1930s on the theoretical performance of a computing machine. He examined certain abstract computing devices, the so-called Turing machines; since these machines possess the capabilities of today’s computer systems, the results of his considerations also apply to these systems. Other scientists developed and investigated other types of automata, including finite automata, Moore automata, Mealy automata, nondeterministic finite automata, and pushdown/stack automata. Digression A Turing machine is a computational model of theoretical computer science that models the operation of a computer in a particularly simple, mathematically easy-to-analyze way. A Moore automaton is a finite automaton whose outputs depend solely on its state. A Mealy automaton is a finite automaton whose outputs depend on both its state and its input. A stack automaton is a finite automaton, and a purely theoretical construct, that has been extended by a stack memory. With two stack memories, the automaton has the same power as a Turing machine. Automata process input strings/words; various automata such as Moore automata and Mealy automata can also output characters. The approach of automata can be illustrated using the example of finite (deterministic) automata; such an automaton consists of:
• a finite set of input symbols/characters,
• a finite set of states,
• a set of final states as a subset of the state set,
• a state transition function that returns a (new) state as a result for an argument consisting of state and input symbol, and
• a start state as an element of the set of states.

The language recognized by the automaton comprises all words/strings whose input transfers the automaton from the start state into a final state: the transition function is applied successively to the current state and the next character, and after the last character has been processed the automaton must be in a final state. For example, the operation of an ATM can be described with such an automaton: by entering the symbol “withdrawal,” the automaton enters the state PIN entry and moves through the states First PIN digit to Fourth PIN digit into the state PIN entry successful (each by processing the input symbol digit; if another symbol is read, the automaton switches to the state Incorrect PIN symbol), etc. (ibid.)
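The five components listed above translate almost literally into code. A minimal sketch of a deterministic finite automaton (the example automaton, which accepts words with an even number of 1s, is chosen purely for illustration):

```python
def accepts(word, transitions, start, finals):
    """Run a deterministic finite automaton:
    transitions maps (state, symbol) -> state."""
    state = start
    for symbol in word:
        key = (state, symbol)
        if key not in transitions:
            return False  # undefined transition: reject the word
        state = transitions[key]
    return state in finals  # accepted iff we end in a final state

# Automaton over {0, 1} that accepts words with an even number of 1s.
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(accepts("1001", delta, "even", {"even"}))  # True
print(accepts("111", delta, "even", {"even"}))   # False
```

The input alphabet is implicit in the transition table; the state set is {even, odd}, the final-state set is {even}, and the start state is passed as an argument.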
The Dictionary of Cybernetics describes a machine in the sense of cybernetics [as a] dynamic system that receives information from the environment, stores, processes, and releases information to the environment. (Klaus and Liebscher 1976, p. 66).
Regarding the grammars associated with the machines, Eicker writes again (http://www.enzyklopaedie-der-wirtschaftsinformatik.de/lexikon/technologien-methoden/Informatik%2D%2DGrundlagen/Automatentheorie. Accessed on 10.02.2018): Chomsky [Noam Chomsky, *1928, American linguist, author’s note] developed a hierarchy of formal grammars, the so-called Chomsky hierarchy [Chomsky 1956; Chomsky and Miller 1963]. Grammars are not actually machines, but they have a close relationship to machines in that a language is defined/generated by a grammar, and a suitable machine can determine whether words belong to the language. For example, finite automata recognize regular languages, pushdown automata recognize context-free languages. The fact that formal languages can also be understood as “problems” by assigning semantics to the words of a language (e.g., numbers, logical expressions, or graphs) establishes the connection to computability. The languages defined by grammars are, in particular, the programming languages. A grammar includes

• a finite set of variables that must not be contained in the derived words of the language,
• an alphabet, i.e., a set of terminals as symbols of the words of the language,
• a set of derivation rules by which a specific combination of terminals and variables (in a specific order) is transformed into another combination of terminals and variables, in which the variables of the initial combination are each replaced by a sequence consisting of variables and terminals, and
• a start symbol as an element of the set of variables.
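The interplay of variables, terminals, derivation rules, and start symbol can be illustrated with the classic context-free grammar S → aSb | ε, whose language {aⁿbⁿ} is recognized by a pushdown automaton but not by a finite automaton; a minimal sketch:

```python
def words_up_to(max_len):
    """Enumerate all words of length <= max_len derivable in the grammar
    with variable S (also the start symbol), terminals a and b, and the
    two derivation rules S -> aSb and S -> '' (empty word)."""
    results, queue = [], ["S"]  # start from the start symbol
    while queue:
        form = queue.pop(0)
        if "S" not in form:
            results.append(form)  # only terminals left: a word of the language
            continue
        for replacement in ("aSb", ""):  # apply each derivation rule once
            new = form.replace("S", replacement, 1)
            if len(new.replace("S", "")) <= max_len:
                queue.append(new)
    return results

print(words_up_to(4))  # ['', 'ab', 'aabb']
```

The queue holds sentential forms still containing the variable S; a form becomes a word of the language only once every variable has been rewritten away.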
Pointers to a more detailed introduction to automata theory can be found in the following sources: Hoffmann and Lange (2011); Hopcroft et al. (2011); as well as the extensive literature on theoretical computer science, including Asteroth and Baier (2003); Erk and Priese (2008).
6.5 Decision Theory Decision theory is a branch of applied probability theory for evaluating the consequences of decisions. Decision theory is widely used as a business management tool (Gäfgen 1974). Two well-known methods are the simple utility analysis (German: Nutzwertanalyse, NWA) and the more precise Analytic Hierarchy Process (AHP) [by the mathematician Thomas Saaty, author’s note]. In these methods, criteria and alternatives are presented, compared, and evaluated to find the optimal solution for a decision or problem. (https://de.wikipedia.org/wiki/Entscheidungstheorie. Accessed on 10.02.2018)
Three subareas are distinguished in decision theory (cf. ibid.):

1. Normative decision theory: Its basis is rational human decision-making derived from axioms (presupposed, unproven assumptions). The question is: how should decisions be made?
2. Prescriptive decision theory: Normative models are used that provide strategies and methodological approaches to help people make better decisions, taking into account humans’ limited cognitive abilities.
3. Descriptive decision theory: It refers to actual decisions made in the real environment and rests on empirical questions. Here the question is: how are decisions made?

The basic model of (normative) decision theory can be represented in a result matrix. It contains the decision field and the target system. The decision field includes:

• Action space: the set of possible action alternatives
• State space: the set of possible environmental states
• Result function: the assignment of a value to each combination of action and state

(ibid.; see also the beginning of Sect. 6.5).
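The result matrix and two simple decision rules can be sketched as follows (the alternatives, states, and values are invented for illustration; the maximin rule addresses decisions under uncertainty, the expected-value rule decisions under risk):

```python
# Result matrix: rows = action alternatives, columns = environmental states.
results = {
    "build small plant": [40, 30, 20],
    "build large plant": [70, 10, -20],
}

def maximin(matrix):
    """Pessimistic rule: choose the action whose worst result is best."""
    return max(matrix, key=lambda action: min(matrix[action]))

def expected_value(matrix, probabilities):
    """Under risk: choose the action with the highest expected result,
    given probabilities for the environmental states."""
    return max(matrix, key=lambda action: sum(
        p * r for p, r in zip(probabilities, matrix[action])))

print(maximin(results))                          # build small plant
print(expected_value(results, [0.5, 0.3, 0.2]))  # build large plant
```

The two rules can disagree, as here: the cautious maximin rule avoids the possible loss of the large plant, while the expected-value rule accepts it for a higher average result.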
In her article “Decision Theory” in the Encyclopedia of Business Informatics, Jutta Geldermann describes its foundations, albeit strongly related to economics, as follows (http://www.enzyklopaedie-der-wirtschaftsinformatik.de/lexikon/technologien-methoden/Operations-Research/Entscheidungstheorie. Accessed on 10.02.2018): Complex decisions often overwhelm the so-called “common sense” of decision-makers, as too many aspects and information need to be considered simultaneously. A decision problem is characterized by the presence of at least two alternatives (courses of action, decision
options, actions, strategies) between which at least one decision-maker (e.g., individual, company, state) can or must make a decision (choice, selection) […]. The decision-making process is accordingly the logical and temporal sequence of analyzing a decision problem, in which a solution to the decision problem is achieved by logically linking factual (objective) and evaluative (subjective) decision premises to evaluate the available alternatives. Because decisions necessarily rely on subjective expectations, which can only be verified within certain limits, and on subjective goals and preferences of the decision-maker, there are often no objectively correct decisions. Rather, it is important to adequately consider the subjective expectations and preferences of the decision-maker in decision support. In addition to pure decision logic, descriptive and prescriptive decision theory are distinguished.
And regarding models of decision theory, Geldermann (ibid.) explains: Decision theory forms models to describe situations in which one or more decision-makers decide on a specific course of action by applying a rational decision maxim and a value system, which creates a new situation […]. A model is defined as a purpose-oriented simplified representation of reality […], with descriptive, explanatory (or prognostic), and decision-making (or planning) models being distinguished. Decision models essentially comprise objectives, alternatives (courses of action, actions) and environmental conditions (information and evaluations) in order to analytically derive the consequences of the decision on the one hand and to subjectively evaluate them on the other hand […]. For this purpose, a suitable model must be consistent, have a connection to reality, contain information, and be verifiable. [Fig. 6.5, author’s note] […] shows possible characteristics of decision situations […].
6.6 Game Theory In his book on “Game Theory. Dynamic Treatment of Games,” Krabs (2005, p. XI) introduces game theory as follows: Game theory began in 1928 with a paper by John v. Neumann titled “On the Theory of Social Games” in volume 100 of the Mathematische Annalen. In this paper, he starts with the following question: “n players, S1, S2, …, Sn, play a given social game G. How should one of these players, Sm, play in order to achieve the most favorable result?” This question, of course, needs to be clarified. In a first step, John v. Neumann describes a social game as follows: A social game consists of a certain series of events, each of which can turn out in finitely many different ways. For some of these events, the outcome depends on chance, i.e., it is known with which probabilities the individual results will occur, but no one can influence them. The remaining events, however, depend on the will of the individual players S1, S2, …, Sn. That is, for each of these events, it is known which player Sm determines its outcome and which results of other (“earlier”) events he is already aware of at the moment of his decision. Once the outcome of all events is known, it can be calculated according to a fixed rule which payments the players S1, S2, …, Sn have to make to each other.
Fig. 6.5 lists the characteristics of decision models as criteria with their possible expressions:

• (In)security: security vs. uncertainty (uncertainty in the narrower sense, risk, unsharpness)
• Solution space: discrete vs. continuous
• Usage duration: single decision vs. program decision
• Alternatives: absolute vs. relative advantageousness
• Targets: one target vs. multiple targets
• Time: static vs. dynamic; single-level vs. multilevel; rigid vs. flexible

Fig. 6.5 Characteristics of decision models. (Source: after http://www.enzyklopaedie-der-wirtschaftsinformatik.de/lexikon/technologien-methoden/Operations-Research/Entscheidungstheorie. Accessed on 10.02.2018)
Bartholomae and Wiens (2016, p. V) begin their introduction to game theory as follows: As a scientific discipline, game theory deals with the mathematical analysis and evaluation of strategic decisions. Game-theoretical fields of application are omnipresent in our everyday life, as ultimately every social issue involving at least two parties interacting and making strategic considerations can be examined using the tools of game theory. Examples from the field of economics include financial and social policy measures, entrepreneurial decisions such as estimating the effects of market entry, a merger, or a tariff structure, negotiations between tariff parties, and even extreme behavioral risks such as economic espionage or terrorism. The high relevance of game-theoretical issues and the simultaneous increasing compatibility with other disciplines, such as psychology or operations research, make game theory an indispensable part of basic economic education.
Game theory differs from decision theory (Sect. 6.5) in that the success of each individual player always depends on, or is influenced by, the actions of the other players. Decisions are therefore always interdependent decisions. Game theory can be divided into cooperative and non-cooperative game theory, which can be explained as follows (according to: https://de.wikipedia.org/wiki/Spieltheorie. Accessed on 10.02.2018):
• If the players can enter into binding contracts, this is referred to as cooperative game theory. However, if all behaviors (including possible cooperation between players) are self-enforcing, i.e., they result from the self-interest of the players without the need for binding contracts, this is referred to as non-cooperative game theory.
• Cooperative game theory is to be understood as an axiomatic theory of coalition functions (characteristic functions) and is payout-oriented.
• Non-cooperative game theory is action- or strategy-oriented and is closely related to the “Nash equilibrium” (Nash, 1950a). Non-cooperative game theory is a subfield of microeconomics, while cooperative game theory represents a separate branch of theory. Well-known concepts of cooperative game theory include the core, the Shapley solution, and the Nash bargaining solution (Nash, 1950b). See also Holler and Illing 2016.

Well-known examples of game theory include, among others:

• The Prisoner’s Dilemma (according to: https://de.wikipedia.org/wiki/Gefangenendilemma. Accessed on 10.02.2018) Two prisoners are accused of having committed a crime together. Both are interrogated separately, without being able to speak to each other. If both prisoners deny the crime, both receive a minor punishment, as they can only be proven guilty of a less severely punished offense. If both confess to the crime, they receive a high punishment, but not the maximum penalty. However, if only one of the two prisoners confesses to the crime, that person goes free as a key witness. The other prisoner is considered convicted without having confessed to the crime and receives the maximum penalty. How do the prisoners decide?
• The stag hunt (according to: https://de.wikipedia.org/wiki/Hirschjagd. Accessed on 10.02.2018) The stag hunt is a parable that goes back to Jean-Jacques Rousseau and is also known as the hunting party. In addition, the stag hunt (engl.
stag hunt or assurance game) represents a fundamental game-theoretical constellation. Rousseau dealt with this in the sense of his investigations into the formation of collective rules under the contradictions of social action, so that paradoxical effects lead to the institutionalization of compulsion (to cooperate) in order to prevent breaches of contract. He describes the situation as follows: Two hunters go hunting, where each has so far only been able to catch a hare on his own. Now they try to coordinate, that is, to make an agreement to catch a stag together, which would bring both more than a single hare. During the hunt, the dilemma develops analogously to the prisoner’s dilemma: if a hare crosses the path of one of the two hunters during the hunt, he must decide whether to catch the hare now or not. If he catches the hare, he forfeits the opportunity to catch a stag together. At the same time, he must ponder how the other would
act. If the other is in the same situation, there is a risk that the other will catch the hare and he himself will ultimately suffer a loss: getting neither a hare nor a share of a stag.
• The Braess Paradox (according to: https://de.wikipedia.org/wiki/Braess-Paradoxon. Accessed on 10.02.2018) The Braess Paradox is an illustration of the fact that an additional action option, assuming rational individual decisions, can lead to a worsening of the situation for everyone. The paradox was published in 1968 by the German mathematician Dietrich Braess. Braess’ original work shows a paradoxical situation in which the construction of an additional road (i.e., an increase in capacity) leads to an increase in travel time for all drivers at the same traffic volume (i.e., the capacity of the network is reduced). It is assumed that each road user chooses his route in such a way that there is no other option with a shorter travel time for him. There are examples showing that the Braess Paradox is not just a theoretical construct. In 1969, the opening of a new road in Stuttgart led to a deterioration of traffic flow in the vicinity of Schlossplatz. In New York, the reverse phenomenon was observed in 1990: the closure of 42nd Street led to fewer traffic jams in the surrounding area. Further empirical reports on the occurrence of the paradox can be found on the streets of Winnipeg. In Neckarsulm, traffic flow improved after a frequently closed railway crossing was removed entirely; that this made sense became apparent when it temporarily had to be closed again due to construction work. Theoretical considerations also suggest that the Braess Paradox occurs frequently in random networks. Many networks in the real world are random networks.
• The Tragedy of the Commons (according to: https://de.wikipedia.org/wiki/Tragik_der_Allmende. Accessed on 10.02.2018) Tragedy of the Commons (engl.
tragedy of the commons), the tragedy of the common goods, refers to a social-science and evolutionary-theory model according to which freely available but limited resources are not used efficiently and are threatened by overuse, which in turn threatens the users themselves. This behavioral pattern is also studied by game theory. Among other things, it investigates why, in many cases, individuals stabilize social norms through altruistic sanctions despite high individual costs. On the Tragedy of the Commons, see also Hardin (1968), who spoke of an inevitable fate of humanity, and Radkau (2002), who describes a broader societal view of this concept. Within the framework of systems theory, the tragedy of the commons is attributed in particular to behavior-oriented, positive feedback loops that lead to self-reinforcing vicious circles. This means, as quoted analogously before, nothing other than that the intensity of use of common goods runs inversely proportional to the scarcity of the resources and the associated competition (Küppers 2013; Diamond 2005; Senge et al. 1994).
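The prisoner’s dilemma described above can be checked mechanically for Nash equilibria, i.e., strategy pairs from which no player can improve by deviating alone (the payoff numbers, negated years in prison, are invented for illustration):

```python
# Payoffs for the prisoner's dilemma; entries: (player 1, player 2).
# Higher is better, so prison years are negated.
payoff = {
    ("deny", "deny"):       (-1, -1),   # minor punishment for both
    ("deny", "confess"):    (-10, 0),   # key witness goes free
    ("confess", "deny"):    (0, -10),
    ("confess", "confess"): (-8, -8),   # high, but not maximum, penalty
}
strategies = ("deny", "confess")

def is_nash(s1, s2):
    """True if neither player can improve by deviating unilaterally."""
    best1 = all(payoff[(s1, s2)][0] >= payoff[(alt, s2)][0] for alt in strategies)
    best2 = all(payoff[(s1, s2)][1] >= payoff[(s1, alt)][1] for alt in strategies)
    return best1 and best2

equilibria = [(s1, s2) for s1 in strategies for s2 in strategies if is_nash(s1, s2)]
print(equilibria)  # [('confess', 'confess')]
```

The only equilibrium is mutual confession, even though mutual denial would be better for both, which is precisely the dilemma.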
6.7 Learning Theory In the Lexicon of Psychology, the definition of learning theory is as follows (http://www. spektrum.de/lexikon/psychologie/lerntheorie/8813. Accessed on 12.02.2018): […] [Learning theory is the] systematics of knowledge about learning. Learning theories describe the conditions under which learning takes place and enable verifiable predictions. Meanwhile, a variety of learning theories exist that must be considered as complementary. Roughly two directions of learning are distinguished: stimulus-response theories, which deal with the investigation of behavior, and cognitive theories, which deal with processes of perception, problem-solving, decision-making, concept formation, and information processing.
The Springer Gabler Business Dictionary describes learning theory as follows (http://wirtschaftslexikon.gabler.de/Definition/lerntheorien.html. Accessed on 12.02.2018): Learning theories are models and hypotheses that attempt to describe and explain learning psychologically in a paradigmatic way. The seemingly complex process of learning, i.e., the relatively stable change in behavior, is explained using the simplest possible principles and rules.

1. Behaviorist learning theories: assume a connection between observed stimuli and the resulting reactions (rewarded behavior is repeated, punished behavior is abandoned).
2. Cognitive learning theories: learning as a higher mental process; the acquisition of knowledge as a consciously designed and complex process.
3. Social learning theory: knowledge is acquired unconsciously through observation and imitation.
Under the keyword learning theory, various learning-theoretical approaches are listed that refer to individual forms of learning (according to: https://de.wikipedia.org/wiki/Lerntheorie. Accessed on 12.02.2018):

• Behaviorist learning It is a scientific concept for investigating and explaining the behavior of humans and animals using natural-science methods. The American psychologist Burrhus Frederic Skinner (1904–1990) (1969, 1999) and the Russian physician Ivan Petrovich Pavlov (1849–1936) (see Mette 1958) are two early representatives of this school.
• Instructionalist learning (according to: https://de.wikipedia.org/wiki/Instruktionalismus. Accessed on 12.02.2018) Learners are instructed in a learning activity; knowledge is imparted to them, passively absorbed, and then deepened through exercises. The advantage of this learning model is that the learning process is very simple, the learner bears little responsibility for his or her learning process since it is predetermined, and the learning success is easily controllable since the learning objectives are predefined for the learners. The imparted knowledge is thus collective, i.e., the same for all learners.
The disadvantage here is that the learner as an individual is not taken into account. Little attention is paid to his or her prior knowledge, experiences, and strengths. As a result, the learned knowledge is also not very individual, which leads to it being poorly retained by the learner.
• Cognitive learning (learning through insight) (according to: https://de.wikipedia.org/wiki/Lernen_durch_Einsicht. Accessed on 12.02.2018) Learning through insight, or cognitive learning, refers to the acquisition or restructuring of knowledge based on the use of cognitive abilities (perceiving, imagining, etc.). Insight here means recognizing and understanding a situation, grasping cause-effect relationships, and the meaning and significance of a situation. This enables goal-oriented behavior and is usually recognizable by a change in behavior. Learning through insight is the sudden, complete transition to the solution state (all-or-nothing principle) after initial trial-and-error behavior. The behavior resulting from insightful learning is almost error-free.
• Situational learning (constructivism) (according to: https://de.wikipedia.org/wiki/Konstruktivismus_(Lernpsychologie). Accessed on 12.02.2018) Constructivism in the context of learning psychology postulates that human experience and learning are subject to construction processes influenced by sensory-physiological, neuronal, cognitive, and social processes. Its core thesis states that learners create an individual representation of the world during the learning process. What someone learns under certain conditions depends heavily, but not exclusively, on the learners themselves and their experiences.
• Biocybernetic-neuronal learning (according to: https://de.wikipedia.org/wiki/Lerntheorie. Accessed on 12.02.2018) Biocybernetic-neuronal approaches are learning methods that originate from the field of neurobiology and primarily describe the functioning of the human brain and nervous system.
One subject within biocybernetic-neuronal learning theories is mirror neurons, which, in addition to empathy and rapport skills, could also be involved in basic neuronal functions for learning by imitation. See also Rizzolatti and Fabbri Destro (2008). An early representative of these learning methods was Frederic Vester (1975), who laid the foundation for biocybernetic communication with his detailed description of biological neuronal learning processes (thinking, learning, forgetting), further strengthened by his numerous books on networked thinking and acting. More recently, Manfred Spitzer (1996) has dealt with models for learning, thinking, and acting in his book “Geist im Netz”. Spitzer has become particularly well-known recently with a controversially discussed and increasingly influential learning model concerning digital education (Spitzer 2012). The provocative title of his book is: “Digital Dementia. How we are driving ourselves and our children out of our minds.”
• Machine Learning (according to: https://de.wikipedia.org/wiki/Maschinelles_Lernen. Accessed on 12.02.2018)
6.7 Learning Theory
Machine learning is an umbrella term for the "artificial" generation of knowledge from experience: an artificial system learns from examples and, after the learning phase is completed, can generalize from them. This means that the examples are not simply memorized; rather, the system "recognizes" patterns and regularities in the learning data. In this way, the system can also evaluate unknown data (learning transfer), or it can fail on unknown data. Applications include automatic diagnostic methods, the recognition of trends in market analyses, speech and text recognition, and the increasingly used analysis methods of internet operators for predicting behavior, such as the purchase intentions of network users, using big data and deep mining methods. The algorithmic approaches used for this can be roughly divided into:

• Supervised learning
The algorithm learns a function from given pairs of inputs and outputs. During the learning process, a "teacher" provides the correct function value for an input. The goal of supervised learning is to train the network so that it establishes associations after several computational cycles with different inputs and outputs. A subfield of supervised learning is automatic classification; an application example is handwriting recognition.

• Semi-supervised learning
Corresponds to supervised learning with limited inputs and outputs.

• Unsupervised learning
The algorithm generates a model for a given set of inputs that describes the inputs and enables predictions. There are clustering methods that divide the data into several categories that differ from each other by characteristic patterns. The network thus independently creates classifiers by which it divides the input patterns.
An important algorithm in this context is the EM algorithm (expectation-maximization algorithm of mathematical statistics), which iteratively determines the parameters of a model so that it optimally explains the observed data. It assumes the existence of unobservable categories and alternately estimates the membership of the data in one of the categories and the parameters that make up the categories. An application of the EM algorithm can be found, for example, in Hidden Markov Models (HMMs), a stochastic model.

• Reinforcement learning
The algorithm learns a strategy through reward and punishment: how to act in potentially occurring situations in order to maximize the benefit of the agent (i.e., the system to which the learning component belongs). This is the most common form of learning for humans.

• Active learning
The algorithm has the option of requesting the correct outputs for a part of the inputs. In doing so, it must determine which questions promise a high information gain in order to keep the number of questions as small as possible.
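The EM alternation just described can be sketched in a few lines of Python. The following minimal example fits a two-component, one-dimensional Gaussian mixture; the data, the two-component restriction, and all parameter choices are illustrative assumptions and are not taken from the text:

```python
import math
import random

def em_gaussian_mixture(data, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    It alternates the E-step (estimate the soft membership of each
    point in the unobservable categories) and the M-step (re-estimate
    the category parameters so the model better explains the data)."""
    mu = [min(data), max(data)]          # crude initial means
    sigma = [1.0, 1.0]
    weight = [0.5, 0.5]

    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E-step: responsibility of each category for each data point
        resp = []
        for x in data:
            p = [weight[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
            total = sum(p)
            resp.append([pk / total for pk in p])
        # M-step: re-estimate means, spreads, and mixture weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)
            weight[k] = nk / len(data)
    return mu, sigma, weight

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
mu, sigma, weight = em_gaussian_mixture(data)
print(sorted(mu))  # the two estimated means settle near the true cluster centers
```

The alternation between estimating category membership and re-estimating category parameters is exactly the iteration the text attributes to the EM algorithm.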
6 Cybernetics and Theories
For more information on machine learning, see Hofstetter (2014). In connection with the current controversial discussion about suitable digital learning models practiced in educational institutions of all kinds, we take another look back at the recent past and present learning theories with a focus on their didactic background, as compiled by Susanne Meir together with the Austrian sociologist and educational scientist Peter Baumgartner and Sabine Payr (media didactics) (Meir n.d.; see also Baumgartner 2012). At the beginning, three questions are raised (Meir n.d., p. 9): What happens during learning? How can learning be explained? What role do teachers and learners play in this? In explaining learning processes in the field of e-learning, three learning theories stand in the foreground, and they significantly influence the design and implementation of e-learning. All three theories have their importance for the construction and design of virtual learning environments and are therefore briefly outlined here. These theories are

• behaviorism—learning through reinforcement,
• cognitivism—learning through insight and understanding,
• constructivism—learning through personal experience, perception, and interpretation.

Each of these theories provides a practical approach to implementing learning processes, although they show considerable differences and contrasts in their attempts to explain them.
These explanatory attempts of the three learning theories are briefly compared:

1. Behaviorism (ibid., pp. 10–11)

How is learning explained according to this theory? According to the doctrine of behaviorism, learning is triggered by a stimulus-response chain: certain stimuli are followed by certain responses. Once a stimulus-response chain has been established, a learning process is complete and the learner has learned something new. As a result of certain stimuli, positive and negative reactions can occur. While desired positive reactions can be strengthened by rewards, undesired or negative reactions are reduced by remaining unrewarded. Reward and punishment thus become central factors of learning success. This explanation is expanded by "operant conditioning", or instrumental learning, in which behavior depends heavily on the consequences that follow it; these consequences become the starting point for future behavior.

What role does the learner play? The learner remains passive and is driven from outside: they become active in response to external stimuli and react to them. […]

What role does the teacher play? The teacher assumes a central role. They set appropriate incentives and provide feedback on the students' reactions. In this way, with their positive or negative evaluation or feedback, they intervene centrally in the learner's learning process. What happens between the "creation of incentives" and the learners' reactions does not need to concern the teacher, as these areas belong to the so-called "black box".
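The reward mechanism of operant conditioning can be illustrated with a toy simulation. This sketch is not from Meir; the response names and all numeric values are hypothetical. A response that is rewarded gains association strength and is therefore chosen more and more often, while the unrewarded response stays where it started:

```python
import random

def operant_conditioning(trials=1000, reward_delta=0.05, seed=1):
    """Toy model of operant conditioning: two possible responses to a
    stimulus are chosen in proportion to their current association
    strength; only one of them is rewarded, so its strength grows and
    it comes to dominate the behavior."""
    random.seed(seed)
    strength = {"press_lever": 0.1, "ignore": 0.1}  # hypothetical responses
    for _ in range(trials):
        total = sum(strength.values())
        r = random.uniform(0, total)
        response = "press_lever" if r < strength["press_lever"] else "ignore"
        if response == "press_lever":   # only this response is rewarded
            strength["press_lever"] += reward_delta
    return strength

s = operant_conditioning()
print(s)  # the rewarded response ends up far stronger than the unrewarded one
```

The consequence of a response (reward or no reward) determines future behavior, which is exactly the point the behaviorist explanation makes.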
2. Cognitivism (ibid., pp. 12–13) How is the learning process explained according to this theory? According to the theory of cognitivism, learning refers to the intake, processing, and storage of information. The focus is on the processing process, tied to the correct methods and problem statements that support this process. The learning material itself, the information processing, and the problem statement and methodology play a decisive role, as they greatly influence the learning process. The focus is therefore on problems whose solution allows the learner to gain insights and thus expand their knowledge. […] What role does the learner play? The learner takes on an active role that goes beyond mere reaction to stimuli. They learn by independently absorbing, processing information, and developing solutions based on given problem statements. Due to their ability to solve problems, their position in the learning process becomes more significant. What role does the teacher play? The teacher has a central role in the didactic preparation of problem statements. They select or provide information, set problem statements, and support learners in processing the information. They have the primacy of knowledge transfer.
3. Constructivism (ibid., pp. 14–15) How is the learning process explained according to this theory? The learning process itself is very open. It is seen as a process of individual construction of knowledge. Since, according to this theory, there is no right or wrong knowledge, but only different perspectives that have their origin in the personal experience of the individual, the focus is not on the controlled and guided transmission of content, but on the individually oriented self-organized processing of topics. The goal is not for learners to find correct answers based on correct methods, but for them to be able to deal with a situation and develop solutions from it. What role does the learner play? The learner is at the center of this theory. Information is provided to them with the aim of defining and solving problems themselves from the information. They receive few specifications and must find a solution in a self-organized way. They already bring competencies and knowledge. Therefore, the focus is on the recognition and appreciation of the learners and the concentration on the individual knowledge that each student brings with them. What role does the teacher play? The role of the teacher goes beyond the tasks of information presentation and knowledge transfer. They not only convey knowledge or prepare problems, but also take on the role of a coach or learning companion who supports independent and social learning processes. It is their responsibility to create an atmosphere in which learning is possible. In this sense, the establishment of authentic contexts and appreciative relationships with the learners becomes of central importance.
The scope given to learning theory, compared with the other system theories described, reflects the fundamental importance that learning has for people, all the more so as digital techniques penetrate ever deeper into the human learning cosmos. On the one hand, people as teachers in educational institutions are supported or replaced by digital machines and their teaching algorithms. On the other hand, learning people increasingly collaborate, cooperate, and compete with digital machines and their learning algorithms, in their profession and leisure time, for work, jobs, and services. High mindfulness (Küppers and Küppers 2016) is one of the decisive commandments here, and it is of particular importance in systems theory and especially in the education sector. Figure 6.6 completes the treatment of learning theory with a comparison of the learning paradigms presented:

Category: Behaviorism / Cognitivism / Constructivism
The brain is a: passive tank / computer / informationally closed system
Knowledge is: deposited / processed / constructed
Knowledge means: a correct input/output relation / an adequate internal processing procedure / being able to operate with a situation
Learning objectives: correct answers / correct methods for finding answers / managing complex situations
Paradigm: stimulus-response / problem solving / construction
Strategy: teaching / observing and helping / cooperating
The teacher is: an authority / a tutor / a coach, player, trainer
Feedback is: externally specified / externally modeled / internally modeled
Interaction: rigidly predetermined / dynamic, depending on the external problem model / self-referential, circular, structurally determined (autonomous)
Program features: rigid process, quantitative time and response statistics / dynamically controlled process, predefined problem, response analysis / dynamic, complex networked systems, no predefined problem

Fig. 6.6 Learning paradigms in comparison. (Source: Baumgartner 1994, p. 110, p. 174)
Feature: characteristic values
Environment: open / relatively isolated / closed
Function, time dependency: static / dynamic
Function, function values: continuous / discrete
Function, function type: linear / non-linear
Function, determinacy: deterministic / stochastic
Function, behavioral form: unstable / stable / ultra-stable
Structure, time dependency: rigid / flexible / self-organizing
Structure, number of subsystems: simple / complicated
Structure, number of relations: simple / complex / very complex
Structure, structural form: unspecific forms / specific graphs / feedback relationships

Fig. 6.7 Morphological System Classification. (Source: after Ropohl 2012, p. 91)
In his book "General Systems Theory", Ropohl (2012) described not only the basic principles of his own General Systems Theory (Chap. 2) but also some specific system approaches (Chaps. 3–5), some of which are also discussed here. His presentation of a morphological system classification (ibid., p. 91) will certainly be helpful to readers who delve into the variety of systemic and cybernetic theories and practical approaches, allowing them to place their specific treatment of systems theories and systemic/cybernetic applications within this classification scheme (Fig. 6.7).
6.8 Control Questions

Q 6.1 Outline and describe the three system concepts (system models) of the Systems Theory of Technology according to Ropohl. What specific properties can be identified in the three system concepts?
Q 6.2 Name and describe the four roots of modern systems theory according to Ropohl.
Q 6.3 What do natural, psychic, and biological systems operate with according to Luhmann?
Q 6.4 Luhmann provides several explanations for what he understands by communication. Name four of them.
Q 6.5 Outline and describe the schema of a general communication system according to Shannon.
Q 6.6 Why are technical, information-processing systems the major beneficiaries of Shannon's insights into information theory?
Q 6.7 What is understood by algorithm theory?
Q 6.8 Describe the Rete algorithm.
Q 6.9 Name four different classes of algorithms, each with two concrete algorithmic applications or names of the respective algorithms.
Q 6.10 Describe or define what is understood by automata theory.
Q 6.11 Name and describe four different automata. What can finite (deterministic) automata process? List five features.
Q 6.12 What do you understand by "Chomsky hierarchy"?
Q 6.13 Languages defined by grammars are called programming languages. What features does the grammar include? List five of them.
Q 6.14 Decision theory distinguishes three subareas. What are they and how do they differ?
Q 6.15 The basic model of (normative) decision theory can be represented in a result matrix. This includes the decision field and the target system. How is the decision field structured?
Q 6.16 Describe game theory according to Bartholomae and Wiens.
Q 6.17 How do game theories differ from decision theories?
Q 6.18 How do the Nash bargaining solution and the Nash equilibrium differ?
Q 6.19 Describe the game theory example of the prisoner's dilemma.
Q 6.20 Describe the game theory example of the stag hunt.
Q 6.21 Describe the game theory example of the Braess paradox.
Q 6.22 Describe the game theory example of the tragedy of the commons.
Q 6.23 Describe what is meant by "learning theory."
Q 6.24 Name and describe five different learning theory approaches.
Q 6.25 In the context of big data and deep mining methods, various algorithmic approaches are used. Name and explain five of these approaches.
Q 6.26 In explaining learning processes in the field of e-learning, three learning theories are in the foreground. Name and describe these. What role do the learner and teacher play in the respective learning theories?
References

Asteroth A, Baier C (2003) Theoretische Informatik. Pearson, München
Bartholomae F, Wiens M (2016) Spieltheorie. Ein anwendungsorientiertes Lehrbuch. Springer Gabler, Wiesbaden
Baumgartner P (1994) Lernen mit Software. Studien, Innsbruck
Baumgartner P (2012) Taxonomie von Unterrichtsmethoden. Waxmann, Münster
Chomsky N (1956) Three models for the description of language. IRE Trans Inf Theory 2:113–124
Chomsky N, Miller GA (1963) Introduction to the formal analysis of natural languages. In: Handbook of mathematical psychology. Wiley, New York
Diamond J (2005) Collapse. How societies choose to fail or succeed. Viking, Penguin Group, New York
Dieckmann J (2004) Luhmann-Lehrbuch. UTB 2486. Fink, München
Erk K, Priese L (2008) Theoretische Informatik. Springer, Berlin
Forgy C (1982) RETE: a fast algorithm for the many pattern/many object match problem. Artif Intell 19(1):17–38
Gäfgen G (1974) Theorie der wirtschaftlichen Entscheidung. Untersuchung zur Logik und Bedeutung des rationalen Handelns. Mohr, Tübingen
Hardin G (1968) The tragedy of the commons. Science, New Series 162(3859):1243–1248
Hoffmann M, Lange M (2011) Automatentheorie und Logik. Springer, Berlin/Heidelberg
Hofstetter Y (2014) Sie wissen alles. Wie intelligente Maschinen in unser Leben eindringen und warum wir für unsere Freiheit kämpfen müssen. Bertelsmann, München
Holler MJ, Illing G (2016) Einführung in die Spieltheorie, 8. Aufl. Springer, Berlin
Hopcroft JE, Motwani R, Ullman JD (2011) Einführung in die Automatentheorie. Pearson, München
Klaus G, Liebscher H (1976) Wörterbuch der Kybernetik. Dietz, Berlin
Krabs W (2005) Spieltheorie. Dynamische Behandlung von Spielen. Teubner, Stuttgart/Leipzig/Wiesbaden
Kruse R et al (2011) Computational intelligence. Vieweg+Teubner, Wiesbaden
Küppers EWU (2013) Denken in Wirkungsnetzen. Nachhaltiges Problemlösen in Politik und Gesellschaft. Tectum, Marburg
Küppers EWU (2018) Die Humanoide Herausforderung. Leben und Existenz in einer anthropozänen Zukunft. Springer, Wiesbaden
Küppers J-P, Küppers EWU (2016) Hochachtsamkeit. Über die Grenzen des Ressortdenkens. Reihe Essentials. Springer, Wiesbaden
Luhmann N (1988) Was ist Kommunikation?
In: Simon FB (ed) (1997) Lebende Systeme – Wirklichkeitskonstruktionen in der Systemischen Therapie. Suhrkamp TB, Berlin, pp 19–31
Luhmann N (1991; first edition 1984) Soziale Systeme – Grundriss einer allgemeinen Theorie, 4. Aufl. Suhrkamp, Frankfurt am Main
Luhmann N (1997) Die Gesellschaft der Gesellschaft. Suhrkamp, Frankfurt am Main
Meir S (n.d.) Didaktischer Hintergrund. Lerntheorien. https://lehrerfortbildung-bw.de/st_digital/elearning/moodle/praxis/einfuehrung/material/2_meir_9-19.pdf. Accessed 12 Feb 2018
Mette A (1958) J. P. Pawlow. Sein Leben und Werk. Dobbeck, München
Nash JF (1950a) Non-cooperative games. Dissertation, Princeton University. https://rbsc.princeton.edu/sites/default/files/Non-Cooperative_Games_Nash.pdf. Accessed 5 Jan 2019
Nash JF (1950b) The bargaining problem. Econometrica 18(2):150–162
Ottmann T, Widmayer P (2012) Algorithmen und Datenstrukturen, 5. Aufl. Spektrum Akademischer Verlag, Heidelberg
Radkau J (2002) Natur und Macht. Eine Weltgeschichte der Umwelt. C. H. Beck, München
Rizzolatti G, Fabbri Destro M (2008) Mirror neurons. Scholarpedia 3(1):2055
Ropohl G (2009) Allgemeine Technologie. Eine Systemtheorie der Technik, 3. Aufl. Universitätsverlag Karlsruhe, Karlsruhe
Ropohl G (2012) Allgemeine Systemtheorie. Einführung in transdisziplinäres Denken. Edition sigma, Berlin
Sedgewick R, Wayne K (2014) Algorithmen. Algorithmen und Datenstrukturen. Pearson, München
Senge PM et al (1994) The fifth discipline fieldbook. N. Brealey, London (German: Das Fieldbook zur Fünften Disziplin. Klett-Cotta, 1996)
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423, 623–656 (reprinted with corrections)
Simon FB (ed) (1997) Lebende Systeme – Wirklichkeitskonstruktionen in der Systemischen Therapie. Suhrkamp TB, Berlin
Simon FB (2009) Einführung in Systemtheorie und Konstruktivismus. Carl Auer, Heidelberg
Skinner BF (1969; first edition 1948) Walden Two. Utopische Erzählung. Macmillan, New York. New edition 1969 with a current essay by the author: Walden Two Revisited
Skinner BF (1999) The behavior of organisms: an experimental analysis. Reprint by the B. F. Skinner Foundation; first published 1938, Appleton-Century-Crofts, New York
Spitzer M (1996) Geist im Netz. Modelle für Lernen, Denken und Handeln. Spektrum Akademischer Verlag, Heidelberg
Spitzer M (2012) Digitale Demenz. Wie wir uns und unsere Kinder um den Verstand bringen. Droemer, München
Vester F (1975) Denken, Lernen, Vergessen. Was geht in unserem Kopf vor, wie lernt das Gehirn, und wann lässt es uns im Stich? DVA, Stuttgart
7 Cybernetic Systems in Practice
Summary
In the final chapter, "Cybernetic Systems in Practice," we will get to know cybernetic systems from various practical areas. We focus on four dominant environmental areas that affect us all: nature, technology, economy, and society. These are concretized by various "application scenarios," ranging from control loops of the human organism and the forest ecosystem, through control mechanisms of various technical apparatuses and tools, economic models and management instruments, to models of sociology/psychology, "cybernetic governance," and even the military field. An introductory overview of the cybernetic "status quo" of these four environmental areas is provided.

Nature: It is at the same time the evolutionary model for all cybernetic systems. Its functionality through adaptive progress, its fault-tolerant behavior, its well-dosed risk strategy in highly complex spaces, and its exemplary strategies of sustainability for all organisms are, taken as a whole, outstanding compared to everything that humans have ever achieved in their development. The application of cybernetic systems by humans in practice must therefore, within certain limits, be measured against the technical achievements of "nature-cybernetics."

Technology: Apart from historical technical achievements of humanity that date back thousands of years, we take the period from the beginning of industrialization in the late 18th century to the present as the time scale for human creativity in making use of cybernetic systems. One of the first pioneers of technology is James Watt, who realized the cybernetic control mechanism with negative feedback in his steam engine (Fig. 3.8). Through a multitude of further applications, in particular Norbert Wiener's cybernetic control in a flight system in the 1940s, machines, automata, apparatuses, or devices—be it passenger
cars, washing machines, coffee machines, old stationary and new mobile robots, or the upcoming "networked things of the Internet" (Küppers 2018) from the work and leisure environment—nowadays have cybernetic control mechanisms that make many things easier for us, without our directly recognizing how the cybernetic mechanism works.

Economy: Here, cybernetic control systems have long been, and continue to be, criminally neglected, and this will surely remain so tomorrow. Companies are by no means economic systems that can follow a growth philosophy through monocausal, "uncybernetic" strategies of economic progress (at what price, and for whom?) and still, as many economists believe, achieve sustainable success without systemic networking or cybernetic strategies. The cybernetic realities in the economy, which do not disappear just because they are not recognized and used(!), speak a very different language. There are initial, tentative approaches from research and applied entrepreneurial environments, e.g., the St. Gallen Management Model (SGMM), which show that cybernetic control strategies in entrepreneurial processes do indeed produce successes, compared with classical control instruments, that are worth developing further.

Society: Within it, an unimaginable number of processes of information, energy, and material processing take place. It is inherent in humans, as subjects of evolutionary progress, to think and act cybernetically. It is therefore astonishing that, on closer examination of human activities, the vast majority of cases reveal a tangle of causal and monocausal development paths that have nothing to do with the abilities actually given to us humans along our developmental path. The system-stabilizing cybernetic element of negative feedback is found only marginally in complex societies, and even more rarely in the politics that guides them. Rather, the opposite is the case.
Whether between politicians themselves, between politicians and citizens, or between politicians and other decision-makers in societies: misguided, short-sighted compromises of all kinds are the manifold causes of what the author today calls the "age of radical human unreason" and what the Dutch climate researcher P. J. Crutzen (*1933) and the American biologist E. F. Stoermer (1934–2012) call the "Anthropocene" (cf. Küppers 2018; Crutzen and Stoermer 2000): the widespread ignoring by societal decision-makers of cybernetic laws in climate change and in the water, soil, and air changes of a planet Earth that is limited and cannot be expanded at will; a policy made by politicians who are elected by citizens but show a lack of respect towards them. Confrontation through strength follows confrontation through more strength, although nature proves that cooperation and self-organization bring more sustainable progress. The following examples all show the advantageous system-stabilizing effect of negative feedback. Especially in societies and in sociotechnical systems, where humans interact with humans and humans with machines, we must learn to apply again the basic standards of sustainable progress, to which cybernetic systems contribute more than the
fixation on one-sided target strategies, as practiced, for example, by the economy. This can and will never work well in the long run in an environment full of complex dynamic interrelationships. Thus, in Chapter 7, various fields of application are discussed in which cybernetic processes perform their functional sequences. It quickly becomes clear that cybernetic processes have conquered a multitude of specific applied problem solutions since their “childhood” in the 1940s. In his standard work on cybernetics, Norbert Wiener wrote in 1948 (quoted from the 1st German edition 1963, pp. 26–27): For many years, Dr. Rosenblueth [a Mexican physiologist and close scientific companion of Norbert Wiener, see Sect. 4.2, the author] and I shared the conviction that the most fruitful areas for the progress of science were those that were neglected as no man’s land between the various existing disciplines. […] It is these border areas of science that offer the qualified researcher the richest opportunities. But at the same time, they are the most resistant to the established techniques of mass work and division of labor.
This foresight of Wiener's about the lack of interdisciplinary collaboration in science, and thus of research and development across disciplinary boundaries, has by now been heard, albeit not implemented everywhere in practice. Specialization still dominates our technology, economy, society, and even our activities in nature and the environment, even if exceptions prove the rule. Alongside systems theory, cybernetics draws, as an instrument of practical application, on a strength of its own: its interdisciplinary character. Cybernetic approaches, wherever they are applied with mindfulness, are therefore not limited to specific disciplines. Let us embark on a journey of discovery to cybernetic applications, as presented, among others, by Jäschke et al. (2015) in "Exploring Cybernetics. Cybernetics in interdisciplinary discourse", albeit not as broadly as shown here. For the sake of clarity, the various discipline-like or discipline-related areas of cybernetic application are subsumed under one of four general headings:

• Cybernetic systems in nature,
• Cybernetic systems in technology,
• Cybernetic systems in the economy,
• Cybernetic systems in society.
Applications or models that, for example, link various disciplines within the framework of cybernetic processes leave the originally narrow corset of technically fixed control processes with feedback loops and extend it to many social areas. From this, a "basic principle of cybernetic operational processes" can be derived, which states:

Maxim: In whatever networked complex environment cybernetic processes are used, with a view to specific or combined disciplinary solutions, their results are only sustainably resilient under the influence of negative feedback.

Actively adopting this perspective strengthens not only mindfulness but also error tolerance. With the term mindfulness, in view of cybernetic networked systems, particular reference should be made to the work of social psychologist Ellen Langer (2014); see also Küppers and Küppers (2016).
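The stabilizing role of negative feedback in the maxim can be made concrete with a minimal numeric sketch. The first-order system and the gain values below are illustrative assumptions, not taken from the text: with negative feedback the state settles at the setpoint, while reversing the sign of the feedback lets the deviation grow without bound.

```python
def run_loop(gain, steps=40, setpoint=1.0, start=0.0):
    """First-order discrete loop: x <- x + gain * (setpoint - x).
    A negative-feedback gain in (0, 2) pulls the state toward the
    setpoint; flipping the sign turns the correction into positive
    feedback, which amplifies the deviation instead of removing it."""
    x = start
    history = [x]
    for _ in range(steps):
        x = x + gain * (setpoint - x)
        history.append(x)
    return history

negative = run_loop(gain=0.3)    # converges to the setpoint
positive = run_loop(gain=-0.3)   # deviation grows step by step
print(round(negative[-1], 4))
```

The same structural difference, correction toward a target value versus self-reinforcing drift, recurs in all the application examples that follow.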
7.1 Cybernetic Systems in Nature It cannot be repeated often enough that nature is the "mother" of all cybernetic systems. Orienting oneself on its developmental principles, without blindly copying them, and working out advantageous solutions for technology, economy, and society is nothing other than practicing bionics (Küppers 2015). It is due to the fundamental strategy of evolution that, over the course of billions of years, biological systems have developed that are permeated with cybernetic control processes for the preservation of the species. The automatic pupil control of our visual system, the regulation of blood circulation by heart activity, the regulation of blood sugar, and our automatic regulation of breathing and body temperature are examples of this (Röhler 1974; Hassenstein 1967). In addition, there are many differentiated control processes for the self-cleaning of chemical-physical processes in biotic-abiotic environments, such as the self-purification of flowing waters. The definition of ecology given by the zoologist, philosopher, and physician Ernst Haeckel (1834–1919), "the entire science of the relations of the organism to the surrounding external world" (Begon et al. 1998), was already cybernetic or systems-theoretical in the mid-19th century and is now consistently concretized by the independent field of systems ecology, which encompasses systems and mathematical models in ecology (Odum 1999, p. 318; see also ecological system models: Ropohl 2012, pp. 165–180). One of the most prominent representatives of biological cybernetics or biocybernetics, who saw the world as a networked system and therefore consistently raised networked thinking and acting to a maxim, was the biochemist and cybernetician Frederic Vester (see Sect. 4.14). In a series of publications, he pointed out the connections in our environment that remain invisible to many people and which, because they are not recognized or are ignored, are the causes of many disasters.
The world of robots, under the increasing influence of digital data processing, also looks consistently at human neural processes and tries to understand them, with the goal of transferring their functions to artificial machines. The optimization of human-machine communication is one of many intensively pursued research and development areas within the framework of cybernetics. From a German perspective, the Max Planck Institute for Biological Cybernetics in Tübingen and the University of Bielefeld, whose
Faculty of Biology operates the Department of Biological Cybernetics, should be mentioned. An international publication on the subject of cybernetics is the journal Biological Cybernetics – Advances in Computational Neuroscience, published by Springer. The following three examples, like many others in this thematic context, such as the predator-prey model already shown in Chap. 5, can represent only a very small spectrum of the cybernetic processes of nature and its environment. The following principle applies:

Principle: Dynamic nature is too complex for us ever to understand it fully.
7.1.1 Blood Sugar Control Loop The concentration of glucose in the blood is regulated by various hormones (mainly insulin, growth hormones, epinephrine, and cortisone). These hormones influence the various ways the organism can produce glucose from storage substances or, conversely, break down excess glucose and store it in the form of glycogen or fat. Conversely, the glucose concentration—but not it alone—influences the concentration of these hormones in the blood. If one combines all hormones in their effect on glucose regulation into a fictitious hormone H, one can distinguish two inputs, namely the supply rate of glucose and hormone in the blood, and two output variables, the concentrations of glucose and hormone in the blood. (Röhler 1974, p. 119)
In Fig. 7.1, the relationship between glucose and hormones is shown in a highly simplified manner. The two main transmission paths of glucose and hormone can be seen. The symbols denote: Xg, the supply rate of glucose; Yg, the concentration of glucose in the blood; Xh, the supply rate of the fictitious hormone H; Yh, the concentration of hormone H in the blood. The associated system of equations is:
Fig. 7.1 Simple block diagram of the blood sugar regulation model with feedback. (Source: According to Röhler 1974, pp. 119–123, supplemented by the author)
Yg = Hgg Xg − Hgg Kgh Yh   (7.1)

Yh = Hhh Khg Yg + Hhh Xh   (7.2)
Here, Hgg and Hhh are the transfer functions of the main transmission paths and Kgh, Khg are the transfer functions of the coupling paths. As can be seen in Fig. 7.1, an increase in hormone blood concentration contributes to an inhibition of glucose intake or an increase in the degradation rate. The glucose-hormone model assumes the simplest reaction kinetics, namely that a temporal change in glucose concentration is proportional to the concentration of the hormone and vice versa (cf. ibid., pp. 119–121). However, the reality of glucose regulation in the blood is much more complex. If the human being is understood as an open psychosomatic control system in its entirety, additional control loops complement the basic regulation shown in Fig. 7.1. These are:

• Control process through genetic disposition,
• Control process through mobility (movement),
• Control process through nutrition,
• Control process through constitution,
• Control process through medication,
• Control process through environmental influences and other regulatory functions.
Only an approximate capture of the entirety of all cybernetic control loops of a person, as a biologically open system toward the environment, permits a sustainably reliable statement about the state of individual glucose regulation in the blood.
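The feedback structure of Eqs. 7.1 and 7.2 can be made tangible with a minimal numerical sketch. The following Python fragment is not Röhler's model: the linear kinetics and all rate constants are invented purely to illustrate the negative feedback (rising glucose stimulates hormone release, the hormone accelerates glucose removal, and both deviations decay back to the baseline).

```python
def simulate_glucose(g0=2.0, steps=5000, dt=0.01):
    """Euler integration of a linear glucose-hormone feedback loop.

    g and h are deviations of glucose and of the fictitious hormone H
    from their resting values. All rate constants are hypothetical.
    """
    k_g, k_gh, k_hg, k_h = 0.2, 0.6, 0.5, 0.3
    g, h = g0, 0.0
    for _ in range(steps):
        dg = -k_g * g - k_gh * h   # hormone speeds up glucose removal
        dh = k_hg * g - k_h * h    # glucose stimulates hormone release
        g, h = g + dt * dg, h + dt * dh
    return g, h

# A glucose load of 2 units decays back toward the baseline (0, 0).
g_end, h_end = simulate_glucose()
```

Because both coupling terms carry opposite signs around the loop, the system is a stable spiral: any initial deviation decays, and the feedback merely speeds up and smooths the return to the resting state.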
7.1.2 Pupil Control Loop

The pupil of the (human) eye changes with the luminance of the visual field: the pupil becomes smaller when the luminance increases, and vice versa (pupil reflex). Since this reduction causes a decrease in retinal illuminance, the pupil reflex compensates for fluctuations in retinal illuminance to a certain degree, resulting in the stabilization of retinal illuminance at a setpoint.1 The photoreceptors of the retina form the sensors of the system, the involved nerve centers take care of signal processing, thus acting as controllers, and the pupil muscles correspond to the actuator or, more generally, the actuating element. These greatly simplified ideas underlie […] [the block diagram in Fig. 7.2]. (Röhler 1974, p. 37)
1 It is difficult to impossible to specify a true setpoint as a localizable structure in biological systems, because usually other, interconnected control loops—meshed control systems—also influence one or the other setpoint of biological systems.
Fig. 7.2 Path of light into the eye—upper graphic—with a simple block diagram of the pupil control loop—lower graphic. (Source: Adapted from Röhler 1974, pp. 37–40, 64, supplemented by the author)

The average retinal illuminance B, i.e., the stabilized value, is measured by the photoreceptors and leads to the generation of an electrical potential S, which is compared with a hypothetical set value So. Differences between the actual value and the set value of the potential are reported as an afferent [input signal, d. A.] error signal ∆S to specific nerve centers, which in turn send an efferent [output signal, d. A.] signal to activate the pupillary musculature. Subsequently, the pupillary area A changes by an amount ∆A, which leads to a [g as geometrically conditioned, d. A.] change ∆Bg in retinal illuminance B:

∆Bg = V E ∆A   (7.3)
E here represents the average corneal illuminance, which corresponds to the undisturbed state, and V is a factor that takes into account the geometry of the illumination and the unit when converting the corneal illuminance to the retinal illuminance. At this point, the disturbance signal enters the control loop by causing a change ∆E in the corneal illuminance, which results in a change

∆Bs = c A ∆E   (7.4)
of the retinal illuminance. A here denotes the pupillary area corresponding to the undisturbed state, and c again represents a constant factor that takes into account the geometry and the unit. The total change in retinal illuminance

∆B = V E ∆A + c A ∆E   (7.5)
is integrated over time, forming the mean effective retinal illuminance B, which in turn determines the potential S. […] There are couplings between components of the control loop and the branch for the interference signal, indicated by dashed lines in […] [Fig. 7.2, author's note]. A change ∆A in pupil diameter also causes a change of the area A in the block cA, so that the error signal is not ∆Bs, but

∆Bs* = c A ∆E + c ∆A ∆E   (7.6)

[for very small ∆A compared to A, the difference between ∆Bs* and ∆Bs is negligible, author's note] (ibid., pp. 37–38).
Analogous to the classical control loop, the following are formed: ∆S is the “control deviation”, the functions of opening and closing the pupil are caused by two different muscles, a pair of antagonists. The “actuator” of the pupil control loop plus associated control is then assigned to the pupil block. This acts on the corneal illuminance E, resulting in the “controlled variable” ∆Bg of the geometrically induced change in retinal illuminance B, which in turn is combined with the “disturbance variable” ∆Bs and processed as ∆B in the “control path” (integral block). The “controlled variable” B is compared with the “setpoint variable” So via the photosensor as the “actual value-controlled variable” S (cf. Röhler 1974, p. 39). With the two biological control loops of blood sugar regulation and pupil change, two classical cybernetic subsystems in humans have been described, which—as already mentioned—directly determine life-sustaining functions in humans and indirectly through their environment in an extensive network of further cybernetic regulations. In contrast, the third example of biological cybernetic control intervenes in the networked environment of a plant-animal population, which—mutually influencing each other to ensure their continued existence—competes for survival.
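The pupil loop just described can be caricatured in a few lines. This is a sketch under invented assumptions (unit constants, a simple proportional "nerve centre", hard limits on the pupil area), not Röhler's transfer functions: retinal illuminance B is modeled as the product of pupil area A and corneal illuminance E, and the controller adjusts A against the deviation ∆S = B_set − B.

```python
def pupil_loop(E_values, B_set=1.0, gain=0.3):
    """Discrete negative-feedback sketch of the pupil reflex.

    E_values: corneal illuminance per time step (the disturbance path).
    Returns the retinal illuminance B per step; constants are invented.
    """
    A = 1.0                                        # pupil area, arbitrary units
    trace = []
    for E in E_values:
        B = A * E                                  # "measured" retinal illuminance
        error = B_set - B                          # afferent error signal ΔS
        A = min(2.0, max(0.1, A + gain * error))   # pupil muscles act on A
        trace.append(B)
    return trace

# Light level doubles at step 5; B jumps, then is regulated back to B_set.
trace = pupil_loop([1.0] * 5 + [2.0] * 60)
```

With gain 0.3 the deviation shrinks by a factor of 0.4 per step; a gain above 1.0 would make this discrete loop overshoot and oscillate, which hints at why meshed biological loops keep their loop gains moderate.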
7.1.3 Cybernetic Model in the Forest Ecosystem

The state of ecosystems is determined by the complex interconnection between their components, which have evolved over the course of evolutionary development. Usually, there are diverse dependency relationships between organisms through nutrient cycles, food chains, and food webs: predator-prey systems [as in Fig. 5.6 and 5.7, ed.], symbioses, pollination, seed dispersal, and many other processes. The interacting dynamic processes control and regulate each other, so that a dynamic equilibrium typical of the respective ecosystem emerges. Interventions that particularly affect or promote individual components can therefore lead to the system tipping over into another state. […]
Fig. 7.3 System-Dynamic (SD) simulation model of a plant-animal ecosystem. (Source: After Bossel 2004, p. 153, supplemented by the author) The model [in Fig. 7.3, ed.] describes […] the following relationships: A region with a maximum biomass capacity K consists partly of forest x, partly of grassland vegetation (K-x). Birds need the forest for nesting sites and feed on insects. Insects need the forest as a food source and the grassland for the growth of their larvae. As the forest is increasingly destroyed, conditions worsen for the birds and improve for the insects. At a certain stage, the insects get out of control and destroy the remaining forest. (Bossel 2004, p. 152)
Without going into detail on the parameters, initial states, and dynamic equations (ibid., pp. 154–157), this type of simulation can balance the cybernetic feedback processes of positive and negative effects against each other so that a steady state is established for all organisms involved, ultimately leading to the preservation of the forest and its inhabitants.
The SD model is based on a real process, and the simulation results confirmed the researchers’ initial suspicions: If the forest share is large enough, birds and insects can maintain small populations. If the forest share decreases, conditions for the insects improve significantly, leading to an explosive mass reproduction of insects that either completely destroy the forest or temporarily and partially decimate it. The forest losses due to deforestation decisively determine the further development and the possibility of collapse. (ibid., p. 156)
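The qualitative behaviour Bossel describes can be reproduced with a toy stock-and-flow model. The following sketch is not Bossel's calibrated SD model: it keeps only the causal skeleton (logistic forest growth, insects feeding on the forest, birds feeding on insects, deforestation as an external intervention), and every coefficient is invented for illustration.

```python
def forest_model(logging_rate=0.0, years=600, dt=0.05):
    """Toy forest/insect/bird stock-and-flow model (coefficients invented).

    Returns the three biomasses after the simulated period.
    """
    F, I, B = 0.8, 0.05, 0.05                    # forest, insect, bird biomass
    for _ in range(int(years / dt)):
        dF = 0.3 * F * (1 - F) - 0.4 * I * F - logging_rate * F
        dI = 0.6 * I * F - 0.8 * B * I - 0.12 * I
        dB = 0.5 * B * I - 0.1 * B
        F = max(0.0, F + dt * dF)                # stocks cannot go negative
        I = max(0.0, I + dt * dI)
        B = max(0.0, B + dt * dB)
    return F, I, B

F0, I0, B0 = forest_model()                    # intact forest: coexistence
F1, I1, B1 = forest_model(logging_rate=0.25)   # heavy logging: collapse
```

Without logging, the negative feedback chain settles into a steady state in which forest, insects, and birds coexist; once the logging rate exceeds the forest's net growth reserve, a transient insect boom and the subsequent collapse of all three stocks reproduce the tipping behaviour described in the text.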
7.2 Cybernetic Systems in Technology

First came nature, then the machine. Whether Norbert Wiener chose the subtitle "Control and Communication in the Animal and the Machine" for his seminal book on "Cybernetics" (Wiener 1963) accordingly is not known. The technology of control, regulation, and signal processing, as well as its designs and implementations, has always involved various types of negative feedback—since the historical centrifugal governor with feedback on Watt's steam engine in the 18th century (see Fig. 3.8)—which is the characteristic core element of a cybernetic system and contributes in this special way to system stability. The following four practical examples of feedback-controlled regulation show—representing countless other examples—various applications of everyday technical control processes with negative feedback, which we find helpful, often without knowing the mechanism in detail (see Mann et al. 2009, pp. 30–35).
7.2.1 Control of Image Sharpness of a Camera

The camera in [Fig. 7.4, A, author's note] […] is supposed to automatically focus on a selected subject. The task variable xA is thus the image sharpness, which is very complex to capture with technical means. It is easier to capture the distance d to the subject […] and a lens position xL dependent on it. Thus, the targeted influence on image sharpness is achieved by means of control and regulation: xL controls the task variable xA (image sharpness) via the "Optics" block [Fig. 7.4, B, author's note] […]. The lens position xL is the controlled variable in the control loop of [Fig. 7.4, C, author's note] […], which counteracts any deviation from the set position xL,S. xL,S is obtained by conversion from the subject distance d. d is determined by measuring the travel time ∆t = tReceive − tSend of reflected infrared or ultrasonic pulses. (ibid., p. 30)
UR and UM are the voltage values of the controller and of the lens motor, respectively.
Fig. 7.4 Automatic image sharpness adjustment, A: Recording situation, B: Block diagram image sharpness control, C: Block diagram lens position control. (Source: Mann et al. 2009, p. 31, supplemented by the author)
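The measuring chain of Fig. 7.4 can be sketched numerically: distance from the pulse round-trip time, conversion into a lens setpoint, and a proportional position loop that pulls the lens to it. The speed of sound and the thin-lens conversion are standard physics; the loop gain and the simplified motor model are invented for illustration, not the design of Mann et al.

```python
SPEED_OF_SOUND = 343.0  # m/s, for an ultrasonic rangefinder

def subject_distance(delta_t):
    """Distance d from the round-trip time Δt of a reflected pulse."""
    return SPEED_OF_SOUND * delta_t / 2.0

def lens_setpoint(d, f=0.05):
    """Convert subject distance d into the lens (image) position x_L,S
    via the thin-lens equation 1/f = 1/d + 1/x (focal length f in m)."""
    return 1.0 / (1.0 / f - 1.0 / d)

def settle_lens(x_set, x=0.0, gain=0.4, steps=50):
    """Proportional lens-position loop: each motor step is proportional
    to the remaining deviation (negative feedback); model is invented."""
    for _ in range(steps):
        x += gain * (x_set - x)   # U_R drives the DC motor toward x_L,S
    return x

d = subject_distance(0.01)        # 10 ms round trip corresponds to 1.715 m
x_final = settle_lens(lens_setpoint(d))
```

Each pass of the loop reduces the remaining position error by the factor (1 − gain), so the lens converges geometrically on the setpoint computed from the measured distance.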
7.2.2 Position Control of the Read/Write Head in a Computer Hard Disk Drive

A read and write head […] [Fig. 7.5, A, trans. A.] must be positioned on a data track […] of about 1 μm width in less than 10 ms before data can be read or written on the rotating hard disk. The positioning (head position x as task variable) is achieved by pivoting the arm with a rotary voice-coil motor (motor voltage uM as manipulated variable). Disturbances z, such as aerodynamic forces or vibrations acting on the block "Voice-Coil Motor and Arm" as the controlled system [in Fig. 7.5, B, trans. A.] […], affect the head position x. The current position x (actual value) is determined by the read/write head using position data embedded in the data track. These appear as digital numerical values xk at discrete time points tk, k = 1, 2, 3 …. These values can be directly processed in a digitally implemented comparator and control element (e.g., microcomputer), with the target position also being specified as a digital numerical value xk,S. The digital controller output variable yR,k is converted into the analog electrical voltage uR (DAC: digital-to-analog converter) […], which, smoothed and amplified as manipulated variable y = uM, drives the voice-coil motor. (ibid., pp. 31–33)
Fig. 7.5 Position control, A: Device structure, B: Block diagram position control. (Source: Mann et al. 2009, p. 32, modified by trans. A.); Hard disk photo: http://www.sammt.net/pr-informatik/ magnetisch/festplatte_hard_disc.html. Accessed on 08.02.2018
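The sampled position loop of Fig. 7.5 can be sketched as a discrete PD controller acting on a crude arm model. The gains, the lumped "voltage to acceleration" path, and the quantization step are all invented for illustration; they are not the parameters of a real drive.

```python
def seek(x_target, steps=10_000, dt=1e-6):
    """Digital position loop for the head arm (all parameters invented).

    x, v: head position (µm) and velocity. The position is quantized
    before the comparison, mimicking the digital values x_k read from
    the servo data embedded in the track.
    """
    x, v = 0.0, 0.0
    kp, kd = 4.0e6, 4.0e3          # hypothetical PD gains
    for _ in range(steps):
        xk = round(x, 3)           # digitized actual value (0.001 µm steps)
        e = x_target - xk          # digital comparator
        u = kp * e - kd * v        # digital control element (PD law)
        v += u * dt                # voltage -> torque -> acceleration, lumped
        x += v * dt
    return x

# Move the head 50 µm onto the target track within 10 ms of simulated time.
x_end = seek(50.0)
```

The chosen gains make the loop approximately critically damped, so the head approaches the track without ringing; the derivative term plays the role of the smoothing that the analog path behind the DAC provides in the real device.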
7.2.3 Control of Power Steering in a Motor Vehicle

Power steering is intended to reduce the driver's effort when steering. The task variable is the deflection xR of the wheels [Fig. 7.6, A, d. A.] […]. Disturbances are, for example, external forces acting on the wheels. The steering deflection xR is achieved by shifting the tie rod (Sp) by xA via a lever connection [last block in Fig. 7.6, B, d. A.] […]. Without power steering, the driver must move the tie rod directly with the steering wheel (Lr) via a gearbox (Gt). With power steering, he only moves the control piston (Sk) of a hydraulic drive [see Fig. 7.6, B, d. A.] […], which provides the actuating force by the pump (P) driving a fluid under pressure into the working cylinder (Az) (fluid flow q). The fixed connection of the working piston rod Ks with the control cylinder Sz results in a follow-up control. If, for example, the control pistons Sk are moved from the position shown to the right, the working piston (and thus the wheel deflection xR) follows in the same direction. In doing so, the working piston Ak pulls the control cylinder Sz along with Ks, so that Ak comes to a standstill exactly when the two flexible lines V1 and V2 are covered again by the two control pistons Sk and thus q = 0 [is, d. A.]. The setpoint/actual value comparison takes place between the paths of the control piston (reference variable) and the control cylinder (controlled variable)*. Supply disturbances include, above all, fluctuations in the supply voltage of the pump. (ibid., pp. 33–34)
* Reference and controlled variables are mistakenly reversed in the original text (Mann et al. 2009, p. 34).
Fig. 7.6 Power steering, A: Device structure, B: Block diagram power steering. (Source: Mann et al. 2009, p. 33, modified by d. A.)
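The follow-up behaviour of the hydraulic drive can be condensed into a single integrator: the flow q is proportional to how far the control piston is displaced relative to the control cylinder, and the working piston integrates this flow until the control lines are covered again (q = 0). The constants are invented; this is a sketch of the principle, not of a real steering gear.

```python
def follow_up(x_sk, x=0.0, k_flow=5.0, dt=0.01, steps=400):
    """Hydraulic follow-up control: the working-piston position x
    chases the control-piston position x_sk (constants invented)."""
    trace = []
    for _ in range(steps):
        q = k_flow * (x_sk - x)   # valve overlap determines the flow
        x += q * dt               # the working piston integrates q
        trace.append(x)
    return trace

# The driver shifts the control piston by 2 cm; the tie rod follows.
positions = follow_up(x_sk=0.02)
```

Because the feedback is purely mechanical (the piston rod Ks drags the control cylinder Sz along), the loop needs no electrical sensor: x converges on x_sk exponentially, and the flow dies out exactly when the lines V1 and V2 are covered.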
7.2.4 Control of Room and Heating Water Temperature

[In Fig. 7.7, A, left] […] a room temperature ϑ is to be specifically influenced by heat supply via a radiator (Hk). Fluctuations in the outside temperature ϑa are particularly disruptive. The heat is supplied by heating water (Hw), which arrives at the thermostatic valve (TV, details in [Fig. 7.7, A, right] […]) with the pressure pV and the temperature ϑV. The thermostatic valve is a measuring and control device: The desired set temperature ϑS in the room is set with the setpoint screw (S). The actual value is detected by the expansion xB of the bellows (Ba) filled with a liquid (Fl). The setpoint/actual value comparison is made between the two paths xS (position of the setpoint screw) and xB (actual value). The control difference e = xS − xB adjusts the inflow valve position sZ for the radiator (without auxiliary energy). The supply disturbance variables pV and ϑV should be as constant as possible. A constant speed of the flow pump P is sufficient for pV. ϑV can nevertheless drop more significantly depending on the room heat demand. Therefore, ϑV is kept at a setpoint ϑV,S in the boiler (K) with another control loop [Fig. 7.7, A and C, d. A.] […]. The setpoint/actual value comparison is carried out with the electrical voltages uϑ (from sensor Se1) and uϑ,S. If the control difference e = uϑ,S − uϑ is positive, the control device (Rg) switches on a burner (Br) and switches it off again when e is negative, etc. (so-called two-point controller […]). In this process, ϑV oscillates slightly around the setpoint, but this has little effect on the room temperature. To save heating energy, the setpoint ϑV,S or uϑ,S is lowered with a control device when the outside temperature ϑa (sensor Se2) rises (and vice versa). (ibid., p. 35)
Fig. 7.7 Room and heating water temperature control, A: System structure with thermostatic valve for room temperature control, B: Block diagram of room temperature control, C: Block diagram of boiler water temperature control. (Source: Mann et al. 2009, p. 34, modified by the author)
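The two-point controller of the boiler circuit is easy to imitate: the burner is switched fully on below the setpoint band and fully off above it, so the water temperature ϑV saws slightly around ϑV,S, exactly as the text describes. The thermal constants, the switching band, and the time step below are invented for illustration.

```python
def boiler(setpoint=60.0, ambient=20.0, hours=4.0, dt=0.001):
    """Two-point (bang-bang) control of the boiler water temperature.

    Heating power and loss coefficient are hypothetical; a small
    hysteresis band keeps the burner from chattering every step.
    """
    temp, burner = ambient, False
    heat_rate, loss_rate = 40.0, 0.5      # °C/h and 1/h, invented
    trace = []
    for _ in range(int(hours / dt)):
        if temp < setpoint - 0.25:        # e = u_set - u positive: burner on
            burner = True
        elif temp > setpoint + 0.25:      # e negative: burner off
            burner = False
        dT = (heat_rate if burner else 0.0) - loss_rate * (temp - ambient)
        temp += dT * dt
        trace.append(temp)
    return trace

temps = boiler()  # temperature climbs to 60 °C and then oscillates around it
```

The sawtooth ripple stays within the hysteresis band plus one integration step, which is why the text can note that the oscillation of ϑV "has little effect on the room temperature".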
7.3 Cybernetic Systems in the Economy

Economy generally refers to the sum of all producing and service-providing companies in a country that manufacture, distribute, and offer products and services. The—not uncontroversial (!)—gross domestic product (GDP) of all goods and services is used as an economic indicator for the strength or weakness of an economy, representing the total economic value of a country. (The calculated GDP is controversial, among other reasons, because it also includes costs for products and services that accompany the destruction of social structures and their reconstruction, such as the costs of assistance in traffic accidents and natural disasters, which are counterproductive to a real economic increase in value in society.) Companies that produce goods or provide services are always socio-technical systems, i.e., a group of people working together with technical apparatus to achieve a specific result.
The peculiarity of this collaboration is that machines are generally mathematically calculable in their functions and movements, while humans can only be calculated in probabilities, with a considerable degree of uncertainty. When cybernetic approaches are tested in a business environment with the aim of achieving a specific result, mathematically exact technical actions must therefore be combined with probabilistic human actions. To an increasing extent, this type of collaboration between humans and machines will play a decisive role when mobile machines in the form of humanoids collaborate or cooperate with humans. Here too, negative feedback in the human-machine system has a not insignificant function for productivity and at the same time for human safety. Before we approach corporate cybernetics, let us consider the overarching field of economic cybernetics and its significance. "Economic Cybernetics and System Analysis" has been dealt with since 1970 in a series of scientific publications at periodic intervals and from different perspectives, with titles such as "Cybernetics and Transformation" (2017), "Digital Worlds" (2016), "Corporate Cybernetics 2020" (2009), or "Cybernetic Forecast Models in Regional Planning" (1970). However, it is also evident that the practical implementation of cybernetically meaningful solutions in economic sectors, and thus in interconnected social or societal areas, can hardly keep pace with cybernetic theoretical approaches. The great lack of transfer of cybernetic theory into practice is evident especially in the economic and entrepreneurial environment, where strictly hierarchical organizational structures have shaped the image of economic action for decades up to the present, although cybernetic processes and order structures would correspond much more closely to the dynamics of development in the real environment.
In the field of “Management Support and Business Informatics” at the University of Osnabrück, under the heading Economic Cybernetics, the following statement can be read, which supports the aforementioned remark on dynamics (https://www.wiwi.unirueck.de/fachgebiete_und_institute/management_support_und_wirtschaftsinformatik_ prof_rieger/profil/wirtschaftskybernetik.html. Accessed on 10.02.2018): Entrepreneurial action has not only been seen as a permanent, dynamic interplay of sociotechnical systems since the increasing globalization of (world) economy and society(ies). The timely recognition of critical developments of exogenous influencing factors as well as the (consequential) effects of one’s own decisions requires, in view of the complexity of diversely interlinked control circuits with often exponentially shaped effect delays, a computer-based, model-like support of strategic planning processes. The research focus deals with the application of the “System Dynamics” method by J.W. Forrester [see Sect. 4.13, author’s note], which became known through the studies of the “Club of Rome”, to business management issues in the macroeconomic context. Examples range from internal models of personnel development to product launches (life cycles) or changes in manufacturing technology to the consequential effects of government framework conditions (labor costs, infrastructure, etc.). The most recent application example is model-based analyses of the system effects of formula-based resource allocation or tuition fees in higher education. In addition to concrete model applications, the main benefit of activities in the field of economic cybernetics is seen in the consistent training of intuitive recognition and understanding of complex systems.
The quoted text shows that the economic and entrepreneurial framework for cybernetic approaches is in flux. The development of corporate cybernetics, as "a variant of economic and social cybernetics, which forms the concrete application of the cybernetic laws of nature to any kind of human-created organizations and institutions […]" (https://de.wikipedia.org/wiki/Unternehmenskybernetik. Accessed on 10.02.2018), began in the late 1980s. The interdisciplinary character of cybernetic approaches and the associated networking and interrelationships in companies led Giuseppe Strina (2005, cited after: https://de.wikipedia.org/wiki/Unternehmenskybernetik. Accessed on 10.02.2018) to the following definition of corporate cybernetics:
Currently, the Institute for Corporate Cybernetics—IfU, which emerged as an affiliated institute from RWTH Aachen—presents the following definition of corporate cybernetics (http://www.ifu.rwth-aachen.de/forschung.html. Accessed on 10.02.2018): Corporate cybernetics is an application-oriented and integrative (transdisciplinary) science. It considers companies and organizations as open, socio-technical, economic, and diversely networked systems. Corporate cybernetics describes and explains complex phenomena in companies using this holistic, systemic approach. With approaches from engineering, economics, and social sciences, holistic models and solutions for these phenomena are developed.
Economic and corporate cybernetics are complemented by management cybernetics, which was founded in the 1950s by Stafford Beer (see Sect. 4.9). With his so-called Viable System Model (VSM), which can also be translated as a model of viable systems, Beer created a reference model for the analysis and structure of management in organizations. Systems thinking deals, among other things, with the interaction of system elements and is a crucial feature in the development of Beer's VSM. Sources for the VSM include Beer (1970, 1981, 1994, 1995), Lambertz (2016), Espinoza and Walker (2013), Espinoza et al. (2008), and Espejo and Reyes (2011). The fundamental importance of Stafford Beer's Viable System Model is also evident in today's cybernetic St. Gallen Management Model (SGMM) (see Rüegg-Stürm and Grand 2015). The VSM subsystems 3–5 mentioned below are recognizable—albeit not completely in all details of the VSM—in the SGMM as operational, strategic, and normative management. It becomes more manageable, but not trivial. Beer sees in his VSM the strengthening of an organization's viability, in the sense of networked evolution. The drive for profit maximization is of lower rank in comparison.
The regulation of entire organizations in their environment replaces centralized control or organizational leadership in hierarchies. The role-model function of living, self-organizing systems for a VSM becomes clear. The universally applicable VSM consists of five subsystems of a viable organizational system, which are briefly named here, referring to the associated sources for in-depth information (Beer 1995; https://de.wikipedia.org/wiki/Viable_System_Model. Accessed on 02.08.2017):

System 1: Production, the operational units (value-creating activities); these units must be viable in themselves.
System 2: Coordination (of the value-creating System 1), the place of self-organization of the System 1 units among themselves.
System 3: Optimization (resource utilization in the here and now).
System 3*: Punctual, supplementary information gathering on the state of the operational systems (audit).
System 4: Future analysis and planning (resource planning for there and then), the world of options. It deals with the future and the environment of the overall system […].
System 5: Top decision-making unit (basic decisions and interaction of System 4 with System 3); if Systems 3 and 4 cannot agree on a common course, System 5 makes the final decision.
Beer noted (1990/original 1985, p. 128):

Maxim The purpose of a system is what it does. And what the viable system does is done by System One.

Fig. 7.8 shows, following Lambertz (2016, p. 137), the overall representation of Beer's VSM with two operational subsystem units (System 1) in a technologized structure and linkage. With this introductory review and outlook on the increasing digitalization of the economy and of companies, which are permeated by cybernetic processes, three examples of concrete cybernetic applications follow.
7.3.1 A Cybernetic Economic Model of Procurement-Induced Disturbances

Uncertainty, risk, risk management, and cybernetics are the key terms that emerge in the development of the cybernetic model (Printz et al. 2015). Various risk assessment techniques are analyzed for their suitability for use in management models, and from their
Fig. 7.8 Viable System Model according to Beer. (Source: Sketch from Lambertz 2016, p. 137)
results, the cybernetic model of procurement-induced disturbances is designed. The model results from the not unexpected realization that with globalization and the dynamics of markets, their complexity and uncertainty in economic systems also increase. The example discussed here from the procurement department of a company is just one of many. Printz et al. (ibid., p. 238) write: This department is confronted with the challenge of assessing uncertainties due to changing geographical supplier locations and operational risks in the form of internal and external disturbances. In particular, supply chains face the challenge of assessing and capturing complex, interrelated procurement risks. This presents both opportunities (e.g., maintaining delivery capability despite a disruption) and risks for decision-makers […]. In this context, there is a need for a risk analysis of the procurement process to support management decision-making. This risk analysis enables a simulation of procurement-induced disturbances. Cybernetics […] as a meta-science offers a suitable solution approach.
According to Printz et al., existing simulation models neglect the practical challenge they identify with the availability of valid information, resource availability, and simple/application-oriented representation of complex evaluation objects. They continue (ibid., p. 251): This lack of modeling prevents the representation and evaluation of the effects of such risk management measures. Consequently, the described models serve to identify potential approaches to risk minimization but not their verification: In addition to neglecting risk treatment, simplifications are also made in the area of risk impacts. In the […] models, it is assumed that errors in production or non-produced parts directly affect sales revenues. Possible buffers due to inventory levels or delayed shortage points are not taken into account […]. These criticisms of the existing models require an adaptation of existing models.
Using a combination of qualitative and quantitative methods, a cybernetic simulation model for risk capture was developed, with the following four prerequisites for risk management (ibid., p. 254):

1. Identification of uncertainties,
2. Description of the impacts,
3. Model building,
4. Identification of options.
Fig. 7.9 graphically shows the block structure of the cybernetic model. There are five model blocks that determine the functional sequence in the cybernetic model and pursue the goal of implementing a continuous improvement process with a permanent risk management of procurement risks (ibid., p. 255):

• Block a: Creation of a System Dynamics model,
• Block b: Statistical evaluation of the database to determine correlations and interactions,
• Block c: Simulation of risks using the System Dynamics model,
• Block d: Transfer of results to a database,
• Block e: Evaluation of simulation results.
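The loop of blocks a to e can be sketched as one Monte Carlo cycle. Everything concrete here (the delay distributions, the two-stage supply chain, the lateness threshold) is invented for illustration; the published model is a System Dynamics model fed by FMEA/ETA data, not this toy.

```python
import random

def risk_cycle(delays_db, runs=2000, threshold=5.0, seed=1):
    """One pass through blocks a-e of Fig. 7.9 (all data invented).

    (a/b) evaluate the database statistically, (c) simulate procurement
    delays, (d) write the results back, (e) evaluate the lateness risk.
    """
    rng = random.Random(seed)
    mean_delay = sum(delays_db) / len(delays_db)           # block b
    results = []
    for _ in range(runs):                                  # block c
        supplier = rng.expovariate(1.0 / mean_delay)       # supplier risk
        transport = rng.uniform(0.0, 2.0)                  # transport risk
        results.append(supplier + transport)
    delays_db.extend(results)                              # block d
    return sum(1 for r in results if r > threshold) / runs # block e

database = [1.5, 2.0, 3.0, 2.5, 1.0]   # historic delays in days (invented)
p_late = risk_cycle(database)          # probability of a critical delay
```

Each pass enlarges the database, so the next cycle's statistical evaluation (block b) starts from the updated evidence; this closed loop is the continuous improvement process the authors aim at.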
7 Cybernetic Systems in Practice
Fig. 7.9 A cybernetic model for the simulative quantification of risk consequences in complex process chains. (Source: Printz et al. 2015, p. 254, modified by the author)
As a starting point (a), a qualitative System Dynamics Model is created using expert estimates (1) and with the support of a risk database (3). The exemplary modeling of negatively acting risks is shown in [Fig. 7.10] […]. The risk classes and their detailed characteristics of procurement are used as variables to be modeled [see Tab. 7.1, author’s note] […], with the mentioned risks being composed of various sources [see Printz et al. 2015, p. 244, author’s note] […]. Any change in individual risk causes a potential delay in the delivery date. In the process of risk analysis, company-specific data is examined by experts. By deriving information, risk identification is enabled and transferred into a model. This qualitative model fulfills both the demand for low administrative effort in creation and the basic prerequisite for a subsequent simulation.
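The qualitative System Dynamics model described here is not published as executable code. The following minimal stock-flow sketch in Python only illustrates the kind of mechanism meant: an inventory stock whose delivery inflow is attenuated by an aggregated procurement risk, so that unmet demand accumulates as a proxy for delivery delay. All names and parameter values are invented for illustration and are not taken from Printz et al. (2015).

```python
def simulate(steps=60, dt=1.0, risk=0.0):
    """Euler-integrate one stock ('inventory') whose delivery inflow is
    attenuated by an aggregated procurement risk (0..1); unmet demand is
    accumulated as a simple proxy for delivery delay."""
    inventory = 100.0        # stock: parts on hand
    demand_rate = 10.0       # flow out: constant customer demand per step
    order_rate = 10.0        # flow in: nominal deliveries per step
    shortfall = 0.0          # accumulated unmet demand
    for _ in range(steps):
        delivery = order_rate * (1.0 - risk)           # risk reduces deliveries
        served = min(demand_rate, inventory / dt + delivery)
        shortfall += (demand_rate - served) * dt
        inventory += (delivery - served) * dt
    return inventory, shortfall

if __name__ == "__main__":
    print(simulate(risk=0.0))   # undisturbed: demand is fully served
    print(simulate(risk=0.3))   # disturbed: inventory drains, shortfall grows
```

Even this toy version reproduces the qualitative statement of the source: any increase in an individual risk translates into a potential delay of the delivery date.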
The numerical data in Fig. 7.9 refer to:

1. Expert estimates (implicit)
2. Simulation model (System Dynamics)
3. Risk database (explicit)
4. FMEA (Failure Modes and Effects Analysis)
5. ETA (Event Tree Analysis)
7.3 Cybernetic Systems in the Economy
Fig. 7.10 Example of an aggregated System Dynamics Model for procurement-induced disturbances. (Source: Printz et al. 2015, p. 256, supplemented by the author)
Tab. 7.1 Classification of procurement risks. (Source: Printz et al. 2015, p. 244, modified by the author)

Risk class | Exemplary risks | Countermeasures
Environment | Politics, weather, regulations | Consultation with representatives, insurance, increasing buffer times to derive further specific countermeasures
Delivery | Processing time, quality, transport | Buffer stocks, supplier audits, contractual penalties
Process risks | Production rate, capacity, information delay | Rescheduling of production, use of contractors, increasing buffer time
Demand | Forecast error, delay, customer price | Safety margin, replanning, special release

Information requirements of all risk classes: time of discovery, cause, probability of occurrence, extent of damage, aggregated total risk, alternative strategies, chosen strategy, and the overall effect.
A comparison of the cybernetic model in Fig. 7.9 with a classical cybernetic control loop could lead to the following conclusions:

1. The reference variable w(t) would be comparable to an SD model whose dynamics lead to a minimum of procurement risk.
2. The control deviation e(t) = w(t) – y(t) would be determined from the reference variable and the results of the error and event analyses (Block e) and supplied to the
3. controller (Blocks a and b).
4. The results of the controller are used as manipulated variables u(t) for the
5. controlled system (Block c), which is influenced by various disturbance variables z(t). The simulated results of the controlled system are evaluated as
6. controlled variables in Block e through an error and event analysis and fed back into the control deviation for reconciliation. The cycle and its dynamics then start anew.

In Fig. 7.10, the positive and negative flow variables were unfortunately not differentiated, which would have given the reader better insight and overview. For some stock variables, such as the incalculable (!) "environment," this would admittedly have been difficult with the chosen terms; clearer and more unambiguous terms, beyond the rough hints in the text, would have avoided the problem. Simulation methods thrive on clear, operationally manageable, and as realistic as possible variables, whether stock or flow. The more unambiguous their relationships, the more understandable and comprehensible their processes, and the more promising the expectation of a comprehensible result.
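This control-loop reading can be sketched as a few lines of Python. The proportional controller and all numerical values below are my own illustrative assumptions; Printz et al. do not specify a control law, only the block structure.

```python
def run_loop(w=0.0, y0=10.0, kp=0.5, steps=40, z=lambda t: 0.0):
    """Discrete control loop: reference w(t), controlled variable y(t),
    deviation e(t) = w(t) - y(t), controller output u(t), disturbance z(t)."""
    y = y0
    trajectory = []
    for t in range(steps):
        e = w - y              # control deviation (fed back from Block e)
        u = kp * e             # controller (Blocks a and b)
        y = y + u + z(t)       # controlled system (Block c) plus disturbance
        trajectory.append(y)
    return trajectory

if __name__ == "__main__":
    ys = run_loop()            # the risk level y is driven toward target w = 0
    print(ys[0], ys[-1])
```

With a constant disturbance z, the loop settles on a residual deviation instead of reaching the target exactly, which mirrors the role of the disturbance variables z(t) in the model above.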
7.3.2 The Cybernetic Control Loop as a Management Tool in the Plant Life Cycle

Plant engineering takes into account standards in the development of plants, which are specified by various DIN standards (e.g., DIN 2800 from 2011, chemical apparatus construction) and/or VDI guidelines (e.g., VDI 4500 from 2016, technical documentation). Regarding the process cycle of the renewable energy plant described here, in the specific case a biogas plant, Krause et al. (2014, p. 25) write:

Plant operators must fulfill various operator obligations along the life cycle of a renewable energy plant. From the moment of the investment intention, information is exchanged between various actors. Each individual life phase in which the plant and its installed equipment parts are located can be precisely defined. The main phases in the life cycle of a renewable energy plant are "preparation, planning, construction", use, and decommissioning. The cybernetic control loop as a management concept offers a suitable method for optimizing the flow of information in the company, taking views into account, with the views as information profiles structuring the information flows. […]
The operating time of renewable energy plants is usually longer than the 20-year period supported by the Renewable Energy Act (EEG) [Act on the Priority of Renewable Energy 2012, author’s note] […]. Plant operators strive to extend the life cycle of their plant as much as possible to increase its economic efficiency. With the investment intention, plant operators are obliged to comply with legal, technical, and economic framework conditions. However, during the plant life cycle, numerous phase and hazard transitions can be recorded, which may be accompanied by a change in the actors involved. Therefore, it is necessary on the one hand to define as precisely as possible where, when, and which information flows between the actors, and on the other hand, which actors are involved in which life cycle phase. For this purpose, a uniform understanding of time, location, and type of information is required to ensure high quality. This is intended to ensure that at a later point in time, the information can be traced back to its origin.
Fig. 7.11 provides an overview of process steps in the life cycle of a renewable energy plant. The capture and coordination of information flows from different process sequences of the plant play a crucial role in the ultimate success of such complex tasks. This is to be achieved with a cybernetic control loop.
Fig. 7.11 Plant life cycle of renewable energy plants and their components. (Source: bse Engineering Leipzig GmbH)
Fig. 7.12 Cybernetic control loop of technical systems. (Source: bse Engineering Leipzig GmbH; superimposed technical control loop sketch by the author)
The cybernetic control loop is understood as a management tool for the dynamic control of information flow within the company or for cross-company communication. This is because the central subject of cybernetics is the "control and steering processes of and in systems as well as the exchange of information between the subsystems and their dynamic environment" […]. Since these processes form a recurring cycle [see Fig. 7.12, author's note] […], which reacts to changing environmental influences (disturbances) and carries out a permanent target-actual comparison, it can be referred to as a control loop. In order to reduce the risks in ensuring the flow of information for the customer, the approach of standardizing it should be pursued. (ibid., p. 31)
The result is the thus defined "cybernetic control loop" in Fig. 7.12, which can be described in six steps (ibid., p. 32):

1. The plant operator derives its requirements based on the specification of target values.
2. In the next step, the actual data is recorded. This actual data represents a record of the past, as it can no longer be changed; it provides information about a state at a defined point in time.
3. In the next step, the actual data is compared with the target specifications.
4. This comparison reveals a deviation (target-actual comparison).
5. The relevance of the identified deviations is checked. Depending on the importance of the deviation, measures are proposed to achieve or change the target specifications. This recommendation for action is submitted for decision-making.
6. From the decision-making process, a new target value is set.
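A schematic single pass through these six steps might look as follows in Python. The relevance threshold and the decision rule are invented placeholders; the source describes the steps only qualitatively.

```python
def control_cycle(target, actual, relevance=0.05):
    """One pass through the six-step cybernetic control loop: acquire the
    actual value, compare it with the target, check the relevance of the
    deviation, and return a recommendation plus the next target value."""
    deviation = target - actual                      # steps 2-4: acquire, compare
    if abs(deviation) <= relevance * abs(target):    # step 5: relevance check
        return {"deviation": deviation, "recommendation": None,
                "new_target": target}
    return {"deviation": deviation,                  # step 5: propose a measure
            "recommendation": f"countermeasure for deviation {deviation:+.1f}",
            "new_target": target}                    # step 6: decision sets target

if __name__ == "__main__":
    print(control_cycle(100.0, 90.0))   # relevant deviation, measure proposed
    print(control_cycle(100.0, 99.0))   # below threshold, no action
```

In a real plant the "new target" would come out of the decision-making body in step 6 rather than being carried over unchanged, as it is in this sketch.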
Depending on the situation or process, the differentiated information flows to be analyzed are divided into specific profiles in the form of five views for better handling: information from an economic, legal, material, technical, and technological perspective. It should be critically noted that the two views on energetic and ecological information are not explicitly listed. However, both are—especially for future renewable energy technologies—fundamental decision criteria with regard to sustainability.
Key Point Even renewable energy systems require energy for their production, which must be reflected in the life cycle of a system if costs are to be based on a realistic foundation.
7.3.3 The CyberPractice Method by Dr. Boysen

"CyberPractice", a portmanteau of cybernetics and practice, is presented as a method (Boysen 2011, p. 81) that

[…] applies its leverage […] directly within the system and in the actions themselves. It is based on the idea that the actors will proceed in a systemically meaningful way if they grasp the overall picture, recognize systemic connections, and, very importantly as a driver of action, if they expect greater benefits from a systemically meaningful approach than from an isolated one that supposedly increases their own benefit. For this purpose, the CyberPractice approach chooses a view of events from a systemic perspective. Any "event" has an impact on processes. Therefore, it makes sense to deal primarily with processes rather than organizational units, which are actually means to an end for carrying out processes. And now comes the crucial conceptual step, namely that each process can also be understood as a system [whose definition is already known, author's note].
In order to be able to carry out systemically meaningful actions in process events using the CyberPractice method, it is assumed that the acting persons are familiar with the cybernetic basics; this familiarity is achieved through training and is regarded as a leadership task. As potentials, it is indicated (ibid., pp. 83–84)

[…] that the participants capture the states of the system elements in the process, as also suggested by the system dynamics approach [according to J. W. Forrester, cf. Sect. 4.13, author's note]. However, explicitly documenting the system dynamics is deliberately avoided. Instead, the aim is to have the participants recognize the dynamics and shape them from a systemic perspective. The main advantage over purely analytical approaches is that the recognition of relationships is immediately linked to a systemically meaningful implementation, i.e., identified potentials are directly tapped. Another advantage is that organizations are simultaneously enabled to adapt dynamically. These implementation advantages are lacking in purely analytical description approaches, which include sensitivity analysis [according to F. Vester, see Sect. 4.14, author's note] and the system dynamics method. The CyberPractice method provides methodically sound implementation results regarding process design and fully meets the application criteria of business practice. The fact that the model remains methodically at the qualitative level is its strength, as it allows focusing on capturing and influencing essential relationships without suggesting an alleged numerical precision. Another strength of this approach is that the method can be applied, free of any commitment to a specific IT application, in intensive interaction between executives in organizations. […]. A core idea of the CyberPractice approach is to dig deep enough in the cause analysis to ensure that relationships are better understood and the effects of the interaction are specifically influenced. […]. Therefore, the search for problem causes must be conducted at least at the operational performance level, where measures and effects fully flow into the classic categories of "cost", "time", and "quality". However, it is not sufficient to tighten the Key Performance Indicators (KPIs) in these categories and pay attention to implementation discipline. Rather, the systemic prerequisites must be created that enable operational results to be truly improved. These systemic prerequisites can only be influenced by optimizing the interplay of the relationships. The CyberPractice procedural model can be effectively used for this optimization task.
The thin arrow lines in Fig. 7.13 indicate the consequences of actions, while the thick arrow line shows feedback effects of the results on the initial conditions. To simplify, the relationships are as follows: If the understanding of executives for systemic relationships is sharpened, the top executives will no longer primarily consider the business units and functional areas as the drivers for successful business. Rather, they will also
Fig. 7.13 Structure and course of the CyberPractice method in three blocks: System management, operational system results, and satisfaction of employees and customers. (Source: Boysen 2011, p. 84, modified by the author)
focus on the in-between, on the connections between specialists, between business units and companies. And they will better recognize the potential of the capabilities that can result from such connections. They will also perceive the benefits of redundancies that result from meaningful networking, not as a duplication of resources in the classical sense, but in such a way that different elements in the system can take over the same functions if they are versatile. (ibid., p. 84)
It should be critically noted that no real comparison of the postulated advantages and disadvantages of the CyberPractice method on the one hand and the mentioned System Dynamics methods and the sensitivity model on the other is known. Therefore, the emphasis on the advantages of one method, CyberPractice, over the other two can only be taken note of; a scientifically founded evaluation would shed more light on these diffuse arguments. What is certain is that systemic or cybernetic methods for solving problems in complex dynamic environments require a new perspective on events from those acting strategically and operationally. Whether economic processes, or processes with other priorities, influence real processes qualitatively or quantitatively, directly or indirectly through modeling, etc., is irrelevant in this respect.
Key statement “For to see clearly, a change of perspective is enough.”
This valuable insight of the French writer Antoine de Saint-Exupéry (1900–1944), expressed in his work "The City in the Desert" (published 1951), remains valid. Economic strategies and operational processes in particular show in various ways what is only hinted at in Fig. 7.13, namely the problem of consequences, especially when they inevitably result in further consequences of a burdensome extent. These are often accepted as a necessary evil or, if not insignificant follow-up costs are associated with them, classified as so-called external effects (environmental costs, social costs) outside the economic process. The ongoing proceedings concerning economic cartels, coupled with the fraudulent manipulation of technology in passenger cars by German car manufacturers (e.g., Friedmann et al. 2017; Faller et al. 2017; Osman and Menzel 2017), in which executives have not only deliberately deceived millions of car drivers but also knowingly polluted the air in our environment, are one of many clear alarm signals that conventional rigid hierarchical corporate management has become outdated in many economic sectors.
Key Statement There is an increasing lack of cybernetic "negative feedback" in the economic mechanism of our time, feedback that would prevent ethical boundaries from being exceeded by the arbitrary actions of individuals or groups and would safeguard sustainable, survival-enhancing economic activity.
7.4 Cybernetic Systems in Society

Starting with the most complex structures and dynamic processes of evolutionary nature, to which we owe our existence, and moving through subareas of our work processes, technology and the economy, we end this chapter on cybernetic systems with the functional unit of society, which is no less complex than the two previously considered "systems" with their cybernetic processes, quite the contrary. In societies, highly dynamic processes take place that are impossible to comprehend in full. Societies in which people act with one another as social actors are characterized by a great diversity of individuals and dynamics and cannot be regarded as closed systems in the definitional sense. A society or an ethnic group as a system is open to the environment and thus also open to other ethnic groups. The term society, which encompasses humanity as a whole, is also applicable to plant and animal societies. Nature clearly shows the networking of different, evolutionarily developed societies of humans, plants, and animals in countless examples. In doing so, structures of energy supply, material processing, and communication are established that allow the species to strengthen their partial survivability. Behind these survival strategies are evolutionarily optimized, networked control loop processes, which contribute to system stability in local habitats through suitable positive and negative feedback. Human societies seem to be an inglorious exception here. With their "intelligence" and "creativity" they have, more or less, embarked on the path of dominating nature through targeted control processes, which stands in complete contradiction to nature's strategy and causes enormous damage to nature. The vicious circle of human development strategies is complete when we have rendered the very basis of our own development unlivable.
The fact that we have been on the best path in this direction for some time is demonstrated by the proclamation of a new geological epoch—the Anthropocene (cf. Crutzen and Stoermer 2000; Küppers 2018), according to which the destructive changes on our planet can be clearly traced back to human activities. How do approaches of cybernetic systems in our human society, which we will examine in the following examples, affect us? Can they help to redirect misguided development strategies in many societal areas, which are coupled with sometimes enormous consequential problems, towards strengthening sustainability, robustness, and progress capability? Or will cybernetic systems in the narrow, control technology-oriented sense, coupled with new digital possibilities of algorithms and “artificial intelligence,” be used to achieve exactly the opposite of what can be called strengthening survivability? Between both poles lies a gray zone of action alternatives, however they may be used (see ibid.). We begin with a concept that links cybernetics with social concerns in societies: sociocybernetics.
7.4.1 Sociocybernetics

An explanation of what sociocybernetics means is provided by the following text (https://de.wikipedia.org/wiki/Soziokybernetik. Accessed on 13.02.2018):

Sociocybernetics summarizes the application of cybernetic insights to social phenomena, i.e., it attempts to model social phenomena as complex interactions of several dynamic elements. An important problem of sociocybernetics lies in second-order cybernetics [see Sect. 5.5, author's note], as sociocybernetics is a societal self-description.
Sociocybernetics overlays systems science onto social science, with a field of work ranging from fundamental epistemological questions to application-oriented research, including ethical problems. Consequently, sociocybernetics analyzes and evaluates the complexity of a society, from a systemic perspective, less through linear causal processes and primarily through dynamic feedback processes, including self-organized systems. The aforementioned source continues (ibid.):

System processes, especially the relationship between systems and their environment, are understood as "informational processes." Information is the factor responsible for the formation of structure and thus for the internal order of systems. But information contains contingencies and requires selection; there are no necessities in the sense of strict causality. Systems react to control attempts from their environment with self-organizing (dissipative) structures, which is why they are fundamentally unpredictable and have become the privileged subject of sociocybernetic research.
According to the International Center for Sociocybernetics Studies Bonn (CSSB), which emerged from the Fraunhofer Institute for Intelligent Analysis and Information Systems (http://www.sociocybernetics.eu/wp_sociocybernetics/2015/12/01/soziokybernetischegrundprinzipen/. Accessed on 14.02.2018), sociocybernetics is

[…] the application of systemic thinking and cybernetic principles in the analysis and handling of social phenomena with regard to their complexity and dynamics. Using a cybernetic approach in sociological research implies engaging with some fundamental principles that have been emphasized differently by the classics of systems theory and cybernetics. While the mathematician Norbert Wiener highlights aspects of control and communication in natural and human science contexts, the neurophysiologist Warren McCulloch defines cybernetics as an epistemology concerned with the generation of knowledge through communication. Stafford Beer sees cybernetics as a science of the organization of complex social and natural systems. For Ludwig von Bertalanffy, cybernetic systems are a special case of systems, distinguished from other systems by the principle of self-regulation. According to Bertalanffy, cybernetics as a scientific discipline is characterized by its focus on the exploration of control mechanisms, relying on information and feedback as central concepts. Similarly, Walter Buckley formulates that he would like to understand cybernetics less as a theory and rather as a theoretical framework and a set of methodological tools that can be applied in various research fields. The philosopher Georg Klaus sees cybernetics as a fruitful epistemological provocation (Klaus 1964). For Niklas Luhmann, the fascination of cybernetics lies in the fact that it addresses the problem of constancy and
invariance of systems in an extremely complex, changing world and explains it through processes of information and communication. For Heinz von Foerster, self-referentiality is the fundamental principle of cybernetic thinking. He speaks of circularity, meaning all concepts that can be applied to themselves, processes in which ultimately a state reproduces itself.
After this brief and concise enumeration of differentiated, classical cybernetic approaches, the question arises: Which path or paths should a sociocybernetics, a cybernetics of society, take to demonstrate its problem-solving competence? Following the previous source, it continues (ibid.):

Sociocybernetics is a research area in which sociology meets some neighboring disciplines from the natural and technical sciences to overcome the […] usual conception that the social sciences and humanities on the one hand and the natural and technical sciences on the other hand, as different scientific cultures standing side by side, have nothing to say to each other in everyday scientific life. […] In view of the increased public reflection on how to develop precautionary strategies for cross-system risks, how to change traditional production forms and consumption patterns in a socially and ecologically more appropriate direction, which societal control instruments should be used, for example, to address the most serious problems of globalization, how to implement global social standards, or how to develop realistic strategies for sustainable development, sociocybernetics is recommended as an approach to address the complexity and dynamic problems associated with such questions. Not only through its epistemological and paradigmatic foundations, but also in the intensive use of computer systems supported by information technology, cybernetics increasingly succeeds in practicing a mutual reference between the two scientific cultures. This makes it increasingly possible to process traditional problems of sociology with mathematical methods. With growing success, for example, the new methods of computer modeling are being applied to more and more areas of the social sciences and humanities, from the simulation of language acquisition and language production processes to the simulation of market processes of economic action to the formal modeling of the evolution of societies.
By no means can these methods replace the proven research methods of sociology, but with their help, it could be possible to scientifically capture the problem of overcomplexity of social phenomena more adequately. Conversely, computer models always depend on the methodological and content-related know-how of established sociological research, without which the best models must remain empty. Changes can also be observed in the opposite direction, made possible by the use of common descriptive languages and modeling methods: In the field of software engineering, for example, the influence of neocybernetic thinking has contributed to overcoming naive ideas about the observation and modeling of social facts and replacing them with new methods (e.g., evolutionary and cyclical software development methods based on a constructivist epistemology). Current developments in the field of internet technologies (e.g., the "Semantic Web") [= exchangeability and usability of data between computers, author's note] are in close dialogue with sociological, communication science, and philosophical research. The same applies to information technology work in the field of autonomous systems and Distributed Artificial Intelligence (DAI), where, among other things, the development of software agents is being worked on, which are characterized by autonomous cooperation relationships and can
thus emerge into new forms of sociality [= elevate to a higher quality level that cannot be achieved by individual system elements, author's note]. Research in this regard is hardly conceivable without a sociological foundation.
A research approach by Hummel and Kluge (2004) is worth mentioning; it deals with linking social and ecological concerns through cybernetic regulation. Both systems are closely connected but pursue different target strategies. Interestingly, the authors also explicitly address critical arguments, still relevant today, concerning the linking of cybernetic regulation with social environments. They write (ibid., p. 12):

The unreflected transfer of technical-cybernetic terms and models to society is rightly criticized as problematic. But what about a reflected analysis of social contexts with generalized cybernetic-systems-scientific terms and models? A second objection may relate to a politically narrowed understanding of cybernetics. In the 1960s and 1970s, political cybernetics emerged. These control-theoretical approaches of a politically oriented cybernetics were embedded in a growing state interventionism of the Keynesian-influenced welfare economy, with a strongly prospering administration of public services (and economy). Authors such as Karl W. Deutsch [1969, "Political Cybernetics", original 1963, "The Nerves of Government: Models of Political Communication and Control", author's note] or Amitai Etzioni [1968, "The Active Society", author's note] were concerned with increasing the learning ability of society and the self-determination (self-organization) of social groups.
We will discuss the impact of cybernetics in the political environment in more detail later. After the previously quoted passages from the International Center for Sociocybernetics Studies Bonn (CSSB) on systemic influences on social processes, we have a well-founded introduction, which we will now supplement with further cybernetic applications to social areas.
7.4.2 Psychological Cybernetics

The previously discussed connection of cybernetics with the social sciences is not far from what links cybernetics with psychology. The behavior of humans and animals, its observation, and the conclusions drawn from it encompass a wide field of knowledge, as a definition of psychology suggests (https://de.wikipedia.org/wiki/Psychologie. Accessed on 15.02.2018):

Psychology is an empirical science. It describes and explains human experience and behavior, their development over the course of life, and all relevant internal and external causes or conditions. Since not all psychological phenomena can be captured by empirical means, the importance of humanistic psychology should also be pointed out.
In an early contribution to cybernetic approaches in behavioral psychology from the circle around the Austrian zoologist and Nobel laureate in medicine Konrad Lorenz (1903–1989), who led the Max Planck Institute for Behavioral Physiology in Seewiesen in the 1960s, Norbert Bischof raised the question in the Psychologische Rundschau (1969, pp. 237–256): "Does cybernetics have anything to do with psychology?" Bischof had provided an initial answer himself a year earlier, in his article "Cybernetics in Biology and Psychology" (1968, pp. 63–72), in which he speaks of processes "in or on 'a human', 'a living being' or 'an organism'" (ibid., p. 63), contrasting the thesis of the atomistic approach with the antithesis of the holistic approach in order ultimately to arrive at the synthesis of the cybernetic approach in psychology. Bischof (ibid., pp. 69–70) uses the obvious term biocybernetics, as does Frederic Vester:

Biocybernetically oriented behavioral physiology, like holistic theories, tends to understand the organism from the "inner perspective" […] in this respect, it is of historical scientific relevance that the life's work of Erich von Holst [German biologist and behavioral researcher (1908–1962), who not only provided evidence of physiological self-activity of the central nervous system but also contributed fundamental insights into bird flight, e.g., the flight principle of dragonflies, author's note] culminated on the one hand in the investigation of the spontaneous activity of the organism, and on the other hand in the formulation of the reafference principle. The organism thus appears as a system that not only reacts to stimuli (afferences) but also always receives reafferences of its (spontaneous) actions. Here, too, the reflex arc closes into a "circle", which is now, however, precisely determined as a control loop.
The behavioral physiological approach can also be characterized as “holistic” insofar as the organism is not reconstructed by synthetically assembling the smallest elementary components, but is initially viewed globally as a kind of “black box”, which unfolds into a structure of subsystems in the course of gradually differentiating analysis. The behavior of this overall structure is then, as every control theorist knows, more than the sum of the behavioral characteristics of its substructures—in the sense that, in general, far more complicated mathematical operations are required to determine the properties of the whole from those of the parts (including their mutual relationships). While parallels to the aforementioned group of thought systems [atomistic and holistic, author’s note] can indeed be demonstrated with regard to the “holism” of the perspective, there is a significant deviation in that the vague category of “interaction” is abandoned in favor of the consistent reduction to unambiguous directed courses of action. However, in contrast to reflex theory, these now no longer appear linked only in chain structure; by adding the structure types of the “mesh” and especially the feedback loop, the conceptual tool gains a degree of flexibility that readily permits an adequate description even in cases of genuine interaction.
It is worth mentioning that at the same time, the cybernetician Karl Steinbuch dealt specifically with “Considerations on a Hypothetical Cognitive System” (Steinbuch 1968, pp. 53–62) and more generally with “Automaton and Man. On the Way to a Cybernetic Anthropology” (Steinbuch 1968, original 1961). Bischof’s own answer to the question raised above, whether cybernetics has anything to do with psychology and contributes new insights, reads as follows (1969, pp. 255–256):
7.4 Cybernetic Systems in Society
I have […] pointed to a current in contemporary psychology that believes scientificity finds its most distinguished expression in the attitude of critical reservation. In contrast to such “adversaries of the soul,” cyberneticians are rather naive. However, naivety has always been related to creative productivity—and this must be granted to cybernetics in any case: it does not throw obstacles in our way, it does not constantly question our own ideas (at most, it cares too little about them); mainly, it is busy providing us with new ideas, questions, and solution proposals, and if only ten percent of them should be useful, the effort would have been worthwhile. For the psychological practitioner, cyberneticians are much more pleasant partners than the above-mentioned critics from their own ranks. Cyberneticians still have genuine awe for intuition (this comes from their mathematical training): They do not rap the interpreter of a projective test or the psychoanalyst or the practitioner of human understanding on the knuckles because their ways of knowing are too complex, too confusing, or too “subjective.” Instead, they sneak up on them from behind and try with all system-analytical sophistication to find out how they do it! To then recreate it. Of course, this may not be to everyone’s liking, but at least—the cybernetician does not initially mistrust the performance of his victim.
Bischof concludes self-critically: I cannot hope to have dispelled all mistrust with these remarks. However, one thing should be considered: The ideologically motivated confession to the expectation that, despite all efforts for a natural-scientific or quantitative or cybernetic—in any case, rational analysis of humans, an insoluble residue must remain, cannot help us further. We are dependent on recognizing what this residue consists of.
The following passage, despite its zeitgeist of the 1960s/1970s and with a view to science—especially education—is more relevant than ever (ibid.): To experience limits, one must dare to ignore them. Put differently: We must not—out of fear of overrunning existing boundaries—shy away from expeditions into the boundless. We must trust that the truth remains stronger than our arrogance. “To explore the explorable and to quietly revere the unexplorable”—this maxim, as Konrad Lorenz likes to say, must not lead to declaring what one wants to revere as unexplorable out of fear of profanation.
Key statement (Bischof) “The truth has many enemies. For example, malice, or arrogance, convenience, and of course stupidity. But the mortal enemy of truth is fear.” (ibid.)

After these decades-old insights into a cybernetic psychology, we now jump to the present and rely on two sources that, on the one hand, consider the “application of systems theory to explain human behavior” (Kalveram 2011—text and illustrations are taken from slides of a lecture and are therefore without page numbers) and, on the other hand, deal with the “influence of cybernetics on psychological research methodology” (Krause 2013). Kalveram answers his own question of how organismic behavior can be captured with four consecutive steps (ibid., numbering by the author):
1. Observing acts of behavior. Questioning the seemingly self-evident. Highlighting noteworthy phenomena.
2. Description of these phenomena with suitable terms, here with cybernetic terms.
3. Experimental verification of hypotheses about relationships between statements.
4. Derivation of a behavior theory to explain human (and animal) behavior, i.e., to predict within the chosen theoretical framework.
Organismic behavior is grounded in an environment in which humans, animals, and plants interact. The close interaction between mother and child is just as much a part of this as the chemical warning signals a plant species sends to nearby conspecifics against impending predators (see Fig. 5.4). But how can such behavior be measured? Kalveram draws on automata theory (Hopcroft et al. 2011), a branch of theoretical computer science and an essential tool of complexity theory (see Fig. 7.14). The block representation often used in technical cybernetics for a control or regulation process, with input variables, black box, and output variables, can also be applied to behavioral observations in psychology, as shown in Fig. 7.14. On the right in Fig. 7.14, a biocybernetic control loop is set up between the individual and the material environment, initiated by the respective input variables (command variables) w and x, and closed into feedback loops between the systems individual and environment through the reactive linkage variables e and a.
Fig. 7.14 Abstract time-discrete automaton (left) and biocybernetic system of two automata, consisting of individual and material environment (right). The abstract automaton maps an input variable onto an output variable via its internal condition z: the result function (behavior equation) a′ = F(z, e) and the transfer function (equation of state) z′ = G(z, e), where a′ and z′ denote the values of the variables a and z in the next time cycle. In the right-hand diagram, sensory perception and motor action couple the individual to its material environment. (Source: after Kalveram 2011, supplemented by the author)
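The two automaton equations can be turned into a short sketch. The following class is an illustrative assumption by the editor (Kalveram gives no code): a time-discrete automaton that, in each cycle, emits a′ = F(z, e) and updates its internal condition via z′ = G(z, e).

```python
# Minimal sketch of a time-discrete automaton in the sense of Fig. 7.14
# (illustrative; names and the example functions F, G are assumptions).

class Automaton:
    """Time-discrete automaton: a' = F(z, e), z' = G(z, e)."""

    def __init__(self, state, F, G):
        self.state = state   # internal condition z
        self.F = F           # result function (behavior equation)
        self.G = G           # transfer function (equation of state)

    def step(self, e):
        """Process input e for one time cycle and return the output a'."""
        a = self.F(self.state, e)       # a' = F(z, e)
        self.state = self.G(self.state, e)  # z' = G(z, e)
        return a

# Example: an automaton that echoes its input together with the cycle count.
counter = Automaton(
    state=0,
    F=lambda z, e: (z, e),   # output: (current cycle, input)
    G=lambda z, e: z + 1,    # next state: increment the cycle counter
)
print(counter.step("stimulus"))  # -> (0, 'stimulus')
print(counter.step("stimulus"))  # -> (1, 'stimulus')
```

The same pattern scales to the two coupled automata on the right of the figure: the output of one automaton becomes the input of the other.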
Fig. 7.15 Biocybernetic system of two automata, consisting of an individual (child) and a social environment (mother); left: general scheme, right: mother–child scheme. Mother behavior outputs: 0 rest, 1 search for food, 2 offer food, 3 eat oneself (internal condition: care drive); child behavior outputs: 0 play, 1 begging, 2 food (internal condition: eating drive). The results of the individual result functions F and transfer functions G are omitted here and can be found in Kalveram (2011). The values for w and x entering the respective social systems as control variables or “free variables” are assigned “arbitrariness” and “randomness” according to Kalveram. (Source: after Kalveram 2011, supplemented by the author)
Comparable biocybernetic functions can also be constructed—following the same scheme as in Fig. 7.14 on the right—between an individual and a social environment, for example between a child as an individual and a mother as the social environment, as shown in Fig. 7.15. This is intended to demonstrate that certain characteristics of human behavior can be more or less abstractly represented by automata. However, Kalveram does not want to draw the conclusion that humans are automata. Kalveram’s representations of biocybernetic influences on psychology and behavioral psychology culminate in a simplified individual-environment model, as shown in Fig. 7.16. In addition to the three-stage process of strategic, volitional (will-determined), and tactical sequences in the individual, the part in which cybernetics intervenes is also highlighted. Kalveram summarizes his automata-based models as follows: Human behavior is described within the framework of an individual-environment system. The individual and the environment are defined as “abstract automata,” with the output variable of one automaton being the input variable of the other. The biological-psychological requirement for the individual is to induce the environment to provide the necessities of life. On-stimuli arouse the drive to seek out the essentials of life, while off-stimuli signal success and switch the drive off. The strategic apparatus houses these drives. The arbitrary apparatus serves to select one of several aroused drives. The tactical apparatus plans and realizes the elicitation of perceptions [sensory perceptions, author’s note] that contain the relevant off-stimulus.
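The coupling "output of one automaton is the input of the other" can be sketched for the mother–child scheme of Fig. 7.15. The concrete transition rules below are the editor's illustrative assumptions, not Kalveram's original functions; only the behavior codes (mother: 0 rest, 1 search food, 2 offer food; child: 0 play, 1 begging, 2 eat) follow the figure.

```python
# Illustrative coupling of two automata (child <-> child's social environment,
# the mother); transition rules are assumptions, not Kalveram's originals.

REST, SEARCH, OFFER = 0, 1, 2   # mother behavior codes (Fig. 7.15)
PLAY, BEG, EAT = 0, 1, 2        # child behavior codes (Fig. 7.15)

def child_F(drive, mother_act):
    # Result function: eat if food is offered, beg if hungry, else play.
    if mother_act == OFFER:
        return EAT
    return BEG if drive >= 3 else PLAY

def child_G(drive, mother_act):
    # Transfer function: eating resets the eating drive, otherwise it grows.
    return 0 if mother_act == OFFER else drive + 1

def mother_F(care, child_act):
    # Result function: offer food when the child begs, otherwise rest.
    return OFFER if child_act == BEG else REST

child_drive = 0
mother_act = REST
trace = []
for _ in range(6):
    child_act = child_F(child_drive, mother_act)   # a' = F(z, e)
    child_drive = child_G(child_drive, mother_act) # z' = G(z, e)
    mother_act = mother_F(1, child_act)            # mother's care drive fixed
    trace.append((child_act, mother_act))
print(trace)  # the child plays, then begs once the drive has built up, then eats
```

Running the loop yields the cycle play → beg → (offer, eat) → play, i.e., the off-stimulus (offered food) switches the eating drive off, exactly the closed-loop behavior the text describes.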
In his article “The Influence of Cybernetics on Psychological Research Methodology,” Krause initially provides an overview of the influence of cybernetics in the
Fig. 7.16 Simplified individual-environment model. The individual is coupled to the environment through a perceptual apparatus (sensory input, perception) and a motor apparatus (motor output, behavior). Internally, a strategic apparatus (behavioral drives, vegetative somatic functions), a volitional apparatus (arbitrary selection among aroused drives), and a tactical apparatus (target planning, environmental knowledge, desired perception, desired kinematic trajectory) are distinguished; the part in which cybernetic control processes intervene is highlighted. (Source: after Kalveram 2011, supplemented by the author)
German-speaking world. This was done on the occasion of the 100th birthday of the GDR cybernetician Georg Klaus, who became internationally known in the 1960s through his publications, including “Cybernetics and Society” (1963). In the further course of his explanations, Krause discusses the influence of probability and information concepts on psychology. Examples include (ibid., pp. 4–5):

• Hick’s Law, which describes the logarithmic relationship between reaction time and the number of choice options:

Rt = k · ld(n)  (7.7)
with Rt as reaction time, k as a factor, and ld(n) as the dual (base-2) logarithm of the number of choice options n.

• The entropy concept (according to Shannon) in psychology:

I(pi) = −ld pi = −log2 pi  (7.8)
with I = information content and pi as the probability of the occurrence of the symbol zi of a symbol set Z = (zi; i = 1, …, n). The average information content of a transmitted symbol is then obtained as the expected value H(Z), the entropy:

H(Z) = Σi pi · I(pi) = −Σi pi · ld pi  (7.9)
Krause writes about this (ibid., p. 5): For psychological research methodology, this magnitude of entropy opens up another influencing variable for stimulus-response investigations. As with Hick’s law, the set of symbols can be varied by their number, or one can also keep the average information content constant and examine further influencing variables under these then constant conditions. […] An investigation approach with a choice reaction device that has 36 stimuli […] leads to […] the following result for entropy with a […] constant number and equally probable occurrence of stimuli:

H = −Σ p · ld p = −Σ 1/36 · ld(1/36) = −ld(1/36) = ld 36 ≈ 5.17  (7.10)
Krause also discusses neural networks in psychology, specifically associative learning processes, which he traces back to “Pavlov’s experiments with dogs” and “Skinner’s rat box.” Without directly addressing this context, Krause concludes his remarks on the influence of cybernetics on psychological research methodology with two observations (ibid., p. 9):

1. Through the interdisciplinary interaction with cybernetics, the concept of information for psychology became both a measurable quantity, necessarily connected with a receiver, and a theoretical explanatory model of information transmission.
2. Through the interdisciplinary interaction with cybernetics, modeling methods in general, and specifically as methodological tools, were developed and successfully applied. The attempt to reproduce or explain mental events through modeling became decisive for the development of the field.
Cybernetic approaches are interdisciplinary and can be useful in many areas of our society. A special field in this regard is politics, or the political shaping of a society for the benefit of its citizens. “To avert harm from the citizens,” as the oath of office taken by newly appointed governing politicians puts it, is one thing. In practice, however, the prevention or avoidance of harm remains noticeably incomplete. Precisely where short-sighted compromise solutions are applied immaturely, cybernetics could and can render helpful services to politics.
7.4.3 Cybernetic Governance

A quote by Norbert Wiener (1952), with which the cultural scientist Benjamin Seibel begins the introduction to his book “Cybernetic Government” (2014, p. 7), should also introduce this chapter: [O]ur view of society [deviates] from the societal ideal represented by many fascists, successful businessmen, and politicians. […] People of this kind prefer an organization in which all information comes from above and none goes back. The people under them are degraded to effectors for an allegedly higher organism. I would like to dedicate this book to the protest against this inhuman use of human beings.
Maxim (Ashby) “Cybernetics […] does not ask ‘What is this thing?’ but ‘What does it do?’ […] Cybernetics examines all [sic!, author’s note] forms of behavior that are in any way organized, determined, and reproducible.” (Ashby 1985, pp. 15–16)
Maxim (Gilles Deleuze (1925–1995), French philosopher) “One should not ask, ‘What is power? And where does it come from?’, but ask how it is exercised.” (Deleuze 1992, p. 100)
Maxim (Michel Foucault (1926–1984), French philosopher and psychologist) “I am not […] sure whether it is worth pursuing the question again and again whether governing can be made the subject of an exact science. On the other hand, it is interesting to see in the […] practice of governing and in the practice of other forms of social organization a technê that can make use of certain elements from physics or statistics. […] I believe that if one treated the history [of governing—BS] within the framework of a general history of technê in the broadest sense, one would have a more interesting guiding concept than the opposition between exact and non-exact sciences.” (Michel Foucault, from Seibel 2016, p. 48)

These are impressive quotes that show:

• that already in the early stages of the scientific discipline of cybernetics, Wiener and other cyberneticians thought far beyond the technical control processes actually in focus, and thus had a visionary view of a broad field of cybernetic applications. This was already pointed out at the beginning of Chap. 7;
• that cybernetics focuses on the process and not the product. This applies to the technical environment as well as to the political, economic, or social environment—cybernetics, or rather: the biocybernetics of nature, masters its complex networked
processes, and this is usually also the case in other areas of our technologized, economic, and social environment; • that looking beyond disciplinary boundaries can also lead to connecting previously barely related areas of human activity in order to create sustainable emergent values from them. Seibel examines in his book traces of cybernetic approaches in political regulatory rationality from 1943 to 1970, which pursued the claim to “put the problem of ‘governing’ on a new technical basis.” (Seibel 2016, p. 9) It continues (ibid., p. 9): Where governmental target objects—individual and population—could be modeled and addressed in terms of information processing and feedback, it also became apparent at the same time that the government’s behavior could only take place in the mode of cybernetic control: “Government apparatuses [and political parties, author’s note]”, formulated Karl W. Deutsch 1969, “are nothing more than networks for decision and control, […whose] similarity to the technology of message transmission is great enough to arouse our interest.” [Deutsch 1969, p. 211, author’s note] In such formulations, the intention was expressed to apply the ideal-typical logic of a frictionless cybernetic machine control as a model for the establishment of a social order. This, however, marked significant shifts in the orders of political knowledge: In the mediality of the cybernetic dispositif [arrangement, in the formulation in Michel Foucault’s conceptual inventory, to which Seibel refers not only at this point, but much more: the constantly shifting and seeking new connections totality of all linguistic, institutional, discursive, etc. strategies that characterize, classify and ultimately regulate the behavior of socialized humans as social action, author’s note] governmental rationality crossed a “threshold of technology”, […] behind which new means and ends, problems and solutions, limitations and possibilities of a “good government” became visible. 
The initial thesis of the work is that a transformation in the technicality of the governing process itself can be observed using the example of cybernetics. A first preliminary decision is therefore not to dismiss cybernetic government models hastily as ideological aberrations, categorical confusions, fashionable adaptations, or merely metaphorical embellishments, but rather to take them seriously as technical descriptions that can also be read as monuments of a power dispositif. If a technicality of governing has so far rarely been made the explicit subject of political historiography, this is also because “technology” and “politics” are usually considered as two separate fields of study, between which at best improper transfers can take place, which would then, however, be “inappropriate” in the strict sense. […] In this case, one could agree with Wolfgang Coy [2004, p. 256, author’s note] that the cyberneticians simply “confuse governing with controlling and rules”, […] assume an error, and let the matter rest. Instead, it will be argued that governmental knowledge is indeed a technical knowledge, which is precisely for this reason highly dependent on the state of technical development in its concrete manifestations. A genealogy of political technologies asks neither for the “meaning” of technical government models nor for their “appropriateness”, but first and foremost, in a power-analytical intention, for the structure of their mediality and the strategies and effects that emerge from it.
As a conclusion of his investigations of cybernetics in the political environment (for a deeper treatment of Seibel’s considerations of cybernetic governmental action, the reader is referred to the book itself), Seibel starts from the vision of a cybernetically governed society in the 1960s, which blossomed briefly, only to disappear all the more quickly (ibid., p. 250): [The] […] significance of political cybernetics [is] perhaps less to be found in the ultimate failure of its attempts at solutions, but rather in its production of problems. Wherever the cybernetic view of governmental contexts revealed specific deficits and deviations, these were ultimately not simply “objective” characteristics of populations or other target objects, but disturbances occurring relative to an observer’s standpoint, which stood in the way of an ideal technical execution. In dealing with such disturbances, power techniques have a peculiarity: they shape the recalcitrant subjects whose behavior they seek to influence.
This demonstrates a high degree of sensitivity in the introduction of technical methods in sociotechnical systems, which must be considered when people and their behavior are integrated as “system elements” in cybernetic connections. Seibel identifies three problem areas (ibid., pp. 250–252): A first point of application for cybernetic governmentality was to view social orders in a specific way as orders of communication. In Claude E. Shannon’s model, which was decisive for cybernetics, communication could be understood in a highly abstract but mathematically precise way as the transmission of statistical signal streams. The focus was not on the semantic dimension of messages, but on the capacity of channels, the accuracy of transmissions, and the density of connections. In the sociological studies of Paul F. Lazarsfeld [et al. 1969, author’s note] or Stuart C. Dodd [et al. 1952, author’s note], the population appeared as an empirically measurable communication network. Norbert Wiener and Karl W. Deutsch, meanwhile, pointed out that the degree of cohesion and integration of a liberal society could be read from technical criteria such as the intensity and precision of the communications circulating in this network. […] A second cybernetic model was based on the analogy between the brain and the computer and conceived decisions as a problem of calculation. In operations research and game theory, human decision-makers appeared as utility maximizers who were not equipped with intuition, emotion, or experience, but with an objective mathematical rationality. The fact that this rationality was not actually present but had to be produced became apparent at the latest in the works of Herbert A. Simon [et al. 1947, 1977, author’s note], which dealt with the production of decidability. Instead of individual rationality, a systemic one was to be introduced, which was achieved through a coordinated interaction of various components. 
[…] Finally, a third hypothesis understood government action as a problem of cybernetic control. Individuals and communities were conceived as self-regulating systems, whose leadership was only conceivable as “management by self-control” [Peter F. Drucker 1998, author’s note]. In behavioral psychology and management theory, possibilities were discussed for achieving a more productive and at the same time “more humane” form of governance by specifically strengthening regulatory mechanisms and simultaneously designing action spaces: Instead of issuing prescriptions and prohibitions, it aimed at enabling subjects to independently cope with problems—and thus to steer disturbances within the social fabric already at the local level.
A current use of cybernetic systems in an extended framework, beyond what classical control technology provides, for practical use in political environments and public administrations is not discernible. Meanwhile, decades-old public administrations and governments are drifting into information-chaotic behavior; in the wake of their scarcely systemic, scarcely cybernetic perspective, mountains of consequential burdens pile up. Even though the heyday of cybernetics and its protagonists from the 1960s has faded and today only isolated approaches—especially in the economic-organizational area—can be identified, it is worthwhile to recall the forward-looking considerations of the founders of cybernetics. That we are today, and in the future more than ever, dependent on thinking and acting in systemic, cybernetic terms is undisputed. A thorough revival of cybernetic or biocybernetic basic rules in the societal environment is—in view of the disasters caused by humans—long overdue. As late as 1977, an appeal to young readers countered the decline of cybernetics with the exclamation: “Study cybernetics!” (Pias 2003, p. 9). It continues (ibid.): The time when cybernetics was still spelled with a “K” had come to an end. […] Spelled with a “C”, it returned in the 80s and early 90s in the ubiquitous talk of cyborgs and cyberpunk, cyberspace or cyberculture […].
Whether cybernetics is spelled with a “K” or a “C” is not important. What is essential is that when using cybernetic systems, human survivability is linked to the progressiveness of technology (see Küppers 2018). They are, so to speak, two sides of the same coin. In Sect. 7.4.3.2, we will look back at the example of a cybernetic state regulation experiment that took place in the early 1970s.
7.4.3.1 The Cybernetic Model of Karl Deutsch

Karl W. Deutsch introduces his book on “Political Cybernetics” (1969; original 1963: “The Nerves of Government”) with the following words (ibid., pp. 29–30): This book is an interim report on an intellectual enterprise that is still ongoing [until the present, author’s note]. At the end of this enterprise, there should be a theory of politics that spans both the national and international spheres. […] Such a theory should provide us with appropriate analytical concepts and models that can help us make our thinking about politics more rational and effective. […] It should help us assess the significance of certain institutions and political behavior patterns, especially when the actual behavior patterns differ significantly from those expected based on formal laws and institutions. In short, it should be as unswervingly realistic as the social scientist’s commitment to truth and reality can make it. Ultimately, a mature theory of this kind should enable us to recognize which political values and actions can be viable, growth-oriented, and creative.
Deutsch compares this claim on a theory of politics with the theory of evolution or genetics in biology, or with economic theory, and at the same time points out that his book can contain only fragmentary elements of such a theory.
His acquaintance with Norbert Wiener also led Deutsch to cybernetics. He hoped that with this scientific model, the theory of communication and control, he had found an instrument that would prove relevant for political research “and ultimately also stimulating and useful in the development of a coherent political theory” (ibid., p. 31). As demanding as this cybernetic project for politics may be, to the present day it has not been possible to establish anything comparable for a political system as Deutsch envisioned it. Much was still unknown in Deutsch’s time, such as the massive pressure of globalization on political and social systems or the emergence of a digitized, networked environment, in which information and communication seem to be—and are—the fuel for “everything”, including politics. At the same time, however, it is also evident that digitized politics operates beyond the political values and modes of action whose qualities Deutsch linked to being viable, capable of growth, and creative. How else could it be explained that humans have created their own geological age, the Anthropocene, with massive destruction of viable systems? Beyond Deutsch’s description of a simple cybernetic model with associated terms such as control, self-control, or feedback (Chap. 5), his book treats consciousness and will as structural patterns of information flow (Chap. 6), political power and social transaction (Chap. 7), authority, integrity, and meaning (Chap. 8), communication models and decision systems (Chap. 9), as well as learning ability and creativity in politics (Chap. 10). Finally, in Chap. 11, “The Government Process as a Control Process”, analogies are drawn between technical and political systems.
The English word “governor” combines two meanings: first, it refers to a governing person who administratively directs a political community; second, to a mechanical centrifugal governor, such as the one on Watt’s steam engine, which keeps the system’s operation stable, i.e., performs the function of negative feedback (see Fig. 3.8). Following this double meaning of the word governor, Deutsch writes (ibid., p. 258):
Maxim (Deutsch_1) “The similarity between such processes of control, purposeful movement, and autonomous regulation on the one hand and certain processes in politics on the other hand appears astonishing.”

Thus, governments are always striving to achieve goals in domestic and foreign policy. The approach to the goal is regulated by a flow of information, which also contains disturbances. Smaller or larger changes are made in response to these disturbances in order to get closer to the goal. Technically speaking, this is called sequential control. At the same time, a government also strives to stabilize a once-achieved advantageous state, e.g., a growth phase or a conflict-free relationship with a neighboring state, and to maintain it in a static equilibrium (cf. ibid., p. 258). The problem of such “balance politics” is often that unexpected disturbances can lead to unexpected subsequent problems that no one is prepared for. Deutsch writes, in essence (cf. ibid., p. 259):
Maxim (Deutsch)
Moreover, actions based on static equilibrium principles (political) cannot capture dynamic processes. A negative feedback principle with a system-inherent, disturbance-compensating or dynamically stabilizing effect is far better suited to directing political disturbances into orderly channels. Deutsch lists four arguments that enable an applied feedback model in politics to probe the performance capacity of governments with a series of questions that would hardly enter the field of attention within a conventional approach [the following arguments are reproduced only in part, author's note]:

1. How large are the extent and relative speed of change in the international and domestic situation that a government must cope with? In other words: how large is the burden on a state's decision-making system? And how large is the burden on the decision-making systems of individual interest groups, political organizations, or social classes? How large is the intellectual burden on their leaders? How large is the burden on the institutions that are supposed to ensure the participation of their members?

2. How large is the delay with which a government or party responds to a novel crisis situation or challenge? How long do political decision-makers take to become aware of a novel situation, and how much longer do they take to reach a decision? How large is the time lost through more widely dispersed consultation or participation? […]

3. How large is the gain of the response, i.e., the speed and magnitude of the reaction with which a political system reacts to newly received data? How quickly do bureaucracies, interest groups, political organizations, and individual citizens respond by making their resources available for new tasks? How large is the advantage that authoritarian regimes gain from being able to enforce mass support for each new policy? […]

4. How large is the extent of leadership, i.e., the ability of a government to effectively anticipate and preempt new problems? To what extent do governments try to increase their leadership speed by creating special intelligence services, leadership and planning staffs, and similar institutions? […] (ibid., pp. 263–265)
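Deutsch's four question complexes correspond to the classic feedback quantities of load, lag, gain, and lead. Their interplay can be sketched in a minimal, purely illustrative simulation (all parameter names and numbers below are our own assumptions, not Deutsch's): a government that corrects a disturbance promptly damps it, while the same corrective gain applied after a long delay produces growing oscillations.

```python
def simulate(lag, gain, steps=60, shock=1.0):
    """Minimal discrete negative-feedback loop in the spirit of Deutsch's
    four quantities: an initial disturbance (load), a reaction delay (lag),
    and a corrective strength (gain). Returns the trajectory of the
    deviation from the desired state 0."""
    x = [shock]                                       # load: initial deviation
    for t in range(steps):
        observed = x[t - lag] if t >= lag else 0.0    # the system reacts to old data
        x.append(x[t] - gain * observed)              # negative feedback correction
    return x

prompt_response = simulate(lag=0, gain=0.5)   # decays quickly toward 0
slow_response   = simulate(lag=6, gain=0.5)   # same gain, long lag: growing oscillation
```

In Deutsch's terms, a high gain cannot compensate for a long lag: beyond a critical delay, the same corrective effort destabilizes rather than stabilizes the system.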
Even a cursory glance at current political action processes, whether they concern financial transactions and political countermeasures, economic cartel initiatives that bypass political decision-makers, state subsidy allocation without suitable feedback and enforcement of consequences, or direct state investments with high potential for financial loss, makes one thing clear: had they first been confronted with the preceding questions, all of them would almost certainly have been directed into channels less risky for politics and society. It is apparent that the named influencing factors of a politically acting cybernetic system can, through their effective interactions, develop a strong position of action with respect to overall performance, and thus an ability to develop emergent properties that political systems based on equilibrium strategies would never achieve. Here, once again, the analogy to the principles of nature is recognizable, whose primary goal is the preservation of survivability, which must be strengthened individually.
190
7 Cybernetic Systems in Practice
It is clear, however, that while cybernetic systems can support political systems in decision-making processes and error prevention more robustly than other common approaches, they can represent the dynamics of reality only imperfectly. The critical variable is the human being as a system element, whose thinking and action processes can be captured only probabilistically. A good overview of "Cybernetics and Political Science," which also covers the "cybernetic model of Karl Deutsch," is provided by Senghaas (1966). Figure 7.17 (Deutsch 1969, p. 340) shows the schema, outlined by Deutsch, of "information flows and control functions in the process of foreign policy decision-making." It shows a system with which politics can be practiced from the perspective of a cybernetic communication network. In this context, negative feedback of information (see also Appendix I) plays a significant role in the fault-tolerant stabilization of this political cybernetic communication network. Table 7.2 supplements the English terms contained in Fig. 7.17 with German ones, while Appendix I explains the most important information flows (thick black arrows), feedback paths (blue arrows), test fields, decision areas, and others in the cybernetic political network (according to Deutsch 1969, pp. 342–345, in the order of the book template).
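Deutsch's "screens," i.e., filtering or selective functions, can be read as a cascade of filters through which incoming information must pass before it reaches the decision areas. The following sketch is purely illustrative: the message fields and predicates are invented for demonstration and do not appear in Deutsch (1969).

```python
def make_pipeline(*screens):
    """Chain several 'screens' (filtering/selective functions) into a
    single pass over incoming information, in the spirit of Fig. 7.17."""
    def run(messages):
        for keep in screens:
            messages = [m for m in messages if keep(m)]
        return messages
    return run

# Toy predicates for three of the screens named in Fig. 7.17:
selective_attention = lambda m: m["salience"] >= 0.5   # Screen of Selective Attention
repression_screen   = lambda m: not m["repressed"]     # Screen of Repression from Consciousness
policy_screen       = lambda m: m["feasible"]          # Screen of Acceptable and Feasible Policies

foreign_input = [  # Foreign Input (Receptors)
    {"topic": "border incident", "salience": 0.9, "repressed": False, "feasible": True},
    {"topic": "routine note",    "salience": 0.2, "repressed": False, "feasible": True},
    {"topic": "old defeat",      "salience": 0.8, "repressed": True,  "feasible": True},
]

into_decision_areas = make_pipeline(selective_attention, repression_screen, policy_screen)
surviving = into_decision_areas(foreign_input)
```

Only the first message passes all three screens; what the decision areas ever "see" is already heavily pre-selected, which mirrors the selective character of Deutsch's filter architecture.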
7.4.3.2 State Cybernetic Economic Control—Stafford Beer's Chile Experiment

The socialist government in Chile under President Salvador Allende (1970–1973) pursued an economic and social policy in which it nationalized, without compensation, natural resources that had been mined by private companies, expropriated foreign corporations and banks, and, in an agrarian reform, divided large estates between farmers and socialist collectives. The goal was to achieve Chile's independence—especially from the USA. The Unidad Popular [an electoral alliance of left-wing Chilean parties and groups, ed.] set state-controlled prices for rent and essential basic goods. Education and healthcare were provided free of charge. Every child received shoes and half a liter of free milk daily. With his social policy, Allende followed both socialist ideals of the 1970s and a South American tradition of "populist" demand policy. (https://de.wikipedia.org/wiki/Salvador_Allende. Accessed on 17.02.2018; see also Larraín and Meller 1991).
Even though demand policy initially promoted growth and led to real wage increases through state subsidies and thus an increase in the money supply, negative effects soon followed due to the limited availability of various products: in particular the increase in the inflation rate from 29% (beginning of Allende's presidency) to 160% in 1972 (Chilean Central Bank, Memento of 12 March 2007 in the "Internet Archive", https://web.archive.org/web/20070312052841/http://si2.bcentral.cl/Basededatoseconomicos/951_455.asp?f=A&s=IPC-Vr%25M-12m; see also https://de.wikipedia.org/wiki/Salvador_Allende#cite_note-bank-12. Both accessed on 18.02.2018) as well as increasing boycotts from foreign countries, especially the USA and Europe, and a political erosion
Fig. 7.17 Schema of a cybernetic political communication with decision-making. (Source: after Deutsch 1969, p. 340, modified and supplemented by the author)
7.4 Cybernetic Systems in Society 191
Tab. 7.2 List of German-English terms for Fig. 7.17. (Source: after Deutsch 1969, p. 34)

(Original) – (Translation)
Areas of decision processes – Bereiche der Entscheidungsbildung
Confrontation and Simultaneous Inspection of Abridged Secondary Symbols („Consciousness“) – Konfrontation und simultane Sichtung sekundärer Kurzsymbole („Bewußtsein“)
Current Memory Recombinations – Neukombination aktueller Erinnerungen
Deeply Stored Memory – gespeicherte Erinnerungen
Domestic Input (Receptors) – innenpolitische Informationseingabe durch Empfangsorgane
Domestic Output (Effectors) – innenpolitische Informationsausgabe durch Wirkungsorgane
Final Decisions – endgültige Entscheidungen
Foreign Input (Receptors) – außenpolitische Informationseingabe durch Empfangsorgane
Foreign Output (Effectors) – außenpolitische Informationsausgabe durch Wirkungsorgane
Information about Consequences of Output – Information über Auswirkungen der Ausgabeleistung
Main stream of information – Hauptströme des Informationsflusses
Screen of Acceptable and Feasible Policies – Prüffeld für zweckdienliche und durchführbare Verfahrensweisen
Screen of Repression from Consciousness – Prüffeld zur Abschirmung des Bewußtseins
Screen of Selective Attention – Prüffeld zur selektiven Informationsaufnahme
Screen of Acceptable Recalls – Prüffeld zur Entnahme zweckdienlicher Erinnerungen
Screens, i.e. filtering or selective functions – Prüffelder (mit der Funktion eines Filters oder Ausleseorgans)
Secondary streams of information – sekundäre Informationsströme
Selective Memory – selektive Erinnerungen
Selective Recall – selektive Entnahme von Erinnerungen
Tertiary streams of information – tertiäre Informationsströme
„Will“, or internal control signals, setting screens – interne Steuerungssignale („Wille“) zur Regulierung der Prüffelder
Tentative Decisions – vorläufige Entscheidungen
initiated by the USA through CIA agent activities, with the aim of destabilizing the country through an economic crisis and preparing a military coup (Hanhimaki 2004, p. 103). In this politically, socially, and economically heated situation of Allende's socialist Chile experiment, Finance Minister Fernando Flores conceived the idea for Cybersyn, an acronym for Cybernetic Synergy, which he developed together with
Fig. 7.18 Flowchart of Cybersyn with the Cyberstride software based on a sketch by Stafford Beer for the cybernetic rescue of Chile. (Source: Medina 2011, p. 136)
Stafford Beer (Fig. 7.18). This offered Beer the opportunity to test his Viable System Model, which served as a template, under the conditions of societal practice. The cybernetic experiment in Allende's Chile has been described by various authors, including Larraín and Meller (1991), Pias (2005), Philippe Rivère (2010), Medina (2006, 2011) and, of course, Stafford Beer himself (1994), with more or less detailed explanations. That it ultimately did not lead to successful results in socialist Chile is due to the military coup by Pinochet in 1973, supported by the USA. Rivère vividly describes the emergence and development of the cybernetic state experiment in Chile as follows (Rivère 2010):

On November 12, 1971, Beer went to the Moneda, the presidential palace in Santiago de Chile, to present his CyberSyn project to Salvador Allende. […] Beer had been invited by Fernando Flores, the technical director of "Corfo," an umbrella organization of the companies nationalized by the Allende government. The young engineer Flores wanted to introduce "scientific management and organizational techniques at the national level," as he put it in his letter of invitation to Beer. To be able to cope with pre-programmed economic crises in real time, Flores and Beer envisioned connecting all factories and businesses in the country through an information network. […] The CyberSyn team, composed of scientists from various disciplines, set to work, collecting unused teletype machines and distributing them to all state-owned enterprises. Under the direction of the German designer Gui Bonsiepe, they developed a prototype of an "Opsroom" (operations room), a control room like in the "Star Trek" universe, which was never realized.
Data on daily production, labor, and energy consumption were sent throughout the country via telex and radio connections and evaluated daily by one of the few computers that existed in Chile at the time, an IBM 360/50 (absenteeism from the workplace was counted, among other things, as an indicator of "social malaise"). As soon as one of the figures fell outside its statistical margin, an alarm, in Beer's vocabulary an "algedonic signal," was sent, giving the respective plant manager some time to solve the problem before, if the signal was repeated, it was reported to the next higher authority. Beer was convinced that this "gave Chilean companies almost complete control over their activities on the one hand and allowed intervention from a central point when a serious problem arose…" [Medina 2006, p. 572, author's translation].

The CyberSyn project, although technically demanding, writes computer historian Eden Medina, "was from the outset not just a technical attempt to regulate the economy. From the perspective of those involved, it could help advance Allende's socialist revolution. […] The conflicts over the conception and development of CyberSyn simultaneously reflected the struggle between centralization and decentralization that disturbed Allende's dream of democratic socialism." [ibid., p. 574, author's translation]

On March 21, 1972, the computer produced its first report. By October, the system faced its first test in the face of strikes organized by the opposition and professional interest groups ("gremios"). The CyberSyn team formed a crisis unit to evaluate the 2,000 telex messages arriving daily from all over the country. On the basis of these data, the government determined how to get the situation under control. It then organized 200 loyal truck drivers (compared to about 40,000 strikers) to ensure the transport of all vital goods, and overcame the crisis.
The CyberSyn team gained prestige, Flores was appointed Minister of Economy, and the British Observer headlined on January 7, 1973: “Chile run by Computer.” On September 8, 1973, the president ordered the central computer, which had been housed in the abandoned offices of Reader’s Digest, to be moved to the Moneda. Just three days later, the army’s fighter planes bombed the presidential palace, and Salvador Allende took his own life.
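The algedonic mechanism described in the account above, an alarm when an indicator leaves its statistical margin, followed by escalation only if the local level fails to resolve it in time, can be sketched in a few lines. This is a deliberate simplification: the real Cyberstride software used more elaborate statistical filtering, and all names, thresholds, and the "mean ± k standard deviations" margin below are our own assumptions.

```python
from statistics import mean, stdev

def algedonic_check(history, value, k=2.0):
    """Alarm ('algedonic signal') if today's reading leaves the indicator's
    statistical margin, here modelled as mean +/- k standard deviations
    of past readings."""
    m, s = mean(history), stdev(history)
    return abs(value - m) > k * s

def escalate(daily_alarms, patience=3):
    """Report to the next higher level only if the alarm persists beyond
    the local manager's grace period; otherwise the plant handles it."""
    streak = 0
    for alarm in daily_alarms:
        streak = streak + 1 if alarm else 0
        if streak >= patience:
            return "report to higher level"
    return "handled locally"
```

The design point is filtration: an isolated outlier stays with the plant manager, and only a persistent deviation travels upward, which keeps the center from drowning in routine data.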
What would have become of the socialist cybernetic experiment in Chile if Allende had continued to govern and a better technical communication infrastructure had been available? Nobody knows. It is, however, conceivable that if the questions about a cybernetically regulating politics raised by Deutsch in the previous section had fertilized the thinking of those involved in the Chilean cybernetics experiment, the experiment, assuming the coup could have been avoided, would have been guided into promising channels. Cybersyn, or Cybernetic Synergy, remains to this day, in which the prefix "cyber-" is attached to many digital development strategies, the only cybernetics experiment of its kind worldwide; no second attempt with today's considerably more advanced cybernetic tools has been made in any country. A repetition would be an even more exciting experiment, since it would be conducted not with a single IBM computer and data transmission via telex or radio connections, but with an Internet of Things, artificial intelligence, humanoid robotics, and so on. Technically speaking, worlds lie between the two approaches. But both experiments must, without distinction, struggle with the unexpected, with today's approach exposed to an incomparably higher and differently structured risk, owing to enormously increased complexity and dynamics, than at the time of Allende in the 1970s.
7.4.4 Cybernetics and Education—Learning Biology as an Opportunity

Maxim: Education is a human right!
A commentary on this states (https://de.wikipedia.org/wiki/Recht_auf_Bildung. Accessed on 18.02.2018): The right to education is a human right according to Article 26 of the Universal Declaration of Human Rights of the United Nations of December 10, 1948, and was further expanded in the sense of a cultural human right according to Article 13 of the International Covenant on Economic, Social and Cultural Rights (ICESCR). The right to education is also enshrined in Article 28 of the Convention on the Rights of the Child. Article 22 of the Geneva Refugee Convention prescribes access to public education, especially primary school education, for refugees as well. The right to education is considered an independent cultural human right and is a central instrument for promoting the realization of other human rights. It addresses the human claim to free access to education, equal opportunities, and the right to schooling. Education is important for a person's ability to stand up for their own rights and to engage in solidarity for the fundamental rights of others. This applies to everyone equally without discrimination on the grounds of race, color, sex, language, religion, political or other opinion, national or social origin, property, birth, or other status (Article 2.2 ICESCR). The Covenant was unanimously adopted by the UN General Assembly on December 19, 1966, and is a multilateral international treaty intended to guarantee compliance with economic, social, and cultural human rights.
Without going into detail on the many facets of education, we will focus in particular on the connection between education and cybernetics. There exists a series of statements, evaluations, discussions, and critiques on this topic, which are categorized under the terms cybernetic pedagogy and cybernetic didactics and are, among others, reflected in the following early works from the 1960s/1970s: Hentig (1965), Frank (1969), Weltner (1970), Cube (1971), Pongratz (1978). In the Lexicon of Psychology, Cybernetic Pedagogy and the aforementioned authors can be found (https://portal.hogrefe.com/dorsch/kybernetische-paedagogik/. Accessed on 18.02.2018): Consideration of pedagogical measures according to the control loop model of cybernetics. The cybernetic problem (information and control or communication processes in and between closed systems) is generally posed in educational science as a general pedagogical and subject-specific issue. In general pedagogy, educational styles are regarded as setpoint determinations for settings with ideological quality in the respective sociocultural frame of reference. If the primary function of the teacher is determined as a control function, the pedagogical constellation appears in its process character as fundamentally logical and calculable (Frank 1969; Weltner 1970). Attempts to establish a cybernetic didactics have particularly promoted programmed instruction, and the control loop scheme has also been adopted for curriculum research (Hentig 1965). Above all, however, teaching and learning processes have been developed in the direction of a redundancy theory of learning and didactics (Cube 1971). The advantage of the cybernetic approach lies in making various questions visible, while the disadvantage lies in its limited range in the human sciences. 
If this is not taken into account, cybernetic pedagogy moves into the vicinity of behaviorist ideology (Behaviorism) of the fundamental feasibility and calculability of human behavior and thus a technologization of educational and instructional processes. The consistent separation of structure and content of learning in redundancy theory overlooks the meaning-making performance of the learner and absolutizes the instrumentalization of learning, thus reducing educational theory to educational technology.
An information-theoretical-cybernetic didactics is defined as (https://service.zfl.uni-kl.de/wp/glossar/informationstheoretisch-kybernetische-didaktik. Accessed on 18.02.2018): The didactic understanding of information-theoretical-cybernetic didactics focuses on teaching and learning as a concrete method in the sense of technological feasibility. The goal is to achieve the greatest possible efficiency in the teaching and learning process for the purpose of optimization. A further development can be seen in the approach of cybernetic-constructive didactics.
The associated description states (ibid.): In cybernetic-information-theoretical didactics, didactics is conceived as a control loop in which the teaching objective is introduced from outside as a setpoint, and the teacher, as controller, orients didactic action toward it as a strategy. Persons and media are used as actuators to achieve the teaching objective. Dysfunctional events such
as conflicts are included as disturbances. The students represent the actual values in this process. In this sense, the teaching system and the learning system, as well as the resulting interactions, can be distinguished. This model expresses the renunciation of the problem of aims and of the critique of norms (societal or power critique). It is concerned solely with investigating how learning processes can be optimized. However, the model was able to prevail only in learning processes that are behaviorist, i.e., that convey small units of information, and it is thus particularly suited to PC learning programs or self-study.
In his book "On the Critique of Cybernetic Methodology in Pedagogy," published in 1978, with the main heading of Chapter III, On the Critique of System-Theoretical, Control-Theoretical, Automata-Theoretical, and Algorithm-Theoretical Approaches in Pedagogy, or: Aspects of the Cybernetic Escamotage of Human Freedom [escamotage = sleight of hand, conjuring trick, disappearing act, transl.]
– he critically examines various information-theoretical approaches, including the control-theoretical model of the teaching process by v. Cube. Pongratz discusses Cube's learning approach as follows (ibid., pp. 148–149): The system-theoretical analysis of the teaching situation reveals teaching clearly as a process of goal achievement in the sense of cybernetic control. Corresponding to the various subprocesses of control, five areas can be distinguished: goal area, controller function, actuating function, learning system, and sensor area. The function of the controller (in the terminology of system-theoretical didactics: the selection element) is usually assumed by the teacher in the concrete teaching situation. On the one hand, he designs the teaching strategy (depending on the given setpoint); on the other hand, in interaction with the learner, he acts as a sensor, checking the respective learning outcome (the actual value). The position of the controlled variable is taken by the learner, on whom the controller acts. The action takes place by means of the actuator. (In traditional terminology, the actuating device of the control process would roughly correspond to the teaching media.) […] If one wants to represent the control-theoretical context in a (simplified) diagram, the following control loop (according to v. Cube) results [see Fig. 7.19, author's note]: […] In analyzing teaching as a control process, cybernetic pedagogy realizes its goal of describing school learning as a process in which a measurable variable (the student) in a system to be controlled is brought to a desired setpoint (the learning objective) by an automatic device (the program), regardless of disturbances affecting the system. This corresponds to the concept of "didactics" as postulated by v.
Cube: “Didactics as a science investigates how the learning processes of a learning system can be initiated and controlled and how predetermined behavioral goals can be achieved in an optimal way.” […] [see Cube in Dohmen et al. 1970, pp. 219–242, d. A.] The specific contribution of cybernetics lies, on the one hand, in the automation and objectification of the controller function and, on the other hand, in the information- and algorithm-theoretical analysis of teaching strategies.
Fig. 7.19 Teaching as a control loop. (Source: Cube 1986, p. 49, quoted from Kron 1994, p. 151, supplemented by the author; see also Cube 1971). [Figure elements: setpoint: teaching goal; education strategy (teaching strategy); controller: educator/trainer as planner; actuator: people, media; controlled variable: addressee; sensor: learning control; actual value; negative feedback of the addressee's reactions; disturbance variables: internal and external influences. Base template: F. v. Cube (1986), supplemented by the author.]
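v. Cube's control-loop view of teaching can be caricatured in a few lines of code. All quantities below are invented for illustration and carry no empirical meaning; the point is structural: even this idealized loop, with a purely proportional "teaching strategy" and constant disturbances, settles below its setpoint, i.e., the learning objective is systematically missed.

```python
def teaching_loop(target=0.9, steps=20, controller_gain=0.6, disturbance=0.05):
    """Toy version of v. Cube's teaching control loop (cf. Fig. 7.19).
    mastery     : controlled variable (the addressee's learning state)
    target      : setpoint (teaching goal)
    error       : measured by the sensor (learning control)
    instruction : controller output, applied via the actuator (people, media)
    disturbance : constant internal/external influences
    All numbers are illustrative assumptions."""
    mastery = 0.0
    for _ in range(steps):
        error = target - mastery               # sensor: setpoint vs. actual value
        instruction = controller_gain * error  # controller: teaching strategy
        mastery += instruction - disturbance   # actuator effect minus disturbance
    return mastery
```

With these arbitrary values the loop settles near 0.82 rather than at the setpoint 0.9: a residual error remains, a small formal echo of Pongratz's point that the controllability of learning is easily overstated.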
Pongratz expresses criticism of system- and automata-theoretical approaches in pedagogy (ibid., p. 156): The cybernetic approaches in didactics have to face the question to what extent they can still take into account the spontaneity and self-activity of students and teachers within their theoretical and practical concept, to what extent they do not merely pay lip service to human reflexivity and autonomy, but preserve the idea of human freedom and promote its concrete individual and social realization.
In general, education is for people a lifelong endeavor, a pursuit of education and more education and still more education, within the possibilities that a person possesses and/or that are offered to him or her. This leads directly to a traditional Swedish proverb in which, alongside the fellow student and the teacher, the surrounding space is called the third teacher (Hlebaina 2015). A cybernetically oriented, holistic educational theory must therefore be significantly expanded compared to v. Cube's model and Pongratz's critique, as outlined in Fig. 7.20.
Fig. 7.20 Extended cybernetic teaching model. (Source: after Cube 1986, p. 49, supplemented by the author). [Compared with Fig. 7.19, the extended model adds the learner's initiative and spontaneity and the spatial learning environment as further elements acting on the control loop of setpoint (teaching goal), teaching strategy, controller (educator/trainer as planner), actuator (people, media), controlled variable (addressee), sensor (learning control), actual value, negative feedback, and disturbance variables (internal and external influences). Base template: F. v. Cube (1986), supplemented by the author.]
Without straying too far from the core of the chapter “Cybernetics and Education,” it is nevertheless appropriate, not least because of the human right to education, to refer to one of the most critical minds and advocates for a new school, which he realized with the “Glockseeschule” in Hanover in 1972: Oskar Negt. In his polemic from 2013 “Philosophy of the Upright Walk,” he speaks openly, as so often, about the grievances—this time in the education sector. In his review of the book, Schnurer (2014) writes: He raises the warning finger when individual and societal developments threaten to get out of hand, when egoisms begin to rule over collectivisms, when capitalist surplus value mentalities unhinge social justice.
Schnurer describes two challenges for the new school according to Negt with embedded quotes from Negt (2013): One could correspond to the spirit of the times and express that the social institution is as it is and cannot be changed; so it should not be changed either! This view is conveniently held by people who consider the selection instrument of school as a proven means, “in which social polarization is continued and cemented, and children are sorted as early as possible according to future winners and potential losers.”
Another interpretation could be that school should be a living and experiential space for humanity, equality, and social, societal competence, in which the human right to education is really taken seriously. However, such an institution does not arise with traditionalist, conservative, and egoistic insistence that “the conditions are as they are,” but through changes and real transformation processes, with the awareness: “In trying lies the true idealism” (Ludwig Marcuse). The former Swiss manager and later human rights and environmental activist Hans A. Pestalozzi (1929–2004) gave the right answer with a “positive subversion”: Where would we go if everyone said, where would we go, and nobody went to see where we would go if we went (After us the future, Bern 1979).
To reconnect with the previously described didactics and cybernetics, Schnurer writes about Negt (Schnurer 2014): Interest should also be aroused by the experiences that Oskar Negt conveys on the question of “experimental school and regular school,” especially when it comes to the transferability of experiences, concepts, and didactic-methodological instruments. Significant and indispensable in this context is the reference that school learning, education, and upbringing must never be neutral, i.e., non-binding, but must contain goals, such as those pursued at the Glocksee School: “Establishing context in learning, increasing people’s ability to be autonomous, overcoming prejudices, courage, tolerance, patience in negotiating compromises …”
As a conclusion of the review, it follows (ibid.): “It cannot go on like this”; with this full stop! Oskar Negt points out that education, upbringing, and school learning must not be understood as a detached area in capitalist, neoliberal, and economic utility thinking, in which business management premises take precedence over learning the “upright walk.” It requires the mediation, testing, and experiencing of a societal and political interest in knowledge, “which consciously gains judgment and knowledge from the historically unused liberation potentials of people.”
Oskar Negt’s approach to a new school or a new way of learning is still far ahead of his and our time. This applies not only to the lower school segments but also extends to university education. The latter is, among other things, striving to enter into cooperation with industrial companies due to a lack of politically meaningful support, which is precisely in line with the neoliberal and economic utility thinking that has no place in a new school, whatever its design and level of education. The increasing digitalization leads to another path of education, also moving away from Negt’s new school. Knowledge transfer via internet channels, as is increasingly being carried out in university areas of state and private providers, primarily has neoliberal and economic utility thinking in mind. Neither the close physical communication of the “students” in a seminar room nor the much-needed holistic educational perspective in our progressively fragmented society is taken into account. What counts is the dominant educational currency of profit maximization; what is considered a necessary evil is the mandate of education. Both private educational providers and state educational institutions are by no means excluded from this—on the contrary. The primary mission
of private entrepreneurs in education is to make a profit; the mission of state educational institutions, linked to state funding, is—at least in recent years in our country—to reduce debt, even at the expense of education. Another approach to cybernetics and education is the “learning biology according to Vester” (1989). In his 1976 essay “Chance Learning Biology. Crisis management through proper learning” (Vester 1989, pp. 44–73), Vester explains that in the highly networked environment for all living beings, information exchange, processing, and learning occupy an important place for survival. From a cited study on the inadequate recognition of crisis signs of our civilization (ibid., pp. 45–46), Vester concludes: In examining the reasons for the inability to understand the situation of our industrial society, to recognize certain specific relationships in the human/environment relationship, we encounter the peculiar learning forms of our school and their far-reaching historical roots. The detachment of the intellectual from the physical, the extraction of humans from our environment in the same direction, took on ever more extreme forms with the development of the school system. […] Learning without the involvement of the organism and thus without the inclusion of the environment is unnatural and uneconomical.
This is seen quite differently, for example, by independent educational institutions, which the author came to know comprehensively during several years of activity as a director of studies. Learning in a home environment via internet portals, video presentations, etc. is convenient and cost-saving for some students. Why should this not be supported, especially since the educational institution also saves considerable costs, e.g., for providing and servicing in-house infrastructure? Yet these are very short-sighted and highly questionable approaches. From this type of educational practice, students gain nothing of the much-needed physical and psychological relationship structures between themselves and other learners, or between learners and their holistic learning environment. Moreover, according to Vester, the most promising guarantee of efficient learning is achieved in a relaxed state through play; this, however, presumably does not mean conducting internet games in education from couch to couch of the learners. For Vester, the learning dilemma, no less present today, lies even deeper: "Learning forms of our schools and universities […] [have] become alienated not only from the reality of our living space as a complex system […] but also from the reality of our social relationships." (ibid., p. 56) The digitalization of education shows not only a certain usefulness for learning but also, more frighteningly, effects on social competence. In view of the looming global crises, with significant negative consequences for nature, environment, society, and people, there can be no "business as usual" according to seemingly proven educational concepts. Educational institutions and politicians are called upon to halt the trend toward a lack of holism in education and to turn increasingly to solutions for tasks in training and further education that come closer to reality.
This does not exclude subject-specific educational content; it is only integrated into an overall context.
Analogies to nature and "its education" are always instructive. Individuals are both highly specialized experts and parts of a whole of their kind. They survive, on the one hand, through specific achievements, e.g., individual defensive behavior against competitors, camouflage properties, high-quality material-composite properties, and other principles. On the other hand, they know how to protect themselves from enemies in the collective species association through holistic principles, e.g., swarm formation or networked communication in dangerous situations, such as chemical signal transmission between trees. There are countless networked control-loop functions with negative feedback that allow species to survive, each in its own special way. Humans are the only species that does not adhere to this game of life, accepting in the process harmful consequences of the greatest extent for themselves and other living beings. A cybernetics that is not limited to technical control functions, and an education that understands how to link specialized knowledge with a holistic view, could help pave a new way toward advantageous solutions to our problems in education, in general and in particular.
7.4.5 Cybernetics and the Military

With cybernetics, which attracted the attention of the US military as early as Norbert Wiener’s time in the 1940s, in connection with communications technology and ballistics (Rid 2016, pp. 65–99), we also want to conclude this treatise on cybernetic systems. Since the search for suitable control and regulation processes for the ballistic maneuvers of projectiles and the flight maneuvers of pilots and aircraft, associated in particular with the dynamic targeting of moving flying objects, and since the discovery of the decisive advantage, negative feedback, extraordinary changes have taken place in the human-machine system through innovative techniques and processes. Cybernetic control processes for various military applications have long since been incorporated into the technology of weapons and warfare. The Boston Dynamics group develops humanoids like Atlas, which are extremely agile (they run, jump, fall, and get up again) and which are of great interest to the military due to their cybernetic control processes linked with artificial intelligence. At the annual competition organized by DARPA, the US Defense Advanced Research Projects Agency, the best robots are determined through various functional tests of their suitability for everyday tasks. Without doubt, potential deployments for military purposes are also being pursued. Another field for cybernetically controlled functions are drones of any shape and performance class, which can be remotely controlled over thousands of kilometers, or which intervene in warfare completely autonomously, also as a swarm. The MQ-9 Reaper (“Grim Reaper”) by General Atomics, the likewise American RQ-4 Global Hawk by Northrop Grumman, the combat drone of type HERON TP produced in
Israel by the Malat (UAV = Unmanned Aerial Vehicle) Division of Israel Aerospace Industries, which Germany is considering for purchase, the Russian drone RSK MiG Skat (“Stingray”) from the manufacturer Mikoyan-Gurevich/Sukhoi, as well as the stealth drone Lijian (“sharp sword”) produced by the Chinese company Shenyang, among others, are examples of how cybernetics, in conjunction with artificial intelligence, has been shaping military events on Earth for years and will do so even more in the future. Alongside drones in the military focus, there are drones as multi- or quadrocopters in countless variants for civilian purposes, whether for construction monitoring, traffic control, hazard detection, in the film industry, as children’s toys, or as flying objects in the form of so-called “air taxis” for passenger transport, and much more. Many problems related to ethical aspects in the military sector or to the legal protection of individual privacy, e.g., in the case of civilian drone surveillance, have not yet been legally clarified. Here, politics is only just beginning to recognize that conventional monocausal views do not lead any further in solving the new problems of digitization and “unmanned objects”. We are still at the very beginning of a new type of traffic control: on land, in the air, and, in the not too distant future, certainly also in the water.
7.5 Control Questions

Q 7.1 Sketch and describe the model of a cybernetic control loop “blood sugar”.
Q 7.2 Sketch and describe the model of a cybernetic control loop “pupils”.
Q 7.3 Sketch and describe the model of a cybernetic control loop “camera image sharpness”.
Q 7.4 Sketch and describe the model of a cybernetic control loop “position control of the read/write head in a computer hard disk drive”.
Q 7.5 Sketch and describe the model of a cybernetic control loop “power steering in a motor vehicle”.
Q 7.6 Sketch and describe the model of a cybernetic control loop “room and hot water temperature”.
Q 7.7 Describe the term “economic cybernetics”.
Q 7.8 Sketch and describe the five functional blocks of a cybernetic model for the simulative quantification of risk consequences in complex process chains. Highlight the path of negative feedback in the sketch. Assign the five blocks to the three risk areas.
Q 7.9 Explain the concept of sociocybernetics.
Q 7.10 Explain the concept of psychological cybernetics.
Q 7.11 What do you understand by biocybernetically oriented behavioral physiology?
Q 7.12 How can organismic behavior be captured in four consecutive steps according to Kalveram?
Q 7.13 Sketch and describe the block diagram of an abstract automaton. Supplement the picture by representing a human-environment relationship in block form as a cybernetic system of two automata.
Q 7.14 Describe in your own words, following the template, the course of Beer’s cybernetic experiment in Allende’s Chile in the 1970s. How do you personally view the application of such a cybernetic experiment to a society?
Q 7.15 What do you understand by “information-theoretic-cybernetic didactics”?
Q 7.16 Sketch and describe the process in the cybernetic educational control loop according to v. Cube. What are the dominant criticisms raised by Pongratz against v. Cube’s cybernetic educational model?
References

Ashby WR (1985) Einführung in die Kybernetik. Suhrkamp, Berlin
Beer S (1970) Kybernetik und Management. Fischer, Frankfurt am Main
Beer S (1981) Brain of the firm – the managerial cybernetics of organization. Wiley, Chichester
Beer S (1994) Cybernetics of National Development (evolved from work in Chile). In: Harnden R, Leonard A (Eds) How many grapes went into the wine – Stafford Beer on the art and science of holistic management. Wiley, Chichester
Beer S (1995) Diagnosing the system for organizations. Wiley, New York
Begon ME et al (1998) Ökologie. Spektrum Akademischer Verlag, Heidelberg
Bischof N (1968) Kybernetik in Biologie und Psychologie. In: Moser S (Ed) Information und Kommunikation. Referate und Berichte der 23. Internationalen Hochschulwochen Alpbach 1967. R. Oldenbourg, München/Wien, pp 63–72
Bischof N (1969) Hat Kybernetik etwas mit Psychologie zu tun? Psychologische Rundschau, vol XX. Vandenhoeck & Ruprecht, Göttingen, pp 237–256
Bossel H (2004) Systemzoo 2: Klima, Ökosysteme und Ressourcen. Books on Demand GmbH, Norderstedt
Boysen W (2011) Kybernetisches Denken und Handeln in der Unternehmenspraxis. Komplexes Systemverhalten besser verstehen und beeinflussen. Springer Gabler, Wiesbaden
Coy W (2004) Zum Streit der Fakultäten. Kybernetik und Informatik als wissenschaftliche Disziplinen. In: Pias C (Ed) Cybernetics – Kybernetik. The Macy-Conferences 1946–1953, Essays und Dokumente, Bd II. Diaphanes, Zürich/Berlin, pp 253–262
Crutzen PJ, Stoermer EF (2000) The “Anthropocene”. The International Geosphere–Biosphere Programme (IGBP) Newsletter No. 41, May, pp 17–18
von Cube F (1971) Kybernetische Grundlagen des Lernens und Lehrens. Klett, Stuttgart
von Cube F (1986) Fordern statt Verwöhnen. Piper, München
Deleuze G (1992) Foucault. Suhrkamp, Frankfurt am Main
Deutsch KW (1969) Politische Kybernetik. Modelle und Perspektiven. Rombach, Freiburg im Breisgau (Original (1963): The nerves of government: models of political communication and control.
Reprint in: Current Contents, This week’s Citation Classics, Number 19, May 12, 1986)
Dodd SC (1952) Testing message diffusion from person to person. Public Opin Q 16(2):247–262
Dohmen G, Maurer F, Popp W (Eds) (1970) Unterrichtsforschung und didaktische Theorie. Piper, München
Drucker PF (1998) Die Praxis des Management. Econ, Düsseldorf
Espejo R, Reyes A (2011) Organizational systems: managing complexity with the viable system model. Springer, Wiesbaden
Espinoza A, Walker J (2013) Complexity management in practice: a viable system model intervention in an Irish eco-community. Eur J Oper Res 225:118–129
Espinoza A, Harnden R, Walker J (2008) A complexity approach to sustainability – Stafford Beer revisited. Eur J Oper Res 187:636–651
Etzioni A (1968) The active society. A theory of social and political processes. Collier-Macmillan, London
Faller H, Kerbusk S, Tatje C (2017) Das Bundesdieselamt. Die Zeit 32:21–22
Frank H (1969) Kybernetische Grundlagen der Pädagogik, 2 Bde. Agis, Baden-Baden
Friedmann J, Hengstenberg M, Knaup H, Traufetter G, Weyrasta J (2017) Der Mogel-Pakt. Der Spiegel 28:24–27
Hanhimaki JM (2004) The flawed architect: Henry Kissinger and American foreign policy. Oxford University Press, Oxford
Hassenstein B (1967) Biologische Kybernetik, 3rd edn. VEB Gustav Fischer, Jena
von Hentig H (1965) Die Schule im Regelkreis. Klett, Stuttgart
Hlebaina EM (2015) Der Raum als dritter Lehrer. AV Akademieverlag, Saarbrücken
Hopcroft JE, Motwani R, Ullman JD (2011) Einführung in die Automatentheorie. Pearson, München
Hummel D, Kluge T (2004) Sozial-ökologische Regulationen. netWORKS-Papers, Heft 9, Forschungsverbund netWORKS. Deutsches Institut für Urbanistik, Berlin
Jeschke S, Schmitt R, Dröge A (2015) Exploring Cybernetics. Kybernetik im interdisziplinären Diskurs. Springer, Wiesbaden
Kalveram KT (2011) Psychologische Kybernetik: Anwendung der Systemtheorie zur Erklärung menschlichen Verhaltens. Vortragsunterlagen aus Ringvorlesung Technische Kybernetik am 26.1.2011, TU Ilmenau
Klaus G (1964) Kybernetik und Gesellschaft. VEB Dt. Verl. d. Wissenschaften, Berlin
Krause B (2013) Der Einfluss der Kybernetik auf die psychologische Forschungsmethodik. Leibniz Online, Nr. 15, Zeitschrift der Leibniz-Sozietät e. V., Berlin. http://leibnizsozietaet.de/wpcontent/uploads/2013/04/bkrause.pdf. Accessed 22 Febr 2018
Krause F, Schmidt J, Schweitzer C (2014) Der kybernetische Regelkreis als Managementinstrument im Anlagenlebenszyklus. In: Kühne S, Jarosch-Mitko M, Ansorge B (Eds) EUMONIS – Software- und Systemplattform für Energie- und Umweltmonitoringsysteme, Bd XLIV. InfAI, Institut für Angewandte Informatik e. V., Universität Leipzig, Leipzig, pp 25–36
Kron FW (1994) Grundwissen Didaktik. UTB, München
Küppers EWU (2018) Die humanoide Herausforderung. Leben und Existenz in einer anthropozänen Zukunft. Springer, Wiesbaden
Küppers J-P, Küppers EWU (2016) Hochachtsamkeit. Über unsere Grenze des Ressortdenkens. Springer Fachmedien, Wiesbaden
Lambertz M (2016) Freiheit und Verantwortung für intelligente Organisationen. Eigenverlag, Düsseldorf, ISBN 978-3-00-052559-9
Langer E (2014) Mindfulness, 25th Anniversary Edition. Merloyd Lawrence Book, Philadelphia
Larraín F, Meller P (1991) The Social-populist Chilean Experiment, 1970–1973. In: Dornbusch R, Edwards S (Eds) The Macroeconomics of Populism in Latin America. The University of Chicago Press, Chicago, pp 175–221
Lazarsfeld PF, Berelson B, Gaudet H (1969) Wahlen und Wähler. Soziologie des Wahlverhaltens. Luchterhand, Neuwied
Mann H, Schiffelgen H, Froriep R (2009) Einführung in die Regelungstechnik, 11. neu bearbeitete Aufl. Hanser, München
Medina E (2006) Designing freedom, regulating a Nation: Socialist Cybernetics in Allende’s Chile. J Lat Am Stud 38:571–606
Medina E (2011) Cybernetic revolutionaries. The MIT Press, Cambridge, MA
Negt O (2013) Philosophie des aufrechten Gangs. Steidl, Göttingen
Odum EP (1999) Prinzipien der Ökologie. Spektrum der Wissenschaft, Heidelberg
Osman Y, Menzel S (2017) Im Visier der Finanzaufsicht. Handelsblatt Nr. 151:1, 8, 12
Pestalozzi HA (1979) Nach uns die Zukunft – Von der positiven Subversion. Bertelsmann, Lizenzausgabe, Gütersloh
Pias C (2003) Zeit der Kybernetik – Eine Einstimmung. https://www.leuphana.de/fileadmin/user_upload/PERSONALPAGES/_pqr/pias_claus/files/herausgaben/2003_Cybernetics-Kybernetik_Einleitung.pdf. Accessed 25 Febr 2018
Pias C (2005) Der Auftrag. Kybernetik und Revolution in Chile. In: Gethmann D, Stauff M (Eds) Politiken der Medien. Diaphanes, Berlin/Zürich, pp 131–154
Pongratz LJ (1978) Zur Kritik kybernetischer Methodologie in der Pädagogik. Europäische Hochschulschriften. Lang, Frankfurt am Main
Printz S, von Cube P, Vossen R, Schmitt R (2015) Ein kybernetisches Modell beschaffungsinduzierter Störgrößen. In: Jeschke et al (Eds) Exploring Cybernetics. Kybernetik im interdisziplinären Diskurs. Springer, Wiesbaden, pp 237–262
Rid T (2016) Maschinendämmerung. Eine kurze Geschichte der Kybernetik. Propyläen/Ullstein, Berlin
Rivière P (2010) Der Staat als Maschine. Das Kybernetik-Experiment in Allendes Chile. Le Monde diplomatique (deutsche Ausgabe), 12.11.2010, p 19
Röhler R (1974) Biologische Kybernetik. Teubner, Stuttgart
Ropohl G (2012) Allgemeine Systemtheorie. Einführung in transdisziplinäres Denken. Edition sigma, Berlin
Rüegg-Stürm J, Grand S (2015) Das St. Galler Management-Modell. Haupt, Bern
Schnurer J (2014) Rezension vom 26.6.2014 zu: Oskar Negt: Philosophie des aufrechten Gangs. Streitschrift für eine neue Schule. Steidl (Göttingen) 2013. ISBN 978-3-86930-758-9. In: socialnet Rezensionen, ISSN 2190-9245. https://www.socialnet.de/rezensionen/16273.php. Accessed 22 Febr 2018
Seibel B (2016) Cybernetic Government. In der Reihe: Haubl et al (Eds) Frankfurter Beiträge zur Soziologie und Sozialpsychologie. Springer, Wiesbaden
Senghaas D (1966) Kybernetik und Politikwissenschaft. Ein Überblick. In: Politische Vierteljahresschrift, vol VII. Westdeutscher Verlag, Köln/Opladen, pp 252–276
Steinbuch K (1968) Überlegungen zu einem hypothetischen cognitiven System. In: Moser S (Ed) Information und Kommunikation. Oldenbourg, München, pp 53–62
Strina G (2005) Zur Messbarkeit nicht-quantitativer Größen im Rahmen unternehmenskybernetischer Prozesse. Habilitationsschrift, RWTH Aachen University, Aachen
Vester F (1989) Leitmotiv vernetztes Denken. Heyne, München
Weltner K (1970) Informationstheorie und Erziehungswissenschaft. Schnelle, Quickborn
Wiener N (1952) Mensch und Menschmaschine. Alfred Metzner, Frankfurt am Main/Berlin
Wiener N (1963) Kybernetik. Regelung und Nachrichtenübertragung in Lebewesen und in der Maschine (Original: 1948/1961, Cybernetics or control and communication in the animal and the machine), 2., erw. Aufl. Econ, Düsseldorf/Wien
8
Control Questions (Q N.N) With Sample Answers (A N.N) For Chapters 2 to 7
Summary
With a special focus on the origin and way of thinking of cybernetics, Chap. 2 introduces the topic of circular thinking inherent in cybernetics. Starting from the central question “What is cybernetics and what is not cybernetics,” with related practical examples, you will be confronted with numerous definitions of cybernetics, all derived from their respective fields of application. Finally, special attention is paid to “Systemic and cybernetic thinking” in six circular steps.
8.1 Chap. 2: A Special Look at the Origin and Way of Thinking of Cybernetics

Q 2.1
Describe the historical origin of the word cybernetics according to Karl Steinbuch.
A 2.1
In “Automat und Mensch” (Steinbuch 1965, pp. 322–323), Steinbuch formulated the origin of the term cybernetics: “First, let us briefly consider the historical origin of the word ‘cybernetics’: Plato (427 to 347 BC) used the word κυβερνητική (kybernetike) in the sense of the art of control. In Plutarch (50 to 125 AD), the helmsman of the ship is called κυβερνήτης (kybernetes). In Catholic church terminology, κυβέρνησις (kybernesis) refers to the management of the church office. It should also be noted that the French ‘gouverneur’ and the English ‘to govern’, i.e., to rule, are etymologically related to cybernetics. In 1834, Ampère referred to the science of possible methods of government as ‘cybernétique’ in his ‘Essai sur la philosophie des sciences’. In the last generation, the term was mainly popularized by Norbert Wiener with his book ‘Cybernetics’ [Original 1948, German 1963, d. A.].”
Q 2.2
What do you understand by Cybernetic Anthropology?
A 2.2
Cybernetic Anthropology is understood as a cognitive science field that combines anthropology (the science of humans) and cybernetics “with a technology-induced theory formation.”
Q 2.3
What is cybernetics and what is it not?
A 2.3
Cybernetics is not a single science. Cybernetics is a meta-science capable of contributing to advances in natural, engineering, and social scientific disciplines.
Q 2.4
Describe the three perspectives of an interested citizen, an engineer, and a cybernetician on a robot.
A 2.4
See Fig. 2.1:
Interested citizen: Helper in everyday life? Labor-saving?
Engineer: Energy consumption? Joint technology? Axes of motion?
Cybernetician: Control loop of collaborating worker, programmer, environment?
Q 2.5
Describe the three perspectives of an interested citizen, an engineer, and a cybernetician on a passenger car.
A 2.5
See Fig. 2.2:
Interested citizen: Electricity costs? Driving distance? Price?
Engineer: Energy capacity? Drive axle? Chassis structure?
Cybernetician: Control loop of driver, passenger car, and environment?
Q 2.6
Describe and sketch in detail the control loop function of an autonomously driving passenger car.
A 2.6
See Fig. 2.3
Q 2.7
Describe and sketch in detail the double control loop function of an autonomous driver-passenger car system.
A 2.7
See Fig. 2.4
Q 2.8
The definition domain of cybernetics provides various explanations for what is understood by cybernetics. 1. Name the listed 12 explanations. 2. To which persons can the explanations be attributed?
A 2.8
For 1. and 2. see Sect. 2.1, among others:
“Theory of communication and control and regulation processes in machines and living organisms.” (Norbert Wiener)
Cybernetics (is) the “general formal science of machines.” (W. Ross Ashby)
“Cybernetics” is a science that enables us “systematically to achieve any arbitrary goal, and thus any political goal as well.” (Albert Ducrocq)
“Cybernetics is the general, formal science of the structure, relations, and behavior of dynamic systems.” (Hans-Joachim Flechtner)
“Cybernetics is the science of communication and control.” (Stafford Beer)
“Cybernetics” is understood on the one hand as a collection of certain thought models (of control, message transmission, and message processing) and on the other hand as their application in the technical and non-technical field. (Karl Steinbuch)
“Cybernetics is the theory of the connection of possible dynamic self-regulating systems with their subsystems.” (Georg Klaus)
Q 2.9
Describe the prefix “Cyber …”. Name the meanings and who they originate from.
A 2.9
See Sect. 2.1; among others, according to Rid (2016, p. 9), “Cyber …” is a “chameleon” because it is interpreted differently from different perspectives or application contexts. What one or the other understands by “Cyber …” is listed below as examples:
Politicians: Power outages
Intelligence agencies: Spies, hackers
Bankers: Security breaches, data manipulation
Inventors: Visions
Romantic internet activists: Freedom, space beyond any control
Young people: Video chat, sex
Q 2.10
Outline the six steps of cybernetic thinking in a circular process according to Probst.
A 2.10
Step 1: System delimitation
Step 2: Part and whole
Step 3: Network of effects
Step 4: Structure and behavior
Step 5: Control and development
Step 6: Perception (or the cybernetics of cybernetics)
Q 2.11
According to Probst, six relevant characteristics are essential for examining and modeling a complex system, its parts, and its entirety. What are they?
A 2.11
“1. What relationships exist between the parts; how are they connected? What behavioral possibilities does a system contain, or which behavioral possibilities are excluded? What limits, restrictions, and tolerance limits exist for the individual elements, subsystems, and the whole? 2. Which parts (subsystems) form meaningful units in turn? What new properties does an integrated whole have from its parts? 3. At which level are we interested in which details? Are (sub-)systems to be further resolved or is a black-box view sufficient? [Emphasis by the author]. 4. Attempt to see through networks (tangle), to do justice to complexity; inclusion of diversity, multitude of dynamics, adaptability; prevention of unnecessary reductionism; acceptance of not being able to know everything. 5. Consciously dissolve and assemble the system without losing sight of the whole; the whole is something different from the sum of its parts, it belongs to a different category. 6. Constant awareness of the level of thinking and acting is necessary; conscious work on different levels of abstraction.” (Probst 1987, p. 32)
Q 2.12
Probst mentions seven types of cybernetic thinking for the control and development of cybernetic systems. Name these and argue their goals and purposes.
A 2.12
“1. Thinking in models: The goal is to form and explore control models for specific systems or situations. 2. Thinking in different disciplines: Knowledge about control mechanisms is drawn from various disciplines. 3. Thinking in analogies: Systems depicted under the control aspect become comparable and applicable as useful analog models. 4. Thinking in control loops: Instead of linear causality, thinking in circular causalities, in networks, takes place. 5. Thinking in the context of information and communication: Information is placed on an equal footing with energy and matter and becomes the basis for control. 6. Thinking in the context of complexity management: Complexity is not reduced or bypassed but accepted in the sense of the law of variety. 7. Thinking in ordering processes: Control structures determine the complexity of an order and vice versa. Organized order can always only be of low complexity, self-organized order can be of high complexity.” (Probst 1987, p. 41)
Q 2.13
How do we perceive and explore systems? Six criteria are listed for this. Name and argue them.
A 2.13
“1. What knowledge should reasonably be included in a context? Are there alternative perspectives for meaningfully perceiving a context? 2. How do we perceive structures and behavior? Where are the limits of human perception? What can we not know? Is the system aware of the possible behaviors, the systemic connections (self-reflection)? 3. What do we want with our model building/observation? Does the model we construct “fit” our intentions? Does it fulfill its purpose? 4. Depending on how we perceive the model in a particular situation, we act; different constructions of reality are possible; the observer is part of the observed system (second-order observer); we are responsible for our thinking, knowledge, and actions. 5. Perception is holistic, but we do not see the whole; it depends on experiences, expectations, etc.; it is selective; it is structure-determined; a complete explanation of complex phenomena is not possible. 6. The awareness of the purpose of observation and the peculiarities of the observer is essential. Models fit or do not fit, they are not the image of an objective reality.” (Probst 1987, p. 45)
Q 2.14
What is the general misunderstanding regarding the “cybernetic perception curve” in humans?
A 2.14
The great misunderstanding, not to say the persistent and progress-hostile error, is that humans, against their better knowledge, more or less reject the fundamental cybernetic laws of nature and, with them, their own existential progress. With their short-term development strategies, they produce catastrophes that reach the limits of the viability of our planet, some of which have already been exceeded. It is the often-cited monocausal and shortsighted thinking and acting that opposes sustainable networked thinking and acting and thus the unconditional strengthening of the cybernetic perception and learning curve.
Q 2.15
Describe in your own words what is meant by “cybernetics of cybernetics.” What term is often used instead of “cybernetics of cybernetics”? To whom does this term trace back?
A 2.15
Probst describes the perception of problems as follows (Probst 1987, p. 43): “There is disciplinary knowledge, but there are no disciplinary contexts.” […] “Problems” must therefore be seen or, provocatively put: “problems” must first be invented. We still act as if reality or real problems could be clearly assigned to certain disciplines, or as if complex situations could be designed and controlled in such a way that tasks could be given independently of each other. Cybernetics of cybernetics is also known under the term “second-order cybernetics,” in which an observer of a situation is observed by another observer. This “second-order cybernetics” traces back to Heinz von Foerster.
8.2 Chap. 3: Basic Concepts and Language of Cybernetics

Q 3.1
What is a black-box model and what is its counterpart?
A 3.1
In cybernetics or systems engineering, the black box is a form of representation. The black box is a system whose internal structure and processes can only be inferred by measuring the external behavior of input and output variables. The counterpart of the black box is the white box. It is transparent, allowing the processes within the system to be observed or measured.
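The black-box principle can be sketched in a few lines of Python. This is an invented toy example, not from the book: the observer may only feed inputs into a hidden system and record outputs, and infers an assumed internal rule from these input/output pairs alone.

```python
# Toy illustration of the black-box principle: the experimenter may only
# call `probe` with inputs and observe outputs; the internal rule stays
# hidden. From input/output pairs alone, an assumed linear model is inferred.

def make_black_box():
    """Return a system whose internal structure is hidden from the observer."""
    def probe(x):
        # Hidden internal rule, unknown to the experimenter.
        return 3 * x + 2
    return probe

def identify_linear(probe):
    """Infer slope and offset of a black box assumed to be linear,
    using only externally measured input/output behavior."""
    y0, y1 = probe(0), probe(1)
    return y1 - y0, y0          # (slope, offset)

box = make_black_box()
slope, offset = identify_linear(box)
print(f"inferred model: y = {slope}*x + {offset}")  # inferred model: y = 3*x + 2
```

A white box, by contrast, would expose the internals of `probe` directly, so no measurement of external behavior would be needed.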
Q 3.2
Sketch and explain a Black Box as an information-processing system in general representation and as a Black-Box-Human. For the latter, explicitly describe at least four inputs and four outputs.
A 3.2
For the sketch, see Figs. 3.1 and 3.2. B-B-Human-Input: seeing, hearing, smelling, touching, tasting, etc. B-B-Human-Output: language, writing, gestures, facial expressions, etc.
Q 3.3
Sketch and explain three different time behaviors of control systems. What is a significant problem in controller design?
A 3.3
For the sketch, see Fig. 3.3. A significant problem in the design of controllers is to keep the control deviation from the setpoint or reference variable as low as possible despite changing disturbances. Theoretically and practically, this can be tested by specifying a step function of the reference variable (in Fig. 3.3 this is w(t)), which switches the value from w(t) = 0 to w(t) = 1. The time lag of the controlled variable x(t) in response to the step function w(t) is coupled with the inevitable transit time in the controller and controlled path, which leads to a “dead time” before the controlled variable can react. Fig. 3.3 shows under A the qualitative time course of a controlled variable for a stable control system, in which the course of the controlled variable leads to a targeted minimization of the control deviation (xe − w1). Under B, the time course of the controlled variable, with multiple overshoots and undershoots around the setpoint, tends toward less stable behavior of the control system than under A. The sketch under C, finally, is the expression of a completely unstable control system.
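The described step-response behavior can be reproduced in a small simulation. This is a sketch with invented parameters (gain, dead time, time constant; none of these values come from the book): a moderate controller gain gives the stable course of case A, while a much larger gain drives the loop into the unstable behavior of case C.

```python
# Sketch (invented parameters): step response of a first-order controlled
# path under a proportional controller with dead time in the loop.
from collections import deque

def step_response(gain, dead_steps=3, tau=5.0, dt=0.1, n=600):
    w = 1.0                              # reference variable: step from 0 to 1
    x = 0.0                              # controlled variable x(t)
    delayed = deque([0.0] * dead_steps)  # dead time: controller output queue
    for _ in range(n):
        e = w - x                        # control deviation
        delayed.append(gain * e)
        u = delayed.popleft()            # manipulated variable, delayed
        x += dt * (u - x) / tau          # first-order lag of the path
    return x

print(step_response(gain=2.0))   # ≈ 0.667: settles with a steady-state
                                 #   deviation, stable course (case A)
print(step_response(gain=80.0))  # huge magnitude: growing oscillation (case C)
```

The dead time is modeled by the queue: the controller’s output reaches the controlled path only a few steps later, and it is exactly this transit time that limits the admissible gain.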
Q 3.4
Sketch and explain the difference between control, regulation, and optimization (adjustment).
A 3.4
For the sketch, see Fig. 3.4. Control is based on an open-loop process characterized by the control chain of sequentially connected control elements. In contrast, regulation takes place in a closed-loop process. The reference variable, which specifies the setpoint or the goal of the regulation, is given from the outside, while the control system autonomously changes its behavior in such a way that the setpoint is reached. Optimization (adjustment) refers to a control process that tends towards a balance between the system and the environment, with the setpoint being developed by the adjustment-oriented control process itself and serving as the starting point for subsequent control processes.
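The distinction between control (open loop) and regulation (closed loop) can be made concrete in a numerical sketch with invented values (a setpoint of 20 and a constant disturbance; not from the book). The open control chain applies a fixed actuating value and cannot react to the disturbance; the closed loop feeds the deviation back and largely compensates it.

```python
# Sketch (invented values): open-loop control vs. closed-loop regulation of a
# "temperature" subject to a constant disturbance (e.g., heat loss).

def open_loop(setpoint=20.0, disturbance=-0.05, steps=1000):
    temp = 0.0
    u = setpoint                      # actuating value fixed in advance
    for _ in range(steps):
        temp += 0.01 * (u - temp) + disturbance
    return temp

def closed_loop(setpoint=20.0, disturbance=-0.05, steps=1000, gain=50.0):
    temp = 0.0
    for _ in range(steps):
        u = gain * (setpoint - temp)  # deviation is fed back continuously
        temp += 0.01 * (u - temp) + disturbance
    return temp

print(round(open_loop(), 1))    # 15.0: the disturbance shows up in full
print(round(closed_loop(), 1))  # 19.5: the deviation is largely compensated
```

A third variant, optimization (adjustment), would additionally let the setpoint itself evolve from the process instead of being fixed from outside.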
Q 3.5
Sketch and explain a cascade control. Name three typical use cases.
A 3.5
For the sketch, see Fig. 3.5. A cascade control involves several controllers, with the associated control processes interconnected. Cascade controllers are set from the inside out, meaning: First, disturbances in the control path are compensated for in the inner control loop, which is fed an auxiliary control variable, using a so-called slave controller. In this way, disturbances no longer pass through the entire control path. In addition, the slave controller can provide a limitation of the auxiliary control variable, which, depending on the process, can be an electric current, a mechanical feed, or a hydrodynamic flow. The outer control process includes the master controller and the outer control path, thus the control variable of the slave controller is derived from the manipulated variable of the master controller. Applications: e.g., heating a workpiece in a furnace, temperature processes in metals, galvanic processes.
Q 3.6
What is a multivariable control system? Name three typical use cases. Sketch the cybernetic control process.
A 3.6
Multivariable control systems are control systems in which several physical or chemical variables, separated or coupled, act on a control path through special controllers. Multivariable control systems, such as those used in the form of a two-variable control without coupling, e.g., for the supply of cold and hot water into a container, can be found in many everyday application areas. Cars have a multitude of multivariable controllers, as do gas, water systems, power systems, and others. For the sketch, see Fig. 3.7.
Q 3.7
Explain “negative and positive feedback” in a control system.
A 3.7
A control loop or control system is governed by negative feedback. Feedback occurs when the output signal of an information-processing system is returned to its input, creating a closed loop. Negative feedback ensures stability in the control loop through damping effects that counteract control deviations, while positive feedback causes instability in the control loop through effects that reinforce control deviations in the same direction.
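A minimal numerical sketch (invented values, not from the book) shows how the sign of the feedback alone decides between stability and instability:

```python
# Same loop, two signs: with sign=+1 the deviation is counteracted (negative
# feedback), with sign=-1 it is amplified (positive feedback).

def feedback(sign, setpoint=1.0, gain=0.5, steps=50):
    x = 0.0
    for _ in range(steps):
        deviation = setpoint - x
        x += sign * gain * deviation
    return x

print(feedback(+1))  # converges to the setpoint 1.0 (negative feedback)
print(feedback(-1))  # diverges to a huge negative value (positive feedback)
```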
Q 3.8
What does self-regulation mean?
A 3.8
Self-regulation in a control loop is given when the reference variable, which specifies the setpoint, is set as part of the control process itself and continuously adapts “optimally” to new situations during the process. See also A 3.4.
Q 3.9
What does Ashby’s Law state?
A 3.9
“The law states that a system that controls another can compensate for more disturbances in the control process the greater its action variety is. Another formulation is: The greater the variety of a system, the more it can reduce the variety of its environment through control. Often the law is cited in the stronger formulation that the variety of the control system must be at least as large as the variety of the disturbances occurring in order for it to perform the control.” (https://de.wikipedia.org/wiki/Ashbysches_Gesetz. Accessed on 22.01.2018)
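Ashby’s law can be made quantitative in a small toy game (an invented illustration, not from the book): if the disturbances have variety V_D and the regulator has variety V_R, the best achievable outcome variety in such a game is ceil(V_D / V_R), so full regulation (outcome variety 1) requires V_R >= V_D.

```python
# Toy quantification of Ashby's law of requisite variety.
import math

def requisite_variety_bound(v_disturbance, v_regulator):
    """The regulator cannot press the outcome variety below
    ceil(V_D / V_R) in the toy regulation game below."""
    return math.ceil(v_disturbance / v_regulator)

def toy_game_outcomes(v_disturbance, v_regulator):
    """Toy regulation game: disturbance d in 0..V_D-1 is answered with the
    response r = d mod V_R; the outcome d - r then collapses onto exactly
    ceil(V_D / V_R) distinct values."""
    return {d - (d % v_regulator) for d in range(v_disturbance)}

print(requisite_variety_bound(9, 3), len(toy_game_outcomes(9, 3)))  # 3 3
print(requisite_variety_bound(9, 9), len(toy_game_outcomes(9, 9)))  # 1 1
```

Only when the regulator’s variety matches that of the disturbances does “only variety destroy variety” hold in full.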
Q 3.10
Explain the autopoiesis theory. Who developed it?
A 3.10
The Chilean scientists Humberto Maturana, a neurobiologist, and Francisco Varela, a biologist, philosopher, and neuroscientist, developed the autopoiesis theory. The term autopoiesis, which is attributed to Maturana, refers to the self-creation and self-maintenance of living systems. Autopoietic systems are recursively (retroactively) structured or organized, meaning that the result of the interaction of their system components leads back to the same organization that produced the components. This characteristic form of internal organization or self-organization is a clear distinguishing feature from non-living systems.
Q 3.11
Why are systems modeled?
A 3.11
Real complex processes, which are always associated with conflicts and uncertainties, can only be captured approximately, by analyzing system sections within certain limits. At this point, the modeling tool comes into play. It attempts to capture structures (interconnected system elements) and processes (transport processes between system elements) as realistically as possible by mapping real conditions, and to describe their state and development in a model.
8.3 Chap. 4: Cybernetics and its Representatives
Q 4.1
Describe the special achievements associated with the cybernetics representative Norbert Wiener.
A 4.1
Many refer to Norbert Wiener (1894–1964) as the “father of cybernetics,” which is largely due to his 1948 book “Cybernetics or Control and Communication in the Animal and the Machine” (German: “Kybernetik. Regelung und Nachrichtenübertragung im Lebewesen und in der Maschine“), which the American mathematician Wiener dedicated to his longtime scientific companion Arturo Rosenblueth. Wiener’s journey of discovery, which is associated with the central concept of cybernetics, “negative feedback,” began in the military sector, where so many new research areas—e.g., bionics—had their start.
Q 4.2
Describe the special achievements associated with the cybernetics representative Arturo Rosenblueth.
A 4.2
The Mexican physiologist Arturo Rosenblueth (1900–1970) was a close scientific companion of Norbert Wiener. Their connection was based, among other things, on Rosenblueth confirming Wiener's view that feedback plays a crucial role both in the control technology of machines and in living organisms. Notable joint works include “Behavior, Purpose and Teleology” (German: “Verhalten, Zweck und Teleologie”) (Rosenblueth and Wiener 1943) and “Purposeful and Non-Purposeful Behavior” (German: “Zielgerichtetes und nicht zielgerichtetes Verhalten”) (Rosenblueth et al. 1950).
Q 4.3
Describe the special achievements associated with the cybernetics representative John von Neumann.
A 4.3
John von Neumann (1903–1957) was an early computer pioneer, mathematician, computer scientist, and cyberneticist. Cybernetic mathematics and game theory as part of Theoretical Cybernetics (see Chap. 5) were some of his areas of interest. Von Neumann is considered one of the fathers of computer science. The computer architecture he designed is still present in today’s computers.
Q 4.4
Describe the special achievements associated with the cybernetics representative Warren Sturgis McCulloch.
A 4.4
The American neurophysiologist and cybernetician Warren McCulloch (1898–1969) became known for his foundational work on theories of the brain and his involvement in the cybernetics movement of the 1940s. Together with Walter Pitts, he created computer models based on mathematical algorithms, the so-called threshold logic. They divided the investigation into two individual approaches—one focusing on biological processes in the brain, the other on applications of neural networks in artificial intelligence (McCulloch and Pitts 1943). The result of this investigation was the model of the McCulloch-Pitts neuron.
Q 4.5
Describe the special achievements associated with the cybernetics representative Walter Pitts.
A 4.5
Walter Pitts (1923–1969) was an American logician whose field of work was cognitive psychology. Pitts became McCulloch's collaborator in the 1940s, a collaboration that resulted in the well-known McCulloch-Pitts neuron model. In 1943, he took up an assistant position and became a doctoral student at the Massachusetts Institute of Technology—MIT—under Norbert Wiener. The McCulloch-Pitts cell—also called the McCulloch-Pitts neuron—is the simplest model of an artificial neural network; it can only process binary—zero/one—signals. Analogous to biological neuron networks, the artificial neuron can also process inhibitory signals. The threshold of a McCulloch-Pitts neuron can be set to any real number.
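The properties just listed—binary inputs, a real-valued threshold, and inhibitory signals—can be sketched directly in code. The function name and the rule that any active inhibitory input suppresses the output are our illustrative assumptions for this sketch:

```python
# Sketch of a McCulloch-Pitts neuron: binary (0/1) inputs only, a
# real-valued threshold, and inhibition that forces the output to 0.

def mcculloch_pitts(excitatory, inhibitory, threshold):
    """Return 1 if the summed excitatory inputs reach the threshold
    and no inhibitory input is active, else 0."""
    if any(inhibitory):                 # an active inhibitory signal blocks firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# AND gate: two excitatory inputs, threshold 2
print(mcculloch_pitts([1, 1], [], 2))   # 1
print(mcculloch_pitts([1, 0], [], 2))   # 0
# OR gate: threshold 1
print(mcculloch_pitts([0, 1], [], 1))   # 1
# inhibition wins regardless of excitation
print(mcculloch_pitts([1, 1], [1], 2))  # 0
```

Even this simplest model already realizes elementary logic gates, which is why networks of such cells were of interest for threshold logic.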
Q 4.6
Describe the special achievements associated with the cybernetics representative William Ross Ashby.
A 4.6
The English researcher and inventor William Ross Ashby (1903–1972) made significant contributions to the development of cybernetics through his influential research results, such as his homeostat and the law of requisite variety.
Q 4.7
Describe the special achievements associated with the cybernetics representative Gregory Bateson.
A 4.7
Gregory Bateson (1904–1980) was an Anglo-American anthropologist, biologist, social scientist, cybernetician, and philosopher. His areas of work included anthropological studies, communication theory and learning theory, as well as questions of epistemology, natural philosophy, ecology, and linguistics. However, Bateson did not treat these scientific fields as separate disciplines, but as different aspects and facets in which his systemic-cybernetic thinking comes into play. The following example illustrates Bateson's holistic thinking (quoted from Rid 2016, p. 220): “If the axe was an extension of the woodcutter's self, then so was the tree, for without the tree the man could hardly use his axe. It is therefore the connection tree-eye-brain-muscles-axe-stroke-tree; and it is this entire system that has the characteristics of the immanent mind.”
Q 4.8
Describe the special achievements associated with the cybernetics representatives Humberto Maturana and Francisco Varela.
A 4.8
The concept of autopoiesis is a central component of the biological theory of cognition developed by the two Chilean neurobiologists/scientists Humberto Maturana (1928–2021) and Francisco Varela (1946–2001). “Maturana and Varela associate the concept of all living things with the autopoietic (= self-creating) organization, which they demonstrate using the example of a cell and transfer to multicellular organisms. Maturana’s treatment of the question of life, system properties, and possibilities of distinguishing between living and non-living systems led him to the realization that it depends on the ‘organization of the living’.” (https://de.wikipedia.org/wiki/Der_Baum_der_Erkenntnis#Autopoiese. Accessed on 26.01.2018)
Q 4.9
Describe the special achievements associated with the cybernetics representative Stafford Beer.
A 4.9
Management cybernetics is a branch of management theory founded by the British management scientist Stafford Beer (1926–2002). It forms the basis of the St. Gallen Management Model, developed by the economist Hans Ulrich (1919–1997) at the University of St. Gallen in Switzerland. Beer's “Viable System Model”—VSM—for organizations is based on systems thinking, in which interconnected system elements influence one another. According to Beer, the VSM can be applied to any organization or organism, making it a universally applicable framework tool, with companies as its preferred field of application.
Q 4.10
Describe the special achievements associated with the cybernetics representative Karl Wolfgang Deutsch.
A 4.10
In 1986, the social and political scientist Karl Wolfgang Deutsch (1912–1992), at that time working at the Science Center for Social Sciences in Berlin, Germany, wrote a short contribution about his 1963 book “The Nerves of Government: Models of Political Communication and Control”. In this work, he considered political events in society as a control process, one that also incorporated Wiener's concept of negative feedback.
Q 4.11
Describe the special achievements associated with the cybernetics representative Ludwig von Bertalanffy.
A 4.11
Austrian biologist and systems theorist Ludwig von Bertalanffy (1901–1972) developed a General Systems Theory, which attempts to find and formalize common laws in physical, biological, and social systems based on methodological holism. Principles found in one class of systems should also be applicable to other systems. These principles include complexity, equilibrium, feedback, and self-organization.
Q 4.12
Describe the special achievements associated with the cybernetics representative Heinz von Foerster.
A 4.12
Austrian physicist Heinz von Foerster (1911–2002) is one of the co-founders of cybernetic sciences. Terms such as first and second-order cybernetics are inextricably linked to his name.
Q 4.13
Describe the special achievements associated with the cybernetics representative Jay Wright Forrester.
A 4.13
American computer scientist Jay Wright Forrester (1918–2016) is a pioneer of computer technology and systems science. He is the originator of the research field of system dynamics, whose model structure of the System-Dynamic-Model is still used today in many disciplines for analyzing complex systems and became globally known through the world simulation model of the Club of Rome.
Q 4.14
Describe the special achievements associated with the cybernetics representative Frederic Vester.
A 4.14
Frederic Vester (1925–2003) was a German biochemist and systems researcher. In 1970, he founded the “Study Group for Biology and Environment,” from which numerous research results, books, and articles on cybernetics and systemic or networked thinking emerged. Systems thinking was probably his greatest motivation: he strove to revive research and application in various societal—not least conflict-ridden—areas and to raise them to a new level of thinking and action, in stark contrast to the prevailing, unrealistic causal strategies in a complex, dynamic environment. Vester dealt intensively with biocybernetics; the eight basic biocybernetic rules go back to him. He also designed the sensitivity model, which makes networked thinking and action practically tangible.
8.4 Chap. 5: Cybernetic Models and Orders
Q 5.1
How can the state of cybernetic mechanical systems be described?
A 5.1
The state of cybernetic mechanical systems is externally determined. They react to environmental influences, causing disturbances to affect the system. Fixed reaction patterns are used to counteract these disturbances (negative feedback). Any change outside a norm is compensated for by a counter-reaction until a stable, statically determined state is re-established.
Q 5.2
What type of signal transmission do trees use among themselves in case of danger?
A 5.2
“In addition to chemical signal transmission in the extensively networked cybernetic control circuits, trees also help each other in parallel through the more reliable electrical signal transmission via the roots, which connect organisms largely independent of weather. Once the alarm signals have spread, all oak trees around—the same applies to other species—pump tannins through their transport channels into bark and leaves.” (Wohlleben 2015, p. 20)
Q 5.3
What mechanical means of defense does the kapok tree use against enemies?
A 5.3
Thorns.
Q 5.4
Which cybernetic control process strengthens the protection of the kapok tree against enemies? Sketch and describe the process.
A 5.4
For the sketch and description, see Fig. 5.5.
Q 5.5
Sketch and describe the cybernetic predator-prey relationship between foxes, hares, and plants, and highlight the peculiarity of the circular connections.
A 5.5
For the sketch and description, see Fig. 5.6. The peculiarity of the circular connections is the linking of positive and negative feedback processes, which keeps the entire relationship structure, and thus all organisms involved, capable of growth. Figure 5.6 highlights four control loops out of the far more complex web of connections between the three organisms in nature, only part of which can be represented here. The influence of negative feedback in this relationship network is striking; it contributes to keeping the outlined system dynamically stable, which means nothing other than that all organisms involved have the opportunity to continue developing. The predator-prey model developed by Lotka and Volterra (see K 5.6) shows the dynamic interrelationship between predator and prey in a growth-time diagram, as can be seen in Fig. 5.7 for two organisms.
Q 5.6
What does the Lotka-Volterra predator-prey model state?
A 5.6
The predator-prey model developed by the Austrian/American chemist and mathematician Alfred Lotka (1880–1949) and the Italian physicist Vito Volterra (1860–1940) shows the dynamic interrelationship between predator and prey in a growth-time diagram.
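The oscillating interrelationship can be illustrated with a minimal numerical sketch of the Lotka-Volterra equations dx/dt = ax − bxy (prey) and dy/dt = −cy + dxy (predator). The parameter values and the simple Euler integration below are our own illustrative assumptions, not taken from the text:

```python
# Minimal Euler-integration sketch of the Lotka-Volterra predator-prey
# model. Parameters a, b, c, d and the starting populations are
# illustrative assumptions.

def lotka_volterra(x, y, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001, steps=20000):
    """Integrate the coupled equations and return the trajectory (x, y)."""
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt        # prey: growth minus predation
        dy = (-c * y + d * x * y) * dt       # predator: starvation plus prey intake
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

traj = lotka_volterra(x=10.0, y=5.0)
prey = [p for p, _ in traj]
pred = [q for _, q in traj]
# Both populations oscillate; the predator peak lags the prey peak,
# as in the growth-time diagram of Fig. 5.7.
print(f"prey range: {min(prey):.1f}..{max(prey):.1f}")
print(f"predator range: {min(pred):.1f}..{max(pred):.1f}")
```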
Q 5.7
Sketch the course of the predator-prey model according to K 5.6 in a Lotka-Volterra diagram.
A 5.7
For the sketch and description, see Fig. 5.7. It can be seen that the prey population is always ahead of the predator population in time.
Q 5.8
Sketch and describe the cybernetic course in the social entrepreneurial environment according to Fig. 5.8.
A 5.8
The impending loss of sales markets, and thus a drop in profits, prompts the company management to take cost-saving measures—action—which are linked to a specific goal—decision—and for which various implementation options—choice—are available. Figure 5.8 shows this typical approach as a control-oriented cycle, in which the negative feedback is recognizably linked to the manipulated variable.
Q 5.9
Briefly explain the term “first-order cybernetics” and show the process using a sketch.
A 5.9
First-order cybernetics is the cybernetics of observed systems. For the sketch and description, see Fig. 5.9.
Q 5.10
Briefly explain the term “second-order cybernetics” and illustrate the process using a sketch.
A 5.10
Second-order cybernetics is the cybernetics of observing systems. For the sketch and description, see Fig. 5.10.
8.5 Chap. 6: Cybernetics and Theories
Q 6.1
Sketch and describe the three system concepts (system models) of the systems theory of technology according to Ropohl. What specific characteristics can be identified in the three system concepts?
A 6.1
For the sketch and description, see Fig. 6.1.
System Model 1: Functional concept—relationships between inputs, outputs, states, etc.
System Model 2: Structural concept—interconnected elements and subsystems.
System Model 3: Hierarchical concept—systems distinguishable from their environment or a super-system.
Q 6.2
Name and describe the four roots of modern system theory according to Ropohl.
A 6.2
Root 1:
refers to Ludwig v. Bertalanffy and his rational holistic approach, which is applicable not only to the objects of individual scientific disciplines but also to the interaction of the sciences.
Root 2:
is that of Norbert Wiener’s cybernetics (Sect. 4.1), which covers the entire field of control and regulation technology and information theory.
Root 3:
Ropohl sees in various approaches to the scientification of practical problem-solving. This inevitably brought the notorious conflicts between theory and practice to the fore: owing to their constitutive principle, individual scientific theories, as mentioned, always address only partial aspects of a complex problem.
Root 4:
is the structural thinking of modern mathematics. If mathematics today is understood as the science of general structures and relations, or even as the structural science par excellence, it not only offers itself as a tool for system theory but also proves to be, in a sense, the system theory itself. Based on set algebra, the concept of the relational structure has emerged, which is defined by a set of elements and a set of relations and thus precisely clarifies the difference between the set and the whole that Aristotle already saw. This mathematical system concept can be used for the basic definitions of General System Theory.
Q 6.3
With what do biological, psychic, and social systems operate according to Luhmann?
A 6.3
While biological systems are activated by “biochemical reactions” and psychic systems operate through “thoughts and feelings,” communication is the modus operandi for social systems.
Q 6.4
Luhmann provides several explanations for what he understands as communication. Name four of them.
A 6.4
1. Communication is the smallest unit in social systems.
2. “Communication is not the achievement of an acting subject, but a self-organization phenomenon: it happens.” (Simon 2009, p. 94)
3. To realize communication, three components are necessary: information, utterance, and understanding. They come about through their respective selections, with no component occurring on its own.
4. “The general theory of autopoietic systems requires a precise specification of those operations that carry out the autopoiesis of the system and thus distinguish a system from its environment. In the case of social systems, this is done through communication. Communication has all the necessary properties: it is a genuinely social (and the only genuinely social) operation. It is genuinely social insofar as it presupposes a majority of participating consciousness systems, but (precisely for this reason) as a unity cannot be attributed to any individual consciousness.” (Luhmann 1997, p. 81)
Q 6.5
Sketch and describe the schema of a general communication system according to Shannon.
A 6.5
For the sketch, see Fig. 6.4. The message of the information originates from an information source. The signal is transmitted via a sender to a receiver, who forwards the message to its destination. The signal itself can be influenced by noise sources during transmission. Shannon (1948, p. 379) states: “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.”
Q 6.6
Why are technical, information-processing systems the major beneficiaries of Shannon’s insights into information theory?
A 6.6
Technical, information-processing systems are the major beneficiaries of Shannon's insight, which made the physical quantity information tangible, or countable, by connecting it with the smallest digital unit, the bit—binary digit. It is the unit of measurement for digitally stored and processed data that can take two values with equal probability, usually zero and one.
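Shannon's unit of measure can be illustrated by computing the entropy H = −Σ pᵢ·log₂ pᵢ of a simple source; the function below is our own sketch, not from the text. A source with two equally probable values, as described above, yields exactly one bit per symbol:

```python
import math

# Sketch: Shannon entropy gives the average information content of a
# source in bits.

def entropy_bits(probabilities):
    """H = -sum p_i * log2(p_i), skipping zero-probability symbols."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))            # 1.0: one bit per symbol
print(entropy_bits([1.0]))                 # 0.0: a certain message carries no information
print(round(entropy_bits([0.9, 0.1]), 3))  # 0.469: a biased source carries less than 1 bit
```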
Q 6.7
What is algorithm theory?
A 6.7
Algorithm theory is a “mathematical theory derived from formal logic that deals with the construction, representation, and machine realization of algorithms and provides the foundations of algorithmic languages (algorithm). It gained importance, among other things, for the application of computing machines by developing methods that help to find equivalent algorithms of different structure (e.g., with shorter computing time or fewer computing steps) for given algorithms and at the same time to investigate the principal solvability of mathematical problems.” (http://universal_lexikon.de-academic.com/204271/Algorithmentheorie. Accessed on 10.02.2018)
Q 6.8
Describe the Rete algorithm.
A 6.8
The Rete algorithm (Latin rete, “net” or “network”) is a pattern-matching algorithm used in expert systems for mapping system processes through rules. It was developed with the aim of ensuring very efficient rule processing, so that even large rule sets can be handled performantly. At the time of its development (1982, by the American computer scientist Charles Forgy), it was 3000 times faster than existing systems.
Q 6.9
Name four different classes of algorithms with two concrete algorithmic applications or names of the respective algorithms for each.
A 6.9
Class 1:
Optimization: algorithms for linear and non-linear optimization; search for optimal parameters of mostly complex systems.
Class 2:
Methods: a. evolutionary algorithms, b. approximation algorithms. Evolutionary algorithms are stochastic, metaheuristic optimization methods whose functioning is modeled on evolutionary principles of natural organisms.
Class 3:
Geometry and graphics: a. De Casteljau algorithm, b. floodfill algorithm. The De Casteljau algorithm enables efficient calculation of an arbitrarily accurate approximate representation of Bézier curves—parametrically modeled curves—by a polygonal chain, i.e., the union of line segments connecting a sequence of points.
Class 4:
Graph theory: a. Dijkstra algorithm, b. nearest-neighbor heuristic. The Dijkstra algorithm solves the shortest-path problem for a given start node.
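As an illustration of the graph-theory class, the Dijkstra algorithm can be sketched in a few lines; the example graph and its weights are our own illustrative assumptions:

```python
import heapq

# Sketch of Dijkstra's algorithm: shortest distances from a given start
# node in a graph with non-negative edge weights.

def dijkstra(graph, start):
    """graph: {node: [(neighbor, weight), ...]}; returns {node: distance}."""
    dist = {start: 0}
    heap = [(0, start)]                       # priority queue of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note that the direct edge A-C (weight 4) is beaten by the detour via B (1 + 2 = 3), which is exactly the kind of choice the algorithm resolves systematically.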
Q 6.10
Describe or define what is meant by automata theory.
A 6.10
Automata theory is an important subject area of theoretical computer science. Its findings on abstract computing devices—called automata—are applied in computability and complexity theory, as well as in practical computer science (e.g., compiler construction, search engines, protocol specification, software engineering).
Q 6.11
Name and describe four different automata. What characterizes finite (deterministic) automata? List five features.
A 6.11
1. A Turing machine is a computational model of theoretical computer science that models the operation of a computer in a particularly simple, mathematically easy-to-analyze way.
2. A Moore automaton is a finite automaton whose outputs depend solely on its state.
3. A Mealy automaton is a finite automaton whose outputs depend on its state and its input.
4. A pushdown automaton is a finite automaton extended by a pushdown store (stack); it is a purely theoretical construct. With two pushdown stores, the automaton has the same power as a Turing machine.
Finite (deterministic) automata are characterized by the following features:
1. a finite set of input symbols/characters,
2. a finite set of states,
3. a set of final states as a subset of the state set,
4. a state transition function that, for an argument consisting of a state and an input symbol, returns a (new) state as its result,
5. a start state as an element of the set of states.
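The five features of a finite deterministic automaton can be written down directly in code. The example language (binary strings containing an even number of 1s) is our own illustrative choice:

```python
# Sketch of a deterministic finite automaton built from exactly the five
# features listed above: alphabet, states, final states, transition
# function, and start state.

def make_dfa():
    alphabet = {"0", "1"}                    # 1. finite set of input symbols
    states = {"even", "odd"}                 # 2. finite set of states
    finals = {"even"}                        # 3. final states (subset of states)
    delta = {                                # 4. state transition function
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    start = "even"                           # 5. start state

    def accepts(word):
        state = start
        for symbol in word:
            assert symbol in alphabet
            state = delta[(state, symbol)]
        return state in finals

    return accepts

accepts = make_dfa()
print(accepts("1011"))  # False (three 1s)
print(accepts("1001"))  # True  (two 1s)
```

Such an automaton recognizes a regular language, the lowest level of the Chomsky hierarchy discussed in Q 6.12.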
Q 6.12
What do you understand by “Chomsky hierarchy“?
A 6.12
A hierarchy of formal grammars: “Although grammars are not actually machines, they have a close relationship to automata in that a language is defined/generated by a grammar and a suitable automaton can determine whether words belong to the language. For example, finite automata recognize regular languages, pushdown automata recognize context-free languages. The fact that formal languages can also be understood as “problems” by assigning semantics to the words of a language (e.g., numbers, logical expressions, or graphs) results in the connection to computability.” (http://www.enzyklopaedie-der-wirtschaftsinformatik.de/lexikon/technologien-methoden/Informatik%2D%2DGrundlagen/Automatentheorie. Accessed on 10.02.2018)
Q 6.13
Programming languages are languages defined by grammars. What features does such a grammar include? Name five of them.
A 6.13
“1. a starting symbol,
2. a finite set of variables that must not be included in the derived words of the language,
3. an alphabet, i.e., a set of terminals as symbols of the words of the language,
4. a set of derivation rules, by which a specific combination of terminals and variables (in a specific order) is transformed into another combination of terminals and variables, in which the variables of the starting combination are each replaced by a sequence consisting of variables and terminals, and
5. a start symbol as an element of the set of variables.” (ibid.)
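A tiny worked example may help: the sketch below encodes a grammar with start symbol S, the single variable S, the terminal alphabet {a, b}, and the two derivation rules S → aSb and S → ab (our own illustrative choice), and derives words of the context-free language aⁿbⁿ:

```python
# Sketch: deriving words from a minimal grammar S -> aSb | ab.
# The encoding as a Python dict is our own illustration.

rules = {"S": ["aSb", "ab"]}   # derivation rules for the variable S

def derive(n):
    """Apply S -> aSb (n-1 times), then S -> ab, yielding a^n b^n."""
    word = "S"
    for _ in range(n - 1):
        word = word.replace("S", rules["S"][0], 1)
    return word.replace("S", rules["S"][1], 1)

print(derive(1))  # ab
print(derive(3))  # aaabbb
```

The derived words contain only terminals, while the variable S appears only in intermediate steps, exactly as required by feature 2 of the quoted list.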
Q 6.14
Decision theory distinguishes three subareas. What are they and how do they differ?
A 6.14
“1. Normative decision theory. It is based on rational decisions made by humans on the basis of axioms—unproven assumptions. The question posed is: How should decisions be made?
2. Prescriptive decision theory. Normative models are used that include strategies and methodological approaches enabling people to make better decisions, taking into account humans' limited cognitive abilities.
3. Descriptive decision theory. It refers to actual decisions made in the real environment, based on empirical questions. Here the question is: How are decisions made?” (according to: https://de.wikipedia.org/wiki/Entscheidungstheorie. Accessed on 10.02.2018)
Q 6.15
The basic model of (normative) decision theory can be represented in a result matrix. This includes the decision field and the goal system. How is the decision field structured?
A 6.15
Action space: set of possible action alternatives.
State space: set of possible environmental states.
Result function: assignment of a value to each combination of action and state.
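The result matrix can be sketched directly in code: rows are action alternatives, columns are environmental states, and entries come from the result function. The numbers and the maximin decision rule used below are our illustrative assumptions, not prescribed by the text:

```python
# Sketch of a result matrix from normative decision theory and one
# possible decision rule (maximin: best worst case). Values are
# illustrative assumptions.

results = {                                   # result function as a matrix
    "invest": {"boom": 10, "recession": -4},  # action "invest" per state
    "save":   {"boom": 3,  "recession": 2},   # action "save" per state
}

def maximin(matrix):
    """Choose the action whose worst-case result is best."""
    return max(matrix, key=lambda action: min(matrix[action].values()))

print(maximin(results))  # save (worst case 2 beats worst case -4)
```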
Q 6.16
Describe game theory according to Bartholomae and Wiens.
A 6.16
“As a scientific discipline, game theory deals with the mathematical analysis and evaluation of strategic decisions. Game-theoretical fields of application are omnipresent in our everyday life, as ultimately every social issue in which at least two parties interact and make strategic considerations can be examined with the tools of game theory. In the field of economics, this includes measures of financial and social policy, entrepreneurial decisions such as estimating the effects of market entry, a merger or a tariff structure, negotiations between tariff parties, and extreme behavioral risks such as economic espionage or terrorism. The high relevance of game-theoretical issues and the simultaneous increasing compatibility with other disciplines, such as psychology or operations research, make game theory an indispensable part of basic economic education.” (Bartholomae and Wiens 2016, p. V)
Q 6.17
What distinguishes game theories from decision theories?
A 6.17
Game theories differ from decision theories in that the successes of individual players are always dependent on or influenced by the activities of other players. Decisions are therefore always interdependent decisions. Game theory can be divided into cooperative and non-cooperative game theory.
Q 6.18
What is the difference between the Nash bargaining solution and the Nash equilibrium?
A 6.18
1. The Nash bargaining solution belongs to so-called cooperative game theory. Players agree—cooperate—with a view to certain requirements for the solution of the negotiation. There are recognized principles—axioms—for the bargaining solution (see the relevant publications on this topic):
a. Pareto optimality,
b. symmetry,
c. independence from positive linear transformations,
d. independence from irrelevant alternatives.
The Nash bargaining solution—Nash's theorem—states: there exists exactly one Pareto-optimal, symmetric solution that is independent of positive linear transformations and irrelevant alternatives.
2. The Nash equilibrium belongs to so-called non-cooperative game theory. It describes a combination of strategies in which each player chooses the strategy he considers best and from which it does not make sense to deviate unilaterally. The players' strategies are thus mutually best responses, and a Nash equilibrium is established.
Q 6.19
Describe the game theory example of the prisoner’s dilemma.
A 6.19
“Two prisoners are accused of having committed a crime together. Both are interrogated separately, without being able to speak to each other. If both prisoners deny the crime, both receive a minor punishment, as they can only be proven guilty of a less severely punished offense. If both confess to the crime, they receive a high punishment, but not the maximum penalty. However, if only one of the two prisoners confesses to the crime, he remains unpunished as a key witness. The other prisoner is considered convicted without having confessed to the crime and receives the maximum penalty. How do the prisoners decide?” (according to: https://de.wikipedia.org/wiki/Gefangenendilemma. Accessed on 10.02.2018)
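The situation can be analyzed with a brute-force check for Nash equilibria (no player gains by deviating unilaterally; see Q 6.18). The year-values below are illustrative assumptions for the story above, with a higher payoff meaning fewer years in prison:

```python
# Sketch: the prisoner's dilemma as a payoff matrix plus a brute-force
# Nash equilibrium search. Payoff numbers are illustrative assumptions.

actions = ["deny", "confess"]
# payoff[(a1, a2)] = (payoff of player 1, payoff of player 2)
payoff = {
    ("deny", "deny"):       (-1, -1),    # both get the minor punishment
    ("deny", "confess"):    (-10, 0),    # the confessor goes free as key witness
    ("confess", "deny"):    (0, -10),
    ("confess", "confess"): (-8, -8),    # high, but not maximum, penalty
}

def nash_equilibria(actions, payoff):
    """Return all action profiles where neither player can improve alone."""
    eq = []
    for a1 in actions:
        for a2 in actions:
            p1, p2 = payoff[(a1, a2)]
            best1 = all(payoff[(b, a2)][0] <= p1 for b in actions)
            best2 = all(payoff[(a1, b)][1] <= p2 for b in actions)
            if best1 and best2:
                eq.append((a1, a2))
    return eq

print(nash_equilibria(actions, payoff))  # [('confess', 'confess')]
```

The only equilibrium is mutual confession, even though mutual denial would be better for both, which is precisely the dilemma.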
Q 6.20
Describe the game theory example of the stag hunt.
A 6.20
“The stag hunt is a parable that goes back to Jean-Jacques Rousseau and is also known as the hunting party. In addition, the stag hunt (engl. stag hunt or assurance game), also called insurance game, represents a basic game-theoretical constellation. Rousseau dealt with this in the sense of his investigations into the formation of collective rules under the contradictions of social action, so that paradoxical effects lead to the institutionalization of compulsion (to cooperate) in order not to breach the contract. He describes the situation as follows: Two hunters go hunting, where each has so far only been able to catch a hare on their own. Now they try to coordinate, that is, to make an agreement to catch a stag together, which would bring both more than a single hare. During the hunt, the dilemma develops analogously to the prisoner’s dilemma: If, during the hunt, a hare crosses the path of one of the two hunters, he must decide whether to catch the hare now or not. If he catches the hare, he forfeits the opportunity to catch a stag together. At the same time, he must think about how the other would act. If the other is in the same situation, there is a risk that the other will catch the hare and he will ultimately suffer a loss: not getting a hare or a share of a stag.” (according to: https://de.wikipedia.org/wiki/Hirschjagd. Accessed on 10.02.2018)
Q 6.21
Describe the game theory example of the Braess Paradox.
A 6.21
“The Braess paradox is an illustration of the fact that an additional action option, assuming rational individual decisions, can lead to a deterioration of the situation for everyone. The paradox was published in 1968 by the German mathematician Dietrich Braess. Braess's original work shows a paradoxical situation in which the construction of an additional road (i.e., an increase in capacity) leads to an increase in travel time for all drivers at the same traffic volume (i.e., the capacity of the network is reduced). It is assumed that each road user chooses his route in such a way that no other option offers him a shorter travel time. There are examples showing that the Braess paradox is not just a theoretical construct. In 1969, the opening of a new road in Stuttgart led to a deterioration of traffic flow in the vicinity of Schlossplatz. The reverse phenomenon was observed in New York in 1990: the closure of 42nd Street led to fewer traffic jams in the surrounding area. Further empirical reports on the occurrence of the paradox exist for the streets of Winnipeg. In Neckarsulm, traffic flow improved after a frequently closed railway crossing was completely abolished; the benefit of this became apparent earlier, when the crossing had to be closed temporarily because of a construction site. Theoretical considerations also suggest that the Braess paradox frequently occurs in random networks, and many networks in the real world are random networks.” (according to: https://de.wikipedia.org/wiki/Braess-Paradoxon. Accessed on 10.02.2018)
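The effect can be reproduced arithmetically with the network commonly used to illustrate the paradox; the numbers below are our own illustrative values, not taken from the text. 4000 drivers travel from S to T over two symmetric routes, each consisting of one load-dependent link (time x/100 for x drivers) and one fixed link (45 minutes):

```python
# Sketch of the standard illustrative Braess network. Numbers are
# illustrative assumptions.

N = 4000           # total number of drivers

def variable(x):   # travel time of a load-dependent link carrying x drivers
    return x / 100

FIXED = 45         # travel time of a fixed link, in minutes

# Without the new road: the two routes are symmetric, so in equilibrium
# the drivers split evenly, N/2 per route.
time_before = variable(N / 2) + FIXED       # 2000/100 + 45 = 65 minutes

# With a zero-cost shortcut joining the two load-dependent links, the
# route using both of them is (weakly) best for every individual driver,
# so in equilibrium all N drivers take it:
time_after = variable(N) + variable(N)      # 40 + 40 = 80 minutes

print(time_before)  # 65.0
print(time_after)   # 80.0: the extra road made everyone slower
```

Every driver acts rationally at each step, yet the added capacity raises everyone's travel time from 65 to 80 minutes.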
Q 6.22
Describe the game theory example of the tragedy of the commons.
A 6.22
“The tragedy of the commons (German: Tragik der Allmende), also the tragedy of the common good, refers to a social-science and evolutionary-theory model according to which freely available but limited resources are not used efficiently and are threatened by overuse, which in turn threatens the users themselves. This behavioral pattern is also studied by game theory. Among other things, the question pursued is why individuals in many cases stabilize social norms through altruistic sanctions despite high individual costs.” (according to: https://de.wikipedia.org/wiki/Tragik_der_Allmende. Accessed on 10.02.2018)
Q 6.23
Describe what is meant by “learning theory.“
A 6.23
Learning theory is the systematics of knowledge about learning. Learning theories describe the conditions under which learning takes place and enable verifiable predictions. A variety of learning theories now exist, which must be regarded as complementary to one another. Roughly, two directions are distinguished: stimulus-response theories, which deal with the investigation of behavior, and cognitive theories, which deal with processes of perception, problem-solving, decision-making, concept formation, and information processing.
8.5 Chap. 6: Cybernetics and Theories
Q 6.24
Name and describe five different learning theory approaches.
A 6.24
1. Behaviorist Learning
This is a scientific concept for investigating and explaining the behavior of humans and animals using natural-scientific methods. The American psychologist Burrhus Frederic Skinner (1904–1990) and the Russian physician Ivan Petrovich Pavlov (1849–1936) are two early representatives of this school.
2. Cognitive Learning—Learning through Insight
“Learning through insight or cognitive learning refers to the acquisition or restructuring of knowledge based on the use of cognitive abilities (perceiving, imagining, etc.). Insight here means recognizing and understanding a situation, grasping cause-effect relationships and the meaning and significance of a situation. This enables goal-oriented behavior and is usually recognizable by a change in behavior. Learning through insight is the sudden, complete transition to the solution state (all-or-nothing principle) after initial trial-and-error behavior. The behavior resulting from insightful learning is almost error-free.” (according to: https://de.wikipedia.org/wiki/Instruktionalismus. Accessed on 12.02.2018)
3. Situational Learning—Constructivism
“In terms of learning psychology, constructivism postulates that human experience and learning are subject to construction processes influenced by sensory-physiological, neuronal, cognitive, and social processes. Its core thesis states that learners create an individual representation of the world in the learning process. What someone learns under certain conditions thus depends strongly, but not exclusively, on the learner himself and his experiences.” (according to: https://de.wikipedia.org/wiki/Konstruktivismus_(Lernpsychologie). Accessed on 12.02.2018)
4. Biocybernetic-Neuronal Learning
“Biocybernetic-neuronal approaches are learning methods that originate from the field of neurobiology and primarily describe the functioning of the human brain and nervous system. One subject within the biocybernetic-neuronal learning theories is mirror neurons, which, in addition to empathy and rapport skills, could also be involved in basic neuronal functions for learning by imitation. An early representative of these learning methods was Frederic Vester (1975), who laid the foundation for biocybernetic communication with his description of biological neuronal learning processes. More recently, Manfred Spitzer (1996) has dealt with models for “learning, thinking, and acting” in his book “Geist im Netz” (Mind in the Net). The author has recently become particularly well-known for a controversially discussed and increasingly influential learning model concerning digital education (Spitzer 2012). The provocative title of his book is “Digital Dementia: How We and Our Children Are Losing Our Minds”.” (according to: https://de.wikipedia.org/wiki/Lerntheorie. Accessed on 12.02.2018)
5. Machine Learning
This is an umbrella term for the “artificial” generation of knowledge from experience: “An artificial system learns from examples and can generalize from them after the learning phase has been completed.” This means that the examples are not simply memorized; rather, the system “recognizes” patterns and regularities in the learning data. In this way, it can also assess unknown data (learning transfer), or it can fail on unknown data. (according to: https://de.wikipedia.org/wiki/Maschinelles_Lernen. Accessed on 12.02.2018)
8 Control Questions (Q N.N) With Sample Answers …
Q 6.25
Within the framework of Big Data and Deep Mining processes, various algorithmic approaches are used. Name and explain five of these approaches.
A 6.25
1. Supervised Learning
The algorithm learns a function from given pairs of inputs and outputs. During the learning process, a “teacher” provides the correct function value for an input. The goal of supervised learning is to train the network to establish associations after several computing cycles with different inputs and outputs. A subfield of supervised learning is automatic classification. An application example would be handwriting recognition.
2. Semi-Supervised Learning
Corresponds to supervised learning, but the corresponding outputs are known only for a subset of the inputs.
3. Unsupervised Learning
The algorithm generates a model for a given set of inputs that describes the inputs and allows predictions. There are clustering methods that divide the data into several categories that differ from each other by characteristic patterns. The network thus independently creates classifiers according to which it divides the input patterns. An important algorithm in this context is the EM algorithm (the expectation-maximization algorithm of mathematical statistics), which iteratively determines the parameters of a model so that it optimally explains the observed data. It assumes the existence of unobservable categories and alternately estimates the affiliation of the data to one of the categories and the parameters that make up the categories. An application of the EM algorithm can be found, for example, in Hidden Markov Models (HMMs), a stochastic model.
4. Reinforcement Learning
The algorithm learns a tactic through reward and punishment: how to act in potentially occurring situations so as to maximize the benefit of the agent (i.e., the system to which the learning component belongs). This is the most common form of learning in humans.
5. Active Learning
The algorithm can request the correct outputs for a part of the inputs.
The algorithm must determine the questions that promise a high information gain in order to keep the number of questions as small as possible.” (according to: https://de.wikipedia.org/wiki/Maschinelles_Lernen. Accessed on 12.02.2018)
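The EM idea mentioned under unsupervised learning, alternating between estimating the unobservable category of each data point and re-estimating the category parameters, can be sketched in a few lines. Two one-dimensional Gaussian clusters with fixed unit variance keep the example short; the data and starting values are invented for illustration.

```python
import math

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]   # two obvious clusters
mu = [0.0, 6.0]                          # initial guesses for the means

def density(x, m):
    # Gaussian density up to a constant factor, sigma fixed at 1
    return math.exp(-0.5 * (x - m) ** 2)

for _ in range(20):
    # E-step: responsibility of cluster 0 for each data point
    r0 = [density(x, mu[0]) / (density(x, mu[0]) + density(x, mu[1]))
          for x in data]
    # M-step: re-estimate each mean as a responsibility-weighted average
    mu[0] = sum(r * x for r, x in zip(r0, data)) / sum(r0)
    mu[1] = sum((1 - r) * x for r, x in zip(r0, data)) / sum(1 - r for r in r0)

print(round(mu[0], 2), round(mu[1], 2))  # converges near 1.0 and 5.07
```

The loop never observes the “true” cluster labels; they are exactly the unobservable categories the answer above describes.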
Q 6.26
In explaining learning processes in the field of e-learning, three learning theories are in the foreground. Name and describe them. What role do the learner and teacher play in each of the learning theories?
A 6.26
1. Behaviorism—Learning through reinforcement
“According to the doctrine of behaviorism, learning is triggered by a stimulus-response chain. Certain stimuli are followed by certain reactions. As soon as a stimulus-response chain has been established, a learning process is complete and the learner has learned something new. As a result of certain stimuli, positive and negative reactions can occur. While desired positive reactions can be strengthened by rewards, undesired or negative reactions are reduced by remaining unrewarded. Reward and punishment thus become central factors of learning success. This explanation is extended by “operant conditioning” or instrumental learning. In this case, behavior depends heavily on the consequences that follow it. These consequences are the starting point for future behavior.
• What role does the learner play? The learner plays a passive role: they become active in response to external stimuli and react accordingly.
• What role does the teacher play? The teacher plays a central role. They set appropriate incentives and provide feedback on the students’ reactions. In this way, they intervene centrally in the learner’s learning process with their positive or negative evaluation or feedback. What happens between the “creating incentives” and the reactions of the learners does not need to concern the teacher further, as these areas belong, so to speak, to the “black box”.” (Meir n.d., pp. 10–11)
2. Cognitivism—Learning through insight and understanding
“According to the theory of cognitivism, learning refers to the intake, processing, and storage of information. The focus is on the processing process, tied to the right methods and problem situations that support this process. The learning offer itself, i.e., the information processing as well as the problem situation and methodology, plays a decisive role, as it greatly influences the learning process.
The focus is therefore on problems, the solution of which enables the learner to gain insights and thus expand their knowledge.
• What role does the learner play? The learner takes on an active role that goes beyond mere reaction to stimuli. They learn by independently absorbing and processing information and by developing solutions based on given problem situations. Due to their ability to solve problems, their position in the learning process becomes more significant.
• What role does the teacher play? The teacher plays a central role in the didactic preparation of problem situations. They select or provide information, set problem situations, and support learners in processing the information. They hold the primacy of knowledge transfer.” (ibid., pp. 12–13)
3. Constructivism—Learning through personal experience, perception, and interpretation
“The learning process is inherently very open. It is seen as a process of individual construction of knowledge. Since, according to this theory, there is no right or wrong knowledge, but only different perspectives that have their origin in the personal experience of the individual, the focus is not on the controlled and guided transmission of content, but on the individually oriented, self-organized processing of topics. The goal is not for learners to find correct answers based on correct methods, but for them to be able to deal with a situation and develop solutions from it.
• What role does the learner play? In this theory, the learner is at the center of attention. They are provided with information with the aim of defining and solving problems themselves on the basis of that information. They receive few specifications and must find a solution in a self-organized manner. They already bring competencies and knowledge with them. Therefore, the focus is on the recognition and appreciation of the learners and on the individual knowledge that each student brings along.
• What role does the teacher play? The role of the teacher goes beyond the tasks of information presentation and knowledge transfer. They not only convey knowledge or prepare problems, but also take on the role of a coach or learning companion who supports independent and social learning processes. It is their responsibility to create an atmosphere in which learning is possible. In this sense, the establishment of authentic contexts and appreciative relationships with the learners becomes of central importance.” (ibid., pp. 14–15)
8.6 Chap. 7: Cybernetic Systems in Practice

Q 7.1
Sketch and describe the model of a cybernetic control loop “blood sugar”.
A 7.1
For the sketch, see Fig. 7.1. “The concentration of glucose in the blood is regulated by various hormones (mainly insulin, growth hormones, epinephrine, and cortisone). These hormones influence the various ways the organism can produce glucose from storage substances or, conversely, break down excess glucose and store it in the form of glycogen or fat. Conversely, the glucose concentration—but not only it—influences the concentration of these hormones in the blood. If one combines all hormones in their effect on glucose regulation into a fictitious hormone H, one can distinguish two inputs, namely the supply rate of glucose and hormone in the blood, and two output variables, the concentrations of glucose and hormone in the blood.” (Röhler 1974, p. 119)
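The negative feedback in this loop can be imitated with a deliberately crude Euler simulation: rising glucose G raises the fictitious hormone H, and H in turn accelerates glucose removal. The equations and constants below are illustrative assumptions, not physiological values.

```python
G, H = 5.0, 1.0    # blood glucose and fictitious hormone concentration
G_SET = 5.0        # regulated glucose level
dt = 0.01

G += 3.0           # disturbance: a sudden glucose load
for _ in range(5000):
    dG = 0.5 - 0.1 * H * G                      # supply minus hormone-driven uptake
    dH = 0.5 * (G - G_SET) - 0.5 * (H - 1.0)    # secretion tracks excess glucose
    G += dt * dG
    H += dt * dH

print(round(G, 2), round(H, 2))   # back near the regulated level (5.0, 1.0)
```

After the disturbance, the coupled loop pulls G back to the regulated level in a damped oscillation, which is the qualitative behavior of the two-input, two-output scheme described by Röhler.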
Q 7.2
Sketch and describe the model of a cybernetic control loop “pupils”.
A 7.2
For the sketch, see Fig. 7.2. “The pupil of the (human) eye changes with the luminance of the visual field, and the pupil becomes smaller when the luminance increases, and vice versa (pupil reflex). Since this reduction causes a decrease in retinal illuminance, the pupil reflex compensates for fluctuations in retinal illuminance to a certain degree, resulting in the stabilization of retinal illuminance at a set value. The photoreceptors of the retina form the sensors of the system, the involved nerve centers take care of signal processing, acting as controllers, and the pupil musculature corresponds to the actuator or, more generally, the actuating element.” (Röhler 1974, p. 37)
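A minimal discrete sketch of this loop: retinal illuminance E is taken as luminance L times pupil area A, and the “controller” widens or narrows the pupil to hold E at its set value. The linear model and the gain are illustrative assumptions.

```python
E_SET = 100.0    # desired retinal illuminance (set value, arbitrary units)
A = 10.0         # pupil area (the actuating element)
L = 10.0         # luminance of the visual field

for step in range(200):
    if step == 100:
        L = 40.0                   # disturbance: the scene gets 4x brighter
    E = L * A                      # the retina's photoreceptors sense E
    A += 0.002 * (E_SET - E)       # nerve centres adjust the pupil muscle

print(round(L * A, 1))  # illuminance is pulled back near the set value
```

When the luminance quadruples, the loop shrinks the pupil area until the retinal illuminance is stabilized close to its set value again.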
Q 7.3
Sketch and describe the model of a cybernetic control loop “camera image sharpness”.
A 7.3
For the sketch, see Fig. 7.4. “The camera should be able to automatically focus on a selected subject. The task variable xA is thus the image sharpness, which can only be captured with great technical effort. It is easier to capture the distance to the object and a lens position xL dependent on it. Thus, the targeted influence on image sharpness is achieved by means of control and regulation: xL controls the task variable xA (image sharpness) via a block “optics” (Fig. 7.4, B). The lens position xL is the controlled variable in the control loop of Fig. 7.4, C, which counteracts any deviation from the set position xL,S. xL,S is obtained by conversion from the subject distance d. d is determined by measuring the runtime ∆t = tReceive − tSend of reflected infrared or ultrasonic pulses.” (Mann et al. 2009, p. 30)
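The two-stage scheme, a distance measurement converted into a lens-position set value that an inner loop then enforces, can be sketched as follows; the conversion law and the loop gain are invented purely for illustration.

```python
C_SOUND = 343.0                   # speed of an ultrasonic pulse, m/s

def distance_from_runtime(delta_t):
    # the pulse travels to the subject and back, so halve the path
    return C_SOUND * delta_t / 2

def lens_setpoint(d):
    # hypothetical optics law xL_S = k / d (purely illustrative)
    return 12.0 / d

dt_measured = 0.02                # 20 ms round trip -> 3.43 m
xL_S = lens_setpoint(distance_from_runtime(dt_measured))

xL = 0.0                          # current lens position
for _ in range(50):               # inner loop counteracts any deviation
    xL += 0.3 * (xL_S - xL)       # proportional correction of xL

print(round(distance_from_runtime(dt_measured), 2), round(xL, 3))
```

The outer conversion acts as open-loop control (it computes the set value), while the inner loop is genuine feedback control of the lens position, mirroring the combination of control and regulation in the answer.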
Q 7.4
Sketch and describe the model of a cybernetic control loop “Position control of the read/write head in a computer hard disk drive”.
A 7.4
For the sketch, see Fig. 7.5. “A read/write head must be positioned on a data track about 1 μm wide in less than 10 ms before data can be read or written on the rotating hard disk. The positioning (head position x as task variable) is achieved by pivoting the arm with a rotary voice-coil motor (motor voltage uM as manipulated variable). Disturbances z, such as aerodynamic forces or vibrations acting on the block “Voice-Coil Motor and Arm” as the path in Fig. 7.5, B, affect the head position x. The current position x (actual value) is determined by the read/write head using position data interspersed in the data track. They appear as digital numerical values xk at discrete time points tk, k = 1, 2, 3 … These values can be directly processed in a digitally implemented comparator and control element (e.g., a microcomputer), with the target position also being specified as a digital numerical value xk,S. The digital controller output variable yR,k is converted into the analog electrical voltage uR (DAC: Digital-Analog Converter), which, smoothed and amplified as manipulated variable y = uM, drives the voice-coil motor.” (Mann et al. 2009, pp. 31–33)
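A toy discrete-time version of this digital position loop: the head position is sampled as quantised values x_k, a digital PD-style control law forms y_k, and the “DAC” output drives a very crude motor-and-arm model with a constant disturbance z. The plant model, gains, and units are illustrative assumptions, not the book’s parameters.

```python
x_S = 1.0                 # target track position (digital set value)
x = 0.0                   # actual head position
v = 0.0                   # arm velocity
dt = 0.01                 # sampling interval

for k in range(2000):
    x_k = round(x, 3)                        # sampled, quantised position
    y_k = 4.0 * (x_S - x_k) - 2.5 * v        # digital PD control law
    u_M = y_k                                # DAC + amplifier (gain 1 here)
    z = 0.05                                 # disturbance, e.g. air forces
    v += dt * (u_M + z)                      # voltage accelerates the arm
    x += dt * v                              # arm motion moves the head

print(round(x, 3))  # settles close to the target (small offset due to z)
```

A pure PD law leaves a small steady-state offset under the constant disturbance; a real drive controller would add an integrating term to remove it.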
Q 7.5
Sketch and describe the model of a cybernetic control loop “Power steering in a motor vehicle”.
A 7.5
For the sketch, see Fig. 7.6. “Power steering is intended to reduce the driver’s effort when steering. The task variable is the deflection xR of the wheels. Disturbances are, for example, external forces acting on the wheels. The steering deflection xR is achieved by moving the tie rod (Sp) by xA via a lever connection. Without power steering, the driver has to move the tie rod directly with the steering wheel (Lr) via a gearbox (Gt). With power steering, he only moves the control piston (Sk) of a hydraulic drive, which provides the actuating force by driving a liquid under pressure into the working cylinder (Az) (liquid flow q). The fixed connection of the working piston rod Ks with the control cylinder Sz results in a follow-up control. If, for example, the control piston Sk is moved from the position shown to the right, the working piston (and thus the wheel deflection xR) follows in the same direction. In doing so, the working piston Ak pulls the control cylinder Sz along via Ks, so that Ak comes to a standstill exactly when the two connecting lines V1 and V2 are covered again by the two control pistons Sk and thus q = 0. The setpoint/actual value comparison takes place between the paths of the control piston (reference variable) and the control cylinder (controlled variable). Supply disturbances include, above all, fluctuations in the supply voltage of the pump.” (Mann et al. 2009, pp. 33–34)
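The follow-up control reduces to one line of dynamics: the flow q, and with it the speed of the working piston, is proportional to the displacement of the control piston relative to the control cylinder, which is dragged along by the piston rod. All values are illustrative.

```python
x_sk = 0.0   # control piston position (follows the driver's steering wheel)
x_ak = 0.0   # working piston / control cylinder position (moves the tie rod)

for step in range(300):
    if step == 50:
        x_sk = 2.0                # the driver turns the steering wheel
    q = 5.0 * (x_sk - x_ak)       # valve opening determines the oil flow q
    x_ak += 0.01 * q              # the flow moves the working piston, which
                                  # drags the control cylinder along (rod Ks),
                                  # closing the valve exactly when q = 0

print(round(x_ak, 3))  # the working piston has followed the command: 2.0
```

The set/actual comparison is purely mechanical here: the valve overlap itself computes the difference between the driver’s command and the achieved deflection.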
Q 7.6
Sketch and describe the model of a cybernetic control loop “room and heating water temperature”.
A 7.6
For the sketch, see Fig. 7.7. “The room temperature ϑ is to be specifically influenced by means of heat supply via a radiator (Hk). Disturbances are mainly caused by fluctuations in the outside temperature ϑa. The heat is supplied by heating water (Hw), which is applied by the thermostat valve TV with the pressure pV and the temperature ϑV. The thermostat valve is a measuring and control device: the desired set temperature ϑS in the room is set with the setpoint screw (S). The actual value is detected by the expansion xB of the bellows (Ba) filled with a liquid (Fl). The set/actual value comparison is made between the two paths xS (position of the setpoint screw) and xB (actual value). The control difference e = xS − xB controls the inflow valve position sZ for the radiator (without auxiliary energy). The supply disturbance variables pV and ϑV should be as constant as possible. A constant speed of the flow pump P is sufficient for pV. ϑV, however, can drop more significantly depending on the room heat demand. Therefore, ϑV is kept at a setpoint ϑV,S in the boiler (K) by another control loop. The set/actual value comparison is carried out with the electrical voltages uϑ (from sensor Se1) and uϑ,S. If the control difference e = uϑ,S − uϑ is positive, the control device (Rg) switches on a burner (Br) and switches it off again when e is negative, and so on (a so-called two-point controller). In this process, ϑV oscillates slightly around the setpoint, which has little effect on the room temperature. To save heating energy, the setpoint ϑV,S (or uϑ,S) is lowered by a control device when the outside temperature ϑa (sensor Se2) rises, and vice versa.” (Mann et al. 2009, p. 35)
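The boiler’s two-point controller is easy to imitate: the burner switches on below the set value and off above it (with a small hysteresis band), so the water temperature oscillates slightly around the setpoint. The thermal model and all numbers are illustrative assumptions.

```python
T_SET = 60.0             # heating-water set temperature, deg C
T = 40.0                 # current water temperature
burner_on = False
temps = []               # temperatures in the settled phase

for step in range(3000):
    e = T_SET - T                       # set/actual value comparison
    if e > 0.5:
        burner_on = True                # two-point controller: on below
    elif e < -0.5:
        burner_on = False               # the band, off above it
    heat_in = 2.0 if burner_on else 0.0
    T += 0.05 * (heat_in - 0.02 * (T - 20.0))   # heating minus losses
    if step > 1000:
        temps.append(T)

print(round(min(temps), 1), round(max(temps), 1))  # hovers around 60
```

The sustained small oscillation around the setpoint is exactly the behavior the answer attributes to the two-point controller, and it is harmless for the slow room-temperature loop downstream.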
Q 7.7
Describe the term “economic cybernetics”.
A 7.7
“Not only since the increasing globalization of the (world) economy and society(ies) has entrepreneurial action been seen as a permanent, dynamic interplay of socio-technical systems. Recognizing critical developments of exogenous influencing factors in good time, as well as the (consequential) effects of one’s own decisions, requires, in view of the complexity of diversely interlinked control circuits with often exponentially shaped effect delays, computer-based, model-based support of strategic planning processes. The research focus deals with the application of the method “System Dynamics” by J. W. Forrester, which became known through the studies of the “Club of Rome”, to business management issues in the macroeconomic context. Examples range from internal models of personnel development, product launches (life cycles), or changes in manufacturing technology to the consequential effects of state framework conditions (labor costs, infrastructure, etc.). The most recent application examples are model-based analyses of system effects of formula-based resource allocation or tuition fees in higher education. In addition to concrete model applications, the main benefit of activities in the field of economic cybernetics is seen in the consistent training of intuitive recognition and understanding of complex systems.” (https://www.wiwi.uni-rueck.de/fachgebiete_und_institute/management_support_und_wirtschaftsinformatik_prof_rieger/profil/wirtschaftskybernetik.html. Accessed on 10.02.2018)
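At its core, a System Dynamics model in Forrester’s sense consists of stocks changed by flows, with (often delayed) feedback from stocks to flows. A minimal personnel-development sketch with an invented training delay shows the typical overshoot such loops produce; all numbers are assumptions for demonstration.

```python
staff = 80.0        # stock: current workforce
pipeline = 0.0      # stock: hires still in training (the delay element)
TARGET = 100.0
dt = 0.25

history = []
for step in range(200):
    hiring_rate = max(0.4 * (TARGET - staff), 0.0)  # feedback from the stock
    finish_rate = pipeline / 2.0    # training takes about 2 time units
    attrition = 0.05 * staff        # people leave in proportion to staff
    pipeline += dt * (hiring_rate - finish_rate)
    staff += dt * (finish_rate - attrition)
    history.append(staff)

print(round(max(history), 1), round(history[-1], 1))
```

The delay between hiring and productive staff makes the workforce overshoot its long-run level (here just under 89, below the target because of attrition); recognizing such counterintuitive delayed-feedback behavior is precisely the training benefit named above.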
Q 7.8
Sketch and describe the five functional blocks of a cybernetic model for the simulative quantification of risk consequences in complex process chains. Highlight the path of negative feedback in the sketch. Assign the five blocks to the three risk areas.
A 7.8
For the sketch, see Fig. 7.9.
Block a: Creation of a System Dynamics model
Block b: Statistical evaluation of the database to determine correlations and interactions
Block c: Simulation of risks using the System Dynamics model
Block d: Transfer of results to a database
Block e: Evaluation of simulation results
Assignment to the three risk areas:
1. Risk analysis—Database
2. Risk identification—Blocks a, b, c, d
3. Risk assessment—Block e

Q 7.9
Explain the term sociocybernetics.
A 7.9
“Sociocybernetics summarizes the application of cybernetic insights to social phenomena, i.e., it attempts to model social phenomena as complex interactions of several dynamic elements. An important problem of sociocybernetics lies in second-order cybernetics, as sociocybernetics is a societal self-description.” (https://de.wikipedia.org/wiki/Soziokybernetik. Accessed on 13.02.2018)
Q 7.10
Explain the concept of psychological cybernetics.
A 7.10
“Psychology is an experience-based science. It describes and explains human experience and behavior, their development over the course of life, as well as all relevant internal and external causes and conditions. Since not all psychological phenomena can be captured by empirical means, the importance of humanistic psychology should also be pointed out.” (https://de.wikipedia.org/wiki/Psychologie. Accessed on 15.02.2018) In an early contribution to cybernetic approaches in behavioral psychology, Norbert Bischof raised the question in the Psychologische Rundschau (Bischof 1969, pp. 237–256): “Does cybernetics have anything to do with psychology?” Bischof had already given a first answer himself a year earlier, in his article “Cybernetics in Biology and Psychology” (Bischof 1968, pp. 63–72), in which he speaks of processes in or on “a human being”, “a living being”, or “an organism” (ibid., p. 63), contrasting the thesis of the atomistic approach with the antithesis of the holistic approach and finally arriving at the synthesis of the cybernetic approach in psychology.
Q 7.11
What do you understand by biocybernetically oriented behavioral physiology?
A 7.11
“Biocybernetically oriented behavioral physiology, like holistic theories, tends to understand the organism from the “inner perspective”; in this respect, it is of historical scientific relevance that, for example, the life’s work of the German biologist and behavioral researcher Erich von Holst (1908–1962) culminated on the one hand in the exploration of the spontaneous activity of the organism, and on the other hand in the formulation of the reafference principle. The organism thus appears as a system that not only reacts to stimuli (afferences) but also always receives reafferences of its (spontaneous) actions. Here, too, the reflex arc closes into a “circle”, which is now precisely determined as a control loop.” (Bischof 1968, pp. 69–70)
Q 7.12
How can organismic behavior be captured in four consecutive steps according to Kalveram?
A 7.12
1. Observing behavioral acts. Questioning the seemingly self-evident. Highlighting noteworthy phenomena.
2. Description of these phenomena with suitable terms, here with cybernetic terms.
3. Experimental verification of hypotheses about relationships between statements.
4. Deriving behavior theory from this to explain human (and animal) behavior, i.e., predicting it within the framework of the chosen theory.
Q 7.13
Sketch and describe the block diagram of an abstract automaton. Supplement the picture with the representation of a human-environment relationship in block form, as a cybernetic system of two automata.
A 7.13
For the sketch, see Fig. 7.14. The biocybernetic control loop between the individual and the material environment is constructed through the respective input variables (command variables) w and x and through the reactive linkage variables e and a, which form feedback loops to the respective systems, individual and environment. Comparable biocybernetic functions can also be constructed, according to the same scheme, between the individual and the social environment, for example between a child as the individual and a mother as the social environment. This is intended to show that certain features of human behavior can be modeled, more or less abstractly, by automata. However, it is by no means intended to conclude that humans are automata.
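The coupling of the two automata can be made concrete with two minimal Mealy-style machines whose outputs are cross-fed as each other’s inputs: the “individual” acts on the “environment” and receives the environment’s reaction as its next input. States, alphabets, and transition rules are invented purely for illustration.

```python
def individual(state, percept):
    """(state, input) -> (new state, action)."""
    if percept == "cold":
        return "active", "heat"      # react to the environment ...
    return "resting", "wait"         # ... or do nothing

def environment(state, action):
    """(state, input) -> (new state, reaction)."""
    if action == "heat":
        return "warm", "warm"
    # without heating, the environment cools down again
    return "cooling", "cold"

s_ind, s_env, percept = "resting", "cooling", "cold"
trace = []
for _ in range(4):
    s_ind, action = individual(s_ind, percept)    # individual acts on env
    s_env, percept = environment(s_env, action)   # env reacts on individual
    trace.append((action, percept))

print(trace)  # heat/warm alternates with wait/cold: a closed loop
```

The closed action-reaction cycle is the point of the exercise; that two tiny automata reproduce it says nothing about humans being automata, as the answer stresses.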
Q 7.14
Describe in your own words the course of Beer’s cybernetic experiment in Allende’s Chile in the 1970s, based on the template. How do you personally view the application of such a cybernetic experiment to a society?
A 7.14
For the sketch, see Fig. 7.18. “On November 12, 1971, Beer went to the Moneda, the presidential palace in Santiago de Chile, to present his Cybersyn project to Salvador Allende. Beer had been invited by Fernando Flores, the technical director of “Corfo,” a holding company of the enterprises nationalized by the Allende government. The young engineer Flores wanted to introduce “scientific management and organizational techniques at the national level,” as he put it in an invitation letter to Beer. In order to cope with the pre-programmed economic crises in real time, Flores and Beer envisioned connecting all factories and businesses in the country through an information network. The Cybersyn team, composed of scientists from various disciplines, set to work, collecting unused teletypes and distributing them to all state-owned enterprises. Under the direction of the German designer Gui Bonsiepe, a prototype of an “Opsroom” (operations room) was developed, resembling a control room from the “Star Trek” universe, but it was never realized. Using telex and radio connections, data on daily production, labor, and energy consumption were sent throughout the country and evaluated daily by one of the few computers available in Chile at the time, an IBM 360/50 (indicators of “social malaise” included, among other things, absenteeism from work). As soon as one of the numbers fell outside its statistical margin, an alarm—in Beer’s vocabulary, an “algedonic signal”—was sent out, giving the respective plant manager some time to solve the problem before it was reported to the next higher authority if the signal was repeated.
Beer was convinced that this would, on the one hand, give Chilean companies almost complete control over their activities and, on the other, enable intervention from a central location when a serious problem arose.” The Cybersyn project was “technically demanding,” writes the computer historian Eden Medina, “but from the outset it was not just a technical attempt to regulate the economy. From the perspective of those involved, it could help advance Allende’s socialist revolution. The conflicts over the conception and development of Cybersyn simultaneously reflected the struggle between centralization and decentralization that disturbed Allende’s dream of democratic socialism.” On March 21, 1972, the computer produced its first report. Already in October, the system faced its first test in the face of strikes organized by the opposition and the professional associations (“gremios”). The Cybersyn team formed a crisis unit to evaluate the 2,000 telex messages arriving daily from all over the country. Based on this data, the government determined how to get the situation under control. It then organized 200 loyal truck drivers (compared to about 40,000 strikers) to ensure the transport of all vital goods, and so overcame the crisis. The Cybersyn team gained prestige, Flores was appointed Minister of Economy, and the British Observer headlined on January 7, 1973: “Chile run by Computer.” On September 8, 1973, the president ordered the central computer, which had been housed in the abandoned rooms of the Reader’s Digest editorial office, to be moved to the Moneda. Just three days later, the army’s fighter planes bombed the presidential palace, and Salvador Allende took his own life.” (Rivière 2010)
Q 7.15
What do you understand by “information-theoretic cybernetic didactics”?
A 7.15
“The understanding of didactics in information-theoretic cybernetic didactics focuses on teaching and learning as a concrete method in the sense of technological feasibility. The goal is the greatest possible efficiency in the teaching and learning process for the purpose of optimization. Further development can be seen in the approach of cybernetic-constructive didactics.” (https://service.zfl.uni-kl.de/wp/glossar/informationstheoretisch-kybernetische-didaktik. Accessed on 18.02.2018)
Q 7.16
Sketch and describe the process in the cybernetic educational control loop according to v. Cube. What are the dominant criticisms raised by Pongratz against v. Cube’s cybernetic education model?
A 7.16
For the sketch, see Figs. 7.19 and 7.20. “In the analysis of teaching as a control process, cybernetic pedagogy achieves its goal of describing school learning processes as an event in which a measurable variable (student) is brought to a desired setpoint (learning objective) in a system to be controlled by an automatic device (program), regardless of disturbances affecting the system. The systems-theoretical analysis of the teaching situation clearly reveals teaching as a process of goal achievement in the sense of cybernetic control. Corresponding to the various subprocesses of control, five areas can be distinguished: goal area, controller function, control function, learning system, and sensor area. The function of the controller (in the terminology of systems-theoretical didactics: the selection element) is usually taken over by the teacher in the concrete teaching situation. On the one hand, the teacher designs the teaching strategy (depending on the given setpoint) and, on the other hand, acts as a sensor in the interaction with the learner, controlling the respective learning outcome (the actual value). The position of the controlled variable is occupied by the learner, on whom the controller acts. The action takes place by means of the actuator. (In traditional terminology, the actuating device of the control process would roughly correspond to the teaching media.)” (Pongratz 1978, pp. 148–149) Criticism: “The cybernetic approaches in didactics have to face the question to what extent they can still take into account the spontaneity and self-activity of students and teachers within their theoretical and practical concept, to what extent they do not merely pay lip service to human reflexivity and autonomy, but preserve the idea of human freedom and promote its concrete individual and social realization.” (ibid., p. 156)
References

Bartholomae F, Wiens M (2016) Spieltheorie. Ein anwendungsorientiertes Lehrbuch. Springer Gabler, Wiesbaden
Bischof N (1968) Kybernetik in Biologie und Psychologie. In: Moser S (ed) Information und Kommunikation. Referate und Berichte der 23. Internationalen Hochschulwochen Alpbach 1967. Oldenbourg, München/Wien, pp 63–72
Bischof N (1969) Hat Kybernetik etwas mit Psychologie zu tun? In: Psychologische Rundschau, Bd XX. Vandenhoeck & Ruprecht, Göttingen, pp 237–256
Deutsch KW (1986) The nerves of government: models of political communication and control. Current Contents, This Week’s Citation Classics, No. 19, May 12, 1986
Luhmann N (1997) Die Gesellschaft der Gesellschaft. Suhrkamp, Frankfurt am Main
Mann H, Schiffelgen H, Froriep R (2009) Einführung in die Regelungstechnik, 11th rev. edn. Hanser, München
McCulloch W, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5(4):115–133
Meir S (n.d.) Didaktischer Hintergrund. Lerntheorien. https://lehrerfortbildung-bw.de/st_digital/elearning/moodle/praxis/einfuehrung/material/2_meir_9-19.pdf. Accessed 12 Feb 2018
Pongratz LJ (1978) Zur Kritik kybernetischer Methodologie in der Pädagogik. Europäische Hochschulschriften. Lang, Frankfurt am Main
Probst GJB (1987) Selbstorganisation. Ordnungsprozesse in sozialen Systemen aus ganzheitlicher Sicht. Parey, Berlin/Hamburg
Rid T (2016) Maschinendämmerung. Eine kurze Geschichte der Kybernetik. Propyläen/Ullstein, Berlin
Rivière P (2010) Der Staat als Maschine. Das Kybernetik-Experiment in Allendes Chile. Le Monde diplomatique (deutsche Ausgabe), 12.11.2010, p 19
Röhler R (1974) Biologische Kybernetik. Teubner, Stuttgart
Rosenblueth A, Wiener N (1943) Behavior, purpose and teleology. Philos Sci 10(1):18–24
Rosenblueth A, Wiener N, Bigelow J (1950) Purposeful and non-purposeful behavior. Philos Sci 17(4):318–326
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423, 623–656 (reprinted with corrections)
Simon FB (2009) Einführung in Systemtheorie und Konstruktivismus. Carl Auer, Heidelberg
Spitzer M (1996) Geist im Netz. Modelle für Lernen, Denken, Handeln. Spektrum Akademischer Verlag, Heidelberg/Berlin
Spitzer M (2012) Digitale Demenz. Wie wir uns und unsere Kinder um den Verstand bringen. Droemer, München
Steinbuch K (1965) Automat und Mensch. Kybernetische Tatsachen und Hypothesen, 3rd rev. and exp. edn. Springer, Berlin/Heidelberg/New York
Wiener N (1963) Kybernetik. Regelung und Nachrichtenübertragung in Lebewesen und in der Maschine. Econ, Düsseldorf/Wien (Original: Cybernetics or control and communication in the animal and the machine, 2nd exp. edn. MIT Press, Cambridge, MA)
Wohlleben P (2015) Das Geheimnis der Bäume. Was sie fühlen, wie sie kommunizieren – die Entdeckung einer verborgenen Welt. Ludwig, München
Annex I
Explanations for Fig. 7.17, following Deutsch (1969, pp. 342–345), in the order of the original source
Current Information—Coming From Outside the Decision System—(O1–O5)
O1 Current general information on political events abroad (part of the external news supply).
O2 Current general information on domestic political events (part of the internal news supply).
O3 Current information about foreign countries (after selection by receiving organs).
O4 Current information about the home country (after selection by receiving organs).
O5 Current information on foreign and domestic affairs (after verification and unification).
Fig. A.1 Current information O1–O5
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 E. W. U. Küppers, A Transdisciplinary Introduction to the World of Cybernetics, https://doi.org/10.1007/978-3-658-42117-5
Previous Information—Taken From the System’s Storage Facilities—(R1–R6)
R1 Information from distant memories (after extraction and reorganization).
R2 Information from recent or current memories (after extraction and reorganization).
R3 United information from memories.
R4 Unified information from memories (after selective extraction).
R5 Information extracted from memory after review for appropriateness (using selection criteria such as culture, values, personality structure, and logical fit); passed on to the preliminary decision area.
R6 Pertinent memories are forwarded to the final decision area.
Fig. A.2 Previous information R1–R6
Combined Information—Consisting of Memories and Externally Supplied Data—(C1–C7)

C1 Unified selected data and pertinent memories are forwarded (e.g., in the form of draft programs) for final decision-making.
C2 Combined selected data and memories (after reexamination for feasibility and appropriateness as policy procedures).
C3 United data are forwarded in abbreviated form to the area of confrontation and simultaneous screening.
C4 Unified data in abbreviated form (after checking for usefulness to consciousness).
C5 United data and memories (after selection and rearrangement in the area of conscious confrontation) are forwarded to the area of final decision.
C6 Political procedures are forwarded to the foreign-policy effector organs after final selection.
C7 Political procedures are forwarded to the domestic-policy effector organs after final selection.

Fig. A.3 Combined information C1–C7

Note: Procedures at the C4 and C5 levels need not always be consistent with each other or within themselves. The American Congress, for example, could vote for a foreign policy resolution calling for greater anti-Communist efforts in the Western Hemisphere while cutting economic aid to Latin American countries; or the West German government could call on England to help defend West Berlin while threatening British trade by excluding England from the Common Market. Such inconsistencies can become visible early, during the rearrangement and projection of the abbreviated information symbols in the area of simultaneous screening at the "level of consciousness". Otherwise, the later feedback of messages about the first results of inconsistent actions taken according to these procedures in the external world may still be in time to allow a correction of the procedures at a subsequent stage.

Fig. A.4 Combined information C7 (continuation of Fig. A.3)
Feedback of Information on the Consequences of Actions that the System Causes in Its Relationships with the Outside World—(F1–F4)
F1 Feedback of information on the results of foreign policy actions.
F2 Feedback of information on the results of domestic policy actions.
F3 Feedback of information gathered by receiving organs in the foreign-policy sphere.
F4 Feedback of information gathered by receiving organs in the domestic-policy sphere.
Fig. A.5 Feedback of information F1–F4
Principle: Feedback of any kind is the corrective for any action in complex networked systems.

No matter in which societal, technical, economic, environmental, or political context feedback occurs, it acts as a system-stabilizing element. This is especially true when it is part of an overarching societal aggregating network (see also Sect. 7.4).
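The stabilizing effect of such corrective feedback can be illustrated with a minimal sketch of a negative feedback loop; the variable names, the gain value, and the number of steps are illustrative assumptions, not part of Deutsch's model:

```python
# Minimal sketch of the feedback principle: a system state deviates from
# its target, and feeding back the observed error corrects each next action.

def feedback_loop(target, state, gain=0.5, steps=20):
    """Drive `state` toward `target` by feeding back the observed error."""
    for _ in range(steps):
        error = target - state        # feedback: observed consequence of prior actions
        state = state + gain * error  # corrective action proportional to the error
    return state

# Whatever the origin of the deviation, the feedback acts as a
# system-stabilizing element: the state converges toward the target.
print(round(feedback_loop(target=100.0, state=0.0), 2))  # → 100.0
```

Without the feedback term the state would never approach the target; with it, the deviation shrinks on every cycle, which is the "corrective for any action" stated above.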
The System of “Will”—The Most Important Test Fields (S1–S4)
S1 Test field for the selective intake of current information.
S2 Test field for extracting pertinent memories from memory.
S3 Test field for the selection of appropriate brief information for the area of confrontation and simultaneous screening ("consciousness").
S4 Test field for selecting appropriate and feasible policy procedures.
Fig. A.6 The main test fields S1–S4
The Main Information Flows for Controlling the Test Fields—(W1–W17)
W1 Information used to direct the attention or tracking activity of receiving organs in the foreign-policy sphere in a certain direction.
W2 Information used to direct the attention or tracking activity of receiving organs in the domestic sphere in a certain direction.
W3 Information coming from outside regulates the test field for the admission of purpose-serving data into consciousness.
W4 Information extracted from memory regulates the test field for selective information intake.
W5 Information selectively extracted from memory regulates the test field for extracting further pertinent memories.
Fig. A.7 The main information flows for controlling the test fields W1–W5
W6 Information about the preliminary decision regulates the test field for selective information intake (example of a "policy of self-affirmation").
W7 Information about the preliminary decision regulates the selection criteria in the extraction of interesting memories (example of a "search for precedents").
W8 Information about the preliminary decision regulates the test field for the inclusion of appropriate data in consciousness.
W9 Information about the preliminary decision regulates the test field for the intake of appropriate memories.
W10 Information about the preliminary decision regulates the test field for the selection of appropriate and feasible procedures.
W11 Information about the results of simultaneous confrontation and screening ("consciousness") regulates the test field for receiving information coming from outside.
W12 Information about the results of simultaneous confrontation and screening ("consciousness") regulates the test field for the inclusion of purposeful data in consciousness.
Fig. A.8 The main information flows for controlling the test fields W6–W12 (Continuation of Fig. A.7)
W13 Information about the results of simultaneous confrontation and screening ("consciousness") regulates the test field for the selection of appropriate and feasible procedures.
W14 Information about the results of simultaneous confrontation and screening ("consciousness") is passed through the test field for shielding consciousness to the test field for the selection of appropriate and feasible procedures (consideration of the "unthinkable").
W15 Information on the appropriateness and feasibility of procedures regulates the test field for the inclusion of appropriate data in consciousness.
W16 Expedient information from memory regulates the test field for the selection of expedient and feasible procedures.
W17 Information about the final decision regulates the test field for shielding consciousness.
Fig. A.9 The main information flows for controlling the test fields W13–W17 (Continuation of Fig. A.8)
Smaller or Secondary Information Flows—(M1–M6)
M1 Selected information from the external world is transferred to memory, to be stored there and eventually retrieved as a memory. This information flow is small in terms of immediate decision formation; its actual volume, however, can be quite considerable.
M2 Selected information from the outside world regulates the likelihood of retrieving memories from memory ("That reminds me ...").
M3 Memory retrieval command.
M4 Command (associative path, chain reaction) within memory.
M5 Information about the results of simultaneous confrontation and screening ("consciousness") is forwarded to the preliminary decision area.
M6 Brief information about the final decision is returned to the area of simultaneous confrontation and screening.
Fig. A.10 Smaller and secondary information flows M1–M6
Consciousness The control loop "C5–M6" makes the final decision "conscious" after repeated cycles. C5: combined data and memories (after selection and reorganization in the area of conscious confrontation) are forwarded to the area of final decision. M6: brief information about the final decision is sent back to the area of simultaneous confrontation and screening.
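Read cybernetically, the C5–M6 loop is an iterative cycle: a draft decision is forwarded (C5), a brief summary is fed back (M6), and the cycle repeats until successive drafts no longer change. The following sketch illustrates this reading only; the function names, the revision rule, and the convergence test are illustrative assumptions, not part of Deutsch's model:

```python
def c5_m6_cycle(draft, revise, max_cycles=100):
    """Repeat the C5 (forward) / M6 (feed back) cycle until the
    decision no longer changes, i.e. has become 'conscious'."""
    for cycle in range(1, max_cycles + 1):
        decision = revise(draft)  # C5: draft forwarded to the final decision area
        if decision == draft:     # M6: fed-back summary matches the draft; stop
            return decision, cycle
        draft = decision          # feed the summary back for another cycle
    return draft, max_cycles

# Example: a draft that grows until it hits a bound, then stabilizes
decision, cycles = c5_m6_cycle(1, lambda d: min(d * 2, 16))
print(decision, cycles)  # → 16 5
```

The point of the sketch is the repetition: "consciousness" is not a single pass but the fixed point that the forward/feedback cycle settles into.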
Decision Areas
D1 The area of resolution and recombination of memories is inevitably also a decision area, since the formation of certain combinations and the omission of other possible combinations has indirectly the same effect as a series of partial decisions. In such combinations not only data but also their arrangement and structural patterns, as well as ideas and values, are combined.
D2 In the area of preliminary decision, the combination of memories and current information produces a clear preliminary decision.
D3 The area of simultaneous confrontation and screening has indirectly the function of a decision area, because certain combinations are formed from the simultaneously acquired data and other possible combinations are omitted; the successful combinations again have the effect of partial decisions.
D4 In the area of the explicit and final decision, the final result may already be largely anticipated by the processes in the preceding decision areas D1–D3.

Fig. A.11 Decision areas D1–D4
Reference
Deutsch KW (1969) Political cybernetics: models and perspectives. Rombach, Freiburg im Breisgau