Thinking by Machine

Books by Pierre de Latil
THE UNDERWATER NATURALIST
MAN AND THE UNDERWATER WORLD (with Jean Rivoire)
THINKING BY MACHINE

Thinking by Machine
A STUDY OF CYBERNETICS
by Pierre de Latil
TRANSLATED BY Y. M. GOLLA
ILLUSTRATED WITH PHOTOGRAPHS AND DIAGRAMS
WITH A FOREWORD by Isaac Asimov

HOUGHTON MIFFLIN COMPANY, BOSTON
The Riverside Press, Cambridge
1957
To Pierre Dejean, Dr. Roger Feletin, Paul Gasiglia, Jacques Goiran, Louis Liaigre, Dr. Pierre Martin, Henri Passeron, Georges Renaud, Dr. Jacques Sauvan, Francis Vaglio, who have been good enough to be my technical advisors, my guinea-pigs and my “catalysers” and who helped me as an independent investigator not to feel too isolated. P. de L.
First published under the title La Pensée Artificielle by Librairie Gallimard, Paris
First published in the United States, 1957
Copyright © 1956 by Librairie Gallimard
All rights reserved including the right to reproduce this book or parts thereof in any form
Library of Congress Catalogue Card No. 57-6927
First printing, June, 1957
The Riverside Press, Cambridge, Massachusetts
Printed in the U.S.A.
FOREWORD

The expansion of knowledge, particularly and chiefly in the natural sciences, has taken hold of society in a gigantic grip. It has been molding it and changing it in each generation since the eighteenth-century beginnings of the Industrial Revolution. The rate of change is increasing and now, two hundred years after the beginnings of the Industrial Revolution, has become runaway. The earth now supports a population much higher both in numbers and in living standards than would have been thought conceivable two centuries ago. If the machine were to vanish tomorrow, most of earth’s population would starve and the remnant be reduced to wretchedness and ruin. The machinery that preserves our society runs on quantities of power that are increasingly straining the coal and oil resources of the world, and it is that fact more than anything else that directs and dictates the international policies of the major nations of the world.

Having come so far in this sort of society, we have no choice but to go further. With population continuing to increase, we must go further if the standard of living is to remain what it is. And if the population of Asia, Africa and South America wish to lessen the gap between their present standards and ours, advance must be doubled and redoubled. Nuclear and solar power must supplement and replace the fossil fuels. Newer and better technology must be evolved; further expansion of knowledge, further invasion of the unknown must come.

To accomplish this, mankind needs more technicians, more engineers, more scientists. Not merely more in absolute numbers; we must have more in terms of percentage of the total population. The increase in the number of technically trained men must be proportionally greater than the increase in the general population.

And yet, along with all this, the expansion of knowledge
has resulted in another phenomenon which works against the fulfillment of the very needs it has created. There has been a weakening of communication between the scientist and the layman. As the achievements of science become more numerous, more awe-inspiring, more divorced from the little corner of common-sense familiar to all of us, the non-scientist is more and more driven into frustration and defeat as far as any real understanding of the details of his life is concerned. More and more he is reduced to push-a-button-and-have-faith. A jet plane must be taken with the same faith that a broomstick was some centuries ago. The aerodynamic principles involved in both means of transportation are beyond question.

But from lack of understanding comes fear, and from fear comes hate. If the scientist becomes a creature removed, it is only a step to that of being a creature first mistrusted and then fought. The simultaneous fear of, and need for, the scientist are creating an increasing tension which is forcing a new and unprecedented crisis on society, one that is sickeningly apparent now and may become overwhelming in the near future.

At a time when the technically trained man is needed in unprecedentedly large numbers, public interest in science is declining. Science courses in secondary schools are decreasing and those that are given are often of inferior quality. Science teachers are hard to find. Students turn away from science. Why? The “difficulties” and “mysteries” of science frighten off students. The low pay offered the science teacher by a public unappreciative and even distrustful of science frightens off the teachers.

Nor can the scientist, who has himself already tackled the difficulties of science and mastered them, who has found the necessary teachers and taught himself what remained, afford to be blasé or smug about his own achievement. It would be dangerous for him to bask in the rarefied air of the intellectual mountain-peaks and watch the struggle in the valley below with an I-did-it-why-can’t-they attitude.

Furthermore, the scientist faces a peculiar danger of his
own in this weakening of communication. As the years pass and the quantity and variety of research increase, it becomes increasingly difficult for an individual scientist to maintain a firm intellectual grip on the various ramifications of a chosen field of knowledge. Little by little, the “one-mind field” contracts. More and more, the individual scientist becomes dependent on his fellows for any understanding at all of allied fields.

If communication between fields is allowed to weaken without limit, the effect may well be that of a negative feedback on the advance of knowledge. Each sub-sub-sub-discipline, ingrown and hardened into its own mold, will reach self-imposed limits. Only through the partial integration of neighboring (and even not so neighboring) disciplines can progress be resumed. Yet what if, in time, the species grow so far apart as to make cross-fertilization impossible; or at least so infrequent as not to maintain the growth of science?

It seems to me that there are two possible solutions to this problem, one general and one specific, and both are exemplified by this book.

First, the general solution. The scientist must recognize his responsibility to communicate. He cannot and must not expect laymen and scientists of other fields to come to him or be damned. He must go out to meet them partway or he will be damned with them. Each scientist must do what he can to popularize science in the sense that he must make its content understandable to as many human beings as possible.

Granted, he cannot communicate all he knows to any man not trained equivalently with himself. Granted, he can communicate nothing of what he knows to many individuals unequipped by nature or temperament to receive what he would give them. And yet he must convey as much as he can to as many as he can. To scientists outside the field, the popularization, even in its simplification and incompleteness, may stimulate a new understanding of one’s own field in the light of a neighbor’s, with satisfying and fertilizing consequences. To a young student, it may mean an increased interest that will encourage him, perhaps, to turn to science as a career. To anyone at all, it may mean the increased comfort and peace of mind
that even a little bit of new understanding often brings.

Thinking by Machine is an excellent example of a popularization of a field of science. M. de Latil ranges over the sum of human experience, presenting it in the light of a clear and fresh understanding that makes it seem new-minted. What makes the book particularly interesting to me, however, is the nature of the field with which it deals, for that, in itself, is the second and specific solution to the problem of continued scientific advance.

Cybernetics is not merely another branch of science. It is an Intellectual Revolution that rivals in importance the earlier Industrial Revolution. The Industrial Revolution began the liberation of the animal and human muscle by substituting for it the energy of burning coal and oil. For two hundred years, mankind has been developing variations on that theme. And yet, in a certain way, the human mind was not freed. Rather, it was further enslaved as the Industrial Revolution progressed. As technology became more intricate, a larger percent of human mental effort was directed toward paper work and red tape; the keeping and correlation of records; the preparation and filing of statistics — in short, all the routine that is required to direct the machinery man had created.

Is it possible that just as a machine can take over the routine functions of human muscle, another can take over the routine uses of human mind? Cybernetics answers, yes. The human mind, thus freed from routine, can tackle, with that much greater concentration of effort, the creative aspects of mental activity. Mental energy can be turned to the task of strengthening the lifeline of communication (using cybernetics as a new and powerful tool even here) and of pushing back more speedily still the boundaries of the unknown.

This book, Thinking by Machine, explains one way (perhaps the only way) in which the threat of eventual intellectual stagnation, with all the ills that it will bring upon us, can be, if not removed completely and forever, at least shoved far, far into the future.

Isaac Asimov
Boston University School of Medicine
1957
TABLE OF CONTENTS

Foreword

Chapter I: AN EXPLOSIVE SCIENCE
    The New Round Table Conference
    The Advantages of Feedback
    A Quick Review of the Milestones
    Are Machines capable of Reasoning?
    The Science of Self-government
    The Mathematics of Human Activity
    Synthetic Animals?

Chapter II: THE REALM OF THE ARTIFICIAL
    What is a Tool? What is a Machine?
    Controlling Energy and Operational Energy
    The Determination of Automatism
    The Recording of Thought
    The Nine Components of Action

Chapter III: THE MIRACLES OF FEEDBACK
    Where the Machine surpasses Itself
    The Wonderful Mechanism of Anti-fading
    An Industrial Revolution

Chapter IV: TOWARDS FACTORIES WITHOUT MEN: AUTOMATIC CONTROL
    The Defects of Feedback: Lack of Strength and Delay in Regulation
    The Farcot Servo-motor
    Fixed and Variable References
    Regulators and Servo-mechanisms
    An Important Mechanism: the Amplifier with Negative Feedback
    General Definitions
    Master and Slave Mechanisms

Chapter V: THE LOGIC OF EFFECTS
    The Study of Well-trodden Paths
    Constancy Effectors and Tendency Effectors
    Direction of the Effects and the Factors
    The Regulation and the Guiding of a Factor
    Regulation by Interaction
    Regulation by Retroaction
    More Complex Cases
    The Universal Law of Regulation
    Interaction and Retroaction

Chapter VI: RETROACTION, THE SECRET OF NATURAL ACTIVITY
    All Stabilization is effected by Feedback
    The Baille-blé (Mill-hopper), Progenitor of Retroaction
    Chemical Equilibria
    Vital Equilibria
    Where Logic destroys Mankind
    Stable and Unstable Equilibria
    A Strange Relationship: An Oscillating Circuit and an Atomic Pile
    The End-Goal as a Logical Concept
    From Bus Queues to Indian Famines
    A New Light on Political Economy
    The Secret of the Stars
    Bancroft’s “Universal Law”

Chapter VII: FUNCTIONAL CAUSALITY
    Progressive Organization
    Facing a Blank Wall
    The Uncertainties of Causality
    The Effect as a Function
    The Law and the Field
    The Enchanted Field of the Lady Variables
    Does Chance Exist?
    The Probability of Effects
    When the Causal Chains fuse
    A Definition of Probability

Chapter VIII: ANTI-CHANCE
    The Important Concept of “Clinamen”
    The Law of Twofold Function
    Eddington’s Views on Anti-chance
    When the Cause is Part of the Effect
    Contingency, Determinism, Organization
    The Establishment of Order
    Essential Laws and Existential Laws
    An Image of the Universe
    Entropy and Anatropy

Chapter IX: SYNTHETIC ANIMALS
    Subtle Behaviour
    The Secrets of the Tortoises
    A Principle of Life: the Economy of Means

Chapter X: THE USE OF MODELS
    The “Animal-Machine” of Descartes
    Condillac’s Statue and Grey Walter’s Tortoises
    The Tropism of the Tortoise
    The Chess Player of Torres y Quevedo
    An Essential Concept: the Unpredictable
    Why should the Problem not be Equally Simple?
    The Art of Compromise
    The Learning Box

Chapter XI: CALCULATING MACHINES
    The Principle of Calculating Machines
    Addition by Electrical Impulses
    Binary System: the Logical Language
    The Way of the Mind and the Way of the Machine
    The First American Realizations
    In the Land of Descartes
    The “Memory” Unit
    Electronic Brains?

Chapter XII: THE MECHANISMS OF ANTICIPATION AND MEMORY
    Where Feedback is Powerless
    Has Cybernetics many Practical Applications?
    The “Intellectual” Operations of Anti-aircraft Defence
    A Parallel between a Gun and a Man
    Mechanical Reasoning
    Machines without Men
    Memory and Logic
    The Machine which teaches Itself

Chapter XIII: ASHBY’S HOMEOSTAT AND THE FIFTH DEGREE OF AUTOMATISM
    Walter Cannon and Homeostasis
    A Mechanism that is like No Other
    Spare “Determinisms”
    The Machine which finds Its Own Way
    Artificial Education
    The Machine that receives Punishment
    Design for a Brain
    Towards Machine Government

Chapter XIV: AT THE LEVEL OF HUMAN FUNCTIONS
    The Multistat, a Multiple Goal-seeking Machine
    Equilibrium: the Ultimate Goal
    The Organism integrates what It perceives
    Machina Liberata
    The Integration of the Machine, seen from the Temporal Aspect
    A Model of Nervous Function
    The Machine is Something Else
    Where Certain Words are Meaningless

Chapter XV: THE HIGHEST DEGREES OF AUTOMATISM
    Does Matter arise out of the Void?
    The “Qualities” of Freedom
    Between Probability and Certainty
    The Logic of Effects
LIST OF ILLUSTRATIONS

Vivian Dovey and Grey Walter with their two children: Timothy, a human baby, and Elsie, the electronic tortoise
Pioneers of cybernetics: W. Ross Ashby, W. McCulloch, Grey Walter and Norbert Wiener
The internal anatomy of Elsie, machina speculatrix, created by Grey Walter
The Conditional Reflex Analogue, CORA, made by Grey Walter to demonstrate his hypothesis of learning
A game of chess played between the classical mechanical device and a cybernetician
Louis Couffignal and Aiken at the Cybernetic Conference of Paris
Ashby’s machine: a revolutionary concept
The behaviour of the electronic tortoise Elsie during a period of two minutes
Thinking by Machine
CHAPTER I

An Explosive Science

“One of the greatest obstacles to human knowledge is the tendency of sciences to segregate themselves into systems. Systems tend to enslave men’s minds.”
Claude Bernard
Cybernetics? The word has suddenly become fashionable. Soon, perhaps, it will supplant “atomic”, which replaced “electronic”, which superseded the words “electric” and “automatic”. Every day we read the magic word “robot”. Every newspaper carries headlines about “mechanical brains”, “thinking machines”, “synthetic animals”. Even the reader who most abhors sensational journalism develops a passionate interest in these devices, which, shown in this light, destroy all our preconceived ideas of machines.

What exactly is behind these fascinating toys, the artificial tortoises of Grey Walter which “live” independently, “feed” on light and seek a quiet corner in which to “digest”, using up energy in the search? What is behind these calculating machines as big as modern apartments which improve on brains, as an aeroplane improves on legs? What is behind this homeostat of the English psychiatrist Ashby, which we read about without understanding anything beyond the fact that it is a revolutionary machine?

It is a whole new science promising a major philosophic, as well as a scientific, upheaval. A science, indeed, of the first order, it was aptly conceived during the war: if the scientific revolution exploded the atomic bomb, cybernetics was self-explosive. The atomic revolution operated at a technical level and has affected only the technical domain, having brought us only experimental proof of physical and chemical theories elaborated many years ago. The cybernetic revolution developed with an
astonishing rapidity. The detonation had for a time been sparking across two spheres, which were formerly independent or even opposed — mathematics and physiology, machines and life. Vast potentials of knowledge were being accumulated at each pole, when abruptly the two poles came together and the spark burst into flame. By its light, the abyss which we believed to stretch between mind and matter disclosed a new world; in the illumination a new science was suddenly to be seen from the familiar shores. As this book will attempt to show, not a single field of science — or of ignorance — has not received some glimmer of the great light.

It all starts from the simple idea that life can be — if not explained — at least approximated by mathematical reasoning and experiment. Claude Bernard, whose many ideas prevail in cybernetics generally and in this book in particular, wrote: “Nervous processes are nothing but a mechanical and physical apparatus created by the organism itself. These mechanisms are more complex than those of simple bodies, but they do not differ in the laws which govern their phenomena and that is why they can be submitted to the same theories and studied by the same methods.”

And if, as Poincaré has said, “All great progress takes place when two sciences come together, and when their resemblance proclaims itself, despite the apparent disparity of their substance”, what progress, then, might one expect from the intermarriage of biology and mathematics?

THE NEW ROUND-TABLE CONFERENCE
The story began in Boston in the years immediately preceding the war. A nucleus of scientists, drawn chiefly from the various faculties of Harvard University, used to sit down together at a round table in Vanderbilt Hall for dinner once a month. After dinner, one of the guests would expound some aspect of scientific technique which would provoke a discussion. Thus, specialists from different fields of knowledge grew accustomed to one another’s points of view and an atmosphere of mutual understanding was built up.

Norbert Wiener, a professor of mathematics from the Massachusetts Institute of Technology, was brought along one
day by one of his former pupils to this round-table discussion group which was later to become so significant in the history of science. There he met the promoter and instigator of these reunions, Dr. Arturo Rosenblueth, then professor at the Harvard Medical School, and close collaborator of the great physiologist, Walter Cannon. The two men both deplored the ever-growing tendency towards over-specialization that was complicating the free exchange of views between workers in different branches of science. Wiener writes that most specialists tend to “regard the next subject as something belonging to his colleague three doors down the corridor, and will consider any interest in it on his part as an unwarrantable breach of privacy.” Not since Leibniz, Wiener maintains, has the world known an encyclopaedic mind capable of viewing science as a whole. During the last two centuries the domain of the specialist has never ceased to deepen, and at the same time to become more and more restricted. (This has provoked Bernard Shaw’s sally that the specialist is the man who gets to know more and more about less and less so that he finishes by knowing everything about nothing.)

Between their disciplines, certain specialists had been able to sound and even to explore uncharted territories, but one and all they had come up against the limitations of their own techniques, which had prevented any great progress in their inquiries. None of them had understood that this same no-man’s-land had been approached from other quarters by scientists in other branches. Some of them had already christened their concepts by different names, and even when they met they were unaware of the help they could have given one another.

It was towards the study of such a no-man’s-land that the meetings at Boston finally directed the attention of Wiener, the mathematician, and Rosenblueth, the physiologist. But those who were to create the science of cybernetics by bridging the gap between the two sciences would certainly have remained bogged down in theoretical speculation had not the war presented them with a practical problem to solve. In short, were an enemy aeroplane to approach, how could the anti-aircraft fire be certain of reaching it?

Since the speed of aeroplanes in relation to that of the
projectile was no longer negligible, the problem was not, in fact, one of aiming at the target itself, but of determining the point at which the shell would hit its objective. If the aeroplane kept on a straight course, it would be comparatively easy to determine this point. But, in practice, as soon as a pilot encounters enemy anti-aircraft fire, he changes course. The problem, then, is to predict the position of a plane in a curved trajectory. The curvature of this course is restricted by the speed of the plane and the physiological limits of the pilot. But within these limits the pilot has an infinite choice of action. He will no doubt suppose that the gunners will aim at the point where the aeroplane would be a few seconds later, if it were not to change course: consequently he will alter his course. The gunner, then, has to make allowances in his calculations for the future actions of the pilot. So the problem becomes not only mathematical but psychological.

However, Norbert Wiener and Julian Bigelow, another mathematician, were instructed, as members of an anti-aircraft advisory board, to investigate the possibilities of a machine which would control anti-aircraft fire. This machine had, therefore, to take into account the two human reactions, that of the pilot who was free within certain limits, and that of the gunner who was aiming at the aeroplane. It had to incorporate, as an integral whole, two nervous systems in a mechanical design. On that day cybernetics was born — a science which was destined, rightly, to bring mechanics and neurology into a common field of research.
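To make the prediction problem concrete, here is a minimal modern sketch, not a description of any apparatus that Wiener and Bigelow designed: from a few recent observations of the target’s track, fit a smooth curve and extrapolate it over the shell’s time of flight to obtain an aiming point. The track figures and the four-second flight time are invented for illustration only.

```python
# A toy version of the anti-aircraft prediction problem: extrapolate the
# target's recent track to where it will be when the shell arrives.
# The track data and the shell flight time are invented for illustration;
# real fire control used far more elaborate statistical methods.
import numpy as np

# Observed positions of the aeroplane over the last few seconds.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # seconds
x = np.array([0.0, 210.0, 430.0, 660.0, 900.0])   # metres
y = np.array([0.0,  40.0, 110.0, 210.0, 340.0])   # metres (the pilot is turning)

shell_flight_time = 4.0                 # seconds from firing to burst (assumed)
t_hit = t[-1] + shell_flight_time

# Fit a quadratic to each coordinate (roughly: constant turn = constant
# acceleration) and evaluate it at the predicted instant of impact.
aim_x = np.polyval(np.polyfit(t, x, 2), t_hit)
aim_y = np.polyval(np.polyfit(t, y, 2), t_hit)

print(f"aim the burst at roughly ({aim_x:.0f} m, {aim_y:.0f} m)")
```

The sketch ignores exactly what made the real problem interesting: the pilot, knowing he is being predicted, will not keep to any smooth curve.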
THE ADVANTAGES OF FEEDBACK

In order to solve this problem of trajectory prediction, Wiener and Bigelow first thought of using Bush’s differential analyser, a series of integrating wheels; but later they found that the solution lay in what the servo-mechanism specialists call feedback. Since this form of mechanism appeared to play an essential role in the functioning of the nervous system, it seemed as if the core of the problem had been reached: that of the integration into a mechanism of a function which had hitherto been regarded as essentially human. What is a feedback system? For the present let us say simply that it is a device which makes an effect act back on one of its causes, thus enabling this effect to carry out its given aim. The differences
between the real and the ideal effect are transformed into energy which is fed back into the mechanism and tends to cancel out the original differences which set the system in motion.

Wiener and Bigelow saw the prototype of feedback in the governor designed by Watt to regulate the speed of his steam-engine. The difference between the desired speed and the actual speed is used to adjust the steam supply, so that the speed is corrected to its predetermined value in spite of variations in the load.

In the same way our bodies automatically adapt themselves to the work to be done: by feedback. So we regulate our movements automatically according to the action that they are required to perform. For example: when we pick up a pencil a constant correction of movement takes place, determined by the deviation of the actual position of our hand from the position it should reach in order to perform the action we want it to: that of picking up the pencil by the most appropriate method. Just as the Watt governor, with its system of fly-weights and levers linked to the spindle, sends back messages to the source of power about the performance of the whole, so a complicated system of nerves sends information to the cortex which, by means of other feedback systems, informs the motor cortex whether or not the original order of execution should be modified.

If the corrections brought about by a feedback system are too weak, the action will not follow the exact pattern as intended. But too strong a feedback is even more serious; the consequent correction will overshoot the mark, bringing about a new excessive correction in the opposite direction and giving rise to an uncontrollable series of oscillations. It is well known amongst engineers that a self-regulator can go into a persistent oscillation; it is then said to “hunt”. Still worse, the same system which can proportion so well its effort to the work to be done, can, if connected in reverse, bring about its own destruction; according to whether the machine has begun to slow down or accelerate, it will either come to a dead stop or race until it seizes up.

Do such aberrations occur in the feedback of the nervous system and if so, what are their causes? This was the question that Norbert Wiener put to his friend Rosenblueth, the
physiologist. Thus cybernetics came into being, through the union of mechanics and physiology. Its birth certificate was an article published by Rosenblueth, Wiener and Bigelow, in 1943, in Philosophy of Science, under the title “Behaviour, Purpose and Teleology”, but the word “cybernetics” did not appear in it. However, the basis of what is current knowledge today was laid down in that article, in particular the striking parallelism between certain disturbances of the nervous system and those of mechanical regulation. Thus, in cases of ataxia the feedbacks which convey information about the movements of the limbs exhibit disturbances due to various causes; the simplest movements degenerate into hesitation followed by oscillations. The paths of nervous communication are affected, that is, the ascending tracts of the spinal cord by which information about the performance of the acts is conveyed to the brain.

The precise word is “information”, a term which was about to become more and more widely used. It symbolizes another aspect of anti-aircraft defence with which the American scientists had to deal.

If the automatic gun-controller was to incorporate a feedback, the fundamental idea of a message, whether transmitted by mechanical, electrical, or nervous means, had to be investigated. A message, according to Norbert Wiener’s definition, is “a discrete or continuous sequence of measurable events distributed in time”. This reduces the problem of trajectory prediction, from the mathematical point of view, to a statistical problem: that of forecasting the later terms of a time series. From that time a whole branch of cybernetics was initiated: communication theory, dealing with the quantity of information contained in a message and its relation to background noise, which dominates our present-day technique of electrical communications. From the abstract point of view, the idea of quantity of information is linked with that of entropy; the amount of information is a measure of the degrees of differentiation between the regularity and irregularity of a pattern where the components are distributed in time.
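The behaviour described above, sluggish correction when the feedback is too weak and “hunting” when it is too strong, can be seen in a toy simulation. This is a generic proportional feedback loop, not a model of Watt’s governor or of any particular servo-mechanism, and the gain values are chosen only to exhibit the three regimes.

```python
# Minimal sketch of a feedback loop: at each step the difference between the
# desired value and the actual value is fed back as a correction.
# The gains below are arbitrary; they only illustrate the behaviours the text
# describes: sluggish correction, 'hunting' oscillation, and runaway.

def run_loop(gain, setpoint=100.0, start=0.0, steps=12):
    value, history = start, []
    for _ in range(steps):
        error = setpoint - value      # what the feedback path measures
        value += gain * error         # correction proportional to the error
        history.append(round(value, 1))
    return history

print("weak feedback   (g=0.2):", run_loop(0.2))   # creeps toward 100
print("strong feedback (g=1.9):", run_loop(1.9))   # overshoots and 'hunts'
print("excessive       (g=2.5):", run_loop(2.5))   # oscillation grows: runaway
```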
A QUICK REVIEW OF THE MILESTONES
Let us resume our account of the milestones of cybernetics.
In 1942, at a congress in New York organized under the auspices of the Josiah Macy Foundation and devoted to the problems of Central Inhibition in the Nervous System, Dr. Rosenblueth put forward his ideas which were later to appear in Philosophy of Science. Among those present at the meeting was Dr. Warren McCulloch, a psychiatrist of the University of Illinois, who was already keenly interested in the subject, and who was later to become one of the leading lights of cybernetics.

In 1943 Walter Pitts, a young mathematician who had already published some important papers with McCulloch, came to work with Wiener at the Massachusetts Institute of Technology, being interested in the study of electronic valves as an ideal method of constructing analogues of complexes of nervous cells.

In 1943 and 1944 the war effort called for the construction of vast calculating machines and Wiener played an essential role on the theoretical side of their development. There was henceforth a continuous rapport between different specialists interested in the subject, and under the influence of Wiener the electronic vocabulary became, as he describes it, rapidly “contaminated with the terms of the neurophysiologist and the psychologist”.

Amongst the mathematicians who were working on the theory of calculating machines was von Neumann of the Institute for Advanced Studies, the creator of the “Theory of Games”. He and Wiener decided to hold a meeting at Princeton in the beginning of 1944 of all those who were interested in these new ideas. The well-known neurologist Lorente de Nó and Goldstine, the maker of the giant calculating machines ENIAC and EDVAC, became members of the group. The physiologists expounded their point of view and the calculating machine experts gave theirs. The outcome showed clearly that a common basis existed between them, but that it had become essential to find a common idiom in which they might express themselves.

The ending of the war permitted the publication in 1945 of a hitherto secret report, written by Wiener and Bigelow, on prediction apparatus which, incidentally, had never been constructed. Wiener could now speak of his ideas publicly at the Massachusetts Institute of Technology.

In the spring of 1946 McCulloch organized, with the aid of
the Josiah Macy Foundation, a series of meetings devoted to feedback problems. Psychologists were invited into the circle, because, as Wiener wrote, “He who studies the nervous system cannot forget the mind, and he who studies the mind cannot forget the nervous system.” That summer Wiener went to Mexico, where Rosenblueth was Director of the National Institute of Cardiology. A new laboratory there had been endowed by the Rockefeller Foundation and together they carried out experiments which were to be of great importance to the new science.

If a physician wishes to check on the state of a patient’s spine, he frequently exerts a sudden pull with his hand on the attachment of the great extensor muscle of the quadriceps to the knee, which may give rise to a physiological phenomenon known as “clonus”. Clonus is the name given (in cases of injury to the spinal cord) to the rhythmic contractions of the muscles produced when a certain degree of traction is applied to a muscle extremity. The mechanism responsible for this phenomenon is always present in the spinal cord, but is normally inhibited by impulses arriving from the cerebral cortex. If the higher cortical centres are no longer functioning, the clonus may appear with the frequency of about fourteen contractions per second.

Wiener and Rosenblueth transected the spinal cord of a cat that had previously been injected with strychnine in order to increase the reflex responses. A weight was attached to the tendon of the extensor muscle of the thigh by means of a cord passing over a pulley, so as to set up clonus. The physiologist paid particular attention to the experimental conditions: the load on the muscle, the frequency of oscillation, its base level and amplitude and the nature of the impulses in the afferent nerves. The mathematician, on the other hand, studied primarily the theoretical relations between the biological phenomenon and the frequencies observed in “hunting” oscillations of the servo-mechanism, using the experimental methods described by McColl. The agreement by this method with the results obtained by the physiological experiment was certainly very encouraging. The mysterious rhythm was found to bear a logarithmic relation to the number of impulses transmitted by the afferent nerves.

We come now to the decisive year 1947.
ARE MACHINES CAPABLE OF REASONING?
Norbert Wiener visited France to take part in a mathematical conference held at Nancy on problems arising from harmonic analysis. Certain European members of the Congress who read papers on the application of statistical methods to communication engineering showed themselves to be in complete agreement with the trend in cybernetics on the other side of the Atlantic.

In Paris, Wiener was introduced by one of his colleagues of the M.I.T., Georges Santillana, the scientific historian, to M. Freymann, director of Hermann et Cie, scientific publishers. Naturally Wiener began to speak about Mexico where he had spent so much time. “But I come from Mexico myself,” exclaimed Freymann, who, as a professional diplomat, is Cultural Attaché to the Mexican Embassy. Thus a firm friendship developed between the two men. The premises of the publisher, in the Rue de la Sorbonne, became the Paris headquarters of the mathematician. “Why don’t you write a book on the theories that you are always talking about?” — “The public isn’t ripe yet. Maybe in another twenty years . . .” — “All the same, I think I know of a publisher who might be interested. . . .” — “No publisher would ever take such a risk!” — “Oh, I think he might.” The interchange continued thus for a moment, and then Wiener suddenly said, “I get you! You are the publisher.” They shook hands on it. “In three months’ time I shall hand over my manuscript.” But when Wiener left, Freymann smiled and said, “Of course he’ll never give it another thought”; and in fact no further mention was made of the subject throughout Wiener’s stay in Paris.

Three months later, however, an air-mail package arrived at the Rue de la Sorbonne. Freymann opened it — and there was the manuscript. A quarter of an hour later one of the printers working for Hermann et Cie came to seek orders for work, or men would have to be “laid off” the next day. “There’s something to get on with — that manuscript on the table over there.” So, eleven days later, since there was no other work on hand, the type was already set and the proofs dispatched to America by air. An acknowledging cable arrived: “You’ll have to beat American efficiency twice over.” Thereupon a
call came from Boston, from the Director of the Technology Press, the publications organ of the M.I.T., asking Freymann to release Wiener from his contract: the M.I.T. could not let the work of one of its own professors be published by another firm. “Only Wiener himself can ask to be released from his contract,” was Freymann’s reply; “Besides, the type is already set.” Some days later the M.I.T. offered to reimburse the publisher’s expenses. After six telephone calls they reached an agreement. So as not to place Wiener in a difficult position vis-à-vis his University, Freymann agreed to publish the work in collaboration with the Technology Press; but he retained sole copyright for all countries and it was the Paris edition that was universally distributed. This is how a French publisher came to publish a book in English that has sold throughout the world. The story is worth recounting, above all for its ending; the book sold twenty-one thousand copies!

In 1947, at the spring meeting of the Josiah Macy Foundation and at a meeting of the New York Academy of Sciences, McCulloch and Pitts presented their work on an apparatus to enable the blind to read ordinary printed letters by sound; the fact that the apparatus was never actually constructed in no way detracts from the cybernetic importance of its design. The idea was to translate printed characters into sound, each letter producing specific tones that the blind, with their especially sensitive hearing, could easily learn to identify. From the point of view of electronic technique the question was simple enough, each letter being scanned by photo-electric cells. The difficulty lies in making the pattern of the sound correspond always to the same printed characters, where the size and form of type used may differ. That a given letter “r”, for example, should differ in sound from a “t” presents no great feat of magic; but that the sound produced by “r” should always be the same — for an “r” in roman, italics or heavy type of all sizes — is quite a different matter. If one thinks about it, it is, in fact, a question of constructing a machine which can imitate the process attributed by Gestalt theory to man and higher animals: that of recognizing the general pattern of forms — a square, a circle, a diamond, an “r” or a “t” — in spite of changes of scale, embellishments or chance variations.
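The invariance problem can be illustrated with a deliberately crude sketch. This is not the scanning scheme of McCulloch and Pitts; it only shows, with invented bitmaps, how normalising a character to a fixed grid lets a single stored template, and hence a single sound, stand for the same letter at any size.

```python
# A toy illustration of the difficulty of "same letter, any size": crop the
# character to its inked area, then resample it onto a fixed grid before
# comparison.  The bitmaps are invented; nothing here resembles the actual
# photo-electric apparatus described in the text.

def normalise(bitmap, size=4):
    """Crop to the inked area, then resample onto a size x size grid."""
    rows = [r for r, line in enumerate(bitmap) if any(line)]
    cols = [c for c in range(len(bitmap[0])) if any(line[c] for line in bitmap)]
    top, left = rows[0], cols[0]
    h, w = rows[-1] - top + 1, cols[-1] - left + 1
    return tuple(
        tuple(bitmap[top + r * h // size][left + c * w // size]
              for c in range(size))
        for r in range(size)
    )

small_T = [[1, 1, 1],
           [0, 1, 0],
           [0, 1, 0]]

big_T = [[1, 1, 1, 1, 1, 1],
         [1, 1, 1, 1, 1, 1],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0]]

# After normalisation the two very different bitmaps coincide, so one stored
# template could serve for every size of "T".
print(normalise(small_T) == normalise(big_T))   # True
```

The real difficulty, as the next paragraph makes clear, lies far deeper than this toy suggests: the recognition of a form as a form, whatever its embellishments, is a function hitherto reserved to the brain.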
Such a machine must be capable of assuming the highest function of the brain: that which the Swiss neurophysiologist Marcel Monnier defined as “the possibility of transforming abstractions into acts.” One might almost say, that which enables the identification of a symbol and the elaboration of a concept from it. But all this leads us into deep water. Here, at the beginning of this book, we are forced to understand how the traditional machine, blind, insensitive, and stupid, is only a primitive counterpart of the machine that is envisaged and even already realized today.

When Von Bonin, the great American histologist, saw the diagram of this apparatus for selective reading, he is said to have exclaimed: “But this is just a diagram of the third layer of the visual cortex!” Even if apocryphal, the anecdote speaks for itself. An electronic apparatus and a system of neurones, which fulfil similar tasks in machine and man, might very conceivably bear some sort of mutual resemblance. And, proceeding on this basis, one might take the machine as a “model” of life and study through this artificial means phenomena which are impossible to investigate in vivo. . . . We have here an illustration of one of the characteristic methods of cybernetics: the use of models.

Thus McCulloch and Pitts built up a whole theory of vision: that a system of scanning analogous to what happens in television might conceivably play a certain role in the process of vision in man. Moreover the surprising thing is that the time sweep of the scanning operation of these electronic mechanisms corresponds closely with the basic rhythm of the brain, the famous alpha rhythm which is thought to play some part in visual perception, since it is abolished by visual activity. Thus once again the future “cyberneticians” confirmed by theoretical methods the presence of the basic rhythms that experimentation had already demonstrated as accompanying nervous activity.

In the same year, in France, Dr. Jacques Malvoisin, an electro-radiologist at St. Chamond, was tackling a very similar problem: that of designing an electrical apparatus capable of reacting to the perception of a certain form, or, in more concrete terms, of imitating the semi-automatic action of a car driver who puts on his brakes as soon as he sees the warning cross sign indicating a road intersection. In man it is the perception of a certain form rather than of a certain object which
causes the reflex action. Would this be possible for a machine? Could the machine abstract from a series of crosses, the cross with a special meaning? Could it assume a function of integration which up to now had been considered to be beyond the scope of a machine and to be a special capacity of the mind? . . . It is apparent how closely these questions correspond to those which McCulloch and Pitts were trying to answer at the time. The parallelism appears even more striking when one learns that Dr. Malvoisin was also working on a project for a machine capable of printing words spoken into a microphone in conventional graphic signs, syllable by syllable.

This discovery provides an example of the fate of an isolated research worker buried in the depths of the country who is unable to keep up with research teams with official backing such as those existing in America or Russia. After being communicated to a Congress of Radiology, the theoretical and practical work of Dr. Malvoisin was buried in a technical journal,¹ whereas in the United States cybernetics was assuming recognised status. Already it was in existence; it had a name.

¹ Journal de Radiologie, Nos. 5-6, 1948: “Essais d’application rationnelle du mécanisme cérébral pour l’accomplissement d’actes semi-automatiques.”

THE SCIENCE OF SELF-GOVERNMENT
Amongst Wiener’s group, physiologists and mathematicians were suffering from the absence of a common vocabulary enabling them to understand one another. They had not even the terms which expressed the essential unity of the series of problems connected with communication and control in machines and living beings — this unity of whose existence they were all so firmly persuaded. All the existing terminology either laid too much emphasis on the machine or over-stressed the vital aspect, whereas the new science was destined to embrace both equally. “So,” wrote Wiener, “we have been forced to coin an artificial neo-Greek expression to fill the gap. We have decided to call the entire field of control and communication theory, whether in the machine or in the animal, by the name cybernetics.” This was chosen from the Greek for steersman and, by extension, governor of a country. The name expresses adequately the idea of command and control. Moreover, from
the same root, by way of the Latin gubernator, a corruption of the Greek word, Watt derived the name “governor” of his steam engine.

But is “cybernetics” really a neologism? In English perhaps, but not in the French language. Strange as it may seem, this word appears in the Littré dictionary:

Cybernetics — Name given by Ampère to the branch of politics which is concerned with the means of government.

Indeed, in the long Essai sur la Philosophie des Sciences, which was intended as “a natural exposition of all branches of human knowledge”, Ampère used this term to catalogue the science of government; it figures in his classification under item No. 83, the science of government:

    Règnes: I. Sciences cosmologiques; II. Sciences noologiques
    Sous-règnes: (A) Cosmologiques, (B) Physiologiques (under I); (C) Noologiques, (D) Sociales (under II)
    Sciences du 1er ordre (under D): (6) . . .; (7) . . .; (8) Politique
    Sciences du 2e ordre (under 8): Syncinémique; Politique
    Sciences du 3e ordre (under Politique): (81) Ethnodicée; (82) Diplomatie; (83) Cybernétique; (84) Théorie du pouvoir¹

¹ Possible translation — Realms: I. Cosmological sciences; II. Noological sciences. Sub-realms: (A) Cosmological, (B) Physiological (under I); (C) Noological, (D) Social (under II). Sciences of the 1st order (under D): (6) . . .; (7) . . .; (8) Politics. Sciences of the 2nd order (under 8): Syncinemic; Political. Sciences of the 3rd order (under Political): (81) Ethnodicy; (82) Diplomacy; (83) Cybernetics; (84) Theory of power.

The most interesting feature is that Ampère borrowed this word directly from the Greek without even coining it from the Greek root kubernetes: Greek dictionaries do, in fact, give the
adjective kubernetiken. The word is even employed as a substantive, with the meaning “science of piloting”, by Plato, who puts it in the mouth of Socrates: “Cybernetics saves souls, bodies and material possessions from the gravest dangers” (Gorgias, 511). Whether it has to do with a ship, the direction of a machine or the “governor” of Watt, this word is exceptionally apt. Perhaps some day in the far future it will recapture its original Greek meaning of “government”, for Ashby’s homeostat brings promise of machines that govern.

The definition of “cybernetics” is implicit, then, in the name itself: the science of government — one might almost say of self-government. Moreover, the title of Wiener’s book, which appeared soon after, defined the new word: Cybernetics, or Control and Communication in the Animal and the Machine. Norbert Wiener, in order to emphasize the twofold significance of this book, gave his double scientific status: Professor of Mathematics at the M.I.T. and Guest Investigator at the National Institute of Cardiology of Mexico.

The book in itself does not invite public interest. It bristles with obscure symbols and what little of the text is free from mathematical formulae is anything but simple. It is a book that would be above the head of any ordinary person even if he knew a great deal of science, yet it had an instantaneous success. Thus, some months later, an American reviewer said:

“Next week, John Wiley and Sons Inc., New York, will start distributing the fourth printing of a book which it had been expected would appeal to only a small technical audience. Cybernetics by Norbert Wiener has a mystifying title; its pages are spiky with mathematical signs and Greek letters; it is wretchedly printed, with a perverse habit of thrusting out typographical error to trip the reader just when the reasoning is hardest to follow. Yet in six weeks Wiley had sold out a first printing intended to last for a year, run through two more printings and ordered a fourth.”¹

¹ Business Week, 19th February, 1949: “Machines that think.”

And the American journalist continues: “In one respect
Wiener’s book resembles the Kinsey Report: the public response to it is at least as significant as the content of the book itself.” This is more than a flash of wit: Cybernetics, like the famous investigation on contemporary sexual behaviour, gives the raw facts without fancy presentation. It is for the reader to draw his own conclusions from the Kinsey Report; but it is for the future to construct the new science on the basis of Wiener’s Cybernetics.

That success which attended a book so superficially unattractive proves that the real attraction lies in its fundamental structure, that is in the power of the idea which it reveals. This idea grips everyone who comes across it. Its interest never flags and after the initial unfamiliarity wears off, it still retains its attraction. Its depth then becomes apparent and the reader is aware of its ramifications into so many familiar fields, and of the hitherto unsuspected common basis that it affords to several familiar systems.

THE MATHEMATICS OF HUMAN ACTIVITY
However, the cybernetic revolution would not have been so widespread had it not already been “in the air”. This ordinary phrase “in the air” is really significant. The way of thought was so very present everywhere, though unperceived, that the spark caused by such a book was sufficient to touch off an explosion.

The idea was, in short, to study living organisms by the method of the exact sciences. It was an attempt to investigate the most ill-defined phenomena of life by mathematics, the most precise instrument of thought. It was to investigate the correspondence between the mysterious mechanism of animal life and the machines that we know so well, through having constructed them ourselves, in the belief that the understanding of machines will help us to understand life, and that, conversely, we can improve our machines by imitating life processes. Descartes and Condillac were certainly the forerunners of this school of thought. They recognized the possibility of a similarity between animal and machine function, without, however, providing a solution of the problem.

These ideas, originating in Europe, blossomed in the United States. The pioneers used neither vast nor expensive experimental methods, nor did they resort to difficult techniques. On
this occasion, at least, progress was independent of laboratories and dollars, which suggests the maturity of American science in that it was stimulated by a train of thought — by a revolution of ideas.

A whole new series of subjects of investigation had thus arisen. It was no longer a case of specialization and of subdivision of particular specializations; now, on the contrary, the various doctrines united and fertilized each other. Thus, after a lengthy period of incubation which might be called the age of analysis, science passed on to the age of synthesis. In future ages, the coming of this era will be remembered as the birth of cybernetics.

Psycho-physiology and mathematics represent the two main currents of thought that have combined to bring about the new methodology in which the element of uncertainty which is characteristic of the sciences of humans has been compensated by the objectivity of mathematical formulae. In this movement is included mathematical bio-physics. Rashevsky, of the University of Chicago, in his important book Mathematical Bio-physics, published in 1938, claimed that in special instances he could demonstrate the possibility of investigating vital phenomena by mathematical methods. He was particularly interested in the functioning of neurones and in the transmission of nervous impulses through the neurones. He thus obtained by pure calculation a theoretical value for the reaction time of the neurone in relation to the intensity of the stimulus — that is to say, the delay that takes place before a neurone can react to a stimulus. His value agreed satisfactorily with experimental results, notably those found by Henri Piéron, for taste and hearing.

The titles of two articles by McCulloch and Pitts give some idea of the extent of this method: “A Logical Calculus of the Ideas Immanent in Nervous Activity” (1943) and “The Statistical Organization of Nervous Activity” (1948). These authors have advanced a theory of nervous function covering the various phases of neuronic activity — all-or-nothing reaction, excitation and inhibition. From now onwards, they claim, the study of nervous function may be pursued by the methods of mathematical logic. McCulloch and Pitts have found an equation for such facts as that when a cold body touches the
skin for a very brief instant, it gives a sensation of heat; from their calculations they are able to construct a scheme of neuronic organization which fits in with the mathematical data.

But psychology as well as neurophysiology can be expressed algebraically. Clark Hull, in 1943, applied the methods of mathematics to the study of conditioned reflexes; Kurt Lewin uses the most up-to-date phraseology of topology in the study of the Gestalt theory; Wiener, studying the same psychological problem, gives a mathematical explanation of the perception of abstract forms by our nervous system, and psychologists everywhere are using the calculus of probabilities.

Nor, indeed, do human relations escape the general trend. Here a revolution of thought originated in the “Theory of Games” by John von Neumann, of the Institute for Advanced Studies at Princeton. This theory is in no way based on statistics of the results of thousands of games, and in no way searches for magic formulae. It has nothing to do with games of pure chance, but is concerned only with those in which the behaviour of the players intervenes — such as in a type of simplified poker. It calculates what advantages a player might have in using a certain plan of campaign to counter one or another strategy on the part of his opponent; it specifies the extent of the risks and carefully demonstrates “optimal strategy”. Here a game is chosen as a convenient vehicle for a most fascinating study: that of the struggle between two players whose moves are limited by the rules of the game, but who, within this framework, regulate their play according to each other’s behaviour. In such circumstances it is possible to reduce the play of the opposition, in a game against one or more players, to a series of mathematical calculations. As von Neumann explains it, the approach is an entirely novel one. A mathematician or a physicist has normally only to consider the relation of objects to each other, or occasionally those of a subject to an object; but here we are concerned with quite a different matter — the relationship between two entities in a state of continual reaction to each other. In order to tackle such a problem, it is necessary to resort to the theory of convex bodies; that is how these psychological situations are approached from the mathematical point of view.

The necessity to bluff at poker thus arises from a theorem where the bluff of defence and the bluff of attack appear strictly
differentiated. The calculation leads us to formulae and ways of thought where one finds such psychological concepts as “domination” or “coalition”.

We may ask why these ideas should not be applied on a wider scale. Lewis Richardson attempted exactly this, in his theory of wars. Further, we may ask why these theories should not be applied to economic questions. Neumann, in collaboration with Oskar Morgenstern, in their Theory of Games and Economic Behaviour, went into this question. Leon Festinger used matrix calculation in drawing up sociograms and Rashevsky, the leading man in bio-mathematics, published in 1947 another excellent book called The Mathematical Theory of Human Relations.

Thus the three related works, those of von Neumann and Morgenstern, of Rashevsky and of Wiener, appeared one after the other in successive years. Such an event shows that these ideas had long been boiling up in America, and with the eruption of cybernetics the internal ferment found an outlet and burst forth.

In the country where the theory of the “mechanical animal” originated, the train of thought issuing from this far-off source was also developing in a new direction. The theories of Louis Couffignal on binary calculating machines had certainly not achieved any success when he published an article in Europe in 1938; but even if they escaped notice at the time, they appear by now to have been prophetic. He wrote of machines that reason, which are now more than a prediction.¹

In 1942 an event took place in France which provided a striking parallel to those which produced cybernetics in the U.S.A. A group of neurologists and mathematicians was formed for the study of certain functions of the nervous system — particularly those of the cerebellum. Unlike its American counterpart, the group was not officially constituted. The initiative was taken by Louis Lapicque but a member of the group was Louis Couffignal, the authority on calculating machines, who later published his work in an important book, Les Machines à Penser.²

¹ Europe, 15th August 1938: “Un point de vue nouveau dans l’étude de la machine: l’analyse mécanique” by Louis Couffignal.
² Éditions de Minuit, 1952.
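To make the idea of an “optimal strategy” concrete, here is a toy calculation in the spirit of the theory sketched above, though with an invented two-move payoff table rather than any game analysed by von Neumann. A brute-force search over mixtures of the two moves finds the proportion of bluffing that guarantees the best worst-case result, which is exactly the sense in which the necessity to bluff emerges from a theorem.

```python
# A tiny instance of an "optimal strategy" in a two-person zero-sum game.
# The payoff matrix is an invented bluffing game; entries are what the row
# player wins from the column player.
#
#                 opponent calls   opponent folds
#   play honestly        2              -1
#   bluff               -3               4

payoff = [[2, -1],
          [-3, 4]]

best_p, best_guarantee = 0.0, float("-inf")
# Search over mixtures: play "honestly" with probability p, "bluff" with 1-p,
# and measure the worst case over the opponent's two replies.
for step in range(1001):
    p = step / 1000
    vs_call = p * payoff[0][0] + (1 - p) * payoff[1][0]
    vs_fold = p * payoff[0][1] + (1 - p) * payoff[1][1]
    guarantee = min(vs_call, vs_fold)      # the opponent picks the worst for us
    if guarantee > best_guarantee:
        best_p, best_guarantee = p, guarantee

print(f"bluff {100 * (1 - best_p):.0f}% of the time; "
      f"guaranteed value of about {best_guarantee:.2f}")
```

With these invented numbers the answer is to bluff roughly thirty per cent of the time; any other mixture lets the opponent do better against it.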
SYNTHETIC ANIMALS?

Since the formulation of its notions, cybernetics had developed furthest in Europe, and above all in England, but it is surprising that the revolution of thought which in the U.S.A. was chiefly theoretical and intellectual evolved in Europe on a practical basis. The traditional roles of the two sides of the Atlantic were reversed. The idea arose in America and took concrete form in Europe with the creation of the electronic tortoises of Grey Walter and Ashby’s homeostat; in the science of cybernetics, which reconciles the living organism and the machine, America appeared most interested in the organism and Europe in the machine.

Reserving the physiological aspect of cybernetics for a future book, L’Homme en Équations, we deal here with the vast progress that cybernetics brings to the machine which it suddenly transfigures.

The tortoises of Grey Walter are machines which move about freely and have certain attributes of an independent life. (We are not speaking of the exterior attributes of life, which, naively enough, are associated with automata of old.) They “feed” on light which they seek and transform into electric currents. This current charges an accumulator. When their stomachs are full (or, if you prefer it, when their accumulators are charged) their behaviour changes. They no longer need a bright light on which to feed, but a soft light in which to “repose”. In their search for rest, they run down the batteries of their motors, so that they are soon “hungry” once more and set off again to hunt for “food”. In the same way an animal divides its life between a search for food, and periods of rest.

But this theoretical plan of artificial existence would condemn the tortoises to devour sunlight for months before they could accumulate enough energy for a few moments of respite. Therefore they are allowed to feed off electricity which they take direct from the mains supply. Attracted by a very bright illumination, they make for its source, situated in a little hutch; they enter and plug themselves into the mains just beside the bright light bulb. Here they nourish themselves, recharging their batteries, as in a stable with a well-filled hay rack.

But “light” pure and simple does not act as a direct magnetic
attraction for these machines. There are three degrees of illumination, each of which has a specific effect, and the tortoises react differently towards them according to whether they are hungry, less hungry, or replete. When hungry, a light which was previously too bright for rest becomes avidly sought after. Since there are usually several sources of light in a room, things become complicated and it is not possible to predict what will happen. The tortoise weighs up the pros and the cons, takes stock of the situation and acts accordingly. If it runs up against an article of furniture standing in its way, it backs a little, moves sideways like a crab and, having avoided the obstacle, continues in its original direction. It has a short “memory” of its impact with the obstacle and pauses a moment or two; then, having recovered from the shock, it recalls its previous goal.

But still better than Elmer or Elsie, who only know how to react to light, certain of their descendants have learnt to answer to the whistle of their master— literally “learnt”— like Pavlov’s dogs who salivated at the sound of the dinner bell. Another was taught to halt as soon as its master warned it, by clapping his hands, of the proximity of an obstacle. The analogy to living beings is somewhat more than superficial. The basic reason for this similarity of reaction lies in the fact that, like animals, the electronic tortoises are torn between conflicting “emotions”. The extreme originality of these mechanisms is to be found in their balancing of different tendencies, always inclining toward the optimum condition; this complexity renders the behaviour of the tortoises quite unpredictable. Moreover they display various moods; on one day they will come and go gaily between two “optimal” regions and at another time they will lie about sluggishly, or else they may display “anxiety symptoms” and move around restlessly from place to place, quickly running down their batteries so that they have to rush off quickly into their hutches to recharge themselves. When their passage is barred by an obstacle, they may either react with circumspection and prudence or they may become restless and impatient. It is said of bad novelists that their characters are all cut and dried, they have no life. Automata, too, are cut and dried, and have no pretensions to life. Elsie and Elmer, however, dare to approximate to life itself, so the term “automaton” is hardly applicable.
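The behaviour just described, seeking a bright light when “hungry”, preferring a soft light when replete, and backing away from obstacles, can be summarised in a few lines of modern code. The sketch below is only an illustration of that behavioural logic, not a description of Grey Walter’s actual valve circuitry; the thresholds and names are invented for the purpose.

    # A minimal, hypothetical sketch of the electronic tortoise's behaviour.
    # It is not Grey Walter's circuit; thresholds and names are invented.

    class Tortoise:
        def __init__(self):
            self.charge = 0.2          # fraction of accumulator charge (0..1)

        def hungry(self):
            return self.charge < 0.3   # low charge: the tortoise is "hungry"

        def step(self, light_level, obstacle):
            """Choose one action from the ambient light and an obstacle signal."""
            if obstacle:
                return "back away, sidestep, then resume the original heading"
            if self.hungry():
                # When hungry, even a very bright light is avidly sought:
                # the tortoise heads for the hutch and plugs into the mains.
                if light_level > 0.8:
                    self.charge = min(1.0, self.charge + 0.2)   # "feeding"
                    return "enter the hutch and recharge"
                return "wander towards the brightest light"
            # When replete, glare repels; a soft light invites "repose".
            self.charge -= 0.05                                  # the motors drain the battery
            if 0.2 < light_level < 0.5:
                return "rest in the soft light"
            return "move away from the glare"

Even in so crude a summary, the point the author is making is visible: the same stimulus (a bright light) provokes opposite responses according to the internal state of the machine, which is why its conduct, in a room with several lamps, is not easy to predict.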
But what exactly do we mean by an “automaton”? What is “automatism”? The reply is certainly not easy. The scope of this book is an attempt to throw some light on this problem. Indeed the “synthetic animals” are new kinds of machines, but it is important to see in what way they revolutionize our old ideas of machines. Can one go so far as to say that such mechanisms achieve thought processes? The question is not without its importance for the mechanism of both man and machine; it concerns the whole problem of man and future civilization; it is one of the most fundamental questions from both the technical and philosophical points of view. We are bound to admit that we are in no way equipped yet to tackle this important subject; we do not even possess the basic apparatus of well-defined terms— automatism, machine, thought. . . . What do they really signify?

We should be even worse equipped to consider the homeostat, the most revolutionary of goal-seeking machines, which achieves its end despite all the changes of the external environment and its own internal structural variations. An English psychiatrist, W. R. Ashby of Gloucester, has attempted to build such a machine, which is endowed with one of the essential characteristics of the living organism: the power to adapt itself, within certain limits characteristic of the species, to changes which take place in its environment and even to internal modifications. This is what the physiologist, Walter Cannon, called “homeostasis”— a word which today has proved indispensable to cybernetics. From this originated the homeostat, the machine which always finds its own equilibrium, which always pursues the same goal, irrespective of all other considerations. One senses how far the “automatism” of this machine differs from traditional automatism; senses it, truly enough, but it is difficult to know precisely how to formulate this difference. This book will attempt to clarify the distinction.

In order to explain electronic calculating machines, we are tempted to use the word “thought”. No, the machine does not “think”. But, nevertheless, it elaborates quite a different process, an activity which for the machine is the counterpart of thought, and which enables us to economize our own thinking
process, just as a car dispenses with walking, or a power-hammer with forging. Thus man makes the machines of his own creation do his thinking for him.
CHAPTER II
The Realm of the Artificial

“Intelligence, considered in what seems to be its original feature, is the faculty of manufacturing artificial objects... Our social life gravitates around their manufacture and their use.” Bergson, L’Evolution créatrice.
Though the bridge that spans the gap between machines and life is slender and even insecure, its foundations appear firm enough on the shore of the machine territory. A methodical survey of the ground, however, shows it to be a quicksand. The most learned treatises and technical dictionaries either fail to define the basic terms, or give definitions that are highly controversial and contradictory. In this respect, cybernetics throws light on a metamorphosis that started in the machine several decades ago and developed throughout these important years of our civilization. Since then, the vices of which it is so often accused appear rather as growing pains. Before showing how the machine progressed, unnoticed before our eyes, beyond its traditional limits towards a future hitherto undreamt of, we must redefine our conception of “machine”. The ground on the physiological side is so insecure that it would be better to consolidate the foundations on the side of the machine.
WHAT IS A TOOL? WHAT IS A MACHINE?
If we try to classify the machine within the old arbitrary divisions of “animal, vegetable or mineral kingdom”, we find that it fits into none of them. Thus we see at a glance how unimportant machines were for the men of yesterday.
Just as, on the other hand, “animal” and “vegetable” seem to us today, by the very nature of their limitations, to merge into the realm of the “living”, so we can conceive of three primary categories:

The mineral realm, that of organized matter lacking the power of proliferation.
The realm of the living, that of organized matter having the power of proliferation.
The realm of the artificial, which, unlike minerals and living organisms, is organized by man— all that which, in accordance with Leibniz, we might call “artificiata” (artefacts).

These products of human artifice can be broken down into objects that are passive and machines that are active. We may wonder to which of these two subdivisions tools should belong? The tool is a passive object, used by man who is active. It increases his capacities, giving him greater possibilities than he would have with his bare hands. It participates in human actions and extends their scope. Bergson has written of tools and instruments as “artificial organs extending the functions of the human organism”. This is exactly what they are: tools are a material extension which man gives himself in order to do something. To do what? Each tool has its particular purpose; it increases our strength, our precision, or our speed of performance. In a word: the tool increases the efficacy of our acts. However, the idea that our output can be increased is not expressed. If we decide to use two words to express these two shades of meaning, we arrive at this definition: a tool is a material extension that man gives himself in order to increase the efficacy and output of his actions.

I can take hold of something better with a pair of tweezers than with my thumb and forefinger alone; I paint better with a brush than with my fingers; I split wood better with an axe than with my hands; I ram stones into the earth better with a rammer than with my feet; I urge on a horse with spurs better than with my heels; by the aid of a megaphone my voice is carried further; with an ear trumpet, I hear better. In each case the tool increases the efficiency of my acts and the capacity of my body.

As for the word machine, there is no generally agreed and
precise definition, so we may attempt to make our own. The tool may be quite a simple structure, whereas the machine is complex: let us say a system. A tool may be artificial only by virtue of the use to which it is put, whilst a machine is always a creation of human artifice; a living system is not a machine (even if cybernetics shows their mechanisms to be similar). Let us, then, call it a system created by man. But the machine is not created on account of a whim, it is designed to serve a set purpose; it does not always extend human actions, it more often replaces them. Better still, it is capable of performing acts of which man is utterly incapable, as in the case of the lamp, the radio set or the camera. We might also say that the machine is constructed in order to perform a given action.

But one important fact remains to be stated: whilst a tool produces nothing except when moved by a human force at the other end, the machine can receive its energy from any kind of motor. An ass can never use tweezers, but an ass can turn a mill-wheel as well as a man can; so can a waterfall, or atomic energy. The tweezers are a tool; the mill-wheel is a machine. Hence we get a simple differentiation between a tool and a machine. A corkscrew, which only my hands can use, is a tool, even if it is a complex one. An apparatus for corking bottles is a machine, even if quite a simple one, inasmuch as any power can work it.1 [Footnote 1: In the terminology of the logic of effects, which will be dealt with in this book, one says that man is a “factor” of the act accomplished by the tool and that he does not intervene directly in the act accomplished by the machine.]

But although the machine is able to obtain its energy from any kind of motor, it will not work from any form of energy. A machine-tool cannot make direct use of electrical energy; a radio set only becomes a motor inasmuch as it radiates energy in sound waves. The form of energy, then, should be appropriate to the type of machine; if this form is not convenient an auxiliary machine must first transform the energy. Thus, to work machine-tools an electric motor is required where only electricity is available, and a steam-engine when only heat is available. Though it may be unnecessary to include the latter illustrations in the definition, they should at least provide a key to the
various special cases. We will say, then, that the machine must receive “the energy necessary” for its functioning, and so: A machine is a system constructed by man to accomplish a certain action, when the necessary energy is supplied.1 [Footnote 1: Amongst the best definitions proposed in the past, we will quote Monge (1808): “The primary purpose of a machine is to convert the available power into the form of energy most appropriate to the machine that it may produce the desired effect.” Reuleaux, in his comprehensive and classic work Théorie des Machines (1877): “A machine is an assembly of resistant bodies arranged in such a way as to enable the natural mechanical forces to act and to produce certain predetermined movements.” Louis Couffignal in Machines à Penser (1952): “The word machine comprises all inanimate entities and occasionally animate ones capable of replacing man in the execution of a number of operations designed by him.”]

This important definition calls for some comment:

1. The end of the machine. The machine only exists in relation to its end. It is no longer a machine unless it works towards the end for which it was constructed. In water, a weaving loom would be nothing more than a collection of unworkable parts which would ultimately sink, whereas on the firm floor of a textile mill the immobility of a boat would be ridiculous. A derailed locomotive ceases to function as a locomotive; it rolls off the track and becomes nothing but a heap of scrap iron. In short, a system designed by man is or is not a machine according to whether it accomplishes a certain action.

2. Simple machines. When the system is simple some difficulty may arise in recognizing it as a machine. For instance, a torch or an oil lamp is a machine, despite the absence of interacting metal components which we tend to think of as inevitably associated with our idea of a machine. Even in its simplicity, a machine for illumination is composite in effect, comprising a fuel which is more or less complex, a container for the fuel, a burner, a draught, and a flame which is the vital point of its function.

3. Machines capable of driving things. The artificial motor is a special class of machine. It is a machine to which man has assigned a particular purpose: the production of energy. It only accomplishes its action when the “necessary energy” is provided for it. In order that machines shall produce energy, it is necessary to give them energy.

4. The independence of the machine and the motor. It is
necessary to stress the fact that whether a machine be moved by means of an animal or by wind power, by an artificial motor or a natural agent, neither the principle nor the mechanism of the machine is modified thereby. The mechanism of a sewing machine remains unaltered whether it be driven by hand or by electricity. It would be possible to conceive that a blind ass, instead of moving the ancient Persian wheel, could work a dynamo, supplying the necessary current for an electronic calculator. The motive power is independent of the machine. The source from which the power is derived is unimportant; it may be animal or human, the flow or expansion of liquids, springs, suspended weights or electric current; the machine, however, always remains what it is. It must always be thought of as supplied with energy in some form.

5. Control and execution. This commentary on the definition of the machine is of such importance that we must consider it separately.
CONTROLLING ENERGY AND OPERATIONAL ENERGY

The energy necessary for certain machines may be derived from two sources. A wireless set requires electric current and wireless waves; for the steering of a steamship manpower and electrical power are necessary. Let us call them machines of double input. On the one hand energy directed to the performance of the act, and on the other hand a less direct form of energy, which controls the former, enters the system. This distinction will be better illustrated if we elaborate it further. When I type, my fingers give both a command and an impulse: the controlling energy and the operational energy; but it is not possible to distinguish one from the other. It becomes obvious, on the other hand, that if I use an electric typewriter, I need only stroke the keys gently; that is to say, I command and an electric motor takes care of the operation. The same distinction is still more difficult to make in the case of a typical machine-tool which has a single power input, branching into two supplies, the one operating control gear (lugs, cams, eccentrics, &c.) governing the actions of the machine in time and space, and the other working through a transmission, either rigid or flexible, which guides and distributes the motive force
to the right place. Finally, the two forces combine to accomplish the operation dictated by the control. But sometimes it is practically impossible to distinguish the controlling energy, as in the case of a milling machine or a countersink. If these machines accomplished actions which varied according to certain laws, the controlling energy would immediately become apparent. When it is not apparent, it is simply because the control demands that there should be a uniformity of movement. Thus one can differentiate between:

A. Single input machines. The controlling energy and the operational energy are provided by the same power supply. Before the advent of electricity, this was the case with almost all machines.

B. Double input machines. The supply of the controlling energy is distinct from the supply of operational energy. This category of machines has come into its own with the advent of electronics. But in this class B the operational energy can assume two very different roles:

(1) It may carry out a given task dictated by the controlling energy. For example, the manoeuvring of the rudder requires considerable force, so that the operation is entrusted to a servo-motor working in correspondence with the movements of the wheel.

(2) It may serve as a subsidiary to the controlling energy, or, rather, to its variations. This modifies the operational energy which from then on becomes the carrier of the continually varying orders of the controlling energy as well as of the material power to execute them. For example, in the telephone, the operational energy is the “carrier current” supplied by the administration; the controlling energy is the vibration of the voice which determines the induced current in the cable where the operational energy passes.

It would appear that the controlling energy ought to be much weaker than the operational energy. This is not so in reality; it may only be minimal, but it may also be considerable, in the same way as the weight of a car changes the traffic lights in certain types of traffic control. It is therefore not their
relative strengths which differentiate the two energies; it would be more accurate to say that the operational energy works according to its quantity, whilst in the case of the controlling energy it is the quality which matters. But what “quality”? The control causes the variations in performance, variations in time and space. One branch of cybernetics is occupied with the theory of signals and information; the study of the transmission of messages in telecommunication systems or telecontrol. A fundamental
distinction can be made between the signal which conveys the message and the information which is included in that signal and rendered intelligible. Thus it becomes obvious that the signal and the information can be identified with the energies of control and operation. The signal is the operation and the information the control.

But the two classes of machines with double or single input for two energies do not cover all types of machines. One special class achieves its goal by means of a single controlling energy, information; these are the detectors. Thus, a thermometer has as a single energy, the difference in temperature between its previous surroundings and its present situation. In the same way a compass is supplied solely by the energy of the earth’s magnetism. Here, the energy has only one act to perform, that of manifesting its presence, or furnishing simple information. This is made plain by the fact that a detector can be supplemented by an operative force. The thermometer then becomes a thermostat, which, at the order of the information coming from the temperature, sets heating or cooling apparatus in motion. In the same way a crystal radio receiving set is a simple detector whose controlling energy is only perceptible in the headphones, but an amplifier and a loudspeaker provided with operational energy may be added, which will give a much louder tone. It can be said that the thermostat and the wireless set with loudspeaker are detectors with a secondary function. The simple detectors, then, are set in motion by controlling energy alone: one might define them as machines whose purpose is to decode information. Traps are detectors of animal force with a secondary action. The animal force supplies the control whilst the energy necessary for the trapping (secondary action) is provided
by the spring, or in the case of a pit, by the weight of the animal.

Finally, we arrive at the following classification:

A. Machines of dual energy and single input, which include most machine-tools and clock movements.
B. Machines of dual energy and double input. (1) The controlling energy gives the orders to the operational energy, as in servo-mechanisms and detectors with a secondary action. (2) The controlling energy is translated into operational energy, as is the case in telecommunication and telecontrol devices.
C. Machines with a simple controlling energy: detectors.

But this classification must not be taken too seriously, as it in no way corresponds to the more important hierarchy of machines. It is, however, not much less unrealistic than several other such classifications, such as that of Reuleaux, for example, which have become outdated since the advent of electronics. As with all things, machines should only be classified according to their essential nature. Thus, the essence of a machine lies in the acts that it alone can achieve, in the independence that it can acquire, vis-à-vis man, by its automatism. Surely, the only possible valid classification of a machine must be based on its degree of automatism.
THE DETERMINATION OF AUTOMATISM

This term, also, is not precisely defined, but its etymology seems to supply a definition: an act is automatic if the machine accomplishes it by itself. However, in the realm of “acts” everything seems in a hopeless state of confusion. “Automatic” is most commonly applied to the sweet-distributing slot-machines. In reality, however, there are few machines which work less automatically. Human agency performs the two essential acts: that of putting the coin in the slot which releases the mechanism, and that of pulling out the drawer on which the last packet of sweets in the pile rests. The only act that remains for the machine to perform, and which might be called automatic, is the substitution of the next packet of sweets in the pile for the
one that has been taken, and this obviously descends by force of gravity. On the other hand, one would never think of describing as automatic a watch, the chef-d’œuvre of mechanisms; but one might call it so when it is of the type that is self-wound by the natural movements of the arm that wears it.

The meaning attached to the word in everyday speech is clear. Automatic stands for everything that has a mysterious way of functioning, everything that is novel or replaces an act that is usually the prerogative of man. The slot-machine is automatic in that it is a substitute for the shop assistant’s taking the money and handing over the box of sweets. The ordinary watch is not called automatic, because it is no longer mysterious, whereas the functioning of the self-winding watch, in that it is quite novel, possesses a sort of magical quality. A fountain pen was said to be self-filling when the old-fashioned glass-tube and rubber-cap filler gave place to the simple operation of a lever; one no longer talks of “self-filling” now that the system is in everyday use. Similarly, when electric lifts replaced the hand-operated one, they were spoken of as automatic.

Another common mistake is the confusion of automatism with automata which, by definition, have the appearance of living beings. The greatest automatist of the age of electro-mechanics, Torres y Quevedo, in his important work Essai sur l’Automatisme, inclined towards this error. All his works are permeated with the idea that automatism is the imitation of life. There is a general tendency to label as automatism that which in reality is merely autonomy. To decry such a confusion might seem unnecessary. However, Quevedo himself does not avoid it; he holds that automata may be provided with energy from accumulators, waterfalls, springs, cylinders of compressed air, &c., but he maintains that their source of energy should be condensed into a small volume. This is not essential; the fact that a machine is supplied with energy from outside is immaterial to the essential nature of the mechanism.

In view of the misuse of this term, the anthropocentric misconceptions and the vagaries which it has undergone, one feels tempted to leave it as it stands. But how can we express this independence of the machine from man on which we wish to base our classification of machines? It must be
defended against the misuse of everyday speech and an attempt has to be made to establish this hackneyed word on an objective basis.

Let us turn to Larousse: “Automatic: that which is moved or which operates by purely mechanical means.” One understands the meaning, it is true, but if one wants to proceed further and see what meaning this dictionary attributes to the adjective “mechanical”, we find, “Mechanical: that which requires the work of hands or machines”. Here we have it! P. Maurer, in his well-known book on automatism, Machines automatiques, is more precise in his definition: “that which operates by mechanical means without the participation of human volition”. This sounds all right on the face of it, but let us try to fit it to an actual case. If I plunge a cork into water and release it, would its rise to the surface be automatic? According to the foregoing definition, it would. In that case we may ask what is missing in Maurer’s definition. Surely, it is the idea of purpose or end. This can easily be introduced by the word “machine” or “mechanism”, so that the definition becomes: “the term applied to a mechanism which operates by mechanical means without the participation of human volition”.

“Mechanism” has the advantage of applying to the definition of natural as well as artificial phenomena. It is not only a question of satisfying the new demands of cybernetics, but of complying with the origins of the word automatism. In fact when Réaumur coined it, he used it in connexion with the behaviour of bees. From “automaton”— the term reserved for a machine having the outward appearance of a man or an animal— he created the word “automatism”, as that which in living creatures evokes the pattern of behaviour usually associated with automata. Even a century later, the meaning had undergone no change, since Littré styles it a “physiological term”, and nothing else. Today most people would regard this with astonishment. Indeed, by a strange twist, this word, absorbed into physiology from mechanics, has been adopted into mechanics once more, via physiology. And finally, today, it becomes for cybernetics the bridge between life and machine.
[Photograph: In their country home near Bristol, these parents have two children: one is electronic. Vivian Dovey and Grey Walter have two offspring: Timothy, a human baby, and Elsie, the tortoise, of coils and electronic valves. Timothy is very friendly with his mechanized sister.]
[Photograph: The four pioneers of cybernetics get together in Paris: left to right: W. Ross Ashby, W. McCulloch, Grey Walter and Norbert Wiener.]
However, what machine is there that would function without the participation of human volition? All machines must remain idle unless man decides to put them into action. Let us imagine that our hand, instead of merely winding up the mechanism of the flute-player of Vaucanson, were to move the puppet by means of a handle. Would the automaton still be automatic if it were moved by hand? Since the source of power cannot influence the nature of the machine, I have not modified the mechanism by changing the motor power; if it was originally automatic, then it has remained so; if it was not, then it still is not. The question comes back to what would seem to be senseless: is an automaton automatic?

If I turn a drill by hand, it is not automatic; if I transmit the movement by a series of gear-wheels, it is no more automatic than before. But if the rotation stops on its own by reason of the setting of the machine (dependent, usually, on the thickness of the material to be drilled, so that as soon as the hole is bored the drill springs up again), then I might consider it as an automatic machine. We may ask, then, where automatism begins? . . .

The well-known story of Humphrey Potter, who in 1713 invented, almost unwittingly, the automatic steam valve, will prove illuminating. At that time, the pumping of water from English mine shafts was performed by the Newcomen engine, which consisted chiefly of a large vertical cylinder into which the steam was admitted and where it was condensed by a jet of cold water playing into it. Children were employed to connect the cylinder alternately to the steam supply and to the tank of water— a very irksome task. One day, a small boy, the young Humphrey, had the idea of freeing himself from the machine; he fastened his stop-cock with string to the beam of the pump worked by the piston. So he was able to amuse himself instead of working. From then on, the machine gave the orders at the appropriate moment for the movement of the water inlet; it controlled itself. The machine was thus substituted for human labour in the control of one of its parts.

It is the word “control” that provides the real key to the solution. Automatism is the supply by the machine of its own controlling energy. The operational energy is always furnished by man, and so
the automaton is given power of movement in just the same way as every other machine. It is, then, impossible to link automatism with any notion of the machine’s independence of man, in the sense employed hitherto; the machine is always dependent on man. It is this essentially fundamental distinction between the energies of operation and control that enables us to grasp this idea: A mechanism is automatic in so far as it furnishes its own controlling energy. Or, neglecting the definition of the controlling energy: an automatic mechanism controls by itself the variations of its operation in time and space. Or, put more abstractly: a mechanism is automatic when it gives its own information to its operative components.

In fact, the term “automatic” does not correspond to its etymology; no mechanism moves itself. In the sense of the word according to the dictionary, no machine is automatic, not even the automaton. But within the framework of our definition, one can say of numerous machines that they are “automatic”. It is still, however, necessary to define the degree of automatism that a machine may attain. Now that we have more than a fleeting notion, it is possible to determine the different degrees that are logically possible. They mark the stages of the notable progress which the machine has made towards independence.

FIRST DEGREE: DETERMINED ACTION
When one passes from the tool to the machine, it will be found that the energy of operation is transferred from man to the machine. The most elementary action is that in which the factors do not change and which itself always remains unaltered; or, more precisely, that which only varies according to contingency and not by reason of any particular law governing the operation. This is the case of a fountain or a lamp, where the factors remain constant. One might be led to suppose that such a simple device as a machine that strikes a bell should belong to this category; but its factors are obliged to vary in time in order
to co-ordinate the different phases of their action. It appears, therefore, to belong to the next category. Man makes use of a certain effect that may be the product of a natural phenomenon, but which will only take place if the factors happen to have certain values in space and time. This effect is determined by the machine. (“Determined” is the exact word, and one which in the course of this book will prove to be indispensable). How? By the artificial determination of the factors, or at least by the attempt of the designer to determine
the factors, which, in reality, can never be completely guaranteed against contingent variations. One of the factors, however, may be left free. In this case the contingent variations of this factor influence the effect. Such machines might be considered to be a step towards the total determination of the effect, as being progressively freed from the fluctuations of contingency; that is to say, then, as an elementary degree of automatism. But, in reality, the influence of contingency is not evidence of an imperfection in the machine; on the contrary, it is systematically sought after, because in this instance we are, in fact, dealing with detectors, such as thermometers and barometers. For simple purposes of terminology, we will, then, consider these as machines of the first degree. Let us represent a machine by a circle, its effect by an arrow, its factors— reduced to two— likewise by two arrows. A small cross will denote that a factor is constant, or, rather, that man has tried to determine it. We arrive, then, at the following diagram:
[Diagram: a 1st-degree machine with variable action and a 1st-degree machine with detector action; the free factor produces a chance effect.]

SECOND DEGREE: VARIABLE ACTION
The first sign of progress in the machine would be to vary its effect, without causing any interruption in its strict
determinism, to free it from contingency. In other words, the machine shall determine the variations of its own action. One can also say that it shall co-ordinate simple actions. A machine which can push planks forward, a machine which presents nails vertically, a machine that can hammer in nails, does not amount to the same thing as a machine that will hammer nails into planks. Each acts without purpose. It is their co-ordination that gives a machine of the second degree. In such a machine, the factors remain variable (within limits, of course), but their variations are determined by what one might call a common “pre-factor”, by a part, or an assembly of parts, which, without its ever having been officially defined, is commonly understood by the word “programme”. This pre-factor is itself strictly determined. Graphically, such a machine could be represented thus:
This “programme” is the cam shaft, the spiked drum of the musical box, the punched cardboard cylinder of the pianola, and tomorrow it will be the photographic film or the magnetized wire. But in certain machines it is not so clear cut; it is incorporated in the actual structure of the mechanism. The programme is thus the distributor of the controlling energy. We come here to the domain of classic machinery, of the movements of clocks and machine tools. For a long time it was never dreamt that the machine could progress beyond this stage; even today, few people have realized that right up to the second half of the twentieth century, the machine was only in its infancy. It is true that we know that the machine is evolving and that tomorrow it will progress even further with the aid of our electronic knowledge. But this is not the essential. What should be understood is that in so far as it is able to liberate itself from man, its creator, it will acquire a separate identity; that is to say, by progress in its degree of automatism. We may speculate as to what direction its evolution will take
in this respect. One need only think of the shortcomings that have been described as being of the essence of machinery: grossly repetitive of its action and totally unable to adapt itself to the changing circumstances of its environment. These vices do not belong to machines in general, but rather to the second degree of automatism. Whether the machines be electronic or the most marvellous calculating machines, they will never be able to do more than man has ordained for them. They are slaves to a complex system of cams and shafts or perforated cardboard. With the development of technical skill, they may become able to perform almost any operation, but they will always blindly adhere to their set programme, whether it be to fly 2,000 miles an hour or to extract the cube root of a twenty-digit number. They will never be able to do anything more than this, and without the orders they have received from the constructor who has written them into his programme, they will always behave stupidly. Our machine for hammering in nails will hammer them into the air if we omit the supply of planks; or if the supply of nails runs out, the hammer will continue to hit in nails that are not there. Similarly, even the most marvellous of calculating machines will only know how to repeat the same operation if man does not provide it with a fresh programme in which he has registered the data of a new problem.
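The blindness just described, the hammer that goes on striking whether or not planks and nails are still being supplied, can be made concrete in a few lines of code. The sketch below is hypothetical; the step names are invented for the illustration, and the only point it makes is that a second-degree machine follows its programme without ever checking the circumstances.

    # A hypothetical second-degree machine: a fixed "programme" co-ordinates
    # its actions and is followed blindly.  The machine never checks whether
    # planks or nails are actually present.

    PROGRAMME = ("advance plank", "present nail", "strike with hammer")

    def run(cycles, planks, nails):
        for cycle in range(cycles):
            for step in PROGRAMME:
                if step == "advance plank":
                    planks -= 1              # performed even when no plank remains
                elif step == "present nail":
                    nails -= 1               # performed even when no nail remains
                print(f"cycle {cycle}: {step}")
        print(f"planks left: {planks}, nails left: {nails}")

    run(cycles=3, planks=1, nails=2)          # it cheerfully hammers nails into the air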
THIRD DEGREE: THE CONDITIONED ACT

The most serious defect of the classical machine is its incapacity to adapt itself. Once it can harmonize its acts with the prevailing circumstances, it will have moved up to a new degree of automatism. In other words, when its programme need not be totally determined, when one or several of its factors can remain variable in such a way that the machine can modify its behaviour according to the contingency arising, then it will attain the third degree. To illustrate this degree of automatism, a double diagram is necessary, having an arrow coming from the exterior, symbolizing the contingency which can modify one of the factors or the programme. We should note that the variable introduced into the system can emanate from man himself as well as from circumstances
that are not human. In the first case it is a question of a simple control. A control lever, for example, makes the machine appear to have a degree of automatism that is lower than that of a machine of the second degree in which the intervention of man does not occur. On the other hand, if we think of an automatic
fire alarm, operated by the heat of the fire itself, the degree of automatism seems to be higher. However, from the logical point of view, both are exactly the same, and it would still be the same if, by some constructional quirk, the programme could be rendered sensitive to a certain variable. It always involves a contingent effect, that is to say an effect coming from the exterior of the system; but this effect may or may not come from man and may or may not serve the purposes of man. The truth is that one can only examine these questions from a purely logical point of view, since they are dominated by our anthropocentric pragmatism.

By the third degree of automatism, then, must be understood the degree of automatism of a machine that is sensitive to non-human exterior events which cause it to act according to the end-goal that man has set for it. Similarly, it should be understood that the diagram corresponds to that of a machine to which man and not a natural circumstance dictates an order; in both cases the machine receives a command from without. Thus, the machine estimates the appropriateness of its own actions; it reacts to the influence of certain contingent conditions, the release of whose influence is nothing but a special case of this influence acting according to an all or nothing principle. So long as these variations do not take place, the conditions of the act are not fulfilled and the machine either fails to act, or acts in another way. We will call it the machine with conditioned action. It is true that such a degree of automatism existed in classical mechanics (one can imagine our nail-hammering machine
controlled by the weight of the plank), but it is only electronic techniques that have enabled the evolution of the third degree to take place. Here is an example. On steamships, fire detectors are placed in the holds; some are set off when a photocell, that is constantly scanning the degree of clarity of the atmosphere illuminated by a strong light bulb, detects smoke; others are controlled by thermostats sensitive to a sudden rise in temperature. At the least sign denoting fire, the alert is given, its location is signalled, water is brought up to a certain pressure
in the pipes and carbon-dioxide foam is sprayed into the hold where the danger is. Thus man is by-passed. The machine dispenses with our having to think along these lines: “The temperature is rising too high, a trace of smoke is visible, therefore it might be a fire; I ought to take action, therefore I should sound the alarm, open up the water supply for the firemen, and begin to spray carbon-dioxide foam.”

It is certain that the machine does not “reason”. It is man who has assembled the syllogisms and has stored them within the mechanism. Henceforward, triggered off by the circumstances which call for it to go into action, the machine will pour out syllogisms as a musical box pours out the strains of Plaisir d’Amour. The distinction between the second and third degrees is very great. In the second degree the sequence of events cannot be changed and does not admit the effects of any subsequent action. The third degree admits that the sequence of events might be interrupted in certain circumstances.
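The stored “syllogisms” of the shipboard fire detector can be written out explicitly. The sketch below is only an illustration of the conditioned act; the thresholds and the exact list of actions are invented, not taken from any actual installation.

    # A minimal sketch of the third-degree (conditioned-action) fire detector
    # described above.  The thresholds are hypothetical; the point is only
    # that stored conditions, not reasoning, release the sequence of acts.

    def watch(atmosphere_clarity, temperature, temp_rise_per_min):
        smoke_seen  = atmosphere_clarity < 0.7      # photocell scanning a lit beam
        sudden_heat = temp_rise_per_min > 5.0 or temperature > 60.0
        if smoke_seen or sudden_heat:
            return ["sound the alarm",
                    "signal the location of the hold",
                    "raise the water pressure in the pipes",
                    "spray carbon-dioxide foam into the hold"]
        return []                                    # conditions not fulfilled: no action

    print(watch(atmosphere_clarity=0.5, temperature=25.0, temp_rise_per_min=1.0))

So long as neither condition is met the machine does nothing; the moment one of them is met, the whole chain of consequences pours out, just as the musical box pours out its tune.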
THE RECORDING OF THOUGHT

Such are the limits of classical mechanics; man delegates the machine to act in his stead (first degree), then come the machines that co-ordinate their own functions (second degree), and finally the machine that estimates the efficacy of its own action (third degree). Thereafter it appears that the machine not only replaces man at a purely material level, but also on the intellectual plane. All acts, with the exception of pure reflex acts, entail some degree of intellect. But when this act is delegated to the machine, what becomes of the intellectual component? . . .
No one, not even those who hold the lowest opinion of the machine, can deny that the most primitive machine assumes a function which, in operation, plays the same role as the intellect does in our own actions. In planning a mechanism, man uses his powers of adaptation and reasoning to the maximum, so that the machine shall economize his future thinking in this direction, once and for all. He then incorporates this reasoning into the mechanical structure. In the way that a gramophone does not speak, but records our voice, without having recourse to thinking, the machine records our thought. Our training has given us numerous reflex actions which originally required thought. The intellect no longer participates, but they attain their end with the same perfection as if they were directed by the intellect. Thus mechanical action appears as reflex action artificially created by man and incorporated into the machine.

At all levels of automatism, then, the machine takes up the reasoning processes which man has put into it. At the first degree, the amount of “reasoning” put into the machine is very slight; the factors are adjusted at a certain setting in order to obtain a given effect. But we cannot delegate very much of this reasoning power to the machine without pursuing it to a much higher level. With the second degree, where the machine co-ordinates its acts, the presence of recorded thought is much more apparent. What could be more commonplace than an internal combustion engine? Whilst everyone is prepared to get excited about the “intelligent” work of a calculating machine, who would stop to think that the engine of his own car economizes “thought”. Let us try to imagine an engine that had no “programme”; we should have to co-ordinate its actions. It has four cylinders and we have only two hands and one brain. Nevertheless, we should have to do and think of everything at the same time; to expel the gas from cylinder no. 1; to introduce air into cylinder no. 2; to vaporize the petrol in cylinder no. 3; to compress the mixture in cylinder no. 4; then, a moment later, to admit air into no. 1, not forgetting the petrol in no. 2 or the compression in no. 3 or the spark in no. 4; and to inject petrol immediately into no. 1; remembering, meanwhile, the lubrication of each
joint and surface where there is any friction. Even if we imagine the engine reduced to its simplest terms, even admitting the absurdity of a system of 5 to 10 revolutions per minute, it would be quite a feat if one were able to co-ordinate a complex machine thus, even for a few minutes. Soon the most lucid mind would muddle up one of the directions. It would make anyone’s head spin— this common expression is an apt one: one’s head spins if one thinks too much. . . . We see at once, without any very deep discussion, that the machine greatly economizes thought. Thus, man elaborates a system of reasoning and records it in the design of a series of cams and their settings. Thus, the musician will practise to arrive at the best possible performance before making a gramophone record. The machine is only capable of a single sequence of thoughts, of a single sequence of sounds; but its functioning is the most perfect that can be realized by modern technique. Without recourse to calculating machines, then, it is easy to discern synthetic “thought” in mechanisms of the second degree.

At the third degree the machine integrates more complex thought; it is still a matter of controlling the result of co-ordinated actions, but the consequence of these actions is not inevitable. Without such a machine, we are obliged to use our judgement: as soon as I perceive by my senses an increase in temperature, or the presence of an iceberg, if I am responsible for the safety of my ship, I must take action to avoid this danger. But the machine of the third degree replaces my perceptions by means of subtle detectors which are far more sensitive and accurate than my senses. As for its criterion of judgement, I have established it once and for all and have entrusted it to the machine.
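The co-ordination of the four cylinders that the engine passage above asks us to imagine performing by hand is precisely what the machine’s structure records once and for all. A minimal sketch of such a fixed timetable follows; the phase offsets simply follow the author’s illustrative sequence and are not a real firing order.

    # A minimal sketch of the "recorded thought" of a four-cylinder engine:
    # a fixed timetable, built into the machine once and for all, assigns a
    # stroke to each cylinder at each instant.  The offsets follow the
    # illustrative sequence in the text, not an actual firing order.

    STROKES = ["exhaust", "induction", "vaporize petrol", "compression"]

    def stroke_of(cylinder, tick):
        """The stroke performed by a cylinder at a given moment of the cycle."""
        return STROKES[(tick + cylinder - 1) % 4]

    for tick in range(4):                          # one complete cycle
        plan = {cyl: stroke_of(cyl, tick) for cyl in (1, 2, 3, 4)}
        print(f"moment {tick}: {plan}")

Nothing in this timetable looks at the outside world; the designer’s reasoning was done in advance and is merely played back, which is exactly the second-degree character the author contrasts with the conditioned act of the third degree.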
THE NINE COMPONENTS OF ACTION

What new progress can be achieved by the machine? Will it acquire an ever-increasing independence and take over the functions of man? We have seen that it is able to perform these actions and that it can co-ordinate them and control their efficacy. But what will be its new conquests when it attains the fourth degree?
In order to understand the nature of the human responsibilities that belong to the third degree, we must try to make a general analysis of human action. In order to do this, let us consider man’s work at its most elementary level. Let me imagine myself as a primitive man with nothing but my bare hands to work with. I want to make something. First of all I must have the material to work on. Man can work on a substance, but he cannot make the substance. The first condition of action, then, is to have a material. The second condition is that, in relation to this material, it is necessary to have an organized system, a mechanism which will impart its goal to the material. In other words, it is necessary to have a mechanism which works to some definite end. Let us suppose that the mechanism is myself, man.

What shall I construct from this material? Will it be a god, a table or a bowl? An objective is decided upon— the action is given a goal. What process will enable me to attain my objective? How can I make a bowl from clay; or, having chosen from a chain of mountain peaks the summit that I want to climb, how do I get to the top? By which mountain face? How can I cross this glacier? And that sharp ridge? By making a tunnel? By jumping across the valley? I draw up a programme: a plan of action. This idea is essential: the means by which man, animal, or machine determine their actions towards a goal. It is, as Lalande’s Vocabulaire de la Philosophie says: “the totality of the conditions necessary for the determination of a given phenomenon”.

I know where to make, what to make, and how to make. I have to decide when to make. It is not expedient, if I am not hungry, to pick fruit before it is ripe, when I have no means of preserving it. In short, I estimate the expediency of an act. Now, I am in a position to act. But not before I can be reasonably sure of succeeding. If I have only one hand, I shall be unable to plait rushes to make a basket; if my foot is injured, I cannot run; if my vision is defective, I cannot carry out sentry duty. From this a new condition is attached to action: aptitude. Thank God, I have nothing wrong with my foot or my eyes; I am able to act. This is how I accomplish an elementary action. But to act is not everything; neither is it all to co-ordinate
actions according to a logical programme directed towards a fixed objective. All that I do must be subjected to my constant correction and must be adapted or modified in view of what I have already achieved. This is adjustment; it is here that the intellect intervenes; this is the stage that is essentially human. The workman adapts his work, on the one hand, to the purpose which it is to serve, and, on the other, to the unforeseeable circumstances which in practice affect his execution of it. The work is crowned by this supreme intervention of the intellect.

All human work, then, entails nine components:

Material: on what to act?
Mechanism: who or what will act?
Goal: what is there to be done?
Determinism: how shall it be done?
Expediency: when to act?
Aptitude: the ability to act.
Action: to act.
Co-ordination: to co-ordinate.
Adjustment: to correct and adapt.
The seven last components always depend on man. The first is beyond his scope. As for the second, man is master when he makes the machine act; he is not master when the mechanism is himself. With the advent of the tool and with the advent and development of the machine, man was able to delegate a part of his creative power to them. Thus, he progresses by improving his “artificiata” which he turns into his slaves; and each time that he delegates a new role of action to them, he marks a new degree of automatism. By providing myself with the help of a tool, I confer aptitude on the tool. I could not cut that branch without a saw, or make that hole without a pick. If I provide myself with a machine, it differs from a tool in that it takes charge of my action. It is the first degree of automatism. When the machine passes to the second degree, it guarantees the co-ordination of the action in my stead. At the third degree, the machine estimates the appropriateness of the action.
In the fourth degree, will the machine take on the control of its own directives or programme? This sounds absurd. (But the homeostat of Ashby can actually be seen to attain this apparent “absurdity”.) Will the machine conquer “adjustment”, the supreme component which has hitherto been considered to be an essentially human prerogative? Even here machines are in process of adapting their own actions, in terms of the work already achieved, towards what they are about to do. They are already assuming the function that in human activity marks the highest feat of intellect. Cybernetics played no part in the creation of the first machines of this degree of automatism. It did, however, demonstrate the importance of their basic principle. Such machines were already known to classical mechanics; but their profound significance and future possibilities had passed unperceived. Today, we are able to understand that the machine only reaches its highest fulfilment by adapting its own acts to a given end. We now realize that in the first three degrees the machine was only in its infancy; now the machine is just beginning to mature.
CHAPTER III
The Miracles of Feedback

The mechanism of the Watt governor never fails to impress the uninitiated by its marvellous ingenuity. Two heavy metal balls are attached to a spindle connected to a steam engine; when the machine speeds up, the balls are flung outwards by centrifugal force and their displacement, by acting on levers hinged to a slide valve, cuts down the supply of steam and thus reduces the speed; this brings the balls nearer together, which reopens the slide-valve and restores the engine speed, once more causing the balls to fly apart. This self-righting process continues; the oscillations of the weights gradually settle down until a state of steady equilibrium is reached between input of energy and output of work.

Think of a steam crane with a weight at the end of the cable. If this weight is extra-heavy, the engine power has to be increased. The crane driver can hardly be expected by rule of thumb alone to estimate the weight of the load correctly and to regulate the steam supply accordingly. Errors in judgement on both scores would almost certainly result from this kind of guess-work. The mechanical regulator will, however, make neither of these mistakes; it will always adjust the amount of power to the work to be done. If the cable should break and the engine race, the regulator will act more swiftly and surely than the engineer in cutting off the steam supply. Its perception is never at fault; its attention is never diverted and it never leaves the control levers unattended. One cannot but marvel at the ingenuity of such a mechanism.

Now, this mechanism has been in existence since round about 1780; and, throughout more than a century and a half, no one has seen that, far from being merely “ingenious”, this classic device contains the makings of a revolution. In all the various attempts at classification of machinery,
the Watt governor has never been rated worthy of any very prominent position. Haton de la Goupillière, in his well-known Traité des Mécanismes (1864), drew up what he called: “a systematic index of mechanical devices” in which he lists governors simply as “accessory apparatus”. The Larousse dictionary defines: “Governor: a component which regulates the motion of a mechanism.” Let us widen this definition; if instead of “motion” we write “functioning” and use “mechanism” in its true sense as applying to both artificial and natural phenomena, then surely, in defining the regulation of a natural function in physical terms, we have at once put our finger on one of the mysteries of life? In order to appreciate its significance, we have had to await the cybernetics school to show us the virtues of feedback.

Feedback is a word used in electronics, meaning to feed into the other end. In French, radio technicians sometimes use the expression “retroactive coupling”, more often “reaction”; the first is too long, and “reaction” is too vague a word with a wide variety of interpretations. Let us stick to “feedback”, which is already accepted. In this book the word retroaction will be used to signify the inherent general principle so as to give this term its proper logical meaning: the action of an effect on one of its causes.

Aware of the fact that cybernetics borrows from their technique and vocabulary, certain electronic engineers discount the evidences of the new science, claiming that they know all about it and have been working along these lines for a very long time. That is as may be!— but are they not merely applying the principles without having grasped the inherent theoretical possibilities, in the same way as Watt and his successors did? For it is a fact that, unwittingly, they held within their grasp the key to problems that had arisen in many other fields, even in metaphysics.
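The governor’s self-righting loop, the action of an effect (the speed) on one of its causes (the steam supply), can be imitated numerically in a few lines. The sketch below is an invented illustration of retroaction, not a model of Watt’s actual gearing; the figures and the simple proportional law are hypothetical.

    # A minimal numerical sketch of the Watt governor loop: rising speed
    # spreads the balls, which throttles the steam, which lowers the speed.
    # Figures and the proportional law are invented for illustration.

    SET_SPEED = 100.0     # speed at which the balls neither open nor close the valve
    speed = 120.0         # the engine starts too fast (arbitrary units)

    for revolution in range(10):
        ball_spread = speed - SET_SPEED                # the balls' spread follows the speed
        valve = max(0.0, 1.0 - 0.02 * ball_spread)     # wider spread admits less steam
        speed = speed + 40.0 * valve - 38.0            # steam accelerates, the load retards
        print(f"revolution {revolution}: speed = {speed:.1f}")

Run as it stands, the speed settles within a few revolutions close to the set value, although the load is never measured directly: only the effect is observed, and it acts back on one of its own causes.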
WHERE THE MACHINE SURPASSES ITSELF

If you have a solid-fuel central-heating system in your house, your boiler will probably have a little chain, which, by acting on a damper, controls the draught to the furnace. The other end of the chain is fixed to a temperature detector submerged in the hot-water tank. This device, consisting of a capsule filled with
a liquid which expands when heated, exerts more or less tension on the chain thus causing the damper to open or shut accordingly. When the temperature of the water is too low, the chain pulls on the damper for more draught to draw up the fire and when the water is too hot the air intake is diminished. For all practical purposes this very simple device is remarkably effective in keeping the temperature of the water constant. Thus, the machine adapts itself so as always to produce the same effect; it maintains its internal temperature as an animal
does. Man need only intervene should he wish to set the machine the further goal of establishing a different temperature, which is achieved by a simple adjustment to the length of the chain. The general principle is the same as that underlying the Watt governor; that is to say, in both cases the action of the mechanism is adapted so as to produce and maintain a desired effect. In this case the temperature of the water is the effect; before, it was the speed of the spindle. Here, the variations of the effect are detected by a thermometer; in the other case, by the degree of separation of the weights, that is by a kind of tachometer. In both cases, these variations in effect react on one of the factors, curbing its activity in such a way as to restore stability promptly within the system. This reaction is brought about on the one hand by the movement of the damper allowing the fire to draw up or die down and on the other by a valve which regulates the flow of steam. In both cases there is the uncontrolled factor which may vary from time to time (within limits, of course) without the result being affected by its variations; in one case, this factor is the fuel and in the other, it is the load. This parallelism can be illustrated diagrammatically:
From this point it is easy to distinguish the essential elements of a feedback system. But we have to invent a descriptive
terminology for them since this analysis of the retroactive circuit has never been defined.
(1) The detector: a mechanism that is sensitive to the variations in output of the system.
(2) A message transmitted by the detector and supplying one of the component factors with information.
(3) The reactor: a device which is sensitive to the reception of this information and capable of reacting accordingly on one of the factors.
From this we get the general principles of feedback represented in the following plan:
Now let us try to fit an absolutely concrete example into this abstract framework— that of the maintenance of the water level in a cistern:
The diagram is self-explanatory and illustrates the general sense of the scheme, that one of the component factors becomes a function of the output. The output at the same time continues to be a function of this factor and, for that matter, of all the factors.
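By way of a modern illustration, and with every figure invented for the purpose, the cistern loop can be sketched in the form of a short program: the float is the detector, the valve opening is the message, the inlet valve is the reactor, and the draw-off is the uncontrolled factor.

# A minimal sketch (not from the book) of the cistern example:
# detector = float sensing the level, message = valve opening,
# reactor = inlet valve admitting water. The draw-off is the
# uncontrolled factor; the level stays near the target anyway.
import random

TARGET = 100.0      # desired water level (arbitrary units)
GAIN = 0.5          # how strongly the float closes the valve (assumed)

def step(level, draw_off):
    error = TARGET - level                     # detector measures the deviation
    valve = max(0.0, min(1.0, GAIN * error))   # message: valve opening between 0 and 1
    inflow = 10.0 * valve                      # reactor: inlet admits water
    return level + inflow - draw_off

level = 40.0
for t in range(50):
    level = step(level, draw_off=random.uniform(0.0, 8.0))
print(round(level, 1))   # hovers near TARGET despite the varying draw-off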
A photograph of the internal anatomy of Elsie, who belongs to the species Machina speculatrix created by Grey Walter.
The Conditional Reflex Analogue, CORA, made by Grey Walter to demonstrate his hypothesis of learning. At first, the learning circuits were incorporated in a “Tortoise” (see previous figure) but this was too complicated for class demonstrations, so they were put in a display box (see page 247). In effect, the learning circuits work out the contingency of two series of events and permit the learned response to appear if the contingency turns out to be significant.
So machines, long before the advent of cybernetics, had the power of adapting their present activity both to the work already done and to that which was yet to be performed; in short, the machine adapted itself to seeking the goals determined by man, such as certain speeds, temperatures or water levels. Thus we see that the machine had long emerged from infantile dependency and no longer required human operators to keep it running. But man, so easily susceptible to vanity, has as yet strangely failed to appreciate the importance of a victory which enabled him to delegate to a machine powers demanding, hitherto, the intervention of his intellect. In this elevation of the machine above the merely material plane to which it had apparently been self-condemned in the past the transcendent role of feedback becomes apparent. A new automaton is born into our midst, having a self-contained automatism in contra-distinction to the automatism of cams and lugs, controlled from without. The directive, that is to say the information, does not originate from factors outside the controlling mechanism, but from within the mechanism itself; thus it is truly automatic. A feedback mechanism may be defined in practical terms as a self-correcting device which enables a machine to regulate its operation by adapting to the drift of its own deviations. It will be appreciated that this definition fails to take into account the full implications of feedback, in that no reference is made to the advantages of stabilization. Furthermore, in the final analysis, we can express retroaction as a mathematical concept: a function of a function, relating, through the dependent variable, all the variables of a single function to one of these variables. The idea of retroaction could from this standpoint be extended to all organized phenomena of the universe; to all that Aristotle and many others would call “nature”. The concept of feedback, which Wiener has proclaimed to be “the secret of life” and which is implicit throughout the physiological work of Claude Bernard, can be shown to extend still further, as the secret of universal order. But let us confine our attention for the moment to concrete examples. We have examined a solid-fuel heating system; but what
about an oil furnace, where we are still dealing with a feedback system, but of a different type? In this case, the temperature detector is generally a bi-metal strip thermometer; a thermostat. When this is placed in the hot-water tank, it expands and breaks the electric contact. Thus, when the water exceeds a given temperature (generally 70° C.), a circuit is broken and the supply of oil fuel is interrupted. Consequently, the temperature soon drops to the degree where the electric contact is restored and the oil pump motor starts working once more; the temperature will continue to rise to the point where contact is again interrupted, and so on. . . . Whereas in the coal furnace the corrective mechanism of the ventilation is continuous, in this case the correction is only intermittent. The retroaction is only operative on the heating unit when the temperature falls below 70° C.; when the temperature rises above this mark, no more messages are sent as the feedback will be temporarily cut off, which means that the factor “external temperature” will bring about the cooling of the over-heated water. Refrigerators, pneumatic drills and the safety devices of lifts provide similar illustrations. Feedbacks may be classified as “all or none”, “go or no go” or, even, “one-way” circuit types. But rather than limit ourselves to these sterile forms of intermittently controlled circuits, let us examine the more adaptable retroactive circuits. The simplest type of radio set provides a perfect example.
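The intermittent action just described can be sketched in program form; the one-degree margin and the heating and cooling rates below are invented for the illustration.

# A sketch (not from the book) of the intermittent "all or none" feedback:
# the burner is simply switched off above the set point and on below it,
# so the water temperature oscillates in a narrow band around 70 °C.
SET_POINT = 70.0   # cut-off temperature, as in the text
HEAT_RATE = 0.8    # °C gained per step while the burner runs (assumed)
COOL_RATE = 0.3    # °C lost per step to the surroundings (assumed)

temp, burner_on = 55.0, True
for step in range(200):
    if temp >= SET_POINT:
        burner_on = False          # contact broken: oil supply interrupted
    elif temp < SET_POINT - 1.0:   # small assumed margin so the pump does not chatter
        burner_on = True           # contact restored: pump starts again
    temp += (HEAT_RATE if burner_on else 0.0) - COOL_RATE
print(round(temp, 1))  # stays close to the 70 °C reference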
THE WONDERFUL MECHANISM OF ANTI-FADING
Without feedback a radio set designed to pick up long-range signals would be far too sensitive to permit listening-in to local stations. On the other hand, if local reception were perfect, signals from more distant transmitters would be practically inaudible. In more technical phraseology, it could be said that to ensure good reception— of constant intensity— the power supplied to the loudspeaker must be confined within certain limits. It is easy to imagine a regulatory device whereby the listener himself could compensate for the differences in intensity. But there is nothing along these lines that would overcome fading without also modifying the strength of reception. The difficulty is that the reception fluctuates because of electric
disturbances in the ionized layers of the upper atmosphere, causing the reflection of the radio waves coming from rather more distant stations. This fading fluctuates too rapidly to be controlled by hand. The listener would play about with his volume control in vain; he could not possibly compensate for the fading. In practice, listening-in would be such a hideous procedure that radio could never have attained its present popularity. We may go on to ask whether the machine cannot replace man in work which is too rapid and delicate for him to perform, and, if so, would not this work demand on the part of the machine a speed of judgement and continued precision in rapid performance that would surpass both the physical and intellectual capabilities of man? Only the feedback system affords so many and such intricate possibilities. The automatic regulation of sensitivity, or, as it is better known, the automatic volume control (A.V.C.), which prevents fading, is in fact a feedback system. On the one hand, it balances the strength of the signals received from near or distant transmitting stations; on the other, it overcomes the inconveniences due to fading. Comparisons may be made between this and the problems common to the other self-regulatory devices already described. In the steam-engine the flow of energy is regulated in such a manner as to stabilize the speed, which is itself the effect of these fluctuations of energy. In the coal furnace, the temperature of the water is stabilized; that is to say, the effect of the machine depends on the incorporation within its activity of the effect of one of its essential factors— the flow of oxygen. In the radio set the volume of sound to be stabilized is similarly an effect of a flow of energy, but in this case it is not possible to control the volume of the sound waves by the same regulatory process as can be applied to the flow of steam or air; on the contrary, we must either endure these fluctuations or seek to overcome them in a different way. We can, therefore, only proceed by converting these waves into electric current to be fed into the loudspeaker. Let the current be inversely proportional to the waves received— and we shall have the key to our problem. Automatic volume control is nothing more than control of the amplifier by the current produced in the earlier stages.
The voltage of an intermediate amplifier is controlled from a “ detector” , which transforms the signal into a negative voltage; in other words, the voltage is rectified by a diode in reverse. This voltage is applied to the grids of the amplifying valves (the reactor of the feedback) whence it originally came. Since the amplification factor of a valve is inversely proportional to the bias voltage on its grid, the overall amplification will be increased if the current fed into the loudspeaker be weaker; conversely, the amplification will be smaller if the current be greater.
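A toy numerical sketch of this action follows; the gain law and all the figures are invented for the illustration, and arbitrary units stand in for the valve bias of a real set.

# A toy sketch (not from the book) of automatic volume control:
# the detector measures the output level and feeds a bias back to the
# amplifier, so strong signals are amplified less and faded signals more,
# keeping the loudspeaker level roughly constant.
import math

TARGET = 1.0        # desired output level (arbitrary units)
BASE_GAIN = 20.0    # amplification with no bias applied (assumed)

bias = 0.0
for t in range(300):
    signal = 0.05 * (1.5 + math.sin(t / 15.0))   # slow fading of the incoming carrier
    gain = BASE_GAIN / (1.0 + bias)              # more bias means less amplification
    output = gain * signal
    bias += 0.5 * (output - TARGET)              # detector: rectified output sets the bias
    bias = max(0.0, bias)
print(round(output, 2))  # settles close to TARGET despite the fading input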
In practice, if the system is well designed and the components well adjusted, the signal strength can be almost entirely stabilized by the A.V.C. Moreover, this operation is effected without our even being aware that it is taking place. To us the control seems instantaneous, as we are unable to appreciate through our senses such minute time lags. Being entirely electronically operated, the inertia of this mechanism is practically nil; its action is so rapid and so exact that to us it appears most miraculous. But this perfection is only apparent. Actually, the voltage is not controlled instantaneously; if the voltage is too weak initially, at first it becomes too strong, and if it is too strong, it becomes a little too weak and so on. Its oscillations around the ideal level are damped and settle down gradually. (In the same way, the Watt governor is unable immediately to stabilize the mechanism after a serious disturbance.) But the oscillations are practically inaudible, a filter being provided which assumes the same sort of role for them as the flywheel does for an engine.
There is, then, an inherent drawback to all feedback mechanisms in that the action takes some time to reach stability. But when, from mechanisms where the inertia is practically negligible, we turn to regulatory systems dealing with considerable energies, such as the control of hydraulic turbines, we see that these oscillations of stabilization have greater significance and present serious inconveniences.
AN INDUSTRIAL REVOLUTION
The automatically regulated rolling-mill, invented in America and only recently introduced into Europe, is an example of an industrial feedback system which will serve as an illustration of retroactive mechanisms. It is difficult to ensure perfect uniformity of thickness in sheet metal coming off the rolling-mill. For over a century improvements of machines, methods and material have proved useless in achieving this over-all regularity. Generations of technicians have directed their scientific skill, patience and ingenuity to this end. A rolling-mill differs from the simple machines that we have been describing up till now in that a greater number of factors is involved. At least seven factors bear on the effect— in this case, the finished metal sheet: distance between the rolls, speed of the rolls, thickness of the metal, malleability, ductility, temperature and traction on the sheet exerted by the roll as it passes backwards and forwards. Thinking along traditional lines, the regulation of the product can only be conceived in terms of all the factors; that is, to obtain an effect of a predetermined value, all the dependent factors should be set at values which have been carefully calculated, strictly co-ordinated and rigorously adhered to. This, up to now, has been the general rule on which the control, manufacture, and consequent activity of all mechanisms has been based. But, unfortunately, whilst one of the variables is in the process of regulation, another will be thrown out of equilibrium; interference with one factor will unintentionally affect another; by the time the correct adjustment is carried out, the product on which it works will have changed its characteristics; and when everything is finally in working order once more, the machine, through wear, will have lost its former precision and if,
in spite of everything, production gets under way, the product will sometimes still reveal imperfections. The mill has to be brought to a standstill to enable the speed of the rolls to be regulated with even greater precision; the worn parts are replaced by stronger and more accurate components; there are fresh instructions for the workmen; a new foreman is installed; recriminations are levelled at the suppliers who are accused of not delivering materials of uniform quality; a stricter control of the raw materials is set up. Finally the machine is thrown on the scrap-heap and work is started on another, which is still more complex and which turns out to be even more refractory. In short, the engineer, in all branches of industry, has been carrying on a perpetual struggle to obtain accurate setting, stability and synchronization of all the factors of machines. But the fight has been like that of the man trying to stop the escape of water from a tank riddled with holes; as soon as one hole is stopped up, another leak appears. Rolling-mill engineers, in common with those of other industries, have spent their time and their intelligence in such pursuits, all because a systematic study of automatic control had not hitherto been attempted. The formulation of such a line of reasoning would have provided the solution to the self-regulation of the work of a machine: sending back information about the effect to one of the factors. Before the advent of electronic developments, such progress in rolling-mills would have been unimaginable, but, as a matter of fact, it could have been introduced some fifteen or twenty years earlier, if the problem had been approached with a more open mind instead of purely from the standpoint of accepted technical knowledge. This is the kind of solution which will be adopted henceforth: a feeler measures the thickness of the metal sheets coming off the rolls and, according to their degree of deviation from a fixed gauge, sends an electric signal to one of the factors to correct this deviation. But to which of the factors should the signal be delivered? In the past, this quest for uniformity led to an adjustment being made to the speed of rolling of the metal sheets just before their release from the rolls. Rolling in this way, however, is accompanied by a certain degree of traction or drawing
causing modification of the thickness of the sheets. It is essential, then, that this traction be strictly constant, so that the angular velocity of the rolls diminishes proportionately to the increase in thickness of the sheet being rolled out. This delicate regulatory manipulation is provided by the amplidyne, a contrivance capable of converting the weakest currents directly into motion, thus serving both as an amplifier and a motor. The retroaction from the thickness of the sheets is fed back through the amplidyne to the factor “traction at the output-end of the rolls”.
The feeler translates its deviation into electrical information; or, rather, a radio-active substance is placed underneath the metal sheets, whilst, above, a Geiger counter detects the radiation flux, which is inversely proportional to the thickness. The detector is set in such a manner that it transmits no information if the sheet has the required dimensions (coefficient figures). But as soon as the thickness varies beyond a certain tolerance, a message is immediately transmitted to the amplidyne so that it may correct the traction on the metal sheet and thus cancel the error. The semi-miraculous aspect of feedback becomes apparent here: this correction takes place whatever the cause of the deviation. Whether the failure to reach the standard dimensions be due to a change in the malleability or the ductility of the metal, or whether it should stem from a gradual wearing of the rolls, or from a sudden reduction of the temperature, makes no odds! The regulator is unconcerned with causes; it will detect the deviation and correct it. The errors may even
arise from a factor whose influence has never been properly determined hitherto, or even from a factor whose very existence is unsuspected. Whatever the cause, the disturbance will be overcome none the less effectively. At last, there is no further need to mount guard over the thickness of the metal sheet, nor to stop the machine when the tolerance is overstepped, nor seek the causes of defective production, meanwhile paralysing the whole workshop. Neither need we embark on the construction of new and more accurate machinery which would require more highly skilled operators. From now on, the factors are regulated as efficiently as possible and without fuss and bother. This can be taken for granted and production is even better than before. One is left with the impression that this is more than progress; it is revolution. A whole chapter of the history of industrial production is brought to a close; a new era is opened up. Cybernetics is by no means responsible for progress which was developing before this new grouping of sciences was born or thought of; but it will have an all-important influence on the trend of further progress. The technicians, who have spent their lives struggling to obtain perfect regulatory control over all the dependent factors, do not seem to have realized the importance of this upheaval. Moreover, having searched through the technical literature for evidence of any echo of this revolution, we find only two revealing phrases. Both occur in specialized scientific journals at the end of monographs on the regulation of rolling-mills, published in 1948 and 1949 respectively. “The thickness of a rolled sheet article needs to be kept constant within a high degree of accuracy. . . . For this reason, an automatic control of the thickness is sometimes envisaged by means of measuring devices exercising an over-all control on the pressure of the rollers.” Nothing more. . . . And in the second article: “It would be best to use the thickness readings from the travelling micrometer— a standard instrument in all rolling-mills— to vary the traction according to whether the thickness is too great or too small. For this purpose it would be sufficient to arrange that the readings be fed into the reference circuit of the amplidyne controlling the speed. . . .”
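The loop described in these pages can itself be sketched in program form; the gauge value, tolerance and correction factor below are invented, and roll wear stands in for all the causes the regulator never sees.

# A schematic sketch (not from the book) of the rolling-mill loop:
# a thickness gauge with a tolerance band sends a correction to the
# traction whenever the sheet drifts off gauge. Roll wear and other
# disturbances are never measured; only their effect on thickness is.
import random

GAUGE = 2.00        # required sheet thickness, mm (arbitrary)
TOLERANCE = 0.02    # no message while the sheet lies within this band
GAIN = 0.6          # proportional correction applied via the traction

traction_effect = 0.0
wear = 0.0
for pass_no in range(500):
    wear += 0.0004                          # slow roll wear (unknown to the regulator)
    disturbance = random.gauss(0.0, 0.01)   # temperature, malleability, and so on
    thickness = GAUGE + wear + disturbance - traction_effect
    error = thickness - GAUGE
    if abs(error) > TOLERANCE:              # detector: only out-of-tolerance deviations
        traction_effect += GAIN * error     # reactor: amplidyne adjusts the traction
print(round(thickness, 3))  # stays near GAUGE although the wear keeps accumulating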
Thus, it almost looks as if history were repeating itself: James Watt, too, was unable to see the dynamic implications of his invention. The importance of this self-regulatory action of the finished product extends far beyond the scope of any single industry. Gerard Lehmann, one of the great French specialists in automatic control, was thinking of just such a united front when he enumerated the four inevitable consequences of this new technique:
(1) The reduction in cost of a machine which no longer need be constructed with such precision.
(2) Lower-priced goods, resulting largely from a relaxation of the minimum standards of tolerance for raw materials. (The possibility of economy in highly skilled labour might also be included.)
(3) Increase in the speed of the machine, which will often be made possible by the continuous nature of the control of the product.
(4) Improvement in the quality of the product by the narrowing of production tolerances.
And Lehmann quotes examples of machines that can be controlled by the deviation of even fundamental dimensions of the product, such as the thickness of metal sheet, the diameter of a wire or cable after drawing, the diameter of rubber tubing, colour printing and so forth. The control of printing presses is particularly interesting: defects of alignment can be detected by a photocell giving its own correctional directives to the mechanism of the registry which adjusts the position of the paper. When one knows how much time can be lost by registering in colour printing, one readily understands the enormous benefits afforded to the printers by such an automatic regulatory process. We are now better able to understand the operation of the feedback system. We see that metal is thus freed from the effects of the variation of all its factors, malleability and ductility of the metal, gap between the mill rolls, and even from the effect of the regulatory process itself (traction at the end of the rolls) and, moreover, it is uninfluenced by factors which we are unable to detect.
One condition only is necessary for perfect regulation: the variation of the factors must not exceed certain limits, which in each case have to be defined in terms of experience. The effect thus becomes independent of certain variations in the very causes which determine it. This formula demonstrates the amazing function of the feedback system: the “causes” no longer contribute towards producing the effect. It even appears, in this case, as if the principle of causality no longer obtains. We will, however, develop this hypothesis later on. For the moment let us simply formulate the definition of a feedback system as follows: A feedback system is a “retroactive coupling” which, within certain limits, will protect the effect from the variations of its factors. This is the fourth degree of automatism: an auto-control which has come to life before our very eyes.
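The earlier remark that retroaction is a function of a function can be put into symbols; the notation below is a sketch only and is not the author's own.

% A sketch (the notation is not the author's) of retroaction as a
% "function of a function": the effect E depends on its factors, and the
% feedback makes one of those factors depend on E in its turn.
\[
E = f(x_1, x_2, \ldots, x_n), \qquad x_1 = g(E)
\;\Longrightarrow\;
E = f\bigl(g(E),\, x_2, \ldots, x_n\bigr).
\]
% When g is arranged so that its correction opposes any deviation of E,
% the solution E of this implicit equation is, within limits, insensitive
% to changes in x_2, ..., x_n, which is the definition given above.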
CHAPTER IV
Towards Factories without Men: Automatic Control

There is no feedback that deals with such enormous
quantities of energy as those concerned in the regulation of hydro-electric turbines. All variations of power consumption in the mains of any one area are transmuted into a variation in the speed of the generators and hence into variations in the frequency and voltage. But the consumers of electricity require that the frequency and voltage of the mains supply remain constant. How can this be achieved? The only way is to increase or diminish the energy supplied by the water, regulating the flow of water to the turbines according to whether the demand for power increases or diminishes. Much in the same way as in the Watt governor, the differences in speed would be detected by a tachometer and the control of the motive power would be effected accordingly by information coming from the tachometer. But there is a great difference in the two cases, as the inertia of the power flow is much greater in the turbines. Some figures will give an idea of the quantities dealt with in installations with a fall of water from a considerable height; the water power carried by the aqueducts to the giant machinery of Modane constitutes a world record. The pressure of water in the pipes guarantees a flow of 12 cubic metres a second, the fall being 847 metres. The control valve is able to shut off 1,300 metric tons of water.
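Taking the quoted figures at face value, and neglecting every loss, the power handled is of the order of:

% An order-of-magnitude check, not a figure from the book: only the density
% of water (1000 kg per cubic metre) and g = 9.81 m/s^2 are assumed beyond
% the flow and the fall quoted in the text.
\[
P = \rho\, g\, Q\, h \approx 1000 \times 9.81 \times 12 \times 847
\;\approx\; 1.0 \times 10^{8}\ \text{watts},
\]
% that is, roughly 100,000 kilowatts.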
THE DEFECTS OF FEEDBACK: LACK OF STRENGTH AND DELAY IN REGULATION

It is impossible for the tachometric detectors to act directly on the flood-gates (except in very special cases, such as the
extremely small Pelton turbines). It is only necessary for the detector to exercise a controlling function and its work is then passed on to an auxiliary motor. When the tachometer has a direct action, as in the steam-engine, there is direct regulation, but when it actuates a motor which supplies the necessary power for the operation, it is called “indirect regulation” in the terminology of hydraulics. Thus the retroactive circuit regulating the turbines consists of:
(1) a tachometer (detector);
(2) an auxiliary motor receiving the message of the detector;
(3) a flood-gate regulating the inflow of water (reactor).
But such a regulation would in effect be very far from perfect. Following a serious disturbance of any kind it would not adjust the inflow of water to the current consumed sufficiently quickly, and it might in fact never succeed in reaching an equilibrium. On the other hand, oscillations inherent in all retroactive regulatory systems may, owing to the inertia of the flow, assume considerable proportions and, on account of the variations of the load of water in the pipes, they might set up violent knocking. It is easy to understand the process of regulatory oscillations of hydraulic turbines. Let us take the case where the demand for the supply of current is suddenly decreased; the turbine will tend to race. The tachometer starts to turn faster and faster, which actuates the motor in one direction and results in the closing of the flood-gate. When the speed is stabilized, the flood-gate, having closed, will not admit enough
water; this will decrease the speed of the turbine and thus, in turn, of the tachometer, and will make the motor rotate in the opposite direction. In other words, if the flood-gate does not cease to act before the desired speed is re-established, it overreaches itself. But this is the defect of such a system: the motor will go on running until the tachometer has got back to its normal speed, the speed at which it ceases to send any controlling orders. In the same way, if a car driver keeps his steering wheel in
the position for steering round a sharp bend until the car is in the required direction, he will straighten up too late and his turn will be too sharp. Just as the learner driver who does not know how to keep straight, when he finds himself going crooked, will compensate too late and go into a series of zig-zags, so the regulator forces the system that it should stabilize into a series of oscillations which may be uncontrollable; it is then said that the motor “hunts”. One can see from all this that feedback is not without its defects. On the one hand, it often has not the strength itself to react on the factor and on the other hand, its regulating action depends on the events of the preceding moment. In the present state of our technical knowledge, the feebleness of the message is not important; all information from the detector can be translated into electrical data and all currents, however weak, can be amplified as much as required. As for what we call the “delay in regulation”, the defect is not due to any slowness in regulation that might be remediable; one might succeed in accelerating the working of the motor, or diminishing by appropriate means the resistance offered by the flood-gate, but one could never alter the fact that the messages received at the time t from the tachometer by the flood-gate via the motor date from the time t − e, and e can never be zero, since it represents the time taken by the machine to produce its effect. In more familiar language, the regulation is always one stage behind. In more abstract language, we should say that the feedback is not sensitive to the effect, but to the variations of the effect in time. From the moment that time intervenes as an essential factor in the regulation, it will continue to play an important part. We may ask if this is tantamount to admitting that it is
impossible to overcome this delay in adaptation or if there is nothing that we can do against these wide oscillatory swings or “hunting” of the mechanism? Fortunately this is not so; and here an important new element enters into the question. In order that the regulatory mechanism should be perfect, it would be necessary for the tachometer “to know” the position of the flood-gate at the moment of action; thus it could stop the orders to the motor at the right moment. In order that the tachometer should “know” what the flood-gate is doing, it would only be necessary for the flood-gate to send out messages indicating its position. Let us turn to the diagram of turbine regulation, once more, and add a connexion from the flood-gate to the tachometer and we have the following:
The connexion between the tachometer and flood-gate, then, is nothing else than a feedback on a feedback. This is apparent immediately if we turn the diagram round and we look upon the motor as a machine requiring regulation. From now on the detector-tachometer, receiving messages from the flood-gate reactor about the execution of the previous orders, will “know”, just as the car driver, having zig-zagged several times while learning the feel of the steering, ultimately acquires the right balance between his sensations and his performance (between his observations and his reactions) and knows by how much and when to straighten up his steering after he has negotiated a bend. In short, a feedback harmonizes a factor with a given effect,
couples a detector to a reactor. But in certain cases, where great inertia is involved, where the relay of an auxiliary motor slows down the transmission of the messages, it is necessary to harmonize both the input and the output of a feedback. This is looked upon, then, as if it had a direct action and it is also given a feedback— a counter feedback— so as to harmonize the message sent out by the tachometer with the message received by the flood-gate. This system is sometimes described as a servo-mechanism, but such ill-defined words need clarification.
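The difference that this counter feedback makes can be sketched in program form; all the numbers below are invented, and feeding the gate position back as a fixed bias is only one simple way of doing what the text describes.

# A toy sketch (not from the book) of turbine regulation and "hunting".
# The demand for current has just dropped, so the turbine tends to race.
SET_SPEED = 1.0

def simulate(with_gate_feedback):
    speed, gate, load = 1.0, 0.6, 0.3   # the load has just fallen from 0.6 to 0.3
    for _ in range(300):
        speed += 0.2 * (gate - load)    # surplus water power makes the turbine race
        error = speed - SET_SPEED
        if with_gate_feedback:
            # the governor "knows" where the gate stands and drives it toward a
            # position fixed by the present speed error, stopping at the right moment
            target = 0.6 - 4.0 * error
            gate += 0.5 * (target - gate)
        else:
            gate += -0.5 * error        # motor runs for as long as any error remains
    return round(speed, 2)

print(simulate(False), simulate(True))
# without the link the swings keep growing ("hunting"); with it the speed
# settles, a little above the set point (practical governors make the gate
# feedback temporary so that even this small offset disappears)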
THE FARCOT SERVO-MOTOR
If the Englishman James Watt, looked upon as the creator of the first feedback, is considered as one of the chief forerunners of cybernetics, the Frenchman Leon Farcot, the inventor of the “servo” principle of servo-mechanisms, is equally entitled to this position. But he is almost unknown. An engineer, born in 1823, he was the son of a machine designer who made many improvements to the steam-engine.
In 1868, when he was studying the control of ships’ steering mechanisms, he invented a device in this connexion which he baptized “the servo-motor” or “moteur asservi” and which
he soon applied to other machines where considerable forces are required to be controlled by relatively weak ones.1 Let us imagine a cylinder filled with oil and, on the other hand, a motor capable of raising the oil to a high pressure. Between the two is a slide-valve distributing the pressure and having a movable part which opens or shuts the two orifices communicating with the extremities of the cylinder by means of tubes. The pressure is thus directed first to one face of a piston and then to the other, according to whether the vent holes are closed partly on one side or partly on the other. It will be apparent that the movements of the slide-valve determine the movements of the piston. Thus, with a minimum of energy and without any great displacement, considerably greater and more powerful movements can be obtained. The problem of steering a ship, then, seems to be resolved. The helmsman can in effect determine the position of the rudder, by means of a small lever, whatever the speed of the ship or the force of the mass of water against the cheek of the rudder. However, such a mechanism would be very difficult to realize practically. Certainly the rudder would incline towards the right or the left, according to whether the lever was moved to one side or the other, and would stop moving only when the lever was brought back to a neutral position. Above all, a given displacement of the control lever would correspond not only to a turning of the rudder, but also to a certain speed of turning. Even though one could conceive that the man at the wheel might learn to adapt his movements correspondingly, such a system would be of no practical use. It would be a considerable advance if the helmsman were given a dial whose pointer would indicate at each moment the position of the rudder. The man would then watch the movements of the indicator and set the control lever accordingly. But could one not go even further? Could not mechanisms replace man’s continual concentration in this delicate operation of superintendence and correction? Farcot thought it possible and created his servo-mechanism. The helmsman originally acted as a link between the lever and ship’s rudder, between the input and the output of the system. Let us make a mechanical link between the position of the rudder and the position
1 The American T. E. Sickles had designed a similar system in 1849.
of the lever, the output and the input; they will be dependent on one another. The machine thus assumes a function which, in man, was strictly intellectual, so that a simple sliding rod plays the part of a man plus his intellectual powers. This is an excellent example of a machine exhibiting artificial thinking. The machine achieves this function of integration by a very different process from the human one; its methods have nothing in common with our thinking. But the result is the same— or, rather, it is better. As soon as the servo-mechanism is used, the man at the wheel has only to move the lever in proportion to the degree of rotation that he wishes to give to the rudder; he knows in advance at what position the rudder will come to rest. The machine takes complete command; man only sets it a goal: to keep to its course. The Farcot servo-mechanism works in the following manner.
Farcot’s Servo-Motor
The slide-valve is joined at B to a floating lever ABC. The helmsman applies his control at the end of the lever at A (input). It is transmitted by B to the distributing organ and thence, through the oil pressure, to the piston (output). The servo-mechanism has to connect the output to the input, the piston to the lever. This is done by a sliding rod, CD, which is linked to the floating lever at point C and presses on D by way of a small roller on an inclined cam fixed to the piston rod. The system is at rest only if the pressure is equally
distributed on the two faces of the piston. For this to happen B can only be at a point b. Let us now suppose that A is displaced downwards: B will be displaced to b'; and the piston will be displaced, but only to such an extent that, by the intervention of the rods of the servo-mechanism, B returns to b, its original position, where the pressure is again equal on both sides of the piston. Thus a given movement of the control corresponds to a particular displacement of the piston and of the piston rod: the output acts on the input. This principle of servo-mechanism is of great mechanical importance and has had numerous applications, such as on warships, in automatic gun-turrets and the automatic pilot of large aeroplanes, where the servo-mechanisms work on oil pressure exactly as in the Farcot servo-motors. The automatic pilots are even said to work too perfectly; pilots complain that their action is so smooth that in order to have the feeling that anything has happened, they must wait to see what the aeroplane does. We have just described two machines that are generally held to be servo-mechanisms: the regulators of turbines and the Farcot servo-motor. But is it possible to define a servo-mechanism? In what way does it differ or not differ from other mechanisms, in particular from those that have a feedback? These questions merit a reply for, as Norbert Wiener says, “We are entering the age of servo-mechanisms”. I must own that, for many years, I never clearly understood what a servo-mechanism was. I put this down to my own deficiency of understanding and my absence of technical knowledge, until I read the book on servo-mechanisms in the well-known Radiation Laboratory Series.1 There I came across the little phrase: “It is almost as difficult for practitioners in the servo art to agree on the definition of a servo as it is for a group of theologians to agree on sin”. I discovered that if one generalizes, without considering any particular type of mechanism, it is possible to distinguish clearly between two different mechanisms using a retroactive circuit: the regulator and the servo-mechanism. What is a servo-mechanism? What are its peculiar characteristics? How can they be distinguished from feedback, now that
1 H. M. James, N. B. Nichols and R. S. Phillips, Theory of Servo-Mechanisms, M.I.T. Radiation Laboratory Series, vol. 25 (McGraw-Hill), 1947.
we are familiar with the regulation of speed by a tachometer or the control of level by a float and needle valve? But more precisely, what is specific in the turbine regulator or the Farcot motor? These questions are very difficult: they cannot be approached without previous study of the literature; but at the end of it all, it only seems to add to the general confusion to have twenty different ideas and definitions paraded before one. Hazen and Hall1 define it as a power-amplifying device in which the amplifying element driving the output is actuated by the difference between the input and the output. It should be noted that Hazen and Hall’s definition does not imply the idea of retroaction, which makes it difficult to understand their comparison of input and output. MacColl, in his well-known book,2 defines first the essential function of a servo-mechanism: to give the variations of the output signal in time the same functional expression as those of the input signal (which would include the Farcot motor without servo action in the category of the servo-mechanisms). MacColl gives as an example a mechanism without retroaction: an electric current after amplification acts on a motor whose movements— outside the system— will reproduce the variations of the input current; that is all, but it is already enough to mislead us for, in this case, a simple system of rods where the output obeys the orders of an input lever would be a servo-mechanism, as would all mechanisms whose effect obeys a command. But the same author adds the qualification that “the generally accepted sense” of the term under consideration is different; in order to have its usual significance, the system must, in addition, be moved by the difference between its input and output signals. This comes rather nearer to the definition of Hazen and Hall. For F. H. Raymond,3 “a servo-mechanism is a physical assembly composed of a system detecting the deviations of the co-ordinates in relation to their given values (regulatory system) and supplying a regulated system with forces of appropriate value in such a way that the deviations
1 Chapter on “Servo-mechanisms”, Electronic Instruments, M.I.T. Radiation Laboratory Series, vol. 21 (McGraw-Hill).
2 Fundamental Theory of Servo-mechanisms, Bell Telephone Laboratories Publication (Van Nostrand).
3 “Electronique et automatisme” (Syndicat des Machines-Outils).
disappear (motor system) by imparting energy to an auxiliary source.” For James, Nichols and Phillips,1 a servo-system is a combination of elements for the control of a source of power in which the output of the system or a function of the output is fed back for comparison with the input and the difference between these two quantities is used in controlling the power. All this is rather abstruse to those who are not specialists; the best formula is surely the last, for it is confused only in its wording. One could say the same thing in fewer words: a servo-mechanism is one in which a retroactive circuit controls a source of energy by the difference between the input and output signals. In England, A. L. Whitely, in the report of the opening of the Convention on Automatic Regulators and Servo-Mechanisms, 19th May, 1947,2 proposed terminology containing ideas which are still more pragmatic than the others: what differentiates industrial automatic regulators, process controllers and servo-systems is above all the delay of their reactions! The servo-mechanism is defined as: “a control of position in a closed circuit, generally intended to maintain angular correspondence between two axes more or less distant from one another.” This restricts the servos to remote control, which is nothing but the application of a much more general principle. The French specialists avoid explicit definitions. G. Lehmann and F.-H. Raymond have collaborated in the best work on the subject3 in the French language. Lehmann writes “without defining servo-mechanisms, let us restrict ourselves to . . .” and F.-H. Raymond, having asked the question “what is a servo-mechanism?”, only answers it by citing a “typical example”; then further on he says: “a servo-mechanism is defined by its goal: to establish between two values, one of which is the input signal and the other the output, a definite relation” (which, like the formula of MacColl, amounts to admitting all mechanisms as servo-mechanisms). In the same way, Pierre Dejean says:4 “We will not attempt
1 Op. cit.
2 Journal of the Institute of Electrical Engineers.
3 “Servo-mécanismes”, 1947, Part II; A. P. Golombani, G. Lehmann, J. Loeb, A. Pommelet, F.-H. Raymond; 2 vols. (Société d’Editions d’Enseignement supérieur).
4 Toute la Radio, November, 1949, “Les servo-mécanismes”.
to define this term.” In a remarkable glossary drawn up during a course of lectures in 1945 at the Conservatoire des Arts et Metiers, servo-mechanisms are not even mentioned. It only deals with thermo-regulation, from which it could be inferred that servo-mechanisms do not apply. All this goes to show that the unification of terminology should not become the monopoly of any particular branch of science. Here, however, are two French definitions: “A servo-mechanism is a circuit in which the magnitude of the output
depends on the difference between the input and output values.”1 “A servo-mechanism is a power-amplifying mechanism intended to ensure an output bearing a functional relation to the input value.”2 At the end of it all we were not able to find any common basis on which to formulate any ideas; everything seemed even more confused in the light of this review of the terminology. Professeur Veron, opening the Congres de la Regulation Thermique at the Conservatoire des Arts et Metiers in 1945, was right when he said that “the vocabulary of regulation is a source of difficulty, obscurity and misunderstanding; it stifles this excellent technique and it sets people against it.”3 Our ideas became even more confused when we learned, in another text,4 that our well-known specialist Gerard Lehmann has a personal tendency to consider the servo-mechanism as a machine capable of replacing a human act, either wholly or partially. Our ideas get even more hazy when, reading in the Theory of Servo-Mechanisms of the M.I.T., we see the simple regulation of temperature by a thermostat described as a servo-mechanism! This same book, however, insists that “the output signal be sent back to the input signal for comparison”. It is true that in the thermostat the effect is sent back to one of the factors, the factor oil-pump or compressor, but it is not compared with it. We can find absolutely no common elements in all these definitions. We see them, on the contrary, as being more often contradictory. Why does one author insist on an auxiliary source
1 F.-H. Raymond, L’Onde électrique, January, 1950.
2 J. R. Duthil, L’Onde électrique, October, 1950.
3 Mesures, May, 1945.
4 L’Onde électrique, June, 1948, “Les servo-mécanismes en liaison avec les problèmes de radio-électricité”.
of power and another on amplification? Why is the relation between input and output nowhere explicit? Why does the best formula amongst them, that of Nichols and Phillips, require that the control of the retroaction should act on a “source of power” when it can just as well act on a factor that is not a source of power? Is the source of auxiliary power necessary? No; in the regulation of the turbine, it is only a practical device used to compensate for the feebleness of the message. As for the oil compressor of the Farcot machine, it is in no sense a source of auxiliary energy, but the only source of operative power. If we take away the oil compressor, there is no Farcot machine. Without the motor that shuts the flood-gates, the regulator would, in many cases, be lacking in efficacy. Neither is amplification especially characteristic of servo-mechanisms. It is true that these mechanisms are often employed to amplify movements or forces, but it is not an essential function. In microsurgical techniques, similar mechanisms are used to reduce the scale of the movements, and for remote control of highly radio-active materials, servo-mechanisms are used which are neither reducers nor amplifiers. How can such a confusion of ideas be explained? We are bound to admit that the definitions are not always applied to the same thing. It should also be understood that the writers were primarily electronic engineers who were chiefly interested in electronic servo-mechanisms. One of the most eminent amongst them1 admitted that when dealing with servo-mechanisms of other industries he “had difficulty in getting rid of his accustomed modes of thought”. Servo-mechanisms based on electronic techniques have been able to perform almost miraculous feats. The complex apparatus of the servo-mechanism is much more easily realizable by electric couplings than by mechanical ones. The study of the complex electric circuits of these systems could be developed even further by electrotechnicians using the mathematical methods of harmonic analysis. It is well known in the history of science that the general laws governing stability and the accuracy of retroactive regulation (Nyquist’s criterion of stability and Bode’s relation) have been established by the
1 Jacques Loeb, during a conference on servo-mechanisms, held in London in 1947.
theoretical physicists of the Bell Telephone Company studying special cases of counter-reaction amplification. But if radio-technique rules supreme over servo-mechanisms both in theory and in practice, it cannot annex them entirely. Regulators exist in numerous other industries; certain engineers of other branches using different techniques have found ways to prevent “hunting” in their servo-mechanisms. Here we wish to generalize as far as possible and to extend the idea of automatic regulation to as many different applications as possible, but at the same time we do not wish to limit ourselves to any one technique. We wish to show that most natural effects are organized according to these principles, in which the manifestation of universal order is inherent. It is necessary to abstract the principles of regulation from the mere mechanisms which bring them into effect; for this reason we must inevitably go beyond terms of simple pragmatic definition, where the accessories belonging to certain techniques are not always discernible from the conditions inherent in the essential nature of the system. If science is really concerned with the general and not the particular, then the person who sees things from the general standpoint has the greatest chance of coming nearest to the truth— even in spite of the technicians, who may succeed in constructing wonderful mechanisms and very useful machines. If we manage to arrive at some clear definitions in this confused domain, we shall feel fully justified, for does not Pascal’s golden rule require that reasoning should replace what is defined by its definition? Having gleaned hardly any general ideas from the specialists, and knowing that henceforth no definition can help contradicting others which have gone before, we will start from scratch with no preconceived ideas.
FIXED AND VARIABLE REFERENCES
First let us have a look at another servo-mechanism that is very clearly defined. This time it is to be an electrical one. Let us choose the type of servo-mechanism consisting of a potentiometric control of movements made by an electric motor; in other words one where the angular position of one axis is electrically dependent on that of another axis, which may be some distance away.
Two potentiometers A and B are connected in the arms of a Wheatstone Bridge. When the resistances of such a system are in equilibrium, we know that there is no current passing across the bridge to which they are connected. The bridge is composed of the slider of the potentiometer A, the coil of a polarized relay, R, and the slider of the potentiometer B. According to whether or not the system is in equilibrium, the relay, R, will or will not pass current, and thus will or will not act on the contact, P, which is in the circuit of an electric motor, M. Even better, according to the direction of the disequilibrium, that is to say the direction of attraction or repulsion of the contact, the motor will turn in one or other direction. We have only to connect the motor to the slider of the potentiometer B to get a closed circuit: the position of B will react on the relay, R, and on the whole of the system, exactly as the corresponding position of A acted. Then for all changes of position of A, M will obey; but as it will send information to the control, through B, it will operate only until the moment when, the equilibrium in the relay R being re-established, the electrode P, returning to the central position, cuts off the current. And if the motor turns
more than it ought in obedience to A, then the new disequilibrium of the bridge, in the opposite direction this time, will bring about a rotation of the motor in the opposite direction, which will in this way correct its excessive action.
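In program form, the behaviour of this bridge-controlled motor can be sketched as follows; the step size and the balance tolerance are invented for the illustration.

# A toy sketch (not from the book) of the bridge-controlled servo:
# the motor turns potentiometer B until the bridge balances against
# potentiometer A, so the B axis follows whatever position A is given.
def follow(a_position, b_position, steps=200):
    for _ in range(steps):
        imbalance = a_position - b_position   # signal through the relay coil
        if abs(imbalance) < 0.001:            # bridge balanced: contact P opens
            break
        b_position += 0.2 * imbalance         # motor M drives B toward balance
    return round(b_position, 2)

print(follow(a_position=0.75, b_position=0.10))  # B ends up at about 0.75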
We are now in a position to understand the principle of servo-mechanisms. But before looking for what might distinguish between two types amongst all the retroactive mechanisms mentioned up to now, let us see in what respects they may be similar. One general characteristic is at once evident: they all have the same objective: to regulate the effect, or at least to control it. This is the right word: these mechanisms are automatic “controllers”. In the proposed glossary of automatic control terms of the Association of American Electronic Engineers, we find: Automatic Controller: a mechanism which measures the value of a quantity or a variable state and brings about corrections or limits the deviations of this value measured in relation to a selected reference. The “quantity” which is controlled in this way is not necessarily the effect of the controlling mechanism and should perhaps be called auto-controlled. Which permits the definition: an automatically controlled mechanism is one which has a retroactive coupling to enable its effect to correspond to a given reference. The word “reference” is very important. In the Watt regulator or the turbine regulator, it is a certain speed, which depends on the mechanical working of the tachometer. In the regulatory mechanism of the furnace, it is a certain temperature which depends on the length of the chain. In the Farcot motor, it is the position of the control lever; in the servo-mechanism concerned with angular position, the position of the input axis of the system. We see here two different types of controllers; one where the reference is fixed and the other where the reference is mobile. If the effect belongs to the fixed type, we have the regulator, whose function is implicit in the name, and which is the type of feedback with which we are familiar. If the effect is of the variable type, with a single control, it is a servo-mechanism, whose name is equally apt, since it indicates that the effect of this mechanism is dependent on a certain variable. The regulator of turbines with auxiliary motors is in no way a servo-mechanism. But it is easy to explain why it has acquired this misnomer; it does include a servo-mechanism, that of its auxiliary motor. Let us summarize. Two kinds of self-controlling machines exist: the regulators whose effect has a fixed value, and the
servo-mechanisms whose effect has a value depending on the value of a variable which is the “control”. This idea is simple and reveals itself to be accurate. We have found it confirmed by the technical authority, Professor Arnold Tustin, of the University of Birmingham, who during the war elaborated a system for the movement of gun turrets and naval guns. According to him, if a machine were entrusted with driving a car, it would be a regulator on a straight road, and a servo on a winding one. One is tempted to say that in a regulator the effect is protected from the variation of all the factors, whilst in a servo-mechanism, the effect is subject to the variation of one of the factors, the control of the system. It looks as if the feedback of the regulator plays its part perfectly, which is that of removing the effect from contingency; and that in the servo-mechanism, one of the factors escapes its organizing power. This is not quite exact, because the control is not a factor of the machine itself but of the detector, as will be seen.
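Professor Tustin's image can be sketched in program form: one and the same loop, given a fixed reference, behaves as a regulator, and given a moving reference, as a servo; all the figures below are invented.

# A small sketch (not from the book) of the distinction just drawn:
# the same retroactive loop is a regulator when its reference is held
# fixed and a servo-mechanism when the reference is made to vary.
def track(reference_values, gain=0.4):
    effect = 0.0
    history = []
    for ref in reference_values:
        effect += gain * (ref - effect)   # correction proportional to the deviation
        history.append(round(effect, 2))
    return history

straight_road = [1.0] * 20                           # fixed reference: a regulator
winding_road = [1.0, 1.2, 1.5, 1.3, 0.9, 0.7] * 4    # moving reference: a servo
print(track(straight_road)[-1])    # settles at the fixed reference, 1.0
print(track(winding_road)[-3:])    # keeps chasing the changing reference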
REGULATORS AND SERVO-MECHANISMS

In regulators, the reference is fixed because the retroactive mechanisms are strictly determined by the constructor; the least change in the electronic characteristics of the feeler or the pivots of the tachometer would bring about a change in the reference which would modify the effect. In servo-mechanisms, instead of seeking the stabilization of the detector, one of the factors is left variable on purpose: the control. We can examine the Farcot machine from this point of view. It is a machine which, receiving oil under pressure, gives a longitudinal displacement to the piston rod. This effect is detected by the system of cams and rods which, by means of the vent holes of the slide-valve (which may be completely or partially open), reacts on the effect itself. The control, therefore, acts on the position of the lever which acts on the position of the slide-valve, as shown in the following diagram. The electrical angular position servo-mechanism gives exactly the same diagram. A motor, M, causes an axis, B, to revolve. But the position of B is detected by the Wheatstone bridge; it is compared with the position of A, the position of reference, controlling the system; and the retroactive message
is returned to M, causing it to turn in such a direction that B is always in the position required by A. Thus, the servo-mechanism can be distinguished from the regulator in that the intervening control comes from the exterior of the system. Its diagram differs by the arrow of control which acts on the feedback.
This sketch shows how, in retroaction, the control plays quite a different role from that of the factors. Let us admit by supposition that which we will later demonstrate by logic: the effect is guaranteed by the feedback against the variations of all its factors, but it remains sensitive to anything which can affect the feedback itself. If no characteristic of the feedback varies, the reference is fixed and we have a regulator. If a characteristic varies, the reference is mobile; this mobile characteristic becomes a
“control”; it alone affects the effect which the retroaction protects from the variations of its factors, and this is what constitutes a servo-mechanism. So that if one fixes the control of a “servo”, one obtains a regulator. The helmsman goes to lunch; he fixes his tiller in a certain position; and the Farcot machine becomes a regulator. Will it not always bring back the rudder to this position no matter what factor may vary? Conversely, if the reference of a regulator be freed, we have a servo-mechanism. It is what would happen if the setting of a thermostat became freely mobile. It would become the control of a servo-mechanism. And in the same way, if one of the electrical data of the apparatus measuring the thickness of sheet-metal were made variable, the thickness of the sheet would depend on this variation, even though the effect remained protected from the action of its real factors.

How does this control operate? In order to understand this, it is only necessary to look carefully at two examples of servo-mechanisms that we already know. In the Farcot machine, the displacement of the piston rod affects the rod of the slide-valve distributing the pressure; but this rod itself depends on the manual control: there is, then, a convergence of controlling factors in the same organ; on the one hand, the message of the effect, and on the other, the external control (which is clearly expressed in the diagram which follows). In the same way, in the angular servo-mechanism, the retroactive message of the rotation of one axis converges on the bridge in a form electrically equivalent to the control message, itself the electrical equivalent of the angular position of another axis. Retroactive coupling must be seen as a series of effects. The control acts on any one of these effects in which the message and the control merge together. Practically, the genius of the inventor is displayed in his having created a retroactive chain such that one of its links can be easily affected by the particular variable with which he wishes to control the system.

From the theoretical point of view, every “reference” plays an extremely important role. It is this that gives the system its end-goal. It is the neutral point at which every message is suppressed when no correction is necessary; it is the position at
which retroaction is achieved so perfectly that there is no correction to be made.
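The angular servo-mechanism just described, and the helmsman's lashed tiller, can both be imitated by a single loop of code. The sketch below is only an illustration, with an invented gain and invented time step; nothing in it comes from the book beyond the idea that the motor turns the output axis so as to annul the deviation from the reference, and that clamping the reference turns the servo into a regulator.

```python
import math

def angular_servo(control_angle, steps=300, dt=0.01, gain=8.0):
    """Crude model of the angular servo-mechanism: the bridge compares the
    position of axis B with the position of axis A (the control), and the
    motor M turns B at a rate proportional to that retroactive message."""
    b = 0.0
    for t in range(steps):
        a = control_angle(t * dt)   # reference: position of the input axis A
        error = a - b               # the message vanishes when B is where A demands
        b += gain * error * dt      # the motor turns B so as to annul the deviation
    return b

# With the control clamped (the helmsman lashes his tiller), the loop behaves
# as a regulator holding one position:
print(round(angular_servo(lambda t: math.pi / 4), 3))              # about 0.785

# With a moving control it is a true servo-mechanism, reproducing the variation:
print(round(angular_servo(lambda t: math.pi / 4 + 0.5 * math.sin(t)), 3))
```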
AN IMPORTANT MECHANISM: THE AMPLIFIER WITH NEGATIVE FEEDBACK

An example which enables us better to understand the surprising possibilities of servo-mechanisms is the “negative feedback”, well known to telegraphic and telephonic engineers. All amplifiers of modulated current, when increasing the voltage of these currents, cannot avoid distorting them: that is to say, the shape of the electric oscillations at the output of the amplifier is no longer the same as it was at the input end. These distortions may be of different kinds. Those that cause the greatest interference with reception are the non-linear types of distortion which depend on the characteristics of each different valve; they appear at the output as interfering waves superimposed on the waves coming from the input. Amongst other distortions, we may mention those of frequency (not all audio-frequencies are transmitted in the same manner) and phase distortions (the delay in amplification caused by the transmission of the signal is not the same for all frequencies). However, in spite of all these pitfalls, the musical tone is not very much affected through our sets; this is due to yet another marvel of feedback.

Imagine an amplifier, A, that multiplies voltage a hundredfold. When it receives one volt it will yield 100 volts. Let us shunt some of the output current, say 10 volts. Let us change its phase and reinject this voltage into the input of the amplifier.

[Diagram: amplifier A, with 1 volt at the input, 100 volts at the output, and 10 volts fed back to the input.]
Since, in opposite phase, it is always of an opposite sign to the input voltage, at the input the voltage will be weaker, and will diminish proportionately to the voltage at the output. Up till
now, there is nothing especially miraculous; we have simply succeeded in diminishing the amplification power! But we have retroactively shunted an output voltage, which is in reality very complex: it is composed (1) of a voltage of the same strength as that of the input; (2) of a voltage representing the distortion. By changing the sign of the current, we obtain very different effects according to which component we are dealing with. The one, corresponding to a modulated current representing the sound coming from the receiving end, finds a current at the input of the same type but with a different sign. It diminishes the strength, but because modern amplifiers are able to multiply considerably the signals received, this creates no difficulty; it is only a question of increasing the amplifying power to compensate for this loss.

On the other hand, the other voltage, representing the distortion, finds no equivalent current at the input, because it develops inside the amplifier itself. It is here that the ingenuity is displayed; a distortion is introduced into the amplifier which has an opposite sign to that which it produces; the distortions to be imposed on the modulations are neutralized in advance. Thus, for a given regulation of the whole, it is possible to arrange that the negative feedback completely annuls the voltage due to distortion. Here we have a servo-mechanism; the effect is subject to the control; the modulation of the output will faithfully reproduce the input modulation. But we may wonder by what magic procedure the defects
of the amplification are annulled. This is effected by the feedback, whose corrective function in this case is particularly sensitive; the feedback annuls the deviations due to all factors; it overcomes, then, the effect of all the characteristics of the valves and enables the modulation of the output to depend strictly on the modulation of the input alone.

This wonderful mechanism is widely used in telecommunications; and by its means long-distance telephone communications operate, being amplified by successive relays without too much distortion. Enormous degrees of amplification can be achieved; for instance, in conversations between London and San Francisco the amount of sound amplification is 10²⁶⁶, a figure which is all the more awe-inspiring if one remembers that the total number of atoms in the universe can be expressed in 80 figures! One can understand why the telephone specialists, studying feedback amplifiers, were led to the discovery of the laws of stability in automatic controls. But one can also see why the general ideas on retroactive control are so confused and why so many definitions of servo-mechanisms are based on amplification; the specialists and technicians have not been able to dissociate themselves from their electronic techniques in order to distinguish between what is accessory, but nevertheless necessary, to these techniques and what is essential. Thus, the confusion arises from the very defect that the cyberneticians decry in science: specialization and “compartmentalism” and the absence of a broad enough point of view. The word “servo” has evolved differently according to the various techniques to which it has been applied. But one of these, electronics, in constructing the theory of these mechanisms, has proposed as absolute definitions which are, in reality, only relative. Even worse, each technique using automatic controllers has its own vocabulary. Without having too much faith in such a proposition, we would like to express a pious wish that a world congress on the control of the terminology of control should be held.
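The arithmetic behind the feedback amplifier sketched above can be stated compactly. What follows is the standard closed-loop relation; the symbols A for the raw gain and β for the fed-back fraction are introduced here for illustration, the book itself giving only the numerical example of 1, 10 and 100 volts.

```latex
% Closed-loop gain of an amplifier of gain A with a fraction \beta of the
% output fed back in opposite phase (negative feedback):
\[
  G = \frac{A}{1 + \beta A}
\]
% With the figures of the example above, A = 100 and \beta = 0.1:
\[
  G = \frac{100}{1 + 0.1 \times 100} = \frac{100}{11} \approx 9.1
\]
% A distortion d arising inside the amplifier appears at the output reduced
% by the same factor 1 + \beta A (here a factor of 11), while the wanted
% signal can be restored simply by raising A:
\[
  d_{\text{out}} = \frac{d}{1 + \beta A}
\]
```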
GENERAL DEFINITIONS

Let us summarize the foregoing by a series of definitions. The formula already given for feedback remains valid: a
coupling of the effect of a mechanism to one of the factors which, within certain limits, protects the effect from variations of its factors.

An automatic control mechanism is a mechanism making use of feedback to enable the effect to correspond to a given reference. Two categories of automatic controls can be recognized: regulators and servo-mechanisms.

An automatic regulator is an automatically controlled mechanism where the reference is fixed and where a value of the effect tends to remain fixed.

A servo-mechanism is an automatically controlled mechanism where the reference is variable and where a value of the effect tends to reproduce the variations of this reference, which is called “the control”.

(It must be noted that the transition from a regulator to a servo-mechanism includes intermediate cases which are difficult to classify in either of the categories. A refrigerator equipped with a thermostat is a regulator, but if this regulator is capable of being given various settings, can it not be considered as a servo-mechanism where the control is the setting of the thermostat? In the same way the automatic pilot which steers the course on which it has been set is a regulator. However, this course can vary over long distances according to its previous plotting: here we have a servo-mechanism.)

If the effect can be considered as a function having several variables, it may be said that the servo-mechanism transforms this function into one with a single variable, the control, the value of the effect being related to this variable alone. The regulator, which tends to suppress all relations between the effect and the variations of the factors, appears, then, to give an effect of absolute value, which explains the mysterious conception of finality or end-goal in a logical way. Hence this simple definition can be formulated:

An automatic controller is a mechanism where the value of the effect is rigidly defined, whether it be absolute (as in the case of the regulator) or relative to the value of a variable, the control (as in the servo-mechanism).
But all these definitions have an anthropocentric bias. They are only valid for mechanisms constructed by man. In nature, there are neither regulators nor servo-mechanisms, for the retroactive chain is never strictly determined as in a regulator and rarely has just one element which is free to vary as in the servo-mechanism. It is impossible to see anything more than retroaction, out of which arises the order of the universe.

MASTER AND SLAVE MECHANISMS
Let us turn to the practical level and examine the different roles played in mechanics by the two classes of controllers. The object of employing a servo-mechanism is always that of making an effect depend exclusively on a variable, the control. This is surely the end-goal of the great majority of mechanisms. When we use a pulley to get a boat up out of the water, the movement of the boat is dependent on a controlling force. All transmissions dependent on belting, chains or toothed wheels have the simple function of linking the output of the mechanism to the input. Thus the function of a servo-system is in no way transcendental; it is one of the most ordinary methods of coupling. But there is an essential difference between this and the ordinary systems of transmission, in which the effect depends on mechanisms other than those of the control. Thus, for example, in the pulley there is the elasticity of the rope to be taken into account; and, if the action of these variables is successfully overcome, it is only by rigorously determining their value. The servo-systems, on the other hand, can so compensate the possible variations of all their factors that they are sensitive to the control alone. Thus, the servo-system is a coupling mechanism; but because of the subtle action of its principle, its fidelity is absolute. It is the only means enabling pneumatic or hydraulic transmission to function with absolute precision and the only one, when electronic, which affords a means by which very rapid fluctuations of the control can be dealt with. However, perfect as they seem, servo-systems play only a very unimportant part in our ordinary mechanisms: that of simple couplings. On the other hand the regulator assumes a function which had never been achieved before the invention of feedback, a function which until then was only to be found in
naturally occurring systems and was even one of the mysteries of life itself: that is to say, the stabilization of a continuous effect despite contingency. The regulator, then, marks a decisive triumph in mechanics, whilst the servo-system simply represents a great technical advance.

One might say that the servo-system plays the same role in electro-mechanics as the toothed wheel played in traditional mechanics or the belting or cords in ancient mechanics. Whenever it is a question of transmitting, multiplying or reducing a movement, servo-mechanisms will henceforth be used. The servo-system is the gearing of electronics, which has in addition all the marvellous accuracy and astonishing lack of inertia inherent in all electronic mechanisms. When a retroactive message is too weak and needs to be amplified for accurate transmission, a servo-mechanism is brought into use; it is a servo-mechanism that is entrusted with acting on the mechanism that works the gates of hydro-electric turbines, when the message is incapable of doing this work by itself; a servo-mechanism whose input will be the message coming from the tachometer, and whose output the action on the gates. The motor supplying the auxiliary energy is a mechanism whose feedback is the counter-feedback of the turbine.
It is easy to understand why the term servo-mechanism is sometimes applied to the whole, when really it is only an accessory that is not always indispensable to the regulator. It is
easy to understand how the energy belonging to the servo-system might be mistaken as being auxiliary: it is in fact only auxiliary in relation to the regulator. The complex machines of tomorrow will include numerous servo-systems which will have the job of carrying out any subtle variations of the control. But these servo-systems will never be anything else than auxiliaries of other mechanisms, particularly regulators. Thus in the automatic pilot, it is a regulator which controls the aeroplane in order that it may keep to the right course; but the “reactor” of this regulator, entrusted with the actual execution of these delicate instructions, is a servo-mechanism.

We shall see how the machine will be able to go even further than the regulation of its own effect, although this is already a transcendental function. This regulation by feedback can only take into account events to which the machine has already reacted; the anticipatory regulators will take into account events that have already happened, but to which the machine has not yet reacted, and even events that have not as yet taken place, but the probability of which can be calculated. The machine, then, will be corrected by the information coming directly from its organs of perception, in the form of computing machines which, if not reasoning ones, will be capable of evaluating the probability of an event faster than man, using data collated by senses that are far more accurate than those of living beings. Always, the directions of the regulatory or anticipatory mechanisms will be transmitted for operation to the faithful and powerful arm of the mechanical executives, the passive and servile servo-mechanisms. In this sense, the word justifies its etymology. The servo-mechanism is the executor, the transmitter. Henceforth, orders will be transmitted, not as in past centuries by pulleys and gears, but by servo-mechanisms, which alone are capable of decoding and passing on the minute and fluctuating messages from the detectors and electronic anticipatory devices. But under this label, care must be taken not to include the mechanisms which give the orders and which are properly the master mechanisms.
CHAPTER V
The Logic of Effects

Up to the present we have only looked at machines. Now, we must consider natural effects as well as artificial ones and bring ourselves from the machine to the “effector”. Whereas other cybernetic terms may come from electronics, “effector” is borrowed from neurology. It is expressive: the effector, that which produces an effect. It was in current usage in a discipline where it had the virtue of conveying generality, of grouping together physiological organs such as muscles, glands or living cells which execute the orders given by the nerves. In this way, the nervous impulse could be considered apart from its result. Cybernetics, which is a synthesis of many sciences, uses this word in an even more general sense. It can be abstracted even further by extending it to the philosophical domain, where it is entirely new and gives a clear distinctive idea, devoid of all compromise with non-scientific metaphysical discussion.

It is surprising, but obvious, that the logic of mechanisms, the inner source of their functioning or the methods that they employ for automatic regulation, has never been studied hitherto. The engineers who use and construct machines show little inclination towards abstract philosophy. Neither are philosophers inclined to speculate on the cybernetic technicalities of toothed wheels and cam shafts. In so far as any theoretical study has ever been carried out in this field, it was pursued at a time when the machine had not evolved sufficiently to enable it to be considered from an empirical point of view. Man, who has devoted so much study to the analysis of reason, has devoted little attention to its method of function. This is one of the no-man's-lands that Wiener considers should be charted by cybernetics; that which lies between a practical technique and abstract thought might be considered as the
abstraction of the technique. At first the path seems only to lead to waste land and public places; but we are led finally into such broad avenues that we would need another book, entitled, say, The Theory of Effects, to deal with them. Here we will limit ourselves to a brief outline.
THE STUDY OF WELL-TRODDEN PATHS

First of all, let us accept the following propositions without further argument; there is no need for the reader to follow the
author’s development of them:

An effect is something which depends on other things called factors.

An effector is a natural or artificial system which produces a certain effect.

It follows that: a machine is an artificial effector.

But as soon as the definition of the effector is applied to physiology, it seems incomplete. We ask ourselves how a naturally occurring system may be organized so as to produce a given effect. It is the fundamental question of biology that we are posing here and we will attempt to expand it. From the outset, one principle should be considered: an effect cannot have only a single factor. How can a new situation arise from an act unless this act has encountered another act? Moreover, it is always necessary to have one factor expressing time and another the situation. Surely gravity, which is always present, must also be a factor? Other factors are surely always implicit, like this one? One can see that from the very beginning it is necessary to reject certain ideas concerning causality.

An effector must be regarded as a self-contained entity where factors go in and effects come out. What happens within an effector should not enter into consideration. A circumference symbolizes the causal law which of necessity joins the factors to the effect without contingency being able to act upon their linkage. In other words, in the language of the statisticians, the coefficient of correlation between each factor and the effect is supposedly equal, having an absolute value of unity.

It is convenient to define pre-factors as factors of factors, the latter then being considered as effects. In the same way, it is
sometimes necessary to distinguish secondary effects, the effects of the original effect. To give actual examples, an effect may sometimes be broken down into component effects, such as the speed and the load of a machine or the intensity and the voltage of a current. In this case, the effect is the resultant of the components. This can be graphically represented thus:
One of the effects has an important property, which is even essential; that of the “useful effect”. Thus the two factors “electric current” and “resistance” give as a primary effect the passage of a current, with the secondary effects of heat and light; according to whether the effector is a lamp or a radiator, the useful effect will be one or other of the secondary effects. One may say: The useful effect of an effector is that which attains the artificial goal of the effector. But, the useful effect of an effector being determined, the goal is not necessarily defined; numerous effectors have not only to produce a certain given effect, but also a certain value for this effect: The value of the useful effect which best attains the goal of an effector is called the objective. The useful effect is a quality, the goal is a quantity of this quality. This analysis of the qualitative and quantitative aspect is fundamental, although not usual.
The goal cannot always be attained; the effect falls short of this theoretical value, and we have “deviation”. This concept of deviation is often referred to, especially in regard to servo-mechanisms, as “error”. We must reject this word; a machine does not make errors, since its effect always follows from its predetermined course of action. If error there be, it arises from the human element, which has miscalculated or been unable to calculate the course of action. Deviation is the word to use here, not error.

In practice, in studying the logic of an effector, one neglects the factors which are unnecessary for the regulation to be considered. One effector can often be broken down into several effectors, to be seen as a chain. A pump raises water; but each component can be regarded as an effector having a limited effect; the internal mechanism can be dissected and each component can be considered as a separate entity; each of its components may be studied and even each of their molecules, if we wish to push it to the extreme. An effector can be seen from various points of view. For instance, a reservoir could be described as: either, an effector giving a certain level of water, the factors being the rate of inflow and the rate of outflow; or, an effector giving a certain rate of outflow, the factors being the level and the rate of inflow.

The concept of the goal may be found difficult to accept when introduced in this way right at the very beginning of the study of effects. But for the moment we are only interested in effectors that have a goal, whether it be the humanly determined artificial goal of machines, or the natural finality of physiological effectors. At the risk of what might seem, provisionally, a transgression of logic, we can return later to the notion of goal or finality as a logical concept.
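The vocabulary just introduced (effector, factors, useful effect, goal, deviation) can be restated as a small data structure. The sketch below is an illustration in Python, not anything proposed in the book; the names Effector, goal and deviation simply mirror the definitions above, and the reservoir figures are invented.

```python
from typing import Callable, Dict

class Effector:
    """A system whose useful effect depends on named factors (black-box view)."""
    def __init__(self, law: Callable[[Dict[str, float]], float], goal: float):
        self.law = law      # causal law linking the factors to the useful effect
        self.goal = goal    # value of the useful effect that best attains the aim

    def effect(self, factors: Dict[str, float]) -> float:
        """The useful effect produced for given values of the factors."""
        return self.law(factors)

    def deviation(self, factors: Dict[str, float]) -> float:
        """Deviation: how far the effect falls short of (or overshoots) the goal."""
        return self.effect(factors) - self.goal

# The reservoir seen as an effector, the factors being the rate of inflow and
# the rate of outflow; for simplicity the useful effect is taken here as the
# net rate at which the level changes, a goal of zero meaning a constant level.
reservoir = Effector(law=lambda f: f["inflow"] - f["outflow"], goal=0.0)
print(reservoir.deviation({"inflow": 3.0, "outflow": 2.5}))   # 0.5: the level is rising
```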
CONSTANCY EFFECTORS AND TENDENCY EFFECTORS

Here we have a fundamental division of effectors without which it would be impossible to analyse the logic of machines; the goal and the deviations differ intrinsically, according to whether the effector is able or unable to attain the goal. In the first case, the goal is a level first of all to be attained
and then maintained. The effector tends to stabilize its useful effect at this level; deviations can then occur in two directions. We will call this a constancy effector. Examples are all machines producing objects, having as many goals; a weapon whose goal is to annul deviations between the projectile's point of impact and the target; the respiratory mechanism regulating the CO2 content of arterial blood.

In the second case, the goal is the maximum effect (in absolute value). The effector tends towards this objective without being able to reach it; there is always a deviation between the real effect and the goal, and this deviation is always in the same direction. The goal can even be considered as being infinite. We call this a tendency effector. Examples are the machines that produce power, light or heat; all organisms or machines engaged in vital or sporting competition. A lamp is not required to produce a certain quantity of light, but as much light as possible; if our lamp gives too much light, we do not regulate it so as to give less, but we use a less powerful one which also tends towards the maximum light.

An effector is said to be a constancy effector if it is able to attain its goal. It is called a tendency effector if it is unable to attain its goal.

This distinction in attainment or non-attainment of goal in the constancy and the tendency effectors, the distinction of deviations in two directions or only one, is of the greatest theoretical importance; all regulatory mechanisms are reversible, as we can see in passing from one case to another. But such a discrimination is linked to the condition of the techniques used; for each effector, it has an importance that is relative to any given time. If, tomorrow, it were possible to reach 500 miles an hour on the road, the motor-car would no longer be, as it often is, a tendency effector. The same holds good for electric light bulbs if one day they were to give too bright a light. However, for reasons of economy, it will always be a better proposition to use cars and lamp bulbs that are less and less powerful and to make them work as tendency effectors.

Graphically, the arrow symbolizing the useful effect is a thick line. The goal is shown by a little bar at right angles to the direction of the arrow, on the arrow itself if it is a goal that is attainable and beyond it if it is a goal which is unattainable.
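The two families can be contrasted in a few lines of code. The sketch below is only an illustration of the definitions just given; the thermostat and lamp figures are invented for the example.

```python
def constancy_deviation(effect: float, goal: float) -> float:
    """Constancy effector: the goal is attainable, so deviations may fall on
    either side of it and regulation must bring them back towards zero."""
    return effect - goal          # may be positive or negative

def tendency_deviation(effect: float) -> float:
    """Tendency effector: the goal is the (unattainable) maximum, so the
    deviation is always in the same direction and can only be reduced."""
    return float("inf") - effect  # always positive, never annulled

# A thermostat-controlled furnace aiming at 20 degrees is a constancy effector:
print(constancy_deviation(effect=21.5, goal=20.0))   # +1.5: correct downwards
print(constancy_deviation(effect=18.0, goal=20.0))   # -2.0: correct upwards

# A lamp asked for "as much light as possible" is a tendency effector:
print(tendency_deviation(effect=800.0))              # inf: the goal is never reached
```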
An effector can change its useful effect; for the same useful effect, it can change the goal. It must then be considered as a different effector; it is the end result which gives an effector its individual character. Thus, our legs can jump, walk, or kick a ball; for the same effective power of movement they can have different objectives, that is to say different speeds. Thus, a car, which is a complex of effectors, is, in its entirety, an effector whose goal (and hence the useful effect) is modified almost continually. The ideal speed (goal) to be attained and maintained depends on a number of factors; on a long straight line the speed is always beyond the possibilities of the car, which, thus, is always in a state of deviation, always tending towards some value. When a cyclist comes out of a side road, the car is no longer a machine for speeding, but becomes a machine for slowing down.
DIRECTION OF THE EFFECTS AND THE FACTORS

To analyse an effect logically, a certain direction must be given to the effector as a whole with reference to the direction of the effects and the factors. By convention, this is the direction of the useful effect.
[Diagram of Automatic Film-printing apparatus. (The little bar on the arrow of the effect symbolizes constancy.)]

In the case of constancy effectors the useful effect may not have any one direction more than another imposed on it. While for a lamp, a radiator, or a muscle, the direction of the effect is evidently that which increases the light, heat or force,
the direction of effect of an automatic device for printing cinematograph film can be towards either a lighter or a darker positive. In such a case, the direction is fixed arbitrarily; the convention should be explicit in the diagram above.

An effect is positive if, when the useful effect varies, it varies in the same direction. In the opposite case, it is negative. In the same way, a factor is positive or negative according to whether the function which links it to the useful effect is increasing or decreasing. Let us keep in mind the following more precise formula: A factor or an effect has the same positive or negative sign as the derivative of the function which links it to the useful effect.¹

When the signs of factors or effects are analysed, the arrows symbolizing them are formed of positive or negative signs. The useful effect (a thick line) being positive by convention, it is unnecessary to indicate its sign. The sign of a factor is not due to its nature, but to the role it plays in the effector; the resistance in ohms is a negative factor in an electric motor and a positive factor in a radiator. And all the factors of an accelerating car are reversed when the driver brakes.

¹ One could well give a factor the same sign as the statistical “correlation coefficient” between it and the effect.
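The sign rule (the sign of a factor is the sign of the derivative of the function linking it to the useful effect) can be checked numerically. The sketch below is an illustration only; the resistor examples follow the radiator and motor remark above, with invented figures, and current is used as a crude stand-in for the useful effect of the motor.

```python
def factor_sign(law, factors, name, h=1e-6):
    """Sign of a factor: the sign of the partial derivative of the useful
    effect with respect to that factor, estimated by a finite difference."""
    up, down = dict(factors), dict(factors)
    up[name] += h
    down[name] -= h
    slope = (law(up) - law(down)) / (2 * h)
    return "+" if slope > 0 else "-"

# Heat dissipated by a radiator: P = R * I**2, so resistance is a positive factor.
radiator = lambda f: f["R"] * f["I"] ** 2
print(factor_sign(radiator, {"R": 10.0, "I": 2.0}, "R"))   # +

# Current drawn by a motor at fixed voltage: I = U / R, so resistance is negative.
motor = lambda f: f["U"] / f["R"]
print(factor_sign(motor, {"U": 12.0, "R": 6.0}, "R"))      # -
```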
THE REGULATION AND THE GUIDING OF A FACTOR

To regulate an effector is to act on the values of the factors in such a way that the useful effect attains a certain goal (constancy effector) or that it approaches it to the maximum of its capacity (tendency effector). Regulation, then, should be carried out in accordance with the imperative of an imposed goal; e.g. the regulation of a classical machine-tool. To guide an effector is to give it a goal which may be variable and to regulate the values of the factors accordingly, so that the goal may be attained or at least approximated; e.g. driving a car whose goal and even useful effect may change at any moment. Whatever the importance of this distinction, regulation and steering mean, in practice, an action on the factors which
enables the effector to produce a certain effect. It is in this general sense that we use the term regulation. When we operate on a factor so as to regulate an effector, it should be taken for granted that the other factors remain fixed. For that which follows, the formula “all other things being equal” should constantly be borne in mind. The history of mechanical technique is of a perpetual struggle to attain constancy in those factors which are not left variable on purpose.

We may wonder which factor to vary. First let us give an illustration; the factor on which one is to act should be capable of being regulated. Thus, in an automatic film-printing apparatus, action to vary the sensitivity of the emulsion would at all times be inconceivable. But the following is a much more important conception: one can only obtain from the regulating factor a very limited action on the effect. Let us take the case of a car. By varying the advance of the spark from its minimum to its maximum, we get, all things being equal, a speed range between 25 and 35 miles per hour. If the amount of fuel mixture varies, the speed varies between two different values. If we modify the lubrication, for zero lubrication we shall have zero effect. But beyond a certain level of lubrication the speed will cease to increase. If the quantity of fuel admitted to the cylinders changes, the speed may vary very considerably, but the most efficient factor will certainly always be the stroke. Efficient is the best word; the factor determines the effect to a greater or lesser degree. This capacity of the factor to make the effect vary either more or less, we will call the coefficient of efficiency.¹

But every factor nevertheless has a characteristic of its own which is its interval of variation; in other words, the limits between which it acts. Thus, below a certain level of lubrication the car would no longer work; neither would it work better if the ignition were too far advanced. Let us take a machine and make one of its factors vary independently. First the factor A. Let us note the value a1 where the effect appears, and the value a2 where it ceases. Let us note also the maximum and minimum values attained by the effect.

¹ This coefficient is quite different from the “coefficient of influence” or “path coefficient” of the American statistician Sewall Wright.

The
interval a1–a2 will be the interval of variation of the factor A. The interval between the minimum and the maximum values of the effect will represent the coefficient of efficiency. Hence these definitions:

For a given effector and for given values of all the other factors, each factor is characterized by:
(1) an interval of variation representing the limits between which its value must remain for the effect to be produced;
(2) a coefficient of efficiency which expresses the limits between which the variations of a factor can produce a variation in the effect.

One can distinguish: An absolute coefficient of efficiency, which will express the measure of the variation of the effect in the same units as that of the effect itself. In a car regulated in this way, the absolute coefficient of efficiency of advancing the ignition will be, say, 35 to 55 miles an hour. A coefficient of relative efficiency, which expresses the variation of the effect due to the variation of one factor as a percentage of the total of the coefficients of efficiency of the effector. (One can thus know the relative efficiency of the different factors and know on which of them it is best to act in order to vary the effect.)

The calculation of the coefficients is the business of the engineer. But the technician will exclaim: “What is the good of all these theoretical ideas? Our machines have always worked without worrying about their coefficient of efficiency.” It would be all too easy to reply that theory is never vain; but, above all, it is necessary to consider the automatic control which is capable of correcting deviations whatever magnitude its coefficient of efficiency may have. It determines what we call the “compensatory power” of regulation: The compensatory power of regulation is expressed by the amplitude of the deviations of effect which the regulation can control. It is clear that regulation can only maintain the goal of an effector within certain limits of deviation.
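These two characteristics of a factor can be measured by the sweep just described: vary one factor alone, note where the effect appears and disappears, and note the extreme values the effect takes. The sketch below is an illustration with an invented speed-versus-ignition-advance curve; only the procedure, not the figures, comes from the text.

```python
def sweep_factor(law, lo, hi, steps=1000):
    """Vary one factor from lo to hi, all other things being equal, and return
    (interval of variation, absolute coefficient of efficiency)."""
    produced = []                       # (factor value, effect) wherever an effect exists
    for i in range(steps + 1):
        a = lo + (hi - lo) * i / steps
        e = law(a)
        if e > 0:                       # the effect is produced at all
            produced.append((a, e))
    a1, a2 = produced[0][0], produced[-1][0]       # interval of variation
    effects = [e for _, e in produced]
    efficiency = max(effects) - min(effects)       # absolute coefficient of efficiency
    return (a1, a2), efficiency

# Invented law: road speed (mph) as a function of ignition advance (degrees).
speed = lambda adv: max(0.0, 35.0 - 0.04 * (adv - 25.0) ** 2)

interval, efficiency = sweep_factor(speed, 0.0, 60.0)
print(interval)     # roughly (0.0, 54.6): outside this the engine gives no effect
print(efficiency)   # roughly 35 mph: the most the effect can be made to vary
```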
Naturally, one immediately begins to wonder what it is that limits regulation in this way and what are its exact bounds. Is it the effect itself that inhibits the power of a correction? This cannot be the case, because different sorts of feedbacks regulating the same effect do not have a similar influence. Might it be that if the message of the regulator is too weak it engenders weakness? This it certainly is not, because modern technique can amplify to whatever strength is desired from the minutest variations of the weakest flow of energy. The limitations of the
regulator depend on the factor through which it acts. If we take the case of a retroactive rolling mill, the least variation in the thickness of the sheet metal can, by the action of its electronic amplifiers and the servo-motors, give rise to an immense flow of power. We may ask if a gigantic effort applied to the factor “traction on the sheet at the moment of exit from the rollers” could reduce a sheet to a millimetre if it was one centimetre thick when it was coming off the rolls? No; obviously not. By increasing the traction one might succeed in tearing the metal, but one would fail to obtain a sheet with more than a certain degree of variation, because the factor selected for the regulator can only act on the thickness within certain very strict limits. By varying it, one might obtain variations of the order of, say, 5 to 10 per cent., but not of 100 to 200 per cent. But if, instead of acting on the factor “traction”, one were to act on the factor “distance between the rolls”, the action on the thickness could become very considerable. It is evident that the factor on which a regulator acts, and which is designed to act automatically on the effect, cannot correct deviations of the effect greater than those which it is itself capable of producing automatically. It follows that: The compensatory power of regulation is expressed by the coefficient of efficiency of the factor to which it is applied. It is therefore always better to regulate an effector through the factor which has the greatest coefficient of efficiency.

But it is not everything merely to produce a correction; it is also necessary that the correction should operate quickly. If, in a paper mill, we see that the paper coming out is too thin, and therefore we act on the quality of the pulp that goes into the rollers, the defect will still continue to persist during the whole
of the time that it takes to transform the pulp into paper. If, in order to regulate the inflow of a hydraulic turbine, we were to shut a gate several miles up-stream, some time would elapse before the new flow would correct the situation. This is what we call a delay in efficiency. The regulation must, then, be applied on a factor which acts as near as possible to the output of the system. (In the remarkable solution of the rolling mill, the arrangement is even better than one which acts “as near as possible to the output”; it acts at the output itself, not on a factor of the production process, but on a factor for correction of the finished product.)

In summing up, let us suggest the following useful rule. The factor on which the regulation is applied must conform to three conditions:
(1) It must be as easily susceptible to regulation as possible.
(2) It must have as large a coefficient of efficiency as possible.
(3) It must act as close as possible to the output of the system.

The fulfilment of these conditions often presents a difficult problem to the engineer.
REGULATION BY INTERACTION

We may wonder whence comes the energy for a system of regulation. Two possibilities are conceivable: that the energy comes from another factor or from the effect itself. The first case needs no very close examination: it leads straight into the second, because it raises the question: how does the effector-regulator, even if it be a human one, regulate itself in order to affect another factor? We always come back to the automatic regulator. Where does the energy of the regulation come from?
(1) From a factor. We may then say: regulation through the interaction of a factor on a factor or, simply, by interaction.
(2) From an effect. We may say: regulation by retroaction of an effect on a factor or, simply, by feedback.
Let us first look at the interaction of the factors. It comprises three elements:
(1) an active factor, or factor of derivation: the regulating factor;
(2) a factor of application: the regulated factor;
(3) an action of the regulating factor on the factor that is regulated, in other words, interaction.

Let us accept without proof that the interaction has a sign
given by the algebraic product of the signs representing:
(1) the direction of the factor of derivation;
(2) the direction of the factor of application;
(3) the direction of the action on the factor of application, this being positive when the absolute values of the two factors vary in the same direction, negative in the opposite case.

For example, a positive factor acting on a negative factor in the positive direction gives a negative interaction. (One might say: + on − by + = −.) The interaction is represented by an arrow joining the regulating factor to the factor which is regulated. The + and the − qualifying the different arrows give the direction of the factors and of the interaction itself; the sign of the action relative to the two factors is given by a plus or by a minus sign near the end of the arrow symbolizing the interaction. Thus:
AUTOMATIC FILM-PRINTING DEVICE (CONSTANCY EFFECTOR)

The useful effect of the effector is to make positive prints of the film; the goal is to obtain positives of an optimum and regular density. There are three factors: a negative film of very irregular value according to the pictures, a positive blank
film free from specific reaction for any one emulsion and a variable lighting source, easily regulated. The irregular negative interacts on the variable light in order to give a regular positive; before the negative comes into contact with the blank positive, a beam of light shines through it on to a photo-electric cell which translates the opacity of the silvering into an electric potential which, through amplifiers, is able to govern the light intensity by rheostatic control.
The density of the negative image (negative factor) acts then on the light (positive factor); this action has a positive sign since the light increases when the density of the negative increases. We have then: − on + by + = −. The regulation is thus obtained by a negative interaction. Let us note that it is a question of an effector that has achieved its goal, a constancy effector.
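The sign rule applied to the film-printing device reduces to a one-line product of signs. The small helper below is only an illustration of that arithmetic; the examples reproduce cases worked out in this chapter.

```python
def interaction_sign(derivation: int, application: int, action: int) -> int:
    """Sign of an interaction (or retroaction): the algebraic product of the
    sign of the factor of derivation, of the factor of application, and of
    the action of the one on the other (+1 or -1 each)."""
    return derivation * application * action

# Film-printing device:        - on + by + = -   (negative interaction)
print(interaction_sign(-1, +1, +1))   # -1
# Windmill with constant effect: + on - by + = -
print(interaction_sign(+1, -1, +1))   # -1
# Windmill with tail-vane:       + on - by - = +  (positive interaction)
print(interaction_sign(+1, -1, -1))   # +1
```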
WINDMILL FOR PUMPING WATER WITH CONSTANT EFFECT (CONSTANCY EFFECTOR)

A windmill for pumping water may be required to produce a constant electric current. The variable factor, whose variations it is necessary to compensate, is made to act on a factor which it is possible to regulate. The variable factor here is the wind, whose force is concentrated by a funnel; the factor that can be regulated is a resistance controlled by a rheostat whose position is itself controlled by a plate on which the air-currents impinge. The stronger the wind, the greater the resistance and the less current is sent out by the dynamo for each revolution of the windmill, but, as its speed increases, the output of energy remains constant.
A positive factor (the force of the wind) acts positively on a negative factor. We have thus: + on − by + = −. As in the preceding example, although the factors have opposite signs, the interaction is negative. We would emphasize that there again we are dealing with a constancy effector.

Another solution to the same problem is given by the windmill whose speed is diminished by an increase in force of the wind. For a gentle wind the inclination of the vanes is at its optimum value; for a strong wind this inclination must be altered to reduce the speed of revolution. Here the two factors are positive (force of the wind and favourable setting of the vanes), but their relative action is negative, which gives: + on + by − = −.

Thus, in three cases of a constancy effector, there is negative interaction although the signs of the factors combine differently. It would seem as if we have here an indication of a general law. Let us see what happens in the case of a tendency effector; as an illustration we will choose an example that is very like the preceding ones.
WINDMILL WITH A MAXIMUM EFFECT (TENDENCY EFFECTOR)

A windmill takes its greatest advantage of the wind at a setting produced by the wind itself acting on a tail-vane. This is expressed in the following diagram where a bar beyond
[Diagram: windmill with tail-vane acting on the power delivered.]
the arrow of the effector shows that the effector is a tendency one. The force of the wind (positive) acts on the factor of “unfavourable direction” (negative), causing it to decrease when it itself increases. Thus we have: + on − by − = +. Contrary to the foregoing cases, it is a positive regulation. The importance of this example following after the others can be appreciated. If the effector “windmill” is to maintain its effect towards the value of the attained goal instead of tending towards a goal that is not attained, ought not its regulation to be in a contrary direction, that is to say negative? But since the goal always remains beyond the reach of the effect obtained, the negative sign of compensation does not enter into the question; instead it is the positive sign of the maximum effect.

We naturally ask if regulation by interaction ought not to have a different sign according to whether the goal is or is not attained. If we come back to the example of the automatic printing apparatus, we can show that this intuition is not unfounded. The goal of this effector is to obtain, despite the variable density of the negative, a constant depth of exposure in the positive; the elimination of the differences of density during the process of printing. If we could imagine the absurd proposition that, on the contrary, we might wish to accentuate the differences in exposure of the negative, then the under-exposure of the negative would be more accentuated in the printing than it was originally. The negative that was too dark would give a positive that was lighter than the most unskilful of human operators could ever produce. The effector would then be a tendency effector, since its aim would be to obtain the greatest possible difference of silvering. To obtain such a result it would suffice to invert the negative action of the light, that is to say to diminish it where it increased, to increase it where it diminished. We should then have a positive that would be the blacker the more that the negative was under-exposed, and vice versa. The irregularity of the positive would be still more accentuated towards its maximum and the effector would tend towards its goal. Its regulation (if one could call it that!) would be − on + by − = +; that is to say, it would be a positive regulation.
In the same way if, in the windmill producing a constant supply, we were to invert the direction of the action of the wind on the rheostat, we would have an interaction + on − by − = +. It would result in absolute chaos: when the wind blew with gale force, the rheostat would provide low resistance and the dynamo would burn out; when there was only a very gentle breeze, the resistance would be a maximum, and the current practically zero.

For the cases which we have expressly quoted as being absurd, a tendency effector would be regulated by a positive interaction. From now on we can postulate a law of interaction:

A negative interaction regulates a constancy effector and disorganizes a tendency effector. A positive interaction regulates a tendency effector and disorganizes a constancy effector.
REGULATION BY RETROACTION

Regulation by retroaction is the regulation of a factor by energy arising from an effect; it is the feedback which guarantees an effect against the variation of its factors. Three elements participate:
(1) an active effect or an effect by derivation: the regulator;
(2) a factor of application, which is both regulated and at the same time regulating;
(3) an action of this effect on this factor: the retroaction.

In principle, retroaction consists of a function relating an effect to its cause. In practice, it consists of a chain of effectors, one of which plays the role of “detector” and the other of “reactor”. Let us try and distinguish, as we did for interaction, the laws that a retroacting regulator should satisfy. Can the feedback be applied to any factor whatever? In principle, it can. In practice, certain general rules must be complied with: the factor should be capable of being regulated, it should have a high coefficient of efficiency and it should act as near the output of the system as possible. Often these qualities are irreconcilable. Here are two examples drawn from the motor-car.
Our most modern cars have only two sorts of feedback, both of which are derived from the speed of the engine, acting either on the ignition control or on the gearbox. The ignition control has rather a low coefficient of efficiency and the feedback has rather a feeble compensatory power. But the gear-ratio factor, whose influence is certainly very considerable, is on the other hand not very easy to regulate; or at least its regulation, the substitution of one pinion for another, requires an energy that is fairly considerable; the “detector” has practically no mechanism capable of appreciating the variations of speed except by centrifugal effect; the whole difficulty lies in making this effect, which is always minimal, react on an awkward regulation factor. This is what accounts for the difficulties in the problem of automatic gear-changing and this is why the internal combustion engine has difficulty in adapting itself to the changes in speed of its effects. Its regulable factors are not efficient enough and its efficient factors are difficult to regulate. But the situation would become quite different with the introduction of injection motors, as the quantity of explosive mixture is both an efficient and a regulable factor.

As in the case of interaction, retroaction can be positive or negative. Its sign is given by the algebraic product of the signs representing:
(1) the direction of the effect of derivation;
(2) the direction of the factor of application;
(3) the direction of the action of the effect on the factor of application, this being positive when the absolute values of the effect and the factor vary in the same direction and negative in the opposite case.

It is important to note that the direction of a retroaction remains the same whatever the direction chosen for the useful effect; in fact the direction of the factors changes if the reference sign changes. Some examples will serve to show that retroaction, just like interaction, regulates or upsets the effectors according to whether it is negative or positive, and according to whether the effector is a constancy or a tendency effector. Here is the case of a constant-level reservoir, the two diagrams of which are based on an effect which has been arbitrarily
chosen as being in a different direction for each (“lower level” and “higher level”); yet each diagram shows that the retroaction is in a similar direction:
+ on − by + = −
+ on + by − = −
Another example of a constancy effector is the furnace with an automatic draught control which is retroactively regulated.
We now have a case (still with a constancy effector that has a negative regulation) where the “ reactor” acts negatively on the application factor, which should therefore be considered as being a positive pre-factor; in other words, the anti-fading device which can be expressed in two ways:
Let us pass on to the case of a tendency effector. These examples are much less common. Let us take the case of the automatic regulation of ignition control on many of our cars. The engine sets its own advance of spark, its speed of rotation acting by counter-weights whose centrifugal force alters the position of the distributor arm relative to the contact-breaker. The faster the engine turns, the earlier spark it will have and the faster it will continue to turn. It is positive retroaction, the action tending towards its maximum. The + regulates the tendency, giving it new spurts of energy, whereas the − regulates the compensation, stabilizing it.

But according to the foregoing it would seem that the ideal condition would be always to advance the spark as the speed increased. But, for a given speed, the admission of mixture, that is to say the position of the accelerator, may be very different. So that a slight adjustment to the spark control can usefully be made according to the variations of this factor: suction of the cylinders, or partial vacuum in the inlet manifold. In some carburettors, that are not in very general use as yet, the ignition control is submitted to another regulation that is independent of the first one, capable of opposing it and compensating it; the regulation depends on the partial vacuum in the inlet manifold, acting on the membrane of a manometer. This ingenious mechanism is difficult to analyse, in fact almost impossible without the aid of a diagram making the idea more accessible. A positive interaction acts negatively on
[Diagram: dual regulation compensator for the ignition advance, combining a retroaction from the speed of revolution of the motor (+ on + by + = +) and an interaction from the inlet-manifold vacuum.]
a factor which is itself regulated by the speed through a retroaction which is not only positive but also has a positive action. This example shows admirably the advantage that technicians would have in logically analysing the effectors that they employ and construct. An analysis which gives results for certain techniques would be worth perfecting for other techniques. Before searching for a practical solution, it would often be worth while solving problems by the method of the logic of effects.
MORE COMPLEX CASES

Let us examine an interesting case showing a feedback which compensates a natural retroaction leading to disorganization: that of carbon arc lamps in which the distance between the carbons is regulated automatically. The functioning of the arc wears the carbons so that the current will diminish to a point where, finally, no more will pass; the effect of wearing (a secondary positive effect) retroacts on the distance (negative factor) between the carbons by increasing it (positive action). So that we have + on − by + = −. The retroaction is negative; it upsets a tendency effector. But on the same factor a feedback is applied, which, being derived from the component effect of “intensity”, retroacts on it by means of a system of solenoids diminishing the distance as the current increases. This feedback is + on − by − = +, thus regulating the tendency effector.
[Diagram: arc-lamp regulation, the two retroactions acting on the distance between the carbons.]
It is satisfactory to see clearly in this mechanism how a device may act in the opposite direction to a natural disorganization.

Let us go on to the analysis of the Watt governor. This typical feedback proves to be quite a special case. A careful analysis shows that the steam-engine can equally well be considered as a constancy effector or a tendency effector.

Constancy Effector. One can look upon the steam-engine as a machine which gives a constant speed of rotation whatever the load (which is, of course, one of the factors). The speed (a positive effect by definition) retroacts through the medium of the centrifugal force of the fly-weights on the factor “steam inlet” (positive) in such a way that the factor is diminished when the speed increases and increased when the speed diminishes (negative action). Hence: + on + by − = −; constancy regulation.
Tendency Effector. But the same machine can be seen as a tendency effector, to lift the maximum load. The global effect of the energy may be broken down into two component effects,
“load” and “speed”. The load, the useful effect, is positive; but the speed, which diminishes when the load is increased, is negative. The feedback is derived from this “speed” component in order to have a negative action on the admission of the steam. We have then: − on + by − = +; tendency regulation.
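The constancy reading of the Watt governor can be imitated in a few lines: the speed retroacts negatively on the steam inlet, and the speed settles near the reference whatever the load. The model below is a deliberately crude illustration with invented constants; nothing in it is taken from the book beyond the sign of the feedback.

```python
def run_governor(load: float, reference: float = 100.0, steps: int = 200) -> float:
    """Crude Watt-governor loop: the deviation of the speed from the reference
    retroacts (negatively) on the steam inlet; the speed then follows the inlet
    minus the braking effect of the load."""
    inlet, speed = 50.0, 0.0
    for _ in range(steps):
        inlet -= 0.1 * (speed - reference)     # negative retroaction of the effect on a factor
        inlet = max(0.0, inlet)                # a real valve cannot open by a negative amount
        speed += 0.2 * (inlet - load - speed)  # invented dynamics of the engine
    return speed

# Whatever the load (within the compensatory power of the regulation),
# the speed is brought back close to the same reference value.
print(round(run_governor(load=10.0), 1))   # about 100
print(round(run_governor(load=40.0), 1))   # about 100
```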
THE UNIVERSAL LAW OF REGULATION

These examples would enable the postulation of a law of retroaction which is similar to that of interaction. But we must unite these two laws in a much more general formula: Every constancy effector is regulated when a factor or an effect acts on a factor in such a way that the product of the signs of the factor or the effect of derivation, of its action and of the factor of application is negative. It is disorganized when the product is positive. Or, if we take the various definitions for granted: Every constancy effector is regulated if it is submitted to a negative interaction or retroaction. It is disorganized if the interaction or the retroaction is positive.

Let us accept this law as being experimentally true, holding its demonstration in reserve. But let us understand from now on the importance of this dual direction of the regulation, whether it be a question of increasing an effect to its maximum (or to its minimum), or whether it be to maintain it at a certain value. Cyberneticians have seen feedback only in the light of constancy regulation. When they say that a negative feedback annuls the deviation of effectors, it is perfectly true for constancy effectors, but untrue for tendency effectors, which have not been distinguished from the former. Regulation is a perpetual battle against deviations; but, in the case of the tendency effectors, it is positive feedback which (without annulling them, since, by definition, these effectors never attain their goal) tends to diminish them.

One might suggest that this law is self-evident. But we should remember that we have applied it to the actions of mechanisms that are already well known, since they have been determined by us. But if, on the contrary, it were to be applied to data that
have been very little studied and are incompletely understood, or inaccessible to study, then the law would show itself to be an admirable means of penetrating such phenomena. If we could think of biological regulations along these lines, or of the regulation of the nervous system, which probably depends on it, we should find their comprehension greatly facilitated by the use of these logical methods.
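The sign rule can also be written out mechanically. The short Python sketch below is ours, not the author's; it encodes the operational form of the law used in the worked examples above (a negative retroaction regulates a constancy effector, a positive one a tendency effector), with the three signs passed in as +1 or −1.

```python
# A sketch (ours) of the sign rule used throughout the chapter: the sign of a
# retroaction is the product of the signs of the factor (or effect) of
# derivation, of the action, and of the factor of application.  A negative
# retroaction regulates a constancy effector; a positive one regulates a
# tendency effector (and each disorganizes the other kind).

def retroaction_sign(derivation: int, action: int, application: int) -> int:
    """Each argument is +1 or -1, mirroring the book's '+ on - by +' notation."""
    return derivation * action * application

def regulated(derivation: int, action: int, application: int, kind: str) -> bool:
    s = retroaction_sign(derivation, action, application)
    return s < 0 if kind == "constancy" else s > 0

# Worked examples from the text:
# Watt governor as a constancy effector: + on + by - = -  -> regulated
print(regulated(+1, -1, +1, "constancy"))   # True
# Arc-lamp feedback on a tendency effector: + on - by - = +  -> regulated
print(regulated(+1, -1, -1, "tendency"))    # True
```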
INTERACTION AND RETROACTION
The essential character of retroaction is to bring the effect back to a given value, whatever may have been the origins of its variations. But what do we mean by interaction? Let us come back to some typical examples.
In the automatic film printer, if the sensitivity of the emulsion varies, the print will become accordingly either more or less dark. The regulation was provided for a given sensitivity of the blank film; for a different sensitivity, a different regulation would be necessary. The interaction has protected the effect from the variation of one factor only, that of the density of the negative.
With regard to the windmill with constant effect, if we were to measure the amount of current that it produced after a period of some years, we should certainly find it very different from that originally intended by its constructor. The gearing would have rusted and the magnets of the dynamo would have changed. Interaction therefore does not guarantee the effect against variation of all the factors. The only factor whose variations do not influence the effect is the factor of derivation; in this case it is the force of the wind, and in the other it is the density of the negative.
Let us imagine a windmill regulated by retroaction. A feedback would detect the variations of the voltage produced and would regulate the resistance accordingly. The effect would be much more regular than that of an interaction. It is obvious then that it would be worth while for an engineer to solve a problem by logic before embarking on a practical solution.
So it is desirable to lay down a principle, which, in a pragmatic guise, hides, as we shall see, a very important logical implication:
Interaction can only regulate an effector in relation to the variations of one factor, the factor of derivation. Retroaction regulates an effector in relation to the variations of all the factors.
It would seem as if retroaction always carries an enormous advantage; but interaction still has its points. The following is a diagram showing retroaction: the effect E acts on the factor A. The message is the function of the variations which, originating in B, have affected E. The same applies to the factor C, and moreover to the factor A, which has corrected its own variations. But outside these intrinsic variations of A, the message also reflects those which in A represented the correction of the effect of the preceding moment. We see then that the feedback regulates the factor A according to past information. The delay in regulation of retroaction is inherent in its basic principle; the retroactive message of time t depends on the retroactive message of the time t − e.
This is the concept that in other sciences is known as "hysteresis", extending the application of a term that belongs really to magnetism. Why should not the sense of this term be extended still further? All regulation, whether it be manual or automatic, acts only with a certain delay of efficiency; this is the time taken for the correction of a factor to make its influence felt on the effect. This delay depends on whereabouts in the system the regulation is applied; the nearer to the output of the mechanism that it is applied, the shorter the delay. It is in fact the time elapsing between the moment that a factor acts and the moment that its result becomes apparent in the effect. But, in addition to this delay which is manifested (as in all regulation) by interaction and which is mostly negligible, retroaction suffers from its own delay which we propose
to call hysteresis: the time necessary for a variation of a factor to make its influence felt on the regulating factor through the effector, the effect and the retroaction. In the sketch on the previous page, the delay in efficiency is represented by the path AE, hysteresis by the path B (or C)EA.
Regulation performed by hand seems to have only one delay of regulation, the delay of efficiency. In reality it also has a hysteresis, but a human one: the delay in action of our senses (detector), of our nervous transmission (message) and of our muscles (reactor). But it goes without saying that the hysteresis of a good machine is smaller than that of man, without even taking into account our worst disabilities, inattention and fatigue.
Let us define these different terms:
In an effector, the delay in efficiency is the delay which elapses between the moment when a factor undergoes a variation and the moment when the effect receives its influence.
In a retroactive effector, hysteresis is the delay which elapses between the moment when a factor undergoes a variation and the moment when the factor of application receives its influence through the effect and the message.
One can define the error of hysteresis as the difference between the effect which would be produced if the retroaction depended only on the value of the factors at the moment t and that which happens in reality owing to the influence of the value of the variable factor at the moment t − e.
Delay in regulation comprises two elements, and both have to be fought against. For delay in efficiency, this is possible; it can be diminished by applying the regulation as near as possible to the effect. But for hysteresis, it is quite a different matter. One might diminish in vain the chain of effects by which retroaction takes place; one might equally battle in vain against the inertia of the different components, because it is necessary that the variation of a factor should upset the effect in order that the effect should send back its corrective message.
The rolling mill whose feedback corrects the effect that has already been accomplished does not suffer from any delay in efficiency. It shows only hysteresis: the time between its discovering an irregularity in the sheet metal and its acting upon it accordingly. The error of hysteresis is the amount of variation
of thickness in the sheet metal which is able to exist on account of the small time lag of the correction.
Obviously an interaction has some delay in efficiency. In order to comprehend it, we may imagine the following absurdity: that in an automatic film-printing apparatus, the regulation, instead of acting on a rheostat, acted on the flood-gates of the turbine producing electric current. On the other hand, interaction possesses no hysteresis. Actually, it does suffer from a slight delay of its own. In the sketch below it is seen in the difference between the paths ABE and AE, or, more exactly, in the delay in action of A on E through B. But this delay can be disregarded.
[Diagram: the factor A acts on the effect E both directly and through B.]
Interaction and retroaction compensate for each other's defects and advantages. Interaction adapts the effect to the variations of a single factor, but acts without hysteresis; retroaction adapts the effect to the variations of all the factors, but acts only after a certain delay, the delay of hysteresis.
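The one-step lag that defines hysteresis can be made concrete with a small numerical sketch. The figures and the Python form are ours and purely illustrative: a constancy effector holds an effect at a target value, a factor drifts suddenly, and the correction applied at each step has been computed from the effect of the step before.

```python
# A toy illustration (ours) of hysteresis in a retroactive effector: the
# correction applied at step t is derived from the effect observed at t - 1,
# so the regulator always works on slightly stale information.

target = 10.0          # the value the constancy effector should hold
gain = 0.5             # strength of the retroactive correction
disturbance = 3.0      # a sudden drift in one of the factors at step 5

correction = 0.0
for t in range(12):
    drift = disturbance if t >= 5 else 0.0
    effect = target + drift + correction      # effect of the current step
    correction += gain * (target - effect)    # message based on this step's effect,
                                              # applied only at the next step
    print(f"t={t:2d}  effect={effect:6.2f}")
```

The printed values show the effect jumping when the disturbance arrives and then being pulled back to the target only gradually, one lagging correction at a time.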
CHAPTER VI
Retroaction, the Secret of Natural Activity
Our radio sets compensate for fluctuations in signal strength and reduce the gaps in the reception due to fading. The very weakness of a signal received will cause amplification; and if the signal is too loud, its strength will be toned down. Thus any deviation from normal leads back towards normality; herein the machine realizes the concept of goal-seeking. In the case of living organisms, such a faculty may often be likened to an intelligence inherent in the species, or a part of the mechanism of life itself; its amplification is either stronger or weaker according to the requirements. It is the final goal set by the constructor for the machine which governs its own activity, the effect of such activity influencing the factors so as to maintain stability. Thus the machine of today has over-stepped the field of action assigned to it in the nineteenth and early twentieth centuries.
One cannot but marvel at this automatism when one comes to examine it closely. All causes of error are corrected without its being necessary for man to analyse their nature or even to recognize or suspect their presence. It is the error itself which corrects the error. It is because it has deviated from its route that the "automatic pilot" is able to remedy this deviation. In the case of the machine, as in the animal, error is a generating force; it conditions its own correction.
ALL STABILIZATION IS EFFECTED BY FEEDBACK
If we extend this argument still further we shall find that it is the existence of feedback in living organisms that gives error its constructive power. This is surely what we understand by the mechanism of experience. The way in which life extracts from ill its appropriate antidote can only be fully realized when we invoke the principle of feedback. Whether it be a
question of temperature regulation, the pH of the blood, or the urea content of the blood, or whether it is necessary to expel a foreign body from the eye, nose or throat, it is the deviation from normality that causes the secretion of some hormone or other substance, or causes some reflex mechanism to act, in order to combat the imbalance. All microbic disease gives rise to a retroaction designed to restore the equilibrium that had been disturbed.¹ Always the remedy is roughly proportionate to the disturbance which it combats.
Claude Bernard gave a striking proof of his genius and clarity of vision when he said: "The stability of the internal medium is the condition for the existence of free life." The further living beings evolve, the more they achieve independence in relation to the external environment. A polyp dies if it is swept by the currents into water which is either too hot or too cold; it is not free. The Coelentera and the Spongifera have no internal media; the echinoderms have an internal medium which is in communication, however, with the external environment; the internal medium is sealed off in the Annelida, the molluscs, and the arthropods, but the liquid filling the cavity is not part of the organism. In vertebrates the plasma plays an analogous role, but the blood and lymph are living tissues; the animal is its own internal medium, and it keeps itself stable.
Pierre Vendryes elaborates the ideas of Claude Bernard in a book which shows a cybernetic approach before the science was formulated. In his book Vie et Probabilité ² he claims that independence is achieved by the organism due to its faculty of being able to disengage itself from external events, so that it does not become the plaything of fortuitous external changes. He sees in this stabilization against chance events an illustration of probability which will be discussed in the following chapters.
¹ Long before cybernetics was talked of, Herter, an American physiologist, gave as a definition of illness "the signs of reaction against any noxious agent".
² Albin Michel, 1942.
If we run, we require an increased supply of oxygen, just as the Watt steam-engine requires more steam when it has a bigger load to shift. An analysis of what happens, by the method
"3
THINKING
BY
MACHINE
of the logic of effects, shows that the effect to be stabilized here is the concentration of CO2 in the arterial blood; the respiratory centre in the medulla acts as the detector element in the system, starting off faster and deeper respiration as soon as the CO2 level in the arterial blood is increased. This increase of level is accompanied by an increase of lactic acid in the venous blood, which is the measure of the previous increase in muscle activity. In this book, however, we do not propose to deal with the human organism, which will be the subject-matter of a future book, Homme en Equations.
What is remarkable is that the mechanisms of the nervous system make use of the same principle. Without attempting proof for the moment, we will here affirm that feedback governs all our movements. It is due to feedback that the adjustment of our movements towards their objective is possible. Let us consider for a moment the movements of the hand if we wish to pick up a needle lying on the table in front of us. The action designed to pick up the needle follows a smoothly running sequence of appropriate movements. But in certain diseases the action will be clumsy, spasmodic and will degenerate into a tremor: intention-tremor. In another type of nervous disease the movements will be wildly inappropriate and suggest that the nervous system is without information as to the position of the limbs: ataxia. The effect of the absence of an organ may often furnish a demonstration of its utility. In these pathological cases, the functions of the nervous feedbacks are rendered apparent by the effects of their breakdown or inco-ordination. The study of nervous disease showed the early students of cybernetics the parallelism between nervous activity and that of electronic mechanisms.
In the first group of cases cited, the intention-tremor is due to disease of the cerebellum; in the second, the ataxic inco-ordination is due to interruption of the ascending tracts in the spinal cord. We are entitled to assume that in both cases retroactive circuits governing the movements have been affected; this hypothesis in the hands of the earlier cyberneticians was a fundamental step in the application of the new science to the nervous system.
A scheme of motor regulation can be sketched in which the "effect" is the movement, the "detector" the sensory proprioceptive
system which informs the nervous system as to position and tension of joints and muscles. The "message" conveying this information ascends in the proprioceptive tracts, and the factor that is modified is the motor discharge. The movement taken as a whole is regulated by the distance that separates the limb from its objective.
Another type of control, visual control, works in with that of the proprioceptive system. The eye detects the relation between the movements achieved and the objective. Sufferers from ataxia endeavour to supplement their defective proprioceptive sense by sight, but sight in itself is unable to compensate for the deficiency in full.
It would seem, then, that before the advent of the machine age all work was regulated by a feedback system operating in man and animals. When an ox pulls a cart, if the resistance to the wheels is increased by some incline or a stone in the way, the animal reacts by an automatic increase of traction, just sufficient to overcome the difficulty. Our motor-cars without a truly automatic gear-change are in this respect inferior to the animal form of traction. In the early times before the advent of machinery, all work benefited by the power of adaptation that has generally been claimed as the irreplaceable advantage of animal or manpower; machines which had no feedback were at a grave disadvantage. The least task requires to be regulated in terms not only of what has already been done but also of what there is yet to be done. It is always necessary to record the effect, to detect any deviations and to react according to the deviation by the modification of our movements. Observations are made by the sense organs and communicated by the afferent nerves to nerve centres that estimate any deviations from the norm by reference not only to acquired, but also to innate, sensory structure; the operation is carried out by our hands; here is a feedback system. All reflection may be understood as an example of the effect retroacting on the cause. Even the etymology of the word suggests this; an effect is contemplated in advance by the mind and its relation to the goal is estimated; thence the nearest approach to the ideal is made possible, by correction of the factors involved. Memory itself can be represented as a registration of retroactive messages analysing effects according to their success or failure.
"5
THINKING
BY
MACHINE
A machine without a feedback would be incapable of acting on its own—it would be absolutely ataxic. If left to itself, it would be limited to a programme determined for it by the operator, and the occurrence of any unforeseen circumstances could not be met by any adaptation. Its master would have to guide it as he might a cripple. In reality, of course, he supplies the feedbacks that the machine lacks; when he detects certain deviations from the goal he is able to govern the machine as a whole, by acting on certain factors that were left designedly free by the constructor. Such a machine constitutes the reactor in a man-machine complex governed by retroaction on the part of the man.
When the first true machines made their appearance, a genius very soon invented feedback; Watt created his governor, not by theory, not by logic, but by the instinct of a technician. This at any rate is generally admitted by the new science of cybernetics. One may doubt whether all work before the advent of the steam-engine was governed by human or animal feedbacks; surely use was made of one natural agency, however irregular in action: the wind. It was even more important for a windmill to find ways of adjusting itself to gales and breezes than for a steam-engine to respond to variations in load. It seems hardly possible that through the ages during which humanity depended on windmills for its bread no one bothered to deal empirically with variations in grinding power due to irregularity of the wind. The author has inquired into this and discovered an answer: Watt cannot be claimed as the first in the field. Long before him there existed a retroactive mechanism in windmills, the "baille-blé", which regulates the grain delivery from minute to minute in response to the requirements.
THE "BAILLE-BLÉ" (MILL-HOPPER), PROGENITOR OF RETROACTION
A description of this mechanism was found in the Encyclopédie méthodique des Arts et Métiers mécaniques of 1786, in the chapter on windmills. Bertrand Gille, in a later study of the history of technical processes, was able to trace it back to 1588, the date of publication in Paris of a book entitled Diverse Artificiose Machine, written in Italian by Ramelli, a military engineer of the time of Henry III. This work describes a similar
device used in water-mills. Here was a governor ante-dating that of Watt by at least two centuries.
The grain distributor has always been called a "baille-blé" and consists of a wooden chute which guides and delivers the stream of grain. The end of this chute rests directly on the driving shaft of the mill, which at this point is squared, or encased in a more or less square box whose edges are strengthened with metal. At every revolution the "baille-blé" receives four knocks, each of which makes some grain fall out. In modern mills this principle is termed a "shock distributor". When the wind increases, the mill turns more rapidly and will receive more grain; with less wind, the feed will be diminished.
At first sight it might seem that this mechanism is another version of Watt's regulating principle. When, for example, the sails factor is diminished because gales have torn away some of the sail cloth, the mill will slow down and receive less grain; this will allow it to recover some of its former speed. If the resistance of the grain is increased, as may happen if some stones or grit get in with it, the mill will have to increase its power, which it will do by diminishing the grain supply. As far as the wind is concerned, the rate of revolution is most certainly not stabilized, for it will continue to depend on the wind. It might even be urged that here is an example of a servo-mechanism controlled by the wind; this would mean that the system could compensate for disturbances affecting the sails or for hardness of grain, but not for variations of the wind's force. This is wrong; the wind force is a factor, not a controlling force. It is not through its effect on the grain delivery that it turns the sails of the mill. Here is an ordinary feedback system, derived from the speed and acting on the grain supply, as the following diagram shows. But the factor of application is not strong enough to cancel the effect of variations of speed due to wide variations in the wind force. The feedback which is sufficient to compensate for variations in the lesser factors (sail efficiency, hardness of grist) is unable to compensate for the wind factor; it can govern all the factors but one.
In water-mills, for which this system seems to have been primarily designed, the feedback is not called upon to make great variations in grain flow, since the water is always regulated by a sluice gate and the only important variable
factor will be the hardness of the grain. If this system is unable to fulfil its compensatory function as far as wind force is concerned, it shows another function of retroactive linkages which is not always so easily discerned in other examples: it binds all the factors to that of the factor of application through the effect, making this dependent on each one of the other factors and
thus on all of them together. Here, the grain supply depends on the wind through the factor "speed", so that the grain supply is always related to the wind requirements. This secondary function of feedbacks, of harmonizing two factors through the same effect, seems to occur frequently in physiology. In respiratory regulation, the amount of oxygen breathed in is always proportional to the amount of lactic acid formed through muscular metabolism, that is, to the effort made. One is therefore entitled to think that the true function of this feedback is not so much to control the amount of CO2 in the arterial blood, which is not an end in itself, as to regulate the supply of air to the needs of the muscular mechanism.
CHEMICAL EQUILIBRIA
We find the retroactive mechanisms of our artificial machines in physiological mechanisms. We find them indeed in all man's activities, whether he thinks, or works with his hands, or directs a machine (the man-machine retroactor), accumulating the corrective power of feedbacks in his own memory or in the programme of a mechanism. It would even appear that retroaction is the fundamental property of nature which tends to maintain its orderly equilibrium under all conditions.
To instance chemical equilibrium: at first sight it is difficult to realize that the laws governing this are the same as those applicable to widely different phenomena. In fact they are only the expression by chemists, in their own field, of logical principles that are universally applicable. If a reaction is reversible, that is to say, if the compounds produced by it can be restored to their original form, equilibrium is established between the tendencies to combination and dissociation. To give an example, let us take the formation of an ester from an alcohol and an acid (esterification); this reaction has a reverse process (hydrolysis). These two reach a state of reversible equilibrium, symbolized in chemistry by the sign:
ACID + ALCOHOL ⇌ ESTER + WATER
(esterification from left to right, hydrolysis from right to left)
The effect to be considered is the proportion of ester (or alcohol) in the mixture. The more ester, the more hydrolysis will take place and the less esterification. But the factor "esterification", which increases the proportion of ester, is positive, and the factor "hydrolysis", which diminishes it, is negative. There is therefore a positive retroaction on the negative factor and a negative retroaction on the positive one; that is to say, a state of equilibrium. Even if we did not know from experience that equilibrium is established between these two reactions, it would be implicit in logical analysis, which illustrates the fact that the logic of
effects is an essential form of approach to knowledge. Even if we did not know how to increase the proportion of ester, the above diagram should leave us in no doubt that we must cross the messages, acting positively on the positive factor and negatively on the negative factor. This is precisely what is done in commercial ester production; water is removed as fast as it is formed to inhibit hydrolysis, and alcohol is added to increase esterification. The natural constancy regulation is replaced by an artificial tendency regulation.
What is the point of equilibrium? This is a general question of negative feedback whose answer in chemical terms is the "law of mass action" of Guldberg and Waage: the reaction will effectively cease when the direct and reverse reactions are equal in velocity. If instead of "velocity" we say "differential coefficient" or "slope of a curve", we begin to discover that we are dealing with a particular case of the universal law of equilibrium of the effect in negative retroaction. In a similar case the equilibrium of the amplitude of the counter-reactions is expressed in the formula A = 1/R, where the increase in amplification A is the reciprocal of the reaction R. But at this stage we may not go into the law that proclaims the balance between the tendency to variation of the effect and the retroactive correction; for the purposes of a popular exposition many such paths leading to cybernetic by-ways, however tempting, must be ignored if the discussion is to be kept within reasonable bounds.
In a chemical reaction the attainment of equilibrium is not, however, always entirely determined by the amount of reacting bodies present; if their reaction modifies temperature and pressure, these must be considered. In such cases the "law of moderation" of Van't Hoff and Le Chatelier may be stated thus: when any influence or factor capable of changing the equilibrium of a system is altered, the system tends to change in such a way as to oppose and annul the alteration in this factor. This almost amounts to a definition of negative retroaction. Consider a salt in the presence of a saturated solution of the same salt. When the temperature rises an additional amount of the salt will be dissolved. This is so because solution is accompanied by absorption of heat; it is an endothermic event. According to the above-mentioned law, the only change in
the system possible is one that will compensate for the increase of temperature by heat absorption, and this is effected by increased solution of the salt. The absorption of heat depends on the effect "solution"; it acts as the "detector"; it retroacts on the temperature factor in order to diminish it. The law amounts to this: that a retroactive message must always oppose any variation of a factor, if a system is to maintain its equilibrium. That this is so we have already learnt from the logic of effects.
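For readers who like the argument in symbols, the two relations alluded to above may be written out in modern notation. The notation is ours, not the author's: the first line is the mass-action condition for the esterification example, the second the standard closed-loop relation from which the statement A = 1/R follows when the product AR is large.

```latex
% Mass action: at equilibrium the esterification and hydrolysis velocities
% are equal, which fixes the equilibrium constant of the mixture.
\[
  v_{\text{esterification}} = v_{\text{hydrolysis}}
  \quad\Longrightarrow\quad
  K = \frac{[\text{ester}]\,[\text{water}]}{[\text{acid}]\,[\text{alcohol}]} .
\]
% Counter-reaction: with open-loop amplification A and a fed-back fraction R,
% the effective amplification is
\[
  A' = \frac{A}{1 + A R} \;\approx\; \frac{1}{R} \qquad (A R \gg 1),
\]
% i.e. in the limit the amplification is the reciprocal of the reaction R.
```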
To take yet another example: consider the synthesis of ammonia from nitrogen and hydrogen, which is a reversible reaction and hence only gives a small yield. The synthesis, however, is accompanied by a diminution of volume; an increase of pressure would therefore be a favourable influence. Actually, commercially produced ammonia is subjected to very high pressure. In this case we have an example of a servo-mechanism. The effect—the yield from the synthesis—is stabilized as in the example of esterification, but its equilibrium point is dependent on the pressure. In order to demonstrate the validity of a law by the logic of effects, one must give an account of the system which allots appropriate values to the factors involved; if this yields a stabilized whole, then the validity of the law may be considered proved. If we examine this synthetic method in the light of the law of Van't Hoff and Le Chatelier, taking the yield as the effect, the pressure factor must be + since according to the law an increase of pressure increases the yield. This may be shown diagrammatically:
This system, giving us a negative retroaction which is stable, demonstrates the law.
VITAL EQUILIBRIA
We may next consider a similar system of interpretation in the case of vital equilibria such as were studied by the great mathematician Vito Volterra, the creator of modern functional analysis. Such vital equilibria are built up between different living species where one lives on the other, and are demonstrable not only between the fishes and worms of a pond but between man and nutritive plants. Let us think of a pond containing goldfish and worms. The more fish it contains, the more worms are eaten; the supply of worms diminishes and the fishes die of starvation. If, on the other hand, the worms increase greatly, they will feed more fish, which in consequence will take a greater toll of the worms, and so equilibrium is set up between the two populations. This equilibrium is that of an effector set for unvarying work, governed by negative retroaction according to the dual scheme depicted below.
[Diagram: the dual retroaction between the worm and fish populations.]
This equilibrium involves hysteresis; action and reaction do not compensate instantaneously; it is always possible for one side to gain a momentary advantage. Volterra studied the case of certain soles and sharks in the Adriatic, an
almost-enclosed sea, where the two types of fish dominate alternately. Yet another such cycle may illustrate the matter even better: in a certain region of Northern Canada, the foxes prey entirely on rabbits and the rabbits have hardly any other enemy than foxes. But there is always some variation round the point of equilibrium, which is never actually found; for about four years the rabbits lead and then for the next four years it is the turn of the foxes. When the foxes are too numerous, so many rabbits get killed that the race is on the point of dying out, and then the foxes, left with nothing to eat, die off in turn. Then, with no enemies, the few rabbits are able to proliferate in peace to such an extent that the few surviving foxes, with an abundance of food, are again on the increase, and the cycle begins once more. In just such a way does a machine with an ill-adjusted regulator start "hunting".
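The alternating lead of rabbits and foxes can be reproduced with the pair of equations Volterra used for such cases. The following Python sketch is ours and its coefficients are invented; it merely shows the two populations "hunting" round their point of equilibrium instead of settling on it.

```python
# A rough numerical sketch (ours, arbitrary coefficients) of the prey-predator
# oscillation described for the rabbits and foxes: each population retroacts
# negatively on the other, but never instantaneously enough to cancel the
# deviation, so the two oscillate about the point of equilibrium.

rabbits, foxes = 40.0, 9.0
dt = 0.01
for step in range(4000):
    d_rabbits = ( 1.0 * rabbits - 0.1  * rabbits * foxes) * dt
    d_foxes   = (-0.5 * foxes   + 0.02 * rabbits * foxes) * dt
    rabbits += d_rabbits
    foxes += d_foxes
    if step % 500 == 0:
        print(f"step {step:4d}: rabbits={rabbits:6.1f}  foxes={foxes:5.1f}")
```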
WHERE LOGIC DESTROYS MANKIND
Economic and demographic phenomena are governed by retroaction. Famine may be looked on as a vital case of adjustment to equilibrium, if we consider the ration of food per person as a stabilized effect dependent on two factors, one positive, the amount of food available, and the other negative, the size of the population. Any variation in the effect modifies the population factor positively.
But, owing to a special type of arrangement, the effect has not the same retroaction according to whether it increases or diminishes. When the ration per person increases, it can only lead to a slow increase in population, whereas in the reverse direction it acts much more rapidly and brutally—it is
obviously quicker for a man to die of hunger than to live and bring up a generation of children. If the factor "available food supply" is not a strong one and the population is always undernourished, any sudden diminution of its resources, such as a bad harvest, will set up a brisk retroaction to diminish the number of mouths to feed. Famine is thus the reactor of a negatively regulated system. Its fatal influence colours the picture drawn by the logic of effects.
We have just used the term "negatively regulated"; it might be said that it would be better to speak of faulty regulation. As a matter of fact it would be better not to use such terms as regulation and faulty regulation and "useful effect" at all; they are all linked with the notion of an artificially ordained end-goal, a purely anthropomorphic concept. An effector always tends towards equilibrium, whether this be one of constancy or tendency. It is a matter of indifference to the logical process of reasoning whether its laws favour human good or lead to human disaster.
The freaks of the law of supply and demand may give rise to retroaction that is sometimes favourable from an anthropocentric point of view and sometimes unfavourable. It is easy to present this law in logical guise. The supply offered is a negative factor; as it increases it causes a lowering of prices, whereas demand on the other hand is a positive factor. The law which this illustrates affirms that a rise in prices increases the offer of supplies and diminishes the demand, thus bringing
back the price to the point of equilibrium. The diagram makes it sufficiently obvious that this is an example of negative retroaction. But from the human point of view the two reactions have very different values. The increased supply is desirable, but restriction of demand, if the food in question is vital, may result in famine. Thus of the two types of regulation which are jointly exercised in the same system, one may be ideal and the other catastrophic.
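The price mechanism just described lends itself to the same kind of toy calculation. The linear supply and demand curves below are invented by us purely for illustration; the point is only that the gap between demand and supply retroacts negatively on the price and drives it back to its equilibrium.

```python
# A bare-bones sketch (ours, with made-up linear curves) of the law of supply
# and demand read as a negative retroaction: any gap between demand and supply
# retroacts on the price, and the price is driven back towards equilibrium.

def demand(price):   # the quantity demanded falls as the price rises
    return 100.0 - 2.0 * price

def supply(price):   # the quantity offered rises as the price rises
    return 20.0 + 3.0 * price

price = 5.0
for day in range(10):
    gap = demand(price) - supply(price)   # excess demand raises the price,
    price += 0.1 * gap                    # excess supply lowers it
    print(f"day {day}: price = {price:.2f}")
```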
STABLE AND UNSTABLE EQUILIBRIA
It is sufficiently obvious that any system endeavouring to establish an equilibrium must be regulated by negative retroaction, but the role of positive retroaction in natural conditions is less evident. This is perhaps due to the fact that we have given less attention to it in the machines where it exerts an influence.
Consider the Watt regulator acting with a positive retroaction. To effect this it is enough to change the position of one of the levers which connect the ball-weights to the steam-valve, thus reversing its action. It will now no longer ensure moderation in activity, but will condition frenzied over-activity. The machine will now aim at an apparently insane end-goal as assiduously as it previously sought to ensure a sensible performance; it will cut off power when an increase of work demands greater effort and increase power when work is light. Cutting down the steam supply with increased work, it will obviously soon stop functioning altogether; increasing it with little to do, it will race to destruction. Much the same sort of thing would happen in all other machines designed for continuous work, if the direction of the feedback were inverted. Alteration to the regulating device of a water-tank would cause it either to run dry or to overflow. In a boiler the water would become either too cold or boil over; in radio sets, anti-fading would become pro-fading. The rolling-mill would produce sheet metal becoming progressively thinner and thinner or thicker and thicker. The carbons of an arc lamp would separate as soon as the current was switched on, and in the case of the greatest triumph of feedback, the automatically controlled anti-aircraft gun, the fire would be diverted from the enemy as soon as his plane
was sighted. All this would happen if the directions of the leverage or the current were inverted.
A twofold diagram will make clear both the similarities and the differences between the two types of retroaction. If a ball is placed on a concave surface, it will come to rest at the bottom. If it be displaced, it will at once return to the same position. The force exerted in the displacement carries it back to its stable position. It is in fact a model of retroactive equilibrium. If the ball is placed on a convex surface, even if we succeed in placing it on the summit, it will not remain there. The least disturbance in any direction will cause it to roll down with increasing speed. Thus a governor with its reaction reversed will either stop the machine or overrun it.
[Diagram: a ball on a concave surface (negative retroaction, stable equilibrium) and a ball on a convex surface (positive retroaction, unstable equilibrium).]
The diagrams of these balls are remarkable models of stable and unstable equilibrium, and much more illustrative than the usual symbols of a cone standing on its head or on its base. Obviously it is the displacement from the equilibrium position which gives rise to the restoring force when the cone (or the ball) is in a stable situation; deviation leads to its own cancellation; there is negative retroaction. Equally, in the unstable situation the slightest displacement sets up a force leading to more displacement; deviation leads to its own increase; there is positive retroaction.
As a broad generalization, it may be said that equilibrium is stable if the retroaction is negative and unstable if the retroaction is positive. It now seems that the "law of regulation"—which remains applicable to machines—can appear also as a "law of equilibrium" in which constancy corresponds to stable, tendency to unstable, equilibrium. Thus the importance of retroaction for natural phenomena is even greater than might appear at first
sight; the two types correspond respectively to stability and instability.
In addition, negative retroaction explains the "principle of least action", or the "principle of Maupertuis", which is one of the fundamental principles of the universe. The pebbles on
the bed of a river lie so that they will offer the least resistance to the stream, as do the scales of a fish or the hair of an animal; a floating stick lies in the direction of the current; in the same way light finds the shortest path. Let us take two positive factors: a field of force and the angle at which a moving body cuts the lines of force in the field. The reaction of the field and the moving body may be considered to be the effect. This effect retroacts on the angle factor and diminishes it as it itself increases. The retroaction is negative and so the system is in equilibrium. This demonstrates the principle of least action.
A STRANGE RELATIONSHIP: AN OSCILLATING CIRCUIT AND AN ATOMIC PILE
By this time we should all be in a position to appreciate that retroaction is the great organizing force of nature, the important principle of "anti-chance"; but we have not yet dealt with all its possibilities. Let us again consider the ball in unstable equilibrium on a convex surface. It rolls down, but if it encounters an obstacle on the slope it will stop. We have now touched on one of the great universal governing principles: a positive retroaction whose effect is stopped, when it attains a certain magnitude, by some mechanism which may be called a limit.
A perfect example of positive retroaction being used in connexion with such limiting organs is that of self-maintained oscillating circuits. A circuit consisting of an inductance and a capacity will, it is known, start to oscillate in response to the least electrical stimulus. These oscillations would quickly die away; if, however, the stimulus is repeated at an appropriate frequency, the oscillation continues. If a positive retroaction enters into the system, the oscillations, once started, continue indefinitely; this is exactly what happens in self-maintained oscillating systems.
Consider a retroactive circuit joining the output of an amplifier to its input. There is an alternating voltage of 1 volt which is amplified to 100 volts. A part of the outgoing voltage is returned to the input. Let us say one-fiftieth, or 2 volts. This will add itself to the input, provided of course that it is in phase. There will now be an input of 1 + 2 = 3 volts, which when amplified gives 300 volts, and so on. There is no theoretical reason why the increase of the output should ever stop. Practically, of course, the whole system would blow up. It is here that the principle of limitation comes in; it is not necessarily due to one particular organ, although in most cases it will be provided by the filaments of the amplifier valves, which can only emit a certain electronic flow.
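Numerically, the snowball and its limit look like this. The sketch below is ours; the gain and the feedback fraction are those used in the text, the saturation figure is invented, and "valve saturation" is reduced to a crude ceiling on the output.

```python
# A small numerical sketch (ours) of the self-maintained amplifier described
# above: one-fiftieth of the output is fed back in phase with the input, so
# the signal snowballs until a limit (here, a crude stand-in for valve
# saturation) stops the growth.

gain = 100.0         # amplification factor
feedback = 1.0 / 50  # fraction of the output returned to the input
saturation = 5000.0  # the "limit": the valves cannot emit more than this

signal_in = 1.0      # the stray induced volt that triggers the process
for cycle in range(8):
    signal_out = min(gain * signal_in, saturation)
    print(f"cycle {cycle}: in = {signal_in:7.1f} V, out = {signal_out:7.1f} V")
    signal_in = 1.0 + feedback * signal_out   # the output re-enters the input
```

The first cycles reproduce the 1 volt, 3 volts, 300 volts progression of the text; after a few more the limiting ceiling holds the output steady.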
Such an arrangement is frequently found in spontaneous retroactive mechanisms; the message is of the same nature and acts in the same way as the factor on which it operates. It does not therefore condition a new function, but reinforces that already in operation. The arrows of both the message and the factor tend to converge, instead of the message being superimposed on the factor. This mechanism constitutes a perfect electronic analogy to
an unstable equilibrium such as that of the ball on the convex surface. When the ball is on the summit of the convexity it does not require even a flick of the finger in order to make it roll down—an imperceptible tremor will suffice to start its fall in any direction. In the oscillating circuit the situation is much the same; it is unnecessary to feed in a deliberate input current; the minimal induced current that occurs in every conductor will be enough to trigger off the retroaction; in the case of our example we assumed it to be 1 volt.
Another remarkable example of limiting action is given by an atomic pile. If we think of the uranium of a bomb in its global mass and not in terms of each of its atoms, the fission will appear to be a positive retroaction tending to the production of an infinite effect, limited only by the mass of the matter that is capable of fission: the neutrons trigger off the effect, which then increases in snowball fashion. The arrangement exhibits perfect parallelism with the oscillatory circuit.
x I"+ +
-+ + + + + + + .
Here is an exact resemblance to the unstable ball. The effect and the retroaction are both triggered off by a natural factor acting spontaneously but of minute power—in this case cosmic radiations. In the atomic pile, this fission has to be limited. Cadmium bars absorb most of the neutrons and allow the
passage of only enough to keep up the fission process, but not enough for it to become explosive; the cadmium is a limiting agent. But the arrangement is complicated by a retroaction which controls the action of the limiting agent. In order to ensure that the cadmium bars should not allow fission to run riot, they are submitted to a control in the shape of a feedback derived from the flow of neutrons. This is a secondary effect that can be detected in the ionization chamber; if the reaction tends to accelerate, it is braked by the cadmium bars, which are pushed a little more deeply into the pile by a motor mechanism. All this is conveyed by the diagram, in which the limiting agency has a negative value.
[Diagram: triggering neutrons set off the uranium fission; the neutrons of fission feed it back positively, while the cadmium bars, governed by a feedback from the neutron flow, act as a negative limiting agency.]
In nature, however, limiting action is not the function of one special organ. It depends on the retroactive relationship that only allows the passage of one definite message, which as often as not is that engendered by the coefficient of efficiency of the driving factor.
It is, however, desirable to rid our descriptions of mechanisms of anthropomorphic implications. From now on it is our purpose to present an abstract picture of these mechanisms.
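As a parting illustration of this chapter's theme, the interplay of the positive fission retroaction and the negative cadmium feedback can be caricatured in a few lines. The sketch and all its numbers are ours and bear no relation to real reactor physics; it only shows a snowballing quantity being held near a chosen level by a feedback that pushes the limiting agent in when the flux overshoots and withdraws it when the flux falls back.

```python
# A toy sketch (ours, with invented numbers) of the pile regulation described
# above: fission is a positive retroaction on the neutron flux, and a negative
# feedback moves the cadmium bars so that the flux is held in the neighbourhood
# of the target, hunting about it rather than settling exactly.

target_flux = 100.0
flux = 50.0           # started off by stray triggering neutrons
rod_absorption = 0.0  # fraction of neutrons soaked up by the cadmium bars

for step in range(40):
    multiplication = 1.10 * (1.0 - rod_absorption)  # fission: positive retroaction
    flux *= multiplication
    # feedback from the ionization chamber: push the bars in when the flux is
    # above the target, draw them out when it falls below
    rod_absorption = max(0.0, min(0.5, rod_absorption + 0.0005 * (flux - target_flux)))
    if step % 5 == 0:
        print(f"step {step:2d}: flux = {flux:7.1f}, rods = {rod_absorption:.3f}")
```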
THE END-GOAL AS A LOGICAL CONCEPT
When man gives an end-goal to an effector, he arranges its factors in such a way that a certain effect will in all probability ensue. This end-goal may then be considered as an effect whose probable action has been artificially planned; it is the result of a struggle with contingency. When a machine is provided with interaction or retroaction its likelihood of attaining
the end-goal is enhanced, since this probability is increased by guarding it against chance variations of the factors. But the operation then seems of a different type, inasmuch as it is not primarily determined by the values of the factors; it depends on the internal organization of the effector. This internal conditioning of the end-goal is also found in nature. But in natural processes an end-goal due to the action on the factors of predetermined values is hardly conceivable from an anthropomorphic view-point.
In the days of classical mechanics, only one way of ridding the effect of contingency was known: this was by very rigid predetermination of the factors. Observations based on machines of this nature made it appear inconceivable that natural effectors could use the same type of mechanism. Now, however, we recognize that linkage between factors and effects can organize the effects independently of the functional determinism of the factors, so that it is easy to realize that natural processes have their own way of combating contingency. All interpretations of end-goal attainment by natural effects appeared to be logically inexplicable, so long as it was thought that the end-goal could only be reached by predetermined factors. We now know that nature has other means of ensuring that an effect should have every chance of accomplishment. Nature does not act along the rigid lines of strict determinism, but possesses to a great extent the power to protect its processes from chance interference. Between these two extremes there is room for innumerable philosophic doubts. It is a purely mechanical device, the Watt governor, which offers the solution to the problem; it has shown that certain effects are stabilized by their own action.
Millions of effects are actualized around each of us. They may have varying degrees of importance. They are contingent. But if one of them acts on one of its factors, thus establishing its own particular value, it is no longer a product of chance. At that moment, two elements that we distinguished in the attainment of an end-goal are united: a certain effect and a certain value of this effect. The effect that in our self-regulating machines we pronounce as being useful, whose utility we recognize in physiological mechanisms, is the effect that retroacts. The value, the purpose of the end-goal, constitutes
the zero point of negative retroaction. The ideal of an end-goal considered as a metaphysical concept, and as such frequently denied (not without justice), is entirely re-established when seen from this logical point of view: if a system tends to achieve a certain effect with a high degree of probability, this effect is its end-goal.
The end-goal of an effector is a certain effect whose probability is much greater than if it were dependent on mere chance.
All end-goal results postulate a struggle with contingency, an anti-chance organization, which may take either of two forms. The artificial end-goal is achieved by the strictest possible determination of its factors; this is typical of mechanisms. The natural end-goal is achieved by appropriate organization of the inter- and retroactive linkages between factors and effect; it is characteristic of the living and mineral worlds. Between one or other of these methods there is, however, nowadays, no fixed division. The artificial often conquers the natural type, and this revolutionary process, the cybernetic revolution, is not only to be found in material processes but in the manner in which we view life.
A tree grows up towards the light. We may be invited to admire the teleological character of nature. Others may tell us that the behaviour of the tree was rigidly determined. This age-long discussion between the two schools of thought seems to be futile as soon as we realize that this natural phenomenon is due neither to goal-seeking nor to external stimuli, but to the internal constitution of the effect. The tree threw out "effects" on all sides in the shape of branches, but it was the branch that got most sunlight which developed the factors that caused it to grow, and it will continue to grow as if the sun drew it up like a magnet. This we call a positive retroaction, or an effector with a positive tendency; for the tree is heliotropic.
The pebble that is rolled about by the movement of the sea tends to assume a smooth surface that will aid further rolling. The effect here is the movement of the pebble, which retroacts on the resistance-to-movement factor, rendering it continually less resistant. The more the pebble rolls, the more it wears
away its irregularities, and so the better it rolls—an example of positive retroaction; the pebble tends towards the annulment of its resistance. We need neither determinism nor teleological finalism, but the internal constitution of the act itself—a constitution that can overcome contingency. Let us now look at some more complex cases.
FROM BUS QUEUES TO INDIAN FAMINES
What person, queueing for a bus, has not, at some time or other, burst into a flood of invective against the bad organization of the transport system? A wait of ten minutes and then two or three buses will roll up almost bumper to bumper. It does no good to lose one's temper; let us look at it instead as a good example of retroactive effect. If, for any reason, a bus starts to fall behind its time schedule, its lateness will get worse. The effector "omnibus A" is so arranged that its journey will
be accomplished in a given time; its factors have been adjusted as well as possible. The factor "length of stops" depends on three pre-factors: (1) the number of passengers who get off; (2) the number who get on; (3) the number of passengers standing inside the bus. These three pre-factors are all positive and increase once the bus has started to run behind schedule. The later it is, the more passengers it will find waiting at each stop; and the more crowded the bus is inside, the longer the
passengers will take to get on. Retroaction: + on + by + = +. The delay will tend to increase. The bus B ought to have followed five minutes later, but, not having had an initial late start, it arrives at the stopping places less than five minutes behind the other. There are fewer passengers, therefore they get on and off quickly. The bus now begins to gain more and more time until it will finally catch up with bus A. Its schedule was exactly the same as A's, but in the one case the effect tends to increase and in the other to diminish. Both effects are due to the initial disruption. In one case the unstable ball has overbalanced on one side of the convex curve, and in the other case on the opposite side.
What is the remedy? To act on one of the positive factors of bus A. Inasmuch as one cannot prevent the passengers getting off, nor remedy the obstruction of the exit, the only solution would be to take up no more passengers at the stops and thus accelerate A and slow down B, by leaving it a surplus of passengers to pick up. Conductors actually do this sometimes, to the great indignation of the queue of waiting passengers, who do not understand the mysteries of positive retroaction.
From comedy to tragedy. In India vast irrigation schemes have been undertaken to overcome shortage of food supplies due to drought. The first results, however, were disappointing. The quantity of food increased, it is true, but the more food that is available the lower the death-rate; the population increase offsets the increased food supply. There is a retroaction of the average person's ration on the death-rate (positive pre-factor): when the ration increases the death-rate decreases. We have then + on + by − = −. The effect "average ration" thus tends to be stabilized, especially when the irrigation factor affects the death-rate through improved sanitation. We have thus a serious situation in which the remedy leads to its own destruction. The logic of effects will permit the difficulty to be overcome. The feedback acting on the death-rate pre-factor will tend to stabilize the average ration effect in spite of the increase of the irrigation pre-factor; we must offset this retroaction. But we know that feedback is impotent to control a tendency once a factor varies beyond certain limits; in such a case it cannot compensate for the deviations arising in the
effect. The irrigation pre-factor would have to increase very suddenly for the resultant deviation of the effect to exceed the compensatory power of the feedback, that is to say that of the coefficient of efficiency of the pre-factor "death-rate". This is the situation pointed out by the English economist Malthus.
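The stabilization of the ration against the irrigation pre-factor can be caricatured numerically. The sketch below is ours and every figure in it is invented; it only shows the retroaction of the ration on the death-rate slowly absorbing a sudden increase in the food supply, so that the ration drifts back towards its old equilibrium value.

```python
# A crude sketch (ours, with invented numbers) of the retroaction described in
# the text: the ration per head acts back on the death-rate, so that extra food
# from irrigation is gradually absorbed by a growing population and the ration
# tends back towards its former equilibrium.

food = 1000.0
population = 1000.0
birth_rate = 0.03

for year in range(40):
    ration = food / population
    death_rate = 0.03 / ration**2     # the bigger the ration, the lower the death-rate
    population *= 1.0 + birth_rate - death_rate
    if year == 10:
        food *= 1.5                   # a large irrigation scheme comes into use
    if year % 5 == 0:
        print(f"year {year:2d}: ration = {ration:.3f}")
```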
MOUNTAINS AND FORESTS
When a mountainside begins to crumble away by erosion, it is an ever-accelerating process, and the resultant deforestation, after a gradual start, will increase rapidly. Vegetation, the only factor that combats erosion, is eventually destroyed by it; the surface water running off, which is the chief cause of erosion, gathers speed, thus furthering the process. There is therefore a double positive retroaction. The process can then only tend towards final disaster. The theory of effects makes the interpretation of this retroaction quite clear; in this case the tendency is unequivocal. The only reservation that need be made in a prophecy of ultimate disaster is that contingency is not likely to affect the factors involved beyond certain limits.
If we consider a stretch of country covered with a vast virgin forest, we are pretty safe in prophesying that some thousand years hence, though the forest may still occupy the same area, it will be no less dense. Apart from climate, six other factors contribute towards afforestation: seeding, fertility
of the soil, humidity, firmness of soil surface, shelter from gales, and, lastly, sufficient space, that is, the space necessary for the trees to spread their roots and branches. These are all favourable or positive factors. The decisive years for the future existence of the forest are the earliest. It is true that all the first five conditions may not be fully met until the forest has succeeded in establishing itself, when its further development is assured. The "forest" effect retroacts positively on the five factors with a tendency to increase them: the more trees, the more seeds; the more humus from fallen leaves, the more fertile the soil; the more undergrowth, the damper the soil; the more roots, the less likely is soil erosion with its resultant denudation; and the greater the number of trees, the more shelter from the wind there will be for each of them. For all that, the forest will not be able to exceed a certain maximum density; stabilization is achieved by a negative retroaction on the "lebensraum". The more trees there are, the less space for the growth of each one, and there will be more young stunted trees stifled by overcrowding.
A force of five positive feedbacks promotes the development of a forest of ever-greater density, but at the same time a negative retroaction begins to make itself felt, and when the forest reaches the stage of full development, a negative feedback stifles the trees and prevents too great an increase of density. Such a combination of retroactions could successfully combat the usual climatic variations, were it not for certain accidents
for which no feedback can compensate and which displace the factors beyond the usual limits of tolerance; extremes of heat or cold may kill the trees or at any rate inhibit their germination.
A NEW LIGHT ON POLITICAL ECONOMY
The theory of effects should help in the study of economic phenomena and the prediction of their evolution. It is here that cybernetics reassumes its original role, that of Plato and Ampère: the science of social government. This definition of the aims of cybernetics made by Ampère is very much to the point: "to seek out causes, to study effects, to foresee and enhance results by continually taking account of the mutual dependence of cause and effect."
For example, some economists praise the mechanisms underlying American prosperity, while others condemn them. Mass production allows low prices and high wages; this gives the public high purchasing power and creates prosperity. An increased output calls for an increased demand; this would seem all to the good, since mass production promotes prosperity and with it an increased demand. Such is the mechanism of American prosperity, above all in post-war periods when the demand for consumer goods is very high.
[Diagram labels only: “amount of goods possessed”; “fall of stone”; “of house”; “B”.]

On the one hand the effect due to independent factors, or, according to Cournot, due to an “independent causal series”. The arrows diverge without the genealogical stem of their pre-factors ever mingling with their foliage. This would be a pictorial representation of chance. On the other hand the effect due to factors having a common dependency. The arrow of the common pre-factor is only represented in its secondary state; but it symbolizes every common pre-factor even though the genealogical origins of the factors may be far back. The factors are not connected by a functional linkage, but for all that they are not entirely independent, since both are a function of a common variable.
In our fairy story, it would symbolize chance if the Lady Variables found themselves in the same territory as a result only of caprice. A chance effect is produced when its factors come together in the causal field without being constrained by any function or by a relationship to some common pre-factor. This may be summarized: a chance effect is one whose factors are not only independent, but also do not depend on any common variable. Chance, about which so much has been written, is thus defined with precision. But it is another matter to observe where chance is to be found.
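The distinction just drawn, between factors linked by a function and factors linked only through a common pre-factor, can be made visible numerically. In the sketch below (not from the book) the shared “ancestor” is simply a random quantity added into two otherwise unrelated variables; the sample size and the strength of the common contribution are arbitrary assumptions.

    # Two factors with no common pre-factor, and two that share one.
    import random

    random.seed(1)
    n = 10_000
    independent_a = [random.gauss(0, 1) for _ in range(n)]
    independent_b = [random.gauss(0, 1) for _ in range(n)]

    pre_factor = [random.gauss(0, 1) for _ in range(n)]        # the shared ancestor
    dependent_a = [p + random.gauss(0, 1) for p in pre_factor]
    dependent_b = [p + random.gauss(0, 1) for p in pre_factor]

    def correlation(xs, ys):
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
        vx = sum((x - mx) ** 2 for x in xs) / m
        vy = sum((y - my) ** 2 for y in ys) / m
        return cov / (vx * vy) ** 0.5

    print("no common pre-factor:    ", round(correlation(independent_a, independent_b), 3))
    print("shared common pre-factor:", round(correlation(dependent_a, dependent_b), 3))
    # the first figure sits near 0; the second near 0.5, although neither dependent
    # variable is a function of the other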
DOES CHANCE EXIST?

No link at all between factors? This condition could easily be fulfilled if we required only the absence of functions linking one factor with another, but we may ask whether it is conceivable that none of the factors should have any common relation to any pre-factor. The answer to this question must settle whether chance exists or not. Let us return to our house wrecked by a meteorite. It is not by chance that two stones clash together in falling; or perhaps, more exactly, the part played by chance is small whereas that played by determinism is great, and the reason for this is that the common pre-factor is intimately involved. On the other hand, we may cite a case where the common pre-factor is very remote: the argument that all mankind exists because, at a certain epoch, human beings appeared on earth. Every encounter of one human being with another and all that they do together is certainly not chance, for all these effects have at least one common pre-factor, a pre-factor which we might call Adam and Eve.

In some ways it might seem an extravagance to juxtapose these two cases. We are all prone to believe that the first case was no affair of chance, but that where the human factor is concerned chance may always play a part. We should assuredly be wrong; the two cases are logically identical. The effects have a common pre-factor; hence the factors tended to find one another within the boundaries of the field of action; hence there could be no question of chance. The truth is that between the one case and the other all intermediate stages are possible.

At the bottom of all the events that took place during the
Second World War, there was a common pre-factor: the existence of Hitler. If one followed out the causal chain of no matter what event taking place in France or Navarre during the past century and a half, one would always come across a pre-factor in Napoleon; just as all our thinking has a pre-factor in ancient Greece.

In order to understand even better how what we call “chance” grows as the common pre-factor recedes in the causal chain, we may take some extreme cases. Let us suppose that all the factors of an effect increase their ramifications as they grow further away from the effect, without, however, a single one of their respective branches coming into contact with those from other factors. One would for all that discover sooner or later that all the causal chains stem from a common pre-factor constituted by the formation of our planet. The effect, then, would be undetermined, at any rate for a considerable length of its causal chain. Certain pre-factors would be always common to every event; these would be the cosmogenic factors. But, still, an event whose causal chains are truly independent from their common origin in the cosmic pre-factors would appear to us to be entirely undetermined. Of course we would be deceived; pure chance cannot exist.

Let us imagine an event on the smallest conceivable scale, a corpuscle of an atomic nucleus, in the greatest possible setting, the universe. One of the nuclear corpuscles disintegrates in some part of the universe. What are the odds that it will be the particular corpuscle of which I am thinking, but cannot see? If we accept the calculations of Eddington, the answer is not difficult; the universe contains 136 × 2^256 corpuscles. The probability of an event is expressed by the ratio of the number of favourable cases to the number of possible cases, and hence the probability that this particular corpuscle disintegrates is 1/(136 × 2^256). If the disintegration takes place, the event may be said to have been 1/(136 × 2^256) determined; to the extent that it was determined to this degree, it was certainly not entirely a matter of chance. For chance to be absolute, the fraction expressing probability should be zero.
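The figure quoted from Eddington can be worked out directly; the arithmetic below is a sketch only, and the count of 136 × 2^256 corpuscles is the estimate cited in the text, not a modern value.

    # The probability that one named corpuscle, out of Eddington's total, is the
    # one that disintegrates.
    eddington_corpuscles = 136 * 2**256          # Python integers handle this exactly
    probability = 1 / eddington_corpuscles
    print(f"number of corpuscles: about {eddington_corpuscles:.3e}")
    print(f"probability for the named corpuscle: about {probability:.3e}")
    # roughly 1.6e79 corpuscles, hence a probability of roughly 6e-80:
    # minute, but not zero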
A fraction equals zero when its numerator is zero or its denominator is infinite. It is impossible to annul the numerator, since it expresses the number of favourable cases, and if it were zero the event would be impossible. Nor can the denominator be infinite, seeing that Einstein has demonstrated that the universe is finite. Only in an infinite universe could events of pure chance be possible; events, that is, with the probability ratio n/∞ = 0. Pure chance is only theoretical and applies to events which might be produced without having any chance of being produced. Thus, we can think of chance as “the complement to 1” of probability.

If an atom should explode before us, we would tend to say that the event is almost entirely due to chance; that it has taken place practically without having been determined, which is as much as to say that it was only very slightly probable. On the other hand if I shake my fountain pen on this sheet of paper, the paper will be blotted with ink; I should say that the event was almost entirely determined and that chance played no part in it. Between these two extremes of quasi-certainty and quasi-chance it is conceivable that there is a range of graded steps in which chance increases pari passu with the diminution of determinism.

Suppose a bomb were to be thrown from a plane, capable of destroying everything within a radius of a hundred yards; if it be aimed at my house, I am certain to be killed. If on the other hand it is aimed at my home town, the probability of my death will be diminished, and if I should be killed it would be possible to say that chance played a great part in my death. If the plane intends to drop its bomb in my part of the country, my death would be even more due to chance and still less predetermined; and if the bomb is to be thrown anywhere in France or in Europe or on the Earth, determinism would play a smaller and smaller part in my death and chance a greater and greater part. If the field of action be still further enlarged, if we imagine the bomb bursting anywhere within our galaxy, or even in the whole universe, it is obvious that the part played
by determinism in my death will tend to vanish and chance will tend to approach unity, without any sudden step from the circumstances in which the plane aimed at my house and my death was fully determined.

Rigid ideas must be abandoned for a more flexible approach, resting on the measure of determination and the measure of contingency. The measure of the determination of an effect represents the probability of its occurrence. The measure of contingency of an effect represents the probability of its not occurring, the probability of the opposite event. In every effect the sum of the measure of determination and the measure of contingency gives a value equal to 1.

Contingency is chance; determination is certainty. Chance and certainty represent limiting values towards which contingency and determination tend. They are limiting values such as absolute vacuum, absolute zero, the speed of light in relativity theory, limits which can never actually be reached. It is thus possible to postulate the following abstract definitions: Chance is the probability that one given event out of an infinite number of possible events will occur and is expressed by a measure of contingency equal to unity. Certainty is the probability that a given event will take place when only that event is possible and inevitable, and the measure of determination is equal to unity.

No wonder, then, that there have been such endless discussions concerning chance and certainty; neither one nor the other exists absolutely in our universe; we can only approximate to them in some measure. According to the point of view from which they are considered, they will be assigned values expressing their subjective importance to the assessor.

It remains only to demonstrate that absolute certainty is no more conceivable than absolute chance. Consider an effect together with its factors and pre-factors to infinity. In order to determine the effect completely, a certain value would have to be assigned to the factors considered as causal agents. But however much we may know of the factors, we can never have a complete knowledge of their pre-factors, for the very good reason that to know every possible variable of a pre-factor, it would be necessary to possess a complete knowledge of its factors, and so
on ad infinitum. Any change in the value of a pre-factor must change the value of the factor which it affects. It is of course humanly impossible to control all the pre-factors to their most remote origins.

If we wish to control the amount of water flowing in a river, it is not enough to construct embankments; it would be necessary to embank all the affluent streams and, again, their tributaries to the smallest little trickle; it would be necessary to estimate all the sources in the catchment area, the rainfall, the cloud states, the prevailing winds from all parts of the continent and so on to infinity. It is, therefore, impossible for a human brain to demonstrate absolute determinism, and our best calculating machines are not infallible. Even if their mechanisms were perfect, we could not know if their metallic structure might not fall victim to some cosmic cataclysm. In order to conceive absolute determinism, which predicates the control of all the pre-factors to infinity, it is necessary to postulate the existence of a God whose powers are by definition infinite, who is thus identical with certainty and whose contingency factor would be zero.

In short, an effect whose determinism is represented by n/n = 1, that is to say whose contingency is zero, is just as unthinkable as an effect whose determinism is n/∞ = 0, or whose chance value is equal to 1.
THE PROBABILITY OF EFFECTS

When the multiplicity of factors is considered, it is apparent that any attempt to calculate the probabilities of an effect is an impossible task. At what point indeed could an analysis of the factors stop? In any event affecting Jack or Jill, would it be necessary to assess the probability of Jack or Jill being born, or of their surviving a childhood illness? Would it not then be equally necessary to consider the probability of the existence of their parents and their remote ancestors? The truth is that any particular probability can only be evaluated within a strictly defined frame of reference; all causal chains must be confined within certain limits. All such problems of probability must start: “Given that this or that fact is so, then . . .” This does not of course simplify the problem; it is a condition of its presentation. Now, it is certain that a probability is only
relative to certain data. If a man wishes to evaluate his chances of being run over by a motor-car in the next sixty minutes, he must make the following premises: “Given that I am alive at this moment and that hence I must have been born, that my parents have existed and that far-off ancestors had the good fortune to escape from fatal illness before giving birth to the next in line of descent.” It would therefore be necessary for him to arrest the chain of causation at a given point whose probability was 1/1: that he is alive at the time of speaking. He must also admit that motor-cars have been invented, and must even cautiously follow the causal chain a little further: “Given that so many million cars are on the road.” In short, he must delimit a basis, a standard, a framework in order to determine the probability of the accident. Without this he would be building on shifting sand.

This firm foundation that he has established may be called the framework of the problem of probability. A framework of probability is a collection of facts whose probability is admitted to be 1/1 and which constitutes the necessary basis of reference for the definition of further probabilities. Without a well-defined framework, without such a basis of certainty, it is impossible to consider a problem of probability. More precisely, everything would go back to the original chain of the common pre-factors: Adam and Eve, the appearance of life, the formation of our planet, &c. Thus, in expanding to infinity the denominator of the fraction expressing probability, the fraction becomes practically non-existent.

None of the classical problems of probability can be stated except in a given framework. In the framework of the probability of an effect, each factor has its own particular framework. Thus in the study of every effect, there must be premised a temporal and a spatial framework. Certain frames are explicit and therefore need delimitation, and others are implicit and hence do not need formulation except in certain circumstances. Every time that an effect necessitates a reference to water, it would be otiose to add: “. . . given that water is liquid at this temperature . . .”. In the special cases studied in the calculus of probability all the frameworks are implicit; the frame containing the factors under consideration is always precisely and at the same time artificially determined. It is for this reason that the frame of reference
concept is not called for in the calculus of probability or, at any rate, not in the simplified problems with which it is generally concerned. The diagrammatic representation of the effect of chance assumes only a theoretical importance; no effect has arrows drawn from all directions without any of them having had some common origin before combining to produce the effect. It is a very different matter if we boldly admit that all the remote pre-factors are neglected. The problem then becomes amenable to common sense and enters into the realm of calculable probabilities. Thus in calculating the probability of any two lovers meeting, the fruitful amours of Adam and Eve are not considered.

This way of proceeding leads to a conception of chance, not as absolute, but as relative to a certain field of probability. That is indeed all we mean when we talk of chance. It is chance in a given framework, defined by a whole series of such clauses as “given that . . .”, which are taken for granted. A true chance effect is, as we have insisted, one in which the causal chains are rigorously independent. In the ultimate logic, such an effect cannot possibly be considered to exist; for common-sense reasoning, it remains a chance in a determined framework of probability. So it is permissible to talk of chance if a basis of certainty is clearly defined; this may be termed, more exactly, “relative chance”, the chance of common sense, which is characterized by the absence of interconnection between the causal chains of an effect beyond a conventional limit specified in the problem. If in following out the causal chains up to the admitted basic certainties we can detect no interrelationship between the factors, we call this an affair of chance.

All causal chains must be delimited at some point (a procedure that is graphically symbolized by a cross at the point of origin of the last factor to be considered). Such limited chains appear to be greatly simplified. But simplification has to proceed further; the intermediate factors whose probability approaches certainty are likewise suppressed. It is proposed to evaluate the probability of an occurrence, one of whose factors is that the author was mobilized in 1939. If we start an analysis from the time of the declaration of war by the French government, it would be unreasonable to follow out the actual chain of events that brought him to the barracks; to
detail how the Minister of War signed the decree of mobilization of the author’s class, a certain official of the Ministry telegraphed the order through the services of a certain telegraphist, the Prefecture received the order and a certain clerk conveyed it to the headquarters of the gendarmes, the officer in charge sent a particular gendarme to deliver the mobilization order, the gendarme got up from his desk to do so, took his cap off the peg, &c. Obviously the intermediate stages must be neglected, not for purposes of simplification, but because the probability of each of these steps is 1/1. It was certain, since war was declared, that the author would be mobilized, even if the telegraph system had broken down or the gendarme had been run over on his way to deliver the order.

The only events that need be recorded in the causal chains are those whose probability does not equal 1, the events that could modify the probability of the effect. Similarly the “virtual” events must be eliminated, such as what might have happened “if this or that had not come to pass . . .”. We are not concerned with what might have happened but only with what actually did happen: in our genealogical trees we do not concern ourselves with individuals who might have been born, but only with those who were born; a zoologist does not study potential branches of the evolutionary tree. It is, then, a question of calculating the probability of the events presented. It is only within these limitations that some sort of answer can be given to the constantly arising question of whether this or that event was a matter of chance.1

1 This probability of effect has nothing to do with probability of cause, to use a rather unfortunate term. The probability of cause is not concerned with the probability of each intrinsic cause, but, given an effect, it seeks to determine whether it is more probable that it was produced by one particular cause rather than by another. This is the problem studied by statisticians when they determine the coefficients of correlation.
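The rule just stated can be put into a few lines: within an agreed framework, only the links whose probability is not 1 enter the calculation. The events and figures below are invented for illustration and are not the author’s.

    # Probability of the effect within a declared framework: links of probability 1
    # are admitted as certain and simply dropped.
    causal_chain = {
        "war is declared":              1.0,   # part of the framework, admitted as certain
        "mobilization decree signed":   1.0,   # certain once war is declared
        "the order reaches the author": 1.0,   # certain whatever path the message takes
        "the author is fit to serve":   0.9,   # not certain: this link counts
        "his class is called up":       0.8,   # not certain: this link counts
    }

    probability_of_effect = 1.0
    for event, p in causal_chain.items():
        if p < 1.0:
            probability_of_effect *= p

    print(f"probability of mobilization within this framework: {probability_of_effect:.2f}")
    # 0.72: the certain intermediate steps never enter the calculation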
WHEN THE CAUSAL CHAINS FUSE

Can we determine from these considerations the part played by a common pre-factor: whether there is a directing influence on the part of the effect analogous to that which we have considered as interaction and retroaction; whether it be true that the nearer a common pre-factor is to the effect, the more completely is the effect controlled; whether an effect with
independent factors is one of chance; and what is the meaning of an effect some of whose factors are dependent on a common pre-factor? The following example gives us an answer.

Wartime; a person is killed by a bomb from an enemy plane when going to the food office about his ration book. He went a dozen times a year for this purpose alone, whereas before the war he never went at all. The same bomb kills a lawyer’s clerk who was visiting the town hall, not for ration cards, but on normal business which likewise necessitated a dozen visits a year; these visits used to take place before the war just the same. The two deaths have each two principal causal chains: the “visit to the town hall” chain and the “enemy aeroplane” chain. Everything would seem to be identical in the two deaths, except in the way the two chains determining the deaths of the individuals are based on the common pre-factor “war”. (Plane = bomb = war; visit to the town hall = ration cards = war.) For the lawyer’s clerk this does not apply; for him, the visit to the town hall is determined by business matters. Here is the difference: the death of the clerk is determined by the war and his profession; that of the other individual by the war alone. The probability of the clerk’s death is less than that of the other person, because it required a supplementary condition. It is the probability of the other individual’s death multiplied by the probability of the supplementary condition. In other words, the framework of probability of the clerk’s death is greater than that of the ration-card seeker.

In another example, two causal chains fuse very far back in a pre-factor, the existence of Cicero. If one chain, instead of leading to Cicero, led to Ovid, the event would be less determined. We are now in a position to formulate the following law: When causal chains have a common pre-factor, the probability that the event would have occurred without this fusion of causal chains is either increased or diminished according to the sign of the common pre-factor in one or other causal chain; it is multiplied or divided by the probability that the fusion of the causal chains occurred by chance.

This law has important practical consequences. When a number of causal chains fuse in one or more common pre-factors,
the probability of the final event tends to depend more and more on the probability of the common pre-factor; the framework of probability tends to become more limited. Hitler, during five terrible years of war, was almost the only pre-factor for most of our actions; all our actions would have been very different if Hitler had never existed. This is good common sense and proved by the law of pre-factors.

But here a law of conduct seems to become evident; man is not the toy of external events, he can influence them; Oriental fatalism is incompatible with the logic of effects. Every event has many factors and pre-factors. Every time we succeed in bringing one or other of those factors under our own influence, we make it dependent on ourselves and we concentrate the possibility of the final event within the framework of probability of our decisions. It is necessary for us to so organize ourselves that we can influence the greatest number of pre-factors. It is this that we term the weaving together of the threads of our plan, or sometimes just working at it or not letting it master us. Man certainly does not master events (no factor “makes” them, for every event depends on contingency) but man can influence the probability of events by striving to control the greatest possible number of factors, or at any rate by trying to influence them. Our interpretation of the concept of probability helps us to understand the utility of our efforts, to think of a check as a normal happening in the play of probabilities, and to go on trying to participate in the initiation of as many pre-factors as possible.

When a man constructs a machine, he entertains a pre-factor which is the wish to construct a machine capable of responding to the factors involved in its mechanism. When this condition has been realized, the pre-factor itself becomes a common creative factor and consideration of probability is devoted to this alone. It might seem that effort is entirely devoted to this preparatory action on the pre-factors, and indeed it tends to assume this character more and more as technical devices are perfected, but such control will never be complete. Certain factors will always escape human control. It is always possible that the water in the boiler may freeze instead of boil, as Jeans hinted in his famous parable; that the constructional steel may soften
from strain; that lightning may strike the machine-shed; or a volcano may erupt in its vicinity; or the sun might even disintegrate.
A DEFINITION OF PROBABILITY

Let us consider two balls in a box representing the framework of probability. If we shake the box, what is the probability of the balls hitting one another? If the box is so small that the balls are almost touching one another, this probability is represented by 1; the larger the box, the less inevitable will be their clash. If the size of the box tends towards infinity, determination of the collision will approach 0 and contingency will approach 1; their collision will become a question of pure chance. We see that the smaller the frame, the greater will be the probability of the effect. Only an infinite framework of eventualities could give a zero probability, that of pure chance. The law of common pre-factors derives from this progressive increase of probabilities with restriction of the framework; for common pre-factors contract the frame by suppressing some of its conditions. It is obvious that probability will increase with the proximity to the effect of the point at which fusion of the causal chains takes place. If a remote fusion suppresses one out of ten determining conditions, the probability of the effect will be much greater than if the same determining condition is one among a million.

This surely puts us on the track of an entirely new concept of probability, one where it is defined in relation to the framework in which it acts. But first of all the concept of the framework itself requires elucidation. Every variable of a causal function is capable of variation within the limits imposed by its physical nature; the values of the diverse factors are naturally determined by these limits. We are therefore constrained to consider the frame of probability, not as containing a number of factors, but, in more abstract terms, as a multi-dimensional area within which the factors are free to undergo physical variation. The frame of probability of an effect is the space of n dimensions within which the n variables of the causal function are free to vary. We may note the parallelism with the previous definition of the causal field as the space of n dimensions within which are
found the n variables of a causal function which determine the effect. The frame of probability is the space in which the variables move and within which they must certainly be contained; it is to be thought of as a physical concept, whereas the field is the space in which these variables must be for an effect to exist, and its limits, which are functionally determined, are mathematical. In our fairy story the frame is the stretch of garden in which the Lady Variables were free to wander, it being understood that they would only be able to bring about an effect if they found themselves within the magic circle. It must be emphasized that the frame is the property of the mass of variables, the whole of which area is free to every one of them, no matter what their function may be; whilst the possession of the field is by virtue of their specific function.

If the frame and the field coincide, or if the frame is contained in the field, the effect is determined. If the field is altogether outside the frame, the effect is impossible. If the field and the frame have a part of their area in common, the probability of the effect will be determined by the ratio of their common area to the area of the frame. The smaller the ratio, the less probable the effect. We are thus led to a definition of probability which accords with logical thought better than does the classical definition, thus: The probability of an effect is the ratio of that part of its field of causal function, which falls within the frame of probability of the variables of this function, to the frame of probability.

Let there be two balls on a billiard table. The table is the frame of probability of the ball variables. On the table there is a collapsible trap-door capable of bearing the weight of one ball, but not two. When the two balls meet on the trap, it collapses and the balls both disappear. If the balls are in movement with uniform velocity and are not any more likely to meet at one spot on the table than at another, the trap-door may be thought of as the field of the effect “vanishing”. The probability of this effect increases with the size of the trap-door, that is, the size of the causal field. If, now, it is assumed that the balls are less likely to find themselves on one part of the table than
on another, owing to some such circumstance as a repulsion from some direction, other variables at once become operative in the causal function. The frame and the field grow in many additional dimensions and it becomes difficult for the mind to picture them. It might seem futile to talk of the relationship of two defined areas with some sixty to a hundred dimensions, since such a relationship must elude calculation. This is undoubtedly true, but the probability of an effect may likewise be incalculable. The relationship of the areas can only be evaluated in certain very simple conditions, and it is in such fundamental problems that calculation of probability becomes possible. In such cases the number of possibilities will of course be determined by the field and the frame. As a matter of fact the evaluation of a probability can only be effected if the frame of the variables has been defined, which demonstrates that the whole question of probability is an anthropocentric one. If it were desired to estimate the probability of a pond freezing, an agreed frame of temperature values would have to be fixed; should such a system be chosen so as to vary only within the limits of terrestrial temperature, or should the frame be so extended as to be of universal application, between absolute zero and some stellar temperature?

The artificial goal of an effector has been shown to depend on its factors. It is now easy to understand how this is arrived at: by reducing the framework through strict control of the greatest possible number of factors. This control is effected by a common pre-factor which represents the choice of the human controller. The fraction representing the ratio of field to framework is likewise increased by the consequent diminution of the denominator. The end-goal of a natural effect, that which increases its probability by interaction or retroaction, is obtained by increasing the numerator, by extension of the field. This gives us a glimpse of the vast unity of the world which we propose to study.
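The billiard-table definition lends itself to a rough numerical check. In the sketch below (an illustration, not a calculation from the book) the frame is a square table of side L, the field is the condition "both balls are on the trap-door", a smaller square of fixed side, and the probability is estimated by random trials; all the dimensions are invented.

    # Probability of the effect "vanishing" as the frame (the table) is enlarged
    # while the field (the trap-door) stays the same.
    import random

    def chance_of_vanishing(table_side, trap_side=1.0, trials=100_000):
        random.seed(0)

        def on_trap(x, y):
            return x < trap_side and y < trap_side

        hits = 0
        for _ in range(trials):
            x1, y1 = random.uniform(0, table_side), random.uniform(0, table_side)
            x2, y2 = random.uniform(0, table_side), random.uniform(0, table_side)
            if on_trap(x1, y1) and on_trap(x2, y2):   # both balls within the causal field
                hits += 1
        return hits / trials

    for side in (2.0, 4.0, 8.0):
        print(f"table of side {side}: probability about {chance_of_vanishing(side):.4f}")
    # enlarging the frame alone makes the effect steadily less probable,
    # roughly 0.06, 0.004 and 0.0002 for these three tables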
CHAPTER VIII
Anti-Chance

Up to now the variables concerned in the causal function have been considered as independent. Each is free to move relative to the others. It is for this reason that their common presence in the functional field, as evidenced by the effect, or in other words by the event, is demonstrable by a mere statistical calculation. The fact, that is the effect considered with regard to its values and qualifications, even if it does not actually exist (the fact governed by the functional law), can only be ascertained by mathematical calculation. On the one hand, mathematical law expresses the fact, the essential and necessary values, which are predictable. On the other hand, the event or existence does not depend on mathematical calculations; it is not predictable, it must be observed, and statistics are a summary of the observations. But what happens when the variables of the causal function are no longer independent?

Let us look at this situation in terms of the Lady Variables. One of them wants for a while to give up her waywardness; she wishes to help Lord Effect and keep him in being, so she grasps the most erratic of her companions by the skirt to hold her within the magic circle. The wizards in their pointed hats, who in fairyland read the future by the comings and goings of the Lady Variables, rub their hands; for when this “interaction” happens it is easy to see that Lord Effect will continue to live. Occasionally it falls out even better for the wizards; this is the situation where Lord Effect himself seizes all the Lady Variables. They can act only on one another; Effect can act on them all. The wizards are in a strong position; foretelling the future is child’s play when Effect has all the Variables in his grip; he retains control of what he needs for his existence. They call this “retroaction”, for Effect has turned to act on that which brought him into being.
To put the matter more succinctly, let us consider those variables which are not independent; when they link up in the functional field, contingency is plainly not their only determinant. We may then consider what influences them to link up or even to tend to approach each other. What is signified by the dotted arrows linking the A and B factors in the following diagram?
THE IMPORTANT CONCEPT OF “CLINAMEN”

Before discussing the nature of the linkage, it may be as well to baptize it; and we propose to name it “clinamen”. The origin of this term takes us back to the ideas of Democritus. Democritus explained the world as due to a whirlpool of atoms conglomerating so as to give rise to matter. Aristotle, opposing atomism, contended that in vacuo all such atoms would fall with the same velocity. Epicurus met this contention by postulating an additional property of the atoms: “ecclisis”, or, translated into Latin, “clinamen”.

“Clinamen” means inclination or tendency. It is the property or deviation which inclines atoms falling in the void to unite to form matter. Epicurus explains everything by the existence of the mysterious clinamen that is responsible for all material organization. We often find the word used in its more general sense by other philosophers to express a natural inclination. It is a term applicable not to chance linkages of factors, but to their natural tendency to seek one another and give rise to organized effect.

A clinamen can only be defined as a function. One can only conceive of two modes of linking A and B: either by direct union, so that A becomes a function of B; or indirectly, when A becomes a function of E and thence, indirectly, of B. This seems like familiar ground in view of our previous explorations. A function directly uniting the values of two factors is
interaction. A function of a function uniting the values of two factors through the effect is retroaction. Thus, the organizing devices that we have met with in studying machines are found again in the processes of abstract thought by which we came to understand how and why certain effects are so constituted as to escape from contingency and to give rise to natural order.

Let us consider an effect, E, which depends on two factors, A and B. For the effect between the values e and e' to occur, the variation interval of B must lie between the values b and b'.
If B is linked to A, then B will influence E in two ways:
(1) Directly, as a causal agent.
(2) Indirectly through A, as a function of a function that combines interaction and causal function.
These two influences may be exercised in two very different ways; when B varies, the direct function and the indirect function of the function may cause E to vary either in the same or in the opposite sense. In other words, the one may represent an incremental and the other a decremental function, or both functions may tend to act in the same direction.

Let us first consider the case when they act in contrary fashion, which is plainly negative interaction. The two influences of B on E, both the direct and the indirect, will tend to annul each other. When B varies, E will vary less than it would had there been no interaction. Thus B will be able to go outside its range of variation bb' without the effect E having undergone any gross disturbance. In other words, the range of variation of B has been increased by negative interaction. The condition that has to be fulfilled by B in order that the effect E should remain constant is easy to attain; the causal field in fact is enlarged in the dimension corresponding to the variable B. The probability that E remains within the limits ee' is increased, then, as far as B is concerned.
If, on the other hand, the two functions are both either increasing or decreasing, the interaction is positive. Every variation of B will act in a twofold manner on E. The deviation of the effect will be increased and, if these deviations are to fall within the desired limits, it will be necessary for B to vary far less widely than it might have done; its range of variation will be reduced and the causal field constricted in this dimension, whilst the probability of the effect ee' being attained will be diminished.

Hence this law of interaction is of more general import than that which we formulated for machines: when two factors of an effect are linked by some function, the causal field of the effect is modified in the dimension representing the interacting factor; it is either increased or diminished, and the probability of the effect increases or diminishes according to whether the function of interaction acts on the effect in the same or the opposite sense as the interacting factor.

In the same way, it is demonstrable that retroaction increases the range of variation of all the factors. B acts on E in two ways: firstly, directly by the causal function; secondly, indirectly through E or A by a function of a function combining retroaction and causal function. If the two influences balance, B or for that matter any other factor such as C, D, G or even A can go outside their field of variation without E falling outside the given limits; their range of variation increases and hence the field increases pari passu in all dimensions, together with the probability of the occurrence of the specific effect conditioned by the sum total of the variables. This is an example of retroaction tending towards stable equilibrium.
In the opposite type, positive retroaction, all the variation intervals are reduced and the field constricted so that it may even sometimes become pin-point. In such conditions the
effect cannot be stabilized and a condition of unstable equilibrium results.

It would be possible to formulate a law of retroaction similar to that of interaction, but it is perhaps better to amalgamate the two formulae. Everything that we have learnt about automatic machines is explicable in terms of these general principles. Negative organization owes its properties to the fact that it involves an expansion of the field and hence of the probability of the effect. Positive organization, on the other hand, makes the occurrence of the effect more improbable by diminishing its field. The effect is organized by interaction in relation to a single factor, and by retroaction its organization is effected in regard to all the factors. If it be true that negative clinamens attain a relative equilibrium and that all the effects of nature are only stable within precise limits, it is because the field can never be expanded to infinity in every dimension. The qualitative differences that an effect may exhibit in relation to one factor and not to another depend on the manner in which the causal field is increased or diminished in the dimension corresponding to the factor in question.
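The widening and narrowing of the causal field can be shown with invented numbers. In the sketch below the effect is simply E = A + B, required to stay between 9 and 11; the clinamen is a linear link between A and B whose coefficient is an arbitrary assumption, negative in one trial and positive in the other.

    # How a clinamen alters the range over which B may wander without upsetting E.
    def admissible_range(link, b_values):
        """Values of B for which E = A + B stays within [9, 11], given A = link(B)."""
        return [b for b in b_values if 9.0 <= link(b) + b <= 11.0]

    b_values = [round(3 + 0.1 * k, 1) for k in range(41)]          # B explored from 3 to 7

    no_clinamen = admissible_range(lambda b: 5.0, b_values)                    # A independent of B
    negative    = admissible_range(lambda b: 5.0 - 0.8 * (b - 5.0), b_values)  # A opposes B
    positive    = admissible_range(lambda b: 5.0 + 0.8 * (b - 5.0), b_values)  # A follows B

    for name, r in (("no clinamen", no_clinamen), ("negative", negative), ("positive", positive)):
        print(f"{name:12s}: B may vary from {min(r)} to {max(r)}")
    # no clinamen: 4.0 to 6.0; negative: the whole stretch explored, 3.0 to 7.0;
    # positive: only 4.5 to 5.5. The field is widened in one case and narrowed in the other.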
THE LAW OF TWOFOLD FUNCTION

The power that interaction exercises depends on the influence that two functions exert on an effect, acting either in the same or in opposite directions. In the case of retroaction, there are two different functional linkages between each factor and the effect; for in the case of each independent factor, it is the double linkage to the effect that augments or diminishes its range of variation. The automatic cinematograph film printer (p. 149) affords an illustration of this law. The pre-factor represented by the mains supply is linked to the effect by two different paths and hence seemed to us to react in an “intelligent” way, whilst the pre-factor of the lamp resistance, which has only one channel of action, appeared to be “stupid”. The organization of an effect in respect to a pre-factor, however distant, is likewise determined by interaction; that is, through a double linkage. Hence we may deduce the following law: When an effect is linked to one of its factors or pre-factors by a function that is
other than a causal function, its field is modified in the dimension that corresponds to this factor. Its probability in relation to this factor will be increased if the second function acts on the effect in opposition to the causal function; whereas if it acts in the same sense, the probability will be diminished.

The basic principle of all the organizing functions in nature is expressed by this law. It is now possible to assign a more precise meaning to “clinamen”: a clinamen is a function that links an effect to any one of its factors apart from its causal function. A clinamen is negative when it increases the causal field and so increases the probability of the effect; when it exerts the contrary action it is positive. Clinamens are of two types: the clinamen of interaction, which links two elements of the causal chain, and the clinamen of retroaction, which links an effect to an element of its causal chains.

If we apply this law of double function to servo-mechanisms, it explains what is meant by the “control”, whose variations are mirrored in the effect, whilst it remains unaffected by other factors. In the following diagram it is obvious that the “control”, C, acts only once on E through B, whilst A, on the other hand, as well as any other factor, has a twofold action on E.
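The behaviour described here, and the sensitivity to wear in the feedback path discussed just below, can be imitated by a toy regulator. It is a sketch under invented assumptions, not a model of the governor or of the film printer: the gains, the control value and the disturbances are all arbitrary.

    # A toy regulator: the effect E is corrected by comparing a feedback reading
    # with the control C; a factor disturbance is absorbed, a disturbance lodged
    # in the feedback element is not.
    def regulated_effect(factor_disturbance=0.0, feedback_wear=0.0, control=10.0, steps=500):
        effect = 0.0
        drive = 0.0                                   # the correcting factor adjusted by retroaction
        for _ in range(steps):
            measured = effect + feedback_wear         # what the feedback element reports
            drive += 0.2 * (control - measured)       # retroaction: correct the deviation
            effect = drive + factor_disturbance       # the effect produced this instant
        return effect

    print(f"undisturbed:               {regulated_effect():.2f}")
    print(f"ordinary factor disturbed: {regulated_effect(factor_disturbance=3.0):.2f}")
    print(f"feedback element worn:     {regulated_effect(feedback_wear=1.0):.2f}")
    # the first two settle at the control value, 10.00; the worn feedback element
    # shifts the regulation bodily to 9.00, as with the worn tachometer bearings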
In general terms it is apparent that every effect possessing retroaction must be sensitive to the variations that contingency may impose on the retroactive linkage; the Watt governor illustrates this. If the steam-engine should rust, the resulting wear would be compensated by the feedback which would increase
the steam supply accordingly. If, however, this variation due to the wear occurs in the retroactive linkage so that, for example, the bearings of the tachometer are affected, the regulation of the engine will be adjusted to an entirely different value. Whence we can say that a retroactive effect is organized in relation to its specific factors but, at the same time, it is influenced by chance disturbance of any of the effectors of the retroactive linkage.

This sensitivity of the effect to contingent happenings in the retroactive linkage is made use of in servo-mechanisms, in which the accessibility of an element of the feedback to variations in the effect is designedly preserved. This arrangement causes no inconvenience in the construction of automatic regulators; they are designed to compensate for variations in the factors of the effect, and the retroactive linkage will be practically accident-proof. It is, however, not sufficient to have freed an effect from variations due to chance disturbances of its factors, if the retroactive elements themselves are liable to similar happenings.

Up to now, by placing an effector at the heart of the retroactive complex, we have been guided by anthropocentric ideas, or rather misguided because of the controlling role played by the human effector. In strict logic, a retroactive system should be viewed as a closed chain of effects of which no single member plays a purely individual part. In the following diagram, 1, 2, 3, 4 are the effectors; E1, E2, E3, E4 are their effects; M1, M2, M3, M4 their messages; C1, C2, C3, C4 the external contingent factors which act on each of them. It is from this viewpoint that retroaction in nature must be studied if it is to be understood. Thus tachometric regulation involves a chain of effectors where 1 is the turbine to be regulated; 2, the flyweights of the regulator; 3, the rods of the tachometer; 4, the sluice-gate for admitting the water. But a detailed study of the cycle of messages would be beyond the scope of this book.

Sticking to essentials, it may be well to define what is meant by “causal contact”. This term covers those linkages of the effector system which are unaffected by contingency and are governed by the laws of functional activity. This essential
condition can be illustrated in the following way: in a circuit of effects each effect is only organized in relation to the factors with which it is in causal connexion. Thus E1 is organized in relation to C1, but not to C2, C3 and C4; and in the same way E2 is organized in relation only to C2. In the case of each effector, the external factors of all the other effectors exercise a
contingent function; this is the arrangement utilized in servo-mechanism design. It would be difficult to set out here the cancellation of the “differential message” at each point in the circuit; it might seem to be a problem of purely academic importance, were it not that this plan is implicit in every living organism that is influenced through retroactive activity by its environment. It is equally the plan of differential analysers, which Louis Couffignal calls “machines with reactionary control”, the type of machine which was created by Vannevar Bush in 1925, developing the principles that had already been utilized by Torres y Quevedo. In such machines the several variables of a differential equation are mutually independent and maintain one of the organs in a state of equilibrium. The study of the functions sketched here and of their equilibration by the annulment of differential messages should go a long way to account for animal behaviour in terms of logic. By this presentation of a differential analyser, and by this alone, is it possible to interpret many physiological functions:
the functions that were recognized by Claude Bernard in these words: “It is the subordination of the parts to the whole which knits the complex being into a living whole, into an individual”.

EDDINGTON’S VIEWS ON ANTI-CHANCE
If the variables of a causal function are independent, their conjunction in the field of activity obeys no law. The event is at the mercy of contingency; it can only be studied by statistical methods, which confine themselves to mathematical analysis. According to statistics, the probability of an event is inversely proportional to the size of the frame within which its variables move. If the variables are interrelated by a clinamen, the conjunction of the variables in the causal field tends to lead to greater or smaller probability of the effect. Thus a mathematical law would appear to contribute to the event in the guise of a function. As far as the contingency of variables and the law of functions are concerned, the former belonging to the world of probability and the latter to that of mathematics, the functions of the clinamen bridge the gulf between uncertain event and certain fact. They influence an effect by functions which either increase or diminish the various contingencies. Even when chance seems to be banished, it still remains all-powerful, for if by chance the variables are varied outside certain defined limits, the clinamens are unable to influence the result.

The functions of the clinamen would appear to represent anti-chance as a universal characteristic. Eddington has said that the character of the universe is not one of chance arrangement. The same thought has been expressed in many terms: the organizing principle of the world has been called “nous”, the intelligence that governs the universe of Anaxagoras, the “pneuma” of the Stoics, the “clinamen” of the Epicureans, and many other terms beside. From all time, thinkers have sought such a principle and have only too often succumbed to the temptation to hypostatize it.

Our study of the logic of events exemplified by machines has led us to believe that clinamens furnish an adequate explanation of determinism or, in other words, of order in the universe. Without the retroaction of the driver, who knows the
position of his goal and sees the obstacles, the car might go anywhere. Without retroaction, the stars would not have come into being; the equilibrium of living forms would cease to exist. It would be impossible for man to order his conduct by referring it to an ideal standard of behaviour; it would become a mere complex of chance activities. There could be no goal-directed thought, for nothing can seek a goal if past activities do not order future action. In short, without retroaction there would be complete and utter chaos.

It may be well to reiterate as succinctly as possible what we mean by retroaction. It is certainly not a return of the effect back to its cause, of output current to the input. It is rather a retroaction of the effect on the causal agent of a function. This return message is not conditioned by the global effect, but represents one of its characteristic attributes, such as might be exemplified by the voltage of a current, the speed of a machine, the thickness of a sheet of metal. Retroaction is then a matter of choosing the appropriate quality of the effect. The choice of a specific quality might seem to suggest a definite power of selection on the part of the feedback, a power depending not on any particular organ, but on the characteristic of the effect which gives rise to the cancellation of the differential message, or at least the tendency to cancellation. In fact, by virtue of the deviations from a reference value, retroaction corrects the factor of operation so that the effect may be corrected. It is not claimed that retroaction is intelligent, although it does seem to select abstract qualities, to make comparison with a reference value that is sensitive to contingency, and to correct deviations of the effect; but it may be claimed that intelligence makes use of no other process than retroaction.
WHEN THE CAUSE IS PART OF THE EFFECT

Stability is not to be regarded as the only form of order. Instability may also be a form of order when it is due not to chance but to an internal organization. The universe is certainly not without purpose; it is true that it may exhibit certain effects that seem to be without purpose, but far more of the effects give the impression that they are part of some deliberate purpose. This organization of nature, showing a tendency in certain effects towards a maximum or
towards a state of equilibrium, is a fact so evident that various schools of philosophy have postulated a final goal as responsible for the order of the universe, whilst others have maintained that nothing occurs that has not been determined from the beginning of time. The problem is how to account for a universal causality that tends undeniably to a definite final goal; this dilemma has a disturbing effect in every attempt at a philosophical systematization. Those who believe that the final goal influences the course of events seek for its solution in something beyond the effects, and those upholding causal explanation seek it in the antecedents of the effects. Both sides may be right to a certain extent and, equally, both may be wrong; it is but a form of the eternal question as to which came first, the hen or the egg. The solution is to be found neither before nor after the effect; it is in the effect itself, in the structure that is conferred on it by clinamens.

For about a century the alpine glaciers have been receding, but most markedly within the past ten years. Geologists, glaciologists and meteorologists have given much consideration to this problem but failed to answer it satisfactorily. The failure is attributable to the habits of thought engendered by the traditional technique of causal explanation; they searched for causes lying outside the effect. They sifted past meteorological data, they determined the records of sunshine and of temperature. They found no trace of any general increase of temperature that could account for the recession of glaciers. Such a search for an external cause was unsuccessful; the real reason that the glaciers shrink is because they just do shrink.

In a high valley a glacier descends to some fifteen hundred metres. If the glacier could be taken away, the valley would be full of spring flowers like those of the neighbouring valleys. Let us suppose that a hundred tons of ice were placed in this valley. At this altitude it would certainly melt; and yet here is a glacier, just as there are other glaciers in other valleys at the same height and equally exposed. The glacier itself constitutes a phenomenon in which positive retroaction plays an essential part; the principal factor in the existence of a glacier is the cold that is conserved by its own mass. The glacier causes cold elsewhere, but cold causes the glacier to exist; and not only is there retroaction, but a highly efficient retroaction; indeed it
is the real determining factor of the effect “glacier”. Take away the cold of the glacier and there would be no glacier.1 It is, then, evident that the glacier shrinks because it just shrinks. With each year’s shrinkage the influence of the mass of ice will be less, and each year the diminution will be more than in the preceding year. The increasing magnitude of the shrinkage depends on no other factor than itself and cannot be explained by any increase of the mean temperature in recent years. The initial cause of the phenomenon must have been the occurrence of some years of supranormal temperature or increased sunshine perhaps more than a century ago; unfortunately we have no meteorological data reaching back so far. The glacier will continue to shrink till contingency, that is to say occurrences in the external environment, such as some very cold summer or at any rate an abnormal lack of sunshine, favours an increase in the glacier and thus gives rise to an increasing retroaction, when each year the factor making for the preservation of ice, that is the mass of ice, will become more potent. The further the glacier retreats into the high valleys, the greater will be its chance of meeting with such favourable conditions. On the other hand, the more it advances into the lower valley (as it did in the Chamonix Valley in 1830), the more probable it is that it will meet with unfavourable conditions for further progress. Not only does the glacier shrink because it shrinks, but it will advance because it shrinks. The glaciers will advance and retire alternately, and climatic contingency is only the triggering force of the phenomenon. The operation of the trigger becomes more and more probable as the advance or regression becomes more marked. This is precisely what has happened since the time that changes in the Alpine glaciers have been verifiable; it is known that several such pendulum movements have taken place.

1 This is Commander Rouch’s explanation of the Greenland ice-cap. It probably consists of ice that is a relic of the Ice Age and would not re-form if it melted (Geographia, May 1952).

Mountains afford yet another such illustration. Consider a snowy slope which is not sufficiently rough and frozen to retain a stone placed on it. If a stone falls from the rocks above, it will roll down the slope. One might consider the
cause of its movement to be the melting, under the influence of sunshine, of the ice that froze it in place. This is not so; it was the melting of the ice that caused the stone to become loose, but the stone’s movement, which we see, has no other cause than its own movement. If one were to stop the stone in the middle of the slope and then leave it at rest on the snow, it would roll no more; and yet a moment before it was rolling down. Its movement was caused by the active force of its fall, that is, by an effect of an effect. The force engendered by its movement was the potent force, since without it the stone would not continue to fall. However strange this may seem, we are logically obliged to consider that this effect is of the same nature as that of the glacial shrinkage. Instead of this we find ourselves prone to say that the rolling of the stone was due to the thawing of the underlying ice, instead of viewing this as no more than the triggering factor.

In the case of the glacier the time involved is so extended that it would be difficult to trace the process back to a period when the phenomenon was not yet governed by retroaction; when it was due, that is, not to its internal organization, but to external causes. If we wanted to discover the origin of such a phenomenon as the sun, also due to positive retroaction, a yet greater demand would be made on the imagination; one cannot hope to trace the minimal effect due to a contingency which, by uniting a number of molecules, engendered a mass capable of attracting other molecules. In this instance the organized effect has become immeasurably greater than the initial effect.

The same story is invariably enacted: external causes give rise to specific effects, intrinsic causes give rise to organized effects. Man, as an artificer, has for a long time only known causes of the first category; nature is only accounted to be nature, and not art, in the measure of its internal causal system. Certainly nothing exists without cause, but when causation is internal, the whole process differs fundamentally from that due to artifice. The causal principle in no way postulates that the causal agents must be external, but we almost invariably believe this to be the case. Experts on glaciology have always sought for the cause of the glacier’s retreat in the external environment.
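The glacier just described can be caricatured with a few invented numbers: the mass of ice preserves cold in proportion to itself, a fixed yearly loss stands for ordinary summer warmth, and a single exceptionally warm year plays the part of the trigger. None of the figures pretend to glaciological accuracy.

    # Positive retroaction in a toy glacier: once below its break-even mass it
    # loses a little more each year, although the climate has returned to normal.
    def glacier(mass, years=40, warm_year=5):
        history = [mass]
        for year in range(1, years + 1):
            cold_kept = 0.05 * mass               # the ice conserves cold in proportion to its mass
            melt = 3.0                            # ordinary loss to summer warmth
            if year == warm_year:
                melt += 15.0                      # one exceptionally warm year: the trigger
            mass = max(0.0, mass + cold_kept - melt)
            history.append(mass)
        return history

    h = glacier(70.0)
    for year in (1, 4, 5, 10, 20, 40):
        print(f"year {year:2d}: mass {h[year]:6.1f}  (change {h[year] - h[year - 1]:+.2f})")
    # before the warm year the glacier gains a little more each year; after it,
    # the annual loss itself grows from year to year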
To postulate internal causality in no way impugns the principle of causality; but it is very different from external causality and this must be borne in mind when the essential nature of phenomena is investigated. In other words, the principle of causality in its classic form is incapable of explain ing the world. It should be set forth in the following two propositions: (1) Nothing exists without a cause. (2) The causes can be external, when the effect is deter mined; or internal, when it is organized. The classical external causality is only one aspect of causality; an effect whose factors are all external represents a special case. Traditional causality is indeed only a special instance of a far more complex system. Thus, Riemann or Lobatchevsky, when they devised non-Euclidian geometry, in no way contra dicted Euclidian geometry, but found it to be applicable only to a special case. It would probably be advisable to refrain from using such an ill-defined term as “ cause” for the purpose of ordinary speech and to banish it entirely from the logic of phenomena. To revert to the definition of effect that we gave at the same time as Riemann’s definition of function (page 157, Chap. V II): a fact E is the effect of n facts which vary in a field of n dimensions, if each value of these n facts corresponds to a single value e of the fact E, determined by some law. Yet again we must insist that a global fact is not character ized by a single causal function, but that it has as many of them as it has essential qualities. It is, then, necessary to con sider separately each different quality of an effect. (Thus, in a machine a different logical rule must account for each charac teristic of the finished product.) The infractions of this rule committed by the highest intel lects are past counting; some quality of a phenomenon is over and over again mistaken for its cause. A chain of argument is often weakened by such a confusion of thought, a confusion that is sometimes very difficult to detect. This notion of internal causality is implicit in physiology. Claude Bernard recognized this when he wrote: “ A living organism is organ ized for itself. It has its own intrinsic system of laws.”
CONTINGENCY, DETERMINISM, ORGANIZATION
When an effect is contingent, it is at the mercy of the variations of its factors; it is what it is. It is true that its qualities will always be in accordance with the law of causal function, but only because this obliges it to mirror its factors. According to the classical notion, contingency is that which may or may not be. This notion must be further defined in relation to a system: that which is not a function of any of the elements of a system is contingent in relation to the system. An organized system may exercise a contingent action on another system if this action is entirely independent of the second system. Stated in such terms, as a relative proposition, the notion is universally acceptable. Three essential definitions may now be stated:
An effect is contingent when it depends only on factors which are free to vary in their own framework.
An effect is determined when it depends entirely on factors whose framework of variation has been reduced.
An effect is organized when it depends not only on its factors, but also on its own specific functions, that is to say, functions of the clinamen. If these functions tend to free the effect from the variations of its factors the organization is negative: it is, on the other hand, positive if the variations of the factors are magnified in the effect.
In the diagram, contingency is represented by every arrow impinging upon the exterior of the system, whilst organization is represented by every arrow joining any two elements of the system; and determinism by every arrow having its origin in a little cross.
(Diagram: the three kinds of dependence, with arrows labelled "dependent on one of the factors" and "dependent on all of the factors".)
The two notions of contingency and determinism are already familiar. It is to be noted that an effect can only be organized in so far as it is not determined; feedback is inconceivable in an astronomical chronometer in which no factor is variable. A clinamen can only operate on a factor whose value is not fixed, through which it can exercise its influence. Hence one can say; an effect can only be organized in the degree to which it is not determined; its organization can only affect its contingent possibilities. Hence it is easy to express the measure of organiza tion: The degree of organization of an effect is the difference between its probability and what would have been its prob ability had it not possessed a clinamen. Thus, without feedback there might be a probability p that a rolling-mill rolls out a metal sheet one millimeter thick; with retroactive regulation, the probability becomes P approaching unity. The measure of organization is P — p. Let us consider the male and female germ cells of a fish and the probability of their meeting, not necessarily in the vast framework of the ocean, but in that of a pail of water. The probability would be infinitely small without the organization responsible for the deposition of the ova and the emission of the male
sperm. There are few effects so dependent on organization as the union of male and female sex cells, requiring, as it does, the reciprocal interaction of either sex, each seeking out the other
and exciting its own activity by the mere presence of the other. Once the mutual approach has begun and the effect is produced, a double positive retroaction is engendered in which the effect (the approach) so acts on the factors (excitation of the participants) as to increase their mutual excitation. Diagrammatically, one can show this as a situation with maximum contribution from clinamens. The contribution of the deterministic element to the effect is at a minimum; without the influence of the clinamen the measure of contingency would have been very great; and hence the measure of organization must be very considerable. This diagram is equally valid if we substitute the magic word "love" for the effect. Man and woman meet by chance (contingency); a twofold interaction draws them ever more together, and love is born, even when it is exemplified by the most tenuous sentiment. From now on begins the double retroaction of love and the sentiments that gave birth to it. The lovers become more intense by the mere acts of loving and are caught up in a cycle of positive retroaction. Yet another example may illustrate the growth of the mere probability of an effect from a dependence on contingency to determinism and thence to organization. That the flow of water in a stream lies between certain close limits is improbable; it may run at this value for only 3 hours in a year, so that the probability of such a rate of flow is 1/3,000; then the measure of contingency is 2,999/3,000.
(Diagram: a bar divided into the measure of determinism and the measure of contingency.)
Suppose a man wishes to increase the measure of determinism, that is the probability of the event. He might build dams, supplying them with sluice-gates. In consequence the probability of the flow attaining the desired amount would be much increased. Let us suppose, for example, it were 2,000/3,000 = 2/3. The measure of determinism will now be 2/3. The actual amount attained, however, will be subject to the mistakes that the man might make in his efforts to determine the effect in the
light of such data as he may have collected. This degree of uncertainty is expressed by the measure of contingency, which has now receded to 1/3.
(Diagram: a bar divided, in the proportions 2/3 and 1/3, into the measure of determinism and the measure of contingency.)
But in the years 1948 to 1950 the largest laboratory in the world for the study of hydraulics, Neyrpic at Grenoble, revolu tionized a subject that had progressed very little for thousands of years. Up to then, in order to regulate the distribution of water in an area, water was directed from each point in a quantity that was judged to be sufficient; since it was often possible that this quantity was not entirely used up, it became necessary to build reservoirs to hold the difference between flow and consumption. It was also necessary for the banks to be built higher than the mean water level. The engineers of Neyrpic did not concern themselves with the fact that their solution of the problem was a cybernetic one. They substituted a regulation by the effect for regulation by the factors; for a determined system arranged by man, they substituted one dependent on hydraulic organization. Various sluice systems, simple in action but admirably conceived, were placed in position. The essential mechanism was a sluice maintaining a constant level; water only entered the channels in sufficient quantity to meet demands on it. Thus intermediary reservoirs became unnecess ary and the banks of the canals only required building up to a minimum height; only as much as is needed is put into the supply. Sometimes, however, when there is a relative excess or lack of water up-stream, it becomes highly expedient that the situation up-stream should govern that down-stream; at this point other sluices come into action, sluices that are so efficiently planned that they are termed “ thinking sluices” (“ vannes pensantes” ). Used in conjunction with other apparatus these sluices have already achieved wonderful success in Algeria, giving an entirely new concept of irrigation. We are once more in the domain of retroaction; the level and the supply are regulated by the down-stream situation.
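The principle of regulation by the effect can be caricatured in a short simulation. This is a modern sketch, not a description of the Neyrpic apparatus; the figures, the demand pattern and the proportional rule for the gate are all invented, and only the idea of letting the downstream level itself govern the supply is taken from the text.

    # Illustrative sketch: a sluice governed by the downstream level (invented numbers).
    target_level = 1.0
    level = 1.0
    gate = 0.5
    for hour, demand in enumerate([0.2, 0.5, 0.1, 0.8, 0.3]):
        inflow = gate                          # what the sluice lets through
        level += inflow - demand               # the downstream level is the effect
        gate += 0.8 * (target_level - level)   # negative retroaction on the gate opening
        gate = max(gate, 0.0)
        print(hour, round(level, 2), round(gate, 2))
    # When the level rises the gate closes, when it falls the gate opens:
    # the downstream situation, not an upstream schedule, governs the supply.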
The probability of a given flow becomes almost a certainty. If, up-stream, there should be hardly any water, of course the flow down-stream will diminish and if, on the other hand, there is a flood, the flow will increase. As always, retroaction only works within the limits of variation of the factors. The flow probability, for example, may have become 2,995/3,000; without the clinamen, the probability was 2,000/3,000. The measure of organization is 995/3,000. The effect thus has the following characteristics: measure of determinism 2,000/3,000; measure of organization 995/3,000; measure of contingency 5/3,000.
(Diagram: a bar divided into the measure of determinism, the measure of organization and the measure of contingency.)
Thus an effect may exhibit all three characteristics. Many philosophical problems are insoluble so long as we fail to recog nize that inherent in many of the effects are three processes; that there exist three measures whose total is unity.
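The arithmetic of the three measures can be verified directly with the figures of the example above (a modern sketch for checking only; p and P are the probabilities without and with the clinamen, as defined earlier):

    from fractions import Fraction

    P = Fraction(2995, 3000)      # probability of the desired flow with the clinamen
    p = Fraction(2000, 3000)      # probability secured by the dams alone (determinism)
    determinism  = p
    organization = P - p          # the measure of organization, P - p
    contingency  = 1 - P

    # 2/3, 199/600 and 1/600 are the book's 2,000/3,000, 995/3,000 and 5/3,000 in lowest terms.
    print(determinism, organization, contingency)
    print(determinism + organization + contingency == 1)   # the three measures total unity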
THE ESTABLISHMENT OF ORDER
One might ask if there is a limit to the degree of organization. Does organization tend towards a degree of limitation which is, for it, what chance and certainty are for contingent and determined effects? It is debatable whether a degree of organization equal to unity is conceivable. For organization to equal unity, it would be necessary that the measure of contingency and determinism should be zero; in other words, that the effect should be due entirely to the internal organization and in no sense to external causation. This is of course unthinkable, since the clinamen is a function of the effect and could not be manifested without some effect originally initiated by external factors. To say that a rolling-mill dependent only on internal organization could provide us with satisfactory sheet metal is absurd; to regularize the thickness of the sheet, the sheet metal itself must be provided. The same is true of the circuit of a self-maintaining oscillator, in which the effect must be triggered by minimal contingent currents. Internal causality operates on matter which has been
furnished by external causation; contingency and determinism are the basis of organization. An effect tends to become fully organized by the suppression of that part of contingency that is unable to overcome determin ism; at its maximum, the degree of organization is equal to the degree of contingency responsible for a similar unorganized effect. Organization, having eliminated all influence of con tingency, guarantees the effect against all form of interference; the coefficient of its feedback would be infinite. This limitation is perfect order. Order is the state of an effect that reaches a probability of unity through organization such that its degree of contingency becomes zero. In practice we only attain some tendency towards such order; perfect order is the final goal to which all the clinamens direct the effect. The tendency to order is anti-chance, it is the build ing up of internal causation in order to escape external in fluences and to follow an internal conditioning alone. This signifies a change from the formless to the formal. Nature be comes actualized. It was Kant who spoke of “ the order and regularity in the phenomena that we call nature” . We have indeed passed from Kantian “ mechanism” to “ organism” . “ Organism” necessitates internal forces with a teleological orientation, in contradistinction to “ mechanism” where activity is determined by external conditions. This antithesis has, to a great extent, influenced nineteenth-century philosophy, especially in Germany. So long as it remained a subject for metaphysical debate, it might be considered as mere intellectual play. Logic, applied through mathematical functions— func tions which have to be studied henceforth— is the fundamental basis of all scientific and philosophical problems. All arguments that are based on a misapprehension of the terms of this anti thesis are ab initio fallacious. External causality embodies con tingency. Determinism is an artificial construct which only partakes of contingency in so far as human directive power participates in the effect; man himself is to be regarded as ex ternal to the system. The clinamens serve to establish order and organization. If the matter be viewed more abstractly, one may distinguish, in the effect, the fact and the event or, if one prefers, essence and existence. In an effect due solely to external causality, only the
essence is determined, being dependent on its own "functional law". In an organized event, it is the event or existence which tends to prevent it from being thought of as a mere probability. This existence depends, to a greater or lesser degree, on a mathematical "law", that which governs the function of the clinamen. In the matter of contingency and determinism, the effect mirrors only the law of causality that governs its specific essence; in organization, it mirrors also the laws governing the clinamens, which, by modifying its field of action, act upon its existence. These abstract considerations lead us to the very foundations of nature. An organized effect appears to be a truly transcendental entity; it has no existence, apart from its organization, that could be considered to be linked to its essence. Its essence might indeed be thought of as in some degree dissociated. Let us think of it in terms of negative retroaction. The effect continues to mirror the variations of contingency, except for what concerns one of the essential characteristics of its activity: that to which the retroactive function is sensitive (such as the speed of the Watt steam-engine, the thickness of sheet metal, the direction of the aeroplane, &c.). This control of the essential activity does not depend on the factors, but on the characteristic imposed by retroaction, which is capable of cancelling itself in the interests of the stability of the essential control. Retroaction, then, exhibits two characteristics: on the one hand it imparts properties to the essence of an effect which are not determined by its factors, and on the other hand it modifies the existence of the effect. A contingent effect is an effect that is produced without any artificial interference with the framework of its variables. A determined effect is an effect whose probability has been increased by a reduction in the framework of its factors, which are themselves controlled by common pre-factors. These two types of effect are due to a single external causality. An organized effect is due simultaneously both to internal causality, which springs from the effect, and to external causality, which determines the nature of the effect. On one side we have contingent and determined effects, on the other side organized effects. Yet organized and determined
effects have a certain common difference from contingent effects; both of them, by different means, tend to increase the probability of the particular effect.
ESSENTIAL LAWS AND EX ISTE N T IA L LAWS An organized effect mirrors two separate functions, the causal function that tends to make it dependent on its factors and the organizing function or clinamen that tends to modify its character. The first of these functions links it to contingency, the second gives it its specific orientation. The first tends to bind it to matter; the second to emancipate it. The effect itself is the resultant of these two functions. But even if we were to reduce the causation of any essential property of an effect to the operation of only two factors A and B, it would be impossible to represent the two functions by curves drawn in the same plane. The causal function might, it is true, be so represented, but the clinamen could not be drawn in the same plane, since it depends neither on A nor on B, but expresses the variations in time of the “ deviations” of this qualification from the effect. One may, however, in order to get a clear picture, imagine two independent curves, one of which represents the causal, and the other the organizing function. The curve representing the effect will evidently be between the two. If the causal curve comes to a stop, so will that of the effect and that of organiza tion, which obviously cannot exist without the effect, since it is a function of the effect. On the other hand, if it be the clina men that is interrupted, the curve of the effect will join up with the causal curve; the effect will be disorganized. If these two curves exist at the same time, the effect will be their resultant. When we study an effect, we find ourselves studying only its effective curve. In one sense this is a result of the combination of two fundamental laws: a law of causation and a law of organization; the one governs the essence of the effect and the other determines its existence. The impasse which our thought has reached now becomes evident; it might seem that all the laws of physics need to be reformulated, and that in each one would have to seek laws o f essence and laws of existence. The former relate effects to their factors and show the origin and necessity for this or that
essential quality, and they are purely mathematical; the latter express the way in which the probability of the effect is modified by changes in the causal field due to the interdependence of factors, and these are statistical laws. To give examples from the physical world of pure laws of essence is impossible; the summation of more or less probable events always confuses the human observer in his search for rigorous mathematical laws. On the other hand, mathematics alone, even supposing the variables to be in the causal field, will only give laws of essence;
their task is, in fact, to express the essential qualities. Let us consider a small stone, which, it if be found in the desert, will present the appearance of being faceted, but if fished up from the sea will resemble a pebble. The first event is essen tial, the second existential; and here is the reason. In the desert, the stone is carved by sandstorms. Its shape mirrors its original form modified by the dominant winds; it is an effect due to external causation. The round pebble, on the other hand, has a shape that is a property of its present self, and which finally ceases to be directly conditioned by its original shape. This present form, that seems to be so dominant as to outweigh any modification by external forces, represents an existential func tion, a retroaction tending to a final goal. The original stone may be rolled by the sea in any axial direction, but it is less likely to be rolled on its broadest axis, since in this dimension it offers a maximum resistance and will only be turned over by the biggest waves. It is most likely, then, that it will be rolled over and worn away in its thinnest dimension and the more this action is repeated, the more will the factor favouring it be re inforced. This is positive retroaction. An ordinary stone, found on land, that is taken and thrown into the sea will be rolled in any direction, but it will always tend to roll more easily on its long axis and it becomes gradually worn away more and more in its thinnest dimension and thus the disproportion between the two dimensions becomes increasingly accentuated. Hence the oval shape will become more and more marked and by the wearing away of the curvature that offered the resistence to its rolling on its long axis, a positive retroaction will be set in motion which will finally give rise to a flattened oval stone. The law that would appear to formulate the shape towards
which all pebbles tend is an existential law. It expresses a summation of events, no one of which is specifically necessitated. It shows the manner in which contingency is organized by a clinamen. It gives rise to an organized or natural form, whilst the faceted stone carved by external factors shows little resem blance to the structures encountered in nature. Let us now return to consider the two types of functions, causal on the one hand, and organized on the other. The first type tends to mirror only continually changing factors and to give a rounded form to the stone without any definite shaping in any one direction; the second tends to wear away the stone in one direction only and in one plane. It would be impossible to conceive either of these tendencies in isolation, since the causal process in question could not fail to give rise to an organ izing function and every organizing function is a function of the effect due to causal function—-an effect, which in reality is not accomplished by this causality itself, but by the resultant of the two causal systems. The ripple marks on the sand seen at low tide furnish another illustration. Such markings are also to be seen in ridges of dust or snow caused by the effect of the wind. The theoretical ex planation of these phenomena has so far been unsatisfactory. The author has further observed these ripple marks in shallow depths, while swimming under water. By destroying them and then watching their re-formation, the motion of the waves can be seen to create these undulations by the formation of whirl pools which arise in response to the slight sucking action of the eddying water. In this process positive retroaction occurs, subsequendy limited by negative retroaction. Here, then, we are privileged to observe a natural pattern in process of creation and to see the very regular undulation take its origin from a contingent event. The formal is here develop ing from the amorphous. Even so a crystal is a growth depend ing on the contingent deposition of the first molecule. In this case the clinamen not only conditions a certain length of the wave form, it also determines its angular formation and thence forth no other axis of crystallization can be effected. It is not impossible that similar principles determine living structures. However, we must here limit ourselves to a few simple examples of existential and essential actions on the effect and defer to
another book the study of more difficult abstractions which may lead to a number of abstruse questions.
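The pebble of the preceding pages lends itself to a toy simulation. This is a modern sketch, not in the original; the starting dimensions, the wear per wave and the rule tying the chance of each tumble to the stone's proportions are all invented, but the positive retroaction is the one described above.

    import random

    random.seed(1)
    long_axis, thin_axis = 3.0, 2.0        # invented starting dimensions
    for wave in range(20000):
        # The flatter the stone already is, the more easily it rolls on its long
        # axis, and each such roll wears the thin dimension a little further.
        if random.random() < long_axis / (long_axis + thin_axis):
            thin_axis -= 0.0001
        else:
            long_axis -= 0.0001
    # The disproportion between the two dimensions has grown: the effect
    # (the flattened shape) has reinforced the factor that produces it.
    print(round(long_axis, 2), round(thin_axis, 2), round(long_axis / thin_axis, 2))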
AN IMAGE OF THE UNIVERSE
It may be well to survey from a comprehensive point of view these organized phenomena that we have been studying hitherto. In the next few pages it is proposed to attempt such a universal survey. An effect, by reflecting freely variable factors, gives us the picture of a contingent event. To say that an event is contingent does not mean that it is entirely nebulous or ill-defined. If two stars collide and form a single star without any function of their organization impelling them to seek each other, their collision might be said to be a contingent event. The universe comprises everything; it is all and it is nothing; all effects are possible; none is specific. To characterize it in one word, it is the rule of chaos. But pervading it is a drift (it admits of no more precise definition) which tends to obliterate all those differentiations that may have been engendered by the effects; it may be thought of as a levelling or anti-differentiating tendency. Here, let us say, by contingency the position of a corpuscle has been raised to a level other than that of the structural mean. But if the causes of this effect do not continue, the corpuscle, thus differentiated, will fall back to the undifferentiated whole. This is a fundamental law of nature. It is that which levels all differences of potential; it is the second law of thermo-dynamics, which may be formulated thus: Every state of differentiation, once the causes that sustain it are withdrawn, returns to a state of non-differentiation. This is a version of the famous law of entropy. There is more non-differentiation than there is differentiation in the universe, since every differentiated system that may be formed is liable to be levelled out. Hence there is a greater probability of effects showing little differentiation; the greater the degree of differentiation, the less the probability of its occurrence. The most homogeneous state is the most probable and the most heterogeneous the least probable; so the degree of entropy of a system is a measure of its degree of probability.
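The statement that the most homogeneous state is the most probable is easy to verify numerically. The following is a modern sketch, not in the original; the twenty molecules and the box divided into two halves are invented for the purpose of the count.

    from math import comb

    n = 20                      # molecules free to sit in either half of a box
    total = 2 ** n              # equally probable arrangements
    for k in (10, 15, 20):
        ways = comb(n, k)       # arrangements with k molecules in the left half
        print(k, ways, round(ways / total, 7))
    # The even split (k = 10) is overwhelmingly the most probable; the fully
    # differentiated state with every molecule on one side (k = 20) corresponds
    # to a single arrangement in about a million.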
In the game of bridge, the hands occurring most frequently tend to exhibit a homogeneous division of suits. If one day a suit of thirteen spades turns up, this hand, thus differentiated, would appear to be so improbable that it might be thought of as faked by some player wishing to play the part of Maxwell’s demons, who defeat chance. We have only to think of our terrestrial atmosphere to imagine the probability of an undifferentiated condition. The density of the gaseous molecules is greatest in the lowest layers of the atmosphere; the probability that any given oxygen molecule would attain a height of 60,000 feet is obviously far less than that it would remain in contact with the ground. It might seem that contingency is not always free from ten dency. But the two terms contingency and tendency are an tagonistic; if an effect shows any tendency whatsoever, then to that degree it cannot be classed as contingent. Thus the orient ation of all events towards a maximum degree of entropy represents a current in which all events are bathed— a current which interpenetrates contingency and yet remains totally distinct. Such abstractions as this are often helped by imagery. Let us imagine an ocean whose molecules represent objects composing the universe. In the sea, squalls, waves, currents and water-spouts may occur in any place, acting in any direc tion; in other words, contingency is complete. But at the same time there is a vast and powerful current guiding everything to an area without tides, a sea of nothing but calm, where there are no waves and no storms. We may imagine on our right the smoothed-out sea and on the left this is invaded by a great current of entropy flowing from the zone of differentiation to that of non-differentiation, from the heterogeneous to the homogeneous, from the improbable to the probable. The whole affords a picture of the universe viewed in the light of its essen tial quality, that of differentiation. If we set out from a state of equally disturbed probabilities, it will be obvious that differen tiation is not more or less probable than non-differentiation. But the current of entropy— that of functional activity— is constantly guiding events towards a state of non-differentiation; things, or, better, events become, therefore, unequally divided, the less differentiated being the most numerous. If we imagine the effect of subjecting these states successively
to the action of determining and contingent agencies, the resultant may be characterized as follows. Contingent effects. Contingency may equally characterize the effects subject to differentiation or non-differentiation, but since this latter state is more probable, whatever change is effected is likely to be in the direction of differentiation, since non-differen tiation has already approached the possible limit of change. This may well be illustrated, if we consider the case of an atmosphere that is not of uniform density; its molecules will obviously have a better chance of rising than of sinking since most of them will be found in the denser lower atmosphere. Contingency, then, may tend to produce states of differentiation. The wind whips up a wave and blows its foam; this is a matter of contingency but the constantly acting current of entropy will direct the broken wave towards calmer water. One might throw a stick into the sea and it would be equally likely to fall on the crest as in the trough of a wave. But the causal function will only be responsible for directing it some where within certain definite limits; it will never be thrown in to the infinity of space. When it has reached a point at which its propulsive factors are no longer operative, it will be borne further to one side or the other by the current and, to whatever side it may be carried, it will finish its career in the calm waters whither the current of entropy tends. To put it in other words: be the contingent effect what it will, where its factors cease to act, the current enforcing univer sal levelling will take charge. Effects of determinism. The factors have a fixed value and they can carry the effect to a certain fixed level of differentiation. But if the cables of determinism should part, the raft of effect would be at the mercy of contingency, drifting towards maxi mum entropy, instead of being held fast against it. Negatively organized effects. Tossing on the waves of our sea and immersed in the powerful, if invisible, current, ships of another kind of effect remain motionless. These may be likened not to rafts anchored by rigid unyielding ropes, but to launches kept in position by their internal engine power, their negative clinamen. No doubt they rock a little on encountering contingent influences but they keep station against winds and tides, their
internal power of organization renders them immune to such influence. Positively organized effects. The positive clinamens are respon sible for differentiation. They are indeed creators of potential differentiations, responsible for the intrinsic energy of the world. Chance could never have flung milliards and milliards of cor puscles together and thus formed the stars. Our sun, the source of all terrestrial energy, could never have created itself. The influence of these organized effects works backwards against the levelling stream and without it the states of a major degree of differentiation would be infinitely improbable. They repre sent boats navigating against the current, uninfluenced by it. Only when their power is insufficient for its task do they risk destruction by the greatest tempests of contingency. But, against the current, whither will they travel? Their motor is capable of functioning indefinitely, but contingency never per mits a limitless voyage. Three things may happen. Either the retroactive tendencies are overwhelmed by an irresistible chance occurrence (the coal bunkers may be completely ex hausted, leaving the ship adrift with no internal force to oppose the current); or one of the elements of the retroactive function only allows of a single specific type of differentiation, as in the case of feedbacks acting on the steering so that the boat moves at right angles to the current of entropy; or, lastly, a negative retroaction acts on a positive retroaction. Such is the case when positively organized solar reactions also have within themselves the menace of negative retroaction, owing to the exhaustion of nuclear material that has not undergone transformation. In such a case the resultant of the two reactions rises to an apex and then slowly descends once more. Just as the sun is destined to cool slowly, such a vessel may be pictured as sailing up to a certain point in its journey against the current, then coming back down-stream, but controlling its path sufficiently to avoid the culminating entropy.
ENTROPY AND ANATROPY
All these analogies of seas, currents, rafts and launches are, of course, highly artificial anthropomorphic explanations of a world that is not human, but they are not necessarily misleading. Contingency acts in any and every way. Entropy always
guides events in the same direction; the effects of determinism are possible in all directions and are conditioned by its fixed factors: positive organization which may either work with or against entropy and negative organization which is only sensi tive to current acting in one direction. From all these consider ations emerges the very important fact that an organized system can only affect another organized system by chance; a collision between two self-propelled boats would be an illustra tion. The current of entropy leads everything towards a state of non-differentiation, but in spite of general opinion to the con trary, it is not the supreme power and will not engulf all the world of form in an amorphous nothingness, for the world that is our world has many ways of combating it. First of all, by contingency, that may launch its effects anywhere and in its own fashion, but never in the stagnant sea of amorphous nothingness. Next, by determinism that strives to immobilize its effects. Thirdly, by positive organization, the internal force of which drives itself where it wants to go however strong the current or violent the storm. Fourthly, by negative organiza tion which maintains itself in a condition of dynamic stability. But the factors of contingency may cease to function, the chains of determinism may break, the motors of the organization may not have sufficient force to weather the storm. The current, in these circumstances, carries down the wreckage whose origin we are then unable to determine, whether it be from fragile contingency or stout determinism or powerful organization. These fragments of wreckage, when we can succeed in capturing them as they sweep past, can be of use to us in our mechanism by virtue of their tendency to non-differentiation. They en courage us to say that Carnot’s principle governs the world. In this we should be wrong; Carnot’s principle is only one of the world’s governing principles. The levelling of differences of potential sets free energy, but when we inquire as to the power which created these differences of potential and stored up the energy that is used by mankind, we discover that it is the expo nential power of positive retroaction. This winding-up process countering the stream of destruction is just as powerful as entropy itself. It may indeed be claimed that it has conquered entropy since it is responsible for the creation of our highly
differentiated world from the primitive nebulae which were very close to the state of maximum disorder. The word entropy is derived from the Greek entropus meaning return, involution. Clausius used this word to express the idea of a self-return, of the irreversibility of phenomena which is manifested in the principle of Carnot. In opposition, to express the idea of positive retro action, a counter term has to be forged. If entropy fully ex presses the process of degradation, some term expressing a pro cess of expansion is indicated and for this purpose “ ektropy” would seem to be apt. The prefix “ ek” characterizes an external idea and if our ascending scale were indicated, the Greek prefix ana could be substituted as in “ anatropy” . The functions of entropy and anatropy are mutually contra dictory and the degree of entropy of a system is the reverse o f its degree of anatropy and vice versa. Professor Brillouin has introduced the term “ neg-entropy” to express a concept which is the opposite of entropy. The degree of entropy cannot be expressed by a positive sign. Entropy is non-constructive and hence negative rather than positive. Only during the past few years has differentia tion been studied as de-differentiation and we have acquired the habit of considering entropy as indicating the univeral direction whither all things tend. To express the fundamental distinction between entropy and its opposite, it is, however, convenient to attempt some linguistic analysis. Some writers contrast entropy, as expressing at the same time levelling and order, with differentiation and disorder. This seems to be wrong; stagnation is not order but death and differentiation is not disorder. Order is not a question of the presence or absence of differentiation but of continuity of tendency or purpose. Norbert Wiener defines entropy as the measure of the degree of disorder of a system and its opposite as the degree of order. A measure of information can be considered as a measure of order, which is a very important concept for communications engineers, a field in which Wiener has made some important studies. The amount of information that can be transmitted in telephone and telegraph messages depends on a measure of the degree of order. If we were to find ourselves in a realm of non-differentiation, we should be unable to communicate with our neighbours since
any signal necessarily involves differentiation. The possibility of transmitting information is therefore linked with non entropy. It is easy to put this in a simpler fashion: a constant current will not allow the transmission of telegraphic signals; if it is used for this, it must become discontinuous or differentiated. Still less can a system that is only weakly differentiated serve to trans mit a great amount of information. A high degree of differentia tion allows all sorts of codified variations and hence a large amount of information can be carried. In short, the quantity of information is linked with the degree of differentiation of the effects that are responsible for its trans mission and is inversely proportional to the degree of entropy of the system. The struggle of the telephone technicians against the weakening of their signals and their flattening out over long distances is in fact a struggle against entropy. In passing we may refer to a rather surprising connexion of these ideas. This progress towards an ever-increasing facility for the transmission of information, this opposition to the stream of entropy, is not unknown to the philosophers. When Herbert Spencer defines evolution as a passage from the homogeneous to the heterogeneous, his famous “ process of differentiation” can equally well be interpreted as progress towards anatropy. Our intuition in this presentation of the world is dependent on strictly logical premisses. The system thus sketched satisfies the complexity of our perception of the world and at the same time our intellectual desire to resolve the complex into general principles.
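Before leaving the subject, Wiener's link between information and differentiation can be put into figures with Shannon's formula for the entropy of a source. This is a modern sketch, not in the original text; the two example signals are invented, and the formula is Shannon's rather than the author's.

    from collections import Counter
    from math import log2

    def bits_per_symbol(signal):
        # Shannon's measure: the sum of p * log2(1/p) over the symbol frequencies.
        counts = Counter(signal)
        n = len(signal)
        return sum((c / n) * log2(n / c) for c in counts.values())

    constant_current = "AAAAAAAAAAAAAAAA"   # undifferentiated: it can carry nothing
    varied_signal    = "ABCADBACDABDCABD"   # differentiated: it can be made to carry a code
    print(bits_per_symbol(constant_current))   # 0.0
    print(bits_per_symbol(varied_signal))      # close to 2 bits per symbol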
CHAPTER IX
Synthetic Animals
It is easy to see that Grey Walter's tortoises or Ashby's homeostat are not automata in quite the same way as the others. It is difficult to say exactly in what way they differ from them. They certainly do not perform a series of inevitable acts like the classic automata of Vaucanson or Jacquet-Droz. But all this is very easy to say; it is more difficult to know in what way they are emancipated from their creator, or to define this new kind of automatism. When we come to see the very complicated reactions of the homeostat, which tries to reproduce the mechanism of the living being, we find that we are obliged to discard all our old notions of automatism and that we must start all over again right from the beginning. This is just what we have attempted to do in our journey into the world of abstraction. We are now in possession of definite principles and methods of analysis, and we should be in a better position to examine the problems and to try and understand them. Let us first of all look from the point of view of a bystander at how these famous tortoises "live", discarding for the moment all scientific considerations. Later we may proceed to their structural dissection. On the morning that I first saw the tortoises, Elsie was hungry, and Elmer was quietly digesting under a leather armchair. At 10 o'clock the wintry dawn, misty and icy, was still clinging to the soft green hills that surround Bristol. It seemed a vast contrast to my first meeting with Grey Walter one scorching hot day on the Italian Riviera, in his house perched among the roof-tops of the old town of Bordighera overlooking the blue Mediterranean. In this typical English house, one finds the traditional "pets". They are not dogs, cats, parrots or canaries or even
tortoises of flesh and shell. They are made of metal and electric coils and are listed in no biological classification. They are man-made; but their creator named the species Machina speculatrix, according to strict zoological canons. He also gave them pet names abbreviating their imposing technical termin ology. Elmer (Electro-Mechanical Robot), the first-born of the family, saw the light of day in 1948 and his sister Elsie (ElectroLight-Sensitive-Internal-External) is his junior by some months. The shell of the male is of bakelite plates. His com panion is more feminely attired in red perspex. Three beings were crawling about the living-room carpet. The third was a small boy— Timothy, Timo for short— still, like the tortoises, at the crawling stage. He was chasing his red sister Elsie. After all, why should they not be called brother and sister, since both of them are the “ children” of Dr. Grey Walter, the famous electro-encephalographer, and his wife Vivian, his assistant at the Burden Neurological Institute, who together have produced several important scientific communi cations and have co-operated in making the robots. So, Timo on all fours was chasing Elsie on her three wheels. He hit himself against a chair and started to cry. The tortoise, however, passed underneath it, only to knock herself against one of the legs; she reversed a bit, and, without becoming discour aged, regained her former direction and avoided the obstacle. Elsie moved to and fro just like a real animal. A kind of head at the end of a long neck towered over the shell, like a light house on a promontory and, like a lighthouse, it veered round and round continuously. This is the photo-electric cell which explores the surroundings, searching for a source of light, just as the antennae of an insect with rudimentary visual organs seek for a contact by which it may orientate itself. In the front of the “ shell” , on the breast, shines a minute pilot light. I enclosed Elsie in a barricade of furniture, but by banging herself and reversing and knocking herself and backing and turning again, she managed to find her way out. I had the impression that I was watching an insect blindly bumping itself against all the objects that it comes up against until it finds a way to liberty. Elsie seemed anxious. She was obviously looking for some thing. I knew what it was: a light, but on this dull dark day
there was none to be found. So Grey Walter switched on a standard-lamp and, immediately, the head which had never ceased rotating caught sight of it. Now Elsie knew where she wanted to go . . . . No, she still hesitated a moment, continuing her exploration. But a moment later she continued on her way towards the attraction of the light source. Even so, she inter rupted her direct course towards the light by one or two “ hesitation waltz” steps. At last, however, her course became firm and direct. He head no longer turned, but remained fixed in the direction of the light as if fascinated by it. The pilot lamp on her bosom went out. But the heartless research worker now put a box in the path between her and the light, and, inevitably, she ran into it. She got a shock, seemed to hesitate, and no longer continued her way towards the light, although she could see it quite well shining above the level of the box. Now came the surprising turn. Elsie acted as if she remembered her shock. At first, it seemed to have destroyed the attraction of the light for her. But, in fact, it really “ inhibited” it for a few seconds, just long enough for her to walk sideways like a crab away from the obstacle and then, her path no longer barred, she hurried on her way towards the light. At last she was in front of her objective. Suddenly her be haviour changed; she backed like an animal which has gone too close to the fire. She described a wide circle round the light, exploring again, backing and advancing, as if looking for some thing else that this lamp did not quite provide. Finally she lost interest and went off once more on a voyage of exploration, with her head turning round and round and the pilot light shining brightly once more on her breast. Grey Walter flashed an electric torch in his hand and attrac ted Elsie’s attention. He waved it backwards and forwards and Elsie turned hither and thither with her photo-electric cell always directed towards the beam of the torch. But Grey Walter now set another trap for his pet. He put the torch on the floor and hid it by placing a screen in front of it. Elsie, not seeing it any more, started again to explore, turning and moving hesitantly. Chance took her in her search beyond the cover of the screen. Thus, rediscovering the light, she found her objective and made off towards it. 210
Yet another trap, a mirror, was placed in front of Elsie. What would she do ? As if attracted by her own image, she approached the mirror, where the light from her breast was reflected. But she hit herself against the glass. She then waltzed around the mirror in zigzag movements, to and fro, as if admiring her own reflection. The explanation of all this was not difficult; what attracted the tortoise towards the mirror, was indeed the reflection of her pilot lamp. But this is only alight when the motor turning the photo-electric cell is receiving current. As soon as the photo electric cell is fixed by the attraction of a light, the motor stops turning, the lamp goes out and the mirror ceases to have any attraction; Elsie starts searching for another light, and then the pilot lamp lights up once more so that again she sees its reflection and is once more attracted by it. Again her attention becomes fixed and this once again causes the motor to stop and the pilot light goes out. Hence the hesitant dance in front of the mirror. “ Look at that!” said Grey Walter with the pride of a father admiring his progeny. “ Isn’t it the personification of Narcissus ? If an animal were capable of recognizing its reflection in a mirror as its own image and not that of a rival individual, one would say ‘how intelligent’.” But thinking to catch him out, I asked, “ Why? Wouldn’t she behave like that if faced with her brother ?” “ O f course not! When both tortoises are looking for the light and come face to face, they begin a strange sort of dance, each describing wide circles, one moment attracted and the next instant repelled by each other. How is this quadrille explained ? As soon as one of them sees the pilot lamp on the breast of the other, it is attracted, and the same thing may happen to the other, but directly a luminous object is presented to the photo electric cell, the pilot lamp goes out and each ceases to attract the other; hence once more a new search begins, the motors start up afresh and the pilot lamp glows, until they catch sight of each other and the cycle starts all over again. It’s just like a courtship dance where the partners seek each other out and find each other, only to retire again coyly.” A community of Machina speculatrix, explained Grey Walter, would be obliged to lead a gregarious life. Each individual would 211
seek the company of the other without, however, necessarily finding in him the answer to its quest. But Elsie no longer seemed to exhibit the same degree of energy that she had a short while ago. "We mustn't keep her waiting for her meal much longer," said Grey Walter, and he switched on a commutator. On the floor, in a corner of the room, was a sort of hutch in a portable box illuminated by a very strong lamp inside it. Immediately Elsie made off towards it. Her passage was unencumbered this time and she went straight into her stable where she would find food. There was a faint click and Elsie remained motionless, drawn up as close as possible to the powerful lamp, leaning against the contacts at the back of the hutch. These are so arranged that in this position she plugs into the mains in order to recharge her batteries. "She is taking her bottle," said Grey Walter. "When her appetite is satisfied, a reversal of her reaction will take place; she will be repelled by this strong light and will go off looking for a quiet corner in which to digest her meal." I asked what would happen if the tortoise had been unable to get to its manger, thinking the question to be unanswerable. But Grey Walter was unperturbed. "She would die, of course," he replied, "once her accumulators were run down. When animals no longer have the strength to search for food, they die, as might easily happen if they were imprisoned and unable to escape. An impassable wall, or a stair, or even a very thick pile carpet or a fur rug is fatal to my tortoises."
SUBTLE BEHAVIOUR
And now for some explanations of these diverse behaviour patterns. First of all, the stomach of the tortoise is obviously no more than an accumulator. The best solution, of course, would be for the machine to charge its accumulators with electricity manufactured by its photo-electric cell from the same source of illumination that attracts it; thus once it had arrived as near as possible to a powerful light, it would absorb the light and transform it into electrical energy. But this solution, although theoretically possible, would provide only a very feeble source of power. It is better for the tortoise to have direct recourse to 212
the mains current, whilst the lamp only constitutes a signal of attraction— a promise of the electric food— in exactly the same way that the smell of food attracts an animal to the source of its origin. The mechanism is here so constructed as to be sensitive to this signal; whereas in animals this sensitivity is inherent in the nervous mechanism responsible for instinctive be haviour. Grey Walter has calculated, however, that if the shell were studded with numerous photo-electric cells, and the tortoise left exposed to the sunshine of the long summer days on the terrace of his house in Bordighera, it could store sufficient elec tricity from the sunlight to preserve its life for some minutes at least. The tortoise’s behaviour changes entirely according to whether the accumulators are charged above, or discharged below, a certain voltage. When the accumulators are running down, the behaviour is that of a hungry animal hunting for food. Grey Walter’s synthetic animal is hungry and searches for light— the brightest light. But in its search it uses up its reserves of energy; if it does not find its stable and manger, or is prevented from getting there, it dies, its reserves ex hausted. In quite a different manner, when the batteries are charged up to a certain level, the tortoise then looks for rest in a quiet corner. The resting electronic mechanism still requires a certain equilibrium of light. It leaves the brightly lighted hutch, but avoids complete darkness. The exact degree of illumination required depends on the regulative setting that its designer has given it, a setting which can be modified from one day to another. Thus, the day that we were in Bristol, Elsie was afflicted with a very unstable, very feminine mood; her regulating mechanism was hypersensitive, her point of equilibrium was too finely adjusted, so that in practice she could never find it, or at any rate could not main tain it; the least change of position or lighting was enough to destroy her equilibrium. As a result, she very quickly ran down her batteries running hither and thither to find an ideal condi tion. Elmer, on the other hand, had been given a very stable, very bourgeois character; his electronic system found its equilibrium not for a precisely defined light intensity, but for quite a wide
range. Thus, he was perfectly happy quietly ruminating under an arm-chair. But, Grey Walter said: “ His reflexes are really lacking, he is quite lifeless. For days on end he doesn’t stir from under the furniture; I must liven him up a bit and make him more in telligent. Because, you see, if an individual is intelligent he has to pay the price of a certain degree of accompanying irritability. Thus our radio sets when they are too delicately tuned suffer from a certain amount of instability.” Slowly or quickly, according to their nature, the tortoises use up their batteries and the current becomes low. The desire for light becomes intense and takes them hurrying off to their hutch where the batteries will be recharged and the cycle will start up again, just as in animals life is divided between resting and hunting. To effect such an analogy, the directive function of the current was made contingent on its voltage rising above or falling below a certain arbitrary standard. The motor has three wheels, one in front, two at the back. The back wheels are free. The front is both a steering and driving wheel, these movements being provided by two separate motors. The steering motor turns the direction mast which bears the photo-electric cell and the fork of the wheel. Thus, as the cell turns, the direction of the wheel is continually changing. The driving motor works on the front wheel axle. The explanation of the various forms of behaviour of the tortoise is really this: each of the motors can function at two different speeds; the tortoise will vary its behaviour according to whether or not both are set to work at the same speed. Take, for example, when it is searching for light. In the searching phase the driving motor works at half speed, the steer ing motor at full speed. The front wheel continually changes direction, and the tortoise thus combines a tendency to advance in a straight line with the effects of the rotation of the front wheel. This gives rise to the series of complex circling move ments, while the photo-electric cell continues to turn, exploring the surroundings. If the photo-electric cell turns in the direction of a beam of light, the micro-current which this light engenders is amplified; the output current of the amplifier cuts off the current of the steering motor and both the photo-electric cell and the plane of 214
the wheel, which act in unison, cease to turn. As the governor wheel is then turned towards the light, it is towards the light that the tortoise makes at full speed. Later on we shall see how the hesitating gait that is apparent when the light is far away, can be explained. When the light is too strong for the tortoise, there is a new internal charge: the steering motor switches on, but at half speed only, whilst the driving motor continues to work at normal speed, with the result that the tortoise circles in a zigzag around the light. The photo-electric cell, having resumed its rotary exploration, can pick up other lights, which, although relatively far away, will affect it.
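The behaviour patterns just described can be collected into a small rule. The sketch below is a modern restatement for convenience, not Grey Walter's circuit: the light categories are informal labels, and only the speed combinations are taken from the text.

    def motor_settings(light, obstacle_contact=False):
        """Return (driving speed, steering speed) as fractions of full speed."""
        if obstacle_contact:
            return "motors alternate"   # the tipped shell closes a contact: crab-like gait
        if light == "none":
            return (0.5, 1.0)           # searching: half drive, full-speed scanning
        if light == "moderate":
            return (1.0, 0.0)           # attraction: steering cut off, straight for the light
        if light == "dazzling":
            return (1.0, 0.5)           # too bright: full drive, half steering, hence the zigzag
        raise ValueError(light)

    for situation in ("none", "moderate", "dazzling"):
        print(situation, motor_settings(situation))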
(Simplified diagram of the electronic "tortoises".)
Here is a very simplified diagram of the action of the light on the electronic tortoises. The light engenders a current in the photo-electric cell. This current is amplified in two successive amplifiers. Each of these amplifiers can connect with the steering motor or with the driving motor. But the current coming from the first amplifier, being weaker, only makes the motors work at half speed. The tortoise manifests different behaviour
patterns according to the combination of the connexions controlled by the strength of the current of the photo-electric cell, that is to say by the strength of the light, in the same way as did the first light seen a moment ago. In this way we can explain the sudden indifference of the tortoise to a light which attracted it a minute before.

When the tortoise collides with an obstacle the resulting disturbance may be accounted for as follows. The shell is mounted on the chassis at a single point of contact: it is not rigidly fixed at this point. The attachment of the shell consists of a rod dipping into a socket, where it is suspended by a rubber ring. The diameter of the rod is such that normally the rod does not touch the sides of the socket which is, of course, solidly fixed to the chassis; but each time that the shell knocks against something, it is tipped and the rod comes into contact with the wall of the socket, making a contact with one of the circuits inside the mechanism. Hence the mechanism determining the alternating change of the two motors is started up and it is this that has the effect of giving the mechanism the peculiar sort of crab-like gait, due to which it is able to by-pass the obstacle, provided that it is not too extensive.

THE SECRETS OF THE TORTOISES
Let us penetrate further into this complex mechanism (if the reader is content with the former explanations, he may skip the pages up to the next sub-title). The essence of the system lies in two amplifiers AMP1 and AMP2 (see the diagram on the opposite page). Each of them has a corresponding electromagnetic relay, R1 and R2. Each of these relays can be connected in two positions: D1 and L1 for the first, D2 and L2 for the second. The position D corresponds to the steering motor MD; the position L, on the other hand, directs the current towards the driving motor ML. The current coming from the amplifier 1 is obviously less powerful than that coming from amplifier 2. Hence positions D1 and L1 supply the motors with less power, and thus less speed, than the positions D2 and L2 dependent on the second amplifier. The different behaviour of the tortoises will correspond to the different combinations of the connexions D1, D2, L1, L2, so
[Circuit diagram of the tortoise; one label reads: contact activated by knocks against obstacles.]
that the arrangement D1-L2 will give a half speed of steering and a full speed of driving. L1-D2, on the contrary, will give full speed to the steering motor and half speed to the driving motor. L1-L2 represents full driving speed (the current of the two amplifiers adding up) and the steering locked. As soon as the tortoise is in its hutch, in contact with the mains, the voltage of this current cuts off the contacts A which link the accumulators to the mechanism; they recharge instead. But when the accumulators are charged up to 7 volts, the relay R4 makes contact again at A and the tortoise moves away, since now that it is replete it has regained its movement and will avoid bright light. Let us see now what happens with the different connexions. Normally, when no current passes in the amplifiers, relays R1 and R2 make contact at D1 and D2 owing to the action of the springs.
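For the reader who finds it easier to hold these combinations in mind as a table, they may be set out in a few lines of a modern programming notation (Python is used here and in the illustrative sketches that follow). The sketch reconstructs nothing of the actual wiring; the resting combination D1-D2 is included only on the assumption, suggested above, that it is the state in which no signal passes.

    # Illustrative only: the four relay combinations and the gait each is said
    # to produce. Position 1 draws on the weaker first amplifier (half speed),
    # position 2 on the stronger second amplifier (full speed).
    BEHAVIOUR = {
        ("D1", "L2"): "steering at half speed, driving at full speed (keeps its distance from a strong light)",
        ("L1", "D2"): "steering at full speed, driving at half speed (exploratory gait)",
        ("L1", "L2"): "steering locked, driving at maximum speed (heads for a moderate light)",
        ("D1", "D2"): "resting combination held by the springs when no signal passes (assumed)",
    }

    def describe(r1, r2):
        """Behaviour produced by relay R1 at position r1 and relay R2 at position r2."""
        return f"R1 at {r1}, R2 at {r2}: {BEHAVIOUR[(r1, r2)]}"

    for combination in BEHAVIOUR:
        print(describe(*combination))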
When the amplifiers are connected by the action of the relay R4, R1 and R2 make contact at L1 and D2, which gives half speed of driving through the pilot light T and full steering speed. This is the exploratory position, when the tortoise turns away more than it advances forward, as in the sketch below:
[Diagram: relay R1 at L1 and relay R2 at D2; driving at half speed, steering at full speed.]
When a faint light strikes the photo-electric cell, the amplification of the current AMP1 is not sufficient to make relay R1 act, but after further amplification in AMP2, the signal can act
on R2. Hence the positions R1-L1 and R2-L2, which cut off the steering motor MD and supply the driving motor ML with the current of the two amplifiers. The tortoise then moves full speed ahead towards the light. The pilot lamp is extinguished, as seen in the following diagram:
[Diagram: relay R1 at L1 and relay R2 at L2; driving at half speed plus driving at full speed, giving maximum driving speed, with the steering cut off.]
But during this following of the light, the tortoise has two different modes of behaviour according to the intensity of the light. The amplifiers are linked by two connexions, a condenser coupling and a direct coupling. The first functions if the light is weak (let us accept this without explanations); the condenser has a charge-discharge frequency of about 2 seconds which alternately cuts off and switches on the current of AMP2, so that the relay R2 oscillates between L2 and D2 and vice versa, that is to say it may momentarily reinstate the previous situation. Thus the gait is once more changed into little “hesitation-waltz” steps. When the light is more powerful, but still not enough to act on R1 through AMP1, the current of the photo-electric cell passes by direct connexion from one amplifier to the other; the contact of L2 is no longer interrupted and the approach of the tortoise becomes bolder and more rapid. The light meanwhile draws nearer and becomes more intense, too strong for the tortoise which seeks a light of medium intensity. How can this new reaction be explained in terms of the electronic system? The current of the cell is strong enough to act from the first stage of amplification: it switches R1 on to D1. This effects a new contact system in which half speed of the steering motor is combined with full speed of driving, which makes the tortoise
keep a respectful distance from the light: the following diagram shows the connexions:
[Diagram: relay R1 at D1 and relay R2 at L2; steering at half speed, driving at full speed.]
Let us see what happens after the tortoise has hit against an obstacle. The shock closes contact with an auxiliary circuit which sends the output current of the second amplifier into the input of the first through a condenser C. Thus the micro-current which is emitted by the photo-electric cell when it is hit by the light rays is submerged by its own amplification. The amplifiers start to oscillate, their rhythm being determined by the condenser period. This oscillatory period lasts for a certain length of time dependent on the capacity of the circuit. This is the length of time during which the astonishing “memory” of the collision with the obstacle persists and, during these few seconds, the alternations of charge and discharge of the condenser determine an alternation of the two combinations of the connexions L1-D2 and L2-D1, which causes the predominance, alternately, of steering and of driving and gives rise to the sideways gait in zigzags that permits the tortoise to skirt round the obstacle.
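The alternation just described may be pictured in the same illustrative notation. It is a sketch of the idea only, not a simulation of the circuit; the two-second half-cycle is borrowed, for illustration only, from the figure given earlier for the condenser coupling, since the text does not state the period of the condenser C.

    def gait_after_collision(half_cycles=6, half_cycle_seconds=2):
        """Alternate, for a few condenser half-cycles, between the two connexion
        patterns whose alternating predominance gives the crab-like, zigzag gait."""
        for i in range(half_cycles):
            if i % 2 == 0:
                pattern = "L1-D2: steering predominates (full steering, half driving)"
            else:
                pattern = "L2-D1: driving predominates (half steering, full driving)"
            print(f"t = {i * half_cycle_seconds:2d} s  {pattern}")

    gait_after_collision()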
A PRINCIPLE OF LIFE: THE ECONOMY OF MEANS
Those who are unfamiliar with electronics will find all the foregoing sufficiently complicated. But anyone with an amateur knowledge of radio will be surprised to find that the tortoises hide nothing very extraordinary in their insides. Not a single new invention. Simply a very ingenious and remarkable arrangement. The chief interest of the tortoises lies precisely in the simplicity with which they solve the technical problems. Grey Walter often insists on the principle that has guided him
in the construction of these mechanisms: the principle of parsi mony. “ No living creature has more organs than it needs,” he says, “ this is a general principle that the first artificial creatures should not transgress. I wanted to prove that Machina speculatrix could be made to work with the greatest possible degree of simplicity.” Grey Walter did not attempt to “ synthesize” these animals as a joke, not even out of dilettantism, although he found the challenge an amusing one. But behind these marvellous toys there was a very important scientific problem. Grey Walter, Director of the Electro-physiological Depart ment of the Burden Neurological Institute, is one of the world authorities on electro-encephalography, a science sufficiently new to require a great deal of research both from the neuro physiological and electronic points of view. It was he who differentiated the delta and the theta rhythms from the alpha and beta brain waves which were first described by Hans Berger. Grey Walter introduced a new system of electrodes which are now in common use, and are named after him. He has made some fundamental observations on the interpretation of electro-encephalographic recordings and introduced a very important method, phase opposition, to help localize variations of electrical potential. He also designed and constructed two very remarkable and complex forms of apparatus. The first of these is the automatic frequency analyser which analyses the differences in potential shown on the electro-encephalograph recordings and distinguishes between the waves of different frequencies; it will also yield an average of the wave amplitude over a certain specified period. Recently Grey Walter, together with H. V. Shipton (the well-known electronic engineer of the Burden Institute), con structed the “ toposcope” , known as “ Topsy” in the intimacy of the laboratory. It is a veritable radar of the brain giving a picture of the electrical activity of the whole brain, which, by exploring all cortical fields simultaneously, permits a spatial as well as temporal analysis of each frequency. Electro-encephalography has proved its value above all in the study and diagnosis of diseases of the brain. It also probably offers the best means of approach to the study of cerebral function. 221
In the last century, it was hoped to increase our knowledge of cerebral mechanisms by more and more precise anatomical investigations and by the study of localization of function by physiological methods. Both of these methods have proved limited in their utility. Today electro-encephalography proves to be a more promising tool than the scalpel and by its means, instead of studying the structural basis of brain disorders, it is possible to study the actual electrical phenomena themselves, or at any rate to detect any concomitant electrical effects. By combining electronics with neurophysiology, cybernetics permits the formulation of analogies which should clarify the two studies. It may be hoped that cerebral mechanisms will be capable of illumination by electrical mechanisms bearing analogous functions. Thus, the holistic approach to cerebral organization is based today more often on the study of the electrical activity of the brain than of its anatomical structure. Physiologists like Grey Walter and physicians like McCulloch, who is also a specialist in electro-encephalography, or Lorente de No, eminent supporter of cybernetics, are all able to co-operate in the same study in a way that would have been inconceivable as recently as ten years ago.

Although the supporting evidence is still only at the earliest stages of development, Grey Walter had a strong conviction that the complexity of the brain is probably not so considerable as was originally supposed. Its nine or ten thousand million neurones can possibly be grouped in some thousand systems which are chiefly differentiated by their functions.

If one did not know the ideas that motivated Grey Walter in his construction of these tortoises, during his spare time, from odd bits and pieces of gear wheels and pinions from Meccano sets, it might be thought that these constructions were nothing else than the pastime of a young intellectual or even an odd quirk of his British sense of humour. But once his views about higher nervous activity are known, the meaning of these automata becomes intelligible.

Grey Walter wanted to demonstrate that machines consisting of very few electronic components are already capable of behaviour reminiscent of that of animals and behind the amusing idea of these toys with which young Timothy plays, behind their
ingenious arrangement, lies this question: since such a small number of elements can organize a sufficiently complex mode of functioning to approximate that of animal behaviour, what would come to pass if hundreds of elements performing detection and really intricate reactions were put to a similar usage? The question is of prime importance and takes us far beyond scientific toys and tinkering with radio components.
CHAPTER X
The Use of Models

Even when it is possible to construct a physical model of neuronic or cerebral activity by the application of cybernetic methods, it still cannot be claimed that this furnishes a complete explanation of the natural process.
If the sliding-rod mechanisms of a servo-motor can take over the work of the helmsman in the difficult task of steering a vessel not thus equipped, it by no means follows that the same task is effected in our brain by a simple mechanical or electrical coupling. In the same way, although a calculating machine can register for future reference the intermediate processes of an arithmetical operation in what it may be convenient to designate as its “memory”, it is obvious that our human memory works in quite a different manner. The machine, using entirely different methods, can indeed imitate the functions that in man we call “thought”, without our necessarily understanding any more about the nature of thought. Even when a machine, by means of electronics, proves capable of carrying out an activity that corresponds to cerebral or nervous functioning, it is still inadmissible for us to claim that we have produced a mechanism with a brain.

On the other hand, the importance of such experiments must not be neglected. To be able to reproduce vital behaviour, even by the use of quite different methods, enables us to draw closer to the comprehension of such behaviour. We can hope to draw closer still. No longer should such problems be considered as beyond us. It is henceforward possible to eradicate the inferiority complex that has for so long paralysed even the greatest intellects when endeavouring to understand the higher mechanisms of their own functioning. Without claiming to reproduce the organization of life, such experiments are often able to give some representation
of its fundamental principles. A working electronic model allows us to understand better the probable neuronic circuits responsible for human or animal behaviour and throws some light on the mystery of cerebral activity. Why indeed should such a model not prove to be the clue to our problems ? We have seen that causation has its own immutable logic; cerebral mechanisms cannot but obey the laws of interaction and retro action, which are the laws governing all internal organization. There exists, then, a new approach to scientific knowledge through the use of models. This method represents a scientific advance that is both novel and revolutionary. Rashevsky makes great use of it in his important work Mathematical Biophysics in which he mani fests his intention of studying vital phenomena from the mathe matical standpoint. He used it above all in the investigation of neuronic mechanisms. Thus, in order to envisage a physiological system that cannot be entirely explained by reason or by experiment, we might construct a theoretical model which would permit us to investigate it by calculation of data with which we were empirically acquainted and even, when necessary, to evaluate certain constants. When we have thus constructed a model which seems to copy reality, we may hope to extract from it by calculation certain implications that can be factually verified. If the verifi cation proves satisfactory, we are entitled to claim to have got nearer to reality and perhaps sometimes to have explained it. It is obvious that such a method is at least somewhat hazard ous, although it utilizes basically the same methods of thought as those characterizing much of our scientific reasoning: construction of an hypothesis, deducing its consequences and then proceeding to their verification. It becomes a fascinating pursuit to attempt to penetrate neuronic complexities which have so far eluded the scalpel and the microscope and even the most delicate instruments of electrical measurement. Let us take as an example an investigation carried out by Rashevsky concerning the reaction time of a neurone in response to variations in the strength of the initial stimulus. In other words, the time lag after which the cell responds to an excita tion. Assuming that the neuronic circuit under consideration
is in the simplest form (two neurones, one exciter and the other receptor) Rashevsky arrives by differential and integral calculus at a logarithmic formula comprising four unknown constants. He then evaluates these constants by statistical methods. He now compares the values thus obtained with those given by actual experiment. The concordance resulting from this comparison, most notably with the experiments performed by Henri Pieron on taste and auditory excitability, has proved to be very satis factory. By using a similar model technique, McCulloch and Pitts investigated the neuronic connexions, the synapses, without, however, resorting to the arbitrary practice of evaluating the constants from calculation. Let us give an example of their results. A cold object brought into momentary contact with the skin will cause a sensation of heat. McCulloch and Pitts first symbolized this fact by two equations of mathematical logic. It was then only necessary to imagine a neuronic circuit which would satisfy these equations. By successively introducing a number of intermediary neurones, it was possible to construct a satisfactory coupling. It was even found that by adding still more neurones the neuronic circuit was able to deal with still more complex syllogisms. The model can equally be conceived as part of a concrete scheme; instead of building an imaginary structure designed to deal with the phenomenon under investigation, it may be possible to use a material construction that will exemplify them. If, as Goethe said, to understand is to be capable of acting and if the criterion of the eighteenth-century Italian philo sopher, Vico, is justified: “ Id est verum quodfacio” (“ that which I make is true” ), then when the scientist recreates by artificial methods what had previously been known as a “ natural pheno menon” , he is surely taking a step towards the understanding of reality. Certain critics consider this method, which relies on the “ submission of hypotheses of approximation to analysis and then confronting them with reality” , to be too dangerous. The answer to such criticism is simple; when we encounter a prob lem that appears to be insoluble by the classical methods of observation, analysis and experimentation, should we meekly 226
give up the attempt at solution? How can it be possible for anyone imbued with a scientific spirit to condemn an adventure of thought as unorthodox, or to refuse to consider hypotheses that are shots in the dark when it is at least possible to test the strength of their foundations? If a model may appear at first sight to correspond with reality, there will probably be a onein-a-thousand chance that it will ultimately explain some fragment of reality. But however slender the chance it is worth an attempt. The author once said something of the same sort to Professor Henri Gastaut of Marseilles, who was the first to introduce cybernetics into French thought. Gastaut improved on this: “ I would go further,” he said: “ it is just in those cases where the model fails to reproduce the phenomenon that the experi ment becomes interesting.” It must be admitted that if we succeed in imitating a natural phenomenon by an artificial mechanism, this is no proof that the natural mechanism has any true resemblance to our model. On the other hand, if the devices adopted in our model fail to give any approximation, we can feel certain that the natural mechanisms make no use of such devices, or at least, not in the conditions under consideration. The use of models leads to two aspects of scientific truth, which may be formulated thus: The Negative Truth: If a principle utilized in an experiment fails to furnish any results that correspond to those of the natural process, it may be assumed that this principle cannot, at any rate of itself, explain the phenomenon. The Positive Truth: If a principle utilized in an experiment gives results corresponding to the natural process, this principle should be retained as one capable of furnishing an explanation of the phenomenon. Such a degree of success, however slight it may appear to be, should not be neglected. The electronic “ tortoise” may appropriately be considered as illustrating the use of models.1 But models of what? 1 The English physicist, Lord Kelvin, may be considered to be a forerunner of this method with his mechanical models of electronic phenomena. He used the method, however, only to make accessible to the senses that which had already been established and understood. The cybernetics models aim at an examination of functions that are hardly yet even recognized.
In the first place they must be thought of as illustrating the simplicity of construction of the cerebral mechanisms, rather than any simplicity in their organization. The truth that can be attained here by the use of models is all important; it is that the mechanisms of the higher nervous system which are responsible for any given function cannot be more complex than those of the electronic tortoise exemplifying such a function, since the experiment has shown that their performance is guaranteed despite their economy of means. In short: reality may indeed be more complex than the model, but it is legitimate to consider that it may be equally simple. In a matter of which we know so little such a hypothesis is not without importance.

THE “ANIMAL-MACHINE” OF DESCARTES
We have the assurance based on the electronic “ tortoises” of Grey Walter that the neuronic function may be simple and we may assume as a working hypothesis that it really is simple. Viewed in this light the electronic mechanisms become models of the nervous organization of the living animal, of animal behaviour, or more precisely, of the automatic element of animal behaviour; in other words, of instinctive reaction. Descartes summarized the theory of animal mechanisms in the fifth part of his Discours de la Methode, a theory which he had set forth in the Traite du Monde, a book which he refused to publish after hearing of the condemnation of Galileo. We have, then, only a digest of the theory named by posterity the “ animal machine” (“ bete-machine” ) although as a matter of fact this word never appears in the text. A letter to Father Mersenne (30th July, 1640) proves that the lost book dealt fully with this question. “ I have proved in my Monde that all the structures which would be necessary to enable an automaton to reproduce all those activities that we have in common with animals are to be found in the animal body.” In an epitome that Descartes gives of his views in the Discours, the following are the essential passages: “ I have demonstrated the structure of the nerves and muscles of the back that would be necessary to enable the ‘animal spirits’ contained in the body to move its limbs. A decapitated head is, thus, for a short time capable of moving
and biting the earth though no longer alive. Further, I have shown the type of change that must take place in the brain to cause waking, sleep and dreams; how the light, sound, smell and taste and temperature of external objects can arouse ideas through the medium of the senses and how hunger, thirst and other internal drives can act similarly. I have shown that there exists what we are led to believe must be a common centre where these ideas are received, where reside the memory that stores them and the imagination which can modify and re arrange them and can, by the same mechanism, send the animal spirits to the muscles and cause the limbs to move appropriately to the various external objects and internal drives, so that our bodies can move without the intervention of will.” After having compared the human body to an automaton or animated machine, which, however, being a work of God, is “ better contrived” and more “ wonderful” than any man made machine, Descartes continues: “ And here I paused to consider that if machines having the organs and semblance of a monkey or of some other animals with no reasoning powers could exist, then we would have no means of recognizing in what way they differed from real animals.” But such an auto maton of human semblance never could be taken for a man, firstly, because speech would be lacking, and, secondly, because “ although they might do some things as well or better than us, they would inevitably fail in other things which would reveal that they were not acting according to reason, but only on account of the mechanical structure.” That which will always be lacking in the automaton will be the spirit. So far as we are concerned here, the imitation of animals is a question only of science and technical design. Since at that time only automata of the second degree could be conceived, the faith of Descartes in science— or rather in prescience— is all the more remarkable. Still, it is true that Descartes was the forerunner of cybernetics; he dared in the seventeenth century to think that the animal spirits, even those of man, could only act automatically. Moreover, as Jacques Chevalier has said in his work on Descartes: “ Does not all modern science tend to realize the Cartesian dreams of a physical universe interpreted in mathematical terms?”
Malebranche touched on the question when he speculated whether the howl of a beaten dog was a manifestation of pain or nothing more “than air that is blown through the tubes”. Then Condillac propounded the idea of his famous statue.

CONDILLAC’S STATUE AND GREY WALTER’S TORTOISES
Imagine a statue “ having the internal organization of man and animated by a force devoid of any ideas” ; it is supposed to be of marble and to have no contact with the external environ ment, that is to say it has no senses, no sensations and no thoughts. If we were to endow it with the knowledge of one sensory event succeeding another, then of two at once, followed by all together, the statue would thus progressively attain mental life. It would, by hypothesis, have had no innate ideas and now it would be able to think, since the new idea is a transformed sensation and the idea can only be derived from a sensation. One can easily see how all this could be put from the angle of modern technology. One can see that there is a nexus between the statue of Condillac and the “ tortoise” of Grey Walter. The latter is an autonomous machine which possesses a source of energy and a motor to operate it and certain deli cate mechanisms which regulate and direct its activities. But it had no organ allowing it arbitrarily to influence its environ ment. It can vary and control the degree of its activity, but it cannot act except when messages from the environment in struct it when and how to act. It might be endowed with a certain type of electronic sensibility, then with another such and then with many more. Here, we may premise that elec tronics can give it all the forms of sensibility; that is to say, can electronically parallel everything to which man is sensitive and even certain things to which he is insensitive, such as wireless and ultrasonic waves, or ultra-violet, infra-red and X-rays, or a magnetic field— not to mention time, to which man is only very crudely sensitive. Here we can only show the possibility of an interesting analysis in which, by using a method analogous to that of Condillac, advantage would be taken of electronic methods and of the more recent advances in psychology. Without necessarily regarding an outdated argument as infallible, one can still feel certain that it would not be difficult to arrive at some very
surprising conclusions as to the capacities of the machine. If an eighteenth-century theologian was able to demonstrate by a feat of intellectual audacity that his statue would be capable of ideation when endowed with sensibility, a twentieth-century scientist could demonstrate the same possibilities in an electronic robot. Thus, Condillac, when he analyses “judgement”, finds it to be the comparison of two sense data and the perception of their relations. Either Condillac is wrong and we are bound to give up hope of a scientifically intelligible explanation of human behaviour, or the electronic animals of tomorrow, if not of today, will prove capable of judgement, that is, assuming that electronics can compare two currents which would give rise to different perceptions. We shall be obliged to assign to these future machines the capabilities, which the old-time philosopher conferred on his statue, of manifesting attention, memory, comparison, judgement, imagination and knowledge, and possibly even “thought”.

We have now come to the core of this book: mechanical thought. But the exploration of the foundation on which it rests is an adventurous one. It may be setting ourselves a false problem to ask whether or not the machine is capable of judgement, of memory or of thought. First of all, these terms must be defined and in the process of definition we are bound to encounter innumerable disputes and disputants. What is really important is to know what electronic machines can do; it is a matter of lesser importance for the potential activities to receive the labels devised for human psychology. It is necessary to face the truth, revolutionary and distinctive as it may be, that all these confused and diffuse abstract notions, by the aid of which philosophers hoped to be able to analyse mental life, are relics of a past way of thinking. They are inapplicable to machines precisely because they were devised to analyse human behaviour, for even if a man be classed as a machine he is certainly a unique machine.

Nowadays a very different analytical method than introspection is indicated; it must be based on mechanisms, it must adopt their various capabilities as models of the human activities which perform a similar function. It would be of little use to try to group the mechanisms in a
frame of uncertain dimensions and to label them with the disputed tags by which mankind tries to explore the obscure depths of human mental processes. On the other hand, it seems vital for the acquisition of self-knowledge to compare the activities of our minds with the activities of mechanisms, to whose vast realm they belong, so long as we remember that the term mechanism is of wider implication than that used by the watchmaker or motor mechanic. Man will thus have established a more solid and well-defined basis for self-observation when he uses the machines of which he is both creator and master. He will then be able to compare his capabilities with those of the machine. He will certainly only rarely find similarities, but on this material testing ground he will be able to discover the essential qualities inherent in his mental processes and then to study in detail the particular properties that they may exhibit. Thus, the study of the machine will eventually become, by a curious but inevitable inversion, the basis of the study of man himself. Such a science of man will not be the less a human science; it will always be possible, by careful and minute observation, to determine in what way we are different from the machine. Such a research will occupy many generations to come. The present generation will hand on the torch to future investigators who will not find themselves, as we do, utterly at sea when confronted by the mystery of inner life; they will start their research from the basis afforded by mechanical models and will only make human behaviour their study when they have understood the “perceptions” and reactions of the model.

The path of future study leads from the general to the particular; from the vast field of the universe to the minutiae of particular attributes. It is wrong to attempt a primary study of the particular in its own right, to study man isolated from the whole mechanical universe and to claim that knowledge of the whole may be attained by starting from this basis. Hitherto, we have attempted to deduce the general from a study of the particular, but, indeed, no other method was open to us since we had no knowledge of the whole as exemplified by mechanical abstractions and were thus forced to concentrate on the details of human behaviour. Now that the passage of time has
introduced us to an opposite line of research, it is unpardonable to refuse to accept the new light on account of the prejudices inherited from an era whose impotence was manifest. We should not seek to know whether or not the electronic machines are capable of exhibiting attention, memory, valuation, judgement, knowledge or thought. All these terms would be meaningless if thus applied. These terms only represent pigeon-holes suitable for the classification of the nuances and various degrees established in our thought. It is better to study the functions of which highly developed machines are capable, to classify them and to give purely technical descriptive names to such functions as are still uncatalogued. We should avoid all anthropomorphic thought. Later on, making use of the new system of classification, which is stable, pragmatic and truly scientific, we should look for corresponding functions in our mental life. We should determine whether or not they possess other essential characteristics, in what minutiae they differ from those of the model. We should be ready to abandon traditional ways of thought if it should become clear that the uncertain evidence obtained from introspection has no solid foundation.

Such a method may be considered to be a logical extension, an extreme application of behaviourism. The school of behaviourism refuses to attribute any value to introspection and has built up a psychology founded on the study of the behaviour of living beings. This study is strictly objective, that is, limited to the study of reactions. The behaviouristic school wishes to replace the subjective study of the human individual by the objective study of behaviour in general. It may be asked why should we stop there? Why not push still further afield? Why, having studied man in the shape of a living mechanism, should we not extend our study to that of the behaviour of mechanisms themselves? There is certainly one advantage to be gained from this method; whilst the living organism is not entirely modifiable by the experimenter, the mechanical model is infinitely malleable in the hands of the craftsman. Once the activity of mechanisms in general has been fully studied, a study that, as we have said, may well occupy us for some generations, it should be possible to pass on to the specialized mechanisms of the living organism and thence to proceed
to study the human being without in any way departing from the solid foundations laid in the study of the behaviour of mechanisms.

THE TROPISM OF THE TORTOISE
In the time of Descartes, the concept of a purely mechanical animal could only be speculative. Such a concept received some support from the facts revealed by zoological studies in the nineteenth century, but it was not till the beginning of the First World War that it was supported by a scientific system— the theory of tropisms of the German-American biologist Jacques Loeb. Loeb and his school considered all animal actions to be physiological, automatic responses to external stimuli, whether chemical or physical. Animal behaviour is thus comparable with that of a plant in which positive heliotropism turns the leaves towards the sun, negative geotropism causes the shoot to turn upwards, whilst positive geotropism directs the root down wards into the earth. All living beings are the puppets of tropisms, the salmon that when sexually mature seeks the currents of fresh oxygenated water, just as well as the lion that tends to move towards the source of the scent emitted by its prey. It is obvious that for those who hold such views the synthetic animal is a triumph, or at any rate it will be so in the future. No one, of course, thinks Grey Walter’s tortoises, as we have described them, constitute the final achievement of these theories. Their maker only thinks of them as marking the starting point. Grey Walter, as we would again emphasize, only wished to endow them with very simple tropisms. He was as much concerned with tackling an elementary technical prob lem as he was with resolving a much more universal theoretical enigma. His purpose was surely to show how, with simple limited mechanisms, certain aspects of the very complex behaviour of animals may be successfully demonstrated. Grey Walter endowed his creations with a minimum of perceptive power. Neglecting their sensitivity to shocks, we may study, respec tively, their sensitivity to internal and external tensions and to the light stimulus. Sensitivity to internal voltage. There are three thresholds of sensitivity:
When down to less than 5.5 volts, the tortoise seeks the strongest source of light; and, when it contacts the main supply of electricity, it plugs itself in to this source of energy (model of the hunger state). After receiving more than 7 volts it ceases to feed itself and breaks contact with the mains current (model of contentment). Between 7 and 5.5 volts the tortoise seeks optimum light and it depends on the precision of its setting as to whether it attains its objective with ease or with difficulty (model of retreat to
favourable surroundings). In order to arrive at this state, the tortoise must have “ experienced” the preceding state; so long as it is plugged in to the mains it will not react to the medium voltage (model of an animal that neglects all other stimuli whilst feeding). This mechanism belongs to the third degree; it acts according to certain circumstances. The behaviour of the machine depends, then, on certain conditions occurring in the environ ment. But let us be alive to the fact that the complexities here are much greater than in the mechanisms of the first two de grees of automatism. The conditioned activities of the tortoise are simple: three variations of the electronic circuit. Each of these activities governs the access to another mechanism that is in itself capable of further conditioned activities; moreover, the combination of generalized conditions with specific conditions allows a graded type of behaviour. Sensitivity to light. This sensitivity depends on internal voltage. In a state of hunger we have a sensitivity to the strongest avail able light; during charging of the cells, no sensitivity; and, in a state of repletion, a highly complex sensitivity. If we consider this latter alone, at what degree of automatism are we ? . . . It would seem to be the third degree, that of mechanisms whose activities vary according to the circumstances— mechanisms exhibiting conditioned activity. But the story is much more complicated, and leads to major philosophical problems. One might say that if the tortoises see the light, it is that they have searched for it and that they are capable of retroaction. One might say that when they make for the lamp, their angle of approach to the light is adjusted by retroaction to the most favourable approach. Such a mechanism is truly eclectic. At any rate it is so in the machine world, but in living beings
such behaviour is the rule. One can term it a mechanism for balancing up contingencies. The tortoise approaches the light. What is its effect? It stabilizes its position with reference to the light. If one now considers the action of the mechanism with respect to the light, one learns that it is a case of retroaction. But this is a special type of feedback, it is concerned with a non-material relation. The tortoise modifies its position with reference to the light
because it is determined by the light and the internal factors of its mechanism. Once it has altered its position, it ceases to “perceive” the light in exactly the same fashion. Thus, one of the factors is modified by the result; or, in other words, there is a retroaction.

[Diagram: the light, and the position of the tortoise in relation to the light.]

The position in relation to the light is influenced by a value that depends on an internal function. The feedback is outside the strict limits of material mechanism. One can, then, define the closed system of the mechanism as comprising everything that it perceives and that reacts on it; so the tortoise has an external feedback. This idea which we have sketched briefly merits some further development. It tells us that the tortoise is a model of a sensitive being that is in relation, through its sensitivity, with the external world. The intimate relation of an organism to the rest of the world is illustrated here. If it be capable of perception, its limits are bounded by the perceived objects and its system is in a continuous state of modification, always either growing or shrinking.
Thus, the object that I see, that I sense, cannot be thought of as an isolated entity; if I sense it, it is part of my system. Such a relativist view of external reality is of course commonplace in many philosophies. But here we find that its existence is proved and it passes from the metaphysical to the logical. If the study of models did no more than demonstrate that the organism is integrated with that which it perceives, the tortoise would have proved its value.

A sensitive organism, just because it is sensitive, is inevitably linked by retroaction to the external world. It is continually dealing with fresh contingencies by feedback. The model is in a continuous state of investigatory activity; the tortoises, in the absence of light, search for light. Their movement is in consequence strictly determined; they move because the voltage of their accumulators is low. We are in fact dealing with the third degree, where the machine behaves in a certain fashion because of a specific change occurring in its structure. This in no way resembles the externally conditioned feedback, since there is, for the moment, no relation with the outside world. The movement is directed to the search for a stable organization. It is determined by a state of internal disequilibrium, of dissatisfaction. Its aim is to explore the environment in order to link up a feedback; to search for the means of attaining internal equilibrium with the aid of the environment. In other words, it is a movement conditioned by the need to reorganize itself. All the behaviour of living things may be accounted for along these lines. It is a question of tropisms that can be described in terms of conditioned movements orientated by the external environment; if they are thus orientated, it is by the operation of a feedback. “Preferendum”, the term that is used in the study of tropisms, is simply the reference point of the various feedbacks, the point at which their various differential messages cancel one another.

Much the same plan is found in “Miso”, the electronic animal created at Versailles in 1952 by Albert Ducrocq using Meccano parts. Instead of being sensitive to light, it is affected by the electric charge of the objects that it approaches. It abhors these objects, even if one of them is the hand of its maker; thus it is called “Miso”, but it would present no great difficulties to transform it into a “Philo”.
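The notion of the preferendum as the point at which the differential messages cancel, together with the three voltage thresholds listed under “Sensitivity to internal voltage” above, can be put into a few illustrative lines. Only the two threshold figures are taken from the text; the light-intensity scale and all the names are invented for the sketch and claim no correspondence with the actual circuit.

    HUNGER_THRESHOLD = 5.5    # volts: below this, seek the strongest light and the hutch
    REPLETE_THRESHOLD = 7.0   # volts: above this, break contact with the mains and avoid bright light

    def state_of_the_tortoise(voltage, light):
        """Describe the behaviour for a given accumulator voltage and a perceived
        light intensity on an arbitrary 0-to-1 scale (the scale is invented)."""
        if voltage < HUNGER_THRESHOLD:
            return "hunger: make for the strongest light and plug in at the hutch"
        if voltage > REPLETE_THRESHOLD:
            return "repletion: break contact and avoid bright light"
        # Between the two thresholds the machine seeks a light of medium intensity:
        # the preferendum, where its differential messages cancel one another.
        if light > 0.7:
            return "too bright: circle away from the lamp"
        if light < 0.3:
            return "too dim: explore"
        return "preferendum reached: hold position"

    print(state_of_the_tortoise(5.2, 0.1))   # a hungry tortoise
    print(state_of_the_tortoise(6.3, 0.5))   # one settled at its preferendum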
THE CHESS PLAYER OF TORRES Y QUEVEDO
Unlike the “animal machines”, the classical form of automaton only performed inevitable actions. One can best appreciate its limitations by a study of the most highly developed example of pre-cybernetic automata: the famous chess player of the Spaniard, Leonardo Torres y Quevedo. Under a table is a mass of machinery, usually concealed, revealing a fearsome complexity of toothed wheels, endless screws and cams. On the table is a metal chess board with only three pieces; there are two white pieces for the automaton, the king and a rook, and a black king for its human adversary. The man moves his king wherever he wishes and the automaton replies to his moves by moving its own pieces; the effect is somewhat fantastic.

Here is the key to the apparent mystery. Each square is formed of three metal plates electrically insulated from each other by rubber strips.

[Figure: Each square of the chess-board is divided into 3 parts by 3 metal plates (1, 2, 3), insulated from each other by rubber strips, and connected to two batteries. The presence of the black king, which has a metal base, allows a current to pass from 2 to 1 and from 3 to 1. One of these currents indicates the position of the king horizontally and the other vertically.]

The king of the human player has a metal base by which it makes contact with the metal plates of the square on which it rests, thus sending two different currents into the mechanism. One of these identifies the position of the king horizontally and the other vertically. These two messages inform the automaton as to the position of its opponent’s king. The automaton’s reply to the human move is effected by electro-magnets which move about under the chess board attracting
the white pieces, each of which has a metal ball fastened inside its hollow base. If the man makes an illegitimate move, an electric sign bearing the inscription “First fault” lights up and the machine refuses to play until its opponent has corrected his move. A second mistake elicits the response “Second fault” and at a third false move, the machine is annoyed and refuses to play any longer. When the King is in check, a loudspeaker bellows forth “Jaque al rey” (“King in check”) and at the end of the game it triumphantly announces: “Mate!”

But there is an even more amusing feature; if a certain button is pressed, the machine becomes “stupid” for a moment; it suffers from an artificially induced neurosis, “a knock on the head”, explained M. Gonzales Torres y Quevedo, the son of the late inventor, when he showed the automaton at the Congress of Cybernetics in Paris in 1951. The machine when thus affected plays badly; it is satisfied to make aimless moves without attacking. This effect lasts for five moves. Then it pauses for a moment. “It is recovering its senses”, is the comment of the demonstrator. Then at last it recovers and plays with its accustomed acumen.

All this seems very marvellous, but the very complexity of the necessary devices discloses the limitations of these machines and of all the classical automata. They do only what was specifically intended by their maker. Let us consider what really happens: the most elementary types of checkmate. The unguarded king is inevitably lost; in the most favourable position that the black king can occupy, white can bring about checkmate in sixteen moves at the most. This has been established as a result of three centuries of trials. The machine is, therefore, certain to win, though it may require a great number of moves to do so. The analysts of the game always postulate that the attacker, when he makes a move, chooses the one that leads most rapidly to mate. In man this is what is known as the “best” move, that is, the move that saves most time. In the world of automata, however, the “best” move will be the one that can be made by the most simple form of mechanism. We may study this more precisely in the case of “checkmate by the rook”. Any given position affords three distinct methods of bringing about checkmate. These are the methods of D. Ponziani, modified by G. Renaud, that of X. Adam and
that of A. D. Philidor modified by Berger. The perfect player of the white pieces uses one or another method successively, according to circumstances, but unless it is extremely complicated, the machine cannot have three plans of action together with a super-planning device capable of modifying, when opportune, each of the three plans. It can, in fact, follow only one plan, and Quevedo has chosen for it the Philidor-Berger plan. However simple it may be, this plan is not simple enough for the machine and only the essential principle of it has been extracted; when the two kings find themselves in actual opposition (face to face, with one square between them, either on a vertical or a horizontal column) a lateral check obliges the defending king to retreat until he reaches the border where he will be mated. To each move of the man the machine responds by a single move which is always the same. One could say that the man himself sets off the moves that must inevitably beat him; when he moves a piece, it is just as if he pressed a button causing an advance movement of the automaton’s pieces. It is only the astute planning of the constructor that produces the illusion of some initiative in the machine, which in reality is absolutely determined.
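The “single move which is always the same” can be suggested, very crudely, in the same notation. What follows is in no sense Torres y Quevedo’s mechanism, nor a complete rook-and-king mating routine: it ignores the files, the safety of the rook and the possibility of stalemate, and every name in it is invented. It merely illustrates the idea of one fixed response computed from the position of the opposing king.

    def automaton_reply(white_king_row, black_king_row, rook_row):
        """A caricature of the Philidor-Berger principle, on rows alone: the rook
        confines the black king, the white king marches into opposition, and a
        lateral check drives the black king one row nearer the border (row 0)."""
        if white_king_row == black_king_row + 2:           # kings in opposition
            if black_king_row == 0:
                return "rook checks on row 0: mate"
            return "rook gives lateral check on the black king's row"
        if rook_row != black_king_row + 1:                 # keep the king confined
            return "rook moves to the row just above the black king"
        return "white king advances one row towards opposition"

    print(automaton_reply(white_king_row=4, black_king_row=2, rook_row=3))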
AN ESSENTIAL CONCEPT: THE UNPREDICTABLE
The machine can only be said to acquire a certain degree of personality when it is not entirely dependent on the programme arranged by its creator, but is capable of reacting to contingency. When this occurs man no longer acts through the medium of the machine, but simply watches it act. If, however, the perceptions of the machine are reduced to threshold sensitivities and if the activities engendered by variations of these perceptions are only simple in nature, the behaviour of the machine may be strictly foreseeable. Thus, one can know that if the temperature in the hold of a ship reaches a certain level, a fire alarm will be set in action. Thus, also, the ancestor of the electronic animals, the dog “Philidog”, did nothing unexpected. He was demonstrated in 1929 at the International Radio Exhibition in Paris and at the International Exhibition of Magic-City. He was the creation of Harry Piraux, now chief of the technical publicity service of the Société Française Philips, and a pioneer of cybernetics. The “dog”, who was sensitive to light,
would follow an electric torch and turn or circle round as often as desired. When, however, the light was brought too near and put just in front of his nose he became annoyed and started to bark. The technical differences between Philidog and Elsie are considerable. In the case of the dog two photo-electric cells, fixed one in each eye, ensured its light sensitivity, but of course only to light rays straight in front of it. Each of these photo-electric cells drove its own motor which turned a wheel in the opposite paw. The general principle, however, is the same; the model must be sensitive (perceive), must translate its perceptions into electric currents and these currents must activate the motors. Nevertheless, Philidog is only an automaton performing predictable actions. His master, instead of guiding him by levers or switches, guides him by beams of light. But with the tortoises it is as much their complexity and the accuracy of their sensory or receptive system as their reaction that is responsible for their individual behaviour.

Once again it must be emphasized that no categorical classification of these reactions is practicable; the transformation of a purely mechanical action into one that is to all appearances vital is only a question of degree. That is why this comparison of Elsie and Philidog furnishes an extremely important demonstration of the fact that vital reactions may differ from those of the machine in degree only.

Another “electrical dog” was to be shown at the New York World Fair of 1939; it was to be sensitive to heat and was to have attacked visitors and bitten their calves, but just before the opening of the exhibition it died, the victim of its own sensitivity. Through an open door it perceived the lights of a passing car and rushed headlong towards it and was run over, despite the efforts of the driver to avoid it.

Certain experts in electronics have objected that the tortoises are not in any sense novelties, since their essential principle is the same as that used in the dogs. But it is just because their guiding principle is the same that they have led to definite progress. No one would deny that they exhibit a very wonderful type of behaviour which is very like that of animals, but the most remarkable fact is that they achieve this result by the means of very simple electronic devices, thus demonstrating
that everything, in these matters, is a question of complexity. In the case of the tortoises, their conduct also can be theoretically predicted; they react appropriately to external and internal stimuli which can be evaluated. In the actual experimental situation, however, their reactions, owing to their complexity, must necessarily elude calculation. It is thus possible to foresee the impasse in which certain exponents of cybernetics will find themselves; life does not necessarily belong to a different system, it is simply more complex.

Why is it impossible for us to calculate the behaviour of the tortoise? It is certainly not because the laws governing it cannot be understood, for those laws were made by us in advance. If the solution to this problem cannot be found, it is because the data are insufficiently known; we may not know the EMF of the accumulators, the resistances of the various circuits, the intensity of illumination “perceived” by the tortoise, the angle at which the light strikes it. At any rate these values cannot be estimated with the same precision as that with which we can measure the reactions of the machine. To obtain such an exact knowledge it would be necessary to put oneself in the position of the tortoise; to see at the same instant from the same angle the same light at the same distance, to evaluate the blocking effect of the same crease in the carpet or of the same crack between the floorboards, to allow for the same faulty lubrication and the same defect in the dielectric of the same type of condenser. In brief, one cannot fully know the perceptions of the tortoise, because these perceptions are specific to the particular model. They may be said to be subjective experiences which could only be appreciated if it were possible for the observer to enter into the structure of the model. Nor would it suffice, even then, to make a single observation at any given moment, since the state of certain components will depend on the history of their previous reactions and thus our account would have to change perpetually with the passage of time. It is, of course, inconceivable that a human being could enter into the life history of a machine, any more than a man could assimilate the subjective experience of another man or an animal. For this reason, then, the reactions of Elsie cannot be predicted.
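The point may be given a trivial numerical illustration. The rule and the figures below are invented; only the moral is taken from the text: two machines whose internal states differ by an amount too small to measure do not remain indistinguishable for long.

    def state_after_run(initial_voltage, steps=20):
        """An invented, deliberately sensitive update rule standing in for all the
        unknowable details (EMF, resistances, lubrication, past history) listed above."""
        state = initial_voltage
        for _ in range(steps):
            state = (3.7 * state * (6.5 - state)) % 6.5    # arbitrary non-linear rule
        return round(state, 3)

    # Two tortoises built "exactly alike", differing only in the fourth decimal
    # place of their accumulator voltage, soon cease to agree.
    print(state_after_run(6.0000))
    print(state_after_run(6.0001))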
This concept of subjectivity applied to a machine is certainly an astonishing one and it is revolutionary for anyone who has not fully understood what an analogue machine may really mean. These considerations become yet more surprising when they are brought into touch with the requirements of Bergson, who demanded that a man should immerse himself in the phenomena which he seeks to understand. I would, according to this, have to substitute myself for the organism which I wish to understand, but then I should have to surrender the past history which conditions my present activity. I am asked to identify myself completely with the object of my study. It is only thus that I should be able to understand it. This attitude would apply also to the scientist studying a machine, but Bergson did not envisage it, nor could one dream of suggesting it in the study of simple machines. When a valve, subjected to a certain pressure, allows the escape of steam, can one legitimately talk of its “subjective sensations”? It is the simplicity of the single reaction that prevents one doing so, since we can, without reference to the valve, estimate the steam pressure in the boiler. If, however, the “sense data” of the machine become numerous, complex and intense, if the reactions to them become acute or otherwise influenced, we could never estimate the actions and reactions with the accuracy that would be possible for the machine. It is at this point that we may say that the machine develops personality.

Everywhere we may see examples of reactions that are incalculable on account of their great complexity, such as a wave breaking on a rock or a handful of sand thrown into the air; every drop of water, every grain of sand will only obey laws that are well known, but still their respective trajectories will elude prediction. Up to now, there have been no machines made by man which reacted to contingency, that is, machines of the third degree, of such complex sensitivity. Until the present they had but one approach, as for example sensitivity to heat in the case of the automatic fire alarm. That is why the tortoises are the first artificial machines that have unpredictable reactions. (An exception must be made of certain machines in which the contingent possibilities which are not directly capable of human evaluation have been purposely allowed for: machines whose
behaviour is unpredictable by definition, such as gambling machines, roulette machines or pin-tables.) Their creator admits that he does not know how they will behave. In fact, in order to demonstrate that his toys are endowed with a true personality, Grey Walter attempted to build two that were strictly alike. Elsie and Elmer are more exact replicas than any twins. But still, in spite of the most minute care in construction, it has never been possible to get them to behave in an exactly similar fashion. It should hardly be possible for the most sceptical to refuse them the title of “synthetic animals”, but if anyone does so, we may refer them to E. S. Russell’s classical work on animal behaviour. He lays down that the most characteristic aspect of animal behaviour is its attempt to carry out some specific action. Elsie and Elmer, when they seek specifically either rest or a brighter light, seem then to behave like animals.
WHY SHOULD THE PROBLEM NOT BE EQUALLY SIMPLE?
Using the principle of “parsimony” which is so dear to Grey Walter, we have found our way so far without having dealt with all the faculties of the tortoises. It seemed more profitable to attempt to understand the machines by studying their essential principles; to determine their position in the machine hierarchy without cataloguing all their potentialities. But in addition to the principle of economy of structure, Grey Walter introduces other faculties.
Speculation: By virtue of this faculty the tortoise explores its environment and does not, like other machines, wait passively for the environment to act upon it. (One cannot even go so far as to say that the classical automata maintained a passive attitude; not only did they not explore their surroundings, but they were unaffected by them.) That is why the type of machine exemplified by the tortoise has been named Machina speculatrix. It will be agreed that in this respect it exhibits an altogether novel characteristic: incessant exploration in search of those environmental conditions to which it is sensitive. This search for an optimal state of equilibrium brings the behaviour of the machine very close to that of animals and, in any case, alienates it from the traditional concept of machine functioning.
Plasticity: The classical machines are constructed to function
in certain predetermined conditions. Animals, on the other hand, have great powers of adaptability. M. speculatrix shows greater adaptability than a machine, but less than many animals. Still, it does try to adapt itself to the surroundings and if it fails to do so it dies just like an animal would.
Discernment: Grey Walter uses this word to express the power of his creatures to choose between effective and ineffective behaviour. This faculty is manifested only at the moment when, having bumped against an obstacle, the tortoise ceases to react to the attraction exercised by the light, which, so long as progress is impeded, would be harmful rather than useful.
But it would be easy to write at length on the behaviour or the psychology of M. speculatrix. All the modifications of its activity are dependent on two relays sensitive to specific variations of voltage and two miniature valves; it is the combination of these components that determines highly complex activities. It is tempting to speculate as to whether behaviour that in living animals seemed to depend on mechanisms exhibiting an unanalysable degree of complexity may not arise from very simple mechanisms. Whenever cybernetics reveals the ease with which a given animal or human faculty may be imitated, it would certainly be illegitimate to assert that the living organism depends on a similar type of mechanism; but at the same time it would be as well to ask why it should not be equally simple. In any case we are constrained to believe that, in the future, the behaviour of synthetic animals will show still more subtle variations, when, instead of simplifying their mechanism, we shall, on the contrary, be able, by virtue of their complexity, to adapt them to various degrees of sensitivity. In other words, instead of basing the models on a meagre structure, they will be given a greater complexity. Most convinced students of cybernetics are prepared to believe that the difference between these electronic models and the cerebral mechanism is only a question of complexity. At any rate it is fairly evident that to pass in review from one to the other requires no fundamental difference in our approach.
THE ART OF COMPROMISE
Grey Walter has founded a new school; other synthetic animals are being created and will continue to be created. If,
however, they only serve for academic demonstration there will be no necessity continually to recharge them automatically. Once M. speculatrix has demonstrated that a machine can imitate the behaviour of an animal seeking food, it becomes unnecessary to reproduce mechanically the act of feeding. For the experimenter to recharge the accumulators himself would only be to deprive the experiment of an amusing illustration, but would in no way impair the demonstration value of the machine. It can be premised that if the machines were controlled by feedbacks, their actions would be regulated by mechanisms similar to those which govern animal movement. One can imagine that such machines might be capable of either cooperating or resisting each other when subjected to similar simultaneous stimuli. One of the characteristic functions of animal behaviour could thus be imitated; the power of choice between different demands. Elsie is not sensitive to several allied sensations. More precisely, her internal voltage conditions her sensitivity to light, but does not compete with the strength of the light. The animal, it is true, is accessible to certain afferent stimuli which influence or inhibit others, but generally speaking they act concomitantly. If this can be imitated, it should be possible to construct models exhibiting such concomitant sensations. Let us consider a tortoise that is analogous to that of Grey Walter, but sensitive to heat and sound. We may decide that it is to be attracted by heat and repelled by noise; it will exhibit positive heat tropism and negative sound tropism. One receptor organ will transform heat into electric current, whilst the other generates a current inversely proportional to sound. The two currents will summate in a central mechanism. If, for example, the maximum heat gives rise to a potential of 5 volts, the minimum noise stimulus will give the same potential; so in ideal circumstances the tortoise may build up a potential of 10 volts. In such a condition it is contented and remains immobile. If the sum of the two potentials is below the threshold of 6 volts, it moves off in search of more favourable surroundings. What will happen if it can only find heat from a stove situated in a very noisy room? Will it, perhaps, prefer a cold quiet room to the noisy hot one? It will choose between these two requirements;
it will be Machina judicatrix. Such a search for equilibrium between conflicting stimuli gives us a remarkably true picture of life. The more the sensitivity to each form of stimulus is graded, the more gradation will be exhibited in the resulting activity. One may thus imagine that with a total voltage of 8 the satisfied animal may be heard to purr, whereas with less than 2 volts it will speed up its activity and emit furious growls. It would have four ways of displaying what one might call its “feelings”: a feverish pursuit accompanied by growls, a calmer
searching, immobility and a purring, contented immobility.
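In present-day programming terms the choice mechanism imagined above can be sketched in a few lines. The figures are those of the text (5 volts per receptor, a threshold of 6, contentment at 8, growls below 2); everything else, including the function names and the way the two currents are summed, is merely an illustrative assumption, not a description of any machine that was actually built.

    # A sketch of the hypothetical "Machina judicatrix": two receptor
    # potentials are summed and the total decides the conduct.

    def heat_potential(heat):          # heat given as a fraction 0..1 of the maximum
        return 5.0 * heat              # maximum heat yields 5 volts

    def quiet_potential(noise):        # noise given as a fraction 0..1 of the maximum
        return 5.0 * (1.0 - noise)     # complete silence yields 5 volts

    def conduct(heat, noise):
        total = heat_potential(heat) + quiet_potential(noise)
        if total >= 8.0:
            return "contented immobility (purring)"
        if total >= 6.0:
            return "immobility"
        if total >= 2.0:
            return "calm searching"
        return "feverish pursuit (growling)"

    print(conduct(heat=1.0, noise=0.0))   # 10 volts: it purrs
    print(conduct(heat=1.0, noise=1.0))   # the hot but noisy room: 5 volts, it searches
    print(conduct(heat=0.0, noise=0.0))   # the cold but quiet room: 5 volts, it searches here too

The last two calls give the same total, which is precisely the conflict between the stove in the noisy room and the cold, quiet room described above.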
THE LEARNING BOX
Even if such automata are dreams of the future, we already have models of conditioned reflexes. We are already at the stage when the method of models leads to great advances. At the end of 1950, Elsie and Elmer had a little sister, unless it was a daughter as it may well have been. In truth it appears to have been a case of parthenogenesis: CORA, the newcomer, was constructed with organs belonging to Elsie, who thus succumbed, poor thing! But the most amazing feature of the case is that she represented a mutation; she was one of another species: Machina docilis. Some months later she changed; up to then, she had only been able to associate sound and light, but now she became sensitive to bumps. She was christened with the name of CORA for Conditioned-Reflex-Analogue. Let us refresh our memories as to the meaning of a conditioned reflex. Meat paste is presented to a dog and it salivates. If at the same time a bell is always rung, the dog, after a few days, will salivate at the sound of the bell only, even without seeing the meat paste. This was the celebrated experiment of Pavlov. Grey Walter wished to produce the phenomenon in one of his creations. It seemed to him that if he could succeed in this, he might be able to reveal the mechanism, not only of conditioned reflexes, but of all educative processes. When she was “born”, CORA was only sensitive to light, like her elders Elsie and Elmer. She had a potential auditory capacity, but this capacity was not yet actual; her microphone functioned, but the sounds registered had no significance for CORA and initiated no response. She was as yet uneducated; her education went like this. A strong light presented to the
tortoise promised food, just as sugar is given to a dog that is being trained. At the same time, or immediately after, her trainer blew a whistle whose note lay within the range of the electronic mechanism. After a certain number of repetitions, CORA associated the sound of the whistle with the appearance of the light. She had become sensitized to this sound and would from now on respond to this signal— in other words answer to her name, even when there was no accompanying light. The most remarkable feature of this experiment is not so much that the machine has been able to learn, it is rather that in order to realize the potentiality of this ingenious scientific toy, Grey Walter had to establish the theory of the conditioned reflex. With his knowledge of neuronic organization on the one hand, and of associated reflexes on the other, he sought an understanding of how such reflexes are organized. He thought out a scheme which might explain the whole problem and then realized this in an electric model. The construction of the model did not entirely fulfil his expectations, so the theory was further elaborated and the model modified until the process of learning was exactly reproduced. This theory, however, does not immediately concern us; it is more important for the study of human behaviour than for that of the development of the machine, and as such will be reserved for the future volume L’Homme en Equations. Here we are concerned with the study of cybernetics in relation to the machine; moreover it is unlikely that industry will ever require the type of machine that we have just described. If it were necessary for a machine to possess a specific reaction, the simplest procedure would be for this to be incorporated into the mechanism during the process of fabrication; that is, the reflex would be innate rather than acquired. In connexion with the realm of the artificial, the function of cybernetics lies above all in the demonstration that the traditional concept of “machine” has been a false one and, if we stick to the word, we must at all events change its former significance. The most recent edition of CORA serves to mark still further progress; a new association of stimuli has been effected, that of noise and bump. When the tortoise bumps into an obstacle, the experimenter blows his whistle. After a certain number of trials,
the machine will be able to accomplish the extraordinary feat of backing away from obstacles when its master signals their proximity by whistling. This association mechanism is multivalent; it allows for the association of all sorts of stimuli. Grey Walter calls it the “Learning Box”. Memory is not always a conditioned reflex; it is more often a question of a programme being organized by the mechanism itself. It might be thought that this lies beyond the powers of a machine. But we shall see!
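The conditioning procedure described for CORA can be caricatured in a few lines of programming. The real Learning Box was an analogue circuit; the counter-and-threshold scheme below is only an assumed stand-in, added to show the general idea of a neutral stimulus acquiring significance through repeated association, and the figure of five pairings is an arbitrary choice.

    # A toy "learning box": the whistle acquires the power to release the
    # response after being paired often enough with the light.

    class LearningBox:
        def __init__(self, pairings_needed=5):
            self.pairings = 0
            self.pairings_needed = pairings_needed

        def present(self, light, whistle):
            """Return True when the model responds, as if moving towards food."""
            if light and whistle:
                self.pairings += 1          # the association is reinforced
                return True                 # the light alone already releases the response
            if light:
                return True
            if whistle:
                return self.pairings >= self.pairings_needed   # the conditioned reflex
            return False

    cora = LearningBox()
    for _ in range(5):
        cora.present(light=True, whistle=True)      # training: light and whistle together
    print(cora.present(light=False, whistle=True))  # True: she now "answers to her name"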
CHAPTER XI
Calculating Machines
“The arithmetical machine does things that are nearer to thought than anything that animals can do.” Pascal.
Calculating machines have played a great part in the origins of cybernetics. They are still frequently thought of as the typical mechanisms of cybernetics. If, however, we discuss them here, it is in order to show that they do not reach into the domain of the new science; their behaviour is in fact determined with complete rigidity and is not organized by a network of clinamens. They do not belong to the realm of cybernetics which is that of super-automata. It is, however, necessary to give a backward glance at the mechanical calculating machines, for the principles of calculating are the same whether the mechanism has gear-wheels or electronic valves.
THE PRINCIPLE OF CALCULATING MACHINES
In the classical type of machine, the essential element is a toothed wheel in which each of the ten teeth corresponds to a number 0 to 9. One wheel represents units, another tens, another hundreds and so on. Each step of any of these orders is made by the turning of a wheel advancing the position of a single tooth. When a wheel passes from 9 to 0, it causes the next higher wheel to advance by a tooth. It is obvious that numbers represented by the rotation of toothed wheels can be mechanically added. Pascal’s machine was the first to be made on this principle. The chief improvements brought by modern technique are in the replacement of manual rotation by automatically driven motors, so that the calculator has only to press the appropriate keys. The operator is nowadays relieved
of reading the results on a scale; he receives them in printed form. Every arithmetical operation can be reduced, by various devices, to one or more additions. Pascal had also planned to substitute addition for subtraction. This is the method termed “nines complement” which is still used. Thus, in the subtraction 561 - 276, the figures of the second term are completed up to 9 except for the units term which is completed to 10. We thus obtain 724. If we add 561 and 724, we get 1,285. It is only necessary to ignore the number on the left to obtain the result 285. Put broadly, this operation can be explained thus: the subtraction a - b is replaced by the sum a + (10ⁿ - b), 10ⁿ being the power of 10 immediately greater than the first term of the subtraction. But it might be objected that to obtain the nines complement it would be necessary to perform a calculation, a true subtraction in fact, and that the machine should be able automatically to get rid of the superfluous number. The machine is able to do all this. In the first instance, it can give the complementary numbers to 9 automatically. Thus, pressure on the subtraction key reads off from each appropriate gearwheel not the number marked on the key but its complement. As to the suppression of the superfluous “one”, this is realized by the following ingenious method. The number to be subtracted is supposed to have on its left side as many 0’s as the machine has figures. Thus in a machine registering 10 figures, the number 276, which is the second term of our example, is assumed to be 0,000,000,276, although the operator has only registered 276. The figures are completed to 9, including the 0’s, without this time completing the units figure to 10; all this is done automatically by the key with the minus sign. Thus, on pressing the keys composing 276, we obtain 9,999,999,723. If this be added to the first term of the difference, 561, we get 10,000,000,284. We now find ourselves with one unit too much on the left,
which is not registered in the machine or on the dial; but there is a unit too little on the right. It might be thought that this operation would lead to a wrong result. This, however, is not the case; by a simple mechanical linkage, the rotation which comes from the left-hand wheel is transferred to the units wheel and this gives us 0,000,000,285— the right answer. If we have dealt at length with the tricks involved in subtraction, it is in order to demonstrate that the calculating machine follows a rigid programme, in a field that the operator controls strictly, without anything that resembles in any way an organization of clinamens, allowing a greater or lesser degree of liberty to the machine. As for multiplication, there is no difficulty in reducing this to a series of additions. To add the multiplicand as many times as the multiplier has units may, however, be a very long process. Leibniz advanced a step further than Pascal by inventing what he termed a “drum with unequal teeth” or a “chambered cylinder”. This is a pinion whose ten teeth project unequally, the second being twice the length of the first tooth, the third three times the length of the first and so on. This pinion slides on a square axis and engages with a toothed wheel which it will cause to rotate a greater or a smaller number of teeth according to whether it be more or less advanced. If it be in the position that corresponds with the figure 1, each turn will only cause the wheel to be advanced by one notch; only the longest groove can then engage with it. In the position 9, on the contrary, the wheel will be advanced by 9 grooves and will turn 9 units. By this device the position of the grooved cylinder on its axis represents the multiplicand whereas the multiplier is introduced into the machine by the number of turns given to the handle: in order to multiply 8 by 4, the cylinder which is in the position 8 is turned 4 times, thus turning the wheel of the combination system by 4 times 8 teeth. In the modern type of machine, the handle has disappeared, but the disc with the unequal teeth still remains as an essential part of the mechanism. In order to perform divisions, the machines subtract the divisor from the dividend until the dividend is exhausted; only in the most recent models has the registration of the results been truly automatic.
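The trick of subtracting by adding a complement can be checked in a few lines. The routine below follows the procedure just described for a machine of ten figures: every digit of the subtrahend is replaced by its complement to 9, the sum is formed, and the overflowing unit on the left is carried round to the units place. It is only a modern restatement offered for verification, not a description of Pascal’s mechanism.

    # Nines-complement subtraction with end-around carry, on a ten-figure register.

    def subtract_by_complement(a, b, figures=10):
        nines = 10 ** figures - 1                 # e.g. 9,999,999,999 for ten figures
        total = a + (nines - b)                   # add the nines complement of b
        if total > nines:                         # the superfluous unit on the left...
            total = (total - 10 ** figures) + 1   # ...is carried round to the units wheel
        return total

    print(subtract_by_complement(561, 276))       # 285, as in the example in the text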
ADDITION BY ELECTRICAL IMPULSES
All the aforesaid principles are found in electronic machines. Fundamentally, however, the unit arrangement is entirely different: instead of the rotation of a wheel turning by one notch, we have a brief electric current, in fact a pulse or “pip”. A number is represented by a series of pulses: it is sufficient to register the series of pulses representing several numbers to obtain their sum. This registration, however, differs according to whether the machines are electromagnetic or electronic. In the first type, the electric pulsations act on electromagnetic relays, generally on rotary relays. The prototype of these machines is Mark I, the design of which was begun by Professor Aiken at Harvard in 1938 and finished in 1943. This was followed by Mark II in 1946. These enormous machines beat all previous records in speed of action. A multiplication takes at most six seconds and generally only four seconds. These performances were soon surpassed, however, by those of electronic machines. These machines are not slowed down by the inertia of the metal parts that is unavoidable in electromagnetic relays. Instead of relays they have electronic valves and the transmission of a signal is instantaneous, since the inertia of the parts is practically nil; an electronic valve can transmit a million signals a second. It is obvious, therefore, that electronic technique has in this connexion, as elsewhere, proved revolutionary. The first of these machines was Goldstine’s ENIAC, made in 1944 at the University of Pennsylvania. Its speed is such that they say it can calculate the trajectory of a shell before the shell reaches its destination. The essential features of these machines can readily be imagined. The main organs are four in number:
(1) In view of the speed at which these machines work it is of course impossible to write down numbers by hand to feed into the machine. There is in fact a manual operation before calculation begins; this registers the numbers on a solid record. This registration of the number can be made either by perforations in a card, or developed lines on a cinematograph film, or by magnetic changes in a metal wire. In all cases the signal is of the “all-or-nothing type”.
(2) A reading mechanism for the numbers that have been thus registered is provided by a part which turns continuously; it is either a metal brush making contact, through a perforated card, with a metal cylinder every time there is a hole, or a beam of light passing through the perforations in the card, or the transparent marks on the contrast film falling on a photo-electric cell. In all these cases the reading system sends electric impulses which correspond with the number to be introduced into the machine.
(3) An additional system, a “counter”, which will count the electric impulses as in the electronic instruments.
(4) In the last group are the essential mechanisms which write out the results. This is accomplished automatically by typewriter-like machines.
These four systems would suffice for the type of simple calculation that was done by the classical calculating machines. But the big modern machines are constructed so as to resolve very much more complex problems. For this purpose two other types of mechanism are necessary: “programme” mechanisms and “memory” mechanisms. The programme mechanism is constructed so as to register on a solid record not only numbers on which the calculation is to be conducted, but also the nature and the order of the operations. Obviously some codification has to be made in order to translate the data into a system of card perforations or lines drawn on a film strip. Thus the instructions of the operator to his machine are registered in advance. The operator does not guide his machine by supplying it stage by stage with the necessary data, but delegates his powers to the perforated card or film strip on which he records his instructions; this is exactly what we have hitherto defined as the programme function of the second degree of automatism. It is noteworthy that though Aiken, with his Mark I, was the first to realize programme machines, such a function was at any rate conceivable in non-electric machines. Charles Babbage, between 1812 and 1833, attempted the construction of an analytic machine governed by a perforated “programme” belt. He worked at it for twenty-five years and spent two hundred
and fifty thousand pounds— an unheard-of sum in those days— and in the end died without having accomplished his project. As for memory, when we multiply we have to keep in mind all the figures until we arrive at the first product. When we resolve a complex problem, as we complete each step we have to write the results in the margin of our paper, or draw a circle round them so that they will be readily recognizable in the course of subsequent calculations. These figures, which we preserve for a time in our memory, or write down somewhere, are kept in the electric calculating machine in mechanisms which Louis Couffignal used to call “reserve cypherers” (“Chiffreurs de réserve”) until the Americans, with their usual flair for picturesque journalism, christened them “memory”. The “memory” is in fact an auxiliary transient programme which at an opportune moment, actuated by a code signal forming part of the main programme, will enable the partial results to be reinserted in the calculation.
BINARY SYSTEM: THE LOGICAL LANGUAGE
The whole history of calculating machines may be looked on as an interplay between European logic and American technology. Pascal laid down the first principles from which the material realization evolved slowly for nearly three centuries. During the Second World War, America constructed the first electromagnetic machines, and later the electronic ones. The war brought a demand for machines which could rapidly calculate the solutions to various technical problems. A typical instance of American achievements was their ability to adapt technical methods and utilize them throughout, regardless of cost and complexity of design. Thus, in the realm of electronic calculation they created veritable giants with the delicate intricacy of the minutest machines; giants that are 55 feet long and 8 feet high, using up to 23,000 valves and 500 miles of wiring. These machines are kept in laboratories that need over 400 square yards of floor space. The Europeans might well have admitted defeat. That they did not, however, was because in this domain, as in so many others, they do not compete with the same weapons as
Americans. When they find that a technique is too complex or too costly they look for a different technique, reinvestigate the theory instead of making the machines and invent roundabout ways which will allow them to attack difficulties insurmountable for them by frontal attack. In short, Europe is obliged to cultivate cunning where America relies on geometry. This leads us back to Pascal and to his calculating machines. Indeed the spirit of cunning was triumphant and European logic furnished a simple solution to problems resolved in America only at enormous cost. This solution was actually proposed by Professor A. M. Turing (Proceedings of the London Mathematical Society, 1936) and Louis Couffignal, Inspector-General of Mathematical Education in France and Director of the Blaise Pascal Institute for Mechanical Calculation of the National Centre for Scientific Research. The human hand is the oldest form of calculating machine. Since the hand has 10 fingers our numerical system is based on 10, but it might have had quite another basis. The base of 12, which was once in use by certain races and of which many traces are still to be found, would be a more logical system than 10, since it is divisible by 2, 3, 4, and 6, whereas 10 is only divisible by 2 and 5. It is possible to conceive a system based on 2 with only two cyphers, 0 and 1, where 10 would signify 2. Such a system has long been known and Leibniz sang its praises with lyrical fervour. Louis Couffignal has repeatedly called the attention of world science to the remarkable properties of this binary system and proposed its adoption for electronic calculating machines. In this system, 10 signifies 2; 11 is 3, and 4 is written 100; and so on, 5, 6, 7, 8, 9, 10, being 101, 110, 111, 1000, 1001, 1010, &c. Addition thus becomes blissfully easy; multiplication would become a schoolboy’s dream:
1 × 1 = 1
1 × 0 = 0
0 × 0 = 0.
That is all, and surely nothing could be simpler. It is thus easily understandable how the number of parts in calculating machines might be reduced; they would use two cyphers instead of ten. But the binary method has achieved its greatest success
with electronic calculating machines. When the current passes through a valve, it signifies 1 and when it does not, it signifies 0. What is still more marvellous, if we say that 1 is “yes” and 0 is “no”, we can realize that the binary system can give a mathematical translation of logical arguments. Thus the electronic calculating machines of today are likely to become even more simplified rather than more complicated; to become, indeed, machines of reasoning and logic. The binary system with its all-or-nothing type of signals corresponds closely to our thought processes. One formula in information theory indicates that the best form of message is a sequence of alternatives. It is true, nevertheless, that the binary system has many defects. To start with, it takes longer to write a number by this method than by that of the decimal system. But this defect, though it must be admitted, is not very important; it takes only about three times as long. Another objection is that the binary system is very difficult to interpret and that its translation into another cypher system is very complicated. This objection is not really valid as, with a minimum of training, transposition of binary into decimal system and vice versa offers little difficulty. The binary system is not simply an ephemeral mathematical trick. The use of electronic calculating machines using the all-or-nothing code is certain to become more widespread. What has hitherto been regarded as a perfectly useless mathematical procedure has turned out in the end to be the very language of logic, built up from fundamental considerations, in fact calculation at its most elementary level. The binary system is bound to become generalized, to grow into a universal method of notation; in future generations it may serve to express the most absolute abstractions in which the mind finds its supreme satisfaction. Leibniz had already understood the completeness of such a system of notation, for he found in it the undeniable proof of the existence of God. Leibniz, however, could not foresee that the calculating machines of the future would be based on this system and that the passage or non-passage of an electric current would resolve any conceivable chain of syllogisms.
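The notation itself is easy to verify. The little routine below writes a decimal number in the binary system and reads it back again; it is given only as a convenience for the reader who wishes to check the table of equivalents quoted above (5 = 101, 6 = 110, and so on), and any modern language would of course provide such conversions ready-made.

    def to_binary(n):
        """Write a non-negative decimal integer in the binary system."""
        if n == 0:
            return "0"
        digits = []
        while n:
            digits.append(str(n % 2))   # the remainder on division by 2 gives the last cypher
            n //= 2
        return "".join(reversed(digits))

    def from_binary(s):
        """Read a string of binary cyphers back as a decimal integer."""
        value = 0
        for cypher in s:
            value = value * 2 + int(cypher)
        return value

    for n in range(1, 11):
        print(n, to_binary(n))          # 1 1, 2 10, 3 11, 4 100, ... 10 1010
    print(from_binary("1010"))          # 10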
THE WAY OF THE MIND AND THE WAY OF THE MACHINE
Decimal operations would appear on paper to be shorter than binary operations. Thus, to multiply 854 by 77 by the decimal system 6 elementary multiplications are necessary and, following these, 5 elementary additions; that is 11 elementary operations (not counting the numbers to be carried). Using the binary system we require 70 elementary multiplications and 31 additions; not counting carry-overs, this leads us to 101 operations. Including the setting of the problem, the hand has to write out 18 decimal figures as against 77 binary ones. It is true that each of the binary elementary operations is extremely easy, but the rapidity with which they can be accomplished far from compensates for their excessive number. This would still be true even if we had learnt to use both systems from early childhood. Any thought process, no matter how simple, involves a certain delay. The writing of a sign by hand is a definite action and 101 such elementary operations will always take longer than 11. More than this, man has a great defect: he can get tired, and the mind gets tired even faster than the muscles. When dealing with a great number of things, above all with abstractions, there is always the risk of human error. One can become fatally muddled in the manipulations of 1 and 0. None of this applies to the machine, for which the binary system is necessarily an advantage. It is evident that it allows an extreme reduction in the number of parts required in the calculating mechanisms. The machine gains greatly by the ease of working with an all-or-nothing system, 1 being all and 0 being nothing. We always come back to this marvellous simplicity; if the current passes it is 1; that is to say, “yes, an object of this kind exists”. If the current does not pass, it is 0; that is, “No, there is no object of this kind”. We see here a difference between the methods of the machine and of the mind in arriving at the same end-result, but in this instance the basic reason for this divergence of method is very obvious. The advantage for the machine in using the binary
[Photograph caption] DETERMINISM AGAINST ORGANISM. A game of chess played between the classical mechanical device and a cybernetician. At the Cybernetic Congress in Paris, 1951, G. Torres y Quevedo, the son of the famous inventor of the automatic electro-magnetic chess player (left), challenged Norbert Wiener, the great cybernetician. On this occasion, the machine won every game.
[Photograph caption] ARE THEY DISCUSSING THE RESPECTIVE MERITS OF THEIR BASICALLY DIFFERENT CALCULATING MACHINES? An argument at the Cybernetic Congress of Paris between Louis Couffignal (on the left), Director of the Blaise Pascal Institute, and Aiken, designer of the famous “Mark” series of calculating machines.
system depends firstly on the fact that it acts with a rapidity in its electronic working which would be quite beyond the human brain. The mind has a disadvantage in that it gets muddled if asked to perform too many operations; this is not so for the machine. The brain consists of almost innumerable entities, and, though likely to go astray if the operations are too rapid and too numerous, it can on the other hand economize their number at the price of additional complication in each of them. It is to the advantage of the machine, which can perform very rapid and numerous operations, for the unit mechanisms to be as simple as possible. To be more precise: the advantage of simplification accrues to the constructor. The machine simplifies its operations at the cost of multiplying them, the mind diminishes their number at the cost of complicating them. In other terms, the machine carries out multiple operations with simple symbols and man carries out a limited number of operations with complex symbols. The one method restricts the activities, the other restricts the number of mechanisms involved. The two ways of action are very different, but that of the machine is surely more elegant from the abstract point of view; it is the more obvious and is only dictated by the principle of economy of means. The human way is dictated by our inability to accomplish a rapid series of operations even when they are very simple. The decimal system would appear to be a dodge to diminish the number of elementary operations. A dodge is the right word, for the binary system is the more natural. It is the method of calculation demanded by logic: a thing is or is not, 1 or 0.
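The comparison of 854 × 77 in the two notations can be made concrete by counting digit-by-digit products, as the author does. The tally below simply multiplies the number of figures in one factor by the number in the other, ignoring carries exactly as the text does; it is an illustration only, not an exact model of any particular machine.

    def digit_count(n, base):
        count = 0
        while n:
            n //= base
            count += 1
        return count

    def elementary_multiplications(a, b, base):
        # one elementary product per pair of digits, as in long multiplication
        return digit_count(a, base) * digit_count(b, base)

    print(elementary_multiplications(854, 77, 10))  # 3 x 2 = 6 in the decimal system
    print(elementary_multiplications(854, 77, 2))   # 10 x 7 = 70 in the binary system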
THE FIRST AMERICAN REALIZATIONS
If one spends a little time over the study of the binary method of mechanical calculation one soon wonders how the electric impulses can be counted. In the decimal type of toothed wheel, each engagement causes the wheel to advance by one tooth and at each complete turn of the wheel, the next wheel up the scale advances by one tooth. But in the binary system the counting is accomplished electronically. Many types of counter are of course possible, but the method
used in the first electronic machine, ENIAC, which was not in itself a binary type, proved to have so many advantages that it came to be generally adopted and will probably prove to be classic. This is the multi-vibrator of Abraham and Bloch in which two counter-batteries replace the condensers. This device has been used in radio ever since 1926, and is commonly known as a “flip-flop”. It consists essentially of two electronic valve elements interconnected in such a way that the system possesses two positions of stable equilibrium. In each of these on-off positions, one valve allows the current to pass whilst the other is blocked. The situation is reversed by the arrival of a new impulse. It is a rocking device. Such a chain of flip-flops, by controlling the passage of a long series of almost instantaneous impulses, allows the representation of any binary number; a certain arrangement of flip-flops corresponds to 0 (the one that gives access to the succeeding flip-flop); the other position (which blocks) corresponds to 1.
[Diagram: a chain of flip-flops registering a binary number]
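The behaviour of such a chain is easy to imitate in software. In the sketch below each flip-flop is simply a cell holding 0 or 1; an incoming pulse flips the lowest cell, and whenever a cell falls back from 1 to 0 it passes a pulse on to the next, which is all a binary counter is. The class and its names are inventions for illustration; the walk-through that follows in the text traces exactly the same sequence by hand.

    class FlipFlopChain:
        """A chain of two-state elements acting as a binary pulse counter."""

        def __init__(self, length=4):
            self.cells = [0] * length        # cells[0] is flip-flop A, cells[1] is B, ...

        def pulse(self):
            for i, state in enumerate(self.cells):
                if state == 0:
                    self.cells[i] = 1        # the pulse is absorbed here: no carry
                    return
                self.cells[i] = 0            # falling back from 1 to 0 passes the pulse on

        def reading(self):
            return "".join(str(c) for c in reversed(self.cells))

    counter = FlipFlopChain()
    for n in range(1, 6):
        counter.pulse()
        print(n, counter.reading())          # 0001, 0010, 0011, 0100, 0101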
Consider the flip-flop A whose position of equilibrium is such that the current does not pass further (that is position 1), but which in the other position, the position 0, will allow the current to pass on to the second flip-flop B. Let us imagine it in the position 0; an impulse arrives, it switches to position 1; another impulse follows, it comes back to the 0 position and lets the impulse pass through to the flip-flop B; the latter was in position 0, but now comes into position 1. The register then reads 10, that is to say 2 in decimal notation. The next impulse brings A over to 1 but is blocked and cannot affect B. This is 11 binary = 3 in decimal notation. On the other hand, the fourth passes through the 0 position of A towards B and, bringing B back to the 0 position, it passes through B to C which
marks 1. This is 100 binary = 4 in decimal notation. The following impulse, the fifth, registers 1 in A without passing on to either B or C. The first flip-flop is thus affected by each impulse, the second by one impulse in two, the third by one in four, the fourth by one in eight; so that each flip-flop is affected every 2ⁿ pulses, n representing its rank in the chain (counting from zero). But this process of registration, however rapidly performed, would always be too slow. For example, in order to register 100 milliards it would be necessary to send out 100 milliard impulses; even at 1 million a second, it would take more than a day. The number must be introduced directly into all stages of the calculator. Let us suppose that one number is already being registered in the calculator, and that we want to add another number. This other number is manifested by a series of contacts and non-contacts, of impulses and absence of impulses, in relation to flip-flops representing the several binary orders. Thus, 1001 sends an impulse to A and one to D, causing the flip-flops A and D to tip over and not affecting the others unless the number already registered in the counter requires them to be retained. However, despite all the advantages of the binary system, the American machines have used it little up to the present. As Professor Brillouin said on his return from the U.S.A.: “The Americans discussed it on many occasions, but have not yet ventured to adopt it entirely. In spite of this they were appalled by the complications of the decimal system and made use of rather curious hybrid combinations.” Thus the Bell Telephone Laboratories machine uses a biquinary system. In this there are only 5 cyphers from 0 to 4; 5 is written as 10. Still this is not a system based on 5, for if this were so 10 would be reckoned as 20, 15 would be 30. For all this, 20 is not represented by 120; it is represented by 200. How this comes about would be a mystery to anyone who did not know that in the odd rows (beginning from the right) the cyphers go from 0 to 4 (quinary system) but in the even rows they go from 0 to 1 (the binary system). The biquinary system may be said to resemble Roman notation, not on account of the classical Roman numbers as exemplified by IV and IX, but in the previous numerical system where 4 was written IIII and 9 as VIIII. It seems strange that such a primitive system should be
used in modern calculating machines. Nevertheless this venture has held its own with success. The electromagnetic machines of the Bell Telephone system, utilizing the same components as the automatic telephone system, succeed in giving a good performance with considerable economy of means compared to the decimal system. In ENIAC the numbering is different. There are only 5 numbers which are 0, 1, 2, 2, and 4, “two” occurring twice. All the numbers are obtained by combinations of these figures; exactly as with a box of weights containing only four weights of 1, 2, 2, and 4 grammes, it would be easy to make up any weight in grammes from 1 to 9. In the Mark II, Aiken took a step in the direction of the binary system, using this system of numbers up to 10, translating all the decimals into it, but at the same time maintaining a classic division of the groups into tens. This system is practically analogous with the former, but with 4 cyphers having the values 1, 2, 4 and 8. Today the best exclusively binary calculating machines exist in England. The first entirely binary electronic machine was constructed in Manchester and is known as MADAM (Manchester Automatic Digital Machine). In Cambridge M. V. Wilkes completed the construction of the EDSAC (Electronic Delay Storage Automatic Calculator) in 1951. At the National Physical Laboratory at Teddington there is the ACE (Automatic Computing Engine) and a more recent model known as DEUCE.
IN THE LAND OF DESCARTES
While the Americans, using logically unorthodox methods, had achieved such great technical success, Louis Couffignal, in the land of Descartes himself, had long found an ideal solution. Construction of his machine, known as I.B.P. (Institut Blaise Pascal), began in 1949, but had to be abandoned for lack of financial support. Now at last he has been able to build the first elements of his machine at the laboratory of the Centre National de la Recherche Scientifique. From his first acquaintance with the giant calculating machines in the U.S.A., Louis Couffignal was much impressed by the memory mechanisms which were highly developed whereas the calculating mechanisms were reduced to a single
totalizator. In some American machines the “memory” is able to store 5,000 cyphers and the solution of the problems of mathematical physics on which they will be engaged in the future may necessitate an even greater number, in the neighbourhood of 20,000 or even 50,000. Louis Couffignal thinks that if, for example, an ordnance survey of France (a problem of 800 linear equations with 800 unknowns) were to be carried out using machines of the American type, the number would have to be still further increased. On the other side of the Atlantic— at any rate at the time of Couffignal’s visit— the tendency was to simplify the calculating mechanisms still further. At Princeton, von Neumann even proposed to revert to Pascal’s system and carry out multiplication by successive additions in view of the fact that the simplicity of the electronic machines cancelled out the very great loss of time involved by the old type of mechanism. Louis Couffignal had the advantage of reasoning in the Cartesian tradition, whilst the Americans looked at the problem from the point of view of technicians. He analysed, first and foremost, the fundamental principles of mechanical calculation and realized that it is necessary from the start to deal with the problem on a basis of scientific work-study. The American machines with a single calculating unit are obliged to have as many “memory” elements as there are results occurring step by step in the solution of a problem. In fact they can only introduce these results into the calculator if it is unoccupied. Louis Couffignal designates this form of organization as a “serial progression”. If, on the other hand, a machine had many calculating units, these would all be able to work simultaneously or “in parallel”. The distribution of the various steps to these calculators would be a problem of division, planning and organization of work. In consequence Louis Couffignal came to analyse various types of operational planning. He distinguishes two essential types. On the one hand is the type of operation that Professor Peres has aptly termed “chain calculations”. These are analogous to the work of an accountant who introduces each partial result as he obtains it into his chain of calculations and can thus only deal with his problems step by step. On the other hand, the operations might be termed “complex
calculations”. It is here not a question of a progressive solution; the partial results are worked out and introduced into a terminal chain. It is as if, instead of a single accountant, there were a whole team of workers— a calculating bureau. The work is methodically divided up in advance between the various calculating units. Some of these units yield partial results; these are used by others as the data for further calculations, which again may act only as intermediaries. The mechanism that figures as the head of the team takes charge of the final operation which obviously must be a chain operation integrating the interim results of the collaborating mechanisms. The French genius for effecting mathematical combinations has been successful not only in elaborating the plan of operations, but in selecting the neatest form of solution. It is easy to understand that a well-organized plan of work can appreciably accelerate the operations of a calculating machine.
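The distinction between “chain” and “complex” calculations can be illustrated with an ordinary programming language, in which the team of calculating units is played by a pool of processes. The example below merely evaluates the partial sums of several columns of figures at once before a final chain operation totals them; the library used (concurrent.futures from Python's standard library) and the division of the work are illustrative choices, not anything taken from Couffignal's design.

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(column):
        # each "calculating unit" works on its own column of figures
        return sum(column)

    def grand_total(columns):
        # the "complex" phase: the partial results are produced in parallel...
        with ProcessPoolExecutor() as units:
            partials = list(units.map(partial_sum, columns))
        # ...and the "head of the team" finishes with an ordinary chain operation
        return sum(partials)

    if __name__ == "__main__":
        columns = [list(range(1000)), list(range(2000)), list(range(3000))]
        print(grand_total(columns))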
THE “MEMORY” UNIT
The greatest source of wonder for a layman in the modern calculating machine is the “memory”. The name itself is partly to blame for this; if it had been called the “registration unit”, it would not have caught the public attention so readily. One would expect such a machine to register. The function of the memory is to store a number until such time as it is required by the calculators. It might appear that this could have been effected by very simple means; it might even have been effected by coded perforations in a card. All at once the mystery surrounding “memory” seems to vanish and the layman might be led to think that memory is the easiest of all the cerebral functions to copy mechanically. This simple method of punched cards was the method used in the earlier electrical calculating machines. In the Mark I, the memory was a perforated strip like that of the programme, except that instead of being punched in advance by the operator, the machine itself recorded the partial result. A reading machine found the number thus recorded in a particular coding and reintroduced it into the chain of calculations. It might be asked whether this faculty is a modification of the programme effected by the machine. This it certainly is not.
The programme could only be composed by the machine if it were capable of deciding by itself the appropriate moment for any number placed in reserve to be reintroduced into the chain of calculations. On the contrary; the programme devised by the operator “instructs” the machine to transmit at a given moment to a specific totalizator the number which has been translated into the coded perforations. The only difference between this and other steps of the calculation is that the number had not been punched out beforehand by the operator, but was calculated by the machine itself at another instruction of the programme. The number set in reserve must be regarded as one of the factors in the final result, but it is a factor that the machine fabricates itself. This factor is connected to the programme from the start but left blank; then, by a special form of feedback on this phantom factor, the machine brings it into existence by a feedback on the existence of the factor. This is well illustrated in the design of a programme machine; that is, a machine in which all the factors depend on a common prefactor, by a punched-card factor on which the effect acts. When, however, ENIAC, the first electronic machine, came into being, the memory depending on punched cards, that were equivalent in inertia to electromagnetic relays, could not coexist with the ultra-rapid progress of calculation. Another method for storing numbers in reserve had to be found. Unfortunately, whilst electronic mechanisms act with very great rapidity, the “memory” mechanism has to act with a considerable delay. It became a question as to how the reactions of the electronic mechanism could be slowed down. For this purpose, the train of impulses acts like a little surge of elasticity. Let us imagine a closed electric circuit round which there circulates a train of impulses which represent the number to be
retained. These impulses travel with the velocity of 300,000 kilometres a second; that is, a practically infinite velocity. If this circuit contains a mercury tube and at each extremity of the tube are piezo-electric crystals, the sequence is as follows: the quartz crystal converts the electric impulses into ultrasonic waves; the mercury column transmits these ultrasonic waves with a relatively slow velocity; the second crystal transforms the ultrasonic waves into electric waves, which again complete the round of the circuit; this last is effected by a valve which rectifies the signals that have been deformed in their passage through the quartz-mercury system. When the machine needs the number thus stored, a commutator passes the waves to the calculation mechanisms; thus the train which was circulating on a loop is directed back into the main line. This ingenious method is often found difficult to realize in practice. In 1947, Professor Brillouin, in the course of his lectures at Harvard, obliged the designers of ENIAC to admit that their machine only gave accurate results twenty times in a hundred. This is not, however, greatly inconvenient, as the machine repeats the calculations until all the various calculating units whose concordance is necessary for the avoidance of errors give the same results; the final result alone is printed. This enormous deficit of 80 per cent is due to the malfunctioning of the ultrasonic memory: what happens is that the mercury tube gets hot, causing a deformation of the signals passing through it. In England the tube perfected by Professor Williams of Manchester forms a much better “memory”. The reserve numbers are written on a semi-conductor screen where the different voltages remain visible for some time in the shape of conventional luminous points. A cathode beam “reads” these figures by scanning the screen. There are also optical memories which register the numbers in light or dark points on a cinematograph film, which is subsequently explored by a photo-electric cell. But in all these designs the numbers can only be read after a certain delay, which, however short it may be, inevitably slows down the ultra-rapid calculating procedure: it is necessary, first, either to present the registration panel to the reading mechanism or else to wait until the required number passes in
front of the reader. The ideal would be that a stored number could be “read” at any moment, and that one could write in another or efface one whenever one liked. There are, however, two machines which permit this: the American selectron and the French diode valve memory. Another faculty of the giant calculating machines, which seems astonishing to the general public, is that when they go wrong they give no results: that is, they cannot actually make mistakes. This miracle, the apparent creation of the brain of a super-genius, can be explained quite simply. Two or more mechanisms may conduct the same operation; the results are compared and delivered only if they agree. If they do not agree, the calculations begin over again. It seems almost too obvious. Even school-children compare their results before handing in their solution.
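The error-proofing just described, namely doing the work more than once and delivering a result only when the copies agree, is trivially expressed in code. The retry loop below is a schematic stand-in for the machine's behaviour, with an artificially unreliable "calculating unit" thrown in to exercise it; the failure rate and the operation itself are arbitrary choices.

    import random

    def unreliable_unit(x, y):
        """A calculating unit that occasionally delivers a wrong sum."""
        result = x + y
        if random.random() < 0.2:        # a 20 per cent failure rate, for illustration
            result += random.choice([-1, 1])
        return result

    def checked_sum(x, y):
        while True:
            first = unreliable_unit(x, y)
            second = unreliable_unit(x, y)
            if first == second:          # results are delivered only if they agree
                return first             # (two wrong answers could still agree, of course)

    print(checked_sum(123, 456))         # 579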
ELECTRONIC BRAINS?
Calculating machines arouse lyrical enthusiasm amongst writers and journalists. To quote one passage only: “What should we think of a mechanism of wood and metal which not only computes astronomical and nautical tables to any degree of accuracy, but can also guarantee the mathematical accuracy of its operations by its own power of correcting possible errors? What are we to think of a mechanism that can not only do this, but can actually print the results of its complicated calculations as soon as they are obtained and performs all this without any intervention of human intelligence?” But these words were not written at the present time; they refer to Babbage’s analytical machine. Moreover, they were written by Edgar Allan Poe, in one of his strange pieces, “Maelzel’s Chess-Player”. But Poe was justified in being lyrical at that time. Nowadays, verses in praise of the super-machine represented by the giant calculating machines have a more sober ring. “Electronic brains”; why not? But one would have to call a car “a mechanical pair of legs”, which would emphasize the point that a machine is no more a brain than wheels are our legs. Machines and human beings adopt different methods to arrive at the same goals. One would have to apply similar terms to the automatic till which may be said to think under the fingers of the
cashier. The difference between this machine and ENIAC is that the latter does not operate under direct human control, but by virtue of orders communicated to it in advance. It also works with far greater rapidity. In this pre-registration of orders and this speed of operation there is certainly no transcendental principle. Between ENIAC and the cash register of the grocer at the corner of the street there is simply a progress from wheel mechanisms to electronic mechanisms. The definite progress that marks our epoch is that we are freed from the inertia of movable metallic components by the use of electrons whose inertia, according to our standards, is practically zero. Pascal’s systems of gearing could calculate; the same principles in a computer working on electrical pulses can achieve the same end more easily because the means is more effective. In all these machines it is really only a question of counting impulses; in some the arithmetic unit is realized by rotary gearing which moves a tooth at a time and in others by a short electric signal. In either case they are machines of the second degree; they only obey the precise orders of their human master. The factors belonging to their acts are strictly determined in advance, either by their construction or by the transcribed orders of the programme. They possess no margin of liberty, or possibility of initiative or correction. If they are able to solve many problems, it is because man changes their programme for every problem; in other words for each individual problem he makes it a slightly different machine. They are without doubt the most perfect and the most astonishing of all types of determined machines; but they have not that individual type of life that clinamens give to mechanisms. In short, the calculating machines and the machines which purport to “think” in binary language are in no way organized. This may seem to be surprising. Two explanations are possible; either the calculating and thinking machines are not really analogous to brains or the cerebral processes that enable us to think and to calculate are themselves pre-determined. Let us consider the first alternative: if the difference of method is obvious the analogous function is less so. Louis Couffignal, in his important book Les Machines à penser, puts a number of questions that will require an answer
from the cyberneticians of tomorrow. He asks whether the delay system of the dynamic “memory” functioning through the mercury column is not analogous to certain types of memory actuated by neuronic circuits in which the stimuli themselves continue to circulate indefinitely; whether the diode valves that serve as static memory in the French machine cannot be legitimately compared to the Purkinje cells in the cerebellum; whether (a yet more crucial question) the new wave detector, the transistor, a minute electronic amplifying device consisting of crystals of germanium, does not explain the mode of action of the mysterious intraneuronic synapses? These questions cannot be dealt with briefly. A somewhat analogous problem, which is however easier to understand, is offered by the attempt of Norbert Wiener to compare breakdowns in calculating machines with neuroses. When an electronic calculating machine is not working properly, it may be treated by any of the three methods which are analogous to the treatment of mental disorder. One, to leave the machine alone in the hope that it will recover by itself— with man, rest. The second is to shake it up thoroughly by giving it a good punch as we might to a penny-in-the-slot machine, or to pass a strong current through it in the hope that the faulty contacts will re-establish themselves— an analogy to shock treatments, especially electro-convulsive therapy. Thirdly, not to use the badly functioning part, to disconnect it completely, and then attempt to make the unaffected parts function properly: this resembles the procedures of the neuro-surgeon which may even involve lobotomy. It may be asked if these analogies can be pushed to such extremes as to constitute a logical identification of function. This may very well be doubted and we should then be driven to consider a second alternative in which the calculating machine that is rigidly determined and not self-organized could be said to resemble the brain because the latter is itself rigidly determined without any liberty of action for the operating factors. We can show that all reasoning is equally determined when it gives rise to a concept instead of an immediate activity. The resulting activity is part of a retroaction with the environment and is governed by feedback, but the concept is not.
If two men fight, their tactics of attack and defence are automatically determined by feedbacks which inhibit, excite or modify them in conformity with the anterior situation; but if two generals fight each other through their intervening armies, they are obliged to formulate their decisions in terms of concepts, which will not be put into execution immediately and consequently are not modified at the moment of their formation. The combat tactics are the same for two men face to face in battle as for the two opposing generals, but the armies of the latter have to be "determined" and cannot be "self-organized". If there be an error in the determining factors or in the law of functional causality, the resultant effect will be inefficient; hence it is difficult to see how reasoning affects an action that is self-regulated.

In the same way as arithmetical calculating machines might be instanced as models of the operation of reason, other calculating machines certainly offer a perfect model of activities into which thought enters; such are the algebraical machines and particularly the Bush differential analyser, which we are purposely refraining from discussing here. Let us say simply that it corresponds to the diagram on page 184, Chapter VIII, one of the results being maintained at a fixed point and thus influencing the entire system. The analogy to the animal is so exact that it will serve as the point of departure for L'Homme en Equations.
CHAPTER XII

The Mechanisms of Anticipation and Memory

The feedback of a machine has up to now been instanced
as the nearest approach to an imitation of vital processes. The principle of retroaction would appear to embody all the elements that enable the organism to adapt itself to circumstances. It sometimes occurs, however, that the feedback mechanism fails in spite of its complexity or even, perhaps, on account of it. Suppose that we are dragging along a heavy cart; when the road begins to go uphill we instinctively pull harder at the drag ropes. The word "instinct" is the name that has been given to the feedbacks which enable animals to adapt their efforts to the task they are carrying out; thus, the first sign of an increasing difficulty in pulling automatically stimulates us to increased effort. But even if our motor-cars could adjust their power to the steepness of the road with the same degree of automatic certainty, their performance would still be inferior to that of our bodily mechanism. We do not, in fact, simply adjust our effort to the slope of the hill; we adjust it also to the length of the climb. We do not react with the same degree of effort to a slope of a few yards as we would to an incline of some miles in length. We would trust to our momentum in the case of a short slope, not slowing down, or even increasing speed, whereas we would tackle the long hill with some degree of advance planning.
WHERE FEEDBACK IS POWERLESS

The motor-car, even if provided with the most efficient automatic gear-change, could not act with the same perspicacity. With the least increase in the steepness of the road, it must change gear. If, on the other hand, finding that the car has
changed gear for slopes that were too short, we adjust the mechanism so that the automatic gear-change only comes into operation after a climb of ten yards, then it will still change at the ten-yard mark, even though there may be only another foot to the top. It will, in fact, behave "stupidly" in spite of the feedback. Thus man and the higher animals adapt their behaviour not only to the exigencies of the preceding moment, but also to those of the future. Here we are a long way from the concept of the feedback, which can only react when it is mechanically determined and which only learns that it is going up after the ascent has already begun.

Let us suppose we are ski-ing down a slope. At the least change in the degree of incline of the ski-track, our bodies bend either forwards or backwards in order to remain constantly at right-angles to the slope. This would appear to be a case of feedback, since, if we are skilful skiers, we allow our body to adjust itself without any conscious act of volition. To prove this, an experiment can be attempted by blindfolding the skier. As a result he will invariably fall forwards. The reason for this is that there remain only the bodily feedbacks to ensure longitudinal equilibrium. It is true that the proprioceptive system is capable of determining the slightest alteration of the slope, and this will cause disturbances of tension in the muscles and joints; but by the time that they are able to detect this and to signal that the body is inclined forward by a sudden alteration of ground slope, it is already too late and equilibrium cannot be re-established. A diver cannot turn back once he has left the board. Retroaction can only prove efficient after a certain delay. Here, however, the least delay is disastrous, for the skier is travelling fast; by the time the loss of equilibrium is detected there is no time to react before the lack of balance becomes still further accentuated, so much so that the feedback may be said to have no power of recovery. The spinal movement by which the blindfolded skier instinctively reacts will not succeed in saving him from pitching head foremost. A skier maintains his equilibrium by observation of the lie of the land ahead of his skis and by inclining his body accordingly. In the novice this is an act of volition but later it becomes automatic after its
incorporation as a conditioned reflex; it occurs before the skis actually contact the altered slope. Thus, compensation is initiated before the change of equilibrium actually happens. The equilibrating feedbacks might thus seem to play no part in the affair, but this is not so. They constitute a system of fine adjustments that completes the gross adjustment of the preliminary adaptation. They are unable, however, to compensate for a disequilibrium which they perceive only when it is already leading to an inevitable fall. If, however, visual adjustment has ensured a gross equilibration of the body, they can compensate for minor deviations arising from the imperfections of anticipatory adjustment.

The part played by feedbacks in animal movements is thus evident. They do not constitute the entire system of regulation, but only complete the voluntary control. This adjusts the body in such a way as to fall in with the anticipated movements, so that any deviations that may manifest themselves in action will be capable of being dealt with by compensatory reactions. This is true of all bodily movements, but it is most obvious in ski-ing, since the rapidity of the movements shows up the impotence of the unaided feedback.

A vital reaction, of which the machine has so far proved incapable, is that which enables the animal first to explore the environment in order to appreciate the situation and, on the one hand, to act according to its appreciation of the situation, and, on the other, to prepare for future action. It may still be asked why the machine should not ultimately accomplish this also.
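The contrast can be put in miniature with the motor-car of the earlier example. By way of illustration only, here is a sketch in the Python programming language, with invented figures (an eleven-yard hill and a ten-yard rule); it describes no actual gear-change, but it shows the difference in behaviour. One gearbox, like a pure feedback, waits until ten yards of climb have already been endured; the other looks at the road ahead before committing itself.

    # A sketch with invented figures: the road is a list of gradients, one per
    # yard; 1 means uphill, 0 means level.  The "feedback only" gearbox reacts
    # to the climb it has already endured; the anticipatory one looks ahead.

    road = [0] * 20 + [1] * 11 + [0] * 20        # an 11-yard climb in a level road

    def feedback_only(road):
        """Change down after 10 consecutive uphill yards, whatever lies beyond."""
        changes, climb = [], 0
        for yard, grade in enumerate(road):
            climb = climb + 1 if grade else 0
            if climb == 10:
                changes.append(yard)             # even if the summit is a yard away
        return changes

    def anticipatory(road, horizon=10):
        """Change down at the foot of the hill, but only if a long climb lies ahead."""
        for yard, grade in enumerate(road):
            ahead = road[yard:yard + horizon]
            if grade and len(ahead) == horizon and all(ahead):
                return [yard]
        return []

    print("feedback only changes gear at yard:", feedback_only(road))   # [29]
    print("anticipatory changes gear at yard :", anticipatory(road))    # [20]

On this invented hill the feedback gearbox changes only at the twenty-ninth yard, with a single yard of climb remaining; the anticipatory one, seeing the length of the climb, changes at its foot, and faced with a hill shorter than its horizon it would not change at all.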
HAS CYBERNETICS MANY PRACTICAL APPLICATIONS?

One may easily imagine a motor-car exploring the road in front of it and altering its velocity in obedience to the indications received by its "electric eye". One can imagine, even if such a thing is impossible, a model skier which would explore the slope of the ground beneath it and would regulate its equilibrium accordingly. Such mechanisms, if they were actually made, would be of almost inconceivable complexity. The adversaries of machine-lore may well claim that a machine can never equal man. But it is not certain that the machine is necessarily beaten on this score. If the model skier and the electric eye to scan the road seem too complex and costly, it is because the game in
this case is not worth the candle. Mankind does not need a mechanism to economize those thoughts and reflex actions that are easy. What would be the use of a drinking machine that would pour out a glass of water at the instance of an electric device capable of detecting when we were thirsty, or of a machine to lace shoes or to play tennis for its master?

Here we put our finger on an essential consideration for the new science: it seems that the practical applications of the vast complexities of cybernetics are very few. It may seem marvellous when we think of machines performing human functions, but it is questionable whether mankind wants mechanical replacement of precisely those functions whose subtlety can only be imitated by cybernetic mechanisms. The role of the machine will undoubtedly always be to reinforce man in actions requiring strength. That the machine may be capable of truly miraculous imitations of activity at present confined to animal intelligence is no argument for the enactment of such miracles in daily life. There are few opportunities for the practical use of machines which possess the power to govern their own activities. It will always be more economical, easier and more efficacious to have a complex machine controlled by a man, rather than by a delicate complexity of electronic reflexes. Why apply cybernetics to a mechanical shovel? If it were a question of replacing the fifty men who did its work fifty years ago, it might be a proposition. But mechanical science has attained such a degree of efficiency that it is now a question of replacing a single man. Would it be expedient to make a motor-car entirely automatic? It would surely be more prudent to steer one's own course in a crowded thoroughfare.

It would appear that the only manoeuvres of extreme delicacy which are better performed by the machine are those where human fatigue constitutes an important factor. When human fatigue reduces efficiency, the machine of course wins; thus the greatest achievement of cybernetics is the automatic pilot of aeroplanes; some day the automatic pilot of space ships will be even more remarkable.

All this holds good for times of peace, but in war cybernetics will be of prime importance, because in war-time the complexity and the cost of machines no longer count. It is often necessary in the dangerous enterprise of combating enemy machines
A PICTURE OF A REVOLUTIONARY CONCEPT IN MACHINES

This is a machine that accomplishes everything that it sets out to do, even if all its determinant factors are upset or reversed. Its aspect is most unpretentious, and yet this peculiar faculty of Ashby's machine introduces a complete revolution in our former conception of mechanical possibilities.
TWO MINUTES OF ELSIE'S LIFE

The photograph records the comings and goings of the electronic tortoise Elsie during a period of two minutes. Elsie had a lighted candle attached to her which produced a luminous trail. She is seeking out the source of illumination of another lighted candle. (Photo "Time-Life".)
to replace man, and to command powers of reasoning and reflexes which are more rapid than those of the enemy. Already A.A. guns are outmoded; artillery will be useless in the destruction of planes or supersonic rockets at a height of 30,000 feet. Protection must be assured by interceptory devices; guided missiles with very rapid climbing power will steer themselves towards the enemy machines and will explode as soon as they establish contact. This indeed will be the triumph of cybernetics. In the meanwhile the automatic gun aimer has been responsible for the greatest success of the new science since its birth in the work of Wiener and Bigelow.
THE "INTELLECTUAL" OPERATIONS OF ANTI-AIRCRAFT DEFENCE

The more recent types of anti-aircraft guns, particularly those installed in warships, are extraordinarily complex examples of cybernetic machinery. Generally speaking they are military secrets, but in this book we are only concerned with the abstract principles of their functioning.

Consider a man firing at a moving target. His bodily and mental operations may be detailed under eight headings: (1) to find the target; (2) to estimate the direction of its motion; (3) to estimate its velocity; (4) to calculate from those data its immediate future position; (5) to aim in this direction; (6) to fire; (7) to observe the deviation; (8) to correct the sighting according to the deviation. No matter how far these operations may be classed as intellectual, the modern A.A. gun is capable of dealing with them all.

At one time planes were located by sound. Modern planes, however, can fly beyond the reach of sound, apart from which sound offers far too slow a medium of detection. Radar, on the other hand, scans a greater distance and operates almost instantaneously. Whether the search is conducted by locating the sound or the echo of Hertzian waves, the principle remains always the same, that of feedback (the principle of least action, or principle of Maupertuis). To estimate motion is difficult, and to estimate the angular velocity is still more difficult, but for the calculating machine it is easy, provided that it is fed with the data, that is to say, details of the position of the plane at any given moment. On the other
hand, the mechanical detection of the position of a plane is not such an astounding feat. If we consider separately the two components of these intellectual operations which have been combined under the single designation "to estimate", we can see easily enough that each operation separately could be performed by an ordinary machine. But when we think of a machine that is capable at the same time of observing and of predicting the results of its observations, we marvel at its ingenuity. All of which goes to show what false ideas we are inclined to entertain about machines; all our former education leads us to underestimate their capabilities.

The arithmetical calculating machines receive all the data about their problems from a human operator who registers them in accordance with some agreed code on a card, a tape, a film-strip or a wire. These machines, despite their efficiency, only embody the lower degrees of automatism; they are slaves dependent on the commands of man. But the calculating machines that collaborate in the aiming of the anti-aircraft guns are at the same time less complex (for they have only to solve simple and unvarying problems) and yet they rank higher in the hierarchy of automatism in that their "programmes" have certain gaps; these blanks in their information have to be sought out for them in the environment by sensitive mechanisms. It is thus a case of a third-degree complex: a machine that functions in relation to external conditions. Such a machine gives an answer to the single problem which it is required to solve: that of the position of the aeroplane a given number of seconds hence. It observes the plane, notes the characteristics of its course, calculates from these data the point at which the gun must be aimed and gives the gun the necessary orders. But such a manoeuvre, based purely on intricate electronic data emanating from a calculating machine, would be inconceivable were it not for servo-mechanisms. A servo-mechanism constitutes the operating link between the calculating mechanism and the machinery for elevating the gun to its correct firing position. As soon as it is aimed, the gun fires. In all this there is nothing miraculous; it is an instance of "classic" automatism. But now we come to a more marvellous feature: that is, the observation of deviations and the correction of the angle of fire
according to these deviations. The wonders of feedback make it possible for the A.A. gun to point at a target, spotted by means of radar, flying far beyond the reaches of our sight and hearing. We are already accustomed to the quasi-miraculous aspect of feedback, but when we learn that in the secret laboratories of the arsenals there exist certain weapons which are able to correct their fire according to the results of previous shots, there is surely cause for wonder.

Radar seeks the enemy plane and, having spotted it, holds on to it; another radar follows the flight of the shell. If all goes well, the Hertzian-wave echoes from the plane and those from the shell should come from the same direction, and when the shell bursts and its echo disappears this should coincide with the position of the plane reported by its echo. It is conceivable that deviations of the echoes in space and time may give rise to weak currents which, when amplified, act on the aiming mechanism in such a way as to abolish the deviations. It is true that one can imagine this, but nevertheless one would still like to know something more of this extraordinary mechanism which enables a gun to observe the effects of its fire and to make appropriate corrections for its errors. Here, unfortunately, we are up against a curtain of military secrecy. W. R. Ashby, the British psychiatrist, appears to be the only writer who has alluded to this automatic correction.

Such a mechanism seems to be so perfect that it is difficult to imagine that it should ever fail to hit an enemy plane. It is, however, less infallible than it might appear at first sight. One factor that always escapes exact control is the charge behind the shell, which may always vary in some degree or fail to fire in an identical fashion each time. The gun is aimed as a function of the charge behind the preceding shell, not that of the next shot. This constitutes a sort of hysteresis or lag. Apart from this defect the mechanism reaches the highest level of automatism; the event that is taking place acts on the event in preparation, which takes into account the results already achieved. This feedback through space appears to be more subtle and more intriguing than anything that has gone before. One can think of the gun as extending long and invisible antennae into space, with bursts of fire at their extremities, suggesting hands that search out their victim, find it and
seize it, whilst we would not even be able to find and pick up a pin on a table without the use of our eyes.
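The prediction itself is plain arithmetic. A sketch in the Python programming language, with invented figures, may make it concrete; it is only the straight-line extrapolation described above, not an actual fire-control law, which must among other things solve for the shell's time of flight.

    # A sketch with invented figures: straight-line prediction of the target's
    # future position from two radar fixes.  Real directors must also determine
    # the shell's time of flight, which itself depends on the predicted range.

    def predict(fix_earlier, fix_now, dt, time_of_flight):
        """Future position = latest fix + estimated velocity * time of flight."""
        velocity = [(b - a) / dt for a, b in zip(fix_earlier, fix_now)]
        return [b + v * time_of_flight for b, v in zip(fix_now, velocity)]

    # two fixes taken one second apart, in metres (range, offset, height)
    earlier = [4000.0, 1000.0, 9000.0]
    now     = [3850.0, 1000.0, 9000.0]          # the plane closes at 150 m/s
    print(predict(earlier, now, dt=1.0, time_of_flight=12.0))
    # [2050.0, 1000.0, 9000.0] -- the point at which the gun must be laid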
A PARALLEL BETWEEN A GUN AND A MAN

Here we have a combination of machines, most of them cybernetic, which represents the highest attainment in mechanical evolution at the present time. It is a combination that exercises the same functions as those of a man with a rifle. It will be fruitful to analyse this mechanical complex and to compare it with a schematization of a human performance such as the balance of the body in ski-ing. We had better say straight away that more than a comparison is involved; a complete parallelism must be admitted. The same functions are exercised by the man and by the machine through mechanisms actuated by like principles. It is even unnecessary to trace two diagrams; one alone should serve for both (see diagram on opposite page).

These schemata are of course very general. The feedbacks only function within strict limits. For them to fulfil their functions the factors that they regulate must lie within these limits. This is true of animals when their actions are more than reflexes, and of machines whose work involves more than mere strength. In short, a retroaction whose task it is to begin to act after the result has begun to develop has to be associated with mechanisms that act in anticipation. Without this, a feedback may be as stupid as any inferior mechanism; as, for example, the motor-car with an automatic gear-change that changes gear after 10 yards of ascent, even if there is only half a yard more to the summit of the hill. Anticipation is a function of reasoning.
MECHANICAL REASONING

Those who would relegate the machine to an inferior status emphasize that its retroactive function is worth very little without an anticipatory mechanism, that is to say intelligence. The case of the A.A. gun is not conclusive inasmuch as anticipation is simply a matter of calculation and it is generally admitted that the machine can calculate. It might be argued that there are situations in which action must be pre-regulated, not by calculation, but by subtle reasoning, by the balancing of reasons for and against, based on past experience. This is said to be
Headings applicable to both: 1. External world. 2. Organs of perception. 3. Information. 4. Advance planning mechanism. 5. Command. 6. Transmitting and amplifying servo-mechanism. 7. Amplified command (gross adjustment). 8. Factor of maximum efficiency. 9. Effector to be governed. 10. Action to be regulated. 11. Detector of deviations. 12. Servo-mechanisms amplifying the retroactive message.

Headings applicable to the gun: 8. Elevation. 9. Gun. 10. Angular position (another mechanical complex regulates its aiming direction). 11. Radar detecting firing errors. 12. Servo-mechanisms amplifying the message given by the radar.

Headings applicable to the skier: 2. Sense organs. 3. Sensory messages. 4. Cortex. 5. Cortical commands. 6. Voluntary automatic movement. 7. Muscle. 8. Inclination of the body in the anterior-posterior plane. 9. The head and trunk. 10. Position of head and trunk relative to the slope. 11. Proprioceptive sense organs. 12. Servo-mechanism regulating movements.
beyond the power of the machine, but it is not; reasoning machines exist, and we have considered them at length in this book. They are calculating machines. Ideas may be expressed as numbers, obviously not as decimals but as binary numbers. In such a system 1 signifies: Yes, there is an object of this kind; 0 signifies: No, there is no object of this kind. A collection of predicates (that is, a collection of attributes which are either to
be affirmed or denied) may then be expressed by a series of 1's and 0's. Is such a series a binary number? No, but a binary number is the expression of a complex of predicates; we ask: "Does the number signify or not the attributes of such and such an order?" We may then give such numbers to the binary calculating machine and it can utilize such elements for operations which will be none other than the traditional operations of logical reasoning.

A revolutionary mechanical logic is in process of development. When an astonished world wakes up to it, it will be linked with two names: Louis Couffignal in France and A. M. Uttley in England. In 1938 Louis Couffignal published a remarkable article in the review Europe (issue of 15th August 1938) which prophesied the advent of a cybernetic epoch but which, being in advance of its time, passed unnoticed.

The binary calculating machine is, in principle, a "reasoning machine", not solely by its own construction, but by virtue of the binary system, which is a logical language. Although we seem to have arrived at the core of the problem of this book, we do not propose to treat the subject here. The fact that the machine can reason does not mean that we are likely to consult it as to whether or not to take our umbrella out with us, nor does it mean that a business man is likely to put the problem of replying to a dissatisfied client into binary code. On the other hand, the fact that the machine can reason is highly important for our understanding of human mentality. For now we are in possession of a wonderful instrument for the study of brain function. For this reason, the author proposes to reserve this discussion for a further book.

Here it may suffice to give an example of "binary reasoning". Consider three predicates: "it rains", "it snows", "I walk". We are putting them arbitrarily in this order. Then if 000 means that it does not rain, it does not snow and that I do not walk, 101 will express the facts that it rains and that I walk, and 010 will express that it snows and I do not walk. Each of these rows of binary symbols forms what is termed a logical combination. (One must be clear that these symbols are not to be confused with quantities.) Let us consider the proposition: "I walk when it does not rain". The proposition "I walk" is true and the proposition "it
rains" is false. We now write a series of binary numbers with as many cyphers as the system has predicates (in this case three cyphers), and below each number we write 0 if the proposition "I walk when it does not rain" is irreconcilable with the numerical symbol, and if it is reconcilable we write 1.

000   001   010   011   100   101   110   111
 0     1     0     1     0     0     0     0
The new binary series obtained gives us a logical index which expresses the integral structure of a logical function. 01010000 is the perfect logical expression of the proposition "I walk when it does not rain". If we wish to make the same statement about snow, we get 00110011. Let us make a "logical multiplication" of the two propositions; that is, let us seek in what conditions the two propositions are reconcilable. All we have to do is to write the two logical indices one above the other and to write 1 when two 1 signs correspond and 0 for any other combination.
01010000
00110011
00010000

This signifies that the only situation that is reconcilable with the two propositions is the fourth of the original series of binary numbers, 011, and this means "I walk when it snows and does not rain". One could easily have arrived at the same conclusion after very little thought, but it is unfair to judge logical mechanisms by such a very elementary example. Suppose that dozens of predicates are involved and that the combinations are complex, that negations come into play; such an argument might easily elude human attention, if not human intelligence. Even with only three predicates, "rain", "snow", and "walk", things can become quite muddling.

Let us take another proposition: "I do not walk when it snows". Its logical index is 00100010. Now let us pass over to its negative: "it is false that I do not walk when it snows". One obtains the logical index of a negative by inverting the figures of the affirmative index. Thus, we have: 11011101. Suppose now we wish to know in what way this proposition can be reconciled with another proposition, such as "I do not walk when it rains": 00001010. Now let anyone ask the most
clear-headed person when the two propositions "it is false that I do not walk when it snows" and "I do not walk when it rains" are reconcilable or irreconcilable. The answer will certainly not be very prompt and will stand a good chance of being wrong. Let the reader try the experiment for himself before reading on. If this problem is given to a binary machine, however, the machine will simply put the two logical indices face to face; where two 1's correspond, the current will pass; where a 0 interrupts the circuit there will be no current, and at once we will get the following answer:

11011101
00001010
00001000
The verdict of the machine is that the fifth combination satisfies the two propositions; that is to say 100, or "it rains, it does not snow, I do not walk". But even now that we know the answer, we still have to give a moment's thought to make sure that this is the only conceivable answer. Of course, the reasoning that we have to do does not involve any great effort of our intellect. What would happen if we had to take account of from twenty to fifty propositions? Our brain would give it up very shortly; but the machine never gives in. For it, simple contacts signalize the concordant propositions and the electric current gives us the answer, after passing through complicated circuits but still arriving at the answer almost instantaneously. Such a machine has been constructed in England.1

1 This machine, constructed by W. Mays and D. G. Prinz, deals with the logical relations between three propositions or "variables" A, B, C. These variables and their functions (relations) are represented in the form of binary numbers (essentially an abbreviated form of the Boolean expansions) which are stored in electromagnetic relays, "on" (1) standing for truth, "off" (0) for falsehood. The state of each relay is indicated by a lamp. Five stores, each containing eight relays, are provided. Two of these, U, V, accept the initial propositions A, B, C, and their negatives. A third store, S, accepts the result of a logical operation performed on the contents of U and V; these operations, which can be selected by corresponding switches, are conjunction ("and"), disjunction ("or"), implication ("if, then"), equivalence ("if and only if") and negation ("not"). The contents of S can be transferred to two auxiliary stores and from there back to U and V so that further operations can be performed on them. A tautological relation is indicated by the lighting up of all the lamps of store S. (See Nature, vol. 165, p. 197, 1950.)
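The whole procedure is mechanical enough to be written out in a few lines. The following sketch in the Python programming language is an illustration only, not the relay circuitry of the machine just cited; it builds the eight-place logical index of a proposition over the predicates rain, snow and walk, and performs the "logical multiplication" digit by digit, reproducing the indices worked out above.

    # An illustration in Python, not the Mays-Prinz relay circuitry: the logical
    # index of a proposition over the predicates (rain, snow, walk), and the
    # digit-by-digit "logical multiplication" of two indices.

    from itertools import product

    COMBINATIONS = list(product((0, 1), repeat=3))       # 000, 001, ..., 111

    def index_of(proposition):
        """Write 1 under each combination reconcilable with the proposition."""
        return "".join(str(int(proposition(r, s, w))) for r, s, w in COMBINATIONS)

    def multiply(a, b):
        """1 wherever both indices carry a 1: the conditions reconciling both."""
        return "".join("1" if x == y == "1" else "0" for x, y in zip(a, b))

    def negate(index):
        """Invert every figure, as for 'it is false that ...'."""
        return "".join("1" if d == "0" else "0" for d in index)

    # "I walk when it does not rain", read as: it does not rain and I walk
    print(index_of(lambda r, s, w: (not r) and w))       # 01010000, as in the text

    # "it is false that I do not walk when it snows" with "I do not walk when it rains"
    a = negate(index_of(lambda r, s, w: s and (not w)))  # 11011101
    b = index_of(lambda r, s, w: r and (not w))          # 00001010
    print(multiply(a, b))                                # 00001000: combination 100

With twenty or fifty predicates the indices merely grow longer; the machine's share of the work is unchanged.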
But the machine must not be regarded as a mere fortune-telling apparatus; its use lies principally in the possibility of its controlling another machine according to the circumstances required. The answers that it will give will be the only ones that fit the premises furnished by the receptive mechanisms. If these responses are irreconcilable there will be no response; the mechanism will simply refuse to act.

Mechanical reasoning may seem to be marred by a defect that precludes any nuances in its activities. It works with "all-or-nothing" rigidity. But is our mind really any different? If we dissect any argument, we ultimately resolve it into a chain of syllogisms; and what is a syllogism other than an all-or-nothing proposition? Socrates is a man; all men are mortal; therefore Socrates is mortal. Between the set-up that represents "Socrates" and that which represents "man" a contact is made by which "Socrates" connects with "mortal". Undoubtedly an all-or-nothing reaction; but with all-or-nothing complexes the binary system may succeed in expressing values.

If we imagine that the tortoises of Grey Walter, instead of being small toys, were enormous electronic machines into which one could introduce binary calculators (if they were concerned only with elementary calculations they might be relatively small), it is obvious that all the extraordinary mechanisms of these artificial animals could be made accessible to such control. However, such automata will probably never be made; they would be very expensive and without any compensating practical value. Or rather, they may be realized, if indeed they do not already exist, in some secret laboratories, but they will be engines of war, pilotless rockets destined to combat aeroplanes or other rockets. Completely autonomous, completely automatic, completely cybernetic, they will attack the enemy, pursuing him till they explode on establishing contact. Not only will they be equal to animals in one type of function, but they will surpass them in accuracy of perception, precision in "reasoning" and, above all, in rapidity of reaction. Even in the association of ideas, in reasoning, the man in the enemy plane (if it has a pilot) will be beaten in advance.

As Louis Couffignal points out, human intelligence can only deal with a very limited number of concepts preserved by the memory and subject to a very limited number of operations,
whereas the machine is able to deal with many more concepts and many more logical situations. The machine can, in fact, perform feats of reasoning which surpass the human intellect. This is surely the most astonishing revelation that a book like this, dealing with mechanical possibilities, can put before the layman: that there are mechanisms which perceive the external world and reason about their perceptions as premises and adjust their reactions to the outcome of their syllogisms. The only limit to machines in conquering our world of intellect is that they only solve the problems with which they have been constructed to deal. If their designer wishes them to deal with a hundred problems they will do so, but if a hundred and one problems are presented, they will reveal themselves as stupid and impotent to deal with the one extra. But after all, does not man also find himself disarmed and stupid in face of certain problems?
MACHINES WITHOUT MEN

One fact constantly emerges: those miraculous machines in which cybernetics could develop all its resources seem to be usable only as engines of death. Is it possible that the works of peace have no need of such skilful machines? It might seem so. The world at peace looks only for tranquil well-being. It is not so evident that machines working with electronic rapidity and graded activity are necessary in our homes. Cybernetics, however, can guarantee a man more leisure, relieve him of much of his work and replace him in the factory. It will certainly accomplish this much, but its full possibilities will not be absorbed by guiding machine-tools.

We need not talk only in the future tense. Two young engineers, E. W. Leaver and J. J. Brown, have invented and constructed a mechanical complex which has much in common with the highly elaborate A.A. gun aimer, but is much simpler. These men worked together on radar research in Canada during the war. Leaver, the Englishman, appears to have contributed the basic ideas that were elaborated by Brown, the Canadian.

Leaver and Brown started off from the point of view that industrial progress necessitates that each machine should be constructed to perform a highly specialized task. The better a
machine can perform a certain task, the less it is likely to be adaptable to other forms of work. (It will be noted that our attitude throughout this book supports this view.) They tell us of a machine designed to turn out automatically the cylinder heads of aeroplanes during the war; it was 90 feet long and cost 100,000 dollars. A crude casting was fed in at one end and came out at the other as a perfect and complete cylinder head. It delivered one a minute! It seemed to act marvellously, but once the type of cylinder head had to be modified, the machine was so much scrap-iron; it could not be adapted to a new task.

This story is even more important than Leaver and Brown realized. It throws light on the constant antagonism between two different possible results in any form of planned behaviour: the "determined" result and the "organized" result. The first result is determined by the rigidity of its factors; the second frees itself from its factors and assumes a specific quality. Technical advance has always tended towards a limitation of the variable factors. The final stage of this evolution is detrimental to machines; they are gradually developing hardening of the arteries. The solution to this problem is to aim at obtaining mechanical results of equally high certainty and precision by organizing the cybernetic interaction of effects. There will be a new epoch in the history of humanity when mechanisms pass from extrinsic, rigidly determined control to one in which internal organization allows them to imitate the living organism.

The technical solution to this problem urged by Leaver and Brown is this: there should be a standard type of machine in factories, designed to replace the manual worker, but the tool that it adopts is to vary according to the nature of the work attempted. The essential factor is the "hand-arm". This is a flexible articulated arm; at the end is a complex clamp, somewhat like a steel hand. The clamp seizes the piece of work and presents it to the tool. The hand-arm is activated at a distance by a punched card, like the music roll of a pianola, on which is indicated the code pattern of the piece to be machined. In two seconds the arm may be lowered through an angle of 82° and in three seconds the clamp turns 22° to the left, &c. Up till now there is nothing "cybernetic" about all this. But the machining is constantly supervised by detector systems that act just like
sensory organs: photo-electric cells, thermocouples, microphones, electro-micrometers, gas detectors, &c. Thus the work executed is translated into electrical data. The data are transmitted back to a collating mechanism, where they are compared with the recorded programme. The following is a schematic diagram of the assembly of Leaver and Brown.
[Schematic diagram of the Leaver and Brown assembly: programme, hand-arm and tool, detectors, collation unit, with a reject path for faulty pieces.]
Such are the vast servo-mechanisms sketched by Leaver and Brown for their factories without men. The perforated card is the order. The hand-arm executes the order, watched by detectors which send back retroactive messages. These messages are compared with the initial order in the collation unit, whence issues any correction needed to ensure perfect operation.

The machine may be thought to resemble a human workman who does everything himself. Suppose I want to make something. Whether there is a real model, the image of which is formed in my brain by the aid of the visual system, or whether it only exists as an idea, I always have in my mind the completed picture of what I want to make. From it, the cerebral cortex and the muscles receive orders directed towards the realization of the model. Sight, or some other sense, will continually record the state of the work. The sensory messages, true retroactive messages, return to the cortex which gave the order. It is any variation between the work
executed (as it is shown by these messages) and the model as it was designed (as in the programme) that will determine the nature of the orders issued to the muscles. The diagram on this page is just as applicable to a man constructing something as it is to the machine of Leaver and Brown, which, in the future, will work for man in almost empty workshops on a twenty-four-hour shift. Thus the designer, the sculptor, the craftsman and the manufacturer constitute a team, a servo-mechanism, that is both physiological and psychological, which is always governed by the differences between the plan that they have formed in their imagination and the reality of which their senses inform them.
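The collation idea can also be put in miniature. The sketch below, in the Python programming language, uses invented dimensions and tolerances and is not the Leaver and Brown machinery itself: the recorded programme states what each measured quantity should be, the detectors report what it is, and the difference either becomes a correction fed back to the hand-arm or, if it is gross, sends the piece down the reject path.

    # A sketch with invented figures, not the Leaver-Brown machinery itself: the
    # collation unit compares detector readings with the recorded programme and
    # turns any difference into a correction, or rejects the piece outright.

    programme = {"bore_mm": 82.00, "depth_mm": 40.00, "temperature_C": 60.0}
    tolerance = {"bore_mm": 0.05, "depth_mm": 0.10, "temperature_C": 5.0}

    def collate(readings):
        """Return the corrections required, or 'reject' if any error is gross."""
        corrections = {}
        for quantity, target in programme.items():
            error = readings[quantity] - target
            if abs(error) > 10 * tolerance[quantity]:
                return "reject"                    # the reject path of the diagram
            if abs(error) > tolerance[quantity]:
                corrections[quantity] = -error     # the retroactive message
        return corrections

    print(collate({"bore_mm": 82.03, "depth_mm": 40.25, "temperature_C": 61.0}))
    # {'depth_mm': -0.25}: only the out-of-tolerance dimension calls for correction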
The schemata of human mechanisms belong to two essential types: on the one hand, that of pre-regulation or global regulation by voluntary automatism according to external circumstances, which is subject to a more detailed regulation according to the demands of the fully automatic feedback; on the other hand, a servo-mechanism tending to approximate a result to its model. In both cases the intellectual part of the process, the assessment of the existing circumstances or the design of the model, requires not only a summation of the sensory messages and a reasoned assessment of their import, but it also requires
the intervention of memory, which hitherto we have neglected in the interests of simplicity. This means that man not only takes account of his present, in terms of perception, and of the probable future, by virtue of reason, but he also actualizes the past in terms of experience. Here the machine might seem to be beaten. Before arguing that this is not so, it may be as well to consider what is meant by memory.

When a calculating machine places a number in reserve, it only fills a space which has been left unoccupied in the programme; it endows itself with a factor. This is an elementary type of "memory" in relation to a factor, not memory in relation to the programme, which alone governs the total work of the machine. True "memory" should affect the "programme" and not only a factor. When man modifies the behaviour of the machine, as when, for example, he changes the needle of a sewing machine, his action might be represented by an arrow acting on the "programme"; this arrow also symbolizes contingency, which may intervene to modify some unit of the "programme". But since man is outside the machine, the
symbol is always in reality that of a contingency acting on the system. In order that this factor should not be contingent, that it should be an integral part of the system, that it should in fact be organized, it would be necessary for the arrow to be dependent on one of the elements of the system. This influence on the "programme" can be dependent on nothing else but the effect, as may be shown in the following diagram. Thus the mechanism of "memory" is expressed diagrammatically before being defined, and it is obviously a feedback on to the "programme". It is the effect of a mechanism or of an animal activity retroacting so as to modify not a simple factor, but the programme unit which governs all the factors; thus, according to the result of its experiences, the effector unit will modify the machine's behaviour. Hence we may venture on
this definition: "memory" is a process by means of which the retroaction of the result will register itself in a more or less permanent fashion on the "programme" of the effector.
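The distinction can be shown in a toy sketch, given here in the Python programming language purely for illustration and describing no actual machine: a number held in reserve merely fills a factor, whereas memory in the sense just defined is the result of an act flowing back and rewriting the programme itself.

    # A toy sketch of the distinction: a stored number is "memory" of a factor;
    # true memory, as defined above, is the result retroacting on the programme.

    programme = {"turn_on_bump": "left"}           # the rule governing behaviour
    reserve = 42                                   # a mere factor held in reserve

    def act_and_remember(bump_was_painful):
        """The outcome of the act modifies the programme, not just a factor."""
        if bump_was_painful and programme["turn_on_bump"] == "left":
            programme["turn_on_bump"] = "right"    # the programme itself is altered

    act_and_remember(bump_was_painful=True)
    print(programme)                               # {'turn_on_bump': 'right'}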
If one wishes to "teach" a synthetic animal not to become over-heated or burnt out after an initial trial run, it must be provided with a fuse which, at a given temperature, will cut the circuit. Alternatively, a rise of temperature may set off a current to activate a reserve mechanism that ensures a negative heat tropism. Equally one can envisage a mechanism which would avoid obstacles after having been bumped several times. Such a mechanism would accumulate the force of the blows received in a toothed spring; when the spring had reached a certain tension a key would close, thus activating an ultrasonic unit analogous to that which prevents bats hitting themselves against obstacles in their path of flight.

THE MACHINE WHICH TEACHES ITSELF
It is indeed easy to object that the "all-or-nothing" character of all these mechanisms precludes any gradual modification of reaction. They show nothing comparable to the slow acquisition of new reactions in an animal that learns by experience. Once a machine receives a new "programme" it reacts as positively to it as it did to the old one. In other words, "memory" in these mechanisms is represented by a series of discontinuous records. It would seem to be true that certain new experiences may bring discontinuous additions to our own memory; an unhappy memory may bring about a complete change in some of our reactions. Whether this be so or not, it is quite possible to impart a graded type of memory to machines. It is only a question of whether it is desirable or not. One may hope that future
machines will reach a stage of development more adaptable to the finer shades than the somewhat clumsy efforts of the present day, which justify the derisive remark "machine-like behaviour".

Albert Ducrocq has devised an ingenious system for the class of artificial animals that he calls "Miso". They will turn either to left or right when they encounter an obstacle. On the first occasion the direction will be a matter of chance: let us suppose that the machine turns to the right. The effort that represents the turn to the right is registered as a memory of bad import, so that before the next obstacle it will turn to the left. In the future it will tend to turn in the direction that has proved to be the most favourable; that is, the one in which the sum of the angles of rotation is the least. These rotations, which may be classed according to their direction as positive or negative, are registered by the rotation of a disc. If a rotation of +15° is followed by one of -20°, the disc will finally rest in the -5° position, which will determine that the next rotation shall be in the positive direction, the one which on the whole has been the least "unfavourable", and so on. This is surely a type of embryonic memory, which can estimate the sum of the acts that it has already accomplished, but which cannot ever know whether the acts left undone (rotation in the contrary direction) would not have been more favourable.

Ducrocq broadened "memory" to include "habit"; that is to say, a memory of activities without concern as to whether their results were good or bad. He designed a disc which was to turn through a fixed positive or negative angle every time that the machine gyrated, independently of the angle of rotation of the actual turn. Thus, a series of angles of rotation of +15°, -5°, -90°, +3°, +30°, +47° would not put the disc in the zero position, which would be the algebraic sum, but in the +2 position, which would bring about a turn in the positive direction by virtue of the habit formed.

One might wish to equip the machine with a less simplified type of education. This would entail designing machines aiming at a more complex result: machines capable of playing some game. This does not necessarily mean machines capable of playing chess or bridge; their day is not yet. Let us select a very simple game, the game of noughts and crosses played by
nearly all school-children during class. It consists of nine squares contained between horizontal and vertical lines. I draw a cross in one of the squares and my opponent puts a circle in one of the empty squares. The first player who can get three crosses or three noughts in a row, whether horizontally, vertically or diagonally, wins. Such is this very simple game, in which it is not difficult to recognize standard tactical plans, and in which the player who leads off need never lose if he knows how to play.

A machine exists which will play this game against a human adversary. It was constructed in England by D. W. Davies of the Mathematics Division of the National Physical Laboratory at Teddington. The first model was constructed after the war, but this was replaced by an electronic machine in 1952. The nine squares are outlined on a transparent screen. Below the screen there is a panel bearing nine knobs. The player presses the one that corresponds to the square in which he wishes to play and a cross immediately lights up in the appropriate square on the screen. Then he pushes a special button which makes the machine play a counter-move. A nought lights up in another of the squares on the screen. The player, of course, has no idea in advance in which square the machine will "choose" to play. The player answers the move of the machine and the machine plays again, and so on.

The machine always plays the best possible move. If it starts it will win or draw. If the human player starts off, provided he knows how to play, the game will end in a draw, but if he makes a bad move, the machine will win. In other words, the machine never loses. What happens is that the mechanism calculates each time the best possible move in the given situation. The machine has actually no choice; it cannot lose. In spite of the interesting aspect of this machine and the extraordinary subtlety of its design, its mechanism is rigidly determined; it is the complete slave of its programme, which has been calculated in advance.

But we would like to envisage quite another kind of machine for playing a similar game. The squares will be holes into which the players put pegs. The pegs of the human player make an electric contact in a certain network. The pegs of the machine establish contact in another network. The machine begins and plays the first move blindly; the man does the same. The
machine plays again. But each time that the machine pegs its hole, it pierces a little hole in a tinted transparent film, the perforation corresponding to the pegged hole. At every move the film, which may be an ordinary cinematograph strip, moves on a certain distance, like the film in a roll-film camera. At the end of the game the film is cut off. If the play has resulted in a defeat for the machine, the film is rejected; if it has won, it is retained, and all the victorious films are piled one on top of the other. The machine has now a record of all its successful experiences.

The question is how these experiences are to be exploited. This is simple. The perforated strips of film lie one above the other, and the pile of film strips can be illuminated from below. If, in 20 victories, 12 occurred after pegging hole number 5 at the first move and 3 when hole number 1 was pegged, 2 with number 2 and 2 with number 8, perforation number 5 will constitute a sort of luminous tunnel in the pile of films, since the light will then have only to penetrate 20 - 12 = 8 thicknesses of film. It will be sufficient for a photo-electric cell to explore the 9 points of the film, and the machine will now peg the brightest point, which in this case will correspond to hole number 5. Thus, at each move, the machine pegs the hole which its experience has shown to be the best.

It is not, of course, a machine that invariably wins (such a machine would be fully determined and of little interest), but it is a machine that will play better and better as it acquires experience. Matched against a skilled human player, it would undoubtedly lose, but against a novice an experienced machine would win. It is important to note that two similar machines will not work on an identical programme. There are a number of combinations which will allow success. Each machine will play that which, by chance, allowed it to obtain its first victories. Thus, if a machine A pegged hole number 1 and won, it will always begin with the same hole, whereas machine B may have by chance acquired another tactic and will always start with hole number 5. The machine has, indeed, made a great step forward; it has acquired personality.

There is, however, a weakness in its education. If the man, after playing frequently against the machine, discovers that it always plays the same pattern game,
he can adjust his play to counter the rather rigidly determined game of the machine. From that moment the machine will invariably lose, and so will never be able to register the successes that would automatically modify the "programme". In other words, it will be just as "stupid" as the classical machine and will enter into a hopeless routine that can lead nowhere.

That is not, however, of decisive moment. If the machine is deficient in intelligence, it is because its "memory" feedback is defective, inasmuch as it only functions when the machine wins. If it be endowed with a "memory" of its defeats, it will learn to adapt itself to defeats as well as it does to victories. All that is necessary is to retain the films of the defeats and to explore their perforations in the same way, but now to translate luminosity into a negative current. The mechanism will then be able to take account of defeats just as well as of victories; the positive current will give the number of one type of moves and the negative that of the other. The two will add algebraically and the machine will react to the hole that gives the strongest current.

It is true that the machine will only adapt its play slowly to meet that of an adversary who has discovered its tactics. If, for example, it wins 1,000 times with hole number 1, it would be necessary for it to lose an equal number of times before a change of tactics would be initiated. Although this re-education seems very slow, the machine can correct its behaviour more rapidly if it be provided with an internal mechanism by which the thickness of the pile of winning films boosts up the negative current of the defeats, so that a machine that has won 1,000 times with one hole will efface the memory of this experience if it loses ten times. The machine is thus able to accommodate itself to circumstance by means of a new internal function.

All this would suggest that intelligence is a complex of interdependent functions. The self-educating machine exists so far only in print; we should welcome its construction at the hands of others. In suggesting it, we were concerned to demonstrate that "memory" is a possible faculty of machines and that it is to be understood as a feedback modifying the programme in a permanent fashion. As to the difference between positive and negative retroaction, the problem is easily answered. All
"memory" is due to a positive retroaction. Negative retroaction tends towards an effect of stabilization; in this case the nature of the effect is inherent in the effector and cannot be modified by experience. Those reactions which allow for the correction of deviations may be called tropic; they are retroactive messages tending to maintain a negative equilibrium on the part of the organism in relation to one of the characteristics of its environment. Positive retroaction is very different, in that the retroactive message bears a direct relation to the qualitative value of the resultant. The greater the result, the more intense the retroactive influx which summates and modifies the programme, either progressively or by steps. In this way the mechanical model stores up its experiences until a change in its general reactions occurs. Thus, the machine designed to play the game just described registers the results according to the number of successes. It is thus that we are emotionally affected, pleasurably or otherwise, by previously experienced events.
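The film-stack machine imagined above reduces, in essence, to book-keeping. The sketch below, in the Python programming language, is one possible reading of it, simplified to the choice of opening move and using the invented tally of victories quoted earlier; it is in no sense Davies's determined machine.

    # One possible reading, in code, of the film-stack learner imagined above,
    # simplified to the choice of opening move.  Wins pile up "perforations";
    # in the amended scheme defeats count against; the machine pegs the free
    # hole whose accumulated experience gives the strongest "current".

    wins   = {hole: 0 for hole in range(1, 10)}
    losses = {hole: 0 for hole in range(1, 10)}

    def record_game(opening_hole, won):
        (wins if won else losses)[opening_hole] += 1

    def choose_opening(free_holes):
        """Peg the free hole whose experience has proved most favourable."""
        return max(free_holes, key=lambda h: wins[h] - losses[h])

    # the tally used in the text: of 20 victories, 12 opened at hole 5,
    # 3 at hole 1, 2 at hole 2 and 2 at hole 8 (the remaining one is ignored)
    for hole, count in ((5, 12), (1, 3), (2, 2), (8, 2)):
        for _ in range(count):
            record_game(hole, won=True)

    print(choose_opening(range(1, 10)))     # 5, the brightest "tunnel" of light
    record_game(5, won=False)               # a defeat now dims that tunnel a little
    print(wins[5] - losses[5])              # 11

The further refinement described above, by which a few defeats efface a long run of victories, amounts to weighting the losses more heavily than the wins in the same tally.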
CHAPTER XIII

Ashby's Homeostat and the Fifth Degree of Automatism

If the fulcrum of a lever or the direction of a current is changed, the machine concerned will, of course, modify its behaviour. A watch will stop if the smallest part is missing; a displaced ball-weight in the Watt governor will cause the machine either to stop or to race. If the poles be changed, the A.A. automatic gun aimer will turn away from its objective and the automatic pilot, instead of pointing towards the north-west, will point to the south-east.

Every machine, no matter how perfect, is the slave of its initial determinism. The problem is to find out how a mechanism can escape from its internal constitution. Whether a human being finds himself at the North Pole or in the Sahara, his body temperature will always remain at about 98° F.; and Ashby's homeostat, even with its poles reversed, is able to carry out its programme. Its maker claims that it is the only machine that always does its duty. It is obvious that its construction marks a revolution in the history of machines and a new step in the establishment of the hierarchy of automata.
WALTER CANNON AND HOMEOSTASIS
Walter B. Cannon adopted the word "homeostasis" as the title of a series of lectures given at the Sorbonne from 1930 onwards. He was then acting as guest professor from Harvard University, and these lectures were published in the Cours des Facultés. The concept of homeostasis dates from Claude Bernard, who coined the famous dictum: "The fixity of the internal milieu is a condition of free life." Grey Walter and Ashby like to quote this phrase, and Pierre Vendryès has made it the keystone of his book Vie et Probabilité, a cybernetic work written before the formulation of cybernetics.
Homeostasis may be defined, to quote Walter Cannon, as the faculty possessed by a living organism of maintaining itself in a relatively constant state of equilibrium. It is here not only a question of actual equilibration, but of a persisting tendency to establish equilibrium, which implies that we are dealing with a case of retroaction. Cannon collected in his well-known book The Wisdom of the Body a series of papers written at Harvard University. In these studies he dealt with the stabilization of the properties of the liquid matrix of the body, and with hunger and thirst, considered as means of assuring its nutrition, that is, as retroactors maintaining alimentary equilibrium. His work deals also with the homeostasis of the blood in its many essential constituents, and with the role of temperature, of hydrogen-ion concentration and of the sympathetico-adrenal system in maintaining homeostasis.

Once the possibility of creating cybernetic models of physiological function is realized, it is a natural step to design a mechanical analogue of homeostatic equilibration. William Ross Ashby, Director of the Research Laboratory at Barnwood House, Gloucester, undertook this stimulating adventure: the conquest of a new degree of automatism by the mind. Nearly all cybernetic work has been accomplished by teams in which psychiatrists or neurologists collaborated with electrotechnicians. Ashby worked alone. Higher mathematics had always been his pastime and he had kept in touch with the progress of electronics, so on his own he was able to conceive, calculate and realize the most revolutionary mechanism in the world.

The object that Ashby had in mind in constructing his mechanism is proclaimed by its name, homeostat, by which he sought to convey that his machine could seek and maintain homeostatic equilibrium just like a living organism. If homeostasis were merely a condition of equilibrium maintained continuously, one which did not abolish deviations but tended to neutralize them when they occurred, every refrigerator might be termed a homeostat. But the merit of Ashby's work lies in the fact that his machine can seek and maintain homeostasis; it surpasses the promise implicit in its name. It balances conflicting tendencies towards equilibrium; it chooses between different, and sometimes divergent, forms of homeostasis. In doing this it imitates some of the much more complex
296
a s h b y
’s
h o m e o sta t
an d
fifth
d e g r e e
of
autom atism
isms of physiology which actuate various homeostatic regulators acting on one another to establish the wonderful system that is known as a living organism. This is the nature of the homeo stat; it is a model of the equilibrium achieved by a living organ ism through a complex of equilibrating units, each of which reacts on the other. The homeostat is more than a simple regulating mechanism; it is capable of “ homeostatic” activity, if such a term be permissible, an activity which does not depend on external factors. It imitates the innate activities of animals, even when meeting with opposition; it may be considered to exhibit an artificial instinct. It is true that its activity is of a very simple type, the maintenance of a pivoted object in a certain position. That is all— but Ashby only wants to demon strate the possibility of a condition being maintained by the machine, in spite of all sorts of occasional external irregularities. A
MECHANISM
THAT
IS
LIKE
NO
OTHER
Consider a movable magnet mounted on a pivot within a coil through which current passes. The intensity of the current will cause it to turn more, or to turn less, relatively to its initial position.
A stiff metal wire through the pivot moves with the magnet. This wire is bent at one end and the bent end carries a metal plate dipping into a semi-circular trough filled with distilled water. Two electrodes, one in each end of the trough, apply polarizing EMFs of +5 volts and -5 volts respectively. It is obvious that, according to the position of the magnet, which is regulated by the strength of current in the coil, the plate will be subjected to a voltage of between +5 and -5 according to whether it approaches one end or the other. Now suppose (and here we come to the originality of the system) that this voltage is conducted to the grid of a triode valve and that this triode controls the strength of the current passing through the coil. The current intensity will vary with the potential of the plate and will in this way reflect the movements of the magnet. We have here a system where the current determines the position of a part (the plate) whose position again influences the current: the positioning of the plate acts on the current that acts on the positioning of the plate. In short, the current intensity depends on the current intensity. Here we have an integral feedback, the effector being 100 per cent controlled by its effects.1

But only one unit of the homeostat has been described; the machine is composed of four such units, each one of which is subject to the influence of the three others. The voltage of the grid of element A controls not only the current of the coil A, but also that of the coils B, C, and D. More precisely, the output current of the triode is sent to the four coils, each of which is subject to four different circuits, of which three come from the other units and the fourth is influenced by the activity of its own unit. If, for example, the magnet A turns through a certain angle on account of a stronger current received from B, its plate will be subjected to an alteration of voltage which will not only modify the current intensity of A, but the current sent from A will affect that in the coils B, C and D. In all the units, the position of the magnets, and consequently that of the plates, will be affected and this will modify the influence that B, C, and D have on A and on each other.

1 Ashby uses the term “self feedback” in this connexion. The expression does not seem particularly fortunate, as all feedback circuits act on themselves and might therefore be qualified thus. We should propose the term “integral feedback.”
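To fix ideas, the closed loop of a single unit can be put into a few lines of simulation. This is only an illustrative sketch and not Ashby's circuit: the triode is reduced to an assumed linear gain, the magnet and plate are lumped into one first-order lag, and every name and number in it (run_unit, the gain, the relaxation rate) is invented. It shows only the logical shape of the loop: the plate voltage sets the coil current, and the coil current drags the plate, and hence that same voltage, towards a new value.

```python
# Minimal sketch of one homeostat unit (gains and time constants are assumed).
# plate voltage -> triode grid -> coil current -> magnet/plate -> plate voltage

def run_unit(triode_gain=-1.5, steps=40, plate_volts=4.0):
    trace = [plate_volts]
    for _ in range(steps):
        coil_current = triode_gain * plate_volts        # grid voltage drives the coil
        target = max(-5.0, min(5.0, coil_current))      # the trough limits the plate to +/-5 V
        plate_volts += 0.5 * (target - plate_volts)     # magnet relaxes towards the new position
        trace.append(plate_volts)
    return trace

print(["%.2f" % v for v in run_unit()[:8]])
# with this negative gain the plate voltage dies away towards zero;
# reverse the sign of the gain and the plate drives itself to an end stop of the trough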
Diagram of the inter-connexions of the four homeostat elements

Any change in one of the units obliges the others to adapt to it, and the adaptation of each will oblige all the others to react. There results a whole flood of actions, reactions and interactions which will be beyond the powers of any reasoner attempting to establish a logical scheme of the chain of events. It is a true complex of feedbacks. The continuous motion of the plates of the four units in their glass troughs suggests the behaviour of caged wild animals. It is a matter of conjecture whether they will ever attain a state of general equilibrium. Each unit seeks equilibrium for itself as well as for all. It may well be that the search will prove as endless as the labours of Sisyphus and the daughters of Danaus, the ordeal of aiming to reach a goal without ever being able to attain it. Up to now we have been studying mechanisms with all their factors determined, except the one subject to retroaction; but this machine has no single determining factor.

SPARE “DETERMINISMS”
Here then is the revolutionary import of the homeostat. Unlike other machines subject to simple determinism, it has a complex of factors acting on the effector which determine its effect— hundreds of thousands of spare determinisms: to be exact there are 390,625! One might imagine a motor-car capable of adjusting itself to suit circumstances, giving itself more or less throttle, changing gear or advancing the spark. Such is the homeostat. When a deterministic factor prevents it from achieving its intended result, this goal-seeking becomes so potent that it reorganizes the mechanism and overthrows all the primary guiding data. The feedback, in short, acts on the complex of determinisms in the same way that in the fourth degree it regulates a single factor.

Every time that in one of the four units of the homeostat the electromagnet undergoes too great a swing, the determinism of this unit is changed. This is not the effect of magic, but simply because the current related to the position of the magnet acquires a certain voltage in one or other direction, and this voltage determines the automatic passage of a contact in the selector which is an essential part of the system. Each selector has the choice of 25 positions, of which only one is connected at a time (it is known as a uniselector). Each of the 25 positions corresponds to different values in the resistance and capacity of the circuits. These values have purposely been left to chance selection. In fact, to be certain of their being quite at random, Ashby took their numerical values from the Fisher and Yates tables of random numbers, thus ensuring that there should be no functional relation between the reactions of the machine and the conditions that it might encounter. In order to imitate living conditions, the homeostat is made to behave like an animal that reacts to an unfavourable environment by removing itself, without necessarily knowing the nature of the conditions it is likely to find in the place it is going to. If there be a very great swing of the magnet, the uniselector of this unit will establish new conditions for the whole mechanism— the four magnets will seek to regain their equilibrium. If this cannot be attained, another magnet, or perhaps the same one, swings from its central position and a new arrangement of the circuits is effected, with new attempts to establish equilibrium; and so it goes on. As each selector has 25 positions, each element of the homeostat is subject to 25 determinisms and the whole mechanism has at its disposal 25 × 25 × 25 × 25 = 390,625 possible arrangements. The possibilities of equilibration that these numerous determinisms offer are explored one after another, but strictly in chance order, without method, one determinism being replaced by another each time the machine fails to achieve the end sought. Inevitably, the homeostat ultimately achieves equilibrium; it may be after some hours or even days of trial; it may do so with ease, or with difficulty, smoothly or jerkily,
methodically or neurotically. Ashby says it depends on the mood it finds itself in. But it will always find a way; it will always come to rest finally with its four plates in a central position in each of the four troughs. It will succeed even if its mechanism is subject to serious interference. Ashby invites visitors to the quiet and spacious laboratories of Barnwood House to come and try some tricks on the homeo stat : to set traps for it and to try to get the better of it. Every thing is permissible, short of bashing it with a hammer. One can disconnect one or two units, or paralyse one or many of the plates by putting obstacles in their way, or by changing the poles of the current. The homeostat will always be able to adapt its internal organization to the new situation. It fulfils the purpose that Ashby had in mind: to construct a model of the animal’s power of adaptation to its environment. THE
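The procedure just described can be caught in a short simulation: let the four interconnected feedbacks run, and whenever any magnet swings past its limit, have that unit's uniselector draw a fresh set of coupling values at random, as if from the Fisher and Yates tables. The sketch below is schematic only; the linear couplings, the limits and all the numerical ranges are assumptions (as are the names seek_equilibrium and random_row), but the logic of blind re-selection until a stable arrangement appears is the one the text describes.

```python
import random

# Schematic four-unit homeostat (all numerical values are assumed).
# x[i] is the plate voltage of unit i; each unit is driven by a weighted sum
# of all four plates, and a unit's weights are re-drawn at random by its
# "uniselector" whenever that unit swings beyond its limit.

N, LIMIT, DT = 4, 1.0, 0.1

def random_row():
    return [random.uniform(-1.0, 1.0) for _ in range(N)]

def seek_equilibrium(max_steps=100_000, seed=0):
    random.seed(seed)
    weights = [random_row() for _ in range(N)]          # the current "determinism"
    x = [random.uniform(-LIMIT, LIMIT) for _ in range(N)]
    reselections = 0
    for step in range(max_steps):
        x = [xi + DT * (sum(w * xj for w, xj in zip(weights[i], x)) - xi)
             for i, xi in enumerate(x)]
        for i in range(N):
            if abs(x[i]) > LIMIT:                        # too great a swing
                weights[i] = random_row()                # the uniselector steps
                x[i] = random.uniform(-LIMIT, LIMIT)
                reselections += 1
        if all(abs(xi) < 0.01 for xi in x):              # all plates near the centre
            return step, reselections
    return None

print(seek_equilibrium())   # (steps taken, number of uniselector re-selections)
```

Different seeds settle after very different numbers of re-selections, which is the mechanical counterpart of the machine finding its rest sometimes smoothly and sometimes only after long, jerky trials.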
THE MACHINE WHICH FINDS ITS OWN WAY
It may now be legitimate to consider the homeostat from the standpoint of logic. To make such a logical analysis is a matter of difficulty; and this would be true if there were only a single unit to be considered, even if the uniselector were ignored. The effect or result may be considered to be indicated by the position of the plate in the trough. This position depends on one factor only— the current in the coil. But the strength of this current is governed by the plate acting through the more or less polarized grid of the triode. This is, then, an instance of the “fourth degree”. This feedback has very considerable corrective powers; it depends on a highly efficient factor. A single element of the homeostat finds its equilibrium when retroaction cancels out— i.e. when the effector has no further retroactivity. The four elements will eventually find a mutual condition of equilibrium. Such a mechanism, however, would hardly permit the striking results achieved by Ashby, who can interfere with the directional factors without changing their resultant. The correction that can be applied by an ordinary feedback, even through a highly effective factor, is not enough; for a feedback to act thus it must be endowed with an action that can be varied according to circumstances. In other words, it needs to be a reactor with a non-uniform power of correction. We
purposely omitted this variability in order not to complicate the picture. Thus it might react in proportion to the deviations of the effect, or to its acceleration, or it may react either in direct relation to the effect or in a non-linear relation, and so forth.
The technique of servo-mechanisms utilizes an infinity of different modes of reaction. Specialists in servo-mechanisms are able to calculate the most efficient type of retroaction for any one case, but they are necessarily confined to one problem at a time, whilst the homeostat attempts to solve a very large number. It chooses the type of feedback that will be efficient after having first explored the capabilities of a chance selection; this constitutes the fundamental originality of the system. The uniselector constitutes a highly specialized reactor to the feedbacks, which, receiving messages whose intensity varies with the size of the deviations, acts on the capacity and resistance factors in a random fashion. The homeostat is not obeying a variety of laws but groping for a variety of solutions. This really comes to the same thing, for though the direct recourse to various solutions seems to ignore rigid laws, the final and permanently adopted solution is that which ensures stability, for the obvious reason that instability implies change. All this is true for a single element of the homeostat. The uniselector, however, only gives 25 solutions and the mechanism may very well find that this number is of no avail against the unpredictable disequilibrium that the external contingency— the experimenter— may cause. With four elements, however, it is a different story; the number of solutions at the disposal of the mechanism for dealing with contingent disturbances will be somewhere near 400,000.
Experience shows that this is sufficient and the homeostat is even capable of compensating for an inversion of the polarity of the current. The homeostat is able to control the many determining factors both by retroaction and by various minor corrections of direction and amplitude. Fourth-degree mechanisms control but a single factor; fifth-degree mechanisms control a very great part of their determining factors. There is a gradation rather than a sharp division between the two degrees. The fifth degree, then, exerts a double retroaction, but it should be understood that the two factors outlined symbolize the entire deterministic system.
Thus, the fifth degree deals with a new component of activity— determinism. The homeostat has attained the requisite knowledge to answer the problem of “How to act”. The “How to act” of man, when the target is defined by an end-goal, is the choice of a path leading to that target; which path to choose, if one wishes to mount one height and not another, typifies the problem. The homeostat illustrates that this is in fact its sole concern— to find the path and, if necessary, to try out thousands of possibilities. At present it is the only machine that needs no “programme”, no cam, no pianola rolls. It has an aim, given it by the operator, and a choice of many ways of attaining it, which is also given by its master; then it chooses the most adequate means of arriving at its goal. If this automatism is thought of in terms of animal behaviour, it postulates the stage of instinct; that is, a sort of intelligence directed to a given target, the persistent trial of means of
attaining the target. The bee flies towards a flower; it has to get there as soon as possible— perhaps immediately, perhaps in an hour; flying to right or left, above or below, on a straight course or zigzag, after one trial or many, one thing is certain, it has to go on. This is instinctive activity. This is the activity of the homeostat which, through contingency, inevitably finds its equilibrium. There is, in any case, an undoubted physiological correspondence between the homeostat and the behaviour of a decerebrate frog when a drop of acid is painted on its back. It attempts to wipe off the acid with its hind-leg. If this leg be paralysed, it attempts to use the opposite leg and if this is also immobilized, a fore-leg is used, and if this in turn be paralysed, it tries to accomplish the reflex act with the other fore-leg.
ARTIFICIAL EDUCATION
Ashby shows us by an amusing demonstration that the homeostat is capable of a primitive educational process. He reminds us first that his machine illustrates the adaptive behaviour of animals; he defines adaptive behaviour as “equivalent to the behaviour of a stable system, the region of the stability being the region of the phase-space in which all the essential variables lie within their normal limits”. The way by which the organism attains this perpetual miracle— so well formulated by Claude Bernard— had always been an enigma until it became apparent that the fundamental characteristic of a goal-seeking mechanism was feedback. It is not enough, however, for it to seek the goal— or even to find it: it must stabilize what it finds. It must register the distinctive feedbacks, so as to avoid clumsy hesitations later on. The homeostat reveals a rudimentary form of this capacity, which is nothing else than the faculty of learning. When a small boy has his ears boxed each time he goes to the cupboard where the sweets are kept, he will soon cease to go there. This, says Ashby, is education, and he proposes to “educate” the homeostat. First, Ashby reduces it to two units. Unit A represents the animal and unit M the external environment. The two plates are in a central position. Let it be supposed that this position constitutes a physiological optimum— a “preferred” position. It is easy to show that this is one of
stable equilibrium. If the plate M be moved by the finger from its central position, the element A representing the animal reacts to this alteration of the environment M; it reacts so as to re-establish the optimal conditions which represent the equilibrium of the animal and its environment, and the central position of the two plates is recovered. This is indicated at point 1 in the diagram, in which the movements of A and M are shown above and below the horizontal line.
[Diagram: movements of A (animal) and M (environment) in time; 1, movement of M when the system is stable; 2, inversion of A and M; 4, further movement of M.]
If now the equilibrium of the system is upset by inversion of the relations of A and M (as seen at 2), the animal is given a new environment. The reaction of A, that was formerly a right one, now becomes a wrong one— just as if in a car the action of the steering wheel was interfered with so that the steering directions were reversed. For the mechanical model this is analogous to the experiment of Sperry, in which the flexor and extensor tendons of a monkey’s arm were severed and then crossed over, so that the extensor muscles were united with the flexor attachment and the flexors with the extensor attachment. After a time the monkey adapts its movements to the new conditions. If we investigate the behaviour of the machine, we find that the homeostat reacts vigorously to this interference and the plates are displaced further and further from the central position, which brings the uniselector (3) into action. The first, second or third reactions of the uniselector bring about a stable situation. If now the plate M is turned a little (4), A will so react as
to re-establish equilibrium immediately. But this new way of reacting will be the reverse of the first, where A reacted to M by movement in the same direction; it now reacts by movement in the opposite direction. Or, as Ashby puts it, the animal has been disturbed from its equilibrium because it has been given an environment with an inverted reaction. It has been obliged to invert its own reactions to compensate for the external situation.
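A toy rendering of this two-unit experiment makes the inversion visible. In the sketch below every coupling, limit and gain is invented for illustration (as are the names settle and A_GAINS): the environment M acts on the animal A through a fixed coupling whose sign the experimenter can reverse, while A answers M through a gain that its uniselector re-draws at random whenever the pair swings out of bounds. In this assumed model the loop is stable only when the two signs oppose one another, so inverting the environment forces A to invert its own reaction.

```python
import random

# Toy version of the "animal" A adapting to its "environment" M (all values assumed).
random.seed(3)
A_GAINS = [-0.8, +0.8]                 # the two settings of A's uniselector
a_gain = random.choice(A_GAINS)

def settle(m_to_a, max_steps=5000):
    global a_gain
    reselections = 0
    a, m = 0.0, 0.8                    # displace M from its central position
    for _ in range(max_steps):
        a += 0.2 * (m_to_a * m - a)    # A reacts to M
        m += 0.2 * (a_gain * a - m)    # A's reaction feeds back on M
        if abs(a) > 2 or abs(m) > 2:   # too great a swing: step the uniselector
            a_gain = random.choice(A_GAINS)
            a, m, reselections = 0.0, 0.8, reselections + 1
        if abs(a) < 1e-3 and abs(m) < 1e-3:
            break                      # both plates back near the centre
    return a_gain, reselections

print(settle(+2.0))   # normal environment: comes to rest with a negative gain
print(settle(-2.0))   # inverted environment: rest is only found with a positive gain
```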
THE MACHINE THAT RECEIVES PUNISHMENT
Now that we are clear as to the mechanism, we can consider a third unit. The animal is now represented by a combination of three units. We may call S the sensory unit, E the unit which is to receive the training, and R the unit on which the external contingencies act. These external contingencies may be represented here by the human operator. Suppose the operator has decided that unit E shall be so trained that when S is affected by a stimulus in one direction, E will respond by a movement in an appropriate direction; thus, if the plate of S is moved down, the plate of E is obliged to move upwards; just as if a small boy had been prohibited from approaching the cupboard where the jam is kept. In terms of the logical resultants, S is given a factor that can be adjusted; E is the factor that it is desired to train in a certain course of action whose deviations are to be suppressed; R is the factor on which a feedback reacts, conveying a message of deviation. The operator is the feedback and his will determines the plan of action of the whole system.

At S1, let us assume a stimulus to S, with a resultant slight displacement of the plate downwards. E also will move downwards, and therefore wrongly. E has behaved badly; this movement is forbidden! The operator corrects the element R by giving it a blow, just as one would smack a naughty child. The disapproval is manifested by a vigorous displacement of the plate R. This results in the engagement of the uniselector, causing a modification of the total organization in the direction of equilibrium. To determine whether the training unit E has learnt its lesson, a new push is given to plate S. The movement of E is again in the wrong direction (as seen at S2). A second punishing displacement is administered to E through R and again the uniselector engages, bringing about a new equilibrium.
At S3 another stimulus is given to S and this time E reacts by a displacement in the opposite and desired direction, and now no further displacement of the plate R will be necessary. To make sure that E has really learnt its lesson, a fourth stimulus, S4, is administered to S and it will be seen that E now gives the appropriate displacement. At S5 the plate is moved in the upward direction and the well-trained E responds by a downward movement.
[Diagram: movements in time of S (the sensory unit), E (the unit to be trained) and R (the element on which the deviation acts); the first and second punishments each engage the uniselector.]
The training or education, therefore, amounts to this: the wrong response acts on the internal mechanism of the effector unit until it has so modified it that the response is no longer “wrong”. The principle is equally valid in the case of the animal that learns at the expense of its own experience not to get burnt, or the beaten child that acquires an appropriate conditioned reflex rendering further beating unnecessary. Once again, when the principles of an animal mechanism are exhibited by cybernetics, we are tempted to protest that the explanation is over-simplified. The truth is that although neural mechanisms are appallingly complex, they may nevertheless react along fundamentally simple lines.
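Stripped of the electrical detail, the procedure is a short loop: stimulate S, look at the direction of E's answer, and if it is the forbidden one strike R so that the uniselector re-draws the internal organization; repeat until the permitted answer appears, after which it persists. The sketch below is deliberately abstract (the uniselector position is reduced to a single response direction, and every name and value is assumed), but it is the same loop.

```python
import random

# Abstract sketch of "education by punishment" (every detail here is assumed).
# A uniselector position is reduced to the one fact that matters for the lesson:
# whether E answers a downward push of S by moving down (forbidden) or up (permitted).

random.seed(0)

def draw_position():
    """A fresh internal organization, drawn blindly as from random-number tables."""
    return random.choice(["down", "up"])

e_response = draw_position()
punishments = 0
while e_response == "down":            # E behaves badly: this movement is forbidden!
    punishments += 1                   # the operator strikes R ...
    e_response = draw_position()       # ... and the uniselector re-draws the organization

# Afterwards the lesson holds: every further push of S meets the permitted response.
print(f"permitted response reached after {punishments} punishment(s)")
```

With only two possible responses in this reduction the lesson is learnt almost at once; what it preserves is the point made in the text, that the correction acts on the machine's internal organization rather than on the response itself.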
DESIGN FOR A BRAIN
The paper in which Ashby communicated the essential details of his homeostat was published in a serious technical
journal.1 Nevertheless, it bore the highly sensational title, “Design for a Brain”. This must not be taken as evidence that an exceptionally stable and thoughtful scientist has been infected with journalistic exhibitionism. Ashby is convinced that the mechanism that he has created marks an advance in our attempts to understand cerebral activity. He believes that the homeostat really exhibits some activities shown by the brain; that it exhibits the power of the brain to govern itself, correcting every deviation from an optimum mean and, above all, the brain’s power of seeking this goal by a great variety of methods— a power that had hitherto been assumed to be beyond the possibilities of machinery.

It may well be asked if the homeostat and the machines that will one day follow its lead have any practical utility. At first sight it would not seem so. Grey Walter, who sees Ashby each week at the Burden Neurological Institute in Bristol (for Ashby also directs a research department in this Institute), chaffingly defined the homeostat as “a machine designed to do nothing”. This criticism might seem to have some validity, but the answer is that a mechanical novelty has resulted which can link up any conceivable factor with a fixed goal by way of complex paths and connexions whose formation is due to the fact of the linkage itself. The homeostat works through the exploration of possibilities and the sifting of eventualities. The machine itself cannot “know” the best solution of its problems, so it tries, either systematically or at random, all possible solutions. If one day a bridge- or chess-playing machine became an actuality, it would be a descendant of the homeostat. Its effects are not “determined”; they are the effects of “organization”. It is the goal-seeking effect— in this case “victory”— which steers it. The machine is confronted by n possible actions. It explores the n actions. Each one of them may entail m thrusts, of which each is susceptible to multiple parries. The machine explores each series of actions and indicates the series that may lead to success. It may even compare the length of the indicated series with the importance of the gain, and eventually it may perform the action that is most certain to succeed. Electricity enables the machine to deal with all these intricacies.

1 Electronic Engineering, Dec. 1948.
Just as water from an underground source always finds a free outlet— and moreover the shortest path, avoiding obstacles, detours and barrages and the dangers of soil absorption— so the electric current chooses the shortest path. The best path for the current will be that which it will find permanently open for it, just as the chosen course for the river will be that which has the least obstructions. Like the machine, the torrent does not “know” which path to take, but it invariably follows the one offering the least resistance. Whilst we may admit that an electric current will seek and find the best among many possible paths, we still find it difficult to admit that thought can only proceed after similar gropings. We assume that it “knows” whither it is going. The apparent clumsiness of the solution offends us. We will not allow that thought scans all the horizon before it settles on the obvious path. Still, if we think of it, imagination really implies such action, and likewise scientific hypothesis. It is the slowness of the reactions of the homeostat that makes it difficult for us to regard the exploration of eventual possibilities as a mechanism of thought. We see the homeostat hesitate, we see it explore a number of solutions one after the other, and we find that this takes too long. The explanation is simple: the homeostat is an electro-mechanical device; it is a machine that is retarded by the inertia of its mechanical elements. If it were an electronic machine and explored in each second as many possibilities as the scanner on a television screen, and if it gave its decision in a few seconds, or if it could find the correct move in a game of chess no more slowly than a human player, then we would be unaware of the mechanical action— the invisibility and the instantaneous nature of the electronic phenomena would preclude this— and we should have little difficulty in accepting this process as akin to that of thought. If we conceived an electronic homeostat in place of an electro-mechanical one, a mechanism that would equilibrate, not moving electromagnets, but voltages, our viewpoint would be changed. If such a machine were found to deliver a rapid judgement on a very complex problem involving the exact balance of “for” and “against”, it would be easy to believe that its processes resembled those of the human brain. Ashby has realized such a machine, which he terms the
dispersive-and-multistable system: DAMS. In this there is no visible movement; nothing but electronic reactions. Only voltages are measurable. Equilibrium is achieved, not between metal needles with inertia, but as the voltage between different units. Variations in voltage are not retarded by any inertia of practical importance. Such a system is designed to have a hundred units instead of the four of the original homeostat. It is a question as to how many different “organizations” will enter into each element. One can imagine the prodigious number of possibilities that such a machine could review in the same time that a man would take to develop the main thread of an argument. The new universe that would be created by such machines consisting of thousands of units is hardly conceivable. But Ashby already considers that the present DAMS machine is too simple and is planning another with even more complex action. Unfortunately its construction would be an extremely costly undertaking and is not to be envisaged for the present. It may be that some co-operative alliance between binary calculating machines and machines of the DAMS type is possible. It would seem to be a natural sequence to integrate fifth-degree mechanisms with the complexes of machines with receptive units and servo-mechanisms with retroactive regulation. The resultant machine would have advantages over human intelligence such as can scarcely be imagined.
TOWARDS MACHINE GOVERNMENT
Ashby is not the man to let his imagination run away with him— on the contrary, he is a most sober scientist. Yet, all the same, he tells us of miraculous machines that might govern the world; his sole proviso is that they will only become actualities in a distant future. The present homeostat achieves equilibrium between contrary and independent urges that are only those of electric currents, but it may be that in the future, instead of being fed with abstractions, the machines will deal with concrete problems. Such machines will undoubtedly find appropriate solutions. All that is necessary is to present the problems in the shape of electric data. The machine of the future, says Ashby, will be able to explore domains far too complex and far too subtle to be envisaged by human intelligence.
After the calculating machine will come the machine for weighing the pros and cons of a situation— the intelligent machine. Ashby alludes above all to economic and political problems, which often baffle the expert. For example: in order to fix the price of butter, the authorities have to consider a number of things: the net cost, the volume of production, the purchasing power of the consumer, the policy of the producers, the wholesalers, the retailers, the demands of political parties and trade unions, the requirements of international markets, &c. If such a machine were to be given all these considerations it would be able to assimilate them and give a balanced price. It would certainly make as good a job of it as the bureaucrats.

Certain people prophesy the advent of a universal machine civilization. No government, however, could dominate the economics of the entire world. Napoleon on a battlefield could take in the whole situation at a glance; nowadays military commanders are overwhelmed by the extent of their field of action. When it becomes a question of governing mechanical continents, even the greatest genius fails to grasp the entire situation. It will be then that the machine will govern mechanical civilization. Such a machine, according to Ashby, will be fed enormous tables of statistics and masses of scientific facts, so that after a time it will be able to provide huge quantities of intricate directions dealing with all the elements of the problem. These instructions might seem, perhaps, to be disjointed and incomprehensible to the human beings who would have to consult the machine. Mankind would have to obey the new Sibyls blindly, without attempting to understand them, and little by little it would see its difficulties vanishing. The interplanetary government of centuries to come will be represented by a body of experts under the guidance of fabulous machines. The machines will govern the experts who made them and the experts will govern the masses in the name of the sacrosanct and infallible machines. Even H. G. Wells did not foresee this!
CHAPTER XIV

At the Level of Human Functions

What limits the machine’s capabilities? The most highly evolved machine, the homeostat, has only one limit, and that is the limit imposed by man. But is a machine of the sixth degree, master of its limitations, conceivable?

We hope to show elsewhere that there is a level of nervous activity corresponding to each degree of automatism, in ascending order: reflex arcs, mesencephalon, diencephalon and, crowning all, the cerebral cortex which, in its power to change and influence the end-goal, represents the sixth degree. To attain this degree would seem the unique privilege of man— or so it seemed until we restudied the question under the leadership of Dr. Jacques Sauvan, who has influenced much of what follows. We came to the conclusion that this estimate of the possibilities of the machine had been far too timid and that an artificial effector of the sixth degree is realizable.

The first three degrees of automatism only yield activities that may be logically classed as determined. They are to be looked for only within the province of the machine; they can be evaluated only from an anthropocentric point of view, being conditioned by the operator who makes cunning use of both the rigid and the flexible properties of the mechanism. With the fourth and fifth degrees, organization begins and the machine starts to augment its potentialities by the retroaction of its resultant effects. The operator regulates a “determined” type of machine by his action on one of its factors. So far as such a machine is concerned, the operator represents contingency; as far as its effector system is concerned, it is governed by a directive coming from another effector. If now this directive has its origin in the effect,
the government is internal to the system. Thus the machine now becomes one of the fourth degree. But this machine which governs itself still depends on the environment with regard to its other factors and its determinism. If, now, the effect assumes the power to choose the form of determinism that guarantees its continued existence, determinism becomes an internal factor of the system and the machine has passed to the fifth degree. But the operator still gives something to this fifth-degree machine; he gives its action the reference to the feedback. This reference is, as we know, the value of the effect for which retroaction becomes zero, the value where the feedback no longer sends any message.
In the homeostat the zero point is that in which the grid of the triode is subject to zero voltage, in which the negative and positive voltages of the respective ends of the trough are balanced and the plate is in a central position. In the above diagram the reference is symbolized by an arrow coming from the external environment; it represents the correction to which the machine has been subjected in order that the plate should reach equilibrium in the centre of the trough; it represents, therefore, the end-goal of the system. One might conceive a homeostat whose end-goal would be to set its plates at 15° to the right or 10° to the left. This could be effected by modifying the voltage at one end of the trough or by modifying the polarization of the triode grid. Suppose that we externally modify the grid polarization to -2 volts, through the end-goal influence. In order to obtain the zero net polarization necessary for the system to reach equilibrium, the plate would have to be in such a position that it would produce a voltage of +2 volts in the grid, so as
to compensate for the environmental voltage. In such a case we have realized a servo-mechanism; the position of the plate, that is to say the end-goal of the machine, varies with the voltage imposed on the grid by the environment. If, instead of this, the voltage is imposed by internal conditions, the mechanism will once again exercise its power to control what was up to then an external voltage. In other words, we will then have a mechanism of the sixth degree.
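The arithmetic of such a displaced end-goal is easily checked. In the toy model below, the linear plate-to-voltage law, the gain and the sign conventions are all assumptions made for illustration (as are the names plate_voltage and settle): the loop only comes to rest where the net grid voltage is annulled, so an external bias of -2 volts obliges the plate to settle at whatever position contributes +2 volts, away from the centre of the trough.

```python
# Toy illustration of moving the homeostat's end-goal with a grid bias
# (the linear plate-to-voltage law and all gains are assumed).

def plate_voltage(position, volts_per_unit=1.0):
    """Voltage the plate picks up between the +5 V and -5 V electrodes;
    position 0 is the centre of the trough."""
    return volts_per_unit * position

def settle(bias_volts, gain=0.5, steps=200):
    """Relax the plate until the net grid voltage (plate + bias) is annulled."""
    position = 0.0
    for _ in range(steps):
        grid = plate_voltage(position) + bias_volts
        position -= gain * grid        # the coil drives the plate so as to cancel the grid
    return position

print(settle(0.0))    # no bias: the plate rests at the centre (0.0)
print(settle(-2.0))   # -2 V bias: the plate rests where it supplies +2 V (position 2.0)
```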
THE MULTISTAT, A MULTIPLE GOAL-SEEKING MACHINE
Let us no longer draw the directive arrow of the goal to be achieved as coming from the external environment, but let us instead make it dependent on the effect, one feedback acting on the other feedback as in the diagram.
Such a system would not have many end-goals; the plate would be immobilized in a new position, but it would still have only one position of equilibrium. This, then, is not a sixth-degree mechanism. The behaviour of the mechanism depends, moreover, on the plus or minus signs of the two retroactions. The more interesting combination, which occurs often in nature, is that of a principal positive and a secondary negative retroaction. It is this combination that we find in the atomic pile, where a negative retroaction prevents the disintegration favoured by the positive retroaction from exceeding a certain maximum value. A retroactive effector whose reference depends on the effect itself does not give a mobile end-goal; it merely displaces the fixed point of its goal. Thus, under such conditions, the sixth degree is apparently inconceivable.
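A one-line calculation shows why feeding the reference back from the effect only displaces the fixed point. Suppose, purely for illustration, that the feedback drives the effect e towards its reference r, and that r is itself a fixed function of the effect, r = a + b*e (a and b being assumed constants with b less than 1); the loop still settles at the single value where the two agree, e = a / (1 - b).

```python
# Minimal numeric check (a, b and the gain are assumed): a reference that
# depends on the effect, r = a + b*e, still leaves a single resting point.

def settle(a, b, gain=0.3, steps=500):
    e = 0.0
    for _ in range(steps):
        r = a + b * e          # the reference is itself fed by the effect
        e += gain * (r - e)    # the feedback drives the effect towards the reference
    return e

print(settle(a=1.0, b=0.5))    # ~2.0: the goal has merely been displaced, not multiplied
```

Whatever values of a and b one chooses (with b below 1), there is still only one equilibrium; a second effector, coupled back on the first, is needed before several resting points can coexist.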
The directive giving a variable reference cannot come from another effector outside the system. But if one makes this other effector dependent on the first, a system of two effectors which would have a changeable end-goal might be realized. Before attempting to translate this possibility into concrete reality, it may be well to sketch the logical plan of such a combination. The effect of the effector B acts on the directive of A. This influence would be contingent to A if B were independent of A; however, the directive reference of B depends on the effect achieved by A. We have now created a closed system with two effectors. (Practically, a number of effectors could be thus combined.) In the homeostat a multiplicity of effectors creates a complexity that is indispensable for our imitation of vital functions, but the logical plan of the fifth degree is conceivable with a
single effector. To produce a mechanism with a multiplicity of points of equilibrium, logic requires the existence of many effectors. This is the key to the whole problem. A single effector can only have one point of equilibrium for each sort of contingency affecting its feedback; or, to put it more precisely, it can only change its equilibrium on receiving an order from a
source exterior to the system. A complex of effectors with interacting retroaction can attain equilibrium for many positions of its effectors. A system with n effectors may produce a feeble response to one effector and a strong response to another; or, equally, it may give the contrary result. Here we have a sixth-degree system, each effector of which can change its end-goal in the interests of the global equilibrium of the whole system.

Starting with the homeostat it should be easy to realize such a machine. If in each unit the polarization of the grid depends on the position of the plate, we have a mechanism capable of attaining its equilibrium in many ways: we may call it a multistat. The position of a plate influences not only the grid of its own unit but those of the other units. Thus, when the plate of A is on the right, say at a voltage of +3 volts, it will change the polarization of B, C and D. In each of these elements, for the voltage of the grid to be annulled, it will be necessary for the plate to settle to a voltage which is not zero (i.e. for the plate not to be in the centre of the trough). Thus a machine of the sixth degree— a multiple goal-seeking mechanism— can be realized.
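The cross-coupling can be caricatured in a dozen lines. The sketch below is not a model of the circuit just described: it assumes a saturating trough (the plate voltage is clipped to plus or minus one) and coupling weights chosen only so that several rest states coexist, and the names relax and clip are invented. But it shows the essential point, that once each unit's zero is set by the others, the same machine can come to rest in more than one configuration.

```python
# Crude caricature of a "multistat": each unit's plate settles where its own
# voltage cancels the bias contributed by the other plate. The saturating
# trough (clip to +/-1) and the weights are assumptions made purely to
# exhibit several coexisting rest states, not a model of the real circuit.

def clip(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def relax(start, coupling=1.5, rate=0.3, steps=500):
    """Two cross-coupled units; each is driven towards the (saturated) bias
    set up by the other, so the pair has more than one stable configuration."""
    a, b = start
    for _ in range(steps):
        a += rate * (clip(coupling * b) - a)
        b += rate * (clip(coupling * a) - b)
    return round(a, 3), round(b, 3)

print(relax((+0.1, +0.2)))   # -> (1.0, 1.0): one resting configuration
print(relax((-0.1, -0.2)))   # -> (-1.0, -1.0): a different, equally stable one
```

Which configuration the pair adopts depends only on where it starts; that is the elementary sense in which such a complex can be said to choose among several end-goals.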
EQUILIBRIUM: THE ULTIMATE GOAL
An argument that might be urged against the foregoing is that the end-goal of the machine has not really been changed, since this machine tends constantly to annul its retroactions and ultimately always has the same end-goal: that is, to annul the voltage on its grids. The only real change in end-goal would be if the machine sought to establish a voltage of, say, 2 volts; if it sought to induce a certain strength of current in its coils instead of working to make it zero. The answer is that such is an impossibility— that it is definitively or logically impossible. No mechanism operating by negative retroaction can tend to give its messages a fixed value differing from zero. There is a supreme goal common to all these systems: the search for a state of equilibrium. This might be termed the “final cause” of all things if it were not that this term has a special meaning. For all this, it really is a case of “final cause”, of an end that takes account of all that tends towards it.
This tendency to equilibrium which governs the universe may be designated as the ultimate goal. This idea of an ultimate goal is common to all effectors or effector systems. It is the search for equilibrium between the effects and the internal and external factors that give rise to these effects. Even positive retroactions tend towards an equilibrium. But their point of equilibrium is at infinity; it can never be attained because sooner or later one factor will act as a brake upon it.

From now on, it should be easier to grasp the hierarchy of effectors and to appreciate the true significance of their various levels. The effectors of the first three degrees attain their ultimate goal because of retroaction affecting one of the factors, or, in the case of positive retroaction, they continue to seek it until they reach the limiting level for this particular retroaction. The effectors of the fourth degree attain their end-goal by means of a retroaction applied to one of the factors (or, when the retroaction is positive, they continue to the point where this retroaction will be limited). The effectors of the fifth degree act in the same way as those of the fourth degree, but on the totality of their factors. The effector complexes of the sixth degree are profoundly different from the others— the superior equilibrium is found by the aid of multiple states of disequilibrium in simple effector systems. Here we reach the level of the higher animals. To anyone who maintains that a certain machine is an artificial brain, we may reply that the brain itself is a sixth-degree mechanism.

The regulators of the fifth degree may modify a factor in order to maintain their state of equilibrium. At the fifth degree the homeostat changes its programme in order to maintain its state of equilibrium. At the sixth degree, both a multistat and a higher animal modify some of their multiple activities in order to attain their ultimate goal; they adapt their programme to their results. This is a true type of human mechanism: to act according to one’s means, just as much as to adapt one’s means according to one’s ends. It might be said that the god of machines endows them with the means to attain their final goal— to find their equilibrium. The multistat and the higher type of animal have the choice, or it may be called “liberty”, to reach their goal through many diverse channels. When an organism acts in any fashion whatsoever, it invariably does so
in order to attain its equilibrium— its ultimate goal. But this difference would cease to exist after the birth of Machina liberata.

The crucial word “liberty” has been introduced. The hierarchy of effectors dealt with in this book may be interpreted as the conquest of liberty. A completely determined effector can make no choice; it is only a predetermined effect that is sought by its factors. Liberty does not exist here. At first an organic effector acquires a certain degree of liberty with respect to a certain factor, and later on with respect to its predetermined programme. An effector with multiple points of equilibrium reaches a much greater degree of liberty; the effect by which it attains the ultimate goal is not predetermined for it. The most surprising fact in all this is that it is achieved by very simple means; if two results are coupled “head to tail”, an internal activity has been given to the effector, which will necessarily be freed to that extent from the influence of the environment.
This diagram simply represents a retroaction where such factors as are not implicated in the closed circuit represent contingency. To attain the sixth degree of automatism, two of these retroactive effectors have been selected and coupled “head to tail” (see diagram page 315). This is why nature is not a matter of chance; and why living organisms attain that persistence of internal equilibrium which Claude Bernard recognized as the mark of their independence in face of external contingency. This cardinal secret, which has been alluded to so often in different guises, has never been thought of in such simple terms: an effect becomes more or less independent in the degree to which it acts upon itself.

Stéphane Lupasco, a contemporary logician, using very
different ways of approach, appears to have arrived at a similar conclusion. He postulates “a principle of antagonism” which he claims to be the key to the universe— in fact the universal key. Lupasco, in a number of books,1 claims to have demonstrated a “fundamental dualistic contradiction”: “Every phenomenon tends to resolve itself into a duality, an analytic opposition of a logical contradictory nature; in each phenomenon is implicit an antiphenomenon which is essentially united to it.” This “contradiction” would appear to be the second functional accompaniment necessary to all organisms. Some discussions between Lupasco and the author convinced both of the fundamental convergence of their views. The logical analysis of the principle of contradiction claims that the absence of contradiction in any phenomenon is inconceivable. The logic of effects and the law of dual function, when applied to concrete events, show that the absence of multiple functional relations would give us a universe of pure contingency; a universe innocent of atomic and astronomic balance, in fact an “a-logical” chaos— to use a term that emerged from our discussions. Thus it would appear that liberty, henceforward, may be assumed to be a logical idea; it is an idea subject to progressive development whose final realization is inconceivable. The degree of liberty of a system depends on the degree of independence that it can maintain against contingency. Liberty, however, is powerless against the ultimate goal of equilibrium inherent in all systems. Complete liberty would entirely free the effector from contingency. The effector would then be able to do anything, but at the same time it would not be free, for freedom is not an absolute idea; it is only conceivable in relation to the external circumstances which it can overcome. As far as entropy is concerned, it can have no other meaning. In short, liberty is something that must show itself by empirical proof.

1 Le Dualisme antagoniste et les exigences historiques de l’esprit (Vrin, 1935); Essai d’une nouvelle théorie de la connaissance (Vrin, 1936); Logique et contradiction (Presses Universitaires, 1947); Le Principe d’antagonisme et la logique de l’énergie (Hermann, 1950).
THE ORGANISM INTEGRATES WHAT IT PERCEIVES
The multistat has to prove its degree of liberty by its reaction to external circumstances. One might imagine that the
experimenter could subject it to contingency in the same way that the homeostat can be made to show its freedom from determining factors when subjected to interference. On the other hand, it can be thought of as an autonomous engine which, like the tortoises, explores external contingency and seeks equilibrium. Such a machine would not only free itself from its determined programme, but also from reference to its feedbacks. Whilst the tortoises have the power to attain their equilibrium in a certain situation only by means of a single type of activity, the multistat would have many and, according to the degree of complexity incorporated in its design, it might have countless numbers. In its explanatory diagram no arrow could impinge upon it from the external environment without being controlled. If such a machine were ever constructed we should propose it be called Machina liberata. Such a machine might be thought to be, and indeed must be, a veritable menace.

M. liberata could obviously not be constructed from a homeostat. The troughs filled with liquid and the delicate plates would not be practical for the movements of an autonomous engine. Some entirely different device for electric equilibration could give the same performance. But we do not propose to describe such a machine at present. Here we shall confine ourselves to an analysis of the functions that such a machine should possess to serve as a cybernetic model of nervous activity. This, in fact, is our aim; not to deal with the more or less anthropomorphic automaton, nor even the scientific toy, but with an instrument for research. The fact that such an electronic model of mental function is contemplated must oblige us to investigate and explore mental functions. The desire to understand the mechanisms of thought may well seem somewhat fantastic, but to study them by means of cybernetic models, as we propose to do, is an immediate possibility. To extract the full significance, it is not proposed that any particular type should be described, but that only the general principles should be dealt with here.1

1 In our book entitled L’Homme en Équations, which is in preparation, we propose to envisage its construction and to bestow on it functional ability just as the statue of Condillac was progressively endowed with senses.

The hierarchy of organized mechanisms is built up on the principle of emergence which is dear to Georges Matisse; anything
that is true at the lower level is true at the higher level, from which alone the characteristic principles emerge. The clinamen only organizes a determined effect; the homeostat only establishes effects that are already organized. The sixth degree is only conceivable as a product of the other degrees. The nervous system might appear to be a similar hierarchy. We seem to be logically compelled to envisage a system of nervous hierarchy.

In order even to plan the mechanism of such a model we have to envisage a logical scheme of living organisms. It is much the same problem as that which we encountered when dealing with the “tortoises”: to assess the logical order of the mechanisms that explore the environmental contingency. All animal activity is retroactively organized, and yet numerous nervous mechanisms can be dissected without the least trace of a retroactive circuit. It seems that the feedback is generally developed in the external environment, outside the living organism. It has already been said here that such a feedback is “external” and that it is an essential part of sensory mechanisms.

[Diagram: the effector, acted on by internal factors, with its feedback passing through the external environment of the effector.]
These mechanisms may be activated by internal factors and by the external environment in the degree to which it can be perceived by them. But, in activity, they modify their relation to the external world and this implies modification of their receptors. A retroaction is established which tends to maintain
a certain equilibrium between the environment of the living organism and the organism itself. This is precisely the feedback regulation of movement that the early cyberneticians studied in order to construct an A.A. gun capable of correcting its aim. The organization of a living organism extends, then, to its environment, or more exactly to what it perceives. A perceiving being is limited only by its senses; whatever it perceives is part of its system. A fish considered in isolation is a non-existent entity. What exists is a fish maintaining its equilibrium in the water with its varying currents. A living entity is in continual movement; it is in a state of constant flux, since it forms a whole with that on which it acts.

This simple diagram has many other implications; above all, that the external environment must not be thought of as acting on the organism as an external influence, but as part of its own system: its universe is that which it senses; the infra-red universe of certain photographic plates and the olfactory universe of the sporting dog are not ours. Contingency is not imposed on the organism by the world in itself, but by the discontinuous spectrum of the senses that perceive it.

It might be thought that if this diagram is true it is not invariably so; that it is possible for an action to be determined by the effector alone and not linked with a retroaction. Certainly, simple reflexes are capable of modifying not only the relations of the organism to its environment, but even the very conditions that triggered them off. Thus, when the pupil contracts, it affects the relation of the organism to light. When a dog barks, it is in order to frighten off the marauder that alarmed it. When a protozoon contracts in response to an acid stimulus, it diminishes the noxious stimulus of the acid which had caused it to contract. But, still, such unorganized activities will occur as rare events and will generally be evidence of some internal disturbance. Activities are, as a rule, organized so as to deal with contingencies in the interest of the organism. This, then, is the key to the whole question of the adaptation of activity to the situation that arouses it: animal actions, to be useful, must be a link in the retroactive circuit. All animal activities participate in retroaction involving the external environment, for they are all awakened by perceptions of environmental events and all affect
the relation of the organism to its environment, except those that we may study as examples of instinctive behaviour.

MACHINA LIBERATA
We may now enumerate the logical requirements of Machina liberata, the machine that we discussed with Dr. Jacques Sauvan. At the periphery, there will be end-organs establishing contact with the external world— both sensory and motor organs. They represent, respectively, the point at which the feedback becomes external to the material limits of the machine and the point at which it re-enters these limits after being actuated in the environment. It is clear that in such a construction the effector must be accessible to an effect of its effect.
As we have already pointed out, a retroaction is to be thought of as a closed chain of effects succeeding each other, and to no one of them should primacy be ascribed. We may now consider a scheme such as that illustrated in the following diagram, in which the organs of sensation, the nerve centres, the organs of activity and the related world are interconnected by stimuli, by nerve impulses, by activities and perceptions, and in which contingencies may influence any of the effectors, but above all the nerve centres and the environment. In the forthcoming book L’Homme en Équations it is proposed to deal with animal behaviour, with this diagram as a basis, on lines parallel to those of the Bush differential analyser, which Louis Couffignal believes to be a model of the nervous system. All animal behaviour might appear to be a search for equilibrium in a retroactive circuit when equilibrium is disturbed either in the internal or the external environment; differential
messages in the circuit have a tendency to annul one another in order to arrive at a stable equilibrium. A study of the most modern calculating machine leads us back to the physiology of Claude Bernard: “all activities, no matter how varied, have only one aim; that of ensuring the constancy of vital conditions in the internal environment.”

[Diagram: the retroactive circuit of the organism, acted on by external contingency and internal contingency.]

To return to M. liberata: it is possible to modify a scheme such as the one below so as to emphasize the ambivalence of the passive and active relations to the environment.

[Diagram: the effector and its passive and active relations to the environment.]
The passive organs receiving messages from the environment might be microphones, photo-electric cells, thermometers, &c. Each would transform some essential quality of the environment into electrical quantities or, in fact, into sensations. The active organs may be just as varied: motors, turning wheels, a mechanical arm which may turn and stretch out, the dual system for cooling and heating which regulates the internal temperature, &c. All these organs, facing outwards from
the machine, constitute what the neurologists would call the peripheral system.

The level of the elementary reflexes. The inferior level is that of absence of liberty, of imposed perceptions and reactions. A degree of variation in the external environment gives rise to a certain logical effect. Thus, the diaphragm of the iris contracts if the light is too bright, escaping from higher control. This is the lowest level of the nervous system, that of the spinal cord, of simple reflex arcs. It might appear that here is a state of pure determinism. However, this is not so, for the reflex, by its occurrence, modifies the environment which evoked it; the iris diminishes the entry of light, an alarm signal will drive away the intruder who touched off the mechanism causing it to sound. These actions do not, however, play a part in the specific make-up of the organism.

The level of the “affects”. Physiologically speaking, this is the level of those complex reflexes that affect the whole spinal cord or which are set free in the medulla. These activities are more complex than those of the segmentary reflexes and, since they are subject to some degree of higher control, they are less strictly determined. In the external environment, retroactive circuits are established between sensations and the resultant activities, and these relations will be continually subject to contingency. It has been thought desirable to term such relations “affects” rather than “tropisms”. (A discussion of the term “tropism” as opposed to “reflex” will be found in the forthcoming book L’Homme en Équations.) This second level, that of the affects, forms the fundamental structure of the machine. At this level all sensory messages are received and organized. This level is that which governs the organs of activity and corresponds to “the final common paths” through which all human activities pass, whether reflex, automatic, or voluntary. The resultant activities may still be termed reflex, but not in the same sense as the reflexes of the lower mechanisms; they are open to modification by the top-level organization which deals with the corresponding demands. A further difference from true reflexes is that these activities are not specific to any one type of sensory stimulus. Thus the mechanical arm rotates, lengthens or shortens and adjusts itself to a variety of stimuli. It depends on sensory organs which are
so activated as to be able to detect, like the mirrors of a telemeter, the presence of objects exerting effects either deleterious or beneficial. If an object is beneficial, the arm is extended towards it, guided by a servo-mechanism analogous to the mechanism which regulates our activities by evaluating their deviation from the ideal standard.

Many senses may combine to give rise to certain actions. A model of this mechanism may be made by summating the positive and negative voltages fed by various detector organs into our sensory centre, where they add up algebraically. This centre estimates as a whole the "satisfaction" or the "dissatisfaction" and, should the voltage rise above a certain level, it can act on the motor mechanisms by either inhibiting or reinforcing their motion according to how it evaluates the situation.
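This summation may be made concrete in a minimal sketch, offered only as an illustration: the detectors, their voltages and the threshold are invented for the purpose and describe no machine actually built.

# Illustrative sketch only: algebraic summation of "satisfaction" voltages.
# The detector names, readings and threshold are hypothetical values.

def overall_tension(readings):
    """Add the positive and negative voltages algebraically."""
    return sum(readings.values())

def decide(readings, threshold=5.0):
    """Reinforce or inhibit the motor mechanisms once the net voltage
    passes the threshold; otherwise leave them alone."""
    total = overall_tension(readings)
    if total > threshold:
        return "reinforce motion"      # net "satisfaction"
    if total < -threshold:
        return "inhibit motion"        # net "dissatisfaction"
    return "no action"

# Example: a warmth detector votes +4, a glare detector -2, a contact
# detector +6; the centre weighs them as a whole.
print(decide({"warmth": 4.0, "glare": -2.0, "contact": 6.0}))

In such a sketch the "personality" spoken of in the next paragraph would reside entirely in the signs and magnitudes assigned to the detectors and in the threshold chosen.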
At this level, it is obvious that the "personality" of the organism is established by reference to its feedbacks, by the criterion by which it estimates deviations or shows its "preferences", to use a term more akin to psychology. This preference is innate; one type of fish may prefer warm, salt water, another the oxygenated current of a freshwater stream. To establish further the relative freedom of both organism and machine in the building up of "personality", the effector must have the power to vary its influence on its feedbacks or, to put it more simply, to modify the retroactive chain.

The level of equilibration. The activities of the feedbacks of the second level may be affected by a regularizing servo-mechanism that we may place on the third level: that of equilibration. From this issue messages designed to effect a condition of general equilibrium of all the reflex actions evoked by each sense organ in search of its own specific equilibrium. Such a mechanism is in fact the homeostat. The homeostat is in command at this level; each sense organ transmits its specific reaction to the situation and an overall equilibrium is established between the various elements; each of these elements is connected with one of the sensory organs enabling it to deal with a specific situation and thus to contribute to the general adjustment. This is the level of the central grey matter, of the diencephalon, of the pituitary.

At this level, the mechanism appears to be such that all its subsidiary parts work in harmony to attain a certain degree of equilibrium. The feedbacks of the next level exercise a corrective influence which comes into action whenever any condition of disequilibrium arises. These readjustments will continually give rise to new modifications of the situation until a condition of general equilibrium is attained.

The level of integration. The homeostat has only one type of equilibrium; all the subsidiary levels are adapted to this single end. If, for instance, it were so adjusted as to keep an automatic pilot steering a north-westerly course, this course would continue to be held under all circumstances. The analogy of piloting may render it easier to appreciate the resemblance of the homeostat and the multistat to the higher levels of the nervous system. To steer a course is to correct chance variations so as to arrive at a given goal, and this is automatism of the type exhibited by the diencephalon; the power of altering the course is inherent in the cortex as the organ of volition. This brings us back to an idea mentioned earlier in the book: that of "adjusting" an effector, as opposed to "operating" it. The machine at this fourth level has great power to vary its general behaviour. It may attain a number of states of equilibrium, each of which will correspond to an alteration in the specific reference of its feedbacks.

The level of the homeostat is one in which activities are elaborated and guided so as to attain one single equilibrium. The level of the multistat is one in which a state of equilibrium has to be sought; it is not determined in advance and as a consequence multiple points of equilibrium are established and it is the duty of the homeostat to maintain them. The highest level is that of volition, which may modify the final paths of the organism; it is that of the cortex, which integrates sensory messages with internal structural tension. These tensions or influences will now be considered.
THE INTEGRATION OF THE MACHINE, SEEN FROM THE TEMPORAL ASPECT
Up to now, M. liberata has only responded to contingencies of the present moment. It has an internal equilibrium that does not depend on chance variations. Whilst the activities by which it preserves the general equilibrium certainly do depend on
these variations, they mirror the contingencies of the present moment. Thus, if it is warm, some cooling mechanism will be started to keep the internal temperature steady, taking into account such other factors as air movement, the degree of humidity, and a variety of other variables of the moment under consideration. The machine so far does not establish a "personal" type of reaction with reference to the future. Two similar machines will differ in behaviour only because they are not subject to exactly the same environmental contingencies, because there are bound to be some minute differences of construction and they cannot be in identical spatial positions.

That which distinguishes a living being from such mechanisms is that it is not only conditioned by the present; it is rooted in the time series, its stability is not dependent on the passing moment, but has in its foundation as great a span of time as possible; looking backwards, time past is integrated into its system by experience, i.e. the memory; looking forward, it endeavours to integrate by reason, i.e. prevision. The longer the time segment to which it has reference, the greater will be its stability. Thus, an immature animal, the slave of its affects, will rush into all the activities indicated by the stimuli of the moment; a mature and experienced animal, sensible of its past, will not so lightly surrender to the present. A philosopher or a mystic who sees far ahead of the present and looks forward to the future, may even become insensitive to the claims of the moment.

The greater the temporal period integrated by the effector, the less it will be at the mercy of contingency. This may be formulated as a law in more abstract terms.

Such a sixth-degree machine will utilize both memory and logical thought processes. Memory is not necessary for a mechanism of the fifth degree. One might almost say that if a homeostat were endowed with memory, it would find the gift of little value; possessing a rich experience, it would have little incentive to seek out solutions to its problems. They would be set out for it by the "programme" that it had made for itself. But for the sixth-degree machine memory is indispensable to save the effector from becoming the plaything of immediate circumstances.

In the fifth-degree mechanism, if there be apparent choice,
it is subject to a predetermined goal. Such a choice might be implicit in the question, "How can I climb to the summit which represents my goal?" If the climber has no memory when he seeks a way up, he is likely to injure himself on the rock face—just as we see an insect endeavouring to reach its goal by an impossible route, slipping back time after time. If, however, the climber has not to decide on any particular peak for his ascent, it is obvious that the absence of memory will now be much more prejudicial. He would have to climb several
mountains in order to discover which was the most favourable, without ever finding out until he had reached the top. Each would have to be tried in turn and only at the end of an attempt could he decide whether or not the mountain was dangerous, and even then he might descend and climb up the same mountain again and again.

In the case of Machina liberata the memory mechanisms will register failures and successes and will assess the value of the circumstances responsible; but they will not be free to annul the messages due to these circumstances, nor yet greatly to increase their potency—so that the decision that they bring about will not invariably obtain under other conditions.

Reason intervenes to the same extent as memory. Binary calculating machines can put it into practice. Thus a complex of mechanisms attached to an A.A. gun can determine where the plane is most likely to be in a certain number of seconds, when it has as data its present position, its course, its velocity and the probability of this or that reaction on the part of the pilot (a much simplified sketch of such an extrapolation is given below). The reasons on which we found our activities may appear to be attempts to enlarge the temporal factor that allows us to avoid the collisions and obstacles of immediate contingencies in order to prevent us from deviating too far from our state of equilibrium.

If it be supported by both "memory" and "reason", the mechanism can considerably enlarge its field of action, but all the same it will be separated from the living organism by an impassable gulf. It will only be capable of governing its activities by its own experiences. If animals had nothing but their own individual experiences to guide them they would all die before they could ever amass enough to make use of them. Insects with the shortest lives "know" what they have to do and they know it by instinctive processes.
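Returning for a moment to the anti-aircraft predictor mentioned above: the kind of extrapolation involved can be suggested by a minimal sketch. The figures, the co-ordinate convention and the assumption of an unchanged course are invented for the illustration; the real apparatus also weighs the probable reactions of the pilot, which are ignored here.

# Much simplified illustration of "reasoning" about the future:
# extrapolate an aircraft's position from its present position,
# course and velocity.  All figures are invented for the example,
# and the course is measured as an angle from the x-axis.

import math

def predict_position(x, y, speed, course_degrees, seconds):
    """Dead reckoning: assume the present course and speed persist."""
    course = math.radians(course_degrees)
    return (x + speed * math.cos(course) * seconds,
            y + speed * math.sin(course) * seconds)

# An aircraft at (1000 m, 2000 m), flying at 150 m/s on a course of
# 30 degrees; where is it most likely to be in 8 seconds?
print(predict_position(1000.0, 2000.0, 150.0, 30.0, 8.0))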
The only way to understand instinct is to conceive it as a memory belonging to the species, a memory rooted not only in the past of the individual but in that of all of its ancestors. Here we are beyond the sixth degree. The effectors of the sixth degree come into possession of instinct at their birth or their creation. M. liberata receives it from its constructor; one or other of its elements of equilibrium will once and for all be either favoured or rejected; a given event will be followed by an ineluctable response.

If instinctive processes are copied in a model mechanism, it becomes obvious that they are not dependent on feedback from the external environment and do not tend to modify the world of perception. They are entirely determined and in this they are profoundly different from other animal activities which are organized in relation to the environment. Instinct is a "programme", a roll of perforated paper, which is only aroused as a reflex and whose subsequent development is strictly determined. It is understandable, therefore, why instinctive activities are often ill-adapted to circumstances and often seem to be utterly pointless; they do not harmonize with universal relativity or tend to produce a state of individual equilibrium. Instinct gives stability not so much to the individual as to the species; to understand its operation one must look at it in relation to the development of the species as a whole.

M. liberata, then, must possess, like living organisms, the following three accessory mechanisms.

Memory, which links its present to its past; it being always understood that this mechanism can in no way claim to imitate human memory.

Reason, which links its present to its future.

Instinct, which initiates pre-determined activities and which is implicit in its construction just as it is innate in living animals and represents a link with the past history of the species.

The influences of the past and, to a lesser extent, of the future, are united with that of the present which is the integration of the external perceptions and the internal sensations that have their origin in lower levels. It may be asked at what level all these are integrated. At the fourth level, the level of volition, memory and reason are united, but it is not impossible that the punched-card of instinct may function at the third level—that
of the diencephalon, whence it may dominate all the organs of activity and govern the correlation of many activities. A question of this sort is not easily solved theoretically. The anatomy of the nervous system is of little help; the tracts between cortex and diencephalon are not sufficiently defined. Besides, if one knew from the outset what the necessary connexions of the higher levels of M. liberata had to be, the existence of such a machine would be of less scientific importance. Its interest lies less in its conception or its mode of behaviour than in its possibilities of practical realization by the constructor, every attempt at which would be a source of information as to the structure of the nervous system. The fact that we know comparatively little of the structure and functions of the nervous system greatly enhances the value of a model in our endeavours to comprehend it.

If, now, we look at M. liberata as a whole, it appears as a complex of effector mechanisms, each of which tends towards its own equilibrium, but all of which collaborate to establish a global equilibrium. This global equilibrium is the work of four more or less complex systems:

(1) A mechanism organized in relation to the environment and reacting to the present situation. This is the essential mechanism, for it is the only one possessing true organs of activity or, to use neurological terminology, final common paths.

(2) A series of "programmes" implicit in the construction which correspond to the innate instinct of animals.

(3) A memory mechanism.

(4) A "reasoning" mechanism, which, based on the present and the past, tries to predict the future.

In M. liberata these various mechanisms collaborate to establish general equilibrium just as happens in the animal organism where one influence tends to organize the whole in the instant present, another in relation to the immediate future and yet another in relation to the past history of the species. From their interrelations arise directives which descend to the lower level, whence they react on the environment. Actions cannot be performed by mechanisms in relation to the past or the future but only by mechanisms which relate the organism to the present.

From below upwards, each organ which seeks to establish its
particular equilibrium notifies the higher levels of the results of its activity (that is, of its state of disequilibrium); in the higher levels, this disequilibrium is collated with other tensions manifested by memory, reason or instinct; and then, from above downwards, the integrated result gives to the homeostatic mechanism of equilibration the point at which stabilization must be effected and this in turn ensures the collaboration of every unit in the common task.

No action is necessary in a machine of this sort. Even the instinctive activities which do not depend on the environment, except for their release, are not necessarily activated by the "programme"; this is responsible for the transmission of impulses to certain units of the homeostat or the multistat, tending to influence their decision without ever completely dominating it.
A MODEL OF NERVOUS FUNCTION

The machine that has been described would have one inevitable defect: it would be sensitive to every stimulus, it would hunt ceaselessly, without ever finding a state of equilibrium; it would start an activity only to countermand it on finding that the course of action interfered with other equilibrating activities. It would be a model of restless human behaviour. Only elementary creatures achieve a peaceful equilibrium; the complex human organism never finds it. For all that, a mechanical method can be conceived which would ensure greater stability.

This solution would be to endow our machine with an internal rhythm that alternately connects and disconnects the organs of perception, that disconnects and connects the organs of equilibration and integration. In the interval of a second the machine may perceive and register, but not weigh its perceptions nor send out any orders. In the next second it would cut itself off from the environment and balance its accounts, weigh, judge and decide, enclosed in its "ivory tower", unperturbed by such of its activities as were in course of development, or by the play of contingency. In other words a period of perception would alternate with a period of conception.
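The alternation just proposed can be suggested in a minimal sketch, given here only as an illustration of the principle; the sensor, the decision rule and the length of the cycle are invented for the purpose.

# Illustration only: a machine that alternates between a period of
# perception (register, but do not judge) and a period of conception
# (judge and decide, cut off from the environment).

import random

def read_sensors():
    """Stand-in for the organs of perception (an invented reading)."""
    return random.uniform(-1.0, 1.0)

def run(cycles=4):
    registered = []
    for step in range(cycles):
        if step % 2 == 0:                      # period of perception
            registered.append(read_sensors())
        else:                                  # period of conception
            balance = sum(registered)          # weigh what was registered
            order = "act" if abs(balance) > 0.5 else "wait"
            print(f"cycle {step}: balance={balance:+.2f}, order={order}")
            registered.clear()                 # its accounts are settled

run()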
We have been led to this concept as a result of a search for the optimum working conditions for our model. It appears that some physiologists have attributed to the fundamental rhythm of the brain waves a significance that is practically identical with that which purely mechanical considerations led us to postulate for our model.

Pierre Auger, in Les Temps modernes,1 wrote a masterly essay, "L'Esprit dénombrable", in which he put forward the theme that man realizes his identity through cutting himself off from the external world by virtue of his alpha rhythm. The argument is as interesting and important as its subject, but unfortunately it cannot be reproduced here. Another work, more technical than philosophical, which gave a theory of human activity, was that of K. J. W. Craik, a young British scientist who was killed in a motor accident.2 He likens the human being to a servo-mechanism with intermittent correction.

1 Les Temps modernes, April 1950.
2 British Journal of Psychology, Vol. 38, 1947/8 (two articles).

It is remarkable to find the concept of a mechanical model resembling the physiological theory and even leading to an explanation of human activity. Man, who is in communion with the universe, would lose his integrity and would have no possibility of self-control if he did not periodically shut himself off. At the same time these arguments furnish evidence that man is subject to the laws of quantum mechanics.

Another technical consideration which is implicit in the concept of such a machine leads to further very important theoretical implications. The problem, which at first sight appears to be one of mechanics, is this: if n combinations are sought, is it more convenient to effect them by a small number of units, each capable of many choices, or by a great number, each capable of few choices? Thus, to give 16 possibilities of equilibrium to a multistat, is it better to have 8 units with 2 displacements each or 4 units with 4 displacements each? The study of this problem led us to use elements with the smallest possible number of choices. This question is linked up with certain laws that give the best method of classification for analysing a given arrangement; laws that are purely mathematical and involve the exponential function e. Practically, it is best to endow M. liberata with elements which, far from having numerous possible choices, work on the all-or-nothing basis.
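The mathematical consideration alluded to may be indicated in a brief sketch added here for the reader, under the assumption that the "expense" of a battery of n units, each capable of r positions, is reckoned as the total number of positions n times r; it is then the classical radix-economy argument in which e appears.

% Sketch of the radix-economy argument (an illustration added here,
% not the author's own calculation).
\[
  r^{n} = N \quad\Longrightarrow\quad n = \frac{\ln N}{\ln r},
  \qquad
  C(r) = n\,r = \frac{r}{\ln r}\,\ln N ,
\]
\[
  \frac{dC}{dr} = \frac{\ln r - 1}{(\ln r)^{2}}\,\ln N = 0
  \quad\Longrightarrow\quad \ln r = 1, \quad r = e \approx 2.718 .
\]

Among whole numbers the minimum falls at r = 3, with r = 2 scarcely worse; elements of the all-or-nothing type are thus very nearly the most economical that can be had, which is the practical conclusion stated above.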
It is here that the matter becomes very interesting: according to modern views, the neurones function in like fashion. Thus the brain, with some ten million binary neurones, would have the same logically excellent structure as M. liberata. M. liberata is not only to be thought of as an ingenious robot, but as a model displaying the functions of the nervous system and, above all, its psychic functions.

We have banished from this book the ill-used word "robot", but this is not in order to substitute "automaton" for it. M. liberata in no way resembles anthropomorphic automata. If a certain action is suggested in its attempts to achieve equilibrium, it is not necessary for it to perform this action; the only action which it might perform would be to activate some ancillary mechanism which would produce the same ultimate effect. Thus, in order to obtain food when its arm touched the object that would satisfy its appetite, a simple tripping device would supply the machine with the current of its internal accumulators. Similarly, to keep itself from getting cold, it would not wrap itself in a blanket, but simply close the ventilators. Coloured discs connected with the voltmeters could indicate to the experimental worker the action which it was about to accomplish.
THE MACHINE IS SOMETHING ELSE

When describing Machina liberata, we have been asked whether it could write tragedies. This gibe is a typical one. Many very intelligent people have been so obsessed by old-fashioned conceptions of automata and by prophecies of "robots" of the future, that they cannot conceive a machine that would rise to a level of human perfection without accomplishing the activities of a human being. A manufactured effector can never be a human effector; it will never enjoy the same possibilities. It will have greater capabilities in certain directions and far less in others. The superiority of M. liberata will show itself above all in complex reasoning processes or in the operations of calculation involving the binary system; in situations in which a man might become hopelessly muddled, whilst the machine can give its verdict almost instantaneously. It can indeed extract its judgement from a vast number of human pronouncements, the mass of which would bewilder any human brain, even that of a genius. It must also be understood
that the machine may possess more extensive sensibilities than those of the limited human sensory system. It can, as a consequence, evolve solutions and programmes of action that will be more firmly established than most human activities. One may take as an instance the sense of orientation, which is easy to realize in a machine, whereas it is nearly absent in human beings.

All planning of effect is only conceived within the framework of a certain attribute of this effect, and of no other. To plan the thickness of a metal sheet is quite a different matter from the planning of its malleability or its colour, which would require very different adjustments. To put the matter in a more abstract way, it may be said that a clinamen acts on the probability of occurrence of a certain value of one of the essential qualities of an effect. To come back to living organisms and machines, each of them is influenced by a different retroaction which organizes the effector in relation to some one essential quality; hearing is sensitive to the quality of sound, vision to that of light. Thus, a machine with a very great number of sensory organs would be more highly organized than men with limited sensitivity.

On the other hand, man maintains his superiority in certain aspects. The conceptual thought processes of our brains allow us to adjust our sensations and the power of integration of the various visual sense data. The machine will prove itself more efficient than ourselves when it is a question of determining the nature of sensation (e.g. distinguishing certain wave-lengths from others) or of measuring the intensity of these sensations. The visual system of living organisms is not there, however, simply to register the range and intensity of electromagnetic waves that strike the eye; it also exists to perceive form. Gestalt theory rightly assigns great importance in human psychology to the concept of "forms", built up in the first instance from visual perceptions. In the present state of technical attainment, a machine is not capable of the integration of form.

Here we come to one of the first problems set before the cyberneticians: that of the perception of the form of letters in all the various printing types, with a view to the manufacture of reading machines for the blind (McCulloch, Wiener); the mathematical study of "pointed" characters (Rachevsky); the recognition of a cross by an experimental machine designed to
study the mechanical perception of forms (Malvoisin). The least that one can say is that at the present time the mechanisms required by these problems would be far too complicated to be added to M. liberata, which in its present form has not proved a difficult technical problem in electronics. In spite of the present theoretical work of Wiener, it seems that there will be great difficulty in elaborating the perception of form by the machine. The interceptor rockets which will replace A.A. shells will hit the enemy plane, but will never be able to distinguish it by its form and from all angles—a feat common to all men and even young children. In order that they should not hit friendly planes, it would be necessary for these planes to emit certain waves which would render the interceptor rocket harmless. This example shows conclusively that in the realm of the machine perception of form has to be replaced by purely qualitative and quantitative sensations.

Pursuing the matter still further, let us imagine a machine sent out on a mission of espionage. Though it could manoeuvre with success in highly complex situations, often better than a human spy, it is hardly conceivable that it could identify an enemy by his features. But a dog recognizes his master on the other side of a closed door. A pigeon finds its way to its nest without seeing it. Compared with these two examples, is man superior or inferior?

There are, then, two modes of perception: perception by reference to quality and quantity, or perception by form. The first mode is that of machines and animals as well as that of our own sense organs other than sight. The second depends, in the higher animals and in man, on sight. It is not here a question of different modes of perception but of an entirely different order of things which cannot be compared with one another. On the other hand we may compare the way in which the two classes of effectors, man and the machine, accomplish their tasks. M. liberata, it is true, does not perform the same actions as man, but the mechanism by which the actions are performed may logically be inferred to be the same as that of man.
WHERE CERTAIN WORDS ARE MEANINGLESS

In the case of M. liberata, the machine is entirely freed from the old-fashioned ideas which we have attacked in this book,
such as those that picture the machine as a mechanical slave without power of adaptation. Whenever it has appeared necessary to free a machine from contingency, it has had an appropriate internal function assigned to it, or sometimes this function has been made dependent on another internal function. In the case of a machine which exhibits the highest degree of automatic activity, it has been necessary to postulate a hierarchy of functions, all retroactive, each one influencing the others in so far as its reaction to the environment is concerned. Unlike a simple causal relationship, in which the effect is directly conditioned by the factors involved, if the effect is linked with a retroaction the resultant will be directed to maintenance of the equilibrium of the organism. It would then appear that the whole activity was guided by some immaterial force acting in the interests of the final goal of the organism.

This immaterial force can obviously never be sensed, nor can it be evidenced by examination of the brain or its components, although interference with the neuronic mechanism leads to its destruction. We can only deal with matter and we are here concerned with what is not even an effect of matter, but a function of such an effect and, what is still more puzzling, of an effect that is never realized, since it is ex hypothesi subject to continuous modification.

To reconsider the matter in terms of this book, we may premise: a certain function can only control a determined effect; the metal sheet can only be controlled in so far as it is rolled; the speed of a steam-engine in so far as there exists a steam-engine speed. Initially, there must be an effect. Then, if a retroaction arises from the effect, the effect no longer exists as such in its own right; the real effect is produced at the point where we have the combination of the determining causal function and a function of the clinamen that controls. The clinamen, which tends to its own goal and which imposes a new quality on the material effect, corresponds to that which has been defined in many loose terms and which in particular has been called a "spiritual force". It is really a qualification of the effect, in which a certain quality is chosen from the global effect; it is a function not of the chosen quality but of its deviation from a certain standard. It is a variable omnipresent function determined by deviation
and time and as such is a mathematical concept and should not be characterized by such loose terms as "spiritual". It is often an instance of pure determinism; thus the result of a calculation is determined by the original data, just as is the concluding sentence of a chain of syllogisms. This determined thought corresponds closely to that function which is called "reasoning". The difference between a thought activated by the environment and a determined thought is of fundamental importance, as we shall see.

One cannot establish an exact correspondence between the logical ideas which may be found in thought processes and the terms in which this or that mental operation has hitherto been described. It will never be possible to say: This is the mechanism of the mind. As Georges Matisse puts it1: "We are lost in a maze owing to verbal 'symbolism'; a symbolism which is rich in the fallacies by which we are duped. We must be ever on the alert not to fall into the traps set by words, ready to unmask their fraud and expose their dishonesty!" If our logical ideas have been established on a firm basis they will have no use for a subjective terminology which is personal to this or that man. Such words are always surrounded by a mysterious halo which can never be subjected to analysis. It would often be preferable to coin new words for those that have had centuries of ambiguity and which cannot be employed usefully to express rigidly precise ideas. For the lack of such new words, we are obliged to use such a dubious word as "thinking" on the title page of the book.

1 Georges Matisse, Identité du monde et la connaissance (Presses Universitaires), p. 112.

Let us acknowledge once and for all that there can never be such a thing as mechanical thought, in the sense that we usually employ the word "thought". Each of us, probably, conceives it in a different way. There will be a machine one day which will be capable of exercising this or that function of our thinking with either greater or less facility than our brain. It will remain impossible to reproduce the complexity of brain function by a complex of such mechanisms and, still more, to impart to the machine thought—a specifically human quality. On the other hand, a study of logical functions in terms of either utility or experimental machines will be both interesting
and rewarding; it would mark a revolutionary epoch in psychology. Machines designed for a special operation might well be superior to man in respect of this function alone.

It seems now that one of the essential functions of thought (using the term "function" in its mathematical sense) is connected with the temporal quality of variations of effect. The occurrence of such variation involves a reference to some particular standard quality and such a reference must in itself be at the mercy of contingent modifications of the retroactive impulses.

Even if such questions should ultimately permit a more exact formulation than is at present possible, the polemic between idealists and materialists will be in no way settled. There will still remain the fundamental problem of consciousness. It seems to us to be impossible for a machine ever to be conscious of being a machine. This problem cannot be settled in terms of mechanistic logic; it remains a metaphysical problem, and, however much metaphysics recedes before the advances of such logic, the problem of consciousness is likely always to remain a metaphysical question.1

1 A possible escape from the dilemma is to say that this is a false problem—that man has no consciousness of himself; he is conscious of sense data, but he has no consciousness of being the effector that conditions them. Is he aware of the functioning of his own brain?

Still another problem remains to be dealt with, and that is illogical or irrational human behaviour. It seems that man has the power to set aside his final goal of equilibrium and that he can voluntarily put himself in a state of disequilibrium. He can voluntarily starve himself to death. When we first discovered the science of cybernetics, we were tempted to believe that a machine constructed to avoid fire could never in any circumstances seek it. Even though an animal, after a painful experience, will always flee from fire, yet a man always retains the peculiar power of being a hero, a martyr, a saint, or of walking on red-hot coals.

Now of course, M. liberata has shown us that this power is not exclusive to man. It represents, in its construction, a compromise between antagonistic tendencies; it can compensate for a negative tension due to the negative tropism of fire by a positive tension based on a "memory" mechanism that assures it that
this experience has always had a favourable influence. Thus, it would mimic the action of a man who would plunge into a burning room to rescue his child. An animal, it is true, can overcome its fear of fire in another fashion, as when it is trained to leap through a flaming hoop, but it does so goaded by the fear of the punishment that would meet its refusal. Man, on the other hand, has a power that is very different; he can condemn himself to be burnt for an ideal, to prove his will-power, his freedom. Since any tendency to disequilibrium is unthinkable, it has to be admitted that abstract factors participate in his equilibration; or, to put it more exactly, such abstract factors send in their messages just like concrete factors; or, in yet more precise language, a concept may balance a sense datum.

It is possible that M. liberata will throw some light on this problem, which is the problem of idealism. Who could predict the exact behaviour of a machine that had been endowed with a very considerable number of units? We are thus led to postulate an influence that would act on the higher cerebral mechanism side by side with its present perceptions, with its memory and its intelligence which would not only affect the past, the present and the future, but would be outside time as an ethical principle. Materialists may triumphantly assert that spirit may be envisaged as a function of matter in that the cerebral activity is that of a mechanism. The idealists will exultantly claim that the influence exerted by the soul may just as certainly be experienced as material influence, since without such spiritual aid certain feats of human equilibration would be impossible. The problem narrows down to this: when a man proves his capacity voluntarily to resist the power of present, past or future, whence comes the influence that allows the re-establishment of absolute equilibrium that is inevitable?
CHAPTER XV
The Highest Degrees of Automatism

Once upon a time a man set out to construct a human brain. He had at his disposal a number of marvellous machines and he had invented several others: calculating machines dealing with 186 types of problem, machines capable of the operations of logical thought, those that could predict the future on the strength of information received from sensory mechanisms, machines of universal memory which could deal with the sum of human knowledge, a machine capable of playing chess or bridge or doing crossword puzzles. Thus he had 83,287 infallible machines. When he had set them up in an immense building, he proudly pointed to the building where everything could be accomplished without human aid and claimed to have made a brain that could do all that it is possible for the human intellect to accomplish. Everyone wished to test the machine. The wise men asked it questions which they alone knew how to answer and the machine gave the correct response. The masters of chess or of bridge challenged the machine and it beat them. But one day a small boy turned up saying that he wanted to play "snakes and ladders" with the machine. "Snakes and ladders?" cried the inventor, "but I never thought of a machine for playing that."

Thereby ends our tale—and the moral? The machine has only the structure that was designed for it by man. If it sees, it is by virtue of the photo-electric cell given it by man, if it hears, it is because it has a microphone. In other words, it is man who has made it what it is. It seems at first sight a truism; if a machine exists at all then someone must have made it. But this truism leads to immense problems.

The machine has conquered all possible components of action until it reaches its final goal. Here it is arrested by the supreme component, which is the effector itself; that which will
answer the question as to what should be done. Confronted with the material with which something is done, we are obliged to postulate a force that does it, that determines the activity of the component of the machines, that sets the electric resistance at 21 ohms or puts a brake on to determine the activity of certain factors. The machine will never be able to tell who directs its activity; by definition, the machine is artificial and man is its "artifex." At this stage it finds itself in the same dilemma as man. It reaches the same logical impasse; neither he nor the machine can create themselves. M. liberata is no more the mistress of its photo-electric cell than man is the master of his eye. The machine has no more set its resistance at 21 ohms than man has regulated the length of his optic nerve.

There is, however, an immense difference between man and the machine. Man made the machine, but who made man? There are two ways of answering this burning question, both of them logical. Either machine, animal and man are all three at the same level and, just as the machine has man for the being who created it, so man is the work of a being outside himself. Or else the machine is not the equal of man and animals, and living beings belong to a higher level.

We may define this higher level as the stage at which the effector absorbs the component mechanism. It is the level at which the maker becomes immanent in the mechanism. This is the vital force, in so far as it is an aspect of its evolution. No single living entity attains this level, as a single entity is incapable of making itself. It is the totality of the long line of living ascent that we must envisage. If this living line evolves from a simple matrix, one may say, in fact one must say, that it is its own artifex. Evolution, considered thus, appears to be the retroaction of the effect on the process of determination of the organs of the effector. The living species, if it evolves, is a mechanism of the seventh degree; it has achieved the power of making itself.

To recapitulate: a force that has up to now been a contingency outside the system comes to be integrated in the resultant activity. Whereas up to now such contingency appeared to be an outside force, partially determining the activity of the effectors, now, in terms of the seventh degree, it is controlled by the effect in the shape of a feedback. It is not a
question of inquiring into the nature of evolution; we are entitled to say that if evolution is a reality, it is in the nature of a feedback modifying the programme of the factors. The effector, then, creates its own organs and, above all, those that sensitize other retroactions. Not only is it its own programme maker, but its activity modifies the environment both as to the nature and intensity of those qualities with which it necessarily reacts.

Man, as a sixth-level effector, seems incapable of the degree of self-organization that we have predicted for effectors of the seventh degree. He could certainly not provide himself with a peripheral organ of vision, but he can aid his eye by giving it various auxiliary mechanisms enabling it to see further, to register ultra-violet rays, to see in the dark. Likewise he creates machines that are extensions of his organs of activity. Thus, indirectly, he is master of himself, developing his sensory approach as well as that of action and he integrates into his system the resultant expansion of his world.

To put the matter in more precise terms, we have seen that totally determined machines must always, in the end, be governed by a human operator, who will note their performances and, when necessary, correct their various activities to ensure efficient working. This man-machine complex can be envisaged as master of its operating factors. At this highest level of automatism man and the tools with which he equips himself form a seventh-degree complex. Thanks to its progress in science, the human race is still evolving and arming itself with forms of sensibility and activity that expand its possibilities and reduce its limitations. The result of such technical advances is of very great philosophical importance. We are far from sharing the fashionable disbelief in progress.
DOES MATTER ARISE OUT OF THE VOID?

At the very lowest level of the logical scale, effect is at first determined by the external environment; next, it is organized by internal forces (this is the level of organization); then comes the great mysterious leap in the life process, the appearance of a mode of life which is internally determined (the level of
internal determination). This logical progression has only one source of energy: the retroaction of the effect, a retroaction which has its roots deeper and deeper in the process of development of the effect: a retroaction at first acting on a single factor (fourth degree), then on the whole determined mechanism (fifth degree), then on itself (sixth degree) and, finally, acting with the totality of the life force on the determination of the factors (seventh degree).

Beyond this, there remains only one component of vital activity that escapes control of the effect, that remains external to the mechanism and even to the activity itself; this is matter. It would appear that here we have penetrated to the inmost circle and logic can do no more. Matter just "is". Matter is the fundamental datum; a mechanism that can create matter is inconceivable; yet some do attempt to conceive it—P. Jordan in Germany and, in Britain, Fred Hoyle and Raymond Lyttleton, whose broadcasts in the B.B.C. programme attracted so much attention in English-speaking countries. The startling astronomic thesis of these two young Cambridge mathematicians amounts in fact to a theory of self-creation of matter. The galaxies separate from each other by expansion of their spatial interval. In this space the intergalactic gas becomes more and more rarefied and matter is born in this void. A curious revival of the dictum that nature abhors a vacuum! Hoyle and Lyttleton have given a precise mathematical form to this synthesis. In explaining the essential principles of the mechanism on the wireless, Hoyle said that new matter is ceaselessly created so that the density of the fundamental matter may be kept constant.

"So that it may be kept constant" . . . this sentence has a familiar ring! If it be true that matter creates itself, it is due to negative retroaction. Thus, the eighth degree exists, a level at which matter, the substratum of activity, creates itself—acts on itself, so to speak. So, just as the simplest feedbacks correct a deviation by virtue of the deviation itself—just as in the marvellous systems of equilibrium of physiological organisms, anti-bodies are created by antigens—so matter, born in the absence of matter, would be, so to speak, the anti-body of the void!

What seems certain is that the Hoyle-Lyttleton theory is a magnificent example of the "principle of antagonism" which
was envisaged in France by Stephane Lupasco.1 This logician considers that every phenomenon is linked to its antiphenomenon; mathematical logic led him to this "causality of contradiction" that we have noted in all retroactive mechanisms. This meeting of widely divergent roads on common ground is the more surprising when one learns that Hoyle and Lupasco were unacquainted with one another's work.

1 Vide supra.

Everything seems to occur as if the world were organized by the mutual fertilization of a negative retroaction tending to entropy and a positive reaction which engenders differentiation and tends to anatropy. The negative tendency is to universal constriction and the positive to universal expansion; our world is what it is because the positive and negative principles have combined to make it. One may mention in this connexion the ancient Chinese concept of the yin, female, and the yang, male, the first representing the principle of equilibrium and the latter the principle of force. This is the same thesis as that offered by the logic of effects to account for the two fundamental forces of statics and dynamics, of rest and effort, of stability and instability, of differentiation and equalization, of contraction and expansion.

But if we find ourselves agreeing with the oriental mystics, it is not because of any nebulous views on spiritual existence; we started off solely on the fundamental concepts of the machine. The reader has been present from the beginning of these concepts, for we have wished him to tread along the same path which we took in order to arrive at an understanding. The principles which have been advanced might have been arrived at by metaphysical consideration, had they been founded on thought alone, but, founded as they are a posteriori not a priori, on consideration of the mechanical functions of machines, they are of absolute authority.
THE "QUALITIES" OF FREEDOM

The time has now come to look back at the different degrees of automatism. The term "automatism" as applied to the classical machine is controversial; it attains its real meaning in the higher-level mechanisms. A machine that regulates the function of one of its factors may be said to be taking the first step in self-government.
The machine at the highest level of this progression is the machine of the seventh degree, which determines itself and may be said to be truly self-governing. If we adhere to the theories of Hoyle, we might cite the eighth degree, in which matter becomes created because it was not in existence previously. This progression should above all be viewed as a conquest of liberty—a system is free in so far as it is able to demonstrate its independence of contingency. All freedom has a limit; it can only exist in the conditions imposed by the absolute finality of equilibration which is the supreme law of the Universe. One might be tempted to talk of "degrees" of freedom, but it would be preferable to use the term "qualities" and it is only in terms of each "quality" that we can talk of "degrees". However much we may wish to avoid anthropocentric reasoning, it must be admitted that the eight degrees of automatism postulated here are not free from this taint. It is indeed almost impossible not to emphasize those entities that are essential for human functioning. Thus, the third-degree machine is sensitive to any condition that sets it functioning, but if the condition to which it is sensitive were noxious to the end-goal with which we had endowed the machine, we should be inclined to call it a second-degree machine which was functioning badly. Similarly, between the simple feedback of the fourth degree which modifies a single factor and the multiple retroactive linkages of the fifth degree which modify the majority of the factors, there is no fundamental difference in the efficiency of the retroaction.

The only really logical classification of effects would be one that corresponds to the qualities of freedom involved.

I. No freedom. All effects determined and the effect the outcome of the factors (first three degrees of automatism).

II. Freedom with regard to the factors. The effects organized. The effector machine can of itself determine how to solve the problem. This quality of freedom is progressively acquired, starting from a simple example of interaction up to the homeostat, which is master of its own determinism (fourth and fifth degrees of automatism).

III. Freedom in relation to the end-goal. The effects have many individual end-goals. A complex of effectors is the final arbiter of its feedbacks and can of its own initiative answer the question "What is to be done?" (sixth degree of automatism).
IV. Freedom in relation to the organs of the machine. The effects are self-determined. The effector itself selects the functional elements that constitute it. It may be said to make itself. It answers the question "Who is to do it?" (seventh degree of automatism).

V. Freedom in relation to matter (?). The effects create themselves. The effector actually makes the matter of its activities (eighth degree of automatism, which depends for its existence on confirmation of the theories of Hoyle).

It would seem, then, that there are ways of freedom—one might even say different forms of liberty: the freedom that answers the question "How can it be done?"; that which answers "What is to be done?"; and that indicating "Who will do it?"

To cite a concrete case: I want to explore certain parts of the world. I set off; how shall I travel? I shall be free to travel by any means I choose, if I have a well-filled purse; I can charter a private aeroplane, or, if I so desire, I can just set off on foot; that is to say I have freedom in relation to the factors in answer to "How?". Freedom in relation to the question "What shall I do?" is quite another matter—another quality. My instructions may have been to proceed to Bordeaux or Lyons—to a certain district or a certain house in one of these towns, or perhaps to travel elsewhere in France, Europe or the world. The freedom that answered the question "How shall I do it?" opens the door to the freedom that answers "What shall I do?", just as an interplanetary rocket would confer on us the freedom of space within the limits of the solar system. The higher liberty depends for its existence on the lower liberty, but this does not mean that all possible forms of the higher liberty must necessarily be involved; the goal of my journey may be strictly defined even though I possess freedom of locomotion.

The freedom that answers the question "Who?" cannot be ours in its entirety. No human being is completely master of his executive organs. It may be necessary to assist our organs to answer the question "Who?". Imagine that the traveller is deaf or blind or prostrated by illness; he will be less able to fill the role than the traveller well equipped with sound sense organs. Man is engaged at present in perfecting the "Who?" by his inventions; the more he can sense of the environment, the more he can affect it, the more he participates in the world.
Every time that he acquires new artificial organs that enhance his approach to new mechanisms for action, he integrates a greater part of the world in his system, he pushes back limitations, he develops the "Who?". It is tempting to expound this thesis, but it cannot be done here. Let us note, however, that the logic of effects banishes the traditional irritating problem of freedom versus determinism. Feedback shows the way by which freedom enters into the world of determinism, and freedom will develop pari passu with the power of the negative clinamens.
BETWEEN PROBABILITY AND CERTAINTY

At the end of this book will be found a table setting forth the successive achievements of effector mechanism; it illustrates the progression towards independence of effects. It will be seen that we have never been able to dispense with continual reference to cybernetics. Cybernetics is the science of the control of effects by themselves, which, from the fourth degree onwards, conditions the determinism of the effector mechanism. The mechanisms that were studied by the early cyberneticians are seen to be nothing other than isolated cases of the internal control of effects. This book has attempted to demonstrate that the principles of such mechanisms are applicable to anything in the universe, no matter what its nature, so long as it does more than mirror, and in fact reacts to any change in its relations. If this be so, cybernetics is applicable to everything in nature and we are always concerned with it. Thus our theory of effects is valid for every science that deals with nature and it has a profound epistemological value.

It is not a mere hypothesis ad hoc to discover the parallelism between two branches of science. It is more than a theory; hypothesis and theory are in essence conjectures elaborated by the mind. Here, on the other hand, we are dealing with certitudes, with a logical system, with an instrument of precision for the eventual construction of theories and hypotheses. This system is not just a bridge between two disciplines, it forms the foundation for all those sciences that do not deal with pure determinism; that is to say, all sciences other than mathematics. By the aid of mathematics we are able to study the laws of causality or, more exactly, the effects due to causal function.
Thus the factors involved need only to be considered from the statistical point of view. The only law applicable to the calculus of probability is that of Gauss which, of course, necessitates a frequency curve. It will then be possible, by studying the curve, to decide in what degree variations in the effects, due to pure chance, are modified by the internal functions of the mechanism; the probability of the occurrence of an effect calculated for independent factors will be different from that where the effect is controlled by "clinamens" and hence the causal field may have been either increased or diminished. This type of assessment represents a triumph of mathematical reason in a field that has hitherto been one of statistics only. Louis Bachelier saw the importance of this in a treatise on the calculus of probability1 published before 1914. He wrote: "The calculus of probability should not be confined to the study of pure chance. Between pure chance and absolute knowledge there is an immense territory into which some cautious approach is at least possible."

1 Louis Bachelier, Calcul des Probabilités (Gauthier-Villars) and Le Jeu, la Chance et le Hasard (Flammarion, Bibliothèque de Philosophie scientifique).

It is not enough to say that this field is immense; it covers in fact every phenomenon whose properties are not totally determined, all that does not depend on "functional" laws. In short, the field contains all the body of concrete science; only abstract science escapes. As far as real phenomena are concerned the linkage of their factors in the domain of causality can never be entirely free from some relation to probability. In all the science of nature the clinamens, functions of mathematical laws, oppose the uncertainties of probability. Bachelier alone seems to have appreciated the importance of this opposition which he termed "connective probability", a term that has fallen into disuse since he wrote of it. "The classical laws are to a large extent the laws of pure chance", he said. "It is, however, important to study the more complex cases where chance does not act independently of other causes."

He establishes three classes of "connective probabilities": in one class a causal agent tends to diminish the deviations engendered by chance and this cause is all the more effective when the deviations become greater (this is precisely what happens with feedback); in another class events are governed by chance and
by the maximum antecedent state, that is to say by the greatest deviation occurring previously; and, lastly, in a third class the behaviour of an agent depends both on the contingent force that mobilizes the agent and on the previous activity.

It might be thought that this logical view of effects, seen thus, as balancing between probability and mathematical certainty, is nothing other than statistical analysis, the great prophets of which were Karl Pearson in England and Sewall Wright in America. Behind the coefficients of correlation which arise from an intensive statistical analysis and, above all, behind the "path coefficients" of Sewall Wright and the coefficients of determinism, it might seem that we recover the same sort of complex of functional intricacies that we have endeavoured to capture in this book. Nothing of the sort occurs.

The length of the hypotenuse depends invariably on the length of the other sides of a right-angled triangle; the coefficient of correlation between the first and the second is unity. But on board a ship the age of the captain and the height of the mainmast are obstinately independent; their coefficient of correlation is zero. Between these two extremes, which represent an absolute correlation and a total absence of correlation, there is a whole sequence of more or less certain and less or more doubtful correlations. It is always necessary to find out, in complex phenomena (most often demographic, economic or genetic) that are being analysed statistically, if such and such a fact is or is not a factor in a certain phenomenon. It is, then, necessary to express the probability of the intervention or non-intervention of any given factor in a causal series. Statisticians disclaim the power of being able to draw any precise inference from the study of their coefficients as to the causes of a phenomenon. As one of them put it: "Statistical analysis does not reveal the nature of the causes of variation; it constitutes, rather, a guide for their investigation."1

1 André Vessereau, La Statistique (Presses Universitaires).

Let us suppose that the temperature is rising in an enclosed space; a piece of wire is expanding and the vapour pressure of a liquid is increasing. If we measure one or the other of these changes at certain intervals, we shall discover a clear-cut correlation between the expansion of the wire and the vapour pressure. This does not mean to say that the first phenomenon is the cause of the second or vice versa. We might, perhaps, be inclined to think so, if we did not know that both are the effects of a common factor—heat. When a statistician has to study poorly investigated problems, the common factor may be entirely undiscoverable.
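The point may be made concrete by a small calculation, added here purely as an illustration with invented figures: two series are generated from the same rising temperature, and their coefficient of correlation is computed.

# Illustration only: two quantities driven by a common factor (heat)
# show a high correlation without either being the cause of the other.
# All coefficients and readings are invented for the example.

import random

temperatures = [20 + t for t in range(10)]                    # the common factor
expansion = [0.002 * T + random.gauss(0, 0.001) for T in temperatures]
pressure  = [1.5 * T - 10 + random.gauss(0, 0.5) for T in temperatures]

def correlation(xs, ys):
    """Pearson coefficient of correlation between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov  = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Close to unity, although the expansion does not cause the pressure:
print(round(correlation(expansion, pressure), 3))

Run repeatedly, the coefficient comes out close to unity, although neither series causes the other; both merely follow the temperature.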
is the cause of the second, or vice versa. We might, perhaps, be inclined to think so if we did not know that both are the effects of a common factor: heat. When a statistician has to study poorly investigated problems, the common factor may be entirely undiscoverable.

The logic of effects is something quite different; it is not concerned with eliciting the probability that any particular variable is a factor in a given case, but with deciding how, when the factors of an effect are known, their relationship to each other affects the probability of their co-operation in the phenomenon in question. On the one hand, we study the probability of a certain causal sequence; on the other, the probability of a certain result where the causes are known.
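The statistician's point may be fixed in figures by a minimal sketch, here in Python and with invented numbers (the coefficients and noise levels are chosen only for illustration, not taken from the text). Two series are both computed from the same rising temperature, and their correlation comes out strongly positive even though neither is a cause of the other.

# An illustrative sketch with invented figures: two effects of one common cause.
# The wire's expansion and the liquid's vapour pressure are both computed from
# the same temperature record; neither causes the other, yet they correlate.
import numpy as np
rng = np.random.default_rng(0)
temperature = np.linspace(20.0, 80.0, 50)                               # degrees C, rising steadily
wire_expansion = 0.002 * temperature + rng.normal(0, 0.005, 50)         # mm (assumed coefficient)
vapour_pressure = np.exp(0.05 * temperature) + rng.normal(0, 1.0, 50)   # arbitrary units
r = np.corrcoef(wire_expansion, vapour_pressure)[0, 1]
print(f"correlation between the two effects: {r:.2f}")                  # strongly positive, near 0.9

The coefficient is high only because heat stands behind both columns of figures; remove the common factor and the correlation vanishes, which is exactly the caution the statistician quoted above expresses.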
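The second question, the probability of a certain result where the causes are known, can be illustrated in the same spirit. The sketch below, again with arbitrarily chosen constants and offered only as an illustration, compares the spread of an effect left to pure chance with the spread obtained when a corrective agent of the kind Bachelier places in his first class, a clinamen or feedback, pushes back in proportion to the deviation already accumulated.

# An illustrative sketch with invented parameters: the spread of a result left to
# pure chance, compared with the spread when a corrective agent (a clinamen, a
# feedback) pushes back in proportion to the deviation already accumulated.
import numpy as np
rng = np.random.default_rng(1)
trials, steps, gain = 2000, 100, 0.2
final_free = np.empty(trials)
final_fed = np.empty(trials)
for i in range(trials):
    free = fed = 0.0
    for _ in range(steps):
        shock = rng.normal()          # the contribution of pure chance at each step
        free += shock                 # no correction at all
        fed += shock - gain * fed     # the correction grows with the deviation itself
    final_free[i] = free
    final_fed[i] = fed
print(f"spread of the effect under pure chance: {final_free.std():.1f}")
print(f"spread of the effect under feedback   : {final_fed.std():.1f}")   # markedly smaller

The frequency curve of the corrected results is far narrower than the Gaussian spread of pure chance, and the correction bites hardest precisely when the deviation has grown largest, which is the behaviour the text ascribes to feedback.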
THE LOGIC OF EFFECTS

The logical study of effects offers a satisfactory and necessary explanation of the many special laws in various disciplines. One would have to discern in these laws both the accidental attributes and the essential principles in order to discover the basic logical principles which have been dealt with briefly in this book. Many laws in other sciences, even if they are not entirely illuminated by these logical hypotheses, will at any rate be made more apparent, no matter how different the sciences may be in content, from physiology to geomorphology, or astronomy to psychology, not omitting metaphysics, which, in adopting the methods of logic, becomes a science.

A study of the relationship of these laws with a view to unearthing a common factor should prove to be a very important and interesting inquiry and should, ideally, be pursued by experts working in collaboration. We might hope thus to forge a solid chain of interconnected facts by the collection and linking together of many apparently isolated phenomena. It might, in fact, lead to a universal epistemology of scientific thought. At any rate it would be a step in the direction of the "simplicior et clarior" of Descartes, just as it would provide the foundation of the logical economy to which Einstein refers so often when he endeavours to present all physical reality in an extremely simple form; it would be the way to the "economy of thought" advocated by Ernst Mach. It is nothing less than a
restatement of the most general of all those general principles to which this last philosopher¹ looks for clarification, general principles that allow us to unearth fundamental propositions in the most complex descriptions of phenomena.

¹ Histoire de la Mécanique.

At present we can only throw out some hints, hastily elaborated in the confusion incidental to planning future explorations, cramped as we are by the limits of this book. The logic of effects must be re-investigated from its very beginnings. It must primarily be developed from the mathematical point of view by studying the functions of independent variables, whose practical importance is unquestionable, since they govern all natural effects by differential equations. On the other hand, adopting a concrete programme of experimental work, we would suggest the construction of very simple electronic mechanisms demonstrating the logic of effects by reproducing processes of interaction and retroaction and facilitating the study of the complicated network of the clinamens.

It would be very encouraging if the basic principles summarized in this book should prove to be so firmly established as to serve as foundations for future work. It may, however, be surmised that further fundamental laws applicable to the logic of effects are hardly necessary; the basic property of this discipline is to possess only one such law, from which everything can be inferred: the law of twofold function, which seems to be applicable both to universal activity and to its powers of equilibration.

On the other hand, things are not quite as simple as they might appear to the readers of this book. We have purposely neglected to deal with the conditions responsible for the tendency of an organized effector mechanism to seek a state of equilibrium. We have dealt with the logic of effects as if it were a static condition, as if, indeed, the problem were solved as soon as the final goal is known. In mechanics, just as in physiology, the tendency to equilibrium is, to a great extent, a function of time; it does not appear until the mechanism has been oscillating for some time, and it may even fail to appear. The true logic of effects must be dynamic. Pursuing this line of thought a little further, it will be apparent that the point of equilibrium
(or, in our language, the finality or end-goal of a system) depends on a time constant peculiar to the system, a fact which has as much practical as philosophical importance. Again, the clinamens cannot act continuously; they act spasmodically, by quanta of activity. In this connexion it may be pertinent to consider the functions of relaxation studied by Balth. van der Pol, the Dutch mathematician. In the present study no mention has been made of this function, but it is certainly of great importance, above all in physiological retroactions. It is also important to note that a number of feedbacks which receive continuous messages from the effect only react in a discontinuous fashion.¹

¹ Professor Fessard (Année Psychologique, 1931) called attention to the part played by relaxation oscillations in nervous function.

Thus, the simple laws of the logic of effects have many applications, according to different conditions. Little is yet known of the various fields of action in which they operate, though the trail has been blazed by the work on servo-mechanisms.

To arrive at an understanding of anything in the universe, it is essential to recognize its implications and its specific function. Thinking in terms of causality has led for much too long towards the formulation of abstract concepts that were impossible to define and which seemed refractory to science in their absence of precise delimitation. As soon as we introduce the idea of functional activity, we are able to enter a realm of thought where all is relative and the concepts themselves are not absolute. Within this realm each of the various systems seeks to attain its own particular state of equilibrium; but all are involved in ever vaster systems which are themselves in search of equilibrium. Such a complex, in a continuous process of equilibratory activity, is the central nervous system.
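The relaxation oscillations invoked in these closing paragraphs lend themselves to a small numerical illustration. The sketch below, a minimal Euler integration of van der Pol's equation in Python with arbitrarily chosen constants (nothing here is prescribed by the text), shows how a mechanism supplied with a continuous influence can nevertheless discharge its activity in abrupt, quasi-discontinuous jerks once the damping parameter is made large.

# A minimal Euler integration of van der Pol's equation
#     x'' - mu*(1 - x^2)*x' + x = 0
# with arbitrarily chosen constants.  A large mu gives "relaxation" behaviour:
# long, slow drifts punctuated by sudden reversals, i.e. quasi-discontinuous action.
mu, dt, steps = 10.0, 0.0005, 60000
x, v = 2.0, 0.0
for n in range(steps):
    x, v = x + dt * v, v + dt * (mu * (1.0 - x * x) * v - x)
    if n % 6000 == 0:                 # sample the trajectory from time to time
        print(f"t = {n * dt:5.1f}   x = {x:+.3f}")

For a large parameter mu the interval between these jerks grows roughly in proportion to mu, so that the rhythm of the corrections is a constant of the mechanism itself, the time constant spoken of above, rather than of the messages it receives.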