Culture’s Engine: Inside Science and Technology
William Gosling
University of Bath, Bath, UK
ISBN 978-981-15-4591-7    ISBN 978-981-15-4592-4 (eBook)
https://doi.org/10.1007/978-981-15-4592-4
© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2020
Cover illustration: © Matej Ograjensek / Alamy Stock Photo
This Palgrave Macmillan imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
to Naomi Climer
President 2015–2016 of the Institution of Engineering and Technology
Her distinguished role is shared with many great names from the past, but to be the first woman elected in its one and a half centuries is to make history.
Acknowledgements
I began writing about technology as a young lecturer at Swansea University and was honoured when Herbert Simon took an interest in my first book, encouraging me to continue. He hoped for an objective ‘design science’. It seems an impossible dream now, but my gratitude to him remains. Towards the end of the last century, David Gooding, historian and philosopher of science, encouraged me to present a radical view of technology in a Master’s degree programme at the University of Bath. Over several years the talented students on the course tested and enlightened me. They have my heartfelt gratitude. I had come to feel that there was a place for a study of technology ‘from the inside’, looking at its distinctive way of working. At Bath, it was fortunate that I also encountered Helen Haste, an innovative and inspiring psychologist. She understood what I would be attempting in the book I had begun to plan and suggested how it might be better done. Her support helped me greatly, even after her move to Harvard, and despite her heavy commitments there. At that time many well-informed people were unable to make a consistent distinction between science and technology, though in truth the two are as different as can be. Their goals and objectives differ greatly, and so therefore do their criteria of success. They identify their problems in distinctive ways and set about solving them differently. The much-debated scientific method is certainly not like the doxastic method used
by engineers, medical clinicians and other technologists, but of this more later. Perhaps, too, writing about technology ought not to emulate science’s style too closely, but be more human and personal, as is its subject. Half of my working life has been spent in academia, half in industry. My years as Technical Director of the Plessey Company greatly extended the range of technologies within my personal experience. Michael Clark, as Deputy Chairman, helped me find my feet in an unfamiliar Board Room environment and was always a source of wise advice and support. I miss him greatly. I also thank friends and colleagues with whom I exchanged views, greatly benefiting from their thoughts. David Aspinall, Kenneth Grange, Edmund Happold, Sam Jordison, Vivian Phillips and Keith Warren all gave me valued advice at crucial points in the development of my ideas. Above all, from the moment I began to research for this book, Patricia Gosling contributed a psychoanalyst’s valued insights to my quest, both informative and challenging. They gave my thinking irreplaceable depth. Many puzzling features of technology as practised can be understood only in terms of the distinctive mindset of the practitioners. My doctorate is in communications engineering, but to establish my argument I have written on history, archaeology, psychology and other disciplines in which I am just a layman. Experts in these fields will doubtless be so kind as to point out my errors. I thank them in advance, but am unabashed by my shortcomings; somebody had to do it.
Contents
1 Introduction
2 The Odd Couple
3 Why Are Humans Different?
4 Three Flavours of Technology
5 Subtle Subversives
6 The Active Elements Appear
7 Drivers of Technology
8 Technology’s Other Half
9 Talk the Talk
10 Patterns of Innovation
11 Invention Push and Market Pull
12 Deep Design
13 A Hazardous Business
14 Will Anyone Want It?
15 Failure Foreseen
16 Can Machines Think?
17 Lady Lovelace’s Revenge
18 The Dark Side
19 Be Careful What You Wish For
20 Past, Present, Future
21 Technology and Dreaming
Appendix: A Clutch of Designers
Index
1 Introduction
In the quarter of a million years or so since humans joined the animal population on this planet, we have been actively changing our physical world, and not merely adapting to what we find. We have used technology, making endless alterations in the environment to help us flourish. It began with weapons and sharp-edged tools made from stone. We built shelters and defences against many threats, cooked our food, made baskets and pots, wove fabrics. Soon animal husbandry, horticulture and agriculture began, starting independently in several geographical locations but widespread by 10,000 BC. So began our unique historical trajectory towards domination of our planet. It was not a smooth or uniform process: technology is never in a steady state for long. Since earliest times people have triggered epochs of rapid innovation in technology, each changing the world. Very early among these revolutions was the use of domesticated animals. Bees were kept from 10,000 BC in North Africa, valued for both honey and beeswax. Horses were important by 4000 BC. Perhaps bred at first for their meat, horses were quickly seen as an important mechanical energy source, soon dominating all forms of land transportation. And so it went on: great and memorable technology revolutions, one after another from then until now, and our own
revolution—microelectronics on the silicon chip—arguably the most significant advance there has ever been. Technology is not static but from its beginning has been in a process of continual development, presenting a different face to successive human generations. New material worlds change the patterns and boundaries of thought for those who live in them. This historical succession of new and different worlds, each giving rise to distinctive cultural outcomes, is created by the evolution of technology. Story-telling, fine and popular arts, philosophy, science and theology are immensely precious expressions of the human spirit. However, they make sense only when seen as responses to the place and time in which they appear, the unique material culture which is itself technology’s consequence in interplay with history. In no way is this to be seen as technological determinism. What is going on here is a dialogue, or perhaps a ritual dance. Needs and wishes expressed in society at large evoke new technology. If it succeeds, it delivers the outcomes being sought, but will probably have others, not foreseen. Many facets of social behaviour and structure will change in time, to adapt to the new material environment that emerging technology has created. This transformed culture will evoke new social demands which again technology will seek to fulfil. And so it goes, turn and turnabout. Technology is never in steady state, so neither are our social forms and institutions. Each responds to the other. Already having brought us domination of our own planet, this process has not finished yet. We are reaching out to the universe. But technology has a worrying downside too. The news is not all positive—potential for good and evil have come together. People fear things to come, and need time to adjust their lives and thinking to manage unforeseen consequences of change. We are better able to do that if we have a broader perspective of what is happening. To claim an important part for technology in human history is not controversial. My contention is that through this dialogue successful technology sets the direction and pace of all cultural evolution. The state of technology at any time is the major influence on the world, and not just the material world. Since the earliest times, humanity has lived beset by occasional revolutions in technology, which, in turn, change the world, often beyond recognition. This book is not a history of
technology, still less of science. It asks how technology and social forces interact, leading to these successive revolutions and their outcomes. The following chapters explore five themes:

• How is technology different from science, and what language shall we use to talk about it?
• How does technology go about its work? What is the doxastic method? One of the most widespread problem-solving strategies, technology orchestrates it to symphonic levels.
• What distinctive feature have we in mind when we call an activity technology? Many animals use tools, but we do not think of that as technology.
• How are the needs and wants society expresses matched to the contemporary potential of technology?
• What characterises its successes and failures?

In 1932 DELAG, the world’s first commercial airline, initiated a regular service direct from Berlin to Buenos Aires, using an airship. Yet in a few years airplanes were replacing airships everywhere because they were faster, cheaper and safer. Was what DELAG did a success or a failure? Perhaps it was neither—for a time it proved useful, so maybe it was just a precursor. We are Homo sapiens sapiens, the apprentice guardians of Earth. If we fail in our duty of care for our planet no others will follow. No living thing here has the potential to displace us, and any sentient extraterrestrials are just too far away. So we need to learn to be competent guardians of what we have. Our technology creates irreversible social revolutions. We blunder thoughtlessly into them, acting like children, innocent of what will follow. But now we are living in a unique epoch of human history: a great transition has begun. We are building, at absurdly low cost, general-purpose digital machines of more than biological complexity. Technology has come to childhood’s end, and it is time we took ownership of our future.
2 The Odd Couple
It would be absurd to claim that science and technology are not closely related. Many find it hard to say where one begins and the other ends. Any activity pursued in laboratories by people with advanced technical qualifications, using complex measuring instruments, and described in an arcane language with a lot of mathematics—that is surely science? Not necessarily: it may be technology. Science and technology may seem to outsiders like identical twins, yet they are deeply different. This unfamiliar idea is difficult even for some scientists and technologists to grasp, yet poles apart they are. Some people move between the two without difficulty, but they cannot be both at the same time and mostly settle for one or the other as their temperament inclines them. Michael Faraday (1791–1867) is a good example.1 Of exceptionally high intelligence and creativity, he is sometimes called the father of English electrical engineering. In hope of improving their own standing, the English electrical utilities, feeling like poor relations compared with Gas and Water, gave him that title in the thirties of the last century. But Faraday never was an engineer; he was one of the most distinguished scientists of his age, and the informal scientific consultant to the British government. His interests went far beyond electrical topics and he made contributions to knowledge in many fields. When he built apparatus, like
the early monster electromagnet with which he detected the existence of diamagnetism, he did so only to resolve experimentally the scientific issues that fascinated him. Compare him with technologists like William F. Cooke (1806–1879)2 in England and Samuel B. Morse (1791–1872)3 in the United States, working at about the same time, whose primary concern was to establish nation-wide telegraph systems. When they made contributions to knowledge they were relatively minor and always subsidiary to the objective of building commercially viable telegraphs. So despite their common ground, can we say what the essential differences are between science and technology? Technology is older than science by at least fifty thousand years. Its extended pre-scientific phase gave us the early civilisations and much to treasure, from Gothic cathedrals to using paper for information storage.4 The physicist Freeman Dyson, in an address to Texas University, said: ‘Technology … is the mother of civilisations, of arts and of sciences’. So Dyson sees technology’s distinctive methods coming long before science. It must have done, otherwise Egypt would have built no Pyramids, nor would the Orkneys have their incredible 5000-year-old chambered tombs. All there was of science back then was a little arithmetic, with some rudimentary geometry and astronomy. Yet many people still believe technology is somehow the application of science—impossible, since the first preceded the second by millennia.5 Even today, if science is not yet able to offer help, technology still needs, uses and makes progress at the forward edge of innovation by its own distinctive methods. From tens of thousands of years ago, and to this day, innovation in technology has been achieved by successive trials, each of them constrained by trusted opinions—the doxastic method.6,7 It can be seen as a kind of deeply thoughtful trial and error. After every trial comes critical evaluation of the result, to find out whether it has succeeded or the work of innovation will have to continue. Those trusted opinions that set boundaries to the area of trial, and on which its success depends, may come from past experience, from the views of others believed competent, examination of historical examples, or any other source of credible, persuasive information. When science finally appeared, its persuasive insights suggested that opinions brought to the doxastic method from that source
were fit to be trusted. Success came faster and more reliably. Yet the basic method is a far older way of solving practical problems than science itself, and indeed is still in everyday and universal use. Science would like to fathom all the workings of nature and the universe which are accessible to human understanding, every one of them. What is more, the explanations must be in a convincing, reliable and verifiable form. The range it addresses is almost unlimited, through physics, chemistry, biology and medicine, on to the human sciences. Nothing is beyond its curious eye, though some things might be beyond its comprehension—that is debated. At depth science seeks to know about things, but does not, as its primary concern, seek to create anything material beyond a trustworthy new and original insight. At times scientists do need to build radically new equipment, but they do so in order to answer scientific questions. Once these answers are known, what they have built has interest only for historians or technologists. New equipment or instruments scientists produce in the course of their investigations may later prove useful to others and be widely reproduced, like Galileo’s telescope. It was not the first of its kind: several were seen in the previous year in Italy. Galileo was in Venice in 1609, where he picked up the idea. However, his creation of an improved product was not science, but technology. Indeed, in the mid-eighteenth century two technologists—the inventor Chester Moore Hall (1703–1771) and the optical technician John Dollond (1706–1761)—improved Galileo’s telescope, rendering his older version obsolete. So technologists and scientists do similar things at times, and are of great assistance to each other, but what they aim at is different. The objective of scientific research is ideas: new insights into the nature of the universe. The outcome of technology is new products, processes or services, intended, one hopes, to improve the human condition by material means. Science wants to know; technology wants to create. Science works towards the great ‘aha’ moment and a published paper; technologists want to fire it up and see if it flies. When it does they feel a deep craftsman-like satisfaction. Their different objectives give the communities of scientists and technologists different attitudes to innovation. In science, the priority of the new idea is greatly honoured, and rightly so, because that is precisely
what is being sought. However, in technology advances are not exclusively concerned with ideas or new techniques. Some of its most highly regarded innovators are those, like Dollond, Morse, Marconi and Armstrong, who were pioneer designers producing early instantiations of new technology using techniques that had already been introduced. The starting place for scientific research is realisation that a path exists, even if long and tortuous, by which new knowledge might be reached. The perceived possibility of success triggers the scientific endeavour. Scientists value aesthetically pleasing solutions to their problems. What constitutes ‘pleasing’ is obscure, but they know it when they see it.8 It is a close relative of the ‘charm’ factor in design, discussed later. Technology is different. It begins with a human need or want, which it attempts to satisfy, and thus proceeds in hope; cautious and mildly sceptical. Scientists do what they can, technologists what they must. A simple example: what happened when I changed the central heating boiler in my rambling late fourteenth-century house? The old boiler was underpowered and unreliable. It had to go, so in the middle of a warm July, two polite and efficient technicians installed a new one for me. One feature was a small device the size of a wall light switch: a wireless-connected thermostat that set the general temperature in the house. They put it on the wall in our Inner Hall. However, that room held a log-burning stove, cold in July but in winter contributing to background heating. Lit in November, its heat got to the thermostat and shut off all the central heating when it was most needed. What to do? Clearly the thermostat had to be moved to a new location which the heat from the log-burner would not reach, yet where the thermostat could continue to ‘talk’, by radio, with the boiler, telling it to turn heating on or off. A scientific approach would have been to calculate the unwanted temperature rise in the thermostat for a number of locations, and also the power loss in the radio path to the boiler, doing this until a location was found that met both criteria. So is this what I did? No, I did it the doxastic way: I took the thermostat off its wall and put a lump of Blu-Tack on the back, so I could fix it where I pleased. Using my own guesses about radio propagation and heat conduction, I tried it out in a couple of plausible locations, checking that it worked with the
log-burner out, then again with it lit. The second place I tried was a success, so I fixed it there permanently. Using the doxastic method the time taken was a couple of hours—the scientific approach could have taken days.
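For readers who think in code, here is a minimal sketch of the doxastic loop just described, using the thermostat story purely as illustration. Everything in it—the function name, the candidate locations, the stand-in trial tests—is my own hypothetical framing, not anything taken from the book or from engineering practice.

```python
# A sketch of doxastic problem-solving: trusted opinions narrow the field of
# candidates, then each candidate is tried and exhaustively tested.

def doxastic_search(candidates, trials):
    """Try candidates (ordered by informed guesswork) and adopt the first one
    that survives every empirical trial; None means rethink the opinions."""
    for candidate in candidates:
        if all(trial(candidate) for trial in trials):
            return candidate          # success: fix it there permanently
    return None                       # nothing survived: revisit the trusted opinions

# Guesses about heat conduction and radio range limit the search to two spots.
candidate_locations = ["inner hall shelf", "upstairs landing"]
trials = [
    lambda loc: loc != "inner hall shelf",   # stand-in for "unaffected by the log-burner"
    lambda loc: True,                        # stand-in for "radio still reaches the boiler"
]

print(doxastic_search(candidate_locations, trials))   # -> upstairs landing
```

The point of the sketch is the shape of the process, not the code: opinion bounds the trials, and testing, not deduction, decides the outcome.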
Chicken and Egg and Chicken Again

The earliest Greek counterparts of today’s scientists—Parmenides, Heraclitus, Democritus and above all Aristotle—were philosophers at heart. They created grand hypotheses about the nature of the world, later the universe, but rarely conducted experimental tests of their ideas, convinced that truth is accessible by rigorous deduction alone, beginning with ‘undeniable’ axioms. Of course the axioms are the weakness; in truth none are undeniable. As an example, for centuries philosophy took it as the most obvious of axioms that ‘nothing can come from nothing’. Physics has now shown that a vacuum—‘nothing’—is full of a soup of particles which come from nothing, exist for a few moments, and then to nothing return. The axiom was wrong, and in this it was typical. Science was different from the start, not sharing this vulnerability. Was it born when the intellectual rigour of philosophers attached itself to the technologists’ age-old empiricism? Steven Weinberg seems of that opinion.9 Spreading into Europe from the Islamic world during the fourteenth century, it put the experimental or observational testing of ideas at its centre. The Arabs translated the classic Greek texts into Arabic, but progressed beyond them. Later they spread what they had learned through Europe in Latin translations of both Greek texts and more recent discoveries. Science and technology always were wholly distinct activities, but even so are highly complementary. At times the doxastic method makes empirical advances which the science community finds it possible to explain, while at others technologists interpret known science in new ways, transforming it into a valued tool. Thomas Newcomen (1664–1729) built the first continuously acting steam engine before 1712, but a full scientific analysis of heat engines did not appear until a century later, when Sadi
Carnot (1796–1832)10 published his Réflexions sur la puissance motrice du feu (1824) and founded the science of thermodynamics. Hundreds of steam engines had been built by then, improved by doxastic methods. Julius Lilienfeld (1882–1963) built transistors in 1925. They were unsatisfactory because of surface effects on the semiconductor not understood at the time. This need not have stopped him; doxastic empiricism might still have won through. But at the time valves (US: vacuum tubes) were improving fast, in consequence of doxastic innovation, putting them at the cutting edge of electronics. They were the thing to be playing with then, and their theory seemed straightforward and classical. Transistors looked a long shot, too difficult to understand, so their pursuit faltered. But by 1947 it was clear that valves simply could not meet the needs of the new computers. Semiconductors looked attractive again, and viable transistors followed. There are times, by contrast, when science precedes the technology, as it did in 1854 when the mathematician George Boole (1815–1864) codified symbolic logic.11 It was not until 1937 that Claude Shannon (1916–2001) demonstrated how Boolean symbolism could be used to design electrical switching circuits, making the discovery while still a student. He recognised the technological uses of a particular pre-existing scientific insight. An enthusiastic juggler, Shannon worked on information theory in the famous Bell Telephone Laboratories, which extended over such a large area that his preferred method of getting about was by riding a unicycle, often juggling the while. At age 84, Claude Shannon died of Alzheimer’s disease, after years in which his own ability to communicate faded fast. He never knew the digital world he helped bring into being.
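Shannon’s insight is easy to show in miniature. The sketch below is my own illustration, not anything of Shannon’s or the book’s: it expresses a one-bit half-adder—a tiny switching circuit—entirely in Boole’s AND, OR and NOT, and checks it over its whole truth table.

```python
# A one-bit half-adder written purely in Boolean algebra:
#   sum   = (a AND NOT b) OR (NOT a AND b)   (exclusive-or)
#   carry = a AND b
def half_adder(a: bool, b: bool):
    total = (a and not b) or (not a and b)
    carry = a and b
    return total, carry

# Exhaustive truth-table check -- the kind of reasoning Shannon showed was
# equivalent to analysing the corresponding relay or switch circuit.
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", tuple(int(x) for x in half_adder(a, b)))
```

Designing the physical circuit then reduces to simplifying the Boolean expression, which is what Shannon’s 1937 thesis demonstrated for relay networks.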
Doxastic Delivers

Many technology advances of the past were wholly doxastic, without any benefit of scientific understanding. That was how technology first got started, and a little continues like that to this day, yet it should do so only when relevant scientific insights are just not available. In that case a hunch, a guess, an intuition, a tradition perhaps, but frankly little more,
suggests what to do and a tentative design is created. Something is built or modelled along those lines. It is then exhaustively tested to see whether it really does do what was hoped for. Adoption of this earlier doxastic method, when the paucity of the science background makes it inevitable, remains a respected and legitimate part of technology projects. Counter-intuitively, it usually has extremely low-risk outcomes, because testing must be exhaustive and rigorous, to establish for sure that the result can in fact do what it is supposed to. Without it fewer projects could be brought to a successful conclusion.12 My first technology job was with a company started in 1933 as a British source of American-style electronic valves—vacuum tubes in US parlance. In the early 1950s, whilst an undergraduate, I joined them for the summer vacation to assist, in a minor way, with valve design. Arriving from a physics ARCS course at Imperial College, I was perturbed to find that all valve development was done in doxastic style, modifying existing designs, their own and other people’s, using an empirical approach. Soon I understood that there was too little useable science available to do it any other way, given the dearth of computational facilities at that time. The first skill learned on joining the small design team was exactly how hard and where to strike the glass envelope of a valve with the nose of a pair of side-cutters. Done right, this would result in the glass envelope breaking away cleanly so that the electrode structure could be extracted undamaged for examination and measurement. Guided by trusted opinions, these dimensions could be modified to create a new design to meet a different specification.13 And so it did, though usually only after several tries. ‘C’est brutal, mais ça marche’, as André-Gustave Citroën commented, introducing the clutch-gearbox combination into early cars—‘It’s brutal but it works’.14 The doxastic approach made possible a start for technology millennia before science appeared. Trial and error was its tool, and progress glacially slow. Remarkable things were achieved, though, from laminated finery steel sword blades at one extreme to hammer-beam roofs at the other. There were also many failures. Legend has the sword Excalibur shattering when it never should have done had it been rightly forged. Mediaeval buildings sometimes fell down soon after construction. Old Sarum Cathedral collapsed in a storm five days after the builders left.
Using every last insight science offers before taking a step beyond it into the unknown, as we do now, greatly reduces the empirical failure rate. And there are heartening times when science comes up with something new and helpful at the last moment, like the US Cavalry in old cowboy movies, appearing over the hill, flags flying and bugles sounding. A great experience, but even so, doxastic methods will never be redundant. The ultimate challenge to all designs comes from the social environment. Will society allow what is proposed to go forward, or would its impact prove intolerable? The early telegraph, using inconspicuous wires along railway tracks, did not threaten the quality of life in the mid-nineteenth century, and evoked negligible social objection. One of its competitors proved less benign. In 1837, effective signalling between London’s Euston station and a stationary winding engine was vital to operating the London and Birmingham Railway. Sited at Camden Town, nearly 3 km to the north, the winding engine cable-hauled trains up the difficult initial gradient, but needed a signal telling it when to pull. William Cooke offered his telegraph for the task. The London and Birmingham’s directors opted for something less exotic and much cheaper: a pneumatic whistle at Euston powerful enough to be heard all the way to Camden Town. Soon life within a 3 km radius of Euston became unbearable from the frequent piercing whistle blasts. Protest was vociferous, and before long the silent telegraph took over.
Mysteries of the Craft

This is the key to a deep divide between technology and science. It arises because the most advanced technology, even in our present time, is forced to go beyond what science knows and can speak about with authority. It does so, as technologists have since the beginning, by the doxastic method—through intelligent empiricism tempered by informed opinion. Towards the end of the twentieth century I directed a large UK company that began silicon chip production. The silicon foundry was designed using the best scientific insights of that day, yet at start-up nineteen out of twenty of the finished products proved defective. What did the
team do? They all continued to keep a close eye on new science as it appeared, in case somebody, somewhere cracked a relevant secret. Meanwhile, acting in the doxastic mode characteristic of technology, the team tried simply everything they could imagine that in their opinion might, conceivably might, affect the result. They tried endless adjustments to the equipment, improved the purity of materials used and made cleanliness in the production area outclass surgical operating theatres. ‘To get this process right I will spit in the distilled water at the full moon if I have to’, said their young leader to me. Some changes improved things, others made no difference. When their alterations helped, the team would try to understand why, but often had to accept as convincing purely what their eyes told them. Full scientific explanations could come along later, if they were lucky. Yet even without that additional benefit, two years from start-up nine out of ten of the chips from our silicon line worked, thanks to the ‘mysteries of the craft’ they had found for themselves. I was very proud of them. What they did was not science, making no contribution to the advancement of knowledge of the natural world. It told us how to do things successfully, but not why. That was for the scientists to find out; meanwhile we were making chips we could sell and others could use—a triumph for the doxastic method. Our team was not seeking knowledge; they wanted to make useable chips. This anecdote is typical of doxastic technology development, but not of scientific research. The way technologists do things is empirical: experimentation based on trusted opinions, but often running ahead of scientific conviction, which they hope will catch up later. The evolving designs of traditional artisans came by the same process; the only difference is that now things go much faster. For the most innovative technology complete scientific understanding is unlikely, and doxastic instantiation must stand in for what is missing. Where good insights are available, science promises successful outcomes by ensuring that the opinions the doxastic method depends on are indeed to be trusted. Actually though, the empiricism is the fun bit, if nerve-racking. The coming of science has had the effect of making the doxastic method faster and more certain in operation, and the sense we have of a dizzy acceleration in technology is becoming ever more widespread.
The Royal Society admits those who make significant contributions to science as Fellows, be they notionally scientists or technologists. Our engineering Institutions, central to the technology professions, admit as Fellows people whose qualifications are in science, subject to certain minimal conditions. Yet ultimately the two, technology and science, have entirely different objectives and ways of working, and entirely different criteria of success. So science and technology get along together well enough, like affectionate cousins, but always a certain tension remains.
Notes

1. James, F. (2020) Michael Faraday, Cambridge Univ. Press (Cambridge, UK).
2. Hubbard, G. (1965) Cooke and Wheatstone, Routledge (London, UK).
3. The most significant contribution by Morse was the invention of serial binary digital signalling, but it is doubtful that he realised its generality at the time.
4. Mullar, L. (2015) White Magic, Kindle Books (online).
5. If it really is as old as this, what do the archaeologists say about it? ‘Technology: one of the three basic components of culture; the systematic study of techniques for making and doing things. It is the way humans have developed things to help them adapt to and exploit their environment.’ Most dictionaries agree, offering definitions for technology such as ‘The specific methods, materials, and devices used to solve practical problems’. The Oxford English Dictionary (2nd Edition, Version 4.0) defines technology, a word first appearing in 1615, as ‘the scientific study of the practical or industrial arts’ and ‘Practical arts collectively’.
6. OED 2nd ed.: ‘pertaining to opinion’, first used 1794. The doxastic method of problem solving is not unique to technology, but is one of a small handful of problem-solving strategies that all humanity adopts. If there is no strictly logical path offering an answer, the choice is between the doxastic method and ‘blind’ guesswork, both widely used.
7. Recently there has been considerable interest in doxastic issues among philosophers.
8. Farmelo, G. (2002) It Must be Beautiful, Granta (London, UK).
9. Weinberg, S. (2015) To Explain the World, Allen Lane (London, UK).
10. In 1831 Carnot suffered a high fever with some mental disturbance. In the following year he was admitted to a private asylum where he died, allegedly of cholera though syphilis seems possible.
11. Boole, G. (1854) An Investigation of the Laws of Thought (facsimile), Dover Publications (New York, USA).
12. For some lost reason, this approach became known, particularly among engineers, as the ‘brute force and ignorance’ method, often written BFI. Really just a more down-to-earth name for doxastic empiricism, the words have a certain robust charm and are widely used. No aircraft flies without components developed by BFI, shocking though the discovery may be to the uninitiated. As a young Lecturer at Swansea University, I dedicated my first book, on engineering design, ‘to Brute Force and Ignorance’. My publisher was unhappy, agreeing only when I put it in Welsh, as ‘i Nerth Bwystfilaidd ac Anwybodaeth’, which he hoped few would understand. When the book was adopted by the United States Naval Academy, I was surprised to receive a letter from one Admiral Gabriel, asking what language the dedication was in, and for a translation so as to ensure, as he explained, that it harboured no ‘Communistic tendency’. I replied giving the information required, even though the whole thing had the feel of a student jape. The letter was addressed to ‘Swansea University, Wales, Scotland’. I felt sure the US Naval Academy knew more navigation than that.
13. Every valve had a long ancestry in earlier ones, developed using step-by-step doxastic modification. I helped the designers to perfect the sweet little 6BR7 valve, used then in hi-fi amplifiers, also the monster 6CD6 (which took nearly 12 watts to heat the cathode). If memory serves, our 6CD6 was two of the earlier 6L6 structures welded together. It was designed to drive large cathode ray tubes (CRTs), used as the screen of every TV set then. These devices are gone today except in a few niche uses—valves, CRTs, all museum items now.
14. The common language of technology can be entertaining. As a professional technologist, sometimes I have felt it wise to keep the fact of what was being done away from laypersons prone to panic. I would say, with impenetrable vagueness, that a proposed development will be completed by the accepted method of doxastic instantiation. In-house though, when it is clear that a development will have to be undertaken using a doxastic approach, one may hear ‘We’ll have to knife-and-fork this one’.
3 Why Are Humans Different?
On April 3, 1973, Martin Cooper made a call on Sixth Avenue, New York, from a handheld cell phone. Reporters and passers-by watched astonished as he keyed in a number right there in the street and put the phone to his ear. Nothing quite like it had ever been seen before.1 The cell phone was one of the most recent of technology revolutions, and a big one too, but it was very far from being the first. They have happened again and again all through our history. Stone-age people were probably the first to experience one. Until that time they made durable tools, weapons and other small artefacts from flint by chipping it to shape, a difficult, slow business with severe limitations.2 Their revolution happened around 4500 BC with the introduction of bronze. This was not the first attempt at using metal. Copper, easiest to extract from its ores, came earlier but did not achieve much. The metal is soft and easily deformed. But add 10% tin to copper and it becomes bronze,3 and everything changes. Harder and far better able to hold a cutting edge than copper, it could be cast or forged into almost any desired form. Widespread use of metal for making and doing was a powerful new technology. Fabricating sharp tools and weapons by chipping stone suddenly looked unattractive. The Bronze Age had begun.
Yet this change was not only a matter of how things were made, not just tools and weapons with better cutting edges and more effective body-armour. The social structure underwent a profound change too. Hierarchical societies appeared, an age of kings and heroes. Those able to source, process and fabricate bronze could gain access to power and wealth. If you wore a bronze sword and armour you could make the rules for those who did not. A privileged new social class emerged. But the next revolution, the Iron Age, was soon coming down the road. Among early technologies, it was particularly new sources of energy that made the major impact. Wind and water mills had a long and significant history of improvement. Using the tractive power of horses was transformed by new tackle, such as hard horse-collars, and new management. Various ‘partnerships’ between humans and animals were a frequent characteristic of the search for amenable energy sources. They included human slavery. Although rare in hunter-gatherer communities it flourished with the coming of agriculture, thirsty for labour. So hard to eliminate, even after it became morally repugnant, it remained with us patchily until agricultural machinery arrived to do a cheaper, more tractable job. But the greatest of all the early technology revolutions was surely the development of sailing ships with ocean-going capabilities. In the Classical period many large ships were propelled by human labour; they were rowed by banks of oarsmen—autonomous propulsion internal to the ship. The sailing ships which superseded them had nothing of that kind. They exploited natural energy sources: wind and sometimes tides and currents. Yet the progressive development of the ship-master’s skills made precise navigation and counter-intuitive sailing into the wind matters of routine. The environment potentially accessible became the whole world. The social consequences that followed, from global trading to oceanic empires, are history. Revolutions in technology are ever thus: changes in ways of making and doing lead to changes in the way society functions. Debora Spar4 argues that this is the common consequence of all technology revolutions. New technology creates new areas of activity, often able to bring power and fortunes to its pioneers and pirates.
Since time out of mind humanity has lived beset by occasional revolutions in technology, each time changing the world beyond recognition. Technology is never in steady state, so neither are our social forms and institutions. The mobile cell phone was the product of just such a progression. Long before that we had a world in which electronics made possible broadcasting, world-wide telecommunications, radar, radical new scientific instruments, sound movies and much more. All these became viable thanks to vacuum electronics: to valves (US: tubes). World War II was fought using early electronics built like this, which gave the developed world its distinctive mid-twentieth-century character. Then along came another revolutionary change. Everything that vacuum devices could do also proved possible in solid-state crystals. The silicon chip was born, and with it microelectronics, which led to our present Digital Age.5 Vacuum electronics passed into history. The coming of ‘the computer on a chip’ made the cell phone possible, but was itself part of a deeper on-going revolution. The earliest silicon chips carried only a few electronic devices, ‘gates’ which performed simple logic functions, but soon chips were made carrying thousands, later millions of gates. As early as 1971 they were sufficiently complex for Federico Faggin to design the 4004, a complete functioning computer on a chip. Selling for $60, when the most modest computers had been priced in hundreds, this was the earliest ‘microprocessor’. When I first saw one, the 4004 sent a shiver down my spine, for I knew the digital future was upon us, exciting but almost unimaginable. In 1974 the greatest density of electronic devices that could be formed on a silicon chip was comparable with the density of neurons seen in a section of a human brain. Our brains have not changed since then, but the number of devices it is possible to fabricate on a chip of constant size has grown by a factor beyond a million, the complexity doubling every twenty months or so. Each succeeding generation of chips works faster too. By this explosion of technique our newest technologies are carried forward. A stupefying rate of advance, it allows silicon chips to tackle ever more sophisticated tasks. Cooper’s phone astounded bystanders. It was just one innovation among many more to come. Microelectronics has changed the character of life, and this is a one-way trip. Over only a couple of decades universal digital computer use
has become the foundation of the developed world. Whether as desktops, laptops, feature phones, tablets or naked chips, computers are prominent in all our lives and embodied in our artefacts everywhere. They are in cars, often several to a vehicle, in washing machines and dishwashers, phones and medical instruments, even singing birthday cards. Earth has more computers than people. It is a pity we ever settled on calling them computers, as we have. The silicon chip, with its wealth of software apps, is a true general-purpose digital machine. Key to every new thing we are doing, from intelligent machines to space travel, it also opens new prospects for science. We have learned the trick of building ultra-reliable general-purpose digital machines with more than biological complexity at absurdly low cost. The impact has been seismic.
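A rough check of the doubling claim above (my own back-of-envelope arithmetic, not a figure from the text): doubling every twenty months from 1974 onwards gives, by around 2020,

$$\frac{(2020-1974)\times 12\ \text{months}}{20\ \text{months per doubling}}\approx 28\ \text{doublings},\qquad 2^{28}\approx 2.7\times 10^{8},$$

comfortably ‘a factor beyond a million’.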
The Great Discontinuity

An Assyrian inscription from the Neolithic period, 2800 BC, expressed concern for the future: ‘The Earth is degenerating today. Bribery and corruption abound. Children no longer obey their parents, every man wants to write a book, and it is evident that the end of the world is fast approaching.’ This alarming prediction has not yet come to pass. Despite all the problems, our population, its lifestyle and culture have flourished for millennia. Were we just lucky or has some unique factor made it possible? On my desk I have a small axe, made of stone and said by those who should know to be a hundred thousand years old. Once it had a sharp cutting edge at the business end, while the other gave a good grip for the hand. Found in the Thames valley, its cutting edge is blunt now, but I quickly realised it was not designed for use by left-handers—the finger indentations fit only the right. This had to be a design decision by its maker. The technique used for fabricating handaxes was chipping flints. Techniques are the methods by which things are made or done, means and materials to create new products, processes or services. Design is something else: it is the ordering of whatever we have decided to bring about, guiding techniques in their application to achieve the desired outcome. Design is over-arching: it concerns not merely the choice of
techniques to be used and the right ordering of their use. Even before this can begin it demands consideration of the configuration of the final product, process or service as it is conceived. What form will it take, how will it be used, and how will it seem, not merely to those who deploy it but also to all others who come into contact with it? So technique and design are distinct and different. Only when these two are married together do we have a technology. Fabrication of my handaxe was true technology: both a technique and a design went into its making. When our distant ancestors first appeared something changed in the world, something made a difference. At one moment of geological time there were various hominins around, special in their own way, yet not quite enough to stand totally apart from other mammals.6 They had many of our characteristics, and were on Earth a long time, Homo erectus five times longer than we have been so far. They used tools, as do many animals. They also had fire, and some claim language. So they were clever tool-using hominins, but they never looked likely to dominate Earth, as we do. A short time later, suddenly there we were: like them in many ways, yet metamorphosed and remarkable indeed. On pre-history time scales it was fast. Interpretation is complicated because rapid cultural evolution followed immediately. Something, perhaps a special genetic advance,7 gave us a capacity for acquiring developed language. Estimating the age of developed language from theoretical models gives results which cluster around 200,000 years.8,9 The earliest bones of modern human type are at least 150,000 years old, which more or less coincides with the language estimate. In the past, difficulty in settling these issues came directly from the nature of the evidence available to archaeologists. With rare exceptions, what they discovered had to be what was robust enough to survive, mostly in the ground, for many thousands of years. Stone is an obvious candidate. But speech is the work of soft tissues: throat, tongue and brain. They are not survivors.10 The date at which developed language began long remained unresolved. In recent decades new evidence appeared. A most extensive repository of our past is in the genes we carry. For a long time the usefulness of genetics in palaeontology was limited by the difficulty of extracting DNA
from old bones without picking up modern DNA as contamination. Recently great progress has been made by adopting the ‘clean room’ technology pioneered in microelectronic silicon chip manufacture. Now it is possible to extract significant amounts of DNA from the oldest bones, provided they have not been fossilised to stone. In parallel with this it has become possible to sequence the human genome, the package of genetic material passed on from parents to their children. The combined effect has been game-changing. The genomic evidence now available confirms a sharp increase in the spread and sophistication of human life in the Upper Palaeolithic period, about fifty thousand years ago. Some call it the Late Palaeolithic Revolution. Maybe this was not the start point though but the climax of a trend; perhaps we, Homo sapiens sapiens, talked to each other from the beginning. When our first ancestors appeared, developed language came with them.11 Was this when the tool-use of earlier hominins transformed itself into evolving technology? Much evidence fits such a picture.12 With language people could communicate complex ideas. Poets sang praise songs for kings, and literature was born. Philosophers began their long dialogue with the universe, and from this, much later, science emerged. Teachers could reveal to others the complexities of their world. Most importantly, the makers and doers of things could dialogue with peers and apprentices about their techniques and designs. So it was in the Upper Palaeolithic that design really took off, just as prehistoric cave paintings were blossoming. Both wall art and artefact design require a similar capacity for abstract thought, creativity and visual imagination. From an early date innovative design was used in preparing food, making clothes, constructing shelters, and creating weapons. Individuals built a store of how-to-do-it memories, which they communicated to each other and to their young. Early technologies grew fast using language. Thus began a lively technology tradition which grew ever more accomplished, able to reshape the world we live in, and with it our lives. With language as its indispensable ally, technology began, generation by generation, patterning and re-patterning the changing human world. A continuously evolving culture came into being, and we have been held in its embrace ever since. Language is the tool of culture, but technology has been its engine.
Notes

1. The phone weighed over a kilogram, and Cooper said the short battery life (thirty-five minutes) did not matter, as it was too heavy to hold to the ear that long anyway.
2. It might be thought that chipping flint had itself been a technology revolution of the very distant past, but this pre-dates humans. It was a skill mastered by Homo erectus, a hominin living long before our appearance.
3. Although tin bronze is commonest, many other alloyed metals and non-metals are used in bronzes for special purposes.
4. Spar, D. (2015) Pirates, Prophets and Pioneers, Kindle Books (online).
5. Geoffrey Dummer (1909–2002) first proposed this type of integrated circuit in 1952, six years before Jack Kilby, who received a patent for essentially similar ideas.
6. Gould, S. & Eldredge, N. (1993) ‘Punctuated equilibrium comes of age’ Nature 366 (6452) (UK).
7. The matter is disputed.
8. Atkinson, Q. (2011) ‘Phonemic diversity supports a serial founder effect model of language expansion from Africa’ Science 332 (6027): 346–9.
9. Dunbar, R. (2004) The Human Story, Faber and Faber (London, UK).
10. This is not unique to speech. Palaeolithic women and men must have copulated, yet there is no archaeological proof.
11. Homo neanderthalensis, one of the human types immediately preceding us, is now widely believed to have had speech, though how developed it was is hard to say. Long ago there were half a dozen or so human types according to the findings of current studies, any or all of whom may have had developed language. Although with mostly non-overlapping geographical distributions, they could and did interbreed, and many of us today carry a few percent of Neanderthal genes.
12. Shryock, A. & Smail, D. L. (2011) Deep History, U. of California Press (Berkeley, USA).
4 Three Flavours of Technology
In technology, there have been so many revolutions over the centuries that it would be impossible to list them exhaustively. An important early one was the animal breeding that made it possible to recruit horses as energy sources and a means of transport, along with the invention of the horse tackle needed to make it practicable. Sailing ships were another fabulous early technology, ultimately enabling ships to travel around the world relying purely on the forces of nature, wind and currents, without internal means of propulsion. Also in the so-called Dark Ages of Europe, there was a revolution in sources of energy, derived both from further improved horse harness and from windmills and waterwheels. Technologies form a complex pattern. Can we inject some structure into thinking about them? Physicists believe the universe is capable of description in terms of three entities—matter, energy, information—which give it content, life and structure. A universe without matter would be a void, without energy it would be utterly cold, dead and motionless, without information1 it would be diffuse and formless. The three components by which our universe is described are also represented in the three great ‘flavours’ of technology: the technologies of matter, energy and information.
The Technology of Matter

Finding or building somewhere to live, as protection against the weather, other human beings and possible predators, was an early objective for the technology of matter. Available materials were used, adopting the techniques suitable for forming them and as a result evolving distinctive designs.2 Many of these building technologies remain vigorous to the present day. But building technology has also seen many radical advances. The Bessemer process, invented in 1856, drove steel prices down from £40 per tonne to £6. Its widespread use became practicable, displacing wrought iron. Steel-frame buildings were feasible, the first being the Rand McNally Building in Chicago in 1890. Later, skyscrapers began to appear. One of the pleasures of technology is the improbable people involved. Joseph Monier (1823–1906), a French gardener, wanted cheap, strong terrace pots. Having seen an artist modelling clay on a wire armature, he made a pot frame in steel mesh then ‘clothed’ it with concrete. His pots proved strong and durable. In 1877 Monier patented reinforced concrete, which was then picked up around the world and applied to buildings, bridges, dams and highways. For a while design difficulties limited this versatile material. The application of finite element techniques by Olgierd Zienkiewicz (1921–2009) went a long way towards solving the problem.3 There are numerous composites, of which reinforced concrete was a pioneering example. Now plastics are reinforced with glass, carbon or metal fibres. They make new solutions to old structural problems possible, particularly where light weight is essential. The technology of matter extends far beyond structural concrete and steel, with many ‘new’ materials now economically available. In 1886 Paul-Louis Toussaint Héroult (1863–1914) in France and Charles Martin Hall (1863–1914) in the USA independently developed electric smelting of aluminium. This transformed it from being a precious metal, specified by Napoleon III for a prestigious dinner service, into one of the commonest, with uses including aircraft, cooking pots, beer cans and shoes for race-horses. The list of exciting ‘new’ materials could go on, with stainless steel alloys, light alloys and titanium for aviation structures, and many more.
Aside from metals, there is a choice of ceramics and glasses, ranging from the blocks of near-perfect optical glass used in the inertial navigators at the heart of nuclear submarines, to the ceramic coatings that revolutionised non-stick cookware.
But Energy Was Not Far Behind

So far as the technology of energy is concerned, although later supplemented by horses and oxen, at first the only source of mechanical energy came from human muscle. People querned wheat to meal, drove hand-axes, dug soil, threw spears and paddled boats. It was this that ensured the persistence of slavery as a social institution until developing technology undermined its economics. One of the drivers of technology was Europe’s loss of population after successive waves of plague, each killing around a third of the people. Before agriculture, the estimated world human population was under 15 million. Yet 50–60 million people lived in the Roman Empire by 350 AD, with Constantinople (modern Istanbul) as its capital, ‘the second Rome’. Numbers were repeatedly cut back, though, by epidemics of plague. By 800 AD Europe’s population was only half what it had been two centuries before, due to the Plague of Justinian. In the fourteenth century, world population fell from 450 million to under 375 million due to the Black Death.4,5 In the aftermath of the great plagues, workers were less numerous everywhere and human energy more expensive than in earlier times. So European mediaeval technologists were preoccupied with improving the sources of nonhuman motive power available to them. The need to replace expensive human muscle was urgent, and for several centuries this became the challenge under which mechanical designers worked, millwrights and engineers alike. It conditioned their thinking and a revolution in energy technology started. Once there were no longer enough people to keep the land fully in cultivation, agriculture began to depend more heavily on horses, wind and water. Doing so transformed lives in Europe between the sixth and fourteenth centuries. Described from his own literary perspective by Petrarch (1304–1374) as ‘the Dark Ages’, in fact this was a time of incessant technological
innovation. First, mediaeval designers achieved advances in horse harness, leading to horse traction for the plough—faster than oxen, so the productivity of ploughmen was improved. Rigid horse collars and iron shoes did the trick, pure design innovations requiring no previously unknown techniques. They also made possible heavier and faster carriages and wagons, extending the range of overland trade. The modest force needed to propel a boat through water meant wind could be a natural resource exploited early for mechanical energy—boat propulsion by sailing. From Classical times to the late nineteenth century, when steam displaced them, sailing ships dominated civil and military transportation on the sea, lakes and large rivers, exploiting a natural resource with increasing virtuosity. Now, except for sport and recreation, they are gone, eight thousand years of marine technology brought to a close. Wind power had another significant application. Of growing importance in mediaeval times were windmills, which had been used in Persia from the seventh century. In Europe, they had sails mounted on a horizontal axis, at first facing in a fixed direction. In order to work they needed wind, but it had to blow from the right quarter. Post mills, introduced into Europe in the 1100s, had their structure supported on a central three-legged trestle, about which they could be rotated manually. By thrusting against a push bar, the miller could make them face the wind from whatever direction it might blow.6 The post mill was a design innovation alone; techniques needed for its construction were already long established. Later it was superseded by the cap mill, in which it was not necessary to turn the whole mill but only the cap at the top, on which the sails were mounted. The turning could be automated, and also made it possible to build mills much higher, catching more wind with longer sails, because the mill’s weight was no longer limited by the miller’s ability to push it. Metals were expensive, so mill machinery was fabricated from wood, at first even down to the gearing in the mechanism. Wood surfaces sliding over each other were treated liberally with goose grease, an important mediaeval lubricant. This sustained the numbers of geese and their eggs, destined for the kitchens of the fortunate, also of conveniently available goose-girls, famed in legend as brides for passing princes.
Post mills were used not only for grinding corn and driving forge machinery but could also drain wet lands, using Archimedean screws lifting water into the canals that carried it away. This usefully increased the area put under cultivation throughout Europe, especially in the Low Countries. Coupled with horse ploughing, a rise in agricultural output resulted, yielding the wealth on which the European Renaissance flourished. ‘Dark Ages’ technology powered the cultural flowering Petrarch experienced in the fourteenth century.

Water mills had also been an important source of mechanical energy since Classical times. In Europe, the wheels are either undershot—initially used by the Romans—or the more efficient overshot, developed soon after the beginning of the current era. In this type, the water flows from above into the buckets on the rim of the wheel. Overshot wheels have been built generating as much as a hundred horsepower. The evolution from the undershot wheel into more efficient overshot and back-shot wheels did not require new techniques of construction but was purely a matter of design. Mediaeval European water mills were widespread.7 Waterpower was extensively used wherever available, again for grinding corn, in textile manufacture, for wood and metal working and for draining wet land.

An aside: the undershot mill, with water impacting on paddles as it passes beneath the wheel, was surely the intrapsychic object leading to the design of paddle-wheel ships. In the anonymous fifth-century Roman treatise De Rebus Bellicis, a paddle-driven warship is described, a team of oxen driving the wheels. It was just a design idea. The paddle ship needed steam power to make it useful, which was to take thirteen centuries more, due to the disregard of Hero’s steam turbine.
The Wonders of Steam

Improvements in power sources by mediaeval technology only partly solved the energy problem. Using water or wind power required expensive mill construction at fixed locations. Energy from these sources was neither cheap enough nor always accessible where needed. The road to cheap, flexible mechanical power began with Newcomen’s steam
pumping engine in 1712. A long step forward was the rotative steam engine, one of the earliest designed by Nicolas-Joseph Cugnot (1725–1804), powering his enormous wooden tricycle (1765). The first mechanically propelled vehicle, it achieved a brisk walking pace and is now a treasured exhibit in the Paris Musée des Arts et Métiers.

Water running from an upper pond, fed by a stream, through a wheel, and then to a lower pond had long powered factories and mills. The earliest steam installations sometimes used a pumping engine, like Newcomen’s, to pump the water back from the lower pond to the upper whenever necessary. The established, and hence low-risk, technology of water power rescued the mill’s working from the vagaries of rainfall and river flow while retaining ‘free’ water power whenever practicable. This helped offset the high fuel consumption of inefficient early steam engines. An elegant development in technology, at this stage it owed nothing to science.

An improved steam engine, designed by James Watt (1736–1819) with his assistant William Murdoch (1754–1839), was demonstrated in 1771. Its more efficient conversion of coal into mechanical energy opened the nineteenth-century golden age of mechanical engineering. Steam engines, ever improving in efficiency, began to replace water wheels in factories, but now directly as the energy source of choice. One engine might power a whole factory. Suddenly, the preferred site for factories was near cheap sources of coal, rather than strongly flowing rivers. The results were dramatic, decimating water-powered textile manufacturing in the West of England, to re-establish it in the North where coal was cheaper. Steam power drove machines of rich variety and versatility, made possible steam-driven factories and workshops, railways, ships, road vehicles, even the beginnings of mechanical flight. It was a period of industrial innovation yet much more too. The social changes resulting from this new technology went way beyond the bounds of industry.

Transportation by land and sea changed out of recognition with the coming of steam. Steamboats on rivers and lakes were the first to appear. In France, Claude-François-Dorothée, marquis de Jouffroy d’Abbans (1751–1832) demonstrated a paddle steamboat, the Pyroscaphe, as early as 1783. Propelled by side wheels—the forgotten late-Roman invention—it was powered by a modified Newcomen engine. Not until William Symington
(1764–1831) developed improved engines did steamboats establish themselves. The Charlotte Dundas was successful on the Forth and Clyde Canal from her first outing at the beginning of 1803. She impressed Robert Fulton, an American present at the trial. He had his own steamboat in France within the year and later established the first commercial steamboat service with the Clermont on the Hudson River. The 240 km journey from New York to Albany was completed in thirty-two hours. At sea, the SS Savannah made the first transatlantic crossing by steam in 1819, sailing part way to conserve coal. By the mid-nineteenth century, both British and US steamships were regularly crossing the Atlantic, and ocean-going steam had established itself. Change was also coming on the roads. Before steam, light two-seater horse-drawn curricles had achieved around 25 kph, but could only sustain it briefly without harming the horses. From the outset, trains, when pulled by locomotive steam engines, could surpass this speed, whether carrying freight or passengers by the dozen and sustain it for hours at a time. Nearly half a century after Cugnot’s first steam-driven vehicle, George Stephenson built his locomotive Blücher, an engine having flanged wheels and driving through simple adhesion to the track. This locomotive design soon became accepted practice. It was used for the Locomotion on the world’s first public-service railway, the Stockton and Darlington, which ran from September 1825 over a 40 km track. Passengers liked the service, and locomotive trains transported coal at half the cost of using horses on the same track. Within little more than a generation trains with speeds beyond 100 kph (a mile a minute) became commonplace. Journeys which had taken days by horse-drawn coaches, requiring multiple overnight stops to change the horses and to rest and refresh the passengers, were curtailed to hours when undertaken entirely by rail. Measured in travelling time, the distance between places on the rail network was reduced up to ten-fold. The inevitable result was a social revolution, working itself out during the Victorian period.
If I Only Had Wings

Human flight had been a matter of myth from early times, but in the late eighteenth century, it began to look a serious possibility. Aviation was set to become another technology with overwhelming socially transformative power, ultimately replacing ships as the dominant means of international passenger transportation. It slashed the real-terms cost of long-distance travel to the point where it could become a mass activity, no longer the privilege of a wealthy few.

On 21 November 1783, Pilâtre de Rozier and the Marquis d’Arlandes took off from a point near the Bois de Boulogne, in Paris. At a height of over 900 m and for a distance of 9 km, they flew in a hot-air balloon. The Montgolfier brothers, owners of a paper factory, financed the balloon and built it of paper. Although it began to burn towards the end, the flight finished with a safe landing.

In a remarkable feat, not as well recognised as it should be, just over a year after that first flight, on the 7th of January 1785 the English Channel was crossed by air. Flying from Dover to Guînes were Jean-Pierre Blanchard (1753–1809) of France and the American Dr. John Jeffries (1745–1819).8 They used a balloon with a hand-cranked propeller, taking over two and a half exhausting hours to make the flight in a dead calm.

Many more balloon flights quickly followed, using either hot air or hydrogen as the lifting gas. Going to view a balloon ascent became popular, and soon flight was both a hobby for the rich and an occupation for professional entertainers. What was needed, though, was a flying machine that would not be the plaything of the wind, but go wherever might be required, little influenced by weather, in fact, a true flying ship. Henri Giffard travelled 27 km from the Paris racecourse to Trappes in the world’s first airship in September 1852, in a dead calm. With a cylindrical envelope and hydrogen lift, it was propelled by a steam engine at 10 km/h in still air, the speed of a trotting horse. Desperately under-powered, Giffard’s was scarcely a practical vehicle, yet the age of airships had begun, and mechanical propulsion had taken to the air. In the airships that followed either hydrogen or the more expensive helium was the lifting gas. Large airships
ultimately required a hundred thousand cubic metres of gas, so the price was important.9
The First Passenger-Carrying Airline

Germany took an early lead in commercial aviation. DELAG, the first passenger-carrying airline,10 was established in 1909, inevitably using airships. Air passengers in Germany around 1910 could travel in luxury. The cabin interiors were opulent, with much glass and mahogany. Refreshments were served in flight, and ladies wore fashionable hats. The subsequent modest success of Zeppelin airships in bombing England during World War I suggested a military pool of technical competence which would support civil aviation. Early in the twentieth century it began to look as though airships would be the future of aviation despite their problems, believed not to be insurmountable.

A regular service from Germany to the USA was inaugurated by DELAG in 1928. The non-stop flight took at least three days, depending on the weather. In 1932 regular flights to Buenos Aires were added. The burden of fuel and stores for housekeeping on these long flights was considerable. It was also necessary to have a large crew, in part to serve the domestic needs of passengers, in much the same way as on the great liners at sea. The economics of civil airships, even as an elite service, were problematic.

Adolf Hitler (1889–1945) described airships lifted by the highly inflammable hydrogen as ‘flying crematoria’. He shunned them, despite Germany’s technical lead. Yet the disastrous safety record of the big rigid airships was primarily due to structural failures, and ships lifted by non-inflammable helium fared no better. In most disasters where fire happened, it was secondary, though more obvious and dramatic than the underlying structure or skin failure. Indeed, with only 80% of the lifting power of hydrogen, allowing for the weight of the additional gas-handling equipment required, the structural design of a helium ship was even more marginal than for those lifted by hydrogen.

Soon, to the surprise of many people, aeroplanes were proving both safer and more reliable than airships. Before long they demonstrated the ability to fly above the weather. By World War II it was no longer doubted
that the future would belong to them. So the airship proved just a precursor to the airliners to come. Aeroplanes, like kites, depended on aerodynamic lift from their wings, needing no bulky, drag-inducing gas balloons. Quickly improving during the twentieth century, they were cheaper to build and soon much faster than airships, which usually cruised at around 120 kph (75 mph). Passengers found shorter aeroplane journey times attractive. Meals, in-flight activities to pass the time, and overnight sleeping could all be more basic if needed at all. The crew were fewer and the economics looked much better. To the military the advantages of higher speed and reduced target size were obvious.
Information, the Technology of Feeble Electric Currents

The third flavour of technology, information, had its early pre-scientific successes in the invention of writing and printing. However, its great flowering came with the emergence of electrotechnology. Here things get complicated because many of the same electrical techniques which created a revolution in information technology also, little modified, played a similar role in electrical power generation and distribution—the technology of energy—about a generation later. It makes for a tangled story.

An obsession with instantaneous long-range communication existed from early times, and proposals for electricity to do it appeared in the eighteenth century. Progress was slow because the current produced by a battery in a wire hundreds of kilometres long would have been too weak for detection by methods then known. In 1820 Peter Barlow (1776–1862), a distinguished physicist of the day, asserted that an electric telegraph operating over long distances would never be possible. By ironic coincidence, Hans Christian Ørsted (1779–1851) announced in precisely the same year his observation that a magnetic compass needle was deflected by an electric current flowing in a wire nearby. In doing so he revealed the long-sought interaction between electricity and magnetism—a triumph for science. Only a few months later Ørsted’s advance
evoked a crucial step forward in electrical technology: Schweigger’s ‘multiplier’. Having a wire run alongside a magnetised needle was enough to make the scientific point Ørsted was exploring. That same year Johann Schweigger (1779–1857), at the University of Halle, behaving as a technologist, formed the wire into a coil with multiple turns. With a magnetic needle placed at the centre of the coil, the current passed by not once but many times, greatly enhancing the movement of the needle for small currents in the wire. In the domain of science, Ørsted is rightly accorded the honour of priority, but in technology, Schweigger is the more important figure, because of the practical consequences of what he did by exploiting the doxastic method. Schweigger’s ‘multiplier’, the first galvanometer, was an instrument that gave a visible deflection of a magnetic needle from weak currents. The following year Johann Poggendorf (1796–1877) improved it further by attaching a tiny mirror to the needle, onto which a beam of light could be projected. By catching the beam reflected from the mirror as a bright spot on a screen—metres distant if need be—the smallest movement of the needle could be seen. Detecting the most feeble of electric currents, the mirror galvanometer made communication over extreme distances credible. Schilling von Canstatt built the first electromagnetic telegraph in St. Petersburg around 1825. It used five galvanometers (later six), and transmitted code groups only. Each needle carried a paper disc, the size of a small coin, white in front but black at the rear. All galvanometers showed a white when the system was at rest, but when a current was passing through a particular coil the disc on its needle rotated to show black. The five discs were thus seen as either black or white, depending on whether a current was detected or not. Either of two states could be signalled on each of the five paper discs, giving thirty-two distinct patterns. One of these was assigned to each letter, with a few to spare to indicate punctuation, capitals and so on. Schilling’s telegraph worked well, but it was necessary to learn the code to use it. With this sole exception, all early telegraph designers were convinced that acceptable telegraphs must be immediately useable by any literate person, and so must indicate the letters of the alphabet directly, without
being coded. In line with this view, Charles Wheatstone (1802–1875) devised a variant of Schilling’s equipment—his ‘hatchment11 dial’ telegraph—which avoided the need for the operator to learn a transmission code.12 Wheatstone’s telegraph could indeed be used by any literate person. At first, it was received with enthusiasm by his collaborator William F. Cooke, a retired Indian Army officer, and by others. Its disadvantages were severe, however. It could transmit only twenty letters, using its five pointers, which worked only in pairs, to indicate a particular location on the dial where a letter was written. A restricted alphabet was inevitable. There was no ‘C’, sent as ‘K’ or ‘S’, ‘QU’ was sent as ‘KW’ and so on. There were no lower case letters or punctuation marks.

Wheatstone and Cooke jointly obtained a patent in 1837 and had a commercial telegraph service working from Paddington to West Drayton by 1839. Under Cooke’s day-to-day management, it was extended to Slough. Feedback to the design began to come in from the users. Experience with his working system made Cooke increasingly concerned about the severe alphabetic limitations of Wheatstone’s instrument, yet this was not the worst of his problems. As with Schilling’s telegraph, the lines between stations had six wires, one for each of the five galvanometers and a common return. The six-wire line would have been ruinously expensive over long distances, and reducing it to five, using the earth itself as the return, helped little.

Using an empirical approach, Cooke transformed the economics of the telegraph by designing a system using a single wire between stations, together with an earth return. The wire could be bare, which kept cost down, and was strung between pottery insulators fixed on top of vertical poles, where the insulators and wires were out of harm’s way. He abandoned Wheatstone’s instrument for a simpler one with a single galvanometer, which signalled letters by sending a coded sequence of left and right deflections. This made it possible to transmit over a single wire the full alphabet, upper and lower case, and punctuation marks. The telegraph had become a serious commercial proposition, but it had done so in coded form, not direct-reading as many expected.

Cooke and Wheatstone had begun in an amicable partnership but soon fell into serious disagreement. Wheatstone convinced himself that
his ‘hatchment dial’ instrument got the telegraph started, so he could fairly claim to be the inventor at least of the commercial telegraph. Cooke was quite sure that the system became viable only through the single-needle telegraph, considering himself its sole inventor. The instrument bore the words ‘W. F. Cooke invenit’, despite Wheatstone’s name on the 1845 patent. So both thought they had invented the practical working telegraph, leading to great bitterness.

In 1845, in a series of moves supported by figures in the City, notably J. Lewis Ricardo (1812–62) and G. P. Bidder (1806–78),13 the Electric Telegraph Company was formed, with Bidder as Chairman. Wheatstone played no part; for £33,000 (maybe £2 million in present-day money) the new company bought out his patents and other interests in the telegraph. However, he was shrewd enough to exclude from the deal private telegraph lines, connecting users for personal use. Wheatstone went on to design slow but easy-to-use direct reading alphabetic telegraphs, much the same size as a pocket watch of the period. Based on this, from 1860 the Universal Private Telegraph Company set up a London network linking the houses of the wealthy—failing a secretary, one’s butler dealt with the messages. This private telegraph network also served large business premises and flourished until the coming of the telephone, to which it can be seen as a precursor. Along with other UK telegraph companies, it was nationalised in 1870, when it was valued at £160,000, a substantial part of which went to Wheatstone, making him a wealthy man.

Yet despite this, in retrospect, Cooke had the right of the argument, for in mainstream commercial telegraphy certainly, the future was with single-wire coded designs. Samuel B. Morse (1791–1872) triumphantly demonstrated this in the United States, with the launch of his Magnetic Telegraph Company (1845), soon connecting major cities. In Britain, Cooke’s telegraph network was extended to the whole country and by the 1850s to continental Europe too. A first cable was laid to the United States in 1858 but destroyed in a few days by gross misuse.14 In 1865–1866 two more transatlantic cables were laid successfully. A wave of public enthusiasm for all things telegraphic followed.

Cooke and Wheatstone were finally reconciled when, in 1870, Queen Victoria knighted both of them, along with the elderly Francis Ronalds,
who demonstrated an electrostatic telegraph system as early as 1816, a precursor of what was to come. Stimulated by difficult practical challenges, both electrical science and electrotechnology made rapid progress. Georg Simon Ohm (1789–1854) published his famous law relating voltage, current and circuit resistance in 1827, two years after Schilling’s first telegraph. Soon widely adopted in both America and Europe, electrical telegraphy was transformative in its social consequences.15

• The new railways entirely depended on the telegraph for their safety in day-to-day working.
• Once telegraphs linked European capitals and the US major cities, newspapers reported many world events within a day.
• Trade and business began to operate on a world scale.
• The effect on diplomacy was dramatic, ending the near-isolation of embassies from their home governments.16
• Military uses quickly followed. In their decisive defeat of Austria at Sadowa (July 1866), the Prussians used battlefield telegraphs to full effect. Modern Germany was born and the politics of Europe transformed.
• The cause of imperialism also benefited. In India, the British retained their control of a subcontinent with only 40,000 troops, fully exploiting telegraphs as well as the steam railways that depended on them.

For a time new areas of activity like this, generated by technology revolutions, are unregulated and lawless until they are brought within the control of governments. Often that happens only against spirited opposition. As an example, the present lack of regulation on the internet gives concern—it has been likened to the Wild West.17 Some degree of governmental regulation is already appearing in many countries. For the telegraph, it was not much different at first. Different telegraph systems worked to different technical standards, often incompatible, so the promise of easy, fast international communication was in danger of being lost. The International Telecommunication Union (ITU) worked to solve these problems. The world’s oldest inter-government agency, it
began in 1865 with a telegraph convention signed in Paris. It has now evolved into the UN special agency regulating all telecommunications. Britain’s prestigious Institution of Engineering and Technology started in 1871, as the Society of Telegraph Engineers. Similar organisations appeared in other developed nations. Later, university courses in electrical engineering were launched, along with the no-less-important technician training.
Godalming, Cradle of Worldwide Electrical Power

By the second quarter of the nineteenth century, it was already realised that electricity would be important not only for signalling but as a remarkably versatile source of energy. What began as a development in the technology of information thus now spread sideways into the technology of energy. Rapidly increasing use as a means of communication created a body of electrical engineering experience and in parallel stimulated new scientific understanding. Virtuosity blossomed in the use of electricity at much higher powers than needed for a telegraph. By 1873 Zénobe Gramme, a Belgian, had successfully designed both electric motors and generators for use in a wide range of applications.

The early growth of the electrical power industry was certainly impressive. The construction of the Godalming, England, hydropower station in 1881 was a world first. The steam-driven Pearl Street (New York) power station built by Thomas Alva Edison (1847–1931) was in service in the following year. In industry, commerce, government and homes the move to using electrical power spread, though at first mostly for lighting; arc lamps enjoyed brief popularity. So the birth of the electric telegraph marked a major turning point in human history, even if it seemed less socially significant for a time than the electrical power network to which it gave birth. Electrical power supply is now as important to us in the developed world as water.

The Godalming power station, a converted water mill, generated alternating current (AC), changing direction smoothly fifty times a second,
whereas Edison opted for direct current (DC), which was constant. ‘Fooling around with alternating current is just a waste of time. Nobody will use it, ever,’ said Edison in 1889. However, for almost a century thereafter only AC allowed transmission of power over long distances without intolerable losses, a feat possible because it could easily be transformed to high voltage for the long haul. Such considerations would not have been in Edison’s mind in his early days at Pearl Street, given the limited area then served—he started with thirty-nine customers, all within walking distance. Today AC supply is almost universal throughout the world. Paradoxically, DC is chosen for long-distance bulk transmission of power at extremely high tension—over a million volts. Techniques Edison did not dream of make this possible.

The prodigious advance in the technology of electrical power supply evolves at a great pace and shows every sign of continuing to do so, even as generation by burning coal, oil or gas declines for economic as well as environmental reasons.18 There are plenty of cheap renewable energy resources, although there is still debate over the best of them to go for. One approach uses solar cell installations in the great deserts of the world, which would change the significance of geography. Who would have predicted that Tibet would become an exporter of electrical energy? Yet seemingly it has. In the twenty-first century, heavy current electrical engineering has found an important place in all human societies.

From the 1880s to the early twentieth century, electrical power engineering was the exciting cutting edge of electrical technology. Until the coming of radio, for a couple of decades telegraphy seemed elderly and less stimulating. The Society of Telegraph Engineers (1871) renamed itself the Institution of Electrical Engineers (1889), and as the new century approached was increasingly dominated by its electrical power faction, and understandably so because that was where the action was. Briefly, a curious antipathy flared between proponents of electrical power and communication, between heavy current and light.19 Fortunately, the IEE’s alienation from light current engineering was as short-lived as it was unwise, lasting just long enough for a competing Institution of Electronic and Radio Engineers to come into being. Sensibly they merged, and later, with others, formed the IET in 2006.
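The point made earlier in this section, that transforming to high voltage made the long haul possible without intolerable losses, can be put in numbers. A minimal sketch in Python; the delivered power, voltages and line resistance below are illustrative assumptions, not figures from any real network, and the function name is simply chosen for this example:

def line_loss(delivered_w, volts, line_resistance_ohm):
    # Ignoring reactance and power factor: the current needed to deliver
    # the load, then the I-squared-R loss dissipated in the line itself.
    current = delivered_w / volts
    return current ** 2 * line_resistance_ohm

P = 10e6   # 10 MW delivered, an illustrative figure only
R = 5.0    # assumed total line resistance, ohms

print(line_loss(P, 11_000, R))    # roughly 4.1 MW lost at 11 kV: hopeless
print(line_loss(P, 400_000, R))   # roughly 3.1 kW lost at 400 kV: negligible

Raising the voltage by a factor of about 36 cuts the loss by that factor squared, which is why transformation to high tension, easy with AC, settled the argument for long-distance supply.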
For centuries electricity had been just a curiosity and sometimes an entertainment, but the electric telegraph became commercial in 1837 when Cooke and Wheatstone were granted their patent. Thus began the era of worldwide near-instantaneous communication; however, the discontinuity in human affairs to which it ultimately led was much greater than that implies. Once electricity was in practical use more people became involved in things electrical than ever before. It was with the electric telegraph that a new world began to appear. Electrotechnology accelerated and the electronics, telecommunications and computer industries of the twentieth and twenty-first centuries took shape. Its unstoppable social impact becomes clear when all this is reviewed as a connected story.
Notes

1. Claude Shannon created a mathematical theory of information, one of the crucial intellectual developments of the twentieth century. Today physics believes the universe is capable of description in terms of three entities: matter, energy and information, and those three are enough. The word ‘information’ is used in a technical sense, meaning what differentiates and orders things, giving them structure. When science adopts ordinary words they take a new specific meaning, well defined and with firm edges, so they can be incorporated into scientific argument. Thus ‘energy’ in ordinary speech is used in many ways; the Oxford English Dictionary lists mental energy, force of expression, the habit of strenuous exertion and so on. In technology and science it means kilowatt-hours, or some alternative equivalent measure. The same fate overtook ‘information’. In the scientific domain it is a purely statistical measure of the improbability of a configuration (a short formal statement is sketched after these notes).
2. May, J. (2010) Buildings Without Architects Rizzoli (New York, USA).
3. Zienkiewicz, O. (1972) Introductory Lectures on the Finite Element Method Springer (Zurich, Switzerland).
4. Both were probably caused by one bacterial strain: Yersinia pestis.
5. From the seventeenth century and beyond defences against cataclysmic epidemics improved. World population grew, passing a billion in 1804.
Today we are more than 7 billion, although the number is growing much more slowly than a century ago.
6. In Asia the same problem was tackled using windmills with vertical axes.
7. Gimpel, J. (1988) The Medieval Machine 2nd ed. Pimlico (London, UK).
8. He was a Boston surgeon, educated at Harvard—a ‘Tory’ who opposed American independence from the British crown, and moved to Paris when his political view did not prevail.
9. Long ago, with a development in superconductivity under discussion, I had to explain to the Plessey Board of Directors the cost of the different liquid gases that were being used. Liquid nitrogen, said I, when sold in bulk, cost much the same as Perrier water by the bottle. Bulk liquid hydrogen was like gin by the bottle, and liquid helium roughly the same as Corton Charlemagne from a good year. Heads nodded all around me. Since then prices have tumbled, so a less great wine would be my choice now.
10. Deutsche Luftschiffahrts Aktiengesellschaft.
11. In Victorian times a ‘hatchment’ was a lozenge-shaped wooden board, bearing the coat of arms of the deceased, set up over the door of a house where there had been a recent death.
12. Hubbard, G. (1965) Cooke and Wheatstone Routledge (London, UK).
13. A celebrated ‘calculating boy’ in childhood, exhibited at fairgrounds.
14. Telegraph designers were dismayed that at the end of a long transmission line the signal pulses arrived very distorted. It was Faraday who sorted out the science, showing that this was due to the unavoidable charge-retaining capacity of the long line. It was still possible to signal at a slow rate and this was what was done on the 1866 Atlantic cable, which passed traffic at four words per minute. In the case of the 1858 cable the ‘electrician’ in charge (whose scientific background consisted solely of a professional training as a surgeon) tried to overcome the problem by increasing the voltage applied to the cable, which broke down irretrievably at 2000 volts.
15. Standage, T. (1998) The Victorian Internet Weidenfeld & Nicolson (London, UK).
16. Not all ambassadors thought this an advance.
17. Edwards, C. (2019) ‘Is technology running away from the law?’ Engineering & Technology (Special Edition: The New Wild West), 14, 2.
18. Renewable sources often produce most energy at times when demand is low. This creates a need for large scale energy storage, which was a problem
for a time. Several storage technologies are now in development, of which one is very large batteries using liquid metal electrodes and operating at 350 °C. See Chandler, D. (2009) Liquid battery big enough for the electric grid? MIT News (Cambridge, Mass, USA). Another approach is electrolysing water into hydrogen and oxygen, the first to store, the second to sell. When power is needed the hydrogen can be used as fuel, either in a turbine generator set or a fuel cell, restoring the electrical power that was used. This has been trialled at the 5 MW level.
19. One IEE President, long ago, spoke of radio engineers as ‘men who work in bicycle shops’, charging batteries and mending the ‘wireless sets’ of the day. Electrical power was where the serious and exciting new things happened. This view endured for a while. When I took up my first university Lectureship in 1958 the EE Department had two laboratories. Far the largest was the Machines Hall, in size and shape much like a Nonconformist chapel, holding a dozen or so heavy-current rotating machines, all hand-painted bright blue. And electronics? Up a set of late-installed wooden stairs at the back of the Machines Hall was the Radio Laboratory (sic), 4 m by 6, where all the electronics experimental teaching was scheduled.
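As promised in note 1, a short formal statement of the statistical sense of ‘information’, given here in the standard notation rather than taken from the text: the information carried by a configuration of probability p(x), and its average over a source, are

I(x) = -\log_2 p(x) \ \text{bits}, \qquad H = -\sum_i p_i \log_2 p_i .

A highly improbable configuration carries much information; a certain one carries none, which is the sense in which information measures improbability.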
5 Subtle Subversives
Some new technologies arrive with a metaphorical blare of trumpets, projecting the greatest prior expectations. Firearms were a prime example of a new technology which seemed bound to transform the world, and they did. So also were printing, electrical power, the silicon chip and the mobile cell phone, among others. When each of them first appeared, their great inevitable consequences seemed obvious. The only risk in assessing the importance of these new developments was that it might be understated. There are, however, other developments, also emerging thanks to the doxastic method, that have a different quality. They are less obtrusive, sometimes barely noticed, and often thought sure to be of minor significance. Yet, appearing in all three of technology’s flavours—matter, energy, information—these unpretentious newcomers can have an impact almost as large as those that are more enthusiastically proclaimed. Looking at their history tempts us to believe that sometimes it really is the meek who inherit the earth.
Small Is Beautiful

In the domain of energy technology, the obvious example of this kind is the fractional-horsepower electric motor (fhp motor). In the twentieth century, it transformed all our lives, yet even the name means little to most people. Early electric motors were typically large, heavy and used in electric trains and trams, or for driving heavy machinery in factories. They usually ran on direct current and were rated at a mechanical output of many horsepower. They were industrial and rarely encountered in ordinary day-to-day life.

Then Galileo Ferraris of Turin (1885) and Nikola Tesla (1887) in the United States independently invented induction motors. They were able to run only on alternating current, but this was hardly a disadvantage since AC was increasingly widely supplied. Made simply without electrical connections to the rotor, many cheap induction motors soon appeared on the market, some of them quite small, producing only a fraction of a horsepower, hence the name. ‘Squirrel-cage’ motors, as they came to be called, were at first limited to a fixed maximum rotational speed, but ultimately acquired the ability to run at variable speeds, thanks to power electronics.

Moving into the home, using AC power supplies originally introduced for lighting, fhp motors changed everyday life radically. These, and later even smaller motors, made practicable a host of domestic appliances—refrigerators, washing machines, dishwashers, vacuum cleaners, food processors, central heating pumps, hair-dryers and many more. Important for themselves, their coming also stimulated the design of new DC motors by demonstrating a hungry market for small motors. Domestic life was transformed by the fractional horsepower motor. Live-in servants grew ever fewer as the years passed, no longer seen as indispensable by the social classes earlier dependent on them. The servants’ work could increasingly be taken over by new domestic machines with their fhp motors. Those who had not been able to afford servants at all adopted these machines too. ‘Don’t kill your wife with endless laundry,’ an early advertisement is said to have urged. ‘Let our machines do it
for you.’ The fhp motor, a simple device, had freed new generations from traditional domestic chores. Until the late nineteenth century, mechanical energy had frequently been transmitted within industrial workshops through rotating shafts and textile belting from a central steam engine. This was an inflexible approach, and dangerous because fast-moving belts, often unshielded, were the cause of many terrible accidents. With the coming of fhp electric motors, individual machines in a factory could each have their own electric drive, and life-threatening transmission belts were at once superseded. At the same time, the layout of workshops was no longer inflexibly dictated by runs of shafting—‘electricity can go round corners’ as was said at the time.
Going Cordless

Another simple thing with big consequences has been a substantial advance in rechargeable batteries over the last few decades. Early examples used lead plates in dilute sulphuric acid, but Waldemar Jungner (1869–1924) of Sweden invented the light-weight nickel-cadmium battery (NiCd) in 1899. These used nickel hydroxide and cadmium as electrodes; however, because of the high toxicity of cadmium, worse than arsenic, they were later replaced by nickel-metal hydride (NiMH) batteries. Launched in 1989, the NiMH battery soon became a common consumer and industrial type. Its main disadvantage is self-discharge when not in use. It is irritating to pick up a cordless tool, unused for a few weeks, and find the battery dead. So NiMH is being superseded by lithium-ion, which does not have this defect, both in consumer electronics and in much wider civil and military use.

The result has been a ‘cordless’ revolution in hand-held and portable devices, in partnership with the small electric motors. More often than not today’s power hand tools are cordless. They are found everywhere. In the home, small cordless vacuum cleaners became commonplace, able to tackle minor spills. Rechargeable battery-powered cordless landline phones, computers, razors and even cordless power cheese-graters are now unremarkable. The personal cell phone, music player, tablet
computer, e-book reader and laptop are also in this cordless battery-powered category. Battery technology is not static either: new designs are showing signs of dramatic further improvement in the ratio of stored energy to weight.
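To give a rough sense of what that ratio means in practice, a minimal sketch in Python comparing broad, typical textbook orders of magnitude for the chemistries just mentioned; every figure here is an assumption for illustration, not a measurement of any particular cell:

# Approximate specific energy, in watt-hours per kilogram.
# Rough, typical round figures, assumed for illustration only.
specific_energy_wh_per_kg = {
    "lead-acid": 35,
    "NiCd": 50,
    "NiMH": 80,
    "lithium-ion": 200,
}

for chemistry, wh_per_kg in specific_energy_wh_per_kg.items():
    # Mass of battery needed to hold 100 Wh, about a large laptop's worth.
    mass_kg = 100 / wh_per_kg
    print(f"{chemistry:12s} ~{wh_per_kg} Wh/kg, so ~{mass_kg:.1f} kg for 100 Wh")

On these rough figures a lithium-ion pack stores the same energy at a fraction of the weight of its predecessors, which is what made the cordless tool, phone and laptop practical.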
Power for Free

Photovoltaic (PV) cells, which generate electricity from sunlight, have already made an impact, but this is a revolution only just at its very beginning, as their market price falls precipitously. For a few years now arrays of them have been appearing on house roofs, supplying much of the household energy needs, but it is potentially a far bigger story than that. If 5% of the area of the Sahara were covered with PV cells they could provide the whole of the world’s energy needs, although in a disadvantageous location. However, the world is blessed with many deserts, and if their potential were widely exploited as an energy source we would need no other. Currently, a desert in Tibet is exporting energy to nearby users in China.

Solar-powered flight is also a reality. In 2015 an experimental aircraft, the Solar Impulse Si2, piloted by Bertrand Piccard (b. 1958) and André Borschberg (b. 1952), set out on an attempt at a round-the-world flight, powered only by the PV cells on the upper wing surface, with battery storage for night flying. On 3 July 2015, Borschberg flew the Solar Impulse from Nagoya (Japan) to Kalaeloa, Hawaii (US), taking 4 days, 21 hours and 52 minutes. This was a new official record duration for a solo flight. After a stay in Hawaii, Solar Impulse then made a flight of 62.29 hours from Hawaii to Mountain View, California, a distance of 4086 km at an average speed of 65.6 kph.

To date, the principal limiting factors on solar flight have been the early inefficiency and cost of solar cells, together with the poor power-to-weight ratio of useable batteries. Both of these have improved dramatically over the last few years, and still further substantial advances now seem highly likely. When these are fully realised quiet solar-powered flight will be a serious contender for the future of aviation, with a major reduction in air pollution.
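A back-of-envelope check of the 5% Sahara claim above, sketched in Python; every figure is a round-number assumption of the right order of magnitude, not a measurement:

# Rough orders of magnitude only; all figures are assumptions.
sahara_km2 = 9.2e6              # approximate area of the Sahara
fraction = 0.05                 # the 5% in the text
insolation_kwh_m2_yr = 2200     # assumed annual desert insolation
efficiency = 0.20               # assumed overall panel efficiency

area_m2 = sahara_km2 * fraction * 1e6
yield_twh = area_m2 * insolation_kwh_m2_yr * efficiency / 1e9

world_demand_twh = 170_000      # assumed world primary energy use per year
print(round(yield_twh), "TWh/yr against a demand of about", world_demand_twh)
# roughly 200,000 TWh/yr, of the same order as total world energy demand

On assumptions of this kind the claim holds up as an order-of-magnitude statement, leaving aside storage and transmission from such a disadvantageous location.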
Meanwhile, I have on my desk a calculator powered by light, and in my garden there is a charming little fountain in the middle of a pool, playing whenever the sun shines. Solar-powered garden lights are seen everywhere. They depend on solar cells, coupled with another of these ‘major-minor’ technological transformations, this time in lighting.
Lighten Our Darkness

We can be confident that artificial lighting has been in use for 40,000 or 50,000 years, at least. Without it, cave paintings where there is no natural light would have been impossible, yet they were made. Archaeological evidence is clear that simple lamps burning animal fat were widely used.1 Common light sources were burning wood or coals. Since mediaeval times there has been a steady evolution in lighting technology. For lighting, fires gave way to sheep-fat tapers, then ‘dip’ candles and lamps burning vegetable oils. They were replaced in turn by wax candles, then coal gas lights (1804), first using fish-tail burners and later incandescent mantles (1891). From 1806, electricity began to be used in lighting and slowly displaced gas. Initially, electric lighting depended on harsh, short-lived arc lamps, in the twentieth century replaced by the still familiar glass bulbs in which a tungsten filament is heated to white heat by a current passing through it. The efficiency of this kind of light bulb was under 10% and its life typically a few thousand hours, so the wise householder kept a small stock and expected to replace them at regular intervals.

Today the light sources of choice are light-emitting diodes (LEDs), based on quantum effects in semiconductors and manufactured to produce not only white light but also a wide range of colours. They are far more economical than filament bulbs because of their high efficiency in converting electrical energy to light and their long life. Compared with mediaeval burning-wood torches, the cost of producing an hour of lighting with LED lamps has fallen, in real terms, by about half a million times. Indoors at least, the distinction between daylight and night activities has been all but abolished.
When I converted a barn behind my house into a library, initially I had ten 50 watt tungsten-halogen filament lamps, taking 500 watts total, to provide an acceptable level of lighting. Later, I replaced them with LEDs, giving about the same level of illumination from as little as 60 watts in all. The life of the lamps is so long that having to change them is no longer a chore. The room is cooler too, which is better for both books and readers in summer. The LED lamp means that for many people the cost of lighting is no longer a consideration in their budget. The combination of improved batteries and LED lamps has also greatly extended the useful life of portable lamps and flashlights. Now light fittings are appearing with no provision for changing the LEDs, which last longer than the likely life of the fitting.
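The arithmetic behind that change is worth setting out. A minimal sketch in Python; the 500 and 60 watt figures come from the text, while the hours of use and the tariff are assumptions for illustration only:

old_watts, new_watts = 500, 60           # figures from the text above
hours_per_day = 4                        # assumed daily use of the library
price_per_kwh = 0.30                     # assumed tariff, local currency per kWh

def annual_cost(watts):
    # Convert watts to kilowatt-hours over a year, then price them.
    kwh = watts / 1000 * hours_per_day * 365
    return kwh * price_per_kwh

print(annual_cost(old_watts))   # about 219 per year with the filament lamps
print(annual_cost(new_watts))   # about 26 per year with the LEDs

On assumptions like these the lighting bill falls by nearly an order of magnitude, which is why for many people it ceases to be a budget item at all.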
Less Ambitious Information Technology

Early radio pioneers were obsessed by the challenge of achieving ever-increasing geographical range for their new means of communication. Europe to the United States was one goal, achieved by the beginning of the twentieth century. Later worldwide radio communication appeared, and in time became both reliable and of high quality, using satellite transmission. More recently, however, the unobtrusive use of short-range radio has come into its own, transmitting information over a few kilometres and sometimes just a metre or two. Even my electric toothbrush has a radio link to a tiny panel, less than a metre away, telling me how long I have been brushing and warning if I press too hard. Not so long ago radio transmitters and receivers were far too big and expensive for this role, but microelectronics has made them cheap and compact enough to use wherever it can give an advantage—new techniques which make the designs viable.

Every cellular mobile phone depends on short-range radio to connect it to a mobile switching centre (MSC), and thence to the telephone network. The transmission is rarely over more than a few kilometres, and in cities less. Similarly, Wi-Fi distributes broadband digital data to computers at ranges of tens or hundreds of metres from the public transmitters—‘Wi-Fi hot-spots’.
Bluetooth2 is another radio data system used for all kinds of short links—up to 100 m range maximum, though usually far less. Among many other things, Bluetooth links are used from an earpiece to a mobile phone, between a computer and its peripherals and for dozens of similar small-scale applications. Jim Kardach, its designer, introduced the name in 1997 for a system that let mobile phones communicate with computers, bringing them into united networks.
The Small Revolutions

These are a few examples of the ‘small’ technology revolutions, which soon become universal because they are cheap and do not disturb the patterns of life much when they are first introduced. Yet all these things change life’s everyday detail, while their spreading presence is taken for granted, arousing little comment. Alan Kay suggested ‘Quite a few people have to believe something is normal before it becomes normal—a sort of “voting” situation. But once the threshold is reached, then everyone demands to do whatever it is.’ The small technology revolutions are like this, and few are aware of how much they all add up to. Like other technology changes what they offer is always a one-way trip.
Notes

1. de Beaune, S. & White, R. ‘Ice Age Lamps’, Science, March 1993.
2. ‘Bluetooth’ comes from the name of the tenth century King Harald Bluetooth who brought the Danish tribes into a single kingdom. The Bluetooth logo combines letters from a pre-Latin alphabet for H and B, the king’s initials.
6 The Active Elements Appear
The electric telegraph morphed into the wireless telegraph, from which came the subsequent developments in radio and television. Broadcasting has been described as revolutionary in its impact, but in the event, it was far from alone; many collateral innovations enhanced the revolutionary impact of electrotechnology and the multi-billion-dollar industry that grew from it. The arrival of a clutch of new techniques was critical. The most revolutionary among these were active elements in electronics. These are devices that were at first seen as a way of making weak signals stronger, extending the range of analogue telephone lines or radio signals. It was at once realised that in suitable circuits, often involving feedback, they made possible a whole range of other designs, including for ultrafast switching and generating radio transmissions. The earliest active devices used physical effects inside envelopes of glass or metal from which air had been evacuated. These vacuum devices were called ‘tubes’ in the US and ‘valves’ elsewhere in the English-speaking world. Now largely obsolete, they were the basis of the analogue era of electronics, roughly from the end of World War I to the last quarter of the twentieth century.
The Edison Effect

In 1884 Edison witnessed an odd effect. An electric lamp, with a vacuum inside the bulb and the carbon lighting filament usual at that time, had a wire introduced into it through the glass envelope, but unconnected to anything within. When the filament was illuminated, Edison could pass a current into the apparently unconnected wire by using a battery to make the wire positive relative to the filament. No current flowed when it was negative. Today we explain it by saying that a cloud of electrons ‘boiling off’ from the hot filament was attracted to the positive wire because electrons carry a negative charge. This caused a current to flow. When the wire was made negative it repelled the electrons, so the current was cut off. However, in 1884, the existence of the electron was not yet known, so the effect seemed incomprehensible. Thirteen years later, the electron was identified by J. J. Thomson (1856–1940), at which point science could explain the technological mystery.

By 1904 the Edison effect had already been refined by John Ambrose Fleming (1849–1945), who replaced the inserted wire by a metal plate, the ‘anode’, substantially increasing the capture of electrons and hence the current that could flow. The improved device was known as a diode. Useful for its one-way conducting property, it played a small role in radio receivers as a detector of AM signals. Really, though, its significance was as a stepping stone to the much more important triode, announced by Lee de Forest in 1906.

This was the major breakthrough. Placing a wire grill, called the ‘control grid’, between the filament and plate of a diode, de Forest showed that a negative voltage applied to the grid could control the plate current. It did this by repelling the electrons back towards the filament. Since hardly any current flowed to the grid, normally kept negative, yet a large current could flow to the anode/plate, a small power in the grid circuit could control a much larger one flowing to the plate. This property of the valve (US: electron tube), amplification or power gain, was central to all subsequent developments in electronics, until the coming of the transistor. In the jargon of electronic engineering, the valve was the first ‘active device’.
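The idea of power gain can be given a simple numerical sense. A minimal sketch in Python, with the grid and anode powers chosen purely for illustration rather than taken from any particular valve:

import math

# Illustrative figures only: a triode stage whose grid circuit absorbs
# about a milliwatt while its anode circuit handles about a watt.
grid_power_w = 0.001
anode_power_w = 1.0

gain = anode_power_w / grid_power_w     # ratio of controlled to controlling power
gain_db = 10 * math.log10(gain)         # the same figure expressed in decibels

print(gain, gain_db)   # 1000.0, 30.0 dB

A small signal controlling a power a thousand times larger is what made amplification, oscillation and fast switching possible, and it is the defining property of every active device since.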
The de Forest triode, which he called an ‘audion’, had a variable amount of residual gas inside the glass envelope and so performed inconsistently. Using an increasingly ‘hard’ vacuum in the envelope, it was rapidly improved, by de Forest himself and Fleming, among others. By 1916 early but effective triode valves were in volume production. There were many further doxastic improvements. By coating the filament—also called the ‘cathode’—with a mixture of barium, strontium and calcium oxides it could be made to emit electrons at a much lower temperature, down to 500 °C from 1400 °C. This ‘dull emitter’ had a considerably extended life and needed less power to heat the filament, compared with the original ‘bright emitter’. Other improvements followed, notably additional grids between the control grid and the anode, which raised the power gain of the device further and made it more stable in radio-frequency applications, and the introduction of chemical ‘getters’ within the bulb which gave a harder vacuum.

In their day, valves were unprecedented devices, amazing in the new possibilities they opened up. Until about 1960 they dominated every aspect of broadcasting, telecommunications and electronics. Even so, from the start, they were beset by real problems. Despite successive attempts to ‘miniaturise’ them, they remained big. Constructed in glass or metal tubes tens of millimetres in diameter, they were fragile and they had short lives. They were also energy hungry, typically taking two watts or more for heating the cathode alone, and they worked best with hundreds of volts in the anode circuits. Above all they were expensive, because their internal components—cathode, often three grids or more and the anode—had to be hand assembled on a production line, then spot-welded into place.

Long ago, I worked briefly at a factory that made valves. They were sold cheaply in volume to radio and television manufacturers, but at much higher prices in radio shops, as much as half a working man’s weekly wage. Because valves had a limited life, people were obliged to buy them as the replacements needed to keep their radios and record players running. Once settled into my new job, I realised that the employees were obsessed with finding ways of smuggling the high retail-value valves out of the factory, despite rigorous management action to prevent it. The assembly lines were staffed by women, and the finished devices were
robust hard-glass tubes typically 22 mm in diameter and 40 or 50 mm long. Women have a natural place of concealment for items like this. Contemporary mores prohibited detection of the clandestinely carried devices by male security staff, whilst the workers’ unions fought the introduction of female security personnel, which might undercut the men’s wages. Problems with smuggling of valves continued. Because of the high cost of valves, designers used their ingenuity to minimise the number needed for a given job: few radios had more than five. The needs of World War II led to designs with more valves, particularly for radar, but when Tommy Flowers was obliged to use 1600 in his Colossus computer their shortcomings became painfully obvious. Widely used types consumed over 2 or 3 watts of power each, all were fragile, short lived and ran hot. Early computers were designed around a few thousand valves, and the cost, heat removal and above all reliability were problematic. It proved necessary to give the machines two hours ‘preventive maintenance’ at the start of each day, after which there was reputedly a 90% probability they would work for the next twenty-two. Something better was needed.
Silicon’s Tanks on the Vacuum Lawn

After a few precursors as early as the 1920s, in the late 1940s came transistors, built on germanium crystals at first and later on silicon. Their reception was far from rapturous. For the first half of the twentieth century, few would have dared claim that vacuum devices were a precursor to the ‘real’ electronics which was to come. Vacuum electronics went so far and achieved such notable successes that it created a great worldwide industry. Radio and television were introduced in the valve era. World War II was fought using vacuum electronics as the key element in military equipment on all sides. Not many could bring themselves to believe that it was all to prove just a dead-end.

When the first hints and indications of vacuum electronics’ terminal condition appeared, many experts in the field simply went into denial. In 1952 Lee de Forest wrote: ‘As a growing competitor to the tube amplifier comes now the Bell Laboratories’ transistor, a three-electrode germanium
crystal of amazing amplification power, of wheat-grain size and low cost. Yet its frequency limitations, a few hundred kilocycles [kHz], and its strict power limitations will never permit its general replacement of the audion amplifier'. By the late 1950s articles were appearing in technical magazines explaining why transistors could never entirely replace valves. In succession, each article saw a smaller and smaller niche left for valves, and by the end of the century, it was all over. Vacuum electronics proved mortal and is now reduced to an important history and a present curiosity. But these vacuum devices were the great precursor to what was to come, and they made it possible to have a powerful electronics industry up and running thirty years before solid-state electronics arrived. The new computer technology needed a cheap electronic device, which must be smaller, faster, consume far less power and have a much longer life than a valve. Had anybody remembered—and few did—it could have been encouraging that there were precursors in solid-state electronic devices, going back to a series of barely viable 'transistors' of varied design patented by Julius Lilienfeld (1882–1963) from 1925 to 1927. They were ignored at that time, for then valves were still young and promising. Yet the dream of a solid-state active element was pursued in isolated research groups, notably at the Philips laboratory in Holland. Proposed devices appeared in the patent literature in the 1930s and 1940s, but none were successful, precursors all. The solution finally appeared in 1948, born in an internecine 'partnership' of John Bardeen, Walter Brattain and William Shockley at the Bell Laboratories. The transistor could at last fill the pressing need that electronic computers had created.1 After their Nobel prize in 1956, the personal histories of the three inventors of the transistor diverged sharply. Walter Brattain, a charming extrovert and great experimentalist, stayed with the Bell Labs and retired in 1967, a much respected man. John Bardeen left Bell for the University of Illinois and an academic career of the utmost imaginable distinction. He had the rare honour of being awarded a second Nobel prize in 1972 for the BCS theory of superconductivity. I collaborated with him in writing a book, honoured to be among his acquaintances. He died admired by all who knew him. In the greatest possible contrast, a dozen years after his triumph William Shockley had become a social pariah, leading a wretched
existence. The fatal shooting of one of Shockley’s few close friends by an Afro-American youth deeply shocked him. That was understandable, but the greater tragedy was in his response: he enthusiastically embraced extreme ideas about eugenics and the supposed intrinsic moral and intellectual disparity between people of different colour. These views predictably alienated him from informed public opinion. Politically unsophisticated and hopelessly naive about social issues, yet at the same time given to intellectual arrogance, soon Shockley could not speak in public, even on engineering, without provoking demonstrations. Feeling against him remains strong, and some accounts of the transistor’s invention play down his role. In reality, it was substantial, particularly for junction transistors, for almost a decade the only commercially viable form. A practical joker and enthusiastic amateur magician, Shockley carried his gold Nobel medal in his pocket (a replica I think). If hard-pressed in the debate, even on matters of politics or race, he pulled the medal out, threw it on the table and said ‘When you have one of these you can argue with me’. It was not an endearing trait. He seemed quite paranoid when I last met him, shortly before his death. Fearing misrepresentation, he recorded every conversation, even the most trivial, using a tape recorder slung on his shoulder—a sad and lonely man. But what of his wonderful new electronic devices? They were the beginning of a development that reduced valves to the status of precursors in only a decade. The first transistors were formed, one at a time, in near-perfect semiconductor single crystals of germanium, and later of silicon or, more rarely, other semiconductors. They first appeared in a transitional form. Just as cars began with ‘horseless carriages’, this phase of transistor manufacture tried to mimic the look and feel of valves, usually packaged as single devices. So transistors first appeared in small cylindrical envelopes, usually black, with three pins or wires, coming out from one end, to which the connections were made. The US had a tradition of metal ‘tubes’ (valves) and their transistors were metal cased, whereas Europe, with its tradition of glass valves, had glass encapsulated transistors.2 But within a few years, transitional packaging forms were used less and less, except for high power devices. Transistors were made differently: by photolithography on silicon ‘chips’, rectangular pieces cut from a thin
slice of mono-crystalline silicon. Interconnections between the individual transistors on the chip surface were formed by metallisation and etching. An electronic circuit on the surface of a chip resulted, all made without any direct human intervention—a microcircuit. It immediately became possible with this technique to form and interconnect many transistors and associated electrical components in a single chip, beginning with dozens but rapidly growing to hundreds. The complexity of the circuit possible on a fixed size of chip began to double roughly every twenty months—Moore’s Law. The feasible density, complexity and speed of microcircuits continue to advance while cost per function falls. This exponential growth cannot go on forever but has not abated much yet.
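The arithmetic of that doubling is easy to make concrete. Below is a minimal Python sketch of mine, not the author's; the starting count and the intervals are illustrative assumptions, with only the 'doubling roughly every twenty months' figure taken from the text.

```python
def transistor_count(initial: int, months_elapsed: float, doubling_months: float = 20.0) -> float:
    """Project an on-chip component count after a period of steady doubling."""
    return initial * 2 ** (months_elapsed / doubling_months)

start = 64  # hypothetical early chip carrying a few dozen components
for years in (5, 10, 20):
    print(years, "years:", round(transistor_count(start, years * 12)))
```

On these assumed numbers a chip of a few dozen components grows to hundreds within five years and to hundreds of thousands within twenty, which is the character of the exponential growth the text describes.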
Notes 1. Arms, R. G. (1998) ‘The Other Transistor’ Engineering, Science & Education Journal IEE (London, UK). 2. One over-enthusiastic manufacturer scaled down the thickness of the glass along with the size of the package, so their transistors could be destroyed by squeezing them between thumb and finger.
7 Drivers of Technology
If changing technology is indeed a driver of cultural evolution, that pushes our enquiries one step further back. What is the ultimate cause of these technology changes and revolutions? The doxastic method enables the changes, but what initiates them? There is not one cause but several. First came the pressures of our environment. In Palaeolithic times human life was, in Thomas Hobbes' famous words, 'nasty, brutish and short.' Dying young, whether from malnutrition, disease, disasters of childbirth, conflict, predators or cold, was a common fate. Early humans turned to the doxastic method to secure themselves against the hardships of their lives. It was seen as a survival strategy, and it worked. Means of containing these threats were found: better food preparation, making clothes, self-defence and safe shelter. All were made possible by the beginnings of technology. As a result, today even the least fortunate in Europe can hope to live three times longer than their Palaeolithic counterparts. Nor did environmental factors cease to drive new technology in later centuries. At regular intervals of a few hundred years, great plagues swept through the inhabited world. The most severe of these was the Black Death, and the most recent the influenza pandemic of 1918–19, which is thought to have killed 50–100 million people (compared with around 30 million for World War I). The earlier pandemics, against which people
were less able to defend themselves, killed a quarter or more of the world's human population each time. Once the nightmare had passed there was a serious shortage of labour in the agricultural societies of the day. The response was to redouble efforts to exploit wind and water power, draft animals and, in earlier times, slavery. Slaves were acquired either by slow, expensive breeding, or the speedier and much cheaper way of capture in war. War itself, whether undertaken for land, slaves or loot, was a great stimulus to new technology and has remained so to this day, even though large-scale warfare has now grown so destructive that its future is problematic. In Chap. 19 the undermining of warfare as a social institution is investigated more deeply. It has been the success of military technology that has turned war into something no longer so socially useful. It will take us time to adapt. Enthusiasm for new military technology remains strong, though increasingly oriented towards 'low-intensity operations'1—antiterrorism and counter-insurgency. Yet for all that, it is clear that the most important trigger for new technology is the technology itself. Just so, when communication by land-line telegraphs became a worldwide reality, global trade and business activities expanded rapidly. This put immediate pressure on transportation, hurrying along the deployment of railways, steamships and aviation. These in turn increased the demand for communication to users who were not at fixed sites, beginning with maritime radio and thence to the phone in your pocket. And all these technical advances produced collateral changes which in turn had further impact. In the 1930s, as World War II approached, it was clear that advances in aviation had transformed bombing from the air into a major threat. In 1932 Stanley Baldwin (1867–1947), speaking with the authority of a former Prime Minister of the UK, made a speech in which he warned: 'I think it is well also for the man in the street to realise that there is no power on earth that can protect him from being bombed. Whatever people may tell him, the bomber will always get through.' A respected military expert, Basil Liddell Hart, agreed, predicting that a quarter of a million deaths and injuries could occur in Britain in the first week of the coming war. In the event, nothing like this actually happened due to the success of defensive technologies, and particularly radar.
It was urgent to find some way to spot the approach of attacking aircraft when still distant. The defenders needed time for fighter aircraft to take off and get into position. Understandably, there was initial scepticism that long-range detection would prove possible at all. Detection of the attackers' sound was tried, with poor results. However, the odd effects on short-wave radio reception when aircraft flew near a receiving antenna suggested that reflections of radio energy by aircraft could be sufficient to detect them. When the trial was made, ordinary radio transmitters and receivers proved able to do the job. CH (Chain Home), a chain of radar stations, was constructed on the UK's eastern coastline in the late 1930s. When the strike by German bombers came, British fighter planes were vectored onto them by radar, and the rate of aircraft loss the Luftwaffe had been expecting was more than doubled. With some technical improvements, it rose further, to the point where Germany could no longer sustain the loss of bombers and their pilots. So bombing failed to clear the way for an invasion of England as expected. Advances in aviation and radio technology had interacted to bring radar into being. There are so many other examples of technology stimulating more technology that the process seems almost the norm. When in 1973 Martin Cooper made the first public cellular phone call from New York's Sixth Avenue, his hand-held phone had a battery life of thirty-five minutes—too short for general use. The vast subsequent sales of the new personal phones hastened the development of lithium-ion batteries, overcoming that problem. Lithium-ion now powers cordless electrical equipment of all kinds as well as our phones, from electric cars and aircraft, through power drills to toothbrushes. The move to cordless has itself proved revolutionary. Looking at technology as a whole, this self-stimulation is known in classical control theory (reviewed below in Chap. 13) as a positive feedback loop. Its unique characteristic is that once begun it will continue to propel inexorable change in the same direction until constrained by some wholly external circumstance. In the case of most technologies, this external limit is only dimly visible as yet.2
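A minimal sketch, not drawn from the book, of the behaviour just described: a positive feedback loop grows the quantity it feeds on until some wholly external constraint caps it. The gain, starting level and ceiling here are invented illustrative numbers.

```python
def positive_feedback(initial: float, gain: float, external_limit: float, steps: int) -> list[float]:
    """Each step feeds a fraction of the current level back in; growth only stops
    when an external ceiling is reached."""
    levels = [initial]
    for _ in range(steps):
        nxt = levels[-1] * (1 + gain)            # output fed back as further input
        levels.append(min(nxt, external_limit))  # wholly external constraint
    return levels

print(positive_feedback(1.0, gain=0.5, external_limit=100.0, steps=15))
```

Run as written, the level grows exponentially for a dozen steps and then sits at the ceiling, which is the shape of the argument in the paragraph above.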
Politics Is a Funny Old Game Because design is a step towards a new product, process or service which may well have transformative social impacts, the hunt for a new design will be heavily influenced by the prevailing attitudes and politics of the society within which the designer works. Authoritarian and highly conformist societies may have launched themselves in a revolutionary ethos, but once their own power is established they do not welcome any but the slowest social change. Capable of producing the most radical change of all, disruptive technologies are anathema. The practice of design is hazardous for those working in such societies. In 1966, Italy's Fiat company agreed to establish car manufacture in the Samara Oblast. The factory made a version of the Fiat 124, just launched, but took so long to get up and running under Soviet conditions that it was not producing cars until four years later. In Italy, Fiat replaced the 124 with the 131 in 1974, but in Russia, the older Fiat design went on being produced, in modified form, to the dawn of the twenty-first century. They even made a couple of hundred of a 'hotted-up' KGB version with a Western-designed Wankel engine. Towards the start of the 1980s, in a Paris restaurant just off the Champs Elysées, I had a long and bibulous dinner conversation with a distinguished Russian engineer. I asked why the Soviet Union had persevered with the Fiat design so late in its life. 'In the Soviet Union we dare not innovate,' he explained. 'If you try a new design and anything goes wrong, even just teething troubles, you risk finding yourself under arrest for economic sabotage. It's much safer to get something tried and tested from the West.' Clearly, in this environment, the doxastic method would be hazardous. Despite this disheartening background in the Soviet Union, from 1946 Genrich Altshuller had the courage to develop, with others, new design strategies.3 He saw the need for them in his work as a patent examiner and based his ideas on a study of 40,000 patent applications. A strategy named TRIZ (a Russian-language acronym) resulted, attracting attention worldwide. In the hope of making TRIZ more acceptable in Stalin's world, it avoided generating disruptive designs. Despite his precautions,
in 1950 Altshuller was arrested, sent to the gulag, and released after Stalin's death three years later. In contrast to authoritarianism, there seems to be evidence that periods of political instability, even turmoil, are more conducive to technological creativity than a well-governed polity.4 The Chinese Empire achieved stability through the centuries, despite changes of dynasty, thanks to a durable bureaucracy, but had no industrial revolution, despite 'top-down' efforts to get one started. By contrast, from the sixteenth century, England was an industrial innovator. The changes became more obvious in the late seventeenth century, shortly after a civil war and the revamping of an old social order that resulted. In Japan, two hundred years later, an industrial revolution took off only after the destruction of the repressive Tokugawa shogunate. In both countries, internal conflict caused the downfall of a long-established social order and its replacement by a more flexible one. Were these the preconditions for a new industrial age? Open societies have no alternative but to ride whatever consequences ensue from technological change, and so are far more tolerant of design innovation. Residual hostility may remain though. In particular, commercial interests react badly if they feel themselves threatened by a new technology. This may lead them to publicise early failures, hoping to turn opinion against it. Electrotechnology began with the telegraph, and it was fortunate that, unlike the railways, it did not seem a threat to established interests. Where it was going to lead—to electrical power, electronics and the silicon chip—was quite beyond their imagining.
Notes 1. Kitson, F. (2010) Low Intensity Operations Kindle Books (online). 2. Price, D. (1963) Little Science, Big Science Columbia U. Press (New York, USA). 3. Altshuller, G. (1996) And Suddenly the Inventor Appeared Technical Innovation Centre (Worcester, USA). 4. Gosling, W. (2012) Helmsmen and Heroes (2nd edition), Kindle Books (online).
8 Technology’s Other Half
Technology has two parts: the techniques needed for making things and the design of what will be made. Techniques are the abilities to manipulate things, but designs are the formalisation of our wants and needs. Although in practising the doxastic method it is essential to understand the techniques to be used, on its own that is not enough to ensure a successful product, process or service (PPS). The way techniques find application has nothing automatic about it. Only through a successful design can we exploit them to our advantage, and this needs an act of creativity. From our earliest artefacts, this was true. As we have seen, long before writing, knowledge of these things passed from one generation to the next by memory. All we needed was developed language, the tool of culture. Once there was language, technology could start its evolution. For designing things, being able to recall the characteristics and limitations of the techniques on which production will depend is vital. Thus to plan an iron gate you need to know what a blacksmith can do. Similarly, to design a bridge you must understand structural engineering and you cannot devise a modern banking system without having IT at your fingertips. The need for understanding the techniques is absolute and inescapable; without that nothing can be achieved but daydreams.
Yet to understand techniques fully, does one have to be a practitioner? Some argue it is essential. They say that only those who have shaped red-hot iron, hammer in hand, truly understand what a blacksmith can achieve. This view is entrenched in the traditional crafts, always learned primarily through apprenticeship to a master. But a lot depends on how information about the practice of technology is kept alive. Human memory of techniques comes in two different forms: such memories may be explicit or implicit. Explicit memory is held consciously, usually as information in verbal or mathematical form, but sometimes presented in other ways, such as pictures (for surgeons, engineers and ballet dancers) or sound sequences (for musicians). It is actively stored in the brain and may be retrieved later by a conscious process of recall. Because it is firmly rooted in the conscious domain, storage media such as books, discs, storage chips or memory sticks supplement it easily. Implicit (or procedural) memory does not use conscious recall of information, and shows up primarily, but not solely, in learned skills. It reveals itself wherever 'practice makes perfect'; it always characterises the apprentice's training, yet goes far wider. A ballerina and her partner in the pas de deux depend on well-ordered implicit memories, and so too does a surgeon conducting an operation, or a footballer scoring a goal. When literacy was rare, traditional crafts were obliged to rely on implicit memory to guide the workers' hands. Now, with literacy universal, we use explicit memory much more. The 'how to do it' information we need may be stored in computers or media, such as instruction books, but is also what we ourselves remember explicitly, and recall at will. Yet implicit memory is still important, and not only because nobody could ride a bicycle without it. When balance is momentarily lost there is no time to explicitly remember instructions we were once given. The correction is 'automatic', we like to say, which means drawn from implicit memory. Similarly, the comforting familiarity helping us to successfully use computers and cars depends on implicit memory, which rescues us from struggling to recall the details of the operating handbook. A new software package or the change to a different car makes us suddenly aware of how true this is. Anybody who has expertise in comprehending, making or doing in one area or another of human activity calls on two stores of knowledge.
8 Technology’s Other Half
69
Explicit knowledge is all around us in many forms and growing in volume all the time: humanity’s act of defiance towards the second law of thermodynamics. But in the immediacy of everyday interplay with the world, we also need implicit knowledge which can respond without hesitation, and that will be acquired in the processes of doing.
How to Sponsor a Design Suppose you need a wrought-iron gate. A blacksmith has command of the techniques needed to do the work, skilled at cutting, shaping and forging iron when it has been heated to its plastic state. These are the essential techniques, and without them, he could not do the job. However, other conditions must also be met. The finished gate must be the right size and shape, with suitable hinges and latches. It will be plain or ornamented, in whichever style you prefer. These things are part of the design of your gate. They have to be right or you will not be satisfied with the outcome. The design is constrained by the techniques used (some things you might ask for just cannot be done), but techniques do not uniquely determine it. Using the same techniques the blacksmith could make many quite different gates. Choosing the design amounts to a selection of a particular gate from among the vast number of possible gates. Ultimate success would be guaranteed by a design that settles every aspect of the proposed innovation in advance, leaving nothing to chance. When a design approaches close to this ideal it is called a deep design.1 Who shall be the designer then? A blacksmith with the right skills and talents certainly might undertake the work. Alternatively, some other person could: a designer who instructs the blacksmith, through verbal description, drawing or model. Successful designers of ironwork must know what techniques a blacksmith uses to make it possible, and what their limitations are, even though the designers are not themselves skilled blacksmiths. Considering another example, to produce a microcircuit in silicon—the 'silicon chip'—requires many subtle techniques: crystal pulling, cutting, masking, diffusion and encapsulation among them. Together these add up to the amazing virtuosity of a silicon foundry. The wonder of our
time, it can produce structures on the chip as small as a ten-thousandth of the thickness of a human hair. Even more astoundingly, millions of these structures can be assembled and interconnected on a single chip, enabling us to build digital machines of biological complexity and beyond. Microcircuits cannot be made without the techniques of the silicon foundry, yet here too techniques alone are not enough. The finished chip will have to perform a specific function, be the heart of a calculator, say, a radio receiver or a computer. It will also have to do so within time limitations set by its pattern of use. Many other goals must also be met, such as cost, power dissipation and operating temperature range. Only the design makes this happen in the finished chip, distinguishing it from all others in the universe of possible chips. The design must dictate how the available techniques are used to achieve the specified ends. Almost always the 'somebody' who designs is several people. There have been many studies on design teams.2 Specific responsibilities within the team may be assigned to particular individuals. For example, those undertaking the functional design and those concerned with how the thing is to look may be different people, although they remain in close dialogue. With something as complex as a silicon chip, maybe carrying a million transistors, designing is not easy. Some might imagine it beyond the capacity of human minds, but it proves not so, provided computers are at hand to support us. Sophisticated software and a library of chip design components are the tools enabling designers to handle their difficult task. We revisit this later.
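As a rough sanity check on the scale just claimed, the following arithmetic sketch is mine, not the author's; the figure for hair thickness is an assumed typical value.

```python
# What 'a ten-thousandth of the thickness of a human hair' means in practice.
hair_thickness_m = 75e-6                 # assumed ~75 micrometres for a human hair
feature_size_m = hair_thickness_m / 10_000
print(f"feature size ≈ {feature_size_m * 1e9:.1f} nm")   # about 7.5 nm on this assumption
```

On that assumption the claim corresponds to features of a few nanometres, which is consistent with the finest structures modern foundries produce.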
Failed Designs Yet although we accept the necessity of design and some may enthuse about it, we do not always do it well. Since the earliest times many designs have gone sour on us. The more romantic failures include Babbage's nineteenth-century computer—the analytical engine—which was never built, Brunel's ship Great Eastern, which ruined successive owners and was unsuccessful in its intended service, and the Brabazon airliner, luxurious in the extreme but too slow to find a single buyer.
8 Technology’s Other Half
71
Our survival depends on how well we meet the challenge of design. So far we can claim only a patchy success, and educating those whose life's work will be design has not advanced much. Sixty years ago some university schools began to claim that they taught not 'engineering' but 'engineering science'. The focus of interest moved to an overwhelming concern with techniques, and to the sciences believed to inform and guide them. The 'hard sciences'—physics and mathematics—thought more 'fundamental' than other disciplines, were given prominence.3 Since the design process itself, though not capricious, cannot be entirely rationalised, its study in a 'scientific' style seemed impossible. There were compelling reasons why enthusiasm for 'engineering science' came into fashion—in the 1950s the popular status of science was at its highest. Trying to dust engineering education with some of the glitter of science seemed like a way to make it more attractive to potential students. So the mid-twentieth century vogue for teaching students 'engineering science', but not design and its essential doxastic method, might have impoverished us if unchecked. Some economists argue declining innovation already has.4,5 Nor was this a challenge in engineering alone. It was patchily seen among other technologies,6 using the word, as throughout this book, in the wide definition adopted by archaeologists. It was fashionable to try to reduce technology to applications of 'hard science'. Of course, science is a resource technologists use, indeed the most deeply valued of all, but there are other tools. Science is absolutely necessary, yet not alone sufficient for contemporary technology to succeed.
Learning Design the Hard Way In the world of making and doing, design is inescapable, yet in many engineering schools of the mid-twentieth century, this aspect of human innovation had only a marginal presence. Although better and more responsible design is crucial for the health and happiness of humankind, education for design lagged behind that for techniques. Sometimes it had to be ‘picked up’ informally. Yet unless virtuosity in both techniques and design is robust our technology will wither and fall short of its potential.
It is in what we wish to bring about, in the design, that the ethical dimension of technology is hidden. Whilst techniques can fairly claim moral neutrality, being in themselves generally neither intrinsically good nor bad, the design arises from the desire to bring about human consequences and therefore cannot avoid an ultimate moral commitment. This could be one reason why some shy away from thinking about it, finding the moral issues too difficult to handle. Yet if we do not achieve success in establishing ethical technology our survival is at risk. Design will make or destroy us. In what follows we shall look hard at how we do it. Is there some coherent body of ideas to guide us? Absence of a ‘scientific’ theory of design, of the kind Herbert Simon urged us to work towards, is one cause of neglect, just as he feared.7 To those schooled in a ‘hard science’ tradition, lack of a framework of design theory makes the subject seem intellectually incoherent. Yet in other important areas of study teaching, learning and scholarship are well and long established, even though a fully systematised underlying theory seems inconceivable. History is an obvious example.8 In the event, the ‘hard science’ educational fashion of the mid-twentieth century proved inadequate, losing touch with design which it could not encompass. The needs of people are what drive technology, so the attempted purge of the human presence from the curriculum failed, as it was bound to do. ‘Hard’ science without obvious application is not what motivates young technologists. A swing back began in the 1980s, and academic appointments focussed on design were increasingly seen in university engineering schools. Design is the trickier half of technology; yet some people speak and write as though it is only about how things appear and how they feel in use. Steve Jobs said: ‘Design is a funny word. Some people think design means how it looks. But of course, if you dig deeper, it’s really how it works.’ So it will be helpful to distinguish between presentational and functional design. The first is getting the immediate perception of the thing right; the second is making it do what it should in the right way and at the right cost, throughout its useful life. Both are part of the design process. Yet it would be perverse to maintain that appearance is the whole of design. Whatever looks good yet works badly, cannot be depended upon, has a
8 Technology’s Other Half
73
great potential for environmental damage, or is socially unacceptable, we ought surely to say is badly designed. Yet presentational design—which used to be called styling—is not a pursuit of visual deception, as some seem to think. Presentation is important. The way it looks sends messages to a user, signalling what it is, how to use it and why having it is a must. What looks and feels good works better, because users are more enthusiastic. So designers and design teams, who want to fully complete their task, take responsibility for both function and form. When they succeed it deserves to be called deep design, in contrast to purely presentational or functional design. People who have never done it themselves can have odd ideas about design. Some think it trivial, simply a matter of calculation: do the right maths and perfect designs drop out. How easy that would be—computers could do it all for us! The reality is different. Often a design is more like poetry than prose; when complete it looks inevitable, yet before that it seems hardly possible. Can it ever be wholly rational? Except when the designer is Heath Robinson or Rube Goldberg, the design is neither whimsical nor capricious. Yet it can hardly ever be reduced to an algorithm, a simple process of calculation. We can easily see why. No design procedure can be wholly objective unless all possible designs for a given specification are considered to evaluate their relative merit. Evaluation of all possibilities is inescapable; otherwise, potential solutions will remain unchecked. That would introduce subjectivity through an element of arbitrary choice. So a credible procedure for an objective design process would be to think of all the designs conceivable for a particular specification, then evaluate each one, using some undisputed measure of merit. Obviously, the design to choose is the one coming out with the highest merit. Yet in practice, can we be quite sure we have thought of absolutely all possible options? Worse, an even deeper limitation stems from complexity: the number of alternative designs grows rapidly with the number of distinct components or factors in the design. Think of a simple example: suppose our design is simply a matter of connecting cables between boxes. The connection rules could be that boxes in general may both originate and receive one cable at most. However, one (the 'start box') originates a cable, while another (the 'end
box’) only receives one. With these rules, the boxes will end up connected as a ‘daisy chain’. So let us begin with three different boxes. To make a new design, first choose the ‘start box’—there are three distinct ways of making that choice. The cable from the ‘start box’ can go to either of two remaining boxes—two choices—which themselves are then cabled together without further choice. The number of choices is therefore 3 × 2 × 1 = 6, the total number of possible designs. By an identical argument, for a more complex system with four components, the number of possible designs comes out to 4 × 3 × 2 × 1 = 24. For six it would be 720. A system with ten boxes could hardly be called complex, yet it generates over three and a half million possible designs. And this design is constrained to only one cable between each pair of boxes. Breakaway from the ‘daisy chain’ to a more realistic situation where multiple cables are permitted between each box and the possibilities multiply. For any but the simplest systems, a complete search of all possible connection patterns is a wearisome strategy. The time to complete the decision process increases fast with the number of available choices. The design choices are discrete too in this case, but things get worse when this is not so—the commonest situation. Design a garden spade and you are obliged to choose the length of its shaft, variable not in a few big steps but millimetre by millimetre.
Competing Designs As well as the rapid growth in the number of possible designs needing consideration with modest increases in complexity, the notion of an exhaustive design process is bedevilled by a further problem. In the attempt to meet the specification by exhausting all possibilities, many potential designs may have to be considered, each competing against the rest for adoption. How can the choice between them be made? We need a way of distinguishing the better designs from the worse. We therefore require a measure of merit that will reliably indicate how good any proposal is relative to any other. Ideally, it should be the value of the innovation to a potential user, but it is not easy to come at in advance of
8 Technology’s Other Half
75
actual use, so at the outset, some carefully chosen performance parameter might have to stand in for it. However, this choice cannot be based on an algorithm so is another point at which rationalisation fails. In a market economy, the price the product, process or service can command seems tempting as a measure of design merit, but this is inevitably unknown in the early stages, before launch. Also market price moves in consequence of factors other than design, such as skill in selling, so attainable price is an imperfect indicator of merit. Just to complicate matters further, products, processes or services commonly have more than one measure of merit. So, for a tablet computer reduction of weight is good, and so is faster computing. Does an increase in speed that also increases weight improve or reduce the value to the user? The answer is rarely clear. If the product, process or service has multiple measures of merit, how can the designer continue?9 Often none of the ways of getting at measures of merit is compelling. Many designers think it not worth a try, so it is common to arbitrarily choose just one feature of a design as crucial, for example, manufacturing cost, and optimise the design in respect of that alone. However, this choice is a hunch, compelling though it may seem. It may not result in an optimal design for a particular market sector—Rolls-Royce cars did not achieve their sales by minimising ex-works cost, seductive though that was to other car manufacturers, seeking sales in a different market sector. Neither growth in complexity nor difficulties in evaluating competing designs is just theoretical. There are 2000 distinguishable designs for a vacuum cleaner, so it is said, but no designer can afford the time to evaluate them all. And if you try, how do you know when you have got to the end? All this even if you could be sure that evaluation has used meaningful measures of merit, which is by no means certain. However, this need not frustrate the search for a design which is fit for purpose, merely good enough if not the highest conceivable.
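The difficulty with multiple measures of merit can be made concrete with a small sketch of mine. The candidate figures and the weightings below are invented purely for illustration; the point is only that the ranking of competing designs changes with the (essentially arbitrary) choice of weights.

```python
# Hypothetical tablet designs: (name, weight in grams, relative computing speed).
candidates = [("A", 300, 1.0), ("B", 450, 1.5), ("C", 700, 2.2)]

def merit(weight_g: float, speed: float, w_weight: float, w_speed: float) -> float:
    """A composite measure of merit: faster is better, lighter is better.
    How the two are weighted is the designer's hunch, not an algorithm."""
    return w_speed * speed - w_weight * (weight_g / 1000.0)

for w_weight, w_speed in ((1.0, 1.0), (4.0, 1.0)):
    ranked = sorted(candidates, key=lambda c: merit(c[1], c[2], w_weight, w_speed), reverse=True)
    print(f"weighting (weight={w_weight}, speed={w_speed}):", [name for name, _, _ in ranked])
```

With the first weighting the fastest design wins; with the second the lightest does. Nothing in the calculation says which weighting is right, which is exactly the point made above.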
Wholly Rational Is Unattainable So design by complete evaluation is impracticable, and it follows that the design process cannot be fully rationalised in any but trivial cases. So how do we live with intuitive aspects of design? Along with strictly objective processes—the calculations and experimental evaluations—in practice arbitrary choice always enters design; it is unavoidable. Designers may not consider some options suitable, yet include others seeming less obvious. We are in the doxastic domain. Like signatures, designs are unique to those who create them and the opinions they hold, which shape empirical progress. Computers can only support human designers in parts of their work where algorithms are possible. If the design of a product, process or service (PPS) cannot be a wholly rationalised procedure, what will be its sources of doxastic solutions? How are well-formed opinions derived, the essence of this approach? To what questions must the designer respond to make progress? • Established practice can be a useful starting point, although only as a first step. The first motorcars copied horse-drawn carriages—all passenger road vehicles were like that. However, it proved a transitional phase. Although cars imitated horse carriages at first, designs changed rapidly. In a couple of decades ‘horseless carriages’ gave way to the beginnings of the modern car. So a question: Are there established products, processes or services which hint of the design we are seeking? • The designers’ own experience is formative because we do best those things not too different from what we have done before. When Charles Wheatstone built his first electromagnetic relay—used to extend the telegraph’s range—the moving part of his device was a magnetic needle, like the ones in his telegraph instruments, his previous, successful experience.
8 Technology’s Other Half
77
In 1919, W. O. Bentley (1888–1971) designed his 3 Litre sports car, a very original design, but all his designs, to the last, were unmistakably from the same stable. Distinctive features, such as aluminium pistons and short-stroke engines, were carried over from the earlier cars to the later ones. Edmund Happold (1930–1996) achieved fame by designing ultralight structures deriving their strength from cables in tension. They were in effect tents, though in-filled not with fabric but with something durable, like sheet metal. Many of his designs were based on these principles. But why not? None of us is uninfluenced by our past. Happold's colleagues designed London's Millennium Dome shortly after his death, but everywhere it shows his influence. The structural strength of the 'dome' is provided by cables and interconnecting membranes, all in tension. An ordinary dome of stone or brick has its structure entirely in compression, so strictly this is not a dome but a tent, and the masts are tent-poles. So a question: Do we have any experience of our own prior designs we consider relevant? • Prestigious successful designs invariably influence other designers' thinking, especially if they are recent. The Boeing 707 airliner took the radical step of mounting engines on pylons below the wings. This made it easier to repair or change them and met with such conspicuous success that designers around the world followed the trend. It soon became the preferred way of doing it. Alec Issigonis (1906–1988) was not the first to think of reducing the overall length of a car by placing the engine at right angles to the axis of the vehicle. However, the worldwide success of the Morris Mini Minor (1959), designed in this way, made other designers receptive to the possibility. So a question: Are there prestigious existing designs that seem to have lessons for us? • A unique selling proposition (USP) is a common design objective. A product, process or service not much different from its competitors has only one weapon in the fight for adoption: low price. Emphasis is on low production or provision costs, profit margins are squeezed and
the viability of the project is challenged. To avoid this, the product, process or service needs a differentiating feature not present in the competition, known as the unique selling proposition or USP. A cheap USP comes from presentational design: the product looks different without functional change. A new car model is often a restyle of last year’s—less costly than a complete redesign. Sometimes, though, the USP is a product of radical deep design, as in the mid-1980s when Apple launched a graphical user interface (GUI) on their computers, first the Lisa then the Macintosh. Other makers could offer commands only by symbol-string, giving Apple a valuable USP. Quickly Microsoft followed Apple, offering a GUI called Windows. The rest is history. So a question: Can we give the design a USP? • Fashion plays a crucial part in many designs, and not only of clothes. In developed societies, all manufactured items we own or use are influenced to some degree by fashion.10 There are some who claim fashion should not be considered in deep design, but this is just intellectual snobbery. Movements in fashion provide a valued lubricant to change. Fashion rules, OK? A subtle relationship exists between fashion and the ever-desirable USP. Early adoption of an emerging fashion can bring a powerful USP. Yet fashions are not forever: blue denim will be unfashionable one day, and the USP vanishes when fashions change. Following the old mode near the end of the fashion’s life kills the chance of a USP—everybody is doing it by then. Think of cars: for years farmers, civil engineers and others who had tough driving conditions used four-wheel drive cars. From the mid-1990s they became fashionable, as SUVs, bought by people who never used them off-road. Due to rising fuel prices, by 2010 the fashion waned. People sought more economical cars—hard on manufacturers who had invested in SUV production. Now oil prices have fallen again. How cruel fashion can be. Similarly, for years mobile phones were made in two forms: ‘clam shells’ which folded in the middle and ‘candy bar’ phones, which did not
8 Technology’s Other Half
79
fold at all. Both sold, but fashion moved against the (perfectly functional) 'candy bars' for a while—perhaps influenced by TV's 'Star Trek' series, where 'clamshells' were universal. Makers of 'candy bars' had a marketing problem. Later, feature phones revived the 'candy bar' in a new form. 'Clam shells' were eclipsed in their turn. In many areas, fashion is rapidly changeable, so those who depend upon it for their USP need to respond fast. The rewards for early prediction of a fashion change can be significant. A close study of fashion trends is a key to this problem.11 So a question: What fashion trends bear on the design?
Coping with the Irrational Because design can never be totally rationalised, it would be a great stroke of luck if it were successful in a single try, so doxastic empiricism rules. The first design is attempted, then instantiated or modelled, and evaluated. If it falls short, as it likely will, some redesign follows. The design converges to the optimum in a series of iterations, and it is sometimes hard to know when it is good enough to stop. Researchers have studied design intensively, along with the related topics of creativity and innovation. Psychologists and neuroscientists describe creative mental processes.12 Economists study how design and innovation result in wealth generation.13,14,15 Historians of technology investigate how design and innovation happened in the past.16,17 Business schools study how new designs come to market, and old ones die there.18 All bring insights of the greatest value, but none alone teaches how design is done. Starting a design requires a specification for the product, process or service. At best, this will be an exhaustive statement of what is required if it is to be acceptable. Ideally, the specification comes from a sponsor—the person or people who want (and also pay for) the design. This specification sets the boundaries between designs that conform to the requirements and the rest. It allows us to differentiate between conforming and nonconforming solutions to the design problem.
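Before turning to sponsors, it may help to make the iterative loop described at the start of this section concrete. The sketch below is mine, under stated assumptions: the merit function, the adjustment rule and the stopping threshold all stand in for a designer's judgement and are not taken from the text.

```python
def design_iteratively(initial: float, evaluate, adjust, good_enough: float, max_rounds: int = 20) -> float:
    """Doxastic-style iteration: instantiate a design, evaluate it, and redesign
    until it is judged good enough or the effort budget runs out."""
    design = initial
    for _ in range(max_rounds):
        merit = evaluate(design)
        if merit >= good_enough:        # 'good enough', not provably optimal
            break
        design = adjust(design, merit)  # redesign informed by the shortfall
    return design

# Toy example: tune a spade-shaft length (mm) towards an assumed preferred 1100 mm.
evaluate = lambda length: -abs(length - 1100)                # closer to 1100 is better
adjust = lambda length, merit: length + (1100 - length) * 0.5
print(design_iteratively(900.0, evaluate, adjust, good_enough=-5))
```

The loop stops when the result is judged close enough, not when it is provably the best available, which is the character of real design convergence.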
Sometimes the sponsor is a government. Claude Chappe (1763–1805) designed and built the world's first long-distance telegraph system for the French state; Napoleon Bonaparte (1769–1821), as First Consul and later Emperor, became its most demanding sponsor and drove its expansion. This was not an electric telegraph but used optical semaphore signals over each station-to-station link. A semaphore at the transmitting station was viewed from the receiving site by telescope.19 Napoleon thought the network a necessity for running his European empire. Twenty-nine cities were linked, using six hundred telegraph stations. As the sponsor, though, Bonaparte was hard to please. He made unrelenting demands for speedy completion of the system. Under war conditions, members of the construction crew actually died of starvation on the job, and Chappe finally committed suicide, throwing himself down a well in 1805. His system was immortalised in fiction. In Alexandre Dumas's The Count of Monte Cristo, the justly vengeful Count brings financial disaster to the evil Baron Danglars by bribing a Chappe telegraph operator. False messages to Paris about events in Madrid cause Danglars to invest ruinously.
Design Begins Once a sponsor has communicated the specification, designing can begin. Though great designs do at times originate with individuals, more often design is a team activity. The design team can hope to succeed only if the specification they work to is comprehensive and intelligible, or can be made so. An example will make this clearer. Suppose a mobile phone is to be designed. The specification sets a maximum weight of 150 g, a minimum working range of 5 km and a minimum operational time between battery recharges of a hundred hours. Is this specification complete? Clearly the weight must be between 0 and 150 g, so is indeed explicitly bounded on both upper and lower sides. At first sight, though, range and operational time have only a lower bound. However, there are implicit upper bounds. Normally the range would be extended by increasing the phone transmitter power, which increases battery drain, so reducing operating life. There will
8 Technology’s Other Half
81
therefore be a maximum range, beyond which operational time cannot possibly conform, since it will be below the minimum set in the specification. Similarly, there will be a maximum use time beyond which conforming range is impossible. This, together with the explicit limits, defines a 'box' within which all conforming designs must be located. Often, though, the specification is incomplete. This makes for unlimited possibilities; the design problem becomes formally indeterminate. Once aware of this, the sponsor and designers may enter into dialogue, hoping to establish the missing design limitations. Sometimes designers are tempted simply to invent them unilaterally, which is always risky. But at times the incompleteness of a specification is intentional. If you are designing a car to break the world land speed record you do not need to set an upper limit on speed, provided safety parameters are met. When a design is open-ended in this way it is because an optimum design will take some parameter to the limit of the possible, highest or lowest. How far it is practicable to 'push' the unspecified parameter to new heights is a measure of the designers' success. In the military domain, for example, more 'bang for your buck' has always been welcome. Once a specification has been settled, it becomes possible to move forward to produce a design. This needs an act of creativity, and even before that a language for talking about the act of design itself. We now turn to these difficult issues.
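Returning to the phone example above: the explicit and implicit limits can be read as a 'box' of conforming designs. The sketch below is mine; only the three limits (150 g, 5 km, 100 hours) come from the text, while the square-law trade-off between range and battery life is an invented stand-in for the real engineering.

```python
from dataclasses import dataclass

@dataclass
class PhoneDesign:
    weight_g: float
    range_km: float
    battery_hours: float

def conforms(d: PhoneDesign) -> bool:
    """Check a candidate design against the explicit specification 'box'."""
    return d.weight_g <= 150 and d.range_km >= 5 and d.battery_hours >= 100

def battery_hours_for_range(range_km: float, base_hours: float = 160.0) -> float:
    """Assumed trade-off: extending range drains the battery faster, creating
    the implicit upper bound on range discussed in the text."""
    return base_hours * (5.0 / range_km) ** 2

for rng in (5, 6, 7):
    d = PhoneDesign(weight_g=140, range_km=rng, battery_hours=battery_hours_for_range(rng))
    print(rng, "km:", round(d.battery_hours), "h ->", "conforms" if conforms(d) else "non-conforming")
```

On these invented numbers, pushing the range to 7 km drops the battery life below the hundred-hour floor, which is the implicit upper bound the paragraph describes.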
Notes 1. MacGregor, N. (2010) A History of the World in 100 Objects Allen Lane (London, UK). 2. Loch, C. (2003) ‘Moving your idea through your organisation’ in Laurel, B. (ed.) Design Research MIT Press (Cambridge, USA). 3. This hierarchical view of science was fashionable up to the mid twentieth century. Later philosophers of science were to argue that no science should be seen as more fundamental than any other. The idea of a hierarchy derived from a mistaken view of the relationship between different sciences. But that was a generation into the future.
4. Huebner, J. (2005) 'A possible declining trend for worldwide innovation' Technological Forecasting & Social Change 72, online at www.sciencedirect.com. 5. Cowen, T. (2011) The Great Stagnation Penguin eSpecial (online). 6. Heath, I. (2014) 'Role of fear in over-diagnosis and over-treatment' the BMJ 2nd November. Pp 19–21. 7. Simon, H. (1996) The Sciences of the Artificial 3rd ed. MIT Press (Cambridge, USA). 8. Gigerenzer, G. & Selten, R. (Eds.) (1999) Bounded Rationality MIT Press (London, UK). 9. If the measures were independent of each other they might be optimised one at a time. Even this is not easy though: the time needed to progress to an instantiation might be different for different aspects of the design, so crucial decisions must be taken much earlier in one area than another. Managing these situations can be difficult when it is necessary to finalise on some aspects with a long manufacturing lead-time, while other aspects are not yet optimised. 10. Gladwell, M. (2000) The Tipping Point Abacus (London, UK). 11. Raymond, M. (2010) The Trend Forecaster's Handbook Laurence King (London, UK). 12. Miller, A. (1996) Insights of Genius Copernicus (New York). 13. Schmookler, J. (1966) Invention and Economic Growth Harvard U. Press (Cambridge, USA). 14. Rosenberg, N. (1982) Inside the Black Box Cambridge U. Press (Cambridge, UK). 15. Arthur, W. B. (1994) Increasing Returns and Path Dependence in the Economy U. of Michigan Press (Ann Arbor, USA). 16. Basalla, G. (1988) The Evolution of Technology Cambridge U. Press (Cambridge, UK). 17. Gimpel, J. (1988) The Medieval Machine 2nd ed. Pimlico (London, UK). 18. Christensen, C. (1992) The Innovator's Challenge Thesis, Harvard Grad. Sch. of Bus. Admin. (Boston, USA); (1997) The Innovator's Dilemma; & Raynor, M. E. (2003), The Innovator's Solution both Harvard BS Press (Boston, USA). 19. Mallinson, H. (2005) Send It By Semaphore Crowood Press (Marlborough, UK).
9 Talk the Talk
Specialist areas of activity often develop private languages the better to describe common experiences. This is certainly true of technology, but unfortunately this private language exists in a number of 'dialects' depending on the technology concerned and the geographical location where it is spoken. However, some words are in fairly widespread use and this chapter will try to bring them together to form a reasonably coherent whole. Think about a diverse collection of products, all meant to serve much the same purpose. Brought together, a number of them often appear surprisingly similar. Look, for example, at a large car park, with row after row of broadly similar vehicles—they have a strong family resemblance, to say the least. But suggest they are all much the same to their designers, well aware of many differences of detail, and they will be indignant. Something similar is true of processes and services too. In the beginning, different designs appear from various sources, with not much objective basis in common. As we have seen, design can never be wholly rationalised but must travel by various pathways, including established practice, prior experience, prestigious successful designs and fashion among others. So superficially it would seem unlikely that independently originating designs for a particular specification would have much in common. Yet the reality is that designs around us mostly do
look quite similar, except for detail. Can this seeming paradox be reconciled? Present-day computer operating systems use a graphical user interface (GUI), like the one seen on every Mac or Windows PC. The GUI now defines a specific and dominant characteristic which all computer operating systems can normally be expected to have. Detail aside, it is what sets how present-day computers look, distinguishing them from earlier ones having only an alpha-numeric interface. Both in this specific case and much more generally, designs conforming to this kind of dominant similarity, even though different in detail, are called canonical designs. The similar-looking cars in the car park are based on a canonical design. In the early stages of a new design, pressures towards conformity have not yet built up, and designs might appear which differ more radically. However, in real-world use, the diverse early design styles achieve different degrees of acceptance. As time passes, those preferred by users, and consequently also designers, appear increasingly frequently, while others fade away. Emergence of a canonical design, by this kind of quasi-evolutionary process, is a mark of a mature product. A classic example is the design of current fixed-wing airliners. Their present-day canonical form is an all-metal monoplane with a tubular fuselage. The wing is set at right angles to the fuselage, roughly halfway along its length. A tail assembly, having a rudder and elevators, is at the rear of the fuselage, and engines are attached to the wing or tail. An aircraft that is not like this looks out of the mainstream. Yet, of course, not all aircraft designs are canonical. Although most are, aircraft have also been built and flown in quite different forms. Successful aircraft have been designed which have a tail-first (canard) configuration. This is not for airliners but is often favoured for military fighter aircraft because it can give improved aerobatic capability. Designs like these, which break away from the canonical form of their time, are called dissenting designs. The Northrop YB-49 is another example. With neither fuselage nor tail assembly, it is a flying wing. Designed as a bomber, its low-drag form can be exploited for high speed, long range or low fuel consumption. The uncluttered line minimises radar reflection, making this also an attractive dissenting design style for 'stealth' aircraft.
A once-important class of dissenting aircraft designs were those for planes able to take off and land on water—both seaplanes, with floats as undercarriage, and flying-boats, with a fuselage acting as a seagoing hull. Capable of operating from a lake or the sea, they needed no airfields. However, because they were vulnerable to floating debris at takeoff and landing, and were also awkward for embarkation and disembarkation, both grew less common as functional airfields became more widely available. They never became canonical.
From Dissenting to Canonical and Contrariwise Vacuum cleaners are another familiar example of both canonical and dissenting designs. All vacuum cleaners draw in air laden with dirt and dust. For decades the two canonical designs, cylinder and upright, filtered out the detritus using a cloth or paper bag, before discharging the air back into the room. In 1979 James Dyson introduced cyclonic separation, a dissenting design that used a combination of cyclonic airflow and gravity to remove the detritus, eliminating the need for a bag. For a while at least, any success dissenting designs may achieve does not challenge the canonical form, which continues stably alongside them. Sometimes, though, a dissenting design is increasingly widely adopted over time, and itself takes over as the canonical form, after a transitional phase. This has happened now in vacuum cleaners, and no-bag machines have become canonical. With aircraft designs a similar transition happened when monoplanes displaced biplanes in the 1930s. In the earliest days of powered flight, the biplane configuration was attractive, and soon it became canonical. The large wing area then necessary for low-speed flight was achieved by using two wings, both made of wood and fabric, one above the other, with struts and bracing wires between them. A strong box-like structure of low weight resulted, despite the structural limitations of a wooden airframe. Aerodynamic drag was higher than for a monoplane, but this was less important at low speed. They were the canonical form for military and civil aircraft alike because of their structural advantage, so biplanes became almost universal in the first quarter of the twentieth century.
Dissenting designs were also seen at the time, both triplanes and monoplanes appearing. The triplanes were more frequent, because of the continuing structural difficulty of building a sufficiently light yet strong enough monoplane wing with an area large enough to lift the aircraft. Biplane designs became less common from 1930, as stronger all-metal airframes were adopted and air speeds rose, permitting smaller wing areas. The ousting of the biplane by the monoplane illustrates how a canonical design is established. It may begin as a dissenting design but is increasingly widely implemented. By the start of World War II, the all-metal monoplane was the canonical aircraft design form, as it remains today.

Another example of the canonical form is the choice of engine for small production cars.1 From the 1920s to the 1960s this was an inline four-cylinder water-cooled four-stroke engine, with valve gear operated by push rods. By contrast, six- and eight-cylinder engines were the choice for high-performance and luxury cars. Indeed, they became canonical strictly for those market sectors alone. There were some dissenting designs, however, in all sectors. In the high-volume category, Citroën was successful for many years with air-cooled flat fours, and also a flat twin for the low-cost but highly regarded 2CV car. Searching for improved engine performance, a few designers abandoned push-rod operated valve gear for shaft- or chain-driven overhead camshafts. Overhead-cam valve gear was a successful dissenting design, giving better aspiration at speed, but too expensive for general use. For nearly half a century overhead cam was unable to reach canonical status. In the second half of the twentieth century, overhead camshafts driven by toothed rubber belts were introduced. This dissenting design proved inexpensive and reliable and consequently soon became canonical.
Multiple Canonical Designs

As already noted in the case of car engines, different canonical designs can coexist indefinitely for the same product when well-differentiated groups of users are targeted. Thus cars for general use canonically have saloon (sedan) bodies with four or five seats arranged in two rows. By contrast, canonical sports cars (roadsters) have two seats in one row and
are soft-topped. Along with others, these two canonical design forms (having additional important points of difference also) have long coexisted, and will continue to do so as long as the user groups and market sectors they serve remain distinct. The large number of canonical forms that are able to survive for cars is a measure of the large number of cars produced worldwide. The number of each canonical design selling to a particular user group is large enough to be economic. The same holds for other large-volume products. Personal computers have four canonical forms: desktops, laptops, tablets and feature phones. For all mature products, both canonical and dissenting designs are likely to exist. As the relative frequency of user uptake shifts in favour of particular designs, new canonical forms commonly arise, while at the same time some older canonical designs dwindle to dissenting status, like biplanes in present-day aviation, with their limited niche market.
Canonical and Dissenting Processes and Services

So far only products have been discussed, but the same is true of processes and services. To take the latter, a car-wash provides a simple service in a standardised way but exists in one canonical design and two dissenting designs. The drive-through car wash provides the service in the currently canonical manner. Its great virtue is that it needs no attendant staff, yet requires only negligible participation by the user. Modern versions offer a tariff of different wash styles, with optional extras, including waxing and wheel washing.

One dissenting design for a car cleaning service is the jet wash. High-pressure water and detergent jets are provided, and more rigorous and detailed cleaning is claimed for this approach. The user must do the work involved, but jet wash can be cheap because the capital investment is lower. Another often-seen dissenting design is hand washing, broadly similar to jet wash but with the work done by employed operatives. The range of services provided can be far more varied than with highly
mechanised instantiations; however, what is offered may be more expensive than the basic drive-through, because of labour cost.

How do so many distinct designs for products, processes and services come into being? In different social contexts designs are created in different ways. Lacking scientific understanding of how vital technologies work, traditional societies go forward in doxastic mode, by trial and error; they do not welcome rapid change. By contrast, today's technology has the self-confidence to make changes in designs fast when needed, deploying both scientific insights and doxastic empiricism. The difference arises because so many of the memories we need are now in the explicit domain and easily available. In pre-literate societies, it takes much longer to marshal the information that a new design will need.
Design by Evolution

Some designs evolve, and traditional craft manufacture depends on evolutionary design. It works through small doxastic changes in particular instantiations of a traditional design. Suppose the shaft of a spade is made longer. Depending on the condition of the soil, this change may be preferred by those who use it. If the longer handle is consistently preferred it will take over from the traditional form as the canonical design. However, if it confers no advantage the longer spade will remain a dissenting instantiation. By this doxastic process, the spade design adapts ever more closely to its users' needs. Consequently, slightly different canonical forms evolve in different parts of the world, with shorter spades where soils are sandy and longer ones in clay areas.

Many artefacts develop like this. Think of smiths forging a sword. They hammer flat a bar of hot metal, then fold it along its length and hammer it again. Repeated heating, folding and hammering forges a laminated blade, far stronger than cast iron. Able to penetrate shield or armour, the first laminated finery steel swords were the terror weapons of their day, bringing great power to their owners. Thus was the legend of Excalibur born: a laminated sword. Was it forged by smiths in a Celtic lake village, ruled by a powerful 'Lady of the Lake'? Yet in the forging process, hammer and fold the metal one time too many and the laminar
structure breaks down. The blade is fatally weakened and will shatter. Excalibur did, and was taken back to the Lady of the Lake for 're-forging', presumably under warranty. Probably the sword-smiths fitted a new blade, scrapping the old. In Japan and Europe smiths quickly learned, by a doxastic 'trial and error' approach, how far forging could safely be pushed. For a thousand years, knowledge of how to forge these 'magic swords' passed from master to apprentice as 'mysteries of the craft', long before science explained why it worked.

People enthuse over the design of craft products, implying that superior designers of the past knew their work better than those living now. Such nonsense! The craft product is well designed because it evolved over many generations. Good design results this way, though at a snail's pace. If patterns of use change too fast, evolutionary craft design cannot respond in time. As a design strategy, it will fail in rapidly changing societies. The parallel with genetic evolution is obvious. However, not all evolutionary design need now be slow; with present-day access to large volumes of explicit memory from storage media and the internet, engineering development—one form it takes in advanced societies—can be quick. Engineering development, not radically new invention, makes the greater contribution to wealth creation, so economists have argued.
Revolutionary Designs

An alternative to evolutionary design is revolutionary design. In 1712 Thomas Newcomen demonstrated the first continuously acting steam engine, unlike anything seen before. Was this a revolutionary design? It had precursors. Pistons in cylinders featured in pumps since Classical times, and Thomas Savery (1650–1715) had used condensing steam to raise water twenty years earlier, but without a piston. Denis Papin (1647–c.1712) (inventor of the pressure cooker) experimented with an engine driven by gunpowder—the first internal combustion engine, but altogether too alarming in operation and seemingly impossible to tame. In 1707, after nerve-racking experiences, he took advice from Leibniz, and
built a piston-in-cylinder steam machine, but with each stroke manually controlled. For all this, it was Newcomen who designed and built the first continuously working steam engine with practical uses. His contemporaries thought it quite new. Nor is this a unique example; transformative designs appear all the time and always have. The first stone hand-axe with finger holds was doubtless as revolutionary in its day, as must also have been the first wheeled vehicles.

Soon after 700 AD, Viking longships began appearing with the keels they had previously lacked.2 At first, they leaked so badly that, by law, crews could not be obliged to sail a ship needing to be baled more than twice a day. The construction problems were soon overcome. Longships with keels had such superior sailing qualities they made possible Viking voyages that changed the politics of Europe. For a time the Vikings were the world's greatest sea power, and their ships voyaged to Constantinople in the East and Newfoundland in the West. This pattern is typical: an initial design revolution was followed by long evolution, which made it fully effective.

The transformative design idea behind personal phones—cellular radio—was developed in the late 1950s at the Bell Telephone Laboratories.3,4 After initial testing in Washington for the FCC, Martin Cooper (b. 1928), Vice President of Motorola, took cellular technology to New York City. On April 3, 1973, Cooper made a call on Sixth Avenue from a handheld phone. This call, made to Bell Labs, began a dramatic transformation of personal communications technology. In 1978 the analogue Advanced Mobile Phone Service (AMPS) began a Chicago trial covering 54,000 square kilometres in ten 'cells'. After a few teething troubles were sorted out, the trials proved an extensive cellular radio telephone system viable, allowing seamless handovers between transmitter/receivers in adjacent cells. Here too an initial design revolution was followed by rapid evolution, particularly in handsets, making light, compact personal cell phones practicable, driven by advances in chip design.

However, analogue cell phone systems like AMPS had problems with speech quality, interference and radio bandwidth congestion. A digital version was the inevitable next step. A second advance in the design of cellular phone systems began, leading to the digital GSM system, adopted by Europe in 1987 and operating in Finland by 1991. The world
standard by 2014 with 90% of the market, it was in use in 219 countries.5 On the back of the continuing rapid improvements in silicon microcircuits, GSM feature phones, beginning with the Apple iPhone, soon became the canonical cell phone design.

These were all technological revolutions. Viking longships were built straight from the wood, with their new keels patched in to existing hull designs, and not too well at first. Newcomen started from scratch and cut the metal to make his atmospheric engine. By contrast, the cell phone system was largely an assembly of pre-existing parts—radio receivers, transmitters and switching systems originally designed for other applications—yet this does not make it less of a revolution. System designs too can be revolutionary: something appeared with no prior counterpart. The same could be said of computers. ENIAC, arguably the first truly general-purpose electronic computer (1946), was an assembly of electronic circuits built, in the style then current, from valves (tubes) and other common radio parts. It did things never done before—a revolutionary design using existing techniques.

All products, processes and services begin with a design revolution, although sometimes long forgotten, lost in the past. They come to maturity by subsequent evolutionary design changes, triggered by feedback from users. Evolutionary changes in a revolutionary design should be regarded as normal, not an indication of initial failure. Thomas Kuhn's model6 of the development of science famously envisaged occasional revolutions interspersed with long periods of orderly progress. Technology follows this pattern too, with infrequent revolutionary designs interspersed with periods of evolutionary development. Yet despite revolutions in design tempered by subsequent evolution, imperfections in many highly successful designs are later detected, sometimes long after. They are imperfect but were good enough.
The Quest for Perfection—A Deadly Trap

In the mid-1830s Charles Babbage (1791–1871), the first computer pioneer, was seduced by the lure of perfection. He abandoned his design for the limited but potentially useful 'difference engine', designed for the
automatic calculation of mathematical tables and the like. Instead, he turned his mind to the design of a more powerful ‘analytical engine’, with many of the operating features of a true computer. The British government, appreciative of Babbage’s help as a code maker and breaker, had financed his work from 1823, but now cut off the money, disconcerted by his abandonment of the difference engine, of which they had been promised so much. The young Augusta Ada, Countess of Lovelace (1815–52)7 designed a procedure to enable the analytical engine to calculate the Bernoulli numbers. This was the first published computer program in history, and the birth of software. The Bernoulli numbers, an inevitable object of curiosity in this context, are a sequence of numbers, rational but not easily calculated. The beginning of the first Bernoulli sequence is 1, −1/2, 1/6, 0, −1/30 and so on. The analytical engine was never built but has been emulated on modern computers, confirming that her program would have calculated them successfully. Seven years later, under the auspices of the British Association for the Advancement of Science, a distinguished committee of experts was convened to consider whether the analytical engine should, after all, be built. As to cost, they said ‘it would surprise us very much if it were found possible to obtain tenders for less than 10,000 [pounds]’ (maybe half a million in modern terms). They did think it would work, and that it might prove useful, but on balance advised against building it. At this price, a computer did not seem good value for money. Charles Babbage was soon forgotten, remaining so until his notebooks were rediscovered in 1937. Interest in his work revived, and a difference engine of 31-digit accuracy was constructed in 1991 at London’s Science Museum, exactly to his original design. It performed as intended. But as for the analytical engine, it has been emulated using modern computers but never actually built, even as a demonstration. The pursuit of the perfect design is seductive but dangerous. Go for the perfect and the work of design can stretch out indefinitely, while the cost of pursuing it grows and grows. Perfection beckons to us but too often leads to disaster. The hard truth for designers is that all our imperfect world wants is a ‘good enough’ design. It must come acceptably close to
satisfying our needs and not cost too much. Later there may be something better, but that will be then and this is now.
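For readers curious about the Bernoulli numbers quoted a little earlier, they can be generated from a standard recurrence. The sketch below is a purely modern illustration, not Ada Lovelace's procedure; the function name and the use of exact fractions are choices made here for clarity.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers B_0..B_n (using the B_1 = -1/2 convention)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        # Standard recurrence: B_m = -1/(m+1) * sum over k<m of C(m+1, k) * B_k
        B.append(Fraction(-1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m)))
    return B

print([str(b) for b in bernoulli(4)])  # ['1', '-1/2', '1/6', '0', '-1/30']
```

The printed values match the opening of the sequence quoted in the text.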
Notes

1. Stone, R. (1992) Introduction to Internal Combustion Engines, 2nd ed. Macmillan (London, UK).
2. Hale 1998.
3. Lewis 1960.
4. Schulte & Cornell 1960.
5. Gosling 2000b.
6. Kuhn 1970.
7. Augusta Ada was the only legitimate child of Lord Byron, the poet. In 1835 the twenty-year-old Miss Byron, an able mathematician, married William, 8th Baron King, to become Lady King. Three years later he was elevated to Earl of Lovelace, his wife becoming Countess of Lovelace. In her early twenties she developed an illness which, from its symptoms, seems likely to have been a food allergy. Her doctors prescribed laudanum—opium dissolved in brandy—to which she became addicted. Her judgement was undermined. Her scientific writing became ever wilder and she tried to create a mathematical model of horse racing. It failed, and at the time of her death she had £2000 of gambling debts, well over £100,000 in current money, her husband having already settled more.
10 Patterns of Innovation
Design starts with just a specification for a new product, process or service. Yet after the expenditure of human effort and material resources, we confidently expect an instantiation of a new design to appear and are rarely disappointed. Between the two all is mystery, the terrain dark, little-known and perilous. How is this no-man's land crossed? What are the steps and pathways we have to take? The designer's mind at work is not easy to study; much must be deduced from designers' observable behaviour patterns. On this basis, it seems that three different types of innovation can be distinguished.
More of the Same

The pattern of innovation most commonly encountered is more of the same. The limiting parameters of an existing technology can usually be extended further, and substantial advantage is gained by doing so. In electronics, microcircuits grow ever more complex, so silicon chips offer more functions for the same price. Structural materials are progressively lighter for a given strength. Diesel engines grow more powerful for their weight. All this is 'more of the same': it is characteristic of technology
everywhere. Because essentially evolutionary, this style of innovation is predictable and its success highly probable. Risk is low, so industrial R&D centres work this way when possible, and small companies with limited resources have no alternative. Research centres, aiming at major technological advances, are far less involved. Risks of ‘more of the same’ innovations are low, but rewards from new designs are modest because any advantage over competitors is short lived—they can do it too. Yet economists argue that for society as a whole far more benefit comes from this style of innovation than any other. Neither romantic nor dramatic, it does not reach the history books, but it makes the world go round.
Passing Over of Quantity into Quality1

'More of the same' innovation can turn a familiar product, process or service into something essentially quite different when it has been continued far enough—often a long way. This is the second style of innovation: quantity into quality.

Thus the earliest mechanical clocks filled a large room and needed a great deal of power to make them work. In the beginning, as was long-established practice in windmills, the internal gearing was fabricated from wood, though this made for more friction. The mechanism was so hard to drive it was powered by ponderous stone weights unwinding the rope from large drums, and the whole thing was expensive. In the West, clocks therefore appeared in monasteries, used to ensure the correct timing of the Holy Office, and were also seen as public clocks. Early clocks with wooden mechanisms were generously lubricated with goose grease. This was a vital material for medieval engineers, who regularly designed with wooden surfaces rubbing against each other, in gearing, push-rods, axles or wherever. Properly lubricated, early wooden clocks had a long life—a few work still.

However, through a century of 'more of the same' development, encouraged by a desire to make them less expensive to build and house, the clock mechanism became all-metal, getting smaller and easier to drive, so that much smaller weights could be used. After a while, and
following the move to operation by spring power instead of weights, clocks were small enough to sit on a shelf and be contained within a single large box, usually ornately decorated as befitted an object of pride. Domestic clocks had appeared. Later, with yet more dramatic size and cost reduction, the time was right for the manufacture first of the pocket watch and then the wristwatch, both altogether new things, previously unseen. From public clocks to watches was a classic example of ‘the passing over of quantity into quality’. The progressive ‘more of the same’ development undertaken finally resulted in a clearly qualitative change. The development of radar in the mid-twentieth century was another example: also a result of progressive ‘more of the same’ development of radio equipment. At first, used solely for communications and broadcasting, radio transmitters became more efficient and could emit higher power at ever higher frequencies, while receivers had greatly improved sensitivity to weak signals, thanks in part to many innovations by Edwin Armstrong. These advances made it possible to reliably detect the weak radio energy reflected back from aircraft and ships ‘illuminated’ by a radio transmitter, and hence to work out their range and bearing. Radar, too, was a result of ‘quantity into quality’ developments, improvement in radio equipment leading to the qualitative change into radar. Yet another was the coming of ‘the computer on a chip’. The earliest digital microcircuit chips carried a few circuits which performed the simple logic functions: AND, OR and NOT. However the number of these logic circuits, or ‘gates’, it was technically possible to form on a chip grew with exponential rapidity. Soon chips could be fabricated carrying thousands, and later millions, of gates. They were sufficiently complex by 1971 for Federico Faggin (b. 1941) to design the 4004, a complete functioning computer on a chip, the first of its kind commercially available. Selling for $60, when the smallest computers were priced in hundreds, this was the earliest microprocessor. When I first saw one, the 4004 sent a shiver down my spine, for I knew the future was upon us, exciting but wholly unpredictable. This was quite new—the ‘passing over of quantity into quality’ yet again. It made the personal computer affordable and transformed computer use into a universal everyday activity. These ‘quantity into quality’ developments are less common than ‘more of the same’ and not always quite so easily predictable, yet can
usually be foreseen reasonably well. They are tackled by major strategic development programmes in large companies and are not impossible for smaller firms. The advantages gained in this way are easier to protect by patent than ‘more of the same’ innovations. So ‘quantity into quality’ innovation can yield an enduring competitive advantage to those who have the courage to go for it. The fortunes of great companies have been built on successful examples.
The Knight's Move

The third category of innovation is the knight's move—far less common than the other two. This is not the steady progress in a straight line of 'more of the same' or 'quantity into quality', but requires a radical change of direction. Occasionally, capriciously it almost seems, design takes a 'knight's move', the result of an out-of-line jump, creating something altogether different, which leads to a new category of PPS, sometimes a whole new technology. An example was Parsons' compound steam turbine of 1884, a radical departure that quickly made it possible to use steam to drive the largest ships and also electrical generating stations of up to many megawatts capacity, totally eclipsing the reciprocating piston engines formerly used. Another knight's move was the invention of solid-state electronic devices by William Shockley, John Bardeen and Walter Brattain—a jump right out of the line of the valve development then successfully continuing, on which the electronics of the day had been based. Another example was the sudden appearance of the graphical computer interface in 1973, from the work of Alan Kay and Adele Goldberg (b. 1945), when all other computers were controlled by keystrokes. Every time, a new day had spectacularly dawned.

'Knight's moves' originate anywhere—in university laboratories, research centres, occasionally with lone inventors. However, a successful launch of the resulting innovation always demands considerable resources, so effective exploitation of 'knight's move' inventions is the domain of large, well-heeled organisations. University or small-company designers may have the initial idea, but once they have protected their intellectual property rights through patents they will need to seek a larger partner,
corporate or individual, with sufficient resources to carry the development to success. The knight's move has been problematic because we have so rarely seen it coming. 'Prediction is difficult, especially if it's about the future,' said Niels Bohr, and those who aspire to look ahead usually base their predictions on 'more of the same'. So mid-nineteenth-century writers, projecting forward the growth of road traffic in cities, warned that by the 1920s London's streets would be knee-deep in horse droppings. In the first decades of the twentieth century, many were convinced, by similar 'more of the same' reasoning, that the future of flight, commercial and military, would be airships. With a few exceptions, futurologists do not spot the knight's move. This is why early twentieth-century visions of the future bear little relationship to what actually came to pass.

In 1936 H. G. Wells (1866–1946), a highly regarded oracle, inspired the film Things to Come, which envisioned an art-deco style future without computers, internet or jet aircraft.2 A supposedly early twenty-first-century aircraft appears, propeller-driven and with a semi-open cockpit. Reflecting general fear of poison-gas warfare in the late thirties, the pilot wears a gas mask. It has the form of a high dome over his head—who knows why, but it looks good. Poison gas was not used in World War II, which began three years after the film's release. It is too dependent on the weather to evoke great military enthusiasm.
The Prophetic Role of Precursors

'Knight's move' inventions have an unruly quality. Coming out of the blue as they do, to foresee a 'knight's move' innovation might be thought quite hopeless. Always a bright idea coming from an individual or small group, such leaps of imagination seem impossible to predict, let alone plan for. Yet if we are open to them, subtle hints are available of what might lie ahead. Sometimes the innovative process goes astray, leading us into a path where available techniques are not adequate to fully realise the design we want. Such a result may sometimes be seen as a precursor, which hints at
the world to come, yet is to have little part in it. These precursors are innovations that turn out, at best, to have only a limited future themselves, but can give some indication of ‘knight’s move’ developments which will appear later and be more soundly based. The history of technology is full of precursors, offering their glimpses of radical change to come. Precursors should not be ignored. They let us live, for a time, half in a future they cannot quite deliver, allowing us to get some grasp of what things will be like there. A precursor can also have a transitional role, for a time doing well enough for some potential users what will ultimately be done much better by a different technology later displacing it. And of course, times change and techniques develop, so today’s precursor, using new techniques, might become viable tomorrow. So why do precursors matter? By definition, precursors are unsuccessful, technology ‘might-have-beens’. Failures all to varying degrees, in the longer term they are doomed to receive little notice outside the history books, but they can be important as predictors.3 A classic example was Charles Babbage designing the first computer while Augusta Ada, Countess of Lovelace, pondered its software. Sadly, his entirely mechanical ‘analytical engine’ of 1833 was so complex, massive and costly it was doomed never to be built. Yet it was a valid precursor, pointing forward to what was achieved in the 1950s. Until the twentieth century, ships at sea were unable to send messages to land. Seafarers simply vanished over the horizon to an unknown fate, incommunicado until their next landfall. Yet nearly two generations before the first maritime radio, there were a few vessels privileged to ‘talk’ to the land from mid-ocean. From the 1850s, ships that laid under-sea telegraph cables routinely sent messages back over the wire to their point of departure. Those who were aboard had the novel experience of being at sea yet in communication with the land, and in principle, through the land-based telegraph network, with all the rest of the world. Cable ships can be seen as the precursors of ships with maritime radio, which appeared half a century later. Charles Wheatstone’s ABC telegraphs, direct reading and easy to use (if slow), gave private house-to-house communication in a central London network from 1860 for many years, only made obsolete by the
telephone. It was a precursor to the telephone, but also, considering its alphabetic character and taking a longer view, might be seen as a precursor to twenty-first-century texting. They may well ultimately be seen as misconceived, yet precursors cannot be ignored, because they hint at a coming ‘knight’s move’. Typically built around an idea of great charm, they reflect a human want or need, yet the techniques deployed deny them something required for complete success. So we cannot but take these romantic failures seriously. But because precursors give us a glimpse into an unknown future their significance is not easily readable, and they sometimes come in confusingly many forms. They are difficult to interpret, not least because our own hopes, fears and wishes colour what we think we see. But then foretelling the future was always like that.
Lucretia, Dead and Buried

In 1825 a tragic event brought a new mind into the community thinking about telecommunications, giving it a new path of development, a knight's move of the greatest importance. At the start of the year Samuel B. Morse, a professional portrait painter, rode horseback four days from his New Haven home to Washington, commissioned to paint General Lafayette. On 11 February he received a letter telling him his wife Lucretia was sick. Starting at once, he arrived home ten days after her illness began, only to find her dead and buried. Small wonder that he became an enthusiast for telegraphy's promise of 'instant' messages.

Morse needed advice about the techniques he might use for a telegraph design. In the United States it was going to be necessary to signal over greater distances than the English needle telegraphs were achieving. He turned to the Albany Academy where Joseph Henry (1797–1878) was teaching, a gifted scientist, later professor at Princeton University and the first Director of the Smithsonian Institution. For some years Henry had researched electromagnets—a coil of wire wound on a soft-iron rod as a core. When a current passed through the wire the core became a magnet, but when the current was turned off the magnetism vanished again. As shown to Morse, the electromagnet was a
two-state device, either magnetised or not. So, on a pivot nearby mount a small iron arm which can be pulled to the core when magnetised, add a spring or weight to return the arm to its rest position when the current is off, and you have a current detector. It is suitable for use as a sensitive telegraph receiver if a coil of many turns is used, but is clearly binary: either attracting the armature or not, with no intermediate states. The sending end was easy: a switch or 'key' adapted for speedy operation, which connected the transmission line to a battery—the Morse key. This too is a binary device.

Morse had the bones of his system, but all it could send was two kinds of signal, indicating line energised or not, in modern terms '1's and '0's. The Morse code made it possible to use this constrained resource to send all the letters of the alphabet (in upper or lower case), numbers, punctuation and any other symbols, as needed. So text of any kind could be transmitted over this purely binary channel. Dots in the Morse code were binary '1's, dashes were three successive '1's without gaps between them, and spaces were zeros or groups of zeros. The channel had only two states: '1's and '0's—a true binary system. Dots, dashes and spaces helped with memorising the code, no more.

In the United States, Morse is widely thought to have invented the telegraph. He did not. That honour fell to Paul Schilling, a German aristocrat working for the Tsar. William F. Cooke, an Englishman, inaugurated the first commercial telegraph service, five years before Morse. Yet what Morse did was more significant than either of those achievements. He conceived and built the first communications system using serial binary digital signals. The Morse code, patented in 1838, bears witness to his achievement. He was the prophet of our digital age.

Morse's binary digital signalling is the foundation of current information technology, a key invention. Even so it had a major setback later in the nineteenth century. First the telephone, then sound broadcasting and later television emerged, all transmitted in analogue form. In telephones, from their inception in 1878, Morse's binary signalling was displaced by analogue transmission of the voice, in which the pressure variation in a sound wave was represented by a continuously varying electrical voltage. Potential digital designs doing the same thing would have required switching techniques far faster than were available at the time.
Adequate for telegraphy, mechanical switches could not operate fast enough to encode voice or video signals, which had to wait for electronics.
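To make the binary character of the channel concrete, here is a minimal sketch of the encoding rule described above: a dot becomes a single '1', a dash three '1's, and the gaps become '0's. The two-letter table and the helper function are illustrative inventions, not Morse's own notation.

```python
# Illustrative subset only; the real code covers the full alphabet and more.
MORSE = {'S': '...', 'O': '---'}

def to_binary(text):
    """Render text as serial line states: '1' = line energised, '0' = line idle."""
    letters = []
    for ch in text.upper():
        symbols = ['1' if s == '.' else '111' for s in MORSE[ch]]
        letters.append('0'.join(symbols))   # one idle unit between dots and dashes
    return '000'.join(letters)               # three idle units between letters

print(to_binary('SOS'))  # 101010001110111011100010101, a purely binary serial signal
```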
The Analogue Era

Analogue transmission of voice, music or pictures became canonical for nearly a century. Binary Morse telegraphy looked more and more like a precursor only; it held on just as a dissenting design for niche applications, where signals were weak or suffered interference. With just two possible signals, '0' and '1', there was a much better chance of distinguishing them even on a bad circuit. The inherent robustness of binary systems kept telegraphy alive, but only where analogue telephony was ineffective. At sea and in the air, where antennas were often inadequate and working radio ranges long, Morse transmissions were often the safer bet. They were popular with radio amateurs too, but in this case because the equipment was simple and cheap.

With the coming of electronics in the twentieth century, switches became feasible which could operate in a millionth of a second, and later much less. In 1937 Alec Reeves (1902–71) proposed 'pulse code modulation', giving good-quality speech transmission in binary digital form by using fast electronic switches. This was enough to enable Morse's digital system to begin retaking the territory it had lost, while fast electronic switches also went on to new triumphs in computing. For analogue transmission it was the beginning of a long decline: by the end of World War II, the digital age was already upon us. The two modes of transmission exchanged places: today transmission of sound and vision by digital systems is canonical, and analogue transmission is a dissenting form.

As for transmission, so also for recording. The sound, vision and data on present-day DVDs ('digital versatile discs') and SDHC chips are all in the binary digital form that Morse pioneered. Morse's binary telegraphy conquered the world, then appeared relegated to dissenting status by analogue techniques. Yet all it needed for binary coding to reclaim its lost empire was a switching technique much faster than the Victorians had. Although the code Morse patented in 1838 has been displaced by newer codes in contemporary systems, these
too are still transmitted as binary digits. Morse’s telegraph was the ancestor to all present canonical IT systems. With the benefit of hindsight, we can now regard analogue transmission as a precursor to the digital age.
The Honourable Failures

In fact, major advances in technology usually do have precursors: honourable failures doing the same thing in a technically different way, one which proves a blind alley. Initially, it may give results more easily, which is why it appears earlier than the mainstream. So it was, for example, with steam passenger cars, which, though perfectly functional, were soon swept away by cars with internal combustion engines, far more flexible in use.

Then again, in the early days of radio, at the beginning of the twentieth century, the unsolved problems were how to generate high radio power at the transmitter and how to detect weak signals picked up by the distant receiving antenna. Both problems were ultimately solved by the coming of vacuum tube electronic devices: valves. However, these were the product of a knight's move, and not generally available until after 1913. Before that, great ingenuity was expended on devising viable radio signal detectors.4 They were precursors, no more, to what was about to come.

For transmitters what was needed was alternating current at high frequency and high power. There were plenty of alternators around, rotating machines for generating a lot of electrical power at a frequency of 50 Hz (60 Hz in the US). For broadcasting, the need was for power at frequencies a thousand times higher. Ernst Alexanderson (1878–1975) was a graduate of the Stockholm Royal Institute of Technology. In the early 1900s, while working for GE in the United States, he solved this problem, following a 'more of the same' strategy. Could the existing rotating alternators generate at this frequency? He increased the rotational speed, to a limit dictated by the risk of bursting, and increased the number of power cycles delivered by each rotation. With Reginald Fessenden (1866–1932), a Canadian radio pioneer, as his sponsor, Alexanderson delivered to the station at Brant Rock, on the Massachusetts coast, a 75 kHz, 500 watt alternator, suitable for
transmitting voice and music. The first entertainment broadcast ever, of readings and carols, had Fessenden playing the violin himself. Transmitted on Christmas Eve 1906, it was received several hundred kilometres away. Despite this success,5 the Alexanderson alternator proved a precursor only. In that same year of 1906, Lee de Forest announced his audion, forerunner of later electronic valves (tubes), around which transmitters and receivers of the future would be designed.

Precursor television systems of the 1920s were mechanical, using a disc or mirror drum for both scanning and reconstructing the picture. However, mechanical television, though an easy entry point, quickly faded away once electronic television appeared, offering vast superiority in definition and picture size.

In the air, the airships of DELAG—the world's first commercial airline—provided scheduled services in Germany from 1909, and thirty years of varied development followed. But airships were always painfully slow: 131 km/h, or 82 mph, for the 1938 Graf Zeppelin II LZ 130. The LZ 130 was then the largest object ever to fly, three times the length of the Boeing 747, but it was not a step towards the future. Badly affected by weather, airships suffered disastrous structural failures, and can now be seen as precursors only to our airliners. Kites, not balloons, were modern airliners' technology ancestors.

Another precursor was Concorde. One day supersonic transportation will be routine, but not using the aluminium alloy structure and air-breathing turbine techniques that Concorde was stuck with. That plane evokes happy memories for me though, a real beauty—three hours twenty minutes was my personal best Atlantic crossing. A future generation of supersonics, maybe using titanium or steel structures and some future version of scram-jets for propulsion, will take an hour or less, travelling outside the atmosphere, causing no sonic boom.
Precursors Now?

An obvious question is whether there are important precursors now, as yet unrecognised for what they are. One could be in nanotechnology. Can there be versatile machines smaller than the eye can see? On silicon
chips, it is already possible to design artefacts with dimensions down to less than a thousandth of the thickness of a human hair, so it seems not wholly precluded. People have already built nanomachines this way—using the same techniques as when silicon chips are made. They perform indifferently but are exciting because they may prove to be precursors. So the prospects for nanotechnology cannot be ignored, bringing hopes of advancing medicine, easing pollution problems, and transforming manufacturing processes. It would also have a chilling military potential—lethal robot drones might be possible no larger than an insect. Even if the nanomachines now built prove precursors at best, ultimately they could attract more effective construction techniques. If so, they will take the world by storm.

The present technology of space travel is surely also just a precursor. Space travel has charm and fills a need, but its feasibility is marginal, achieved at great cost. Space vehicles eighty per cent propellant at take-off seem wrong. They are reminiscent of the 1930s airships, almost entirely filled with lifting gas. One day we may find how to do it better. Space is close after all; it begins only twenty or thirty kilometres above the Earth's surface. In 1895 Konstantin Tsiolkovsky (1857–1935), the founding father of astronautics, suggested building a tower into space, but no material, known or conceivable, has sufficient compressive strength to make this possible. However, certain exotic fibres or wires have exceptional tensile strength, enough to reach a satellite in geostationary orbit. Would it be possible to build a space elevator? It would be hopelessly slow. Calculations suggest it would take at least five days to lift a load to geostationary orbit. This is not the technology we await, but as to the future, who knows?

So what determines whether the inventive thrust leads to a successful product, process or service (PPS), or generates only a precursor? To answer this question demands deeper insight into the innovative process. There are forces that push the innovative process along and others that pull.
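The 'five days' figure for a space-elevator climb is easy to check with rough arithmetic. The climber speed assumed below (300 km/h, roughly that of a fast train) is an illustrative guess, not a figure from the text.

```python
# Back-of-envelope check of the space-elevator climb time.
geostationary_altitude_km = 35_786   # altitude of geostationary orbit above the equator
climber_speed_kmh = 300              # assumed climber speed (illustrative only)

hours = geostationary_altitude_km / climber_speed_kmh
print(f"about {hours:.0f} hours, or {hours / 24:.1f} days")  # roughly five days
```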
Notes

1. A phrase taken from Friedrich Engels' unfinished book on dialectical materialism, Dialectics of Nature (1883).
2. Wells, H. (1933) The Shape of Things to Come Hutchinson (London, UK).
3. Nova, N. (2011) Les flops technologiques Editions FYP (Paris, France).
4. Phillips, V. (1979) Early Radio-Wave Detectors IEE (London, UK).
5. Antonini, Y. (2007) La TSF Editions Alan Sutton (Saint Cyr sur Loire, France).
11 Invention Push and Market Pull
What forces start innovations going and propel them forward? Two important ones that drive innovation to successful outcomes are invention push and market pull. These two give rise to quite distinctive innovative patterns and flourish in different work environments. Invention-push developments drive forward by their own dynamic, not in response to external demands. They are born as the bright idea of researchers, inventors or designers—some in industry, some working independently or in a university. Funding is from existing resources, loans, grants or an accumulated deficit. By contrast, market pull begins only when the new PPS gets to users. Directly or through market forces, they feed back what is right and what is wrong about the new product when in use. Often the design is changed to meet their wishes, or new designs are launched. Meanwhile use brings money, and hopefully funding problems grow more tractable.

We begin with invention push. Somebody becomes aware of a potential design advance in what they are doing. It will be a 'quantity into quality' advance or a 'knight's move'. If the idea is adequately backed, a new invention-push innovation will emerge, having characteristics different from anything seen before. So invention push drives the doxastic
method into unexplored territory. It has three component parts: charm, perceived feasibility and perceived market.
Charm

The most elusive component of invention push is charm, the ability to capture designers' hearts. An idea with charm has a 'rightness' about it, a feeling of inner logic, the elegant way, indeed the only way to choose. Some ideas have charm, others none. Only charm persuades young designers to make a cause of a hunch, and lures their seniors to find the financial backing. Six factors, among others, generate charm:

• Elegance, through intrinsic simplicity and minimal use of resources, which gives the idea the feel of inevitability.1 Technologists call it a 'sweet' design and want to use it. Example: a dishwasher in which the rotating arm is driven by hydrodynamic forces—no drive motor needed.
• Something unexpected, even surprising, about the idea—'I'd never have thought of doing it that way'. Example: mobile phone text messages, despite the limitations of a ten-button keypad.
• Perceived analogy to a successful existing design. Example: the success of ships at sea encourages the use of lighter-than-air flotation vessels—balloons and airships.
• Seeing the design as a logical next step in a sequence of ideas, which seems to suggest a kind of historical inevitability about doing it. Example (1): Tommy Flowers builds the Colossus computer using 1600 (later 2400) valves, which made the next step, to a fully stored-program computer—Tom Kilburn's 'Baby'—seem inevitable. Example (2): the ability to keep a near-flat surface while forming transistors in a silicon chip (planar transistors) made connecting them on that surface as an integrated circuit look the logical next step. Thus was microelectronics born.
• A sense of discovering something which had been much sought after over a long period, so answering a classic puzzle at last. Example: Ørsted's discovery of the deflection of a magnetic needle by an electric current flowing nearby was the long-sought interaction of electricity
and magnetism.2 This advance in technique led immediately to a design advance: Schweigger's 'multiplier' (galvanometer), making communication over extreme distances possible.
• The ability of the design to convince crucial authorities in the field of its merit, even when others remain sceptical. Example: in 1936 Robert Watson Watt convinced Air Marshal Sir Hugh Dowding that radar could successfully detect and locate attacking aircraft. Despite mistrust by Frederick Lindemann, Churchill's own science guru, Dowding triggered the construction of the Chain Home metre-wave radar system, giving the RAF a crucial edge against Luftwaffe attacks on Britain.

If it has one of these six factors a design will have charm, and if most are present charm will be compelling. The power of charm drives champions of an innovation, evoking single-minded devotion. Charm is essential for a new idea to be taken up—without it there would be no hope at all. Yet charm can so easily be a deceiver. Ideas of the greatest charm later proving not feasible or unwanted have ruined many enterprises—history is full of examples. Memorable is Brunel's South Devon atmospheric railway (1847), its coaches propelled by a vacuum pipe laid between the rails, so no locomotive was needed. Its charm was enticing but to make it work reliably proved impossible.

Jacob Samuda (1811–1844) invented the atmospheric railway. A Sephardi, he was, according to his tombstone, 'the first Jewish engineer'. After completing an engineering apprenticeship, he set up the shipbuilding firm of Samuda Brothers with his brother Joseph. In 1843 they built the Gypsy Queen, which was fitted with his improved high-pressure steam engine. On her first trial, she exploded. Samuda was killed, along with six others.

Always in love with innovation, Brunel became aware of the atmospheric railway and used the idea on his South Devon Railway. Expensive to run and unreliable, it was replaced by locomotives after a year, having lost £400,000. Lines from Kingstown to Dalkey, near Dublin, and from London Bridge to Croydon also failed. Yet the atmospheric railway was a precursor to electric railways, moving the energy source, an air pumping station, off the track to a fixed location, like an electric power sub-station. The advantage was clear.
From airships, all the way to gallium arsenide logic chips, many ideas of great charm came to little, yet cost their sponsors dear in finding out. So if invention push seems exceptionally strong, the design team should look back at the sources of charm, to make sure all the push they generate is realistic. If an innovation simply oozes charm it is also worth looking extra hard at its feasibility and market prospects. Are the problems being underrated, the prospects pitched too high, all on a tide of charm-engendered warm feeling?
Perceived Feasibility

The second component of invention push is perceived feasibility. Is there a reasonable chance of creating the new product with the resources available? Note that it is the perception of feasibility that is being debated here. That it actually is feasible will not be confirmed until later, when it is instantiated. At the stage of invention push, what counts is the opinion about the feasibility of the innovation, and the judgement will be proved right or wrong later. When available, good science backing the idea is encouraging, but mistakes about feasibility are commonplace, and generally overoptimistic.

• Needing a multi-wire line in a wooden support, Wheatstone's five-needle telegraph was too expensive for long-distance use. Later Cooke's one-needle coded telegraph proved feasible over long lines and was commercially successful.
• The rigid airships of the 1920s, of which much was hoped, suffered structural failures which brought them to destruction. The problem was intractable with the structural materials and design methods then available. We could solve the problem now, when it is too late.
• Even the Wankel rotary engine, elegantly promising lightweight car and aircraft engines in the 1970s, at first had an unacceptably short life, due to rotor wear. It took extended development to overcome the problem. The engine is now established and successful, so perseverance paid.
None of these problems was fully foreseen. To improve perceived feasibility, choosing a new design that makes maximum use of existing successful designs is the low-risk strategy. In the out-turn, though, this low risk may prove a chimaera if there are even the smallest changes from prior art.
Perceived Market

The third necessary factor for invention push is the firmly held conviction, again not necessarily well-founded, that enough of a user base exists to justify development. Over-optimism about the perceived market is so common it must be regarded as normal; few innovations get off the ground without it.

• For the first two years of operation, the world's first commercial telegraph took less money for sending messages than it did for admission to see working equipment in the 'telegraph cottage'.
• The take-up of DAB sound broadcasting in the UK is slower than hoped, despite the advantages it offers. Many listeners still stay with FM, imperfect but good enough.
• The UK-France Channel Tunnel struggled to achieve the passenger numbers on which the project was posited. If the low early utilisation had been accurately predicted the project might have died. We may bless the error now that the construction is seen as a 'sunk cost'.

The perceived market is hard to estimate, and in these examples was exaggerated. In a few cases, though, such as texting on cell phones, the market turned out far larger than perceived. Potential users find it hard to foresee what the newcomer will do for them without trying it.
From Push to Pull

If any of the three essentials—charm, perceived feasibility and perceived market—are missing or inadequate, invention push will be weak. Its thrust yields a precursor at best, signalling an important future in which it has no part. But whether wisely or foolishly, with good reason or not, if all three of charm, perceived feasibility and perceived market can attach themselves to a new idea, a strong push will be generated. In due course, a new design emerges, different from anything seen before.

Then the real troubles begin. No demand evoked the invention-push newcomer, and its initial design is without user feedback. The inventors are optimistic, but the people who have to bring it to market see no ready-made slot for it to drop into. Because invention-push innovations are so hard to establish in the market, they easily end as nobody's baby: born at great expense only to languish from inattention. They rarely evoke early market pull—though they are worth their weight in gold when they do. Yet many think it wiser to stay with products people already want to buy. Some companies do precisely that. A well-known maker of garden equipment has not introduced an invention-push product for years. Their first, on which the company was founded, proved so painful to bring to success that the directors did not wish to repeat the experience. Yet industries pay a high price for ignoring invention push, because the successfully launched push innovations of today are antecedents of a useful and profitable world to come.

Astute entrepreneurs, knowing they cannot ignore invention push, try to use both push and pull forces to carry innovation forward. It is not easy, so the marketers will first sell to early adopters, people willing to take a design before its canonical form is clear. The ever-desirable market pull arises in the use environment when designers begin to get feedback from users. So when market pull is felt the situation is equivalent to an order having been placed on the designers, though one that often changes the specification with which the product, process or service must comply. A redesigned PPS, when it appears, will be the consequence of the pull force exerted by this requirement: it is a new market pull development.
In the market pull domain a new product, process or service arises simply because a potential user asks for it, and seems a plausible purchaser. A large organisation may issue specifications and invite tenders—this is typical of military and utility procurement, and is an obvious example of market pull. But it can work in other ways. Salespeople in a business may grow aware that a new product, process or service would find buyers if it had certain specified characteristics, and they begin to champion it within their company. Because market pull has the capacity to adapt the characteristics of the innovation more closely to the wishes of the potential customer it has seemed like an unalloyed good to many, and in the past 'get ever closer to the customer' was seen by some designers as the golden rule, a self-evident truth propagated with enthusiasm by many a business school. In technologically static markets this may indeed be the case. However, customer preference hides a serious threat when technology begins to change rapidly. It becomes prone to overnight cliff-edge discontinuities, to which the supplier cannot adapt sufficiently quickly.3

When a design with strong push encounters a market that pulls it with increasing power a dramatic outcome is assured. The cellular phone had the greatest conceivable charm and was pulled vigorously by a large population of latent users in a market previously not served. Its adoption expanded with explosive force. This powerful market pull enabled the cell phone to create the fastest-growing industry in history. To control theory (Chap. 13) such explosive growth is the effect of positive feedback in a control loop whose loop gain is greater than a critical value—the Nyquist limit. The greater the gain, the faster it grows. Exactly the same mechanism causes 'bubbles' in the financial markets, over-inflating values to a point where later collapse is inevitable. Technology bubbles are more firmly tied to real-world events, and therefore more rarely burst. However, such bursts do happen, commonly when an innovation has such charm that problems of feasibility and market are given insufficient weight. The atmospheric railway is a textbook example.
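A toy sketch of the positive-feedback point, on the simplifying assumption that each period's uptake is a fixed multiple (the 'loop gain') of the previous period's: above the critical value of 1 the series runs away, below it the innovation fizzles. The numbers are arbitrary and purely illustrative.

```python
def uptake_series(rounds, loop_gain, seed=1.0):
    """Each period's uptake feeds back into the next, scaled by loop_gain."""
    series, x = [], seed
    for _ in range(rounds):
        series.append(round(x, 2))
        x *= loop_gain
    return series

print(uptake_series(6, 1.8))  # explosive growth: the cell-phone (or bubble) regime
print(uptake_series(6, 0.8))  # decaying uptake: push without pull
```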
Design Too Has a Natural History
How do designs evolve? How does innovation begin, then continue, and what draws it to a close? Can we map the course from the first ideas to a proven design of social and commercial value? The natural history of an innovation is an individual thing, often blown this way and that by accidental circumstances. However, in general terms, smoothing out perturbations as far as possible, the history often does follow a broadly similar path, and we will now try to identify the form it takes.4 To simplify things, the argument that follows will assume that a single measure of merit can be used to evaluate the worth of successive changes in a design. Although this is strictly true only for a minority of designs, the simplified picture reveals suggestive patterns. The natural history of a design has three distinct regions, corresponding to the three phases in a design’s life cycle.
• In the first of these not much seems to happen, despite continuing investment in design, and maybe some partial instantiation. Measurable improvement is barely visible. Only invention push is impelling the work forward—the investment is deepening understanding of the techniques employed but does not yet contribute much to creating a viable design.
• After a time the design does begin to improve noticeably, turning into a reasonably stable entity. Soon it is possible to begin selling—a population of users starts to grow, exerting market pull on the innovators, weak at first but strengthening. Once the point has been passed where market pull exerts more power than push, the launch has been successful.
• In the final phase, improvement slows once more as the innovation comes towards maturity.
The launch of the new product, process or service is sometimes represented by a graph, its shape familiar to anyone who has ever done research on growth in almost any field (Fig. 11.1). It is called the natural growth curve.5 It represents the relationship between the ‘goodness’ of the innovation—its measure of merit—and the investment made in it. This may
Fig. 11.1 The ‘natural growth’ curve (merit plotted against investment)
be expressed in money, or in person-hours, or some other measure if it seems more appropriate. Plotted as a graph, in almost all cases the curve of merit against investment will approximate to the S-shaped ‘natural growth’ curve. Why this shape of the curve is so often seen is well understood. At first, there is an investment with very little gain in merit. In the middle region, the figure of merit improves rapidly; market pull is increasingly important in this region, with invention push fading away. Some inventions have powerful push, but market pull never develops, so they never achieve a self-sustaining market position: ground-effect aircraft are a good example. They may possibly be precursors, but no more than that. Others, such as hovercraft, struggle hard but eventually do generate enough pull to continue. Yet others quickly develop strong market pull and never look back—cell phones are the supreme example. Finally, in the last phase of the sigmoidal curve the line flattens out again: there is less and less further improvement despite continuing investment. The innovation has become mature. Putting resources into selling and cost reduction by improved production methods brings higher returns than R&D spending. It is time to divert research resources to other projects at an earlier stage in their life cycle. Only a limited development programme continues, directed largely by the manufacturing function, and it concentrates on cost reduction.
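For readers who like a concrete picture, the S-shaped relationship of Fig. 11.1 can be sketched with a simple logistic function. The parameters below are hypothetical, chosen only to make the three phases visible; nothing in the book prescribes these values.

```python
import math

# Toy logistic model of the 'natural growth' curve: merit as a function of
# cumulative investment, with an early flat phase, rapid middle growth and
# late saturation.

def merit(investment: float, ceiling: float = 100.0,
          steepness: float = 1.0, midpoint: float = 5.0) -> float:
    """Logistic relationship between cumulative investment and merit."""
    return ceiling / (1.0 + math.exp(-steepness * (investment - midpoint)))

for i in range(11):
    # Early phase: little gain; middle: rapid improvement; late: flattening out.
    print(f"investment={i:2d}  merit={merit(i):6.1f}")
```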
Complications The S-curve considered thus far is grossly simplified; the real world is far more complicated. The first issue arises because a new product, process or service is rarely launched into a market not already being served by something else. The new PPS, beginning as a dissenting design, has to establish itself in the face of an existing one, which already has canonical status. The consequence is to delay the point at which the innovation can survive without continued invention push to keep it moving along. This increases the entry cost—the investment required before the newcomer is safely established. The result is to protect a canonical design against incoming innovations, even where these will ultimately give significant advantage. Think of horse-drawn vehicles being displaced by mechanical propulsion, and the pattern is clear. However, much more complex interactions between new and old designs are possible. The most spectacular type of disruptive innovation can lead to enforced rapid exit from a particular sector by well-managed companies, once dominant.6 The excessively accommodating suppliers are the most vulnerable, those who believe they must at all times be driven by the detail of what their customers say they want, and that marketing trumps all in business. The reality is more complex. This supposed ‘self-evident truth’ fails, because sudden step discontinuities can hit user demand at any time. In one elegant and convincing study Clay Christensen (b. 1952) looked at the computer hard disc business.7 He studied the computer industry’s transition from 200 mm diameter magnetic discs to the 134 mm size. The old larger discs were for some years the canonical design in minicomputers, then a valuable market for hard discs. At first, despite their modest price, the 134 mm discs were used only in low-cost and personal computers because of their generally inferior performance. However, as more of the smaller discs entered into the use environment their technical characteristics were rapidly improved by user feedback. Soon they were good enough, given their price advantage, to displace the larger ones until then almost universal. The market for 200 mm hard disc
assemblies disappeared abruptly, leaving their suppliers with intractable difficulties. This is a classic case of disruptive technology. Christensen also draws attention to the disruptive displacement of valves by transistors as the canonical electronic active device during the 1960s (see Chap. 6). The first germanium transistors were expensive, slow, and did not work well above room temperature. In most uses transistors could not compete with valves. Customers assured the valve makers that what they really wanted was not transistors but valves, miniaturised and with improved characteristics. However, transistors were well suited for a few niche applications: portable radios, hearing aids and some military uses. Manufactured for these markets, they improved rapidly, particularly after germanium was replaced by silicon, which overcame the problems at higher temperatures. This enabled transistors to move into more and more additional market sectors, driving valves out. The changeover was quite sudden. By the 1980s transistors were the canonical form except for a few niches where valves clung on, though not for long as things turned out. In some markets, new PPSs appear in rapid succession. Their individual S-curves overlap to give the appearance of continuous exponential growth. An example is the silicon chip industry, sixty years old and not mature yet. Companies in technologies like this concentrate their efforts on early and mid-phase R&D, with no late-phase development.
Sailing-Ship Effect
The account so far has assumed that the appearance of a new design has no impact on the characteristics of the old, which is rarely the case. But when a new, dissenting design bids to replace an existing canonical one, the older design may put on a new burst of defensive development: the sailing-ship effect.8 It is particularly likely to happen when the old product, process or service has been long mature, and is in the late, flat phase of the S-curve. It happens if, and only if, further development of the canonical form is moved out of the hands of typical late-phase developers, who will depend more on experience than deep insight. They must be replaced by people with a background typical of the early phases of design, who will
have a better scientific tool-kit and no excessive respect for the traditional approaches. This happened when steam ships threatened to take over from sail, hence the name. In the forty years from 1850 sailing-ships were transformed for the better. They were faster—as much as 30%—and needed far less labour to crew them, only a quarter of that required by earlier designs. Yet still in the end steam conquered all. A twenty-first-century example of sailing-ship effect is the present improvement in petrol (gasoline) engines for ordinary road cars. Under pressure from the dissenting designs of electric and hybrid vehicles, manufacturers seek ways to make conventional cars more efficient. Although useful advantages are gained by using more energy-efficient electric power steering and automatic transmission in place of the hydraulic versions which have been canonical hitherto, designers also hope for major reduction in fuel consumption and atmospheric pollution from engine improvements. Yet here too, in the end, sailing-ship effect will not save the internal combustion engine, not least because of its unfavourable environmental impact. We witness its slow terminal decline; it will go the way of the vacuum valve. The launch of something new is always a precarious business. Sailing-ship effect is either a blessing or a curse, depending on how it is used. It can give proprietors of an old technology additional valuable time to position themselves in the new. So making and selling hybrid cars could be a useful bridge to the production of electric cars. However, if initially promising advances are used as a justification for staying with the obsolescent technology, doomed to die, the decision will prove disastrous.
Notes
1. Maeda, J. (2006) The Laws of Simplicity MIT Press (Cambridge, USA).
2. From the eighteenth century, and even before, there was a conviction that there must be such a connection but nobody had managed to tie it down.
3. Christensen, C. (1992) The Innovator’s Challenge Thesis, Harvard Grad. Sch. of Bus. Admin. (Boston, USA).
4. Rogers, E. M. (2003) Diffusion of Innovations (5th ed.) Free Press (New York, USA).
5. de Solla-Price, D. (1963) Little Science, Big Science Columbia U. Press (New York, USA).
6. Christensen, C. (1997) The Innovator’s Dilemma Harvard Bus. Sch. Press (Boston, USA).
7. Christensen, C. (1992) The Innovator’s Challenge Thesis, Harvard Grad. Sch. of Bus. Admin. (Boston, USA).
8. The expression has been in use since the 1970s but its origin is unknown.
12 Deep Design
A persistent and common objection to technology is that the product, process or service (PPS) simply does not do what was promised when the user was persuaded to adopt it. Maybe it does not do what is claimed at all, or it does work acceptably for a while, but breaks down unreasonably early. If it fails can it be repaired, and will there be knock-on damage? What happens when it comes to the end of its useful life? Is getting rid of it a problem? Above all, is it socially acceptable or thought a menace? All of these problems need to be considered and resolved when the design is being put together. They must be foreseen and some provision made to avoid or manage them. Design is complete and exhaustive if, and only if, this is done, in which case it is called a deep design. Any innovation has seven hurdles to leap before it is fully successful and acceptable, seven environments it must be able to survive to be a deep design.1 • The first is in the mind of the inventor, who has to think up the new PPS, and begin to conceive its form, something different from anything done before. This is the intrapsychic environment, where the challenge is to form and articulate some conception of a solution.
• Creating, building or realising the imagined product, process or service successfully is the second hurdle: the construction environment. If you cannot create it, you cannot use or sell it.
• The adoption environment is the third. The innovation must be sold or otherwise transferred to a user or intermediary. Will people want the thing enough to pay for it? To deal with this issue exhaustively is a major challenge, the essence of marketing, and beyond the scope of this book.
• The fourth hurdle, the use environment, follows. Those who adopt the innovation begin to use it, and become aware of its strengths and limitations. They feed knowledge of these back to the designer. The importance of this feedback cannot be overstated.
• The failure environment is the fifth. Eventually the innovation must fail, everything does, but how soon and how often? Will there be collateral damage? Can it be repaired? If so how easily and at what cost?
• A product, process or service at the end of its life will be abandoned, destroyed where it is, removed or dismantled—the scrapping environment and sixth hurdle. Costs and benefits may arise. Will this cause a problem?
• The seventh and final challenge to designs is the social environment. Will society allow what is proposed, or is its impact as a whole likely to prove unacceptable?
So let us begin by thinking about the first two—the intrapsychic and the construction environments. Get beyond them and at least we will have something real in our hands.
Can Something Come from Nothing? How does a wholly new design come into being? The first challenge is simply to imagine the new product, process or service in some form that might conceivably work. Is it to be radically different from what has been seen before, or will it differ only in some particular? In this intrapsychic environment, the first of all steps that must be taken is to conceive some way, any way, that the design’s aim could conceivably be achieved.
In 1753, over the initials ‘C.M.’ in a brief article for The Scots’ Magazine, somebody—we are still not sure who—proposed the first electric telegraph. Twenty-six insulated wires were to run between transmitting and receiving sites, one marked with each letter of the alphabet. Close to the receiving end of each wire would be a piece of paper bearing its letter. Applying an electric charge to that specific wire at the transmitting end would cause its piece of paper to rise at the receiving end, due to electrostatic attraction, thus signalling the letter. So the electric telegraph had passed its first challenge: the idea existed in C.M.’s mind, and after this publication in many other minds too. How is something wholly new like this conceived? How is this trick done? Creativity has been extensively studied, yet the mental processes underlying designing from scratch retain perplexing features. Psychological research in the area is lively, but not yet conclusive.2 What has become clear is the crucial importance of visualisation as part of the creative process, and the need to break a conventional set of mind so as to perceive wholly new things.3 Plato had his ideas about creativity, believing it happens to us when the mind is ‘visited by one’s daemon’, a memorable phrase which does not help at all. John Stuart Mill in his System of Logic (1843) seemed to think creativity trivial: articulate the specification clearly enough and the solution will become obvious. Steve Jobs offered his explanation: ‘Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesise new things.’ The central mystery of creativity remains. More recent work emphasises the role of metaphors, which enable the thinker to mentally reorganise models and processes, bringing in novel elements to reposition the problem. While, as Jobs notes, this can feel unsurprising, it can also be accompanied by an ‘aha!’ moment of recognition, and if so creativity receives powerful reinforcement. The psychologist Mihaly Csikszentmihalyi talks about a state of mind of heightened, focused attention, but also a sense of continuity, or being on a roll, which he calls ‘Flow’.4 Another approach addresses the
‘inspired guessing’ (and hunch-testing) of heuristics. Thanks to Gerd Gigerenzer and others, heuristics has become a major research topic in psychology, with the promise of important applications.5 All invention of new designs begins in the minds of designers as phantasy—imagination or daydream. ‘Phantasy’ implies something different from ‘fantasy’. According to the Oxford English Dictionary6 the identical-sounding words should be regarded as quite separate: ‘fantasy’ implies ‘caprice, whim, fanciful invention’, while ‘phantasy’ suggests ‘imagination, visionary notion’. The forms phantasy constructions can take depend critically on the thoughts the designers are able to formulate, and therefore on their own intrapsychic phantasmal stock, their memories and recollections of things, which provides components from which phantasies are assembled. Intrapsychic phantasmal objects, as here, are neither ideas nor concepts, but explicit memories, or sometimes virtual memories that have been constructed. New designs come from the content of the mind, the intrapsychic phantasmal environment—where else? However, the usefulness of the content is enhanced by manipulating pre-existing intrapsychic entities in various ways. Certain psychological processes seem particularly common in the phantasmal environment,7 among them are: symbolic representation, wish fulfilment, having a phantasy of people in particular roles, ‘seeing’ entities mentally and then assembling or disassembling them, spatial and temporal displacement, interchange of spatial and temporal patterns and translation of phenomena between different sensory domains. Many of these processes can occur at an unconscious level of mental activity, as in dreams.8 The phantasmal stock of objects consists not only of things the person concerned has seen or experienced but may also include constructs—virtual artefacts. An example would be ‘black holes’ in astronomy. Impossible to see directly, they are predicted by cosmology and have observable indirect effects. Many people, therefore, are content to adopt them as intrapsychic objects.
Metaphor Plays Its Part The dictionary thinks of metaphor as primarily a figure of speech, in which a name is applied to something to which it is not properly applicable, yet an analogy exists. Common in everyday speech, many words start as metaphors, then become accepted in their own right with their origin forgotten. So, if we are feeling driven, we can sail away, on a steamship without sails, yet staying in touch because we contact our base thanks to the ship’s antennas. The italicised words are naturalised metaphors, and there are many, many more. However, both George Lakoff and Helen Haste have shown metaphor to be more significant than as a technical term in grammar.9,10 Manipulating intrapsychic entities to enable them to perform new roles exploits metaphor. It can involve the temporary substitution of the desired product, process or service by something already known that does not meet the requirements of the specification, yet seems cognate. Then the known becomes a metaphor for the unknown, and it may be possible to see how to progress from one to the other. So, a ship can be a metaphor for a flying vehicle. The ship floats in water because it is lighter than the water it displaces, so logically an airship must be lighter than the air it displaces—not an easy thing to do. In 1670, Francesco Lana de Terzi had the idea and suggested four hollow metal spheres containing a vacuum might serve as the lifting mechanism. If the metal were thin enough they would be lighter than the weight of the air they displace. In reality, the vehicle would not be feasible because of atmospheric pressure on the thin spheres, which would crush them. This germ of an idea could be made practicable only by filling the lifting spheres with a light gas, say hot air or hydrogen, which would counter the atmospheric pressure whilst adding only a little to the weight. They became balloons. A century after Lana de Terzi, the Montgolfier brothers, who owned a paper factory, built a person-carrying hot-air balloon, largely of paper. It ascended and flew. Henri Giffard (1825–1882) created the first air-ship in 1852. Just as for Lana de Terzi, ships on water were his suggestive metaphor. A ship has a narrow configuration to ease its passage through the water and steers by
using a rudder; the airship must do likewise, as Giffard’s did. Its hydrogen gas-bag was cylindrical, tapering to a point at either end and a rudder was the ship’s only control surface. Yet this alone does not exhaust the metaphor’s creative power. With a little more thought it is also clear that because the airship moves in a three-dimensional medium (as against the two-dimensional water surface ships float on) the airship will need near-horizontal flat surfaces which can be pivoted to a down or up gradient to steer the vessel up or down—elevators. Finally the ship metaphor suggests the need for a means of propulsion. Giffard used an air-screw driven by a steam engine. Until the coming of jet propulsion in the mid-twentieth century, all subsequent aircraft used airscrews driven by an engine—their canonical propulsion system. So a single metaphor has provided the basis of almost the whole airship design. This is unusual, and in other cases several different preceding intrapsychic entities act as metaphors for different aspects of a new design. Sometimes a helpful metaphor, which has led to innovation, is pursued too far and too literally. Thus sea ships became a metaphor leading to airships, but encouraged poor design of the aircraft control systems. On the flight deck of the big rigid airships of the 1930s stood a helmsman, with a polished wooden wheel in his hands, like a ship’s wheel, steering the craft in azimuth by moving the rudder planes. But an airship, unlike the ship at sea, also has elevator planes in its tail assembly, and must be steered in elevation too. How was it done? Reverting to the metaphor, simply by installing a second helmsman to control it, holding a wheel connected to the elevators. The two helmsmen stood side by side, with the airship’s captain behind them, giving orders, as on the bridge of a ship. It was clumsy, using three people to pilot the aircraft, and ill-suited to a vehicle travelling much faster than a ship at sea. A single pilot controlled later airships using a joystick and a foot-operated rudder-bar, controls evolved in airplanes, crucially without the ship metaphor as their background.
How Does Creativity Operate in Practice? The evolution of electric telegraph design is a good example, following a predictable path. Relevant metaphors for early telegraph designers could only have been writing and speech, so a phantasy of the likely use environment for long distance communication requires the machine to produce either speech sounds or alphanumeric symbols (letters, numbers and punctuation marks). A myth about ‘sympathetic needles’ pointing to letters mechanically had already become widespread. In his book Magiae Naturalis (1558) the cryptographer and playwright Giambattista della Porta described a method of communication based on the supposed ‘sympathy’ between two needles magnetised at the same time from a single magnet. He claimed moving one of the needles would induce an identical ‘sympathetic’ movement in the other. In his book he suggests that a distant friend ‘though he be confined by prison walls … can communicate … by means of two compass needles circumscribed with an alphabet.’ Utter nonsense, it became a suggestive metaphor and concentrated attention on more realistic ways of doing it. In the eighty-four years after C.M.’s first beginnings all but two of the early telegraph designs indicated letters directly, by uncovering or pointing to them. The telegraph pioneers saw this ‘direct reading’ principle as the over-riding challenge to a satisfactory telegraph design. In the United States, Samuel Morse was obliged by the great distances to use a single-wire line and could not then make his telegraph direct reading, so he resorted to the ‘Morse code’, patented in 1838. Even so, he thought coding a disadvantage because it did not fit the accepted metaphor, and at first provided a primitive automatic sender and a recording receiver, to allow decoding off-line, using a code book. His telegraphists found learning the Morse code no serious hurdle, and offline working was abandoned. All but one of the telegraph pioneers had got it quite wrong, aspiring to design telegraphs not requiring the operators to memorise a code. Yet within a couple of years all telegraphs in commercial services were coded, either Cooke’s version using one or two needles, or their Morse
counterpart—the two canonical designs. Why were early designers so consistently mistaken? As early as 1825, the first successful telegraph built by Schilling von Canstatt (1786–1837) shows the answer. He was in the Russian diplomatic service at the time, and coded messages were routine. So having cipher clerks good at coding and decoding messages available to him, he could easily form a phantasy about a coded telegraph. This would have given him confidence, so Schilling’s 1825 telegraph was a unique design, rejecting direct reading, but it did not alter the then-received view of the goal to be attained. An invention’s acceptability and prestige at the moment it appears is a social construct, not something in any way objective. Twenty years later telegraphy using a single wire plus Morse code proved a triumph, and lasted a long time. When a new graduate I worked with an elderly technician called Bill. As a young man he had routinely received telegrams sent directly across the Indian Ocean, using a mirror galvanometer. He sat on a stool inside a large light-tight wooden box, his head immobilised by a headrest. Just above him was a shelf with a candle, and a couple of metres away, at the far end of the box, was the Kelvin galvanometer. As Bill told it to me, the trick was to slide the candle along the shelf until its reflection in the galvanometer’s mirror fell directly into his eye. When signals arrived the light he saw blinked in Morse code. Letter by letter he called out the message to a note-taker outside the box, which he did for about twenty minutes at a time, after which Bill handed over his job to the note taker, took a well-earned twenty-minute break, then moved into the note taker’s seat, and so on, for the rest of his shift. Thus were remote parts of the British Empire held together, even into the twentieth century. Conscious thought models cannot wholly explain the creative behaviour of designers, as the way the Victorian telegraph’s history unfolded shows clearly. Success depends on an extensive stock of internal phantasmal objects, together with facility in assembling and transforming them. Individual designers vary widely in both respects.
Can It Be Made or Put Together? The second hurdle innovation must clear is to create the product, process or service successfully: the construction environment. If you cannot make it, you cannot use or sell it. Just before his death in 1837 William IV signed Cooke and Wheatstone’s telegraph patent. By 1839 a commercial line was working along the Great Western Railway from Paddington to West Drayton, and later Slough. The electric telegraph had survived the construction environment. In 1844 Samuel Morse and Alfred Vail repeated this success in the United States, using their different and superior telegraph. They were fortunate. Many designs fall at this crucial second hurdle, bright ideas incapable of being successfully built. Early aircraft were at times like this: great concepts but never to fly. A remarkable example is the Caproni Ca60 Noviplano airliner of 1921. Count Gianni Caproni, a qualified civil engineer whose doctorate was in electrical engineering, founded an aircraft construction company in 1911. His company successfully built military aircraft during World War I. At the end of the war he dreamed of a transatlantic airliner to carry 100 or 150 people. At the low in-flight speeds then practicable, the wing area needed to provide enough lift was too much for the wood and doped-fabric airframes in use, even with a triplane. However Caproni’s firm had access to sets of well-made triplane wings, left over from a wartime bomber project. He realised he could achieve the wing area needed to sustain the large aircraft he had in mind by using multiple sets of wings. Designed as a flying-boat, it had three sets of triplane wings, one at the bow, one amidships and one astern. The aircraft looked quite beautiful, in its eccentric way, and calculations showed it should carry a hundred people with enough lift to get off the water. Sadly, there was no wind-tunnel for testing prior to construction and computational resources were derisory by present standards. On its first flight the Noviplano proved virtually uncontrollable. It achieved an altitude of eight metres, but crashed destructively, although the pilot survived. The project was abandoned.
Failures like this are common, but their significance can be overstated. New designs are launched into the construction environment every day, and all but a fortunate few have problems. Alan Kay once suggested: ‘If you don’t fail at least 90 per cent of the time, you’re not aiming high enough.’ And in fact many failed designs will ultimately make it, after a certain amount of massaging. The origins of the nuclear power industry are an example. Assembled under the supervision of Enrico Fermi, the Chicago Pile-1 was the world’s first nuclear reactor, built on a squash court at the University of Chicago. The first self-sustaining chain reaction was initiated on December 2nd 1942, after much tinkering. The forerunner of all subsequent reactors used in nuclear power generation around the world, the Chicago ‘pile’ had proved its feasibility. However things did not go smoothly. When Fermi designed the ‘pile’ he miscalculated, unaware of the neutron losses due to foreign bodies and dirt, trapped within. When the pile was set to work there were only slight signs of a sustained nuclear chain reaction at first. With the whole project on a knife-edge, for a couple of days Fermi sweated it out. ‘If only I’d made it twice as big,’ he was heard to say.11 Slowly the reaction built up, until it neared design levels. The pile design had survived the construction environment. The computer had a similar history.12 When J. Presper Eckert and John Mauchly of the University of Pennsylvania announced the success of ENIAC in 1946, the first general purpose digital computer had come through the construction environment, but the passage was fraught. ENIAC was assembled from 18,000 valves and numerous other components, all originally manufactured for use in radio. With a length of 30 m, it consumed 140 kW of power and weighed 32 tonnes,13 yet the computer was less powerful than programmable calculators on sale today— the cumulative product of subsequent design revolutions, new techniques and much engineering development.
The Role of the Model To test the feasibility of a design it is necessary to represent it somehow, to deduce the likely characteristics of the design when instantiated, and decide whether it could conform to the specification. All such representations are a kind of model—a word understood in this sense since before Shakespeare used it in his Henry IV Part II (Act I, Scene III). Models may represent many aspects of a design, some visual but others that are not. Form and function may be modelled, in the same model or in complementary ones. There may be many models in a design exercise. Sometimes an already-built example of the thing itself may serve as the model for one proposed—the obvious tactic if something is replicated, or has only small changes. Viking longship builders did not draw or model ships they meant to build, but replicated existing ones. As the source of dimensions of the ship components, and as templates for their assembly, existing ships served well. Even modest dimensional variations would not be too difficult, but the method is less suited to radical change. This explains why the first longships with keels took in water so badly. Often, though, something existing cannot serve as a model because nothing is close enough to what is wanted. For example, although details of an existing building are used as models for the comparable parts of a new one, buildings are rarely exactly replicated.
Abstract Models
For designers, abstract models have always been important. The commonest were pictures and technical drawings, for which elaborate conventions of representation evolved. For a couple of generations the lay person’s abiding image of a designer was of somebody working at a large drawing board, busily deploying set-square, compasses and a well-sharpened pencil on a ‘double elephant’ sheet of cartridge paper. These old-time designer-draughtsmen developed great skills in projecting complex three-dimensional objects onto a flat sheet of paper. They also
evolved geometrical constructions performed on the board, which replaced many design calculations. The designer-draughtsmen’s geometrical constructions solved mathematical problems arising in design, and elegantly for their day. That style no longer appeals, yet mathematical models remain the key to design, more obviously than ever before. Using cheap computing power, software for design has blossomed. Specific packages aim at a particular profession or interest. They facilitate two- and three-dimensional drawing, rendering, depiction and all the complementary calculations. Some offer picture animation, previously unheard-of in modelling, but empowering designers in new ways. A computer executing ten million instructions per second, which might have cost $4,000,000 near the middle of the last century, can be bought for $400 today, or less. From multiple cabinets filling a large air-conditioned room the computer has transformed itself into a tablet, held in the hand. So now, with computer power costing so little, design is different. We never touch a pencil, instead working with touch-screen, mouse (or equivalent) and software on screens. Calculations are done for us, and the old ways are barely a memory. The computer’s great versatility stems from its being a truly general-purpose digital machine. Loaded with the right software it can perform many modelling roles, the number seemingly unlimited. It can, for example, simulate the functioning of a PPS, without in any way illustrating what the finished thing will look like. At the other extreme it can model a product or process in space, showing exactly how it will appear in three dimensions, yet have nothing to offer about its functioning. Or it can do both. In the evolution of a design, functional models are the most important of all. When the first stab at a design is complete, it is highly desirable to build, or computer simulate, a functional model, to check things work as they should. Let us look at functional models. Begin with products: for them, functional models are of three types, usually built successively:
• Laboratory models, hand built as one-offs without constraint on the materials, components or tools, so long as it works. In software this might be an A model.
• Prototypes are hand assembled from the right materials and parts but not using production tools or methods.
• Preproduction is built from the right parts using production tools, but in small volume—perhaps a software B model.
After all this, supplemented by careful testing, production can be undertaken with some confidence. Making a choice between building physical models and computer simulation can sometimes be difficult. Although the simulation is cheaper, it is also a step more distant from the thing itself. Error in building the software simulation is possible, and can be costly. To physically build a model is always the lower risk, avoiding a transition between the real and cyber worlds, where a simulation must exist. Useful though it is, this list of functional model types has no applicability at all in some situations. Because the cost puts it out of the question, there are no laboratory models of suspension bridges or tower buildings, although particular components may be subjected to functional modelling. It is impossible to build a laboratory model of an aircraft, so invariably real-world testing starts with a prototype. However, even in these cases, much more functional modelling has become possible using computer simulation, and its use has rapidly increased for ordinary products.
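As an illustration of what a computer-simulated functional model can look like, here is a toy sketch, not drawn from the book and with invented numbers: a first-order system’s step response is simulated and checked against a hypothetical specification before anything physical is built.

```python
# Toy functional model: simulate the step response of a first-order system
# and check it against a hypothetical specification.

def step_response(time_constant: float, duration: float, dt: float = 0.01) -> list[float]:
    """Euler integration of y' = (1 - y) / time_constant from y(0) = 0."""
    y, out = 0.0, []
    for _ in range(int(duration / dt)):
        y += dt * (1.0 - y) / time_constant
        out.append(y)
    return out

def meets_spec(response: list[float], target: float = 0.95) -> bool:
    """Hypothetical spec: output must reach 95% of full scale by the end of the run."""
    return response[-1] >= target

response = step_response(time_constant=0.5, duration=2.0)
print("Design meets the response-time spec:", meets_spec(response))
```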
The Visuals Space models, by contrast, represent the product, process or service as it actually looks, and may be either true to size or scaled, either up or down. They permit visualisation and show relationship to the environment. Many traditional design drawings, renderings and depictions are in this class, as are the product of 2D and 3D computer-aided design packages. If built in the solid, at full scale yet still not functional, they are called mock-ups. Designers of airliners often build full-scale space models of parts of a proposed aircraft to support marketing, usually choosing to model the flight deck so a crew can try it out. The galley area is also modelled for
similar reasons. Few are more influential in choice of new aircraft than the top cabin crew. Physical space models of large products, such as aircraft, are expensive to build, so this costly practice is increasingly replaced by ‘walk through’ 3D simulations, sometimes using virtual reality. Scale models can also be functional, used for wind-tunnel testing of aircraft fuselage, wing and other components. The Wright brothers’ 1903 success was helped by using a simple bicycle-powered wind-tunnel to aid their wing design. Similarly ship hull designs are tested by towing models in water tanks. Here too falling cost has resulted in such methods being eclipsed by computer modelling.
Patterns of Interconnection Topological models represent the way in which constituent parts of a product, process or service are interrelated or connected. An electronic circuit diagram is an archetypal topological model.14 Also typical of this class are flow models in chemical engineering or hydraulic systems, as well as models of road networks used to assist traffic management. It is of the essence of all topological diagrams that they appear to show the component parts laid out in a certain pattern, yet in fact when the real thing is constructed the parts may not be in this spatial relationship at all. What is being represented, and with precision, is the pattern of interconnection between them, but not their location. The official map of the London Underground, a classic of twentieth century design, is a pure topological model. On display in all the stations, it shows with great clarity how to get from one station to another using the rail network, which line to choose and where to change lines, if need be. However, the positions of the stations on this map correspond only roughly to their actual geographical location, nor do the map distances between them signify actual walking distances.
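A topological model is easy to express in software as a graph of connections. The sketch below is purely illustrative (the ‘stations’ are invented): only the pattern of interconnection is recorded, nothing about physical location, yet it is enough to answer the question the Underground map answers, namely how to get from one node to another.

```python
from collections import deque

# A minimal topological model: which nodes connect to which, and nothing else.
# Node names are invented purely for illustration.
network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def route(start: str, goal: str) -> list[str]:
    """Breadth-first search for a path through the interconnection pattern."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in network[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(route("A", "E"))  # e.g. ['A', 'B', 'D', 'E']: which way to go, not where anything is
```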
Topology to Reality: The Preternatural Chip
In 1952 Geoffrey Dummer (1909–2002) proposed the idea of fully integrated electronic circuits formed directly in a crystal of semiconductor. It seemed like science fiction at the time but was instantiated by Jack Kilby (1923–2005) only six years later, and brought to fully viable form by Robert Noyce (1927–1990) in 1960. The Kilby precursor formed all the components in a single semiconductor crystal but used external gold wires to interconnect them. By contrast, Noyce did the interconnection with metal tracks on the chip surface. The complexity of the circuits so formed then launched into exponential growth, the number of transistors on a chip of a certain size doubling every twenty months or so. As they increase in number they also get faster in action, by around 30% in the same time. So, as well as being more complex, new systems go quicker than those that came before.15 Microcircuits are formed on the surface of a ‘chip’ of mono-crystalline silicon by photographic techniques. The silicon surface is oxidised then ‘masked’ using photolithography to cut away the oxide selectively. It can then be chemically processed by diffusing chemicals—‘dopants’—into the parts of the surface not masked. The depth of dopant penetration is set by time and temperature.16 It all derives from photolithography, a technique originally aimed at printing photographs on paper.17 An image is formed on a flat metal surface using a light-sensitive coating, originally glue sensitised with potassium dichromate. The coating is washed away in either water or a solvent if not exposed, but becomes insoluble where light has fallen on it, a permanent pattern adhering to the plate. The rest is exposed to chemical etching. The plate is then inked, for printing on paper. This venerable process is not too difficult to adapt for chip manufacture. Replace the metal plate by silicon with a layer of oxide on its surface, then apply a light-sensitive coating, expose it to light patterned as your design requires, then etch to form the desired pattern in the oxide layer, just as the metal plate was etched in the earlier process. The thin, tough oxide pattern left can mask the underlying silicon during subsequent high temperature processing. All that is needed now is to expose the
masked silicon to dopant vapour in a furnace, and you have the means of making functioning chips. Almost unbelievably, with this technique it is possible to form and interconnect electronic circuits, like computer gates (each composed of half a dozen transistors or so), directly in the surface of the crystal. Using photo-reduction of the initial light pattern, they can even be made small enough to form millions on a single chip. Since 1960 the limit to the number of transistors on a chip has increased about 50 million times, while the cost of forming each transistor has fallen to the point where it roughly equates to the cost of printing a single letter on a single page of a single newspaper. In all history no other technique has developed remotely as fast or as far as this one. It is now feasible to build artefacts of biological complexity on an industrial scale. From this, the whole present-day metamorphosis of IT and digital electronics resulted—internet, cell phones, tablet computers, Google, Skype and all. However, the technique for making the chips is one thing, and the ability to design their use is another. At the beginning of microcircuit history the negatives of the pattern required for photolithography were drawn by hand, then photo-reduced. Moore’s Law—doubling in complexity every twenty months—meant this soon became impracticable. The configuration of millions of gates on a chip is ‘more than a headful’ for any design team. Designing at this complexity proved a great challenge. Silicon chip designers seized on a category of topological models called hardware description languages (HDLs); of these, Verilog and VHDL have been widely used. The solution to designing the million-gate chip emerged. HDLs are similar to software languages; indeed this was the metaphor from which they derived. They have a vocabulary corresponding to various ‘cells’—groupings of gates to form structures with distinct functions. Already proven to work on a chip created using a specified set of techniques, they have assured mutual compatibility. Proven subsystems, they are used on a ‘mix and match’ basis. Hardware description languages also have a ‘syntax’ allowing the cells to be assembled only in ways sure to result in a functioning design.
The designer is now free to think at a higher level, choosing and assembling cells, not individual gates, so making the design problem tractable once again. Also HDLs have associated software packages which enable them to simulate the properties of a system once designed, thus confirming whether or not it has the correct functionality. Another software package then converts the HDL text into a physical model—a layout for the chip and a mask set which will be used in its manufacture. So the HDL creates both functional and space models from a topological starting point.
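What follows is not real Verilog or VHDL, but a toy Python analogue of the idea, offered only as a sketch: proven ‘cells’ with known functions are assembled according to a purely topological description (a netlist), and the result is then simulated to confirm that it behaves as intended. All names are invented for illustration.

```python
# Toy analogue of an HDL workflow: compose proven cells from a netlist,
# then simulate the assembly to confirm its function (here, a half-adder).

CELLS = {
    "AND": lambda a, b: a & b,
    "XOR": lambda a, b: a ^ b,
}

# Purely topological description: (output signal, cell type, input signals).
HALF_ADDER = [
    ("sum",   "XOR", ("a", "b")),
    ("carry", "AND", ("a", "b")),
]

def simulate(netlist, inputs):
    """Evaluate every cell in the netlist and return all signal values."""
    signals = dict(inputs)
    for out, cell, (x, y) in netlist:
        signals[out] = CELLS[cell](signals[x], signals[y])
    return signals

for a in (0, 1):
    for b in (0, 1):
        result = simulate(HALF_ADDER, {"a": a, "b": b})
        print(a, b, "->", result["sum"], result["carry"])
```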
The Confidence to Go On
Once rigorous modelling of both function and structure of the innovation has been successful, confidence in the design will be strong. For a high volume product there is still much to do, though. The manufacturing facility must be set up and optimised, which will involve the solution of many problems, not least among them designing the production process. Even so, by then the product will seem likely to survive the production environment, and can be prepared for launch in the market. This chapter has concentrated its arguments on products, but processes are not too different from low volume products. Services are another matter. The system will probably be a one-off, and attention needs to concentrate on means to reduce risk of design errors. With little hardware, and that little probably bought ‘off the shelf’ with specifications well-defined, hardware risk is minimal. By contrast the demands on software are exacting. The temptation to commission the writing of bespoke software is powerful. It is often done in an attempt to exactly mimic an existing system or service, with which the users are familiar—in short, to create a transitional model of the system. All experience says this is an extremely high risk strategy. Existing much-used and well-tested software is far safer, even in those cases when the integration of packages from different sources threatens difficulties in bringing them together.
It is always safer to devote available software expertise to resolving such compatibility problems as they reveal themselves, rather than to writing wholly new software. If there is an echo of ‘brute force and ignorance’ here, so be it. There are times, however, when bespoke software is genuinely unavoidable. Conditions of contract become critical, and not least the inclusion of acceptable get-out clauses. People have more often had their career expectations radically curtailed by signing large contracts for bespoke software than by any other class of human activity, except perhaps duelling.18,19
Notes
1. Gosling, W. (2001) ‘The luckiest number of all’ The Guardian, Dec. 20 (UK).
2. Sternberg, R. (1999) Handbook of Creativity Cambridge U. Press (Cambridge, UK).
3. Gooding, D. (1990) Experiment and the Making of Meaning Kluwer (Dordrecht, Netherlands).
4. Csikszentmihalyi, M. (2013) Creativity Harper Perennial (New York, USA).
5. Gigerenzer, G. (1999) Simple Heuristics That Make Us Smart Oxford U. Press (Oxford, UK); (2008) Gut Feelings: Short Cuts to Better Decision Making Penguin Books (London, UK).
6. 2nd ed. V 4.0.
7. O’Doherty, E. (1963) ‘Psychological aspects of the creative act’ in Conference on Design Methods Pergamon (Oxford, UK).
8. Wiseman, R. (2009) 59 Seconds Pan Books (London, UK).
9. Lakoff, G. and Johnson, M. (1980) Metaphors We Live By U. of Chicago Press (Chicago, USA).
10. Haste, H. (1993) ‘Dinosaur as metaphor’ Modern Geology 18, 347–368; (1994) The Sexual Metaphor Harvard U. Press (Cambridge, USA).
11. G. P. Thomson, private communication.
12. Isaacson, W. (2014) The Innovators Kindle Books (online).
13. The tale that there was a 27 ton laptop version is wholly apocryphal.
14. Gosling, W. (2000) Radio Spectrum Conservation Newnes (Oxford, England).
15. For Gordon E. Moore’s own take see: http://www.youtube.com/watch?v=V-pk-A8IqE4.
16. See: www.intel.com/Assets/PDF/General/308301003.pdf.
17. This old process, first used by Nicephore Niepce in the 1820s, was greatly improved by Henry Fox-Talbot a decade later.
18. Dex, R. (2013) ‘BBC suspends chief technology officer and ends digital plan—after 100 m bill’ The Independent May 24th (London, UK).
19. Fellows, T. (2014) ‘How to avoid getting sucked into the black hole of software development’ E&T 9, 7, p. 25.
13 A Hazardous Business
In the evolution of many products, processes and services, from the nineteenth century telegraph to the modern computer, there have been innumerable examples of designers making decisions which look unwise in hindsight, and even some leading to disaster. Evidently designing, because partly intuitive, is a risky business. Risk is unavoidable, but what can be done to reduce it to within acceptable limits? To clarify this problem a brief trip into classical control theory may help. Control theory began with Adam Smith (1723–1790) looking at how markets worked. In the nineteenth century, James Clerk Maxwell (1831–1879), Alexander Lyapunov (1857–1918) and others made major contributions, while in the twentieth century, Harry Nyquist (1889–1976) and Lev Pontryagin (1908–1988) were important.1,2 The classical theory’s model of a control process envisaged a control action taken on an object, followed by the feeding back to the controller of some measure of the effect that action had produced. This is called closed-loop control. One helpful metaphor is that the controller is trying to move the object towards a target. After each action, the residual distance between object and target is fed back to the controller. Knowing this, the controller can decide the scale and direction of the next control action, in order to reduce the residual error. Think about steering a car. This involves
the driver comparing the car’s position on the road with where it should be, then taking a control action. If the car is too far to the left the steering wheel is turned a little clockwise, if too far to the right it is turned the other way. The driver sees the resulting change, and steers again if needed—all running on implicit memory with little conscious thought. In principle closed-loop control could make the final location error vanish, though there are many reasons why it probably will not quite. Closed-loop control appears in many aspects of life, from regulating the temperature of our bodies to not falling off when riding a bicycle. Adam Smith believed (correctly, with reservations) that markets worked through closed-loop control, adjusting the supply of goods to the demand by using price as the feedback mechanism. In a market economy, consumers control producers primarily by the magnitude of the profit they grant them, through buying or not buying their offerings. In the past it was widely believed that command economies could devote money which would have gone on capitalist profit to other purposes, such as reducing prices. Sadly, it is not so. Without profit being taken consumers lose control of the producers—the loop is not closed. Rational producers will then conduct their affairs for their own benefit, not the consumer’s. The resulting increase in production costs exceeds the profit that would usually have been taken in a market economy. Everything is different, because there is no feedback. This is also a recognisable category of control situation, called open-loop control. Firing a rifle is an example: a sharpshooter carefully aligns the rifle on the target, but once she has squeezed the trigger the bullet will follow a trajectory she can no longer influence. Aiming predetermines everything. A miss will happen if the alignment of the rifle is in error when firing, if the target moves after firing, or if some influence, such as a cross wind, modifies the trajectory in an unforeseen way during the flight of the bullet. For a ‘bull’s eye’ three things are needed: perfect information during sighting, a stationary target and no unpredicted extraneous influences during the bullet’s travel.
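A minimal sketch, with invented numbers, may make the contrast concrete: under closed-loop control each step corrects a fraction of the remaining error, so the result converges on the target, while under open-loop control the initial aim plus any unforeseen disturbance decides everything.

```python
# Closed-loop versus open-loop control, in miniature.

def closed_loop(target: float, position: float, gain: float = 0.5, steps: int = 10) -> float:
    for _ in range(steps):
        error = target - position   # feedback: how far off are we?
        position += gain * error    # control action proportional to the residual error
    return position

def open_loop(aim: float, disturbance: float) -> float:
    # No feedback: the initial aim plus any unforeseen disturbance decides the outcome.
    return aim + disturbance

print(closed_loop(target=10.0, position=0.0))   # converges very close to 10
print(open_loop(aim=10.0, disturbance=-2.0))    # misses by the full disturbance
```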
Control in Design
How is the design process controlled? Evidently when the design first appears, in its early, invention-push phase, the design process is under open-loop control. This is the period of maximum vulnerability to error, since actions taken at this time, in the open-loop domain, have no intrinsic method of correction. Later, when designers receive feedback, by testing functional models of the product, process or service (PPS), or from users through market pull, the control loop closes. The design then converges towards its target, and the risk of failure gets less. So what might be done to reduce the risk of error in the early stages? The first and most obvious action is to minimise the number of innovations on which the success of the design depends. The first cellular mobile phone trials conducted in Chicago in 1978 by Motorola were necessarily radical in the system concept, but to the maximum degree they used tried and reliable radio equipment, already manufactured by the company with only minimal modification where unavoidable. This kept the risk down. So the problems they had to resolve in the trials were not made worse by the need to struggle with untried equipment. The system was a great success, leading to the design of dedicated optimal hardware, and the highly functional cell phone systems we have today. Simple arithmetic covers this point. The probability of success of the whole design is the success probability of each of the necessary individual innovations all multiplied together. If only one innovation is needed, and it has the high success probability of, say, 97%, that is also the overall success probability. However, with four such 97% innovations, all essential to success, the chance of a happy outcome would drop to just over 88%, seriously worrying, while sixteen would bring the chance down to about 61%, an unacceptable figure. Exuberant designers incorporate all the good ideas they can think of in each new design. They have an ever-optimistic temperament, and their design careers can sometimes be brilliant, but are commonly quite short. At the other extreme are hesitant designers, so afraid of failure that they never innovate. They are not sought-after. Between these extremes, the best designers judiciously mingle courage where it is essential with
caution everywhere else. A partnership can do the trick, like the audacity of I. K. Brunel linked to the natural caution of Daniel Gooch. When needless innovation has been cut to a minimum, what more can be done to reduce failure risk? Since initially we are in an open-loop control situation, three things are needed for guaranteed success: perfect information on which to base decisions, absence of unforeseen perturbing factors while it is all happening, and a target that does not move. Hardly ever are all these in place.
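The ‘simple arithmetic’ invoked earlier in this section can be written out as a short calculation; the 97% figure is the illustrative value used above.

```python
# Overall probability of success is the product of the individual probabilities.

def overall_success(probabilities):
    result = 1.0
    for p in probabilities:
        result *= p
    return result

print(f"{overall_success([0.97] * 1):.1%}")   # one innovation: 97.0%
print(f"{overall_success([0.97] * 4):.1%}")   # four: just over 88%
print(f"{overall_success([0.97] * 16):.1%}")  # sixteen: about 61%
```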
Perfect Information?
Often design mistakes are made because of misinformation at the outset. Sometimes hard information is simply not available, leading to ‘self-evident’ assumptions that prove wrong. Early designers of aircraft were handicapped by lack of knowledge about the atmosphere as a medium for travel, particularly in bad weather. Inadequate structural designs resulted. There are many more examples where lack of knowledge of the use environment undermined the success of radical designs. In the nineteenth century telegraph designers, ill-informed about the ability of operators to learn code, saw their challenge as producing direct-reading telegraphs useable by any literate person. This proved expensive, slow and unnecessary. Cooke and Morse both built successful coded telegraphs allowing a single wire between sender and receiver. Commercially essential for long-distance telegraphy and later radio, coding presented few problems to telegraphists. Other causes of early design error come from the designer’s situation—perturbing factors and a moving design target. The world changes, people’s perceptions of things alter, and techniques evolve. When Armstrong designed his FM broadcasting system it was reasonable for him to assume a strong radio signal coming from a roof-mounted antenna. By the time it was being implemented, television was taking over as the primary source of in-house entertainment and radio was being increasingly relegated to a portable role, in which FM proved adequate but less successful than hoped.
Often the designer also faces the issue of a moving target. The specification for the design comes from the sponsor, notoriously prone to a change of mind before it is done. A promising design can then entirely miss a revised target, a principal cause of failure in many large software projects. Sometimes action by the designers themselves, changing their own objectives, may precipitate such an outcome—the Charles Babbage story. He began design of the ‘difference engine’ in 1822, and obtained government financial support for the project. By 1837, influenced by Lady Lovelace, he had lost enthusiasm for his original concept and proposed a new machine, much better in principle. The ‘analytical engine’ was designed to have many characteristics of our present computers. However this change of target, after the promises of great things from the difference engine, shook the confidence of Her Majesty’s Government, who stopped the funding.

At times a misperception of the rate of change of technology precipitates failure. In the late 1950s valves (tubes) were beginning to give place to transistors as the canonical form for active electronic devices. However customers told the Radio Corporation of America (RCA), then one of the largest US electronics companies, that what they wanted was not transistors—too much of a leap in the dark—but better, smaller, more advanced valves. RCA tried a radical redesign of the older device, intending to compete with the newcomer. Their innovation was called the nuvistor. Unwittingly exploiting the sailing-ship effect, they aimed to challenge transistors in many applications. However as the development of the nuvistor continued the transistor also went on improving, and equipment designers learned how to use it more effectively. Soon it was evident that the nuvistor could only be a dissenting design, in a niche market. For a few years it held a modest market share, retreating further as transistor specifications powered ahead. Wounded by the nuvistor debacle, RCA was taken over by GE in 1986 and subsequently broken up.

The UK Plessey Company had long wished to make its own valves, irked by dependency on specialist manufacturers. In 1960 it began to develop a transistor-sized valve quite different in geometry from the nuvistor. Their Technical Director, Geoffrey Gaut (1909–1992), aborted the project at the prototype stage, convinced that the future was with
transistors. Two of the most important skills of technical direction are getting projects started and knowing when to kill them off; Gaut had both.
Tragedy Following a Moving Target

For large projects, particularly when State sponsored, a change in objectives can be politically inspired, in response to pressure from special interests or public opinion. The British R101 airship is a prime example. Intended as the UK flagship of the air, it crashed on its maiden overseas voyage in October 1930.

The R101 was at first designed assuming propulsion by seven petrol engines, weighing a little over half a tonne each. A decision was taken, according to Nevil Shute3 under political pressure, to switch to diesel engines, weighing about a tonne. It was believed that a move to oil engines would reduce fire risk, an understandable public obsession where hydrogen-lifted ships were concerned. Working out with the awful inevitability of a Greek tragedy, this attempt at fire-risk reduction itself gave the ship a far worse vulnerability, a loss of structural integrity.

A new purpose-built lightweight six-cylinder diesel engine was specified for the ship, but development went too slowly for deadlines to be met, and nothing could be done to accelerate it. The date of the maiden voyage was already fixed immovably. Anxiety about the availability of the new engine became acute. Finally, in an atmosphere of crisis, an existing eight-cylinder design was adopted instead. Originally meant for railway use, it had not been developed under aircraft-style weight control. An early job I had was in the aircraft industry, with de Havilland (now British Aerospace). In the design office a banner on the wall carried the words ‘Simplicate and add more lightness!’ That about sums it up; aviation weight control needs to be draconian. Designing for low weight is a state of mind. Designers of railway engines are not under such severe pressure. The new diesels weighed two tonnes each, four times heavier than the petrol engines for only about 10% more power.

The R101, as it had at first been conceived, was now seriously short of lift—around eleven tonnes from the heavier engines alone, and more from weight over-runs elsewhere, rumoured due to relaxed weight
control. To recover the situation an extension in the airship’s length was tried, by cutting the ship in half and building in an additional new bay, departing from the original structural design. Involving some re-skinning of the ship, the changes increased the number of internal lift-producing hydrogen ballonets. Even these late structural modifications did not entirely solve the lift problem, particularly at the beginning of the long maiden voyage, when carrying a maximum weight of fuel and stores. At the start of its scheduled flight to India there was still a shortfall of hydrogen lift, due to excess weight. This led to over-reliance on aerodynamic lift, prejudicing a previously sound design.

The tragedy ground inexorably on, with increasing forebodings among the engineers. Nets retaining the ballonets were let out to enlarge them shortly before take-off, Shute claimed, increasing lift a little. They chafed against each other, so he alleged, leading to hydrogen leakage and the final, fatal crash in France. This is possible, but the required replacement of the external ‘skin’ of the airship had been much more extensive than first thought, and Shute believed its strength was compromised by this. Shortly after take-off a split in the forward outer cover is known to have occurred, which reduced speed, and hence aerodynamic lift, not long before the crash. Whatever the cause, the R101 was unable to maintain height, and crashed into a hillside near Beauvais. After a moderate ground impact the ship burned. The cause of ignition is uncertain.

The psychological and political consequences of the disaster ended the UK airship programme, even though the competing R100 had made a successful Atlantic crossing, albeit with some worrying moments. Nevil Shute was a member of the R100 design team. Later he withdrew some of his more negative statements about the R101’s design.
The Prospect of Failure

Often the initial information used in design is imperfect, there are perturbing factors while the design is being completed, and the target aimed at moves, all of which was true of the R101. In the initial invention-push domain of innovation the failure rate is high because the design is under open-loop control. At various points in the process people have to resort
to doxastic empiricism, partly intuitive and not reducible to an algorithm. Judgments made in good faith can be mistaken, an inevitable cause of design risk. In the real world no design can ever be entirely without risk in its early stages. From whatever cause, all risk will be minimised by closing the design control loop as soon as possible. User reaction is best for this, so the sooner the innovation moves into the use environment the better. However a variety of factors may delay the transition, so increasingly the design loop is closed earlier, through functional modelling of the design—prototypes, or nowadays computer simulation.

One of the ways a designer can achieve greatness is through skill in recovering from design failures. Sometimes a partnership achieves wonders. One partner is creative in a free way, and the other subsequently modifies the designs to make them work. When I. K. Brunel was creating the Great Western Railway, Daniel Gooch (1816–1889), his twenty-one-year-old deputy, arranged to have sight of all the great engineer’s drawings before they went to the workshops. Minor errors in the overall track, signalling and system design were corrected. The design of bridges was checked. Brunel’s locomotive designs were revamped, since Gooch knew he was not strong in this area. Thus the great man’s reputation was kept unblemished. Whether he knew what was going on is an interesting question.

A highly competent engineer, Daniel Gooch was the archetypal ‘safe pair of hands’. Having learned his profession under George Stephenson (1781–1848), in 1837 he was recruited to the Great Western Railway by Brunel. In 1865, six years after Brunel’s death, leaving the GWR as an employee (though remaining a Director), he became Chief Engineer of the Telegraph Construction Company. His task was to lay a telegraph cable across the Atlantic.4 Although without previous experience of cable laying, Gooch was the greatest project manager of nineteenth-century England. Charles de Sauty, a most able telegraph engineer, supported him.

Even so, there was widespread pessimism about the task Gooch took on. In 1858 the previous transatlantic cable had failed within a few days. The ‘electrician’ in charge was the aptly named Wildman Whitehouse (1816–90), whose sole qualification was as a surgeon. De Sauty served under Whitehouse but disagreed with him and was banished to the American end of the link. With no power and little influence, he watched while the ‘electrician’ destroyed the cable from the British end.
Whitehouse was determined to use his own patented automatic telegraph receiving equipment. Using Kelvin’s mirror galvanometer, signals were received, but were far too weak to operate the Whitehouse automatic receiver. The pulses of current received were not only weak but also distorted. Michael Faraday and others correctly attributed this to the reactive effect of the extreme length of the cable. If this was right—and subsequent events showed that it was—there could be no ‘quick fix’ of the problem. The cable could operate but only at a much lower rate of transmission of messages than was practicable on shorter land lines. It was claimed that some traffic actually was passed over the cable but at a very slow rate. Later some doubt was cast on this claim but it seems likely. The earning capacity of the cable was closely related to its transmission speed, so low speed looked like a serious commercial set-back.

Against Kelvin and de Sauty’s advice, Whitehouse raised the voltage applied to the cable to try to get more current through it. He used enormous induction coils, a metre and a half long, to generate high voltages. At an estimated 2000 volts the cable insulation broke down irretrievably, and an investment of £350,000 was lost (tens of millions in present-day terms).

Laying a new and improved cable began in 1865, but less than 1000 km from its destination, with 2000 km already laid, it broke and sank in deep water. Attempts were made to grapple for it without success. The following year, after understandable difficulties in fund-raising,5 another cable was laid, and this time successfully. Brunel’s Great Eastern, the largest ship afloat, had tanks roomy enough to carry the full length of cable needed and then some, avoiding tricky splicing of a new section to a part-laid cable at sea. Despite superstition, she left Valentia Island, off the coast of Ireland, on Friday the 13th of July 1866, and arrived, after only minor incidents, at Heart’s Content Bay, Newfoundland on the 27th. Tests showed the cable in good order and passing traffic at eight words per minute. Although slow, this was acceptable as the basis of a commercial service.

With the benefit of fair weather, the Great Eastern had been able to navigate a straight course, so she still had cable left in her tanks. She went again to the end of the lost 1865 cable. After trying for a fortnight, it proved possible to grapple the cable, splice more on to it, and complete this one too. So by late summer of 1866 two transatlantic cables were working satisfactorily.6 Celebrations of the cable’s success were ecstatic on
both sides of the Atlantic. Heads of State exchanged congratulatory messages, and Gooch was honoured with a baronetcy.7 Messages carried on the 2nd of August 1866, the first day of commercial operation, brought in £1000. The first commercial cable sent was ‘A treaty of peace has been signed between Austria and Prussia’. Within a fortnight Gooch was noting in his diary takings of £12,000 for a single day. With low running costs, the cables seemed like a licence to print money. Before long competition was inevitable: a French cable layer departed the Breton coast for a successful Atlantic crossing.8
Risk and Irrationality

The management of risk is made more difficult because our intuitive attitudes to it are far from what rational calculation should tell us. Gerd Gigerenzer’s work has brought new insights.9 After the ‘9/11’ jihadi attack on New York, which killed nearly 3000 people, many of those who wished to travel between New York and Washington decided flying had become too dangerous, and opted to drive instead. In reality road travel is so much more accident-prone than flying that this choice is estimated to have cost over a thousand lives, which Gigerenzer describes as the second wave of the jihadi killings. The road tragedies happened because of a misperception of risk. In road accidents people die in ones and twos, and at many locations. This produces far less psychological impact than when thousands die in a single incident, even if the annual loss of life in road accidents is larger. People routinely make gross errors in estimating risk like this.

Part of the problem arises from a failure to distinguish between risk and uncertainty. Risk relates to events which are foreseen and, at least in principle, can have a probability assigned to them, based on past history. Considering all such events it is possible to predict the risk of failure for a complex system.10 By contrast uncertainties are wholly unforeseen future events, to which no probabilities can be assigned. The only defence against them is taking steps towards damage limitation in worst-case scenarios.

The 9/11 outrage was possible because risk calculations based on earlier hi-jacking incidents suggested it was safer not to tackle the terrorists in the air, but wait until the aircraft landed, considered certain to happen, sooner or later. The jihadi suicide crews in the 9/11 attack had no
intention of landing the aircraft they took over, nullifying the basis on which these calculations were made. This was the uncertainty, never considered, which made the attack possible. For cultural reasons, the West had been slow to recognise suicide attacks as an act of war. Uncomfortable awareness of risk leads to an excessively conservative solution. Yet maybe only a radical approach can give us passage to the future. So being conservative can also, at times, be the more risky course. It must all seem hazardous. Is there any way the risk can be reliably curbed?
Notes

1. Gosling, W. (2012) Helmsmen and Heroes (2nd edition), Kindle Books (online).
2. Pontryagin was blinded by a primus stove explosion at age 14, but went on to become a mathematician of stature, with the help of his mother, Tatyana Andreyevna, who acted as his mathematical reader.
3. Shute, N. (1954) Slide Rule: Autobiography of an Engineer, Paper Tiger (London, UK).
4. Gordon, J. (2002) A Thread Across the Ocean, Simon & Schuster (London, UK).
5. Cookson, G. (2003) The Cable, Tempus Pub. (Stroud, UK).
6. To prove that high voltages were not needed de Sauty had the two cables connected together at the US end. He then sent signals on the double trip, 5000 km across the Atlantic and back, to a Kelvin galvanometer in England. His ‘battery’, still in the Science Museum, was a silver thimble, loaned by Emily Fitzgerald, daughter of the owner of Valentia. It held lemon juice, with a fragment of zinc used as the second connection. A triumph for the technology of feeble currents!
7. Michael Faraday was twice offered a knighthood, but twice declined. It is said that this was because to accept would have been in conflict with his religious principles as a Sandemanian.
8. After the Atlantic cable, the Great Western Railway, facing problems, invited Sir Daniel Gooch to Chair their Board. Ever a safe pair of hands, this he did, and guided them to renewed prosperity.
9. Gigerenzer, G. (2014) Risk Savvy: How To Make Good Decisions, Penguin Books (London, UK).
10. O’Connor, P. (2002) Practical Reliability Engineering (4th Ed.), John Wiley (New York, USA).
14 Will Anyone Want It?
Introducing something new to potential users is always problematic, because so often it invites them to abandon old ways that seem to work, and to think again. Will they take the risk? Apple had this problem in the early eighties when launching the Macintosh, the first computer aimed at a mass market to have a graphical user interface. Using a reference to George Orwell’s dystopian vision in his book 1984, and suggesting the essential authoritarianism of bureau computing, their brilliant film clip convinced many.1 All innovative designs face the same challenge; each responds in its own way.
The Adoption Environment

Will people want the innovation enough to acquire it? This is about the adoption environment, the third hurdle it must face. The innovative design must be sold (or otherwise transferred) to users if it is to take root. Take the case of the telegraph. The aim of early telegraph designers was to promote a new system for instantaneous communication, particularly to railway company directors who owned the track alongside which the telegraph line could run. A direct reading telegraph, which any literate
person could use, seemed to them essential for success. By contrast, Schilling von Canstatt, because he was a professional diplomat, had an adoption environment unique to him. He did not have to ‘sell’ his system to potential sponsors unfamiliar with communicating by way of a code. His design appeared twenty years before most others, but did not survive once the career diplomat, dead of typhoid fever by 1837, was no longer there to drive it forward. Outside his own world it attracted no potential users until Wheatstone modified it into a direct-reading variant.
Tawell the Quaker Murderer

Sometimes a particular event, coinciding by chance, can transform the adoption environment for an innovation. It happened to the telegraph in 1845, when John Tawell, an orphan brought up by an English Quaker family, murdered Sarah Hart.

In 1814, when he was thirty years old, Tawell shocked his Quaker foster parents by being convicted for forgery of a five-pound note, a crime for which he might then have been hanged. They used what influence they had, and he was transported to Australia instead. Released in 1820, he set up as a pharmacist there and grew rich. In 1831, with a wife and two sons, he returned to London. The sons soon died, and his wife also sickened. Sarah Hart was engaged as a live-in nurse. After his wife died, Sarah bore Tawell two children. However, their relationship did not prove as durable as might have been hoped. In 1844 Tawell married a wealthy Quaker widow. Sarah, short of money, threatened trouble, so he moved her to a cottage in Slough, then a country village.

On 1 January 1845, he bought poison and visited Sarah. A neighbour saw him leave her house, looked in and found her writhing on the floor, near death. A doctor was called, and after seeing Sarah’s condition he sent his cousin, a clergyman, to the railway station, hoping to detain Tawell. He arrived there just in time to see his quarry boarding a London train. Using Wheatstone’s five-needle instrument, Slough railway staff telegraphed Paddington to say a murder had been committed and the
suspect was on a London train. Sergeant Williams, of the railway police, met the train and followed Tawell, who was later arrested. In due course, after a farcical trial,2 he was publicly executed at Aylesbury before a crowd of 10,000. Unfortunately, the hanging was botched because the hangman misjudged the drop, although in his defence he claimed it was the result of Tawell having inexplicably gone off his food and lost weight. Instead of his neck breaking at once he was left kicking.3 The event hit the headlines.4 From then on the telegraph was in the public eye. Thanks to the increase in its use and Cooke’s cost reductions, it started making money at last. Telegraphs began to look like a profitable business.

Over half a century later Guglielmo Marconi’s wireless telegraph, which made communication with ships at sea practical, received a similar boost to its adoption. By 1910 Dr. Hawley Crippen, an American homoeopath living in England, was in trouble. His wife Cora, a loose-living music hall performer, disappeared from their London home. He claimed she had returned to the US, but her friends were suspicious. Repeated police searches of Crippen’s house followed. A floor was dug up. In panic, Crippen fled across the Atlantic with his mistress, Ethel le Neve, a former typist travelling disguised as a young man. Unwisely, they did so on the radio-equipped SS Montrose. The captain identified them, exchanging messages with Scotland Yard. Crippen was arrested on arrival at Quebec, the first fugitive from justice captured by radio. The case created a furore in the British press and publicised the new marine wireless telegraphy as little else could have done.

Crippen and le Neve were both tried for murder, but she was acquitted. In November 1910 he was hanged at Pentonville. If Cora was alive and somewhere in the US she certainly made no move to prevent the execution. In 2007 results of DNA tests cast doubt on his conviction. They appeared in a peer-reviewed scientific paper, published by a journal of impeccable reputation. Human remains discovered in his house by police, alleged at the trial to be Cora, proved to be male. Yet hyoscine, a little-used poison, was found in the remains, and Crippen had purchased hyoscine before Cora vanished. A possibility is that the body found was Cora’s lover, and she did indeed go to the US covertly, in fear for her life.
Did she do nothing later because she thought the execution just, even if for the wrong crime? Probably more important than the Crippen case for the adoption of maritime radio was the sinking of the Titanic in 1912. The ship, carrying 2216 passengers and crew, struck an iceberg 650 km south of Newfoundland on its maiden transatlantic voyage. The RMS Carpathia, summoned by radio calls from the Titanic, rescued 710 people from the lifeboats. The Titanic’s radio operators, Jack Phillips and Harold Bride, signalled until water entered the wireless cabin. As they left it for the lifeboats, the two men turned in opposite directions to their different lifeboat stations. Only Bride survived.
Conflict over Innovation

Timely publicity can launch a product, process or service safely into its market. Even so, many designs fail in adoption: they are built but, as it turns out, nobody seems to want them. Potential users already pursue their objectives in a different way that seems satisfactory. Not having sought or asked for the innovation, they reject persuasion to adopt it, on the general principle ‘if it ain’t broke, don’t fix it’. It is not surprising; their apparent conservatism can have compelling reasons. Capital investment and training costs may be sunk in the ‘old’ way of doing things, which will be lost if it is supplanted. Also, whilst it is feasible to predict where the doxastic method will most likely end up, it is much more difficult to guess the date at which that end state will arrive. Sometimes decisions are vitiated by changes in technology, not because these were wholly unforeseen but because they happen sooner than expected.

This is particularly true in high-risk activities, like warfare. In war so much is at stake, so many lives can be lost and so much damage done. Doing what worked well last time seems a seductive option, even when new technology is available. The wooden fighting ships at the beginning of the nineteenth century, in Lord Nelson’s day, battled by firing broadsides at each other. Thanks to ‘more of the same’ extrapolation from that experience, by the end of the century the Navy’s settled perception of the future of sea warfare was one
of similar battles between fleets of heavily armoured ships at a range of several kilometres, with big rifled turret guns replacing cannon broadsides. In consequence, the British Admiralty was quite sceptical about submarines at first. Many were convinced that these small, slow boats could play no part in the kind of long-range sea warfare they were expecting. Lord Walter Kerr, the Royal Navy’s First Sea Lord from 1899 to 1904—a committed big-gun surface-fleet man—never gave much support to submarines. However, Lord Fisher, who succeeded Kerr, held quite a contrary view and could see a valuable role for undersea warfare. The Royal Navy had acquired five US-designed Holland submarines by that time, and it went on to build a substantial submarine capability.5
It Won’t Work and We Don’t Need It

Even more than outright hostility, the innovative designer is certain to meet with incomprehension and indifference. In most fields of innovation, respected commentators have expressed profound scepticism about advances that later proved of the utmost significance. A few quotations will be enough to make the point.

How, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense.
Napoleon I, speaking of Fulton’s steamboat trials on the Seine (1803). In little over sixty years steamships dominated all marine and river navigation.

The canal system of this country is being threatened by a new form of transportation known as ‘railroads’ … As you may well know, ‘railroad’ carriages are pulled at the enormous speed of 15 miles per hour by ‘engines’ which, in addition to endangering life and limb of passengers, roar and snort their way through the countryside, setting fire to crops, scaring the livestock and frightening women and children.
Governor Martin Van Buren, writing to President Andrew Jackson (1830). Later he himself became President in a railroad age.

There is a great distinction between telephone companies and gas and water companies. Gas and water are requisites for every inhabitant ... but the
telephone cannot, and never will be ... enjoyed by large masses of the working classes.
Arnold Morley, MP and Postmaster General (1895).

Heavier-than-air flying machines are impossible.
Lord Kelvin, president of the Royal Society (1895). But they had already flown—many successful gliders, and Forlanini’s steam helicopter in 1877.

The cinema is little more than a fad. It’s canned drama. What audiences really want to see is flesh and blood on the stage.
Charlie Chaplin, actor and film studio founder (1916).

These iron coaches will never replace the cavalry (1917). The tank was a freak. The circumstances which called it into existence were exceptional and not likely to recur. If they do, they can be dealt with by other means (1919).
Major General Sir Louis C. Jackson, whose background was cavalry. Tanks played a major role in subsequent twentieth-century land warfare, and horse cavalry fought no more.

The elaborateness of the equipment precludes the possibility of television being available in homes or businesses generally.
Walter Gifford, president of AT&T 1925–1948, speaking of the first public transmission of television by wire (1927).

I think there is a world market for maybe five computers.
Thomas Watson, chair of IBM (1943). Yet IBM later invested to become world-dominant in computer manufacture.

It will be of no importance in your lifetime or mine.
Bertrand Russell, philosopher, to Grace Wyndham Goldie, on her appointment as Head of BBC Television News and Current Affairs (1948). Goldie replied that television was a bomb, about to go off!

There is practically no chance communications space satellites will be used to provide better telephone, telegraph, television, or radio service inside the United States.
Thomas Craven, FCC Commissioner (1961). Commercial communications satellites were in use by 1965, and already being designed when Craven spoke.

There is no reason anyone would want a computer in their home (1977). People will get tired of managing personal computers and will want instead terminals (1992).
Ken Olsen, president and co-founder of Digital Equipment Corp. He built minicomputers, once revolutionary but long displaced by personal computers.
Although some of these quotations are striking and some amusing, few are truly unique; they could be multiplied many times over, and they signal a human trait making the adoption environment more difficult for innovators than it might be. Yet the conservative impulse, leading at times to remarks like these, deserves respect. We live in a world that works, imperfectly at times but for the most part about adequately. We are accustomed, as Matthew Arnold put it, to ‘... the world, which seems to lie before us like a land of dreams, so various, so beautiful, so new…’ He sees it as illusory, yet from infancy to adult life we have learned to make our way in that world, know what to expect of it and how to evoke its responses. We cling to this hard-won knowledge, defending it against innovations which have yet to win our trust, or become familiar. In technology, as elsewhere in life, all advance also implies loss, and losses must be mourned—it is part of being human. To welcome innovation requires courage, and it takes time to achieve, hours for a fortunate few, but decades for many.

Those slow to adapt should not be dismissed lightly. Napoleon, who spoke scathingly about steamboats, remodelled the politics of Europe. Lord Kelvin, who said foolish things about flying machines, was one of the great scientific innovators of the nineteenth century. Bertrand Russell, so dismissive of television, was a philosopher of stature. Among commercial digital computer builders, Thomas Watson led IBM, which achieved world dominance for a while. None of these people lacked courage. What they did lack was the opportunity or inclination to think as long as was really needed about the matters on which they delivered such misguided opinions. So they were wrong, quite wrong.
A Case History: Adoption of the Telegraph

It is in this discouraging environment that innovative designs are obliged to seek general adoption. Cooke and Wheatstone received their patent for the electric telegraph in 1837, and Morse in 1838.6 However the English telegraph was in commercial operation by 1839, whereas in the US it was 1844 before a service began, though Morse’s was the technically superior system.
This difference between the fate of the English and US telegraphs in the adoption environment was due to a more optimistic climate of opinion in Britain at the time. Crucial was the influence of Isambard Kingdom Brunel, always ready to welcome innovation. Following its initial rejection by the London and Birmingham line, Brunel encouraged the installation of the telegraph along the track of his Great Western Railway, initially from Paddington to West Drayton, but soon extended to Slough, about 32 km. In striking contrast, Morse had the greatest difficulty finding influential supporters, and in the end was obliged to turn to the US government. After an unsuccessful appeal in 1838, Morse returned to Washington in December 1842. To demonstrate his telegraph, he improvised a line between two committee rooms in the Capitol Building. In 1843 Congress granted $30,000 (something approaching $2 million in present money) for an experimental 61 km telegraph line between Washington and Baltimore along the existing railway track. News of the Whig Party’s nomination of Henry Clay for U.S. President was telegraphed from the party’s Convention in Baltimore to Washington on 1 May 1844. On May 24th Morse sent the famous words ‘What hath God wrought’ from the Supreme Court chamber in Washington to Baltimore, and the line was officially opened.
It’s for Real Now

After innovations achieve adoption, the use environment comes next, the fourth hurdle for the new product, process or service. Valuable feedback comes from the users about how successful the design has been, and what must be done to improve it. For most early telegraph designers, having little idea of what traffic the Victorian telegraph would carry, what future telegraphists could be trained to do, or how they might use their equipment, this environment was completely unknown territory. Its characteristics came to be understood better when the telegraph was in daily use. Cooke saw building a costly five- or six-wire transmission line as a serious vulnerability for the Wheatstone design, as was the restricted alphabet. Overcoming these challenges needed a telegraph using fewer wires, ideally one, yet sending many more symbols. How could this be done?
With the five-needle telegraph instrument each letter was identified by a pattern of spatial deflections—for each letter two needles swung, one to the right, the other left, to point to the symbol, while the other three lay still. Cooke saw these patterns on a daily basis. He perceived a direct connection between letters and patterns of left-right deflections, augmenting his phantasmal stock. Cooke ingeniously switched these deflection patterns from the space domain into the time domain, leading to signalling by groups of deflections, sent as a time sequence. For this a one-needle telegraph is sufficient, the simplest conceivable. It was coded but required only one line, with an earth connection for the return current.

The use environment became clearer in other respects too, once the commercial service began. Receiving telegraphists wanted to give their visual attention to the note pads on which incoming messages were written; repeatedly having to look up at the dial of Cooke’s instrument slowed them down. Liberated from the constraints of direct-reading, could they not receive incoming signals some other way? So on Cooke’s needle telegraphs small metal ‘sounders’ were placed where the needle would hit them, producing a distinctive note, of different pitch between left and right. They could listen, interpret and write simultaneously. This translation of the deflection patterns from sight to hearing resulted in a ‘speaking telegraph’, speeding the telegraphists’ work. A stable canonical form had come to telegraph design in Britain. On routes where traffic was exceptionally heavy, such as the London-Birmingham, a dissenting design survived. This was a two-needle speaking telegraph (with four distinct notes and a two-wire line) capable of signalling beyond thirty words per minute by means of a continuous tinkling, a sound unique to Victorian telegraphy.

In the US, where greater operating range was essential, Morse’s telegraph, influenced by Joseph Henry, used electromagnets as electric current detectors, not needle galvanometers. It was a coded telegraph from the start, using the ‘Morse code’ patented in 1838. However Morse, like his European counterparts, was critical of Cooke’s coded telegraphs, fearing they would be rejected. Once telegraphs got going, however, operators found the code was not a significant barrier to sending or receiving. Morse’s receiver, soon called a ‘telegraph sounder’, was clearly audible, making a distinctive noise when current was received, then a different
one when it ended—‘click’ then ‘clack’. Without needing instruction, operators soon read it by ear. With only one transmission wire (using an earth return) the single-needle speaking telegraphs and the Morse sounders slashed construction costs. Telegraphists used audible codes to receive beyond twenty words a minute on a single line. The telegraphs were economical and effective, but were used in a way the early designers never foresaw. On both sides of the Atlantic, under the pressure of real-world use, the telegraph evolved in an unexpected direction. This autonomous late development pattern has characterised many innovations.

By the 1860s telegraph services were extensive, and in some cities central telegraph offices employed a dozen or more telegraphists to send and receive messages. The qualities required for their work included intelligence, fluent literacy and manual dexterity, but no great physical strength. Working in a near-domestic environment, the telegraphist’s was increasingly a well-paid job appealing to young women who might otherwise have been shop-workers or school teachers. They were the vanguard of the multitude later employed as telephone operators, secretaries and personal assistants.
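Looking back at Cooke’s move from the space domain to the time domain, a small sketch may make the principle concrete. The letter-to-deflection assignment below is purely hypothetical, invented for this illustration, and is not the historical single-needle code; the point is only that one needle, read as a time sequence of left and right swings, can carry a full alphabet over a single wire.

```python
from itertools import product

# Hypothetical code: assign each letter a fixed-length sequence of
# 'L' (needle swings left) and 'R' (needle swings right) deflections.
# Five deflections give 2**5 = 32 distinct symbols, enough for an
# alphabet of 26 letters on a single line with an earth return.
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
CODE = {letter: "".join(seq)
        for letter, seq in zip(LETTERS, product("LR", repeat=5))}

def to_deflections(message: str) -> list[str]:
    """Translate a message into the time sequence of needle deflections."""
    return [CODE[ch] for ch in message.upper() if ch in CODE]

print(to_deflections("CAB"))   # ['LLLRL', 'LLLLL', 'LLLLR']
```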
Surprise, Surprise!

When the something new gets to the users odd things happen, as they did with the telegraph, transformed in a few years. Another classic example of the emergence of something unexpected, thanks to the doxastic method in the use environment, is texting (short message service or SMS) on mobile phones. Once telephones were using digital speech transmission it became easy for them to carry alphanumeric messages as well. Intended to send texts of up to 160 characters, SMS was designed by Friedhelm Hillebrand (b. 1940) and Bernard Ghillebaert (b. 1952) in 1984 and made freely available world-wide. Introduced on European GSM cell phone systems, in December 1992 the first-ever text message—‘Merry Christmas’—was sent in the UK. Later SMS was extended to other phone systems, some with the 160 character limit relaxed.
From its beginning the phone system designers were confident texts would not be sent at all by ordinary users, who were expected only to receive them, as ‘broadcasts’. Only ten or at most twelve buttons were available on the phone, so sending the alphabet would require three letters assigned to each key. The consensus view was that for general users sending a message would be hopelessly slow and difficult. SMS texts, the designers concluded, would be used only by the system managers, who would find it of value for sending broadcast messages to all users. Also it might have an internal future, carrying notes sent between engineers running the system, who, being geeks of a sort, would master the impossible key-pad.

These expectations were at once totally confounded, to the surprise of the designers. It soon seemed everybody wanted to use SMS, and by 2010 every month 508 billion texts were being sent world-wide, 193,000 per second. The skill and adaptability of the users had been grossly underestimated, as it was with Victorian telegraphists. How could the designers have been so wrong?

The parallels with the early telegraph are interesting. For both, salvation came first in a well-chosen code. For the telegraph, as well as the Cooke and Morse codes, there were later short code groups used for specific and frequently used messages. Thus ‘73’ meant ‘best wishes’, ‘66’ ‘love and kisses’, while ‘AAA’ transmitted without gaps between the letters stood in for laughter. Conventions like these made the system more comfortable for operators to use.

At first, modern texts were frequently sent in a code. This was ‘textese’, a synthetic language devised to convey an intelligible message using the fewest possible button pushes. In textese punctuation and capitalisation were ignored. Determiners like ‘a’ and ‘the’ were cut and numbers used as homophones, so ‘hate’ becomes ‘h8’ and ‘too’ is transformed to ‘2’. For words without a well-known abbreviation, users commonly removed the vowels, leaving a string of consonants interpreted by guessing. Thus ‘labour’ becomes ‘lbr’ and ‘incredible’ goes to ‘ncrdbl’.7 Ambiguities must be resolved by context, more often successfully than not. A complete manual was compiled.8 Only one form survives from the Victorian telegraph code into textese: in both, agreement is signalled by ‘K’.9
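A toy sketch of the textese idea follows, assuming only the simplifications just described: drop determiners, substitute number homophones, and strip the vowels from everything else. The word list and substitutions are illustrative inventions, not a reconstruction of any published textese manual.

```python
import re

HOMOPHONES = {"hate": "h8", "too": "2", "to": "2", "for": "4", "great": "gr8"}
DETERMINERS = {"a", "an", "the"}

def to_textese(sentence: str) -> str:
    out = []
    for word in re.findall(r"[a-z']+", sentence.lower()):
        if word in DETERMINERS:
            continue                      # determiners are simply cut
        if word in HOMOPHONES:
            out.append(HOMOPHONES[word])  # number used as homophone
        else:
            stripped = re.sub(r"[aeiou]", "", word)
            out.append(stripped or word)  # strip vowels, keep something
    return " ".join(out)

print(to_textese("I hate the incredible labour"))  # -> "i h8 ncrdbl lbr"
```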
Yet, successful as it was, textese proved just a transient phenomenon. Modern phone software tries to predict the word being keyed after the first couple of letters, leaving the user to accept or reject the guess. Prediction of words as they are written is surprisingly successful, but if your vocabulary tends to the recondite and sesquipedalian you may hit trouble. As Wittgenstein observed: ‘The word “word” is a word, but the word “erudite” is an erudite word’. SMS is now the most common of all data transmission types, used by 3.6 billion people—78% of all mobile phone subscribers and, quite unbelievably, approaching half of the world’s current population. Such vast growth over only twenty years! Very many of us have chosen SMS as our preferred means of communication simply because it suits us best in our circumstances of use, and the pressure on everybody to adopt it grows because we are social animals. Experiences like this in the use environment should give a clear and unvarnished view of how the design is working out. It is this alone that, in control theory terms, fully closes the loop and gives some hope that the design, by iteration, can converge on the users’ needs. Only with the users’ settled approval can the introduction and adoption of something innovative ever be said to have been completed. Even then, there is no escaping the fact that, after however long a time it may have worked successfully, sooner or later it is bound to fail; everything does. So how is that to be handled?
Notes

1. The celebrated Apple advertisement ‘Why 1984 will not be like “1984”’ is on YouTube at ‘Apple Macintosh 1984 Commercial’.
2. The defence tried to persuade the jury that Hart’s death was due to eating too many apple pips. They are very mildly toxic.
3. There was an unseemly altercation between officials on the scaffold, divided into two factions: one wanted to pull him up and drop him again, the other wanted to have men swing on his legs to hasten asphyxiation. There was some scuffling, and once this had died down it was noticed that Tawell had become very peaceful.
4. Gordon, K. (2003) www.btp.police.uk/about_us/our_history/crime_murder_of_sarah_hart_1845.aspx.
5. In the following quarter century, the only European big-gun sea battle of the kind the Royal Navy planned for was Jutland (1916)—only the third in history between steel warships. The Navy big-gun battleship was nearing obsolescence, lingering only until aircraft carriers replaced it as the canonical capital ship.
6. Hubbard, G. (1965) Cooke and Wheatstone, Routledge (London, UK).
7. Done successfully in the un-pointed writing of classical Hebrew, so there was an excellent precedent.
8. Crystal, D. (2009) Txtng: the gr8 db8, Oxford Univ. Press (Oxford, UK).
9. Many think ‘K’ is an abbreviation of ‘OK’, and some note the similarity between ‘O’ and ‘K’ in Morse Code. But ‘K’ was in use on the needle telegraphs by 1839, also the year ‘OK’ first appears in print, in the Boston Morning Post for the 23rd of March.
15 Failure Foreseen
Eventually, whatever it is, the product, process or service will become obsolete or fail. It must in the end, but how soon? Can it be repaired, and at what cost? Will there be consequential damage, even some kind of disaster? The failure environment is the fifth challenge to a design.1
Anticipation of Failure

Once a commercial telegraph service got started in Victorian England, service faults began to happen. In the 1840s a new breed of engineers appeared, along with the technicians who ably supported them. They could quickly locate and mend failures in equipment and on lines, using increasingly sophisticated electrical test instruments. Undersea cables were more difficult to deal with, but by 1866 they had been grappled and repaired from the bottom of the Atlantic. The Society of Telegraph Engineers (now the Institution of Engineering and Technology, the IET) began in 1871, and a new profession came to maturity. So for the telegraph the fifth hurdle was passed—telegraphs had acceptable failure characteristics and were maintainable.
This success was more by luck than judgement. Those who designed them merely hoped for the best, and as it turned out they were not disappointed. Today it is different; designers are expected to predict the manner and frequency of the failures to be expected. Failure, though a matter of chance in timing, really ought to be a designed event. For this, advance calculation of predicted reliability statistics is essential. Such calculations are subtle, but there is a large literature about how they are best done.2

Redundancy in systems is a common way to extend expected life—incorporating more than one component for a particular function, so if one fails another takes over. The internet is a salient example. Designed to survive nuclear warfare, it takes messages by alternative routes around any part of the network that becomes inactive. Systems repaired by remote command are common in space vehicles. Actively self-repairing systems are also possible. All mammals, including humans, are totally dependent on self-repair throughout their lives. People in whom it fails, even to a partial extent like haemophiliacs, lead a threatened existence. In the long run, self-repair will surely also grow in importance in technology.

So, there it is. Will the length of life between failures be adequate to the needs of the user? Some inventions work well for a time, but not reliably or long enough, so they succumb to the failure environment early. The voltaic pile, an electrical power source invented by Volta in 1800, was popular at first because it was capable of generating high voltages and could mimic many familiar effects of static electricity. Humphry Davy had one at the Royal Institution generating 1300 volts. However, its life was short—just days. The first telegraph builders wisely steered clear of Volta’s pile, instead choosing a new kind of battery named after William Grove, its Welsh inventor. The Grove cell, with its excellent record for endurance, used a zinc anode in dilute sulphuric acid and a platinum cathode in concentrated nitric, the two acids separated by a porous ceramic pot. Its only drawback was that in use it gave off nitrogen dioxide. The ‘telegraph cottage’, as it was at first called, had to rely on good ventilation to avoid poisoning telegraphists with this noxious substance. In the 1860s the gravity cell replaced the Grove cell and a hazard was removed.
At the extreme in failure design are communication satellites. Once in orbit around the Earth, they are almost wholly inaccessible, as yet. Nearly impossible to repair, their failure characteristics are of the highest importance. As complex assemblies of electronic equipment, their components and subassemblies must be subjected to minute scrutiny. Possible ways in which each might fail, and the frequency with which they will do so, are carefully determined. Redundancy is often built in to give a measure of failure resistance—as one part fails another takes over its function.
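A small sketch of the redundancy arithmetic implied here, assuming independent failures: a chain of parts all of which must work, compared with the same chain where each part is duplicated and a stage fails only if both copies fail. The 99% figure and the count of twenty parts are illustrative values, not data for any real satellite.

```python
def series(reliabilities):
    """System works only if every part works (no redundancy)."""
    p = 1.0
    for r in reliabilities:
        p *= r
    return p

def duplicated(reliabilities):
    """Each part is duplicated; a stage fails only if both copies fail."""
    p = 1.0
    for r in reliabilities:
        p *= 1.0 - (1.0 - r) ** 2
    return p

parts = [0.99] * 20    # twenty parts, each 99% reliable over the mission
print(f"no redundancy: {series(parts):.1%}")      # about 81.8%
print(f"duplicated:    {duplicated(parts):.1%}")  # about 99.8%
```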
A Disaster of Major Proportions

Not only the probability but also the consequences of failure are of concern in deep design. Sometimes, failure is little more than an inconvenience, remedied easily by repair or replacement. In other circumstances, failures are a major threat to life and property. A truly awful example is the Chernobyl disaster of April 1986, far the worst disaster so far in the history of nuclear power. Any overview of what happened is handicapped because the personnel operating the reactor can tell us nothing—many were dead within a day or two, or a few months at best. Chernobyl was attributed at first to operator error, and this was a contributory factor, but nuclear reactor design shortcomings are now seen as a major cause of the catastrophe: there was no guaranteed-effective ‘Stop’ button.

We begin with how a reactor works. In a nuclear reactor, it is neutrons that do the business. Neutrons are sub-atomic particles without electric charge and therefore able to collide with the atom’s nucleus, itself a mix of neutrons along with protons having a positive charge. However, to react effectively with the nucleus the neutrons must be moving slowly, ‘thermal neutrons’ not moving significantly faster than the atoms around them, which jiggle about purely due to heat. The reactor core is made of 235U (uranium 235), which occurs naturally. The superscript is the weight of the nucleus measured in neutron-sized units. If a thermal neutron makes it to an atom it combines with the nucleus, which gains weight to become the isotope 236U. Isotopes are the name given to the same stuff but with a different atomic weight, from one, or sometimes more, added neutrons.
The newly formed 236U is unstable, and the nucleus breaks up promptly, usually into two parts, a krypton isotope and a barium isotope, emitting three neutrons as it does so. These neutrons are moving too fast to be useful, so they are slowed down, becoming thermal neutrons. A moderator material does the job—graphite was commonly used, the stuff in pencil leads—extracting the energy of motion from the fast neutrons and turning it into heat. This contributes part of the reactor heat, used to raise steam and drive the turbines. Getting this heat out of the Chernobyl RBMK reactor was done with water, running through the reactor at some twelve and a half tonnes per second.

The reactor temperature is regulated by driving control rods down from the top into channels in the active core. The rods contain a neutron poison and progressively shut down the reaction. It took about two minutes to shut down an RBMK, with some initial activity increase due to the graphite caps on the rods.
What Happens If the Pumps Fail?

The emergency diesel generators on site—meant to supply electrical power to the reactor’s water pumps in the event of failure of the public supply—took about a minute to get up to speed, during which interval the cooling flow would stop, all 750 tonnes of it. Even with the reactors powered down this was unacceptable and dangerous, because it was still essential to remove a lot of heat. So the reactor technical staff several times conducted experiments to see if the rotating energy of the generators could be used to fill this brief electricity supply gap. The experiment had been attempted in 1982, 1984 and 1985 without success.

In the ill-fated 1986 attempt, all the safety systems were disabled, and control of the reactor was transferred from the SKALA computer system to the human operators. The diesel drive to the pumps was stopped and the reactor powered down from its normal 3200 MW (1 MW = 1000 kW), aiming at 700–800 MW. In the event, the heat level dropped too low, to 500 MW. A withdrawal of control rods was needed, but instead they were still being inserted by automatic actuation. Automatic control was therefore replaced by manual operation. It seemed like a good idea at the time.
What happened subsequently is disputed. The heat level quickly dropped to 30 MW. Water absorbs neutrons, so up to this point the cooling water had been helping to hold the reactor down. Water in the cooling channels began to boil as its flow rate decreased. Steam does not absorb neutrons, so as boiling began the reactor got still hotter, producing more steam—a positive feedback loop. The operating crew had little understanding of what was going on by then and, close up to an out-of-control nuclear reactor, they were probably getting panicky. I certainly would have been in their situation: it was far too late to run. Later, recording instruments showed that the heat produced by the reactor exceeded 30,000 MW, nine times the normal operating level. An emergency shutdown was attempted. It failed because by then the reactor was so hot that the control rods were destroyed during their slow insertion. All prospect of control was lost, and a series of explosions ensued, killing many people, destroying the reactor and starting fires.

In the aftermath the only thing that seems reasonably sure is that four hundred times more radioactive material was released at Chernobyl than from the nuclear bomb on Hiroshima. Four kilometres from Chernobyl, the ghost town of Pripyat, completed in 1970 for the power plant’s workers, lies silent still. Its population of 50,000 was evacuated after the explosion, although a few have returned to their homes despite the danger to their health.

The debate that followed3,4 had competing factions: the pro-nuclear group did their best to play down the significance of the disaster, while anti-nuclear partisans tried to make it look as bad as possible, and all the parties, including the Soviet government, did their best to evade responsibility, in some cases knowingly promoting outright lies. By now it is hard to find the truth. As to the number of lives lost, this has been variously estimated, with implausibly low figures from those favourably disposed to nuclear power countered by figures orders of magnitude higher from those opposed to it. The forty-seven ‘liquidators’ and fire-fighters who entered the plant immediately after the explosions with little protection were sent to highly probable deaths. The few survivors suffered severe illnesses. At first the World Health Organisation suggested 4000 of the 600,000 in the immediate area of the reactor would die from consequences of the disaster. However radioactivity was detected at a distance
of 1000 km, which led others to argue that the long-term death toll must be greater—some suggesting as many as 200,000, but without convincing evidence for such a large figure. Those who would minimise the significance of the disaster point out that deaths attributable to Chernobyl are few compared with those due to smoking. Whilst quite true, this is wholly irrelevant.

After the disaster Chernobyl’s other reactors were taken out of service and the site has not generated electricity since 2000. A ‘sarcophagus’ over the reactor was hurriedly constructed, a concrete and steel structure aimed at keeping radioactivity from escaping. Over the last few years this has begun to show signs of deterioration. Now a construction programme called the New Safe Confinement aims to remove remaining radioactive material and make the area habitable. The New Safe Confinement, a two-billion-euro project, is a corrugated metal hut, no less than 108 m high, 250 m wide and 150 m long, moved on rails until over the sarcophagus and reactor building. Fukushima-style robots inside the structure will dismantle the sarcophagus and reactor, the remnants being put into storage nearby.

The consequences of the events at Chernobyl were dire for the already fragile Soviet Union, which collapsed five years later. Regulations governing the design and operation of nuclear reactors have been greatly tightened up since 1986, worldwide. The Fukushima disaster of 2011, however, suggests that flaws remain either in the safety rules for nuclear power or their enforcement, and possibly both. The damaged plant there is now being cleaned up by specially designed robots which find and remove hazardous material. The work has gone slowly because of the many robot failures in the intensely radioactive environment.
What Are the Lessons to Be Learned?

The story as generally told starts much too late, after the decision had been made, and repeatedly implemented, to try to find a way to keep the water pumps going in the event of a public electricity supply outage. The preferred method was by using the large rotating energy of the alternator sets. All this was precipitated by the long run-up time of the standby
diesel generators. To me it still seems a strange thing to do. A far more obvious approach would have been to speed up the changeover, and there are known ways in which that might have been done, and had been done before, although on a smaller scale. Was the great charm and elegance of the reactor solution too seductive to refuse, seeming to need little new or additional hardware but only some software changes? Charm is a fatal deceiver sometimes.

A deeper lesson for technologists is the danger of trusting too much in scientific models. The scientific approach is to acquire knowledge, in this case to build a complete theoretical model of the reactor so as to cope with those failure modes that the model predicts. This was inadequate in two ways.

• The first is that no scientific model can ever be guaranteed complete. Things can happen that the model builders do not take into account because they think them too unlikely, ranging through a whole slew of natural events, to the highly eccentric way in which the crew were trying to operate the Chernobyl No. 4 reactor. An incomplete model cannot predict all the possible failure modes, so the designers will proceed unaware of them.

• The second inadequacy is that scientific models promote the illusion of perfect knowledge. We are presented with a picture having complete internal coherence and aligning with all known science. What could be better? However they exclude things that cannot be modelled, either because they are outside current scientific knowledge, or because modelling them is intractable with the means or mathematics available. So the model is actually imperfect, but not in an obvious way. It has great charm for those who devise it, and charm, as ever, can be a deceiver.

Except in trivial cases, scientific models are always incomplete yet promote an illusion of perfection, which makes them potentially dangerous. If used as a hard-and-fast template for a design they are also a form of open-loop control, the shortcomings of which are explained in Chap. 13—to succeed they need perfect information at the outset, no perturbation during the control process, and an objective that does not move.
They work well for relatively simple situations, like the gravitational model of the solar system, where the sun and planets are few and their properties have been extensively studied and are changing only very slowly. By comparison, the RBMK nuclear reactor had vastly more independent parts than the solar system and was subject to human intervention and natural events such as weather and earthquakes. A truly complete scientific model is impossible. Yet a scientific model is still an excellent thing to have: a good starting point, but no more than that. Its great virtue is to ensure that subsequent empiricism is ruled by well-informed opinion—the essence of the doxastic method. Technology's approach starts out with the presumption that partial ignorance can never be entirely eliminated, and therefore sees the need to prepare for what cannot be foreseen, as well as what can. For example some reactors have a provision for drowning them in boron, a neutron poison, should they run out of control. This kills the reactor stone dead; it could have saved Chernobyl from outright disaster. We shall never know, because the scientific model did not predict a need for it. Putting it in would have reflected a technology mind-set: if the worst can happen, sooner or later it probably will, and you had better prepare for it. Better a dead reactor than tens of thousands of dead people.
The Day I Wrecked a Nuclear Submarine
Reading about what happened at Chernobyl, I felt that a factor in the runaway of the RBMK reactor, aside from its design problems, was the transfer of the control rods from the normal automatic control to manual. The manual control was inevitably slow in operation, and this puts me in mind of a disaster I personally caused. Many years ago I was shown the control area, still charmingly called 'the bridge', of a US Navy 'Benjamin Franklin' Class nuclear submarine. The man in command—at this distance in time I remember neither his name nor rank—guided me into the cramped working area. Two men were seated in front of instrument panels in well-upholstered swivel seats. The one on the right was steering the submarine in azimuth, while on the left it was the depth that was being controlled. The instruments indicated
that we were then travelling on the surface and straight ahead. To my surprise the commander tapped the left-hand man on the shoulder; he got out of his seat, and I was invited to sit in his place. This I did, taking hold of the control stick and looking straight ahead at the depth gauge. 'Take us down to a hundred feet', he said, 'but be careful because we are in shallow water; only two hundred here'. I gingerly pushed the control stick forward a little and watched the depth gauge. I watched and watched. Nothing happened, so I pushed the stick a bit further, but still there was no perceptible response. Another forward nudge, and suddenly we were diving, but much too fast. So I pulled the stick right back, but there was no response in time. We continued our rapid dive. Before long lights flashed and a siren sounded. 'You've hit the sea bed real hard!' said the commander. 'The pressure hull is still intact maybe, but this boat's for dry dock now'. I got up quietly from my seat with a shamefaced expression. So was I headed for the escape apparatus? No, and in the outcome we did not even get our feet wet. It was not a real submarine that I was in, but a training simulator on a California land site belonging to Rockwell International. That day, I learned how difficult, even counter-intuitive, it is to manually control a system with a long response time. People can indeed learn how to do it, but they need a lot of training and regular practice to keep up the skill. At the beginning of the Chernobyl experiment the reactor was powered down, but the heat output fell too low, to a fraction of what was required. It was decided to withdraw some of the control rods to restore activity, and this had to be done manually as the automatic mechanism had been de-activated. I wonder, did those manual controllers at Chernobyl replicate my mistake? The system was slow acting because of the slow movement of the control rods and the great mass of the reactor that had to heat up. Did they, like me, see no initial response to their control action and feel it safe to increase it? As I understand it, after no initial response the heat began to increase rapidly, like my submarine going into an irreversible dive. The feeling of being under some powerful external control was most unpleasant, though I knew it was only a simulation. Chernobyl must have been incomparably awful for the crew.
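The trap I fell into can be shown in a few lines of code. What follows is a minimal sketch of my own (a toy model, not the simulator's real hydrodynamics) in which the 'plant' only feels a stick input a fixed number of steps after it is made. An operator who keeps nudging the input because nothing seems to be happening builds up a large delayed response, and the overshoot past the target grows accordingly.

```python
# Toy model (illustrative only) of manual control with a long response delay.
DELAY = 15        # steps of dead time before a stick input takes effect
TARGET = 100.0    # desired depth, feet
RATE = 2.0        # feet of depth gained per step, per unit of stick

def run(max_stick: float, steps: int = 400) -> float:
    """Return the deepest point reached for a given operator 'aggressiveness'."""
    depth, stick, deepest = 0.0, 0.0, 0.0
    pipeline = [0.0] * DELAY               # stick commands still 'in transit'
    for _ in range(steps):
        # Seeing the depth still short of the target, the operator keeps
        # nudging the stick forward, up to a personal limit; once past the
        # target the stick is pulled right back to zero.
        stick = min(stick + 0.05, max_stick) if depth < TARGET else 0.0
        pipeline.append(stick)
        depth += pipeline.pop(0) * RATE    # the boat responds to an old command
        deepest = max(deepest, depth)
    return deepest

print('gentle operator (stick limit 0.2):', run(0.2), 'feet')     # small overshoot
print('impatient operator (stick limit 1.0):', run(1.0), 'feet')  # large overshoot
```

The impatient strategy overshoots by whatever dive rate is already in the pipeline multiplied by the delay, which is exactly the feeling of the boat running away after the controls are reversed.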
Designing for Failure
In all innovations the rate of failure tends to follow the so-called 'bath tub curve'. On first introduction failure rates are high, but they progressively decline to a near-constant low level. After a substantial time the product ages and grows progressively less able to accommodate the demands placed on it. The failure rate then rises again. When railways first began their accident rates were so high that the humorous magazine Punch suggested a chair should be bolted to the front of every locomotive and a law passed to require a railway company director to ride in it. Similarly, when air transportation was introduced there was a high accident rate, with frequent news broadcasts of air-crash items. In both cases, using important lessons learned in the use environment, changes were quickly introduced which resulted in a great reduction in failures. Per journey mile, both trains and planes are now safer than any other mode of travel, even walking. The failure modes of a design should obviously be a central concern in all safety-critical situations, and this has been far better realised since Chernobyl in the nuclear power industry, and also much more widely. Engineering safety-critical systems is now an important discipline in its own right.5 All failures are bound to have a range of negative human consequences, anything from mild inconvenience to death. Designers have to assure themselves what these consequences will be in each particular case. Those who do not have a sufficient concern for the failure environment to take all of this into account before launching a design are certainly incompetent and may possibly be criminal. Designing the failure behaviour of a new product, process or service as carefully as its other characteristics cannot be an optional extra. To hide the known vulnerability of the Chernobyl nuclear reactors was not the greatest villainy of the Soviet regime, but it was a crime even so.6 Disasters like Chernobyl are extremely rare, but less dramatic failures can still cause much hardship. Even as simple a thing as a computer failure may deny the users a service on which they are dependent, and could endanger life. Even less serious failures, if repeated, will harm the adoption of the product—maybe a more extended system depends on
it—and may involve the manufacturer in higher warranty costs. They also bring closer the day when the product, process or service will have to be discontinued, the scrapping environment to which we now turn.
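Before turning to it, the 'bath tub curve' described at the start of this section can be made concrete with a small sketch. The numbers below are purely illustrative assumptions of mine, not data from any real product: an early failure rate that decays away, a constant mid-life rate, and a wear-out term that grows as the design life is approached.

```python
# Illustrative 'bath tub' failure-rate curve: infant mortality + random
# failures + wear-out. All constants are made-up for the sketch.
import math

def bathtub_hazard(age: float, design_life: float = 10.0) -> float:
    """Failure rate (failures per unit time) at a given age, arbitrary units."""
    infant = 0.5 * math.exp(-3.0 * age)                   # early failures die away
    random_rate = 0.05                                    # steady mid-life rate
    wearout = 0.02 * math.exp(0.8 * (age - design_life))  # rises towards end of life
    return infant + random_rate + wearout

for age in range(0, 13, 2):
    print(f'age {age:2d}: failure rate {bathtub_hazard(age):.3f}')
```

Plotting these values gives the high-low-high profile the name describes: it is the two ends of the curve, not the flat middle, that the designer has to worry about.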
When the Party Is Over
The sixth hurdle an innovation must surmount comes when it ends its life and reaches the scrapping environment. When its use finishes it will be abandoned, destroyed where it is, removed or dismantled. Can a problem be foreseen? How difficult or costly will the product, process or service be to scrap? Will it leave an undesirable legacy? So far as we know, none of the Victorian telegraph designers gave any attention to end-of-life issues. When a line was no longer wanted it was easy to take down the poles and reel up the wire. They were lucky—such benign scrapping potential is rare. Brunel's amazing ship The Great Eastern was so solidly built that scrapping her in 1888 threatened the firm that did it with bankruptcy. In Britain the economics of nuclear power generation, at first thought highly favourable—'electricity too cheap to be worth metering'—were compromised by the problems of reactor decommissioning and the large consequent costs, which had not been adequately considered. It has proved extremely difficult to find sites for the disposal of spent nuclear fuel, largely because of predictable public reaction in the localities concerned. This results in political pressure against all such proposals. Dumping toxic wastes, particularly in less developed countries, has become a vexed issue. Dennis Gabor once said: 'The problems of the technology of today are no longer the satisfactions of the primary needs or of archetypal wishes, but the reparation of the evils and damages by the technology of yesterday'. Maybe that was overstated, but the remark had substance. Long before that, however, lives were being blighted by the abandonment of industrial sites once they were no longer profitable. Industrial and post-industrial areas of the developed world still suffer this affliction. A few decades after the terminal decline of its once important copper industry, the lower Swansea Valley in Wales was reduced to an area of the
most dramatic desolation, with ruins of industrial buildings everywhere and much of the soil hopelessly poisoned by copper salts. Those responsible for the calamity were long gone, people dead and companies wound up. No enforceable legal liability for the clean-up could be established. For half a century the devastation remained, the area blighted as if by war, with no remedy in sight. In 1961, however, opinion was mobilised in favour of finding a solution.7 Swansea City Council played a crucial role by co-ordinating the effort and channelling money into site recovery. They were aided by local volunteers, from schoolchildren to Territorial Army reservists. The entire community was involved in restoring their land. Technical advice was sought from the nearby university. The botany department suggested plants which could survive, even in the poisoned soil. Before long banks of lupins flourished in the poisoned ground, but bore flowers of a strange rusty red. They looked exotic, but helped to purge the soil of its heavy metal poisons. Following successful rehabilitation, the area is now transformed. Today it comprises a mix of pleasant woods and parkland, light commercial development and a few of the original structures, preserved as industrial archaeology exhibits. The result in the lower Swansea valley was encouraging. However, in many other cases the outcome has been far less auspicious. In the past companies made no provision for funding clear-up costs at the end of their operations, and the result was dereliction. Certainly they were no credit to the process designers, who had simply not considered the scrapping environment as part of their responsibilities. There is now widespread agreement that this situation is intolerable, and clean-up after industrial operations has become a legal requirement in many countries.
A Happy Ending
Yet scrapping can have a positive side too, where the residues from the past retain some market value. This makes it possible to offset scrapping costs, sometimes handsomely. David, an academic friend of mine, once
took over a newly acquired hill-top research site on which stood two abandoned wooden lattice towers about 20 metres high, massively constructed during World War II to carry metre-wave antennas for some now forgotten radio system. David needed the site cleared. Where was he going to squeeze out the money for their removal? An estimate was sought from a demolition firm. The demolition man duly arrived and carefully examined the towers. Taciturn by nature, he returned to David and merely said 'Three hundred'. My friend had no objection to dickering. 'Two-fifty', he countered. The demolition man's face went into incomprehension mode. David realised there had been a failure of communication, but managed to sort things out. The masts were built from high quality teak. The demolition contractor, certain he could sell the well-seasoned wood at a good price, was willing to pay three hundred for the privilege of taking them away. What was more, he paid with a wad of fifties, pulled from a back pocket.
Notes
1. Gosling, W. (2000a) 'We do it' The Guardian Nov. 30 (UK).
2. O'Connor, P. (2002) Practical Reliability Engineering (4th Ed.) John Wiley (New York, USA).
3. Read, P. (1994) Ablaze: Story of Chernobyl Mandarin (New York, USA).
4. Medvedev, G. (1987) Chernobyl Notebook Kindle Books (Online).
5. See www.scsc.org.uk.
6. For nuclear power in general, there were new developments which seemed to offer promise. Part of the problem had been the very large power ratings of reactors. This meant that if anything went wrong the power levels to be dealt with were unprecedented. A new generation of much smaller generating stations has been proposed—using Small Modular Reactors (SMRs)—typically generating 500 MW as against 3200 MW for the Chernobyl reactor that ran wild. As described, all will have facilities for a sure reactor kill in an emergency. The larger number of SMRs envisaged, well distributed around the power network, could ease some of the management problems there also. However, the dramatic fall in the cost of renewable energy—solar, wave and wind power—and the apparent solution of their energy storage problem, makes the economic case for nuclear power increasingly hard to sustain.
7. Hilton, K. (1967) The Lower Swansea Valley Project Longman (London, UK).
16 Can Machines Think?
So what of computers, the pinnacle of chip technology, machines that some say think—or at least appear to, even if they do it differently from us? What are the consequences of their appearance? They take over much of what was previously done by people and are increasingly integrated into every aspect of our lives. The term 'artificial intelligence' has come back into wide use again, after being eclipsed for a time. If you drive a recent car, when you steer or engage a gear you may be merely sending a signal to a computer, which does the action for you.
• The watch on your wrist contains a computer chip sorting out the complexities of time and date—unless it is one of those ludicrous mechanical 'prestige' watches.
• You talk or text with your friends via a cellular mobile phone, which contains a computer itself, but also relies on several more in the network to keep it all functioning for you.
• If you become seriously ill, you will go into hospital intensive care, where a computer will watch over you, monitoring your condition.
• If you are a soldier you may fight using flying military robots—cruise missiles and drones—which depend on many computers, and artificial intelligence, in their chain of command.
• At your work, should you use a machine tool it will be computer controlled, and the desktop computer is the commonest aid to commerce.
• If you are a serious scholar or a curious seven-year-old, you can access the World Wide Web through your computer, and all human learning will be at your fingertips.
In every aspect of our life—birth and death not excluded—these 'thinking machines' will be helping in the background. We have become virtual cyborgs, living more intimately and continuously with our artefacts than humans ever previously did. Our information machines, the computers, are capable of many mind-like activities—logical decision, recall of information, inference, calculation and pattern recognition among them. So are we right to call them 'artificial intelligence'? The question has been and remains hotly debated.1 For many, the idea is deeply alarming. Do computers threaten us? Is it possible these putative 'thinking machines' could turn against us and take our place? Some fear the possibility, yet because these 'electronic brains' lack essential aspects of our nature, it is difficult to see how or why this could happen. Without emotions, they lack our concerns, our enthusiasms and ambitions. Even our ability to make decisions has been shown to depend as much on emotion as on reason. Most of all they lack our creativity.2 They have neither consciousness of self, nor fear, whether of death or anything else, so they do not suffer the existential anxiety3 profoundly influencing our own thinking. Having no consciousness of self, computers do not dream of silicon heaven. They have no personal objectives, no agenda, but only those software goals we ourselves have given them. Tyrants might use them to enforce their will, but the people behind them would be our enemies, not the machines they deploy. These differences change everything. The nightmare of super-intelligence, of digital machines whose 'brains' function like ours but so much faster that they supersede us, remains remote. No machine will fully emulate the human mind in the foreseeable future. They cannot replace us until they are capable of emotion, not merely simulated but actually felt. But, to press the point, is it not conceivable that computers could one day be designed having genuine emotions? Certainly, they can
be made to recognise our emotions, and can themselves simulate emotion even now, but actually feeling and being motivated by their own emotions is quite another matter. There are no plausible suggestions for how it might be done. It is equivalent to the problem of endowing digital machines with the consciousness of self. Nobody has much of a clue about how to do it, and from where we are at present it seems light years away. Even if technically we could find a way to work the trick, serious ethical issues would follow. If computers did manage to get really human-like, the moral challenges we have had so much trouble handling around the institution of human slavery would be replicated with them. The genuinely conscious, emotional computer is not remotely possible now; may it long remain so. Yet working together, the two 'thinking' entities, a human mind in partnership with a computer, can accomplish things neither could do alone. The remainder of the human story will surely tell of this partnership between us and these tractable digital machines. We have entered the age of the information cyborg, and we have done it without noticing. So let us return to the story of the computers we know, machines without consciousness, having neither fears nor aspirations, artefacts which, if they can be said to think at all, are not thinking like us. The twentieth century saw them in four canonical forms. Mainframe computers dominated until around 1975, although from 1957 minicomputers began to achieve increasing success. Personal computers appeared from 1974, soon displacing minicomputers, while graphical user interface (GUI) personal computers took over after 1983.
Computers: The Successive Generations
Mainframe computers were canonical designs from the beginning (around 1943) until 1975, and for many years came overwhelmingly from one source: IBM. Mainframe installations were increasingly organised around the concept of bureau computing, where users brought their programmes to the computer bureau and had them loaded and run, in a batch with others. The machines were operated by the staff of the computer bureau, usually a free-standing unit with its own management and hierarchy. Little
pre-written applications software existed back then, except some for business in COBOL. In those days the computer users themselves wrote the software in the FORTRAN or ALGOL languages. Remembered with distaste by all who suffered under it, bureau computing was a frustrating, slow and tedious procedure and a serious check to creativity, because it ruled out 'hands-on' changes and improvisation. It was better than nothing, but not much better. At that time, one mainframe builder was head and shoulders above the rest in terms of influence and impact on the market. Through a combination of large R&D investment, existing strength in office machines, shrewd marketing and an unswerving commitment to keeping its hardware working long before this could be taken for granted, IBM dominated American computer manufacture. By the 1960s it was producing 70% of the world's computers and 80% of those in the US. However, IBM did not succeed in holding this position. One threat was the danger, in a fast-changing technology market, of getting too close to the customer, who changes his buying behaviour much faster than the manufacturer can adapt.4 Another was an inflexible internal company sociology, born of long success and stability. A strong preference for internal promotion arose from a settled conviction that the best people were already with IBM. One evening just before Christmas 1987, waiting for a delayed flight at O'Hare over a whisky sour, I had a surprisingly unbuttoned conversation with John Akers, then President of IBM. He was pessimistic about the future of his company and thought himself unable to modify its internal culture. Collectively, IBM believed itself invulnerable, and its powerful mainframe faction could not be persuaded that the chance of losing dominance in computer hardware manufacture was real. Nothing is more dangerous to a company than long-continued success. Akers left IBM in 1993, replaced by Louis V. Gerstner Jr. (b. 1942), the first IBM boss recruited from outside. Gerstner rescued them from disaster by a policy of determined radical change, the only way possible. I admired his insight, courage and tenacity. Minicomputers appeared from 1957, remaining significant until the end of the century. By exploring the potential of complex silicon chips early, which mainframe designers were slow to do, the minicomputers
achieved near-mainframe performance at a much lower size and cost. DEC's PDP-8, first shipped in 1965, was the first practical alternative to bureau computing for many users, me among them. The Digital Equipment Corporation (DEC) achieved a position in minicomputers similar to IBM's in mainframes. Founded in 1957 by Kenneth Olsen and Harlan Anderson, both MIT electronics engineers, DEC employed 120,000 people in 1990 with over $14 billion annual turnover. Olsen had immense success at first with 'dumb' user terminals linked to a minicomputer for providing computer services to individuals. He could not bring himself to believe that personal computers could displace them, as in the out-turn they rapidly did. After 1990 DEC began to lose market share fast and was finally bought by Compaq Computer in 1998. A significant minicomputer manufacturer in the 70s was Hewlett Packard, but they made a successful transition to building personal computer hardware and peripherals, even ultimately acquiring Compaq, and with it the ghost of DEC. Sometime before, they had already managed a dramatic shift in their company's goals, from a highly respected manufacturer of precision laboratory instruments to a major player in computers and peripherals. Such commendable flexibility is rare indeed.
The Universal Computer Culture
Computers small and inexpensive enough for use in our homes first became feasible in the 1970s. They were used by individuals working alone, not through a bureau. Advances in semiconductor technique—particularly large-scale circuit integration—made it possible to construct a powerful computer on a single semiconductor chip, containing all the arithmetic, logic and control functions needed by a central processing unit. A small firm named Micro Instrumentation and Telemetry Systems (MITS) launched the first personal computer, the Altair 8800, in 1974, using an Intel microprocessor. At first, the memory size was as little as 256 bytes, later much extended. Launched in self-assembly kit form, the Altair sold for $395. Subsequently, fully assembled machines and enhancements to the basic Altair appeared, offered by MITS and others.
Many regard H. Edward Roberts (1941–2010), head of MITS, as the inventor of the personal computer. Though Federico Faggin designed the chip, it was the Altair computer that started the revolution. Just before Roberts' death Bill Gates (b. 1955) visited him to pay his respects. The Altair was appreciated by geeks and enthusiasts, but anything like it had no chance of achieving a mass market. The personal computer industry really took off from 1977, when Apple Computer, founded by Steve Jobs and Stephen Wozniak (b. 1950), introduced the Apple II. This was the first mass-produced personal computer, factory assembled on a line like a television set. Its success established Apple Computer, which in time grew into the world's largest computer company. But this is to get ahead. Bill Gates and Steve Jobs must be regarded as the joint founders of the universal computer culture of our time. It is interesting to compare the backgrounds of the two men. Jobs was the child of an unmarried twenty-three-year-old woman student at the University of Wisconsin and a Syrian doctoral candidate there. The biological father's family firmly vetoed marriage for the couple, so Steve was adopted by the Jobs family at birth, in San Francisco. He always regarded his adoptive parents as his 'true' parents and they did a great job bringing him up. His interest in electronics is said to have blossomed from ten years old, which seems surprisingly late to me, given his amazing subsequent history in the industry. I was reading electronic circuit diagrams before I could read English, and I was far from alone in this. He did spend a brief period of study at Reed College, from which he withdrew because of the financial burden on his parents. In truth, in the technology of computers, he was an autodidact. All his days he was committed to the Bay Area counterculture, the 'alternative' lifestyle. By contrast Gates' father was a respected lawyer, his mother a bank director. He seems never to have been attracted to the counterculture. The expensive private Lakeside School, near Lake Haller, at the northern boundary of Seattle, provided his education. It had a favourable staff/student ratio, said to be 1:7. Its policies towards its students were both flexible and liberal, allowing Gates to spend an increasing part of his school time on the computer, excused mathematics classes for the purpose. While at school, he gained access to a DEC PDP-10 belonging to
the Computer Center Corporation. He developed advanced skills in software engineering but never completed a formal university-level course. In his chosen subject, Bill Gates was another autodidact. After the school discovered his programming skills, Gates wrote a program to schedule students in classes, arranging that he himself was placed in classes with 'a disproportionate number of interesting girls'. He is said to have remarked 'It was hard to tear myself away from a machine at which I could so unambiguously demonstrate success'. In 1973, at the end of his school career, he sat for the national SAT for university admission, scoring 1590 out of a possible maximum of 1600. Intended for a law career he took up a place at Harvard, but it did not work out that way, and after a year he dropped out, with the full approval of his parents, who recognised his entrepreneurial skills even at this early stage. With his school-friend Paul Allen (1953–2018) he founded Microsoft, which led to both MS-DOS and Windows, the two most widely used personal computer operating systems until the coming of iOS, and in their time installed on the overwhelming majority of computers. So how do the two men compare? Their childhoods and early lives were different, yet they both became computer autodidacts. They achieved a most powerful command of their subject, despite being college dropouts, no doubt from exceptionally high intelligence combined with a driving motivation to build the computers of the future. The 'charm' of computing bewitched them both. They both understood that the distance between the user and the digital machine, characteristic of the earlier computing age, had to be broken down. Only a personal computer could do that. Despite the success of the Apple II and the MS-DOS PCs, both men recognised the shortcomings of machines controlled by symbol strings—precursors and no more to the people-oriented 'real' computers to come. The great difference between them is that for Jobs the design of the hardware had primacy, and Apple produced some of the most beautiful and durable computing machinery the world has yet seen. For Gates the hardware was always a given, out there, built by others for him to work his magic on. He accepted the concept of the general-purpose digital machine at the most profound level, a brave, far-sighted thing to do. We needed both of them; their monument is our present world.
What Happened to the IBM PC?
IBM did not enter the small but fast-growing market for personal computers until 1981, when it introduced the IBM Personal Computer, with a configuration and appearance not so different from the Apple II. Bathing in the reflected glory of IBM's mainframe dominance and their deservedly high reputation for user support, the IBM PC was soon the world's favourite personal computer. Its microprocessor was the Intel 8088 and its operating system Microsoft's MS-DOS. Both rapidly became canonical. It had a primitive interface, but no worse than its rivals: key sequences served not only for alphanumeric data entry but also performed all control operations. The system needed a good deal of learning—a geek's delight. Later, it was the coming of the GUI, first the Lisa, then Mac, then Windows, that created 'the computer for the rest of us'. The IBM PC achieved a major share of the market early, but IBM was unable to dominate personal computer supply as it had mainframes. New chip-based hardware technologies were making them cheaper to set up for manufacture, so smaller companies could enter the market. If rivals tried to achieve a USP by competing with IBM on some technical basis—additional computing power or memory—their machines, made using Intel microprocessors and the MS-DOS operating system, came to be known as 'IBM compatibles'. They were called 'IBM clones' if, with identical specifications, they competed on lower price. Continuing advances in software and operating systems were matched by rapid development of microprocessors of ever-greater speed and complexity, with resulting increases in personal computer power. In real terms, computing costs tumbled and went on tumbling. In 2005, IBM sold its personal computer business to Lenovo, a Chinese company. Why did newcomers find it possible to push established computer builders aside? Too concerned with the problems of current products, and with thirty years of successful 'more of the same' innovation behind them, not many were able to see the long-term danger. In the 1980s I visited the Putney headquarters of ICL, a British mainframe manufacturer, to see a new computer about to be marketed. A potential customer, I chatted with one of their senior engineers. The IBM PC was selling well
I observed; were they not worried about potential competition from personal computers? He gave me a tolerant, kindly smile and led me across the room to a cabinet about the size of an under-bench domestic refrigerator. He slapped it fondly with his hand. 'Fifty megabytes!' he said proudly. 'No personal computer will equal that'. I was less than half persuaded, aware that Corvus Systems already had a 20 MB personal computer disk on the way. Today I have a store on my desk of 2 TB capacity which I can easily lift with one hand. It works well with my iMac. ICL was formed by merging International Computers and Tabulators, English Electric Leo Marconi and Elliott Automation in 1968. Large business mergers were, at that time, seen by many in government as the key to forming single powerful national companies combining the virtues of their constituent parts. In practice, the business world, of which they had no experience, rarely works like that. What follows a merger is a short period of internal political battle, from which one 'partner' emerges victorious and the others fade into insignificance. Reminiscent of Japanese Sumo wrestling, after a brief period of extreme activity, one contestant is left standing. Enhanced in status and wealth, he may be panting a little, but is smiling and unchanged. ICL was taken over in 2002 by Fujitsu.5
Notes
1. Johansen, M. (2018) 'Science Fiction at the Far Side of Technology' in Baron, C., Halvorsen, P. & Cornea, C. Science Fiction, Ethics and the Human Condition Springer (Cham, Switzerland).
2. Damasio, A. (2008) Descartes' Error Vintage Digital/Kindle (online).
3. Tillich, P. (1975) Systematic Theology, Vol. 2 U. of Chicago Press (Chicago, USA).
4. Christensen, C. (1992) The Innovator's Challenge Thesis, Harvard Grad. Sch. of Bus. Admin. (Boston, USA).
5. Chandler, A. (2001) Inventing the Electronic Century Harvard Univ. Press (Boston, USA).
17 Lady Lovelace’s Revenge
There is something unique about computers. It is not that they are fast and clever, in their own nonhuman sort of way. They have a quality quite new in the non-living world of built or fabricated things. They are general-purpose machines. Until now, machines have always been specialised to a particular function, whereas computers are not.
• The printing machines in a newspaper works do one thing only, but do it well: they print. A computer, being a general-purpose machine, can do the same thing, given the right software and an ink-jet printer. It does it more slowly and at a higher price, but it can do it, even so.
• Then again, in days gone by a fax machine would take messages from the telephone line and print them out. A computer can do the same, and with fax, e-mail or text, as you please. My iMac handles my e-mails, letting me exchange messages at a rate that Victorian letter writers would have envied, despite their five postal deliveries a day. Yet, the self-same machine on Skype software lets me see and talk to my friends and colleagues around the world—in the United States, Europe, the Middle East and Australasia—all for free, and is able to do that because of its general-purpose character.
• My iMac is also most of my library. Encyclopaedias, books, maps are all at my fingertips, literally. Naturally, I compose my own books on the computer, serious books like this one and detective stories that I write for amusement. What is more, it corrects my dyslexic spelling, makes the chapter list and does all the other tedious things writers hate. It translates between languages, and not badly for a machine. Willard Quine once said that all translation is indeterminate. True, and some translations are more indeterminate than others, but computers get steadily better.
• Photoshop tidies up my pictures on the computer I use for everything else and gives me the portrait painter's privilege of making those tiny subtle changes that warm the sitter's heart. I only ever photographed one woman who scolded me for making her portraits too glamorous, but she was a Harvard professor, self-consciously dedicated to truth.
• When I nudge my iMac towards the music files it offers me a beautiful imitation of Gilbert Rowland playing harpsichord sonatas by Antonio Soler. I can also hear the voice of Siwsan George singing Welsh folk, or Fado from the late, great Amália Rodrigues—music to slash your wrists by, my daughter said. But my favourite is Adele Anthony playing Philip Glass's Violin Concerto. The second movement brings a lump to my throat every time.
• The internet is my shop, for food, wine, clothes, furniture, anything—all at sharp prices and delivered to my door.
And on and on and on; this is only a fraction of the possibilities. It was inevitable things would go this way because from the ground upward the twenty-first-century electronic computer was designed as a general-purpose digital machine. The notion of a general-purpose machine, inspired by Alan Turing (1912–1954) and his famous paper of 1936, is something quite new; there has never been one before in human history. Within the scope of its computing power, and provided it has the right peripherals, the kinds of things these digital machines can do have no limit. It is the software that specialises the digital machine to any one particular task, and without software, it can do nothing. All technology is the interaction of techniques with design. A general-purpose machine can be regarded as a 'black box' which makes available a range of
powerful information processing techniques, but it is not until it is endowed with software that it can perform a technological function. It is the software which embodies the design package able to turn techniques into technology.
How Things Began
In 1843 Augusta Ada, Countess of Lovelace, published the first software: a computer programme for calculating the Bernoulli numbers. Although she was a mathematician well regarded by her contemporaries, the epoch-making significance of writing software for use with a general-purpose computer was not understood. It did not help that she wrote her programme for Babbage's analytical engine, which did not yet exist and was never built. But if Ada was ignored in the nineteenth century she has certainly had her revenge in the twenty-first. New computer apps keep appearing, many not envisaged at all when somebody designed the hardware they are to be used on. Because of the ultimate versatility of the general-purpose computer they work well even so. In the mid-twentieth century, at the dawn of the computer era, we mostly wrote our own software. This led some in the computer world to regard the assertion of property rights in software as somehow not legitimate, despite the skilled labour required to create it. Software, they argued, should be freely available to all, without restriction.1 But it was Bill Gates who understood the adoption environment for software better than anybody else. An early campaigner for proprietary paid-for software, in 1976 Gates made himself unpopular by complaining about illegal copying of Microsoft BASIC. Designing software packages was labour-intensive, he argued, and would make rapid progress only if the software were paid for by those who benefited from using it. This became the business model for the software industry. Any dispassionate view of what followed could hardly deny that it produced excellent software, in a timely fashion and at prices the market was willing to pay. The dispute over whether a successful business could be based on building and selling software was over long ago. Bill Gates and Microsoft demonstrated the positive answer beyond question.
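As an aside, the calculation Ada's 1843 programme addressed is easily within reach of any modern general-purpose machine. The sketch below is my own, not a rendering of her actual programme for the analytical engine; it uses a standard recurrence for the Bernoulli numbers and exact fractions.

```python
# Bernoulli numbers via the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0, with B_0 = 1.
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list:
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
    b = [Fraction(1)]
    for m in range(1, n + 1):
        total = sum(comb(m + 1, j) * b[j] for j in range(m))
        b.append(-total / (m + 1))     # solve the recurrence for B_m
    return b

print([str(x) for x in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

On Babbage's engine each of these steps had to be laid out by hand as a sequence of operation and variable cards; today the same arithmetic is a few lines of software, which is rather the point of the general-purpose machine.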
The most important software for the first personal computers was the operating system. The operating system determines the look and feel of the computer for the user, giving it its distinctive 'personality'. The facilities it provides set the programming style for application software designers, and also the user cultures growing up around it. The first extensively used operating system designed for personal computers was Control Program/Monitor (CP/M) (1973). The designer was Gary Kildall, employed by Intel, the great microprocessor manufacturer. A crucial problem facing his design was the painfully small memory capacity of personal computers at that time. Unbelievably, the earliest versions of CP/M are said to have needed only 4kB of memory. Intel seems to have seen little commercial value in CP/M—the first of several business judgments, surprising in hindsight, that characterise the evolution of personal computers to their canonical form. At no cost, Intel gave the rights in CP/M to Kildall, who marketed the software by mail order, ultimately selling over 600,000 copies. Leaving Intel, he founded Digital Research (1974), with Dorothy McEwan. CP/M, improved and developed, received a great boost when adopted for the Apple II (1977). Digital Research had a turnover of $44.6m in 1983 but thereafter declined fast, as CP/M was displaced by Microsoft's MS-DOS. The company was sold in 1991. On 8 July 1994, Kildall fell and hit his head in a Monterey biker bar. The circumstances remain unclear. He died three days later. By 1980 Tim Paterson, an employee of Seattle Computer Products, had developed a new operating system named QDOS (aka 86-DOS), intended for the new Intel 8086 processor, doing it single-handedly in two months, using the CP/M user's handbook as a guide to the features he needed to include. Later the US Courts ruled QDOS independent of CP/M. Many believe QDOS stands for 'Quick and Dirty Operating System', but this has been denied. IBM finally decided to go into personal computers in 1979. In 1980 they talked to Bill Gates about Microsoft versions of the BASIC programming language written for small computers. They also sought an operating system, but Microsoft was not offering that type of software at the time. IBM considered CP/M but proved unable to close a deal with Kildall. Believing that CP/M was the only viable system for small
computers, he overplayed his hand. The problem derived from two radically different company styles, leading to mutual incomprehension and mistrust. Thinking they might do business better elsewhere, IBM returned to Microsoft, giving them a contract to provide a new operating system. Gates bought all rights in QDOS from Seattle Computer for $100,000. After a rewrite, fixing bugs and making other improvements, he re-christened the more mature system Microsoft Disc Operating System (MS-DOS). Gates sold MS-DOS to IBM on a royalty basis. They accepted a non-exclusive deal, allowing Microsoft to licence MS-DOS to other users. This may look like an error of judgement by IBM, but it made sense then—the more modest fee demanded for a non-exclusive deal enabled IBM to get within a competitive price target for the PC. Gates was presented with a unique selling proposition (USP) for MS-DOS: it was the operating system chosen, over all others, by the mighty IBM. Microsoft sold it widely to hardware builders who cloned IBM machines or built IBM compatibles. This had two important effects: it built Microsoft into a major company, and it produced a host of competitors for IBM personal computers, distinguished mainly by lower price. MS-DOS became the canonical personal computer operating system, but at the same time, the margin attainable on the IBM PC is rumoured to have fallen to near 25% of the ex-works cost, a small fraction of the margin on IBM mainframes. If true this would have helped a shift in political power within IBM towards the mainframe faction. A long-term fall in the value of mainframe sales as a percentage of the computer market had begun, but many at IBM were not convinced of it.
Pretty as a Picture
In both CP/M and MS-DOS the sole mode of input to the computer was the keyboard, which could only enter strings of letters and figures. These had to be used both for text entry and for commands. The computer keyboard thus had two modes of use, and a distinctive key ('the golden key') switched between text and commands—clumsy and
accident-prone. Depending on the golden key, the same letters entered could be a command or a word. The letter 'e' was particularly hazardous because as text it was just 'e' but as a command, it meant 'erase'. What was needed was two separate input channels, one for text and data, the other for commands. At the time there seemed no sensible alternative to text input through a keyboard, but how to input commands? Douglas Engelbart (1925–2013) found an answer, inventing the computer mouse in 1968 while working at the Stanford Research Institute. It hardly seems necessary to describe it. He did not, however, correctly foresee its future use environment. Fearing it would be incompatible with a conventional two-handed keyboard, he put great effort into developing a 'chord' keyboard, which could be used with one hand only. Each entry required depressing several keys at once, as with some musical instruments. I used such a keyboard for a while but found it without any obvious advantage. Troublesome to learn, it never achieved general acceptance. People with certain physical challenges welcomed it though, so it found a small but important niche for itself. Before long Engelbart's preoccupation with the chord keyboard helped kill off funding from ARPA for his project. Some of his co-workers left for Xerox's Palo Alto Research Center (PARC), due to differing views about where computing was headed. Engelbart saw a future of networked, timeshare mini-computing, as did many back then. However, in the late 1960s, when the temper of the times, political and social, made centralised power more than usually unfashionable, the enthusiasm of the young was inevitably captured by personal computers.
already knew that the teachers were lying to me’. Impressed by the work of the psychologists Jean Piaget (1896–1960), Lev Vygotsky (1896–1934) and, especially Jerome Bruner (1915–2016), Kay introduced radical ideas into his design thinking. He came to regard schoolchildren as the obvious group to work with on new computing projects.2 Events justified him; the interaction between the designers and children created the graphical user interface we all now use. The computer was rescued from the ghetto of the geeks, who alone learned to fluently use earlier designs controlled by symbol strings. As Apple later put it, the graphical user interface created ‘the computer for the rest of us’. Kay’s revolutionary Alto computer was completed in 1973, and a batch was built for internal Xerox use, costing $12,000 each. The first recognisably modern personal computer, it embodied a mouse, menus and icons, as well as television-style bit-mapped graphics. Adele Goldberg worked closely with him, designing software for the computer. Pointing to pictorial symbols (icons) or lists of menu choices and clicking a button was enough to call up the desired functions. In a way now familiar, the user could select commands, call up files, start programs and do routine tasks, solely through a mouse. In the new interface, applications, windows, pull-down menus and dialogue boxes were always used in a standardised way, giving a uniform ‘look and feel’ to all software.3 Alto received an enthusiastic reception by the technical community when first exhibited in 1973. The new computer looked as though it had a golden future, but it was not to be. The Xerox management of the day decided that commercial exploitation of Alto did not fit their plans for their business, and canned it. In retrospect, this has seemed to many like a cash-rich company perversely choosing not to become a major player in computers. ‘[Xerox] could have been, you know, a company ten times its size’, said Steve Jobs in 1995. ‘Could have been IBM—could have been the IBM of the nineties. Could have been the Microsoft of the nineties’. Yet, the actions of Xerox senior executives were entirely rational, given their situation. They ran a successful photocopier company, soon proving an enviable ‘cash cow’. If Xerox went into computers there must be a risk to it from the negative cash flow that such a major venture would initially cause. Well-placed companies, such as IBM seemed to be, might compete from a stronger background. Even if the project succeeded, the influence
of the people heading the copier business would be threatened by those running the computer division. If it grew ten times larger, as Jobs suggests, the copier business might sink without trace. Xerox executives were understandably cautious: they opted to stay with their well-established and highly successful mission. A great opportunity was there, but it seemed far too risky to take up.
A Courtesy to Apple
Apple Computer, who already knew about the Alto developments, were negotiating a financing deal with cash-rich Xerox. As a courtesy, they were invited to Xerox. A party of eight visited in December of 1979. They saw Kay's GUI at work, and Apple sought rights to use it. These were granted at no cost, because Xerox considered they had no commercial value to its copier business. Adele Goldberg recalls that, hoping Xerox would run with Alto, she argued for three hours against giving the presentation to Apple but was finally ordered by her management to do so. Did they hope that Apple's intervention might remove the threat of a hazardous in-house computer initiative? Apple then commenced the design of Lisa, the world's first commercial graphical user interface computer, and a significant advance on Alto. Lisa shipped in the spring of 1983 at $12,000. It was a failure due to its high price. However, the Lisa experience led to the Apple Macintosh computer, successfully introduced in 1984. Apple's Jef Raskin (1943–2005) had been developing a low-cost machine to replace the Apple II, which he called Macintosh—a 'computer by the million' for the non-geek market, made on high-throughput production lines. Its target price was $1000. It is difficult to be sure what followed, later hotly disputed by the parties. Despite powerful pro-GUI feeling inside Apple, in line with his price objective, Raskin felt unable to adopt a GUI or to give the machine the power such an interface needed.4 Steve Jobs displaced Raskin on the Macintosh project in 1980. The design was completed as a more powerful GUI machine by Jobs' hundred-strong design team, which had an average age of twenty-two.
Madly in love with Lisa from afar in 1983, I was too hard-up to offer for her. I began my deep and meaningful relationship with Macintosh early in 1985. My first had 128k of RAM, helped out by two single-sided 3.5-inch floppy discs, each holding 350k. The processor ran at around 8 MHz. Later I traded up, increasing the RAM to 1M, and added a 20MB external hard disk, costing nearly $3000. Could such a large store ever be filled? It seemed impossible to me then. Thirty years on I back up to a chip with over 10,000 times more capacity than that hard disk. It cost less than $50. In 1983 the Apple Board decided it needed to strengthen its top leadership. John Sculley was recruited by Jobs from PepsiCo, where he had been President, to become Apple's CEO. The Macintosh was launched in 1984, when Sculley set the price at $2495. Jobs thought that was $500 too much for it to succeed as a high-volume product. This difference opened a rift between them. In 1985, after increasing disagreements, Jobs left Apple when the Board refused to make him CEO, displacing Sculley. After his departure, and Sculley's too in 1993, the firm went into decline. Apple resumed growth in 1997 when Jobs returned. Fifteen years on it became the world's most valuable public company, but by then Jobs was dead. Sculley had decided in 1984 not to licence the Mac OS to third-party computer builders. Still seeing Apple as primarily a hardware supplier, and fully aware of the painful IBM PC cloning experience, the aim was to induce purchasers to choose Apple hardware in order to benefit from the graphical user interface. Some have argued that this was an error of judgement, keeping MS-DOS alive a little longer and denying Apple potential software domination. So why was the decision made? According to Sculley, the problem was fundamental: the Mac graphical user interface had been realised partly in hardware. It could therefore only be incorporated in a Mac-style computer. By then many users believed that the time for the graphical user interface (GUI) had come. Led by Bill Gates, in 1985 Microsoft introduced Windows, a GUI operating system. Early versions were open to criticism but led on to Windows 3 in 1990, a greatly improved, highly successful implementation. When installed it gave IBM PCs, clones and compatibles a Mac-style GUI. The rest is history.
Use of personal computers continued to spread as they became cheaper and more user-friendly, while their application software proliferated. As early as 1997 some 40% of households in the United States owned a personal computer. This rate and scale of adoption would have been impossible without a GUI. Through the 80s and 90s, the power of personal computers continued to grow exponentially, as predicted by Moore's Law. Today top-end personal machines have computing power far greater than the mainframes priced in millions with which the modern computing age began. GUI personal computers are now the canonical computer design form, and Microsoft Windows rapidly became the canonical operating system. Windows remains so to this day for desktop and laptop computers, with Apple's elegant OS X and Linux as dissenting designs. For tablets and smartphones new operating systems have evolved, like iOS, still with graphical interfaces but moving away from the desktop metaphor. Meanwhile, the production of application software has not stood still. In the days when Microsoft and Adobe were growing into great enterprises, their business model envisaged ready-to-use software distributed to purchasers in recorded form, at first on magnetic floppy discs but later on DVDs. Subsequently, with the rapid spread of broadband access, the practice of distributing software through direct download to the purchaser expanded fast. When this happens costs for recording media, the recording process, packaging and mailing are eliminated. A fall in viable prices became possible, leading to the emergence of a new software marketing tool: the app. An app is a software package available for distribution solely by download, and the prices charged are much more modest in real terms compared with many of those asked for earlier generations of software. The cost of entry into the market for a new app builder is low, so there are many more of them. This has increased competition and put further downward pressure on price. The result was an explosion in software buying, and thus in the range of capabilities software can give a computer, tablet or smartphone.
Why Did Computers Evolve That Way?
Using the insights into design this book offers, can we see why the modern computer went the way it did? CP/M followed a classical invention-push and market-pull model and might have improved further, making entry prohibitively expensive for new operating systems, but it did not. The severe limit on available memory, imposed by early computer chips, restricted its development. So the investment needed to take a competing system from push to pull was too low to exclude new entrants through the 1970s. Kildall's impossible negotiating style made IBM turn to Microsoft for an operating system, which was crucial to what followed. IBM's non-exclusive deal with Gates handed Microsoft a powerful unique selling proposition (USP); so MS-DOS, 'the operating system that IBM uses', soon became the canonical PC operating system, taking over the former CP/M niche. The amazing rapidity of hardware improvements in small computers made possible new features in MS-DOS, raising the cost of entry for any later competitors, so that it remained secure right up to the arrival of the GUI. The coming of the GUI was crucial to the emergence of the computer as indispensable to modern developed societies. Its present near-universal use would not have happened with computers interfaced through symbol strings. Engelbart, Kay and Goldberg, with their co-workers, created the GUI between them. As so often, these exceptional people 'knife and forked' a doxastic solution, and had essentially completed the task by 1973. The graphical user interface, an important step towards user-friendliness, was characterised by powerful 'invention push', attributable to the usual three factors:
• Great charm, in many cases stimulated by reading the famous 1945 Atlantic Monthly article 'As We May Think' by Vannevar Bush. His vision proved an uncannily accurate prediction of the future of computing, including many features of the graphical user interface.
• Perceived feasibility, from growing computer power at rapidly falling cost, and
• Perceived market, from the potentially large numbers of ‘non-geek’ users—Apple’s ‘computer for the rest of us’ that the GUI alone made accessible.

To succeed the GUI needed both hardware and software innovations, new techniques and new designs, but it was held back because ‘market pull’ was so long in coming, delayed by multiple causes, among them:

• Engelbart misunderstood the use environment, doubting the mouse could be compatible with a conventional keyboard.
• Xerox chose not to exploit Alto at all.
• Sculley set Apple prices to reach early profitability, the objective for which he had been appointed.

In the development of the modern personal computer, decisions were taken by various players that may seem questionable in hindsight. The universal adoption of the GUI personal computer was probably delayed by at least five years, compared with the best conceivable scenarios. The resulting damage to growth of the world economy is real but hard to estimate. Even though it was a catch-up product, ‘Windows’ let Microsoft scoop the pool because it did not require a hardware change, being compatible with the large number of Intel-based computers hitherto running MS-DOS. This was what was widely awaited at the time. By the early 1990s, the computer had achieved the canonical form of a PC with GUI, but the dominant player was neither IBM nor even Apple. The winners were Microsoft and the low-cost PC hardware builders who chose the Windows operating system. Gates’ business plan, based on the sale of closed-source software, succeeded beyond all expectations except his own.

Since Kay and Goldberg created the GUI there has been a large volume of excellent work on human-machine interaction, much of it stimulated by enthusiasm for computer games. Brenda Laurel has used a theory of the theatre to achieve useful insights.5 A pretty example of a theatre-style software package is the earlier version of Apple’s Time Machine backup software, which, in a manner reminiscent of the Star Trek television series, successfully exploited the paradigm of time travel.
Before the end of the twentieth century, the original canonical form of personal computer hardware had already split in two: desktop machines and portable laptops. Subsequently, two more canonical forms appeared: the smartphone and the tablet, led by the iPad. They have new GUIs, moving away from the desktop metaphor, canonical since its introduction by Kay and Goldberg forty years before.
The Social Role of the Versatile Digital Machine
Over the last two decades, specialised apps have appeared, making possible the birth of social media. These inexpensive and widely accessible designs let anyone communicate with other people, to build relationships or perhaps to pursue some shared concern. The best known are Facebook and Twitter; each has its own distinctive character, better experienced than described, particularly as the published descriptions tend towards colloquial geek-speak. More specialised social media also flourish. LinkedIn is oriented towards business and industry, while YouTube specialises in video, and has some truly remarkable film footage in its archives, including Fritz Lang’s Metropolis (1927).

Social media allow people to exchange, publish and access information through the internet. Operating through dialogue, with the many talking uninhibitedly to the many who receive, they often enable communities to reach a consensus. They are of great and growing popularity: internet users visit social media sites more often than any others. The time spent on social media in the US was 1100 million hours in July 2011 but grew to 2020 million hours only a year later.

The new media give ordinary people a voice, yet some claim they give no influence. Recent history contradicts this flatly. They have given a powerful new weapon to the proponents of participatory politics. Web sites have been opened to facilitate this possibility, the best known being one (www.change.org) which specialises in running petitions. At the time of writing well over a hundred million people worldwide are participating in varying degrees.
• In October 2011, twenty-two-year-old Molly Katchpole started an online petition. It protested against an announced $5 fee imposed on debit cardholders by the Bank of America. Within three days signatures on the petition numbered 75,000, later rising to 300,000, and over 21,000 people promised to close their accounts with the bank if the charge were imposed. Bank of America withdrew its proposed charge.
• In December 2011, the proposed Stop Online Piracy Act (SOPA) became a feature of US politics. Presented as a means of preventing copyright infringement and intellectual property theft on the Internet, it appeared to have strong support in Congress. Outside Congress many had a different view, fearing it could destroy the freedom of the internet and lead to widespread censorship. Coordinated through social media, a synchronised Internet blackout was organised to protest against the legislation. Congress responded: the bill was blocked.

This is not a purely US or developed world phenomenon.

• In August 2013, Meriam Yehya Ibrahim Ishag, reared as an Orthodox Christian in which faith she remained, was condemned to death as an apostate from Islam. She had first married a Muslim, which the court deemed had made her a Muslim herself. After that marriage ended in divorce she later married a Christian. She was brought before a Sudanese court charged with adultery, because the second marriage (to a non-Muslim) was considered invalid. She was also charged with apostasy from Islam. She was found guilty and sentenced to 100 lashes and death. There was an immediate international outcry. In England Emily Clarke started an online petition to the Government of Sudan for clemency, attracting over a million signatures. Meriam was freed in July 2014.

Social media can influence events to a major degree. The US presidential campaign supporting Barack Obama used social media to produce a record turnout in 2008, and the same was true in the 2012 elections, which relied on social media more heavily than ever. Social media do impact politics around the world. They are irresistible to those who have a dedicated cause because they can reach large audiences and create support networks online. True, internet activism is
overwhelmingly the domain of the under-thirties at present.6 They are the natural early adopters, but this way of doing things will surely spread to older age groups.
A Voice Without Influence?
In the late 1980s, I was spending an afternoon at the Institute of Directors building on Pall Mall, London. I had eaten my lunch there and settled down quietly to read. Late in the afternoon, I heard chanting in the street from a group of anti-capitalist protesters. As they passed the front door of the Institute of Directors they began to shout ‘Come out you bastards!’. Seeing no reason not to oblige them, I went out through the door and stood on the steps. They were no vast multitude, only a few hundred protesters, constrained to a fairly tight column in the roadway. I waved in a friendly fashion and there was a momentary silence, then somebody shouted ‘Who are you?’, as uncomprehending faces turned towards me. ‘I’m one of the bastards’, I explained. They seemed puzzled by that. I think I looked wrong, to their eyes, for a representative of the boss class. Nothing happened for a few moments, then they began to march along Pall Mall again, with me alongside them. As we drew level with the Athenaeum Club, feeling hungry, I decided to drop in for a cup of tea and a toasted teacake, which it does extremely well. So I left them, still marching on in the direction of St. James’s Palace and chanting ‘Maggie, Maggie, Maggie—out, out, out!’ Margaret Thatcher was prime minister at the time.

They had a voice, nobody stopped them and I would have been indignant if anybody had tried, but they had not the slightest influence on the politics of the day. No doubt they felt they had witnessed for what they believed, but that satisfaction was the only pay-off they received. Though participants hate to admit it, the same was true of so many of the marches and demos of the era. Those involved felt good, striking a blow in a virtuous cause, but they had no political effect—the era of a voice without influence indeed. Now the social media have changed everything. A million signatures on a petition are far harder for a government to ignore than a few hundred people shouting in the street.
But there is more. Once recorded in any form, no secrets, state or private, are any longer safe anywhere. The social media have the effect that the actions of one conscientious or disaffected person can result in irrepressible worldwide publication—remember WikiLeaks? What does this do to established forms of political activity? Are social media campaigns making democracy work better than parliaments can? Their advocates think so, and many people get into the debates who might otherwise not be heard, but their specific intent is to force the hand of elected governments. It all depends on what you mean by democracy. The doxastic method has constructed a social revolution, a tiger nobody has yet shown much skill in riding.
Notes
1. The Free Software Foundation, launched in 1985, energetically promoted this view. The seemingly ‘immaterial’ nature of software encouraged such thinking. A variant of the ‘free’ software movement favours ‘open source’ software—source code freely available to all, to develop their own variants. The not-for-profit Open Source Initiative favours this approach. Notable successes include the Linux operating system and the Firefox web browser.
2. Winnicott, D. (1971) Playing and Reality Tavistock Publications (London, UK).
3. The Kay-Goldberg team also created the Ethernet local area networking protocol, linking Alto computers on the PARC site into a functioning net. Ethernet is still in widespread use half a century later.
4. Levy, S. (1994) Insanely Great Viking (New York, USA).
5. Laurel, B. (2013) Computers as Theatre (2nd edition) Kindle Books (Online).
6. Cohen, C. & Kahne, J. (2015) ‘New Media and Youth Political Action’ in Allen, D. & Light, J.S. (eds) From Voice to Influence: Understanding Citizenship in a Digital Age U. of Chicago Press (Chicago, USA).
18 The Dark Side
Technology is our human thing: it makes us what we are, but we are not wholehearted about it. Indeed, its manifestations sometimes evoke strongly negative responses. The robot from ‘Metropolis’, Fritz Lang’s film masterpiece of 1927, symbolises this widespread fear of technology. A metallic travesty of a female body, controlled by evil, it creates widespread chaos and destruction. It meets its end burnt at the stake like a mediaeval witch.

Yet experimental evidence for our dependence on technology is compelling. If technology were to decline or die out, so also would we. Cambodia provided a recent example. In their bizarre social experiment, from 1975 to 1979 the Khmers Rouges attempted to reorganise their society to use as little technology as possible. Maybe hankering after a Marxist variant of the ‘noble savage’ of Michel de Montaigne (1533–1592), they dreamed of a subsistence agrarian economy, ‘free of Buddhism, money or education’. Their ‘revolution’ was led by a failed engineering student, Pol Pot (1925–1998). He was charismatic but had been repeatedly unsuccessful. His first failure was at a prestigious lycée in Cambodia, to which he gained entrance through the influence of his female cousin and sister, both close friends of the king. Moving to Paris he studied engineering at
the École Française de Radioélectricité (1949–1953) but left after failing his examinations. In power, he opposed both education and technology and was nothing if not radical. Cambodian cities were emptied of people, expelled to the countryside to live as best they could. Many starved. It was criminal to be literate or speak a foreign language unless specifically authorised. Western-style medicine was replaced by traditional herbal remedies. Of a population well over 7 million at takeover, some 2 million died in four years, by execution, starvation or untreated disease. A good thing, claimed State-controlled radio, since only 1 or 2 million were needed to build the new society, or could be supported by it. As for the others: ‘To keep you is no benefit, to destroy you is no loss’, was the official slogan. It could not last. With Vietnamese assistance, the Khmers Rouges were driven from power in 1979.

Perhaps the classic attempt to reverse technology-driven social change was the outlawing of firearms in Japan by Ieyasu Tokugawa, coming to power as Shogun (a type of military dictator) after the battle of Sekigahara (October 1600). His victory was made possible by extensive use of the tanegashima, a firearm developed from the European arquebus. Guns had been manufactured in Japan since 1550, and by the end of the sixteenth century there were a third of a million of them in the country. They allowed Ieyasu’s Eastern Army of 75,000 to defeat his opponent’s 120,000-strong Western Army, who relied on sword, bow and spear. Wisely, 40,000 of them defected during the battle.

Yet, despite the source of his victory, once in power Ieyasu Tokugawa became convinced that firearm technology would undermine the Japanese social order. The sword-fighting skills that ruling-class men, and only they, traditionally acquired through long adolescent training would become obsolete. A low-class person with a tanegashima could kill the greatest sword master. This could not be allowed, because it would subvert the social order. So for more than two centuries the import or manufacture of firearms and technology products was harshly suppressed. However, Japan was obliged to recognise the resulting national vulnerability when US Commodore Matthew C. Perry arrived in 1853. Perry’s flotilla of steam warships, with their invulnerable iron hulls, fired shell-gun salvos into buildings in Edo, the Japanese capital (now Tokyo),
demonstrating ships and weapons technology far beyond anything Japan could access. Following this profound humiliation and after a desperate last stand—the Boshin war of 1868–1869—the Shogunate was broken. The new Imperial government acquired an impressive iron battleship, the Kōtetsu, whose crew successfully repelled an attempted boarding using a Gatling gun, with much loss of life. Japan had begun its remarkable half-century transition from feudal economy to modern state. Restraints imposed by the Shoguns on use of technology were no more.

From time to time there were also attempts to hold back technology in Europe. When firearms appeared on the battlefield in the fourteenth century there was a call for them to be banned for killing Christians—presumably a shot or two at Jews or Muslims was thought fair game. It came to nothing; soldiers found firearms far too useful. Nor was this concern restricted to military matters. In the nineteenth century, English textile workers protested against new technologies which reduced the labour content of finished goods. Between 1811 and 1816 there was both political protest and damage to the new machines. Magistrates who sought to implement the law were threatened or attacked. This Luddite agitation led to widespread unrest, ultimately suppressed by the army. Sentiments like these survive to this day, often fuelled by alarming tales about future effects of coming technology.1 Why? Is it because the outcomes of changing technology seem other than we hoped or wished for?
No Return Ticket
In one way the concern about technology is understandable, and in this respect science is no different. They both always, everywhere, offer only a one-way ticket; it is in their nature, they can do no other. Either of them will take you somewhere new, every destination more exciting than the last maybe, but in the nature of things there is no realistic way of going back. In this they differ fundamentally from other factors causing social change, most of them far more mutable. Science and technology live in the present and look to the future. Both irreversibly transform the way we perceive things. New knowledge about the universe cannot be unlearned, and neither can we forget new abilities to create things in the
everyday world—products, processes or services. For both of them, the past is dead—a good story to recall, often a source of pride, but never to rise again. Moving inexorably, they draw along with them the cultural core of the society in which they are embedded, and it too cannot revert. Yet despite their similarity in this respect, these two are not the same, nor should they be confused. Science changes forever our understanding of the universe. Technology transforms forever the fabric of our lives. Look at some technology examples:

• Once the flint handaxe appeared it was the only choice for cutting and scraping until it was replaced in its turn by metal tools.
• Hard, shaped horse collars made forever obsolete the earlier style of horse harness, for plough, cart or carriage.
• The post windmill, which could be oriented to face the wind, eclipsed its fixed-orientation predecessors, and nobody chose to return to the old way.
• Steam railways ended any future for horse-drawn travelling coaches, and on the oceans, after a struggle, sail gave way to steam.
• In the twentieth century, the use of tanks, barbed wire, trenches and machine guns drove horses from the field of battle. They will never return, despite their six-thousand-year involvement in warfare.
• And now the age of microelectronics has arrived, transforming into a fading memory once large and powerful industries that formerly made and used vacuum electronic devices in radios, televisions and even computers. Today, they are the concern of museums and the stuff of nostalgia.

In the past, distinguished and knowledgeable people have been unenthusiastic about new technology because they were quite unable to foresee its utility. Instead, they were much more aware of the disruptive potential to the existing order in which they had some stake, if only emotional. Negative and fearful pronouncements that now seem absurd appeared perfectly rational and tenable when they were made, often referring to early imperfect instantiations of the new ideas. As Benjamin Franklin, witnessing his first balloon ascent, said to a bystander who expressed doubt as to what use it might be: ‘Sir, what use is a new-born baby?’
The problem arises because people think of technology as static, not in ever-continuing change. For example, a conventional depiction of the horrors of working-class life, still met with among those without first-hand experience, uses a view of technology half a century out of date. In the nineteenth and early twentieth centuries, the ‘brainless’ machines of that day were partnered by human beings, who supplied them with the missing faculties no machine could then possess, such as vision, memory and simple judgement. So the age of the ‘machine minder’ came into being, the person whose mental functions served to patch the inadequacy of the machines as then used. It was a pattern of working life many thought unworthy, even intolerable.2 Yet this pattern was obsolescent half a century after it became commonplace. Now the machine minder can be seen as the product of a transient phase of the industrial revolution, a phase passing as digital machines grew more ‘intelligent’. Only low-wage economies in the more unfortunate parts of the world constitute a significant drag on this desirable technological and social transition.3
Dislike of the Incomprehensible
Arthur C. Clarke observed that any sufficiently advanced technology is indistinguishable from magic. That alone will make it seem uncanny, even threatening. How it works will at first be little understood and difficult to communicate. In a blog (2011) ‘Itwanda’ wrote: ‘Yes—I have to admit, I am among the people with a natural (understandable) fear of technology. And I cannot say what this fear is about; it is not a general distrust, rather the complexity of technology that sort of scares me’.

When technologies are new they are often badly explained by the elites who use them. Earlier books on technologies now commonplace make the point. Textbooks on ‘wireless telegraphy’ dating from the early twentieth century contain passages now quite puzzling, even to present-day radio experts. The result of these inadequate early explanations can be to arouse suspicion. A mature technology must be capable of explanation in terms most people can grasp. Generally, acceptable things have to make sense to us; only thus can we minimise our fears and maximise the usefulness of the innovation.
But what about things too recondite for ordinary people to understand? Science is a different matter, but in technology there can be no such things; otherwise ordinary people could not install and maintain them, maybe not even use them. Whatever cannot be described well enough to be widely understood simply has not been well enough grasped by the supposed authorities in the field who do the explaining. This view is often not popular with reputed experts because it undermines their desire to belong to an elite, a secular priesthood whose deep mysteries are incommunicable. Nevertheless, it is obviously true. Ideas which cannot be understood, at least in outline, by a significant fraction of the population are not widely adopted. In the history of technology, however, there are many examples of the mysterious becoming routine. True, explanations take time and repetition to achieve simplicity. Yet simple explanations do appear in the end. They must if the innovation concerned is to be comfortably integrated into our social lives. When they do, we feel easier about technology, seeing it as part of everyday experience.

For many centuries people in the West accepted Aristotle’s theories of mechanics. Things move, he taught, when a force acts on them, but when it acts no longer they stop. Common sense? If so an Aristotelian car would stop dead if you took your foot off the accelerator—but in reality, there never was such a thing. And what about cannonballs, which streak across the sky with nothing obviously pushing them? Air flowing round from the front pushes them at the back, the Aristotelians said, but unconvincingly. So in the seventeenth century, Newton’s laws of motion supplanted Aristotle’s. Assuming no surrounding medium, he said, things stay in unchanging motion (or at rest) until a force acts on them. Force changes motion, but when that force stops things continue in motion as they were at that moment. To stop your car you must lift your foot from the accelerator, but also push the brake pedal, without which it goes coasting on. It sounds like common sense now, but it was cutting-edge science more than three centuries ago, understood by only an educated few.

Without the power of flight, not fast on foot and unequipped with much in the way of tooth and claw, throughout our evolution we were an animal forever on the lookout for unexpected threats. Even when not on high alert, we were obliged to remain subliminally on guard. New things
and new situations have always been a powerful stimulus to defensive caution. Yet the creation of something new was technology’s aim from the beginning. This antithesis cannot be disregarded or easily smoothed away, an ancient tension between what we want and what we fear. Unwilling to forgo the new and positive advantages the doxastic method seems to offer in a particular case, we seek ways to ease the anxiety intrinsic to impending change. The commonest way to do it is to find transitional forms, which bridge from the old technology by making the new resemble it as much as possible, in look and feel. The first motorcars copied the configuration and performance of horse-drawn carriages, which eased their acceptance. Looking like a carriage without horses, they had modest top speeds, typically below 35 kph. However, this proved just a transitional phenomenon. Compared with horses, the far more powerful internal combustion engine could pull much heavier vehicles at far higher speeds. Travelling faster, users met with problems of control, suspension and structural integrity unknown to earlier coach and road builders. Within a decade the ‘horseless carriage’ transformed itself into a recognisable car. However, during its brief reign, the horseless carriage served as a useful transitional form, its performance, size, shape and general configuration similar enough to the horse-drawn carriages of the past to diminish anxieties. Fears about the dangers of flight were also contained by packaging the flying machines in a reassuring transitional form. By the start of the twentieth century, balloons had been flying for over a century, so airships seemed a less radical development than aeroplanes. Airships became even less alarming for the passengers when they were made to seem as much like ocean liners as possible. Although with far less floor area, they variously had bars, restaurants, sleeping cabins, even small dance floors. The comfort drawn from the transitional form faded, however, after a sequence of spectacular airship disasters. The attempt at an ‘ocean-liner’ transitional form made a brief reappearance even after aeroplanes supplanted airships. The Bristol 167 ‘Brabazon’ was a truly enormous propeller-driven monoplane with eight radial engines. The initial proposal was for an 8-m diameter fuselage, nearly 25% larger than a Boeing 747, with upper and lower decks. The specification included a dining room, 37-seat cinema, promenade and
bar, as well as sleeping accommodation. It could carry 100 passengers, but at a cruising speed of only 400 kph (250 mph). Surprisingly, under the socialist government of the day, a prototype of this elite luxury airliner was built, funded by £6 million of public money. Its first flight was in September 1949. No commercial airline wanted the aircraft, and in 1953 it was finally sold as scrap for £10,000. The economics of the transitional form had proved intractable.
The Fear of Enslavement
Yet thinking of objections to technology, what many people fear most is its character, perceived as overwhelming, compelling and ungovernable. A new technology appears somewhere: few people might have asked for it, and nobody voted for it. Yet even so, once it appears it seems to sweep all before it. Think of transportation: after Nicolas-Joseph Cugnot demonstrated his steam-driven tricycle in 1769 there was a strong impetus to develop both steam-driven road coaches and railway locomotives. By the end of the eighteenth century, they were beginning to appear.

Not everybody was happy about it, as might be expected. The horse-coaching interest tried to oppose the change through a public campaign urging the dangers of the new mode of transportation. A colour print of c.1800, artist unknown, is entitled New Principles or The March of Invention. An example of anti-steam propaganda, it depicts a dramatic explosion in a steam-driven coach, evidently a boiler bursting. There are dire consequences for the passengers, who are dismembered, though in comical, not horrific, representation. Meanwhile, a horse-drawn coach sails serenely past, and yokels, in rustic attire, look on in wonderment. Interestingly, in the background are depicted a couple of other technical innovations of the day: a light carriage drawn by a kite, and two balloons in flight. One balloon has passengers representing HM Government, who announce ‘We’ll take a flight to heaven tonight’, and the other carries HM Loyal Opposition, who counter ‘We’ll watch their motions’. Amusing though the print was, the campaign itself had little success.

The world’s first public inter-city railway line, the Liverpool and Manchester, began steam-hauled service for both freight and passengers
in 1830. Unlike earlier railways, which continued with at least some horse traction, this line was steam-hauled and nothing but, and proved highly successful. A decade later a railway mania developed in Britain, and in one year alone (1846) Parliament authorised the construction of 15,300 km of new railway lines, not all of which were ever built. By the 1880s, steam cars were seen in Europe, and by 1902 the majority of the thousand or so cars on US roads were steamers, a transitional form displaced in turn, within little more than another decade, by cars with internal combustion engines. Horses were speedily relegated to ceremonial, leisure and sporting uses, and the post and stage-coaches were no more. It all happened so fast.

If it promises relief from existential anxiety for those who use it, such as by reducing the probability of death in battle, transition to a new technology is quite inexorable. Plate armour appeared on European battlefields in the fourteenth century and spread rapidly until the sixteenth, after which firearms, which could easily damage or penetrate armour, led to a rapid decline in its use. By the eighteenth century it had become ceremonial wear only, and soon not even that. There have been many later examples of technology meant to reduce personal risk in war—things like long-range artillery and tanks. Their adoption was always rapid. Today use of unmanned cruise missiles and drones seems compelling. Their controllers are invulnerable, far away from the conflict. Quaintly denounced by some as too one-sided a way of killing, they promise hope of defence against the insurgency wars that seem likely to come.

War promotes urgency, but in civil technologies, changes are possible which can be as fast. Think of the collapse of passenger transportation by ship on oceanic routes, and its replacement by aviation in the second half of the twentieth century. Almost the only sea-going passenger ships today are short-range ferries and the cruise ships, which increasingly seem not transportation but floating hotels. Quick technology transitions like this seem totally compelling, as if there were an uncontrollable external force driving the change. Could this be technology itself, in the role of the great puppet master, enforcing our obedience? Sometimes it can seem so, a master whose dictates the world has no option but to obey. And worse, the changes it seems to force on us are not always comfortable. As Alice Kahn wrote: ‘For a list of all
the ways technology has failed to improve the quality of life, please press three’. The ever-present fear this can generate is of heedlessly empowering unwanted, damaging or destructive designs, which will be built and widely used, once completed. Does this seeming loss of autonomy justify a reaction against all technology? In fact it is entirely down to human choice how technology develops and is used, which in turn will respond principally to our internalised ethical standards and the pressures of society.
Information Explosions
A distinguished Swiss biologist vividly describes his world in pessimistic terms. He sees it drowning people in unsought information, an overabundance he believes confusing and harmful to the mind. He predicts a threatening future in which people will lose their jobs in large numbers, the painfully acquired skills of important classes of workers will be abandoned, long-treasured art objects will be disregarded, the beautiful replaced by the ill-made, and the customary behavioural patterns of various social groups shaken to their foundations. All of this will be the unavoidable consequence of the intrusion of a wholly unsought new information technology into ordinary life.

What Conrad Gessner (1516–1565) was troubled about was the recent invention of printing with movable type.4 The new printing press would displace scribal copying of books and so lower their price and greatly increase their availability while weakening constraints on what gets published. The printed books would not be works of art like the older ones but produced down to a price so that they could be sold in much larger volumes. This would be a danger in itself because every kind of people would acquire books, including those whose education was insufficient for them to interpret what they read reliably. Hard-earned wisdom that earlier books enshrined would be scattered and lost. Gessner saw all this as undermining both scholarship and good order in society.
Prediction is always difficult, and as it turned out things went differently. Printing and the spread of literacy are now generally considered to have been an important step towards modern liberal democratic societies. Some printed books were tawdry, it is true, but others were fine indeed, as much works of art as their scribal predecessors, and now highly collectable. Even professional scribes were not in fact banished by the introduction of printing. They continued for several centuries, decorating printed books or creating niche products such as presentation scrolls and documents. A few do so to this day.

When new technology concerns ways of storing or transmitting information, suddenly much more is available than before, and people fear the consequences.5 Anxieties about uncontrollable explosions of information are not new. Worries about them go back as far as history stretches. Each new information technology evokes fears of its impact on mind, brain and the established order of society. These concerns return repeatedly, little having changed except for the specific new technology on which the anxiety is focussed.6 Possibly, it might all date back to the birth of literacy itself. Socrates criticised reading, claiming it ‘creates forgetfulness in the learners’ souls’, because they do not use their memories. People of traditionalist temperament warn against adopting new information technologies uncritically; they believe doing so will risk abandoning the edifying media they were reared on. They do this unaware that the older technology they benefited by was itself thought just as harmful when first introduced.

The alarm bells for information technology have been heard so often. They rang out over radio broadcasting when it began—it would disturb the balance of the excitable minds of children, pundits declared—and again over the coming of both television and the internet, which would discourage both children and adults from reading books. Oddly enough, when e-books appeared, strikingly increasing the amount that children read, we were warned that losing physical contact with books on paper was bound to have ill consequences. The frequency with which impending disaster has been predicted in the past, along with the benign out-turn, has a lesson to teach.
The Monstrous and the Damaging
If we hanker for benign technology it is the propensity to design monstrous or damaging things that we should seek to control, since only in designs can the destructive potential in the techniques ever be released. It is in design that the potential threat to humanity, to every one of us, lies. So how do we ensure that designers do not use the available techniques in ways we rightly fear? How can we secure our future in the longer term?7

The answer is at once simple and yet difficult. If we want to continue as the dominant species on this planet we are obliged, individually and collectively, to make ourselves more ethically aware and more morally responsible.8 It is a big task, but maybe not impossible. Steven Pinker presents evidence that the manners and morals of Western Europe have improved greatly in the past thousand years.9 Yet it will not be a case of simply doing what we do already but trying harder. We need a change of mindset.10 It is particularly in the aftermath of technology revolution that ethical problems are likely to become acute. Collective regulation of the use of new technology, often by governments, more rarely by associations of users, may be the only workable solution.11

Today, a new consideration has also entered the debate. Social attitudes are changing. Some people understand the need for an ethical dimension in design significantly better than others. A study of attitudes among British teenagers is intriguing. The data reported comes from combined studies of nearly six hundred 14- to 15-year-olds in English schools and over seven hundred 11- to 21-year-olds in a nationally representative UK sample.12 The investigation reveals that the vast majority of young people of both sexes are sympathetic to technology. However, girls in general are more concerned about ethical issues in science and technology, and more widely—girls are more upset by events in the news. Girls are also less likely than boys to believe that technology will solve all contemporary problems. Indeed, they are more likely to fear that the future could be worse than the present. However, girls’ concerns do not diminish their interest in science and technology; in fact, it was precisely those girls who
were most interested in, and positive about, science and technology who expressed the most ethical concerns. Advancing feminism in the developed world has produced many significant social changes. By no means the least important is that it has led women to enter the technology professions in far larger numbers than once they did. One positive consequence could be to make the risk of humanity’s self-destruction more tractable.
Neo-Luddism
Among those who challenge the doxastic method as a solution to human problems are supporters of Neo-Luddism—their own chosen designation. The historical Luddites were named after Ned Ludd, an eighteenth-century youth, perhaps mythical, said to have smashed stocking frames in protest at the mechanisation of weaving. His name was adopted by nineteenth-century mill saboteurs. Later a fictitious King (or General) Ludd was imagined living in Sherwood Forest. His ‘signature’ appears on some Luddite proclamations.

The most extreme among modern Neo-Luddites believe that the industrial revolution and almost the whole of technology have been harmful to us. However, these are few in number, and a majority point to the disadvantages and dangers they see in some particular technologies and seek to regulate the situation through normal political processes.13 In an American romantic simple-life tradition that goes back at least to Henry David Thoreau (1817–1862) and also to the Western frontier culture, many believers in Neo-Luddism do not reject all technology, but merely seek a return to a simpler agrarian lifestyle.14 They have a little in common with the Greens in politics, an overlap but not any kind of congruence. Some Greens, like those who look to nuclear power or carbon dioxide sequestration to reduce atmospheric carbon, will use whatever technology it takes to solve environmental problems.

Many Neo-Luddites advocate adoption of what is known as the precautionary principle, admitting no technology which has not been proven in advance to be harmless to humans. This would indeed be a safe
strategy, but it is impossible to implement. How could it ever be reliably proved that a proposed technology is entirely safe? It is difficult to imagine, for example, how one would carry out a randomised double-blind test on cellular phones. As to the rigorous implementation of the precautionary principle retrospectively, this would rule out many things that we currently find indispensable, like the use of fire, aviation, ships and wheeled transport, all of which can harm human beings in certain circumstances. The sentiments that drive Neo-Luddism may be humane, even admirable in some ways, but their implementation seems hopeless.

Science and technology can, in their different ways, offer the possibility of change, but in both cases it is a one-way ticket. The Cambodian experiment with reversing technology ended as a horrific disaster; there were simply too many people in the country for it to support them in the way favoured. Even the attempt by the Tokugawa Shogunate merely to halt technology at roughly the point it had reached when Ieyasu Tokugawa came to power left Japan hopelessly vulnerable and impoverished. These failures happened although both attempts were made by harshly authoritarian governments with all power in their hands; no sizeable democracy has ever ventured down this route. If it were conceivable that technology change might not be a one-way ticket, what is incontestable is that nobody has yet managed to find the return half or use it successfully.

It is the sheer difficulty, even in an authoritarian society, of withdrawing from technology by conventional means that has tempted some to look beyond them, for example, to violent direct action. The most notorious example is Ted Kaczynski (b. 1942). A mathematics PhD and once an Assistant Professor at a leading university, between 1978 and 1995 the power of his convictions led him to conduct a letter-bomb campaign against those he believed supportive of technology. Three people died and twenty-three were injured, one permanently blinded. He was tried and sent to prison for life.

Violent or peaceful, there seems no way open for Neo-Luddism to achieve its ends, except one. It is worth recalling the quotation from Buckminster Fuller (1895–1983): ‘You never change things by fighting
the existing reality. To change something, build a new model that makes the existing model obsolete’. If a well-entrenched technology is offensive, a possibility may exist of designing something superior to supplant it, and without the negative aspects.
Notes
1. Jones, S. (2006) Against Technology Routledge (London, UK).
2. ‘Saturday Night and Sunday Morning’ (1960), a film produced by Tony Richardson.
3. Rosling, H. (2018) Factfulness Hodder and Stoughton (London, UK).
4. Eisenstein, E. (1979) The Printing Press as an Agent of Change Cambridge U. Press (Cambridge, UK).
5. Gleick, J. (2011) The Information: A History, a Theory, a Flood Fourth Estate (London, UK).
6. Carey, J. (1992) Communication as Culture Routledge (London, UK).
7. Platts, J. (2014) ‘Achievement motivation, not utilitarianism’ The Friends Quarterly, Feb. (London, UK).
8. Nordhaus, W. (1982) ‘How Fast Should We Graze the Global Commons?’ Amer. Econ. Rev. 72 2.
9. Pinker, S. (2011) The Better Angels of Our Nature Allen Lane (London, UK).
10. Sand, M. (2018) Futures, Visions and Responsibility Springer (Cham, Switzerland).
11. Spar, D. (2015) Pirates, Prophets and Pioneers Kindle Books (online).
12. Haste, H. (2008) with Muldoon, C., Hogan, A., & Brosnan, M. If girls like ethics in their science and boys like gadgets, can we get science education right? British Science Festival (Liverpool, UK). Haste, H. (2013) ‘Deconstructing the elephant and the flag in the lavatory: promises and problems of moral foundations theory’. J. Moral Education 42 3.
13. Glendinning, C. (1999) Off the Map Shambhala Publications (Boston, USA).
14. Zerzan, J. (2005) Against Civilization Feral House (Port Townsend, USA).
19 Be Careful What You Wish For
‘Be careful what you wish for, lest it come true’.1 Sometimes the technology achieves, even exceeds, all we ask of it, yet the outcome is far from what its sponsors wanted. Sometimes a series of minor triumphs build towards an ultimate disaster. War is the most obvious example of this paradoxical situation. The military have always wanted more bang for their buck, and technology has given it to them in unimaginable degree. The result, however, is the obsolescence of formal warfare on the grand scale, and loss of the social function that war once served. So let us review what has become of war.

To start with the land battle: an instructive measure of what is going on is the magnitude of kill numbers on either side. At the tactical level disaster strikes when armies break ranks and run, which is increasingly hard to prevent once they have lost 20% or more of their number. When routed they suffer far heavier casualties than those who stand and fight, so a small initial differential kill rate between the two sides can be amplified by subsequent events. The initial killing advantage can be the result of seemingly minor changes in technology. At first, the hand-cannon seemed of marginal significance on the battlefields of fourteenth-century Europe, compared with a mass charge by armoured knights, but it made plate armour obsolescent well within a century.
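How a small killing advantage compounds into rout can be shown with a toy calculation. The sketch below is illustrative only and not the author’s model: it assumes two equal armies of 10,000, a 10% difference in kill rate, a rout once a side has lost 20% of its strength, and the routed side losing a further half of its remaining men in the pursuit.

# Toy attrition model; all figures are illustrative assumptions (see above).
def campaign(a=10_000.0, b=10_000.0, a_rate=0.011, b_rate=0.010,
             rout_frac=0.20, pursuit_loss=0.50):
    a0, b0 = a, b
    # mutual attrition: each side's losses are proportional to the enemy force
    while a > a0 * (1 - rout_frac) and b > b0 * (1 - rout_frac):
        a, b = a - b * b_rate, b - a * a_rate
    # the side that breaks first suffers disproportionate losses in the pursuit
    if b <= b0 * (1 - rout_frac):
        b *= (1 - pursuit_loss)
    else:
        a *= (1 - pursuit_loss)
    return a, b

a_left, b_left = campaign()
print(f"A survivors: {a_left:,.0f}  B survivors: {b_left:,.0f}")
# with these assumptions A ends with about 8,200 men and B with about 4,000

Under these assumed numbers, a 10% edge in kill rate leaves one side with roughly 1,800 casualties and the other with roughly 6,000: exactly the kind of amplification described above.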
Under the influence of technology, the conduct of war has been transformed beyond all expectations in the last few hundred years, at times changing faster than any other area of human technical activity. Just consider how the weapons that infantry carry have changed. Hand-held projectile launchers, evolving by stages from longbow to crossbow, then in turn arquebus, musket, rifle and automatic weapons, have dominated the land battle for a millennium. The reason is not hard to see: once the hand-cannon appeared the role of the archer in warfare was doomed, and each new projectile weapon similarly doomed its predecessor.

Although by the fourteenth century ribauldequin multiple-barrel cannon were already in use for crowd control, multiple-firing weapons remained uncommon while munitions were still hand-made. However, manufacturing was being transformed by the coming of steam power in the eighteenth and nineteenth centuries. The conduct of war could not be unaffected. Multiple and rapid-fire weapons soon appeared, like the French mitrailleuse (1851), which had twenty-five 13 mm calibre barrels arranged in cylindrical configuration. When a handle was cranked they fired and reloaded in succession. The similar U.S. Gatling gun (1861), invented by Dr. Richard J. Gatling, had six, or later ten, barrels. Also hand-cranked, even with inexperienced users it was capable of firing three rounds of just under 15 mm calibre per second. It saw use in the American Civil War and the Japanese Boshin War, among others. With further development, the rate of fire was ultimately doubled. Both were forerunners of the true automatic machine guns that followed. The first wholly successful example, the Maxim gun, invented (1883) by Sir Hiram Maxim (1840–1916), was capable of firing up to ten heavy-calibre rounds per second. This fearsome weapon was demonstrated to potential buyers by using it to cut down small trees. The lines by Hilaire Belloc2 (1870–1953) celebrated the power of the machine gun in verse:

Whatever happens, we have got
The Maxim gun, and they have not.
The highest publicly admitted rate of fire of any present gun exceeds 150 rounds per second from the Russian 30 mm calibre Kashtan, an
advanced Gatling gun designed by Arkady Shipunov for ship defence. In general military use, rapid-fire weapons came into their own only from the nineteenth century, once they were backed by steam-powered factories able to produce the ammunition they consumed in such large quantities. Only developed countries can use them, or the unusual few that have wealth enough to buy large quantities of ammunition in the international market.

In World War I, universal deployment on the Western Front of barbed wire, machine guns, mining and trenches had reduced the belligerents to a kind of siege warfare. The use of massed artillery followed by frontal assaults failed to achieve a breakthrough, despite great loss of life. At Verdun, in 1916, there were 700,000 casualties; at the Somme, in the same year, there were more than a million; and the Third Battle of Ypres, a year later, took 600,000 casualties. There seemed no viable way to unlock the situation. Mediaeval commanders, familiar with sieges, would have expected no less, using such assaults only as a last resort. It was generally accepted that starving the defenders or internal treachery were the only truly practical ways to win sieges.

It was tanks that ended the long siege on the Western Front in World War I. This was the major innovation in the land battles of the early twentieth century. They were known to the Royal Navy personnel who designed them as ‘land dreadnoughts’, and their evident success at the battles of Cambrai (1917) and Amiens (1918) broke the tactical stalemate. The use of horses as attack platforms in warfare, having lasted for six millennia, drew rapidly to a close. Cavalry generals and horse enthusiasts damned these ‘iron carriages’ and denounced the change, but their protests had little impact. Modern armies retain horses only for purely ceremonial functions, if that.

In 1939, at the beginning of World War II, there was an encounter—one can hardly call it a battle—between Polish cavalry and German tanks. The cavalry had been harrying German infantry with some success, so tanks were sent in to restore the situation. The events were later reconstructed in a post-war Polish film, and somehow a myth grew up that the cavalry actually chose to attack the tanks. They were not such fools of course, but even so, they were annihilated.3 Even when using only their machine guns, tanks kill horses and riders but are little affected by
anything the cavalry can do. In its land battles, World War II (1939–1945) was dominated by tanks. Although the reign of the tank may now in turn be drawing to a close, war will not return to horses. Successful experience with helicopter gunships in varied battle conditions already suggests that, compared with this much faster airborne attack platform, the tank’s low speed and large radar, geophone and thermal signatures, so hard to disguise, may prove fatal to its long-term prospects.
War in the Air
The military saw uses for aircraft early on, at first merely for intelligence gathering but quite soon for bombing from the air and direct aircraft-to-aircraft battles—‘dogfights’. At first, it was not clear whether airships or aeroplanes would be the canonical fighting platform. In the 1914–1918 war, the airship was a large target at risk from fire, because in Europe it was always hydrogen-lifted. However, it had the significant advantage of being able to climb stably through clouds. Aeroplanes, still in their infancy, lacked the instrument necessary to make this possible. The artificial horizon, which told a pilot whether his plane was flying straight and level or not, was patented by Elmer Sperry in 1911. Without it, pilots flying into cloud became completely disoriented, yet it was used on military aircraft only from 1916. Before that, a plane entering cloud would often go out of control and into a dangerous spin. So fighter planes, unable to climb and intercept them, could not attack an airship flying above the clouds.

The airship was also tantalisingly invisible to ground-based gunners, even if clearly audible. Unfortunately, the airship crew, equally unable to see the ground below them, could not locate their target. The answer was to lower a brave man below the airship in a basket attached by a wire rope. Once under the cloud, he could look for the target and advise the captain by telephone when to release the bombs. Because the airship was underway, wind resistance dragged the basket backward, so it was not directly under the ship. This improved the chance that bombs would not hit the observer.
In addition, the airship’s great advantage back then was its capacity for longer flight duration. In 1917 the German High Command received a message from a military post in German East Africa, indicating that they were under British attack, and running low on fuel and ammunition. An airship, the LZ104, was dispatched loaded with stores. After crossing into Africa its crew were recalled by wireless telegraphy before reaching their objective, since British forces were by then in control there. They turned around and flew back to the Jamboli base in Bulgaria, returning after four days continuously in the air, but with sixty-four hours of fuel still in hand. Yet, less than a year later the ship was destroyed.

After the end of hostilities, the United States and many major European nations began to build military and civil airships. At first, hopes were high. In the US, the Navy made its bid for recognition as the airship service, sponsoring a film emphasising the sole advantage airships had by then over planes, their long endurance in the air. Yet, there were too many airship disasters in the years that followed. Among them were the German LZ104 in 1918, the British R38 in 1921, the US Roma in 1922, the French Dixmude in 1923, the US Shenandoah in 1925, the British R101 in 1930, the US Akron in 1933, the Soviet W7 in 1934, the US Macon in 1935 and Germany’s Hindenburg in 1937. Enthusiasm for airships declined; the surprise is that people persevered as long as they did.

The airships were soon widely acknowledged as subject to three crippling disadvantages. The first was the ultra-lightweight construction techniques that had to be used and pushed to the limit if a worthwhile load was to be carried, particularly in helium ships. This resulted in a persistent risk of structural failure in bad weather, or in executing extreme manoeuvres. The second disadvantage was that in practice airships cruised at speeds not much over 120 kph (75 mph), due to the large drag on the body of the aircraft. A third was that their ground handling was labour-intensive and tricky in wind. Meanwhile, aeroplanes were making technical progress in all directions and by the 1930s heavier-than-air machines, faster and more tractable, were the clear winners.

By World War II the canonical design for a military aircraft was a metal-fuselage monoplane with in-line liquid-cooled engines. There were dissenting designs, the most celebrated being the de Havilland Mosquito, a monoplane but made from wood, and used
for reconnaissance or as a light bomber. Although successful in Europe it was less suitable for the tropics, being subject to fungal wood rot.

The most significant and controversial aspect of military aviation has been bombing from the air of targets which are not exclusively military, bringing the civilian on to the battlefield. Inaccuracy of targeting using unguided free-fall bombs (‘iron bombs’) is large: miss-distance averaging up to 5 km under air battle conditions. In World War II the conviction grew that either it was necessary to bomb only the biggest targets, or bombs were needed which devastated an area large enough for accuracy of fall to be unimportant. The first led to the bombing of major cities—the ‘thousand bomber’ raids—with all the human suffering that resulted. The second approach led to ever-larger bombs, up to ten tonnes with chemical explosives, and then nuclear weapons, which can be million-tonne equivalents and produce severe damage over a radius exceeding 70 km, so target miss-distance is of no practical significance.

Realistically, nuclear weapons seem to be unusable in any likely or foreseeable war situation, so military thinking moved to a conviction that somehow the accuracy of bombing must be improved. After some investment in guided bombs launched from piloted aircraft, the current area of enthusiasm is flying robotic-style vehicles. As ‘drones’ and cruise missiles they are with us now. Compared with free-fall bombs, the accuracy of their attacks is greatly improved. The present generation of drones are not autonomous and therefore they are not true robots: somewhere, thousands of kilometres away, a human being is overseeing them and has ultimate control.

It is entirely possible that fully autonomous military robots will appear in the next twenty years, set free from human control, potentially a more controversial matter than nuclear weapons. If ever they are deployed serious legal and moral issues will arise. The rules which should govern the actions permitted to robots are crucial. Best known are Isaac Asimov’s ‘Three Laws of Robotics’ (1942). Autonomous military robots would violate Asimov’s first law, ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm’. The British EPSRC4 jointly with the Arts and Humanities Research Council issued Principles for designers, builders and users of robots in 2011. The first of these is:
‘Robots should not be designed solely or primarily to kill or harm humans’. Will this admirable ethical principle hold?
The Doomsday Machine

Technologies have been proposed, and even implemented to a degree, which could have unbelievably awful consequences, among them nuclear weapons, an almost universal source of dread during the Cold War period when they were new. The ultimate in this respect is the ‘Doomsday Machine’, designed to destroy all life on the planet. The idea emerged when the Soviet Union and the US maintained a fragile peace by a policy of mutually assured destruction (MAD), a geopolitical equivalent of ‘touch me and we’re both dead’. Provided that its design required it to trigger automatically on detecting the use of nuclear weapons, and that it could not be disabled by human intervention, it was thought to provide a highly credible threat that should dissuade attackers. Fortunately, the Doomsday Machine, more closely examined, is a crazy notion, not just morally unacceptable but impossibly difficult to realise in practice.

Herman Kahn (1922–1983), who considered himself primarily a futurologist, proposed a ‘Doomsday Machine’ consisting of a control computer linked to a large number of hydrogen bombs. It would detonate them all together if there were a nuclear attack by another nation. Considered as a design, from the standpoint developed in this book the idea fails in important ways. First, it is not feasible, because one cannot destroy all life on the planet with any plausible number of hydrogen bombs. It could never possibly be given a try-out, so there could be no feedback from the use environment, crucial to the success of any innovation. The failure characteristics of the weapon are totally intractable too, since a partial malfunction would probably destroy the country housing it. And how could you scrap it at the end of its useful life? Yet for a time the idea was taken quite seriously.

Some of the many disadvantages of the Doomsday Machine were played out in the masterly 1964 film directed by Stanley Kubrick (1928–1999). Kubrick feared that New York City would be an obvious target for the Russians in the event of a nuclear exchange, and had considered relocating to Australia. After
much study, he concluded that the US policy for nuclear warfare was largely incoherent and in places risible. The film started out as a serious movie, but Kubrick found many of the true and accurate scenes he tried to shoot were so absurd, paradoxical or simply ludicrous that he revamped it as a black comedy, one of the best of all time. Under the title Dr Strangelove, or How I Learned to Stop Worrying and Love the Bomb, his film described the deployment of a Doomsday Machine.

In the film, a multi-aircraft US nuclear attack on the Soviet Union is initiated by an insane officer commanding a Strategic Air Command base. After many mishaps the attack is recalled, but one aircraft has been damaged and does not receive the recall command. It drops its nuclear bomb on Russia, which has an unannounced Doomsday installation. Nothing can stop the destruction of all life on the planet, and this despite the utmost goodwill and cooperation between the leaders of the Soviet Union and the US. The film was a commercial success, widely viewed and undoubtedly influential. Although mutually assured destruction remained the defence strategy of the major powers until the collapse of the Soviet Union, nobody attempted to build a Doomsday Machine.

Often the move towards a new technology like this, with potentially awesome destructive properties, is represented as a slippery slope—take one step towards it and the inexorable slide to Hell begins. However, regarding only the research and development of the new techniques that may be required, this is surely mistaken. Moral issues arise acutely only with the design of a hyper-destructive instantiation. Nuclear technology exemplifies this point. Weapons of mass destruction—also coyly referred to as ‘weapons of urban depletion’—rightly evoke great fear of hyper-destructive technology. Yet researching nuclear physics, even nuclear engineering techniques, does not of necessity lead to their use in a weapon. The same techniques are exploited for low-carbon energy production. Not until the act of designing a weapon has been completed is its military use enabled. The boundary between designing for weapons and for civil use is clear, and it is not difficult to know where it lies, or when it has been crossed, provided nothing of what is happening is concealed.
The Captains and the Kings Depart

Since the Bronze Age, and probably long before, a powerful motive for going to war was to get rich quickly. High-status captives could be ransomed, with great financial advantage to the captor. A victor could enrich himself by looting things of value, by seizing title to land, and by taking people for use or sale as serfs or slaves. Many great noble families of Europe got their start in this way. For monarchies war was the basis of internal as well as external political stability. Kings and rulers needed to sustain an inflow of wealth so they could buy the loyalty of powerful subjects through gifts. Even when the highest motives were claimed for a conflict, religious or ethical, war continued as a profitable activity for winners.

The last major example of this kind of thinking was the campaign among the victors for ‘war reparations’ from Germany after World War I—no less than state-organised and state-sponsored looting. But modern wars are so destructive that much less of worth remains in the defeated country. After the 1914–1918 war little of value flowed from the losers to the victors; after the 1939–1945 affair, it is striking how wealth flowed in the opposite direction.

In the past full-scale war has also been considered a reasonable and acceptable default solution for the resolution of irreconcilable political differences—the Clausewitzian ‘continuation of politics by other means’. Such a view was always a little facile; those who quote his words5 forget that he also added ‘but to politics it must return’. Now everything has changed: the extreme devastation of all-out war in our time makes it no longer seem an attractive option, even for the victor, in either political or economic terms. Some think it the fault of the technologists that war has become so much worse in its material and human consequences. Realistically, though, the worsening will be inevitable for as long as humans combine great virtuosity in technology with a continued attachment to war as a social institution.

Some historians have claimed that, despite all the destruction and suffering, the overall effect of wars has been beneficial.6 In essence, they argue that the requirements of war created societies with sufficient internal cohesion to secure social order, giving a guarantee of security for property and life in peacetime. Archaeological evidence suggests that in
Palaeolithic times people had a 10–20% chance of death by violence, compared with around 1% in present-day Europe, and only twice that even in the US. It is argued that the difference comes from the degree of security in modern well-organised societies. The exigencies of war lead to tight and coherent internal security. Yet circumstances alter cases. Even those who argue for the social merits of war now seem to agree that its latter-day destructiveness shifts the balance against it as a beneficial resource. The survival of war as a social possibility served us usefully in the past, for so long as the cost, in lives and damage, was within what we were prepared to contemplate. Now extreme military technology has made it no longer cost-effective. To put it plainly, technology has ruined war by its success in making conflict ever more terrible. As always, there is no going back. So, as rational choices, all we are left with is irregular and low-intensity military operations,7 insurgency8 and cyber war. Even this last has been challenged,9 but that now seems a minority view.10 Cyber war is the only form that seems likely to be cost-effective and holds out any prospect of economic advantage to the victor.

Given the formative power exercised on human societies since prehistory by the demands of winning future wars, the growing obsolescence of all-out war, brought to ruination by technology, is a monumental social challenge, surely one of the greatest that faces us. Monarchy, dictatorship, aristocracy and male dominance are among the institutions evolved to meet the exigencies of war, the consequences of a perceived need to configure society for military success. The elevation of the soldierly virtues of courage, self-sacrifice and obedience to orders has distorted the education of our children, particularly boys. These venerable social institutions have shaped our world, for good or ill, and lie at the heart of many of our social assumptions. Yet to see the banishment of large-scale war as an unalloyed good is altogether too superficial, taking no account of the social consequences of the loss of this time-honoured institution. Could this be the greatest technology downside of them all? But it is too late to worry now—that pass is already sold. We shall need to discover answers quickly to a far-reaching ethical and practical challenge. Nobody yet sees a clear way through; what is certain is that we shall find no way back.
Notes

1. Traditional, source unknown.
2. Belloc, H. (1898) The Modern Traveller Duckworth (1972 facsimile ed.) (London, UK).
3. ‘Polish Cavalry vs Tanks’ YouTube (online).
4. Engineering and Physical Sciences Research Council.
5. Actually the book was written from his notes, after his death from cholera in 1831, by his wife Marie, so the words are possibly hers.
6. Morris, I. (2014) War: What is it Good For? Kindle Books (online).
7. Kitson, F. (2010) Low Intensity Operations Kindle Books (online).
8. Marcus, J. (2014) ‘Is Military Force No Longer the Way to Win Wars?’ BBC News 22 November (online).
9. Rid, T. (2013) Cyber War Will Not Take Place Hurst & Company (London, UK).
10. Beckett, M. (2019) ‘When, not if’ in ‘Cyber Resilience’ Prospect UK (October 2019).
20 Past, Present, Future
It was Charles Darwin (1809–1882) jointly with Alfred Russel Wallace (1823–1913) who first conceived a theory of evolution of living things based on natural selection (1858). At the outset, their theory was far from perfect, and at that time not even convincing to many in science. Darwin published The Origin of Species in 1859, before anyone knew for sure how characteristics of plants and animals are transmitted from one generation to the next. Some of the hypotheses then widely held did not sit well with evolution. However, genetics, the scientific understanding of these inherited characteristics, developed rapidly soon after the turn of the century. Darwin’s ideas were then reformulated and confirmed, notably by Theodosius Dobzhansky (1900–1975), and today Neo-Darwinism is securely based in genetics. Evidence for its validity seems unshakeable.

In any population, so Neo-Darwinist doctrine runs, genetic traits raising the probability of successful reproduction in a particular environment will appear with increasing frequency in successive generations. There are a few caveats, but it is not far from the truth to say that ultimately individuals who have these ‘survival’ genes will be in an increasing majority. This was precisely how elephants grew large and cheetahs fast on their feet; they evolved in ways that fitted their environments. By contrast, genes less favourable to reproductive success in an animal’s environment, and individuals who carry them, will be seen less often, and ultimately will vanish altogether.
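The logic of that claim can be made concrete with a toy calculation. The sketch below, written in Python purely for illustration (the five per cent reproductive advantage and the starting frequency are invented figures, not data from any real population), shows how even a modest advantage carries a trait from rarity to near-universality within a few hundred generations.

```python
# Illustrative sketch only: the frequency of a trait under a small,
# invented reproductive advantage. Not a model of any real species.

def next_generation(p, advantage=1.05):
    """One round of selection: carriers reproduce 'advantage' times as often."""
    carriers = p * advantage
    others = (1 - p) * 1.0
    return carriers / (carriers + others)   # renormalise to a frequency

p = 0.01   # the trait starts in 1% of the population
for generation in range(0, 301, 50):
    print(f"generation {generation:3d}: {p:.1%} of the population carries the trait")
    for _ in range(50):
        p = next_generation(p)
```

With these figures the trait passes half the population after roughly a hundred generations and is nearly universal after three hundred, which is the Neo-Darwinist point in miniature.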
Two Altogether Different Types of Evolution

In the world of theoretical biology, everything aligns comfortably with Neo-Darwinian ideas. Among all nonhuman animals evolutionary survival strategies seem genetically based, and therefore the outcome of such natural selection processes. Yet once people began thinking about human evolution the picture grew considerably more complex. As we have seen, a most important difference between us and all other animals is that we use sophisticated languages. How human infants acquire language is the subject of a lively and continuing debate. None of us is born with the capacity to talk—we learn it from our mothers—but among animals only we seem to have the ability to do that.

Spoken language is what matters. Of the world’s thousands of languages today, only a minority have writing. Although Gladstone remarked that ‘books are the voices of the dead’, great cultures have flourished using writing very little. The Celts, arriving in Western Europe in the ninth century BC, are a prime example of how this worked. They made swords of laminated iron, even a form of steel, that briefly gave them a powerful military advantage. Their surviving artefacts show qualities of workmanship and a sophistication of design that astonish us even now. Yet few Celts wrote much—it was frowned on for religious reasons, so it seems. Instead, they had a strong oral tradition, kept up by the Bards, an influential elite. Everything in the extensive bardic professional training was committed to memory, something we now find astonishing. Julius Caesar claimed that an Irish Bard’s training could take twenty years, so much had to be memorised.1

Once culture appears it too begins to evolve, though with a quite different selection process from genetic evolution. Individuals choose to vary cultural practices, and the fate of the changes depends on their consequences. They may become general in the culture, or die out, depending on the value others give them. Human societies constitute an evolving population, but overwhelmingly this is cultural, not genetic evolution.
Herbert Spencer wrote about cultural evolution in 1858, the same year that Darwin and Wallace first presented natural selection. Lewis Morgan gave it a scientific basis in his 1877 book Ancient Society. Later it blossomed into a major area of study.2

Differences between the two types of evolution are profound. The most obvious is that heredity and natural selection together take generations to effect change in a population, but cultural evolution can go orders of magnitude faster, taking only a few years. So, after the coming of language, genetic evolution, once the crucial biological engine of change in humans, largely surrendered its power to cultural evolution. In our story, the tortoise does lose out to the hare. The other obvious difference has been the symbiosis between cultural evolution and technology. All human cultures have evolved by using technologies of many kinds for a wide range of purposes, from cooking, agriculture and medicine to war. The doxastic method has been central to cultural evolution.
Evolutionary Failures

Although either kind of evolution, genetic or cultural, is capable of creating diversity and matching animals, including people, to the environment in which they live, evolution does have failures. Genetic evolution is particularly vulnerable. Because it takes many generations to have an effect, it cannot keep up if the environment changes rapidly. The alternative may be migration to somewhere the adaptation is not needed, but if an environmental change is fast and extensive enough neither evolution nor migration works, and extinctions occur. When this happens the evolutionary path adopted has turned out to be a dead-end.3

An often-quoted example of an evolutionary dead-end is the fate of the dinosaurs. First appearing 200 million years ago, they were close to extinction 65 million years before our time. Many had gone down the evolutionary path of large size. Some were large indeed, which protected them against predators and helped maintain a higher body temperature in animals that were probably not endotherms (‘warm blooded’). Diplodocus, with a length of 33 m and a weight in the range 10–16 tonnes, was one of the bigger ones, although not the biggest by far. This adaptation was a
disaster when the extinction event came. Only small dinosaurs survived; as birds they are with us still. What killed off the non-bird dinosaurs is disputed. Many believe an extra-terrestrial body, 5 to 15 km in diameter, triggered the extinction by hitting the Yucatán Peninsula in Mexico at extreme speed. This theory has wide acceptance but is not unchallenged, and other explanations have also been urged.4 Among them is the suggestion that the little cynodonts, rat-like creatures, came out of their holes and ate the dinosaur eggs—a romantic thought.

But not only genetic mechanisms cause extinctions. In a different way, extinctions can also be a consequence of pathological cultural evolution. This is obvious in the collapse of great nations and civilisations—the extinction of cultures.5,6 An example is the death of the Soviet Union. The failure arose from many internal problems coming together. After the collapse of the Tsarist government by the end of World War I, the existing Russian culture morphed in a direction dictated by Marxist-Leninist ideas. Social institutions without long-term viability resulted, and reform proved impossible, leading to sudden collapse in 1991.

I was in Warsaw just after the death of Communism, or the ending of the Russian occupation as the Poles preferred to say. My task was to advise on technical problems that would arise as Polish industry was privatised. Colleagues in a company manufacturing equipment for the telecommunications industry throughout the Soviet economic empire told me a revealing story. In the 1980s the US and Western Europe were rapidly converting long-distance cable telephone circuits to optical fibre, with major advantages. One day my Polish friends were visited by a government official who told them, in a shame-faced way they thought, that the ‘Socialist Block’ had come to the decision that they would not be changing to optical fibre, which did not have the advantages claimed in the capitalist world. The ‘Socialist Block’ would therefore stay with cable. This Moscow triumph of industrial politics was obviously ludicrous, doubtless the result of successful lobbying and special pleading by the cable-making interests. My friends moved the small team they already had working on fibre terminals into the basement, where it continued to function as a secret ‘skunk outfit’. Less than a year later the same official
came again, to tell them that the decision on cables versus fibres had been reversed, and they had six weeks to develop suitable terminals. ‘We shall be delighted to carry out your instructions’, was the response, to knowing smiles all round. This is but one small example of the way the Soviet Union fouled up its technology in a society where the decisions of professional politicians took priority over everything else.

The extinction of the Soviet Union, a global superpower, in under a year and without external attack, though predicted by the statistical anthropologist Emmanuel Todd,7 was astonishing. Yet it has many historical precedents.8 The fall of the British Empire is another good example. At its zenith it comprised a quarter of the world’s land area and one-fifth of the world’s population. From origins in the seventeenth century, this cultural construct became the largest empire in history. Despite losing its American colonies in the eighteenth century, it grew strongly throughout the nineteenth. Yet it vanished soon after the end of World War II.

In 1886, as a member of Gladstone’s government, Lyon Playfair (1818–1898), a Scots scientist, produced a report drawing attention to the risk from shortcomings of science and technology education in the UK. It led to some response, but too little and too late. Though by the mid-nineteenth century a world leader in technology, Britain could not adapt fast enough to a changing environment. Its politics and education—the whole national ethos—were preoccupied with the military, political and administrative challenges of retaining and exploiting its highly profitable empire. Britain was outflanked by Germany, the USA, and later Japan, all of whom adapted better and faster to rapidly developing technology. So the British Empire outlasted British technological pre-eminence by only sixty years.

There are far-reaching differences between extinctions due to defective cultural evolution and failures in genetic evolution. The big dinosaurs became extinct through the death or reproductive failure of many individuals. After a while their numbers dropped below a viable minimum, and soon there were none left at all. When a culture collapses it is different. The distinctive culture is lost, but the people involved live on, sometimes in reduced circumstances. The Soviet Union’s collapse did not
reduce the population living in the area it had occupied; indeed, there was patchy economic recovery once the dead hand of Moscow was lifted. Another difference between the two types of evolution is that while maladapted animals die out completely—becoming totally extinct—cultures scarcely ever do. Jacobins left over from the French Revolution of 1789 were active in Paris as late as the 1850s. Neo-Nazis are still found in Germany and Scandinavia, despite the awful crimes of the Hitler period. The countries that constituted the former Soviet Union are home to ageing Bolsheviks, who explain that Communism never failed; there were just ‘errors’. Worldwide, the remnants of defunct cultures and ideologies survive, mere shadows of what they once were, yet clinging to their threatened existence.
Could Technology be a Dead-End?

If technology is the evolved cultural adaptation of humanity, is it possible that it could prove to be a dead-end? Evolutionary adaptations fail when they cannot evoke change in an organism, a species or a society fast enough to enable it to survive a changing environment. Extensive, rapid environmental change is always the most dangerous. However, change is much less threatening to cultural than to genetic evolution, because cultures can adapt so much quicker—well over a hundred times faster in humans. Another factor favourable for technology is its global nature. Any threatening environmental change would also have to be global. This is unlikely; only a few things could impact the New Zealanders and the Inuit to the same degree and at the same time. For as long as people remain on Earth, they will be obliged to continue using the doxastic method.

Yet nothing is forever. In five or six billion years our Sun will change into a red giant, its surface reaching where Earth’s orbit now is. All water will have boiled away and there will be no life on our planet. ‘’Til all the seas gang dry, my dear, and rocks melt wi’ the sun’, said Burns presciently. The human race will have gone long before that though, most likely extinct, but conceivably transferred to another location in the universe. The Earth experiment will be over.
So we are stuck with the doxastic method; it is deep in us. But is it a soulless, inhuman, number-crunching kind of thing? Technology is not a sterile activity, as anybody involved knows well, and at its best it can be great fun. Most important of all, it has people, their needs and wishes, at its centre, and nothing but the needs of people can justify its existence.
Notes

1. C. Julius Caesar Commentarii de Bello Gallico Book IV.
2. Boyd, R. & Richerson, P. (1985) Culture and the Evolutionary Process U. of Chicago Press (Chicago, USA); Boyd, R. & Richerson, P. (2004) Not by Genes Alone U. of Chicago Press (Chicago, USA).
3. Benton, M. (2003) When Life Nearly Died Thames & Hudson (London, UK).
4. Haste, H. (1993) ‘Dinosaur as Metaphor’ Modern Geology 18 347–368.
5. Tainter, J. (1988) The Collapse of Complex Societies Cambridge U. Press (Cambridge, UK).
6. Diamond, J. (2013) Collapse Penguin Books (London, UK).
7. Todd, E. (1976) The Final Fall Karz (New York, USA).
8. Davies, N. (2011) Vanished Kingdoms Penguin Books (London, UK).
21 Technology and Dreaming
All the most challenging movements and forces in our society, politics not least, can be reduced to side-shows by the still greater force of technical innovation, its compelling power so often unrecognised until too late.1 To speak of the accelerating rate of technology change has become a cliché, but how fast is technology actually changing?

The dramatic and undoubted revolution in transportation between 1800 and 2000, all the way from horse-drawn coach travel to the jet airliner, might be quantified by the count of passengers in each vehicle multiplied by their average speed over a journey—the number of passenger-kilometres per hour. On this basis, the growth of transportation technology in two hundred years is a change of ten thousand times. Compare this with microelectronics technology. Quantified as the number of devices on a chip multiplied by their speed, it grew by more than the same ten thousand times in the fifteen years between 1975 and 1990, continuing to grow as fast after that, as it does still. Looked at this way, the microelectronics revolution is going thirteen times faster than its transportation counterpart, which itself has had the most radical of social impacts already and is by no means finished yet.
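The arithmetic behind that comparison is easily checked. The short Python sketch below is purely illustrative and uses only the round figures quoted above (ten-thousand-fold change over two hundred and over fifteen years); they are not measurements.

```python
# Rough check of the comparison in the text, using its round figures only.

def annual_growth(factor, years):
    """Compound annual growth rate implied by an overall factor over some years."""
    return factor ** (1 / years) - 1

transport = annual_growth(10_000, 200)   # 1800-2000, passenger-kilometres per hour
microelec = annual_growth(10_000, 15)    # 1975-1990, devices per chip x their speed

print(f"transportation:   about {transport:.1%} per year")    # roughly 5% per year
print(f"microelectronics: about {microelec:.1%} per year")    # roughly 85% per year

# Measured by the time taken for the same ten-thousand-fold change,
# microelectronics moves 200 / 15, i.e. about thirteen times, faster.
print(f"ratio of speeds: {200 / 15:.1f}")
```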
That extreme techniques are available to us in microelectronics does not ensure, however, that designs based on them will necessarily follow. Designs do not come unsought. New techniques are the gateway to designing new products, processes and services of great significance, yet those who open the gates may be blind to what might lie beyond them, and if so opportunities are missed. When that happens, for years, even centuries, people do not consider, and therefore do not design or instantiate, potential uses of techniques available to them.

Hero of Alexandria (c. 10–70 AD) was an engineer with a prolific output of useful innovation. He invented vending machines, mechanical musical instruments, the syringe, force pumps, and a variety of stage machinery. In the middle of the first century, Hero built a steam-driven engine, his aeolipile. It worked, spinning quickly and impressively when a fire burned under the boiler—the first working steam turbine. A gateway to practical steam power in a rotative engine had been opened, but nobody went through it. For a dozen centuries Hero’s machine had no successors. Why?

This is not the only puzzle of its kind. It comes at first as a surprise to realise that from Classical times people had techniques fully adequate for building flying machines. They had excellent wood and woodworkers, the ability to fabricate small metal fittings, fine fabrics, varnishes, ropes and cords.2 Yet they did not design an aircraft. Classical Greeks and Romans did many things in technology but never flew, though they had myths about human flight which must have tempted them.

So if not the Greeks, why did mediaeval engineers in Europe not build a successful flying machine even centuries later? They too had the necessary techniques. True, they had no engine to power an aircraft, not least because they had done nothing with Hero’s demonstration of a steam turbine. Yet they could have built a useful glider, and after all, back then world-wide travel depended on sailing ships, which were also without engines. In our time gliders have proved capable of journeys of thousands of kilometres using only natural air currents to propel them, like sailing ships. Why this blindness?

To anyone journeying around Europe after the Black Death, the most striking sight would have been the weed-grown fields, good agricultural land with nobody to till it. The most pressing and disruptive social problem at that time was a widespread shortage of labour. It was this challenge that the engineers addressed. Among the technological
solutions found to this problem were the rigid horse collar, the post windmill and improved watermills. Flight would surely not have seemed the urgent problem of their day. Though science does what it will, technology does what it must. In mediaeval times no flying machines were built, though people might ultimately have travelled world-wide in them at speeds high for that day.

It is not as if there was anything particularly exotic about building heavier-than-air flying machines. Kites, which originated in China well before the current era, had spread to Europe by the Middle Ages. For centuries Chinese armies exploited kites for signalling in war, and they would doubtless have been as useful to the European military, had they had them. Even a person-carrying kite could have been built, still more interesting to the military for its intelligence potential. It never appeared. In 1902 Lela Cody was the first woman to fly in a heavier-than-air machine—a kite. She wore a fashionably large hat and long skirt, having planned only to observe the trials of her husband’s new military man-carrying kites. She participated in the trials herself on a whim.3 Had we got started on man-carrying kites at a much earlier date, there could have been so much more.

In 1488 Leonardo da Vinci (1452–1519) designed a glider big enough to carry a person.4 This was solely a design innovation, using only construction techniques already familiar in Leonardo’s day. As with so many of his designs, nothing came of it. Finally, in October 2002, a glider was built from his 500-year-old drawings, using only materials to which he had access. The frame for the linen wing was of black poplar, the structure held together by leather straps and hemp string. Judy Leden, the internationally renowned hang-glider champion, courageously acted as Leonardo’s test pilot, and successfully flew the glider to a safe landing. Why was it not done in Leonardo’s time?

If only mediaeval engineers had built successful gliders, the Victorians would surely have added steam engines to propel them, as they did for sailing ships. In 1877 Enrico Forlanini (1848–1930) did build a small helicopter powered by a steam engine. From the drawings, its design appears to have been sophisticated, with contra-rotating rotors. Rising to a height of 12 metres, it hovered briefly. Thought useless for any practical purpose, it was seen, like Hero’s steam turbine and Leonardo’s aircraft, as
a bit of fun, soon forgotten. Steam-driven flying machines were not a social priority.
The Victorian Computer

To press the point further, there is the strange mystery of why an electrical computer was never built in Victorian England. Wilhelm Schickard (1592–1635) built the first mechanical calculator, the ‘Calculating Clock’, in 1623; however, his designs were lost. The mechanisms described by Blaise Pascal (1623–1662) some twenty years after Schickard, and also by Gottfried von Leibniz (1646–1716) in the 1670s, formed the basis of eighteenth-century calculating machines. The mechanisms were trains of gear wheels, or their equivalent, and the arithmetic used was decimal.

To the Victorians, the technology problems of the day were perceived from a mechanical perspective, and likely to be solved by using steam. They were drawn to the mechanisation of factories, the railways and steamships, increasingly seen as essential to international trade and the retention of the highly profitable British Empire. But they did attempt computers. In the atmosphere of the great nineteenth-century flowering of mechanical engineering,5 the earlier mechanical calculating devices were the obvious prior thinking to be drawn on when attempting a computer design. Charles Babbage began to work on his ‘difference engine’ in 1822, a mechanical computer on the grand scale. It would have had 25,000 parts, weighed 15 tonnes and stood about 2.5 metres high. The difference engine was barely even a precursor to our computers, certainly not the real thing. Yet it represented the end, not the beginning, of the Victorian incursion into automatic computing, and it was never completed in Babbage’s lifetime. Why did his machine have no early successors, and none electrical?

The electromechanical computer could have happened; the necessary techniques were all there. By the 1850s, among batteries extensively used to power telegraphs, Grove cells were established as particularly reliable. Galvanometers as current indicators were commonplace. An electromagnetic relay was invented by Charles Wheatstone in the 1830s and described in the Cooke and Wheatstone telegraph patent of 1837. A relay (in this context) is an
electric switch operated by an electromagnet. It was also invented independently in the US by Leonard Gale (1800–1883), in a superior form. An electrical device which can perform simple logic functions is called a gate—the basic component of all digital computers. The relay is an electromechanical gate. So by the middle of the nineteenth century the Victorians had relays as gates, a reliable source of power and sensitive indicators—all the techniques needed for building computers.

Even the science which could have accelerated the doxastic method was in place. Leibniz considered binary arithmetic in the seventeenth century, but it was George Boole, researching symbolic logic, who pioneered the binary symbolism essential to modern computer architecture, publishing his findings in 1854.6 Yet it did not happen. Enthusiasm for electrical technology was then centred on extending the range of the telegraph and on undersea telegraphy. Any affinity with computing was seemingly not perceived at that time. So a Victorian electrical computer, though feasible, was culturally inhibited, as a mediaeval flying machine had been.

Eighty years passed before, in 1937, Claude Shannon had the genius to apply Boole’s ideas to electrical circuits. Four years later Konrad Zuse (1910–1995) demonstrated his Z3 relay-based computer, but it was too late to have much influence. In only two more years Tommy Flowers completed Colossus, the first successful electronic digital computer, replacing relays with much faster valves (electron tubes). Colossus made possible magnificent achievements at Bletchley Park in breaking the wartime Nazi codes, themselves thought unbreakable because they used complex mechanical encryption.
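The point about relays as gates is easily made concrete. The sketch below is purely illustrative (modern Python standing in for wired relay contacts), but the logic is exactly what series and parallel contacts provide, and what Boole's 1854 symbolism describes: a handful of gates already yields binary arithmetic.

```python
# Illustrative sketch only: relay contacts behave as logic gates.
# Contacts in series give AND, contacts in parallel give OR, and a
# normally-closed contact opened by its coil gives NOT.

def AND(a: bool, b: bool) -> bool:
    return a and b          # two contacts in series

def OR(a: bool, b: bool) -> bool:
    return a or b           # two contacts in parallel

def NOT(a: bool) -> bool:
    return not a            # normally-closed contact

def half_adder(a: bool, b: bool):
    """Add two binary digits: gates, suitably combined, already give arithmetic."""
    total = AND(OR(a, b), NOT(AND(a, b)))   # exclusive-OR built from the gates above
    carry = AND(a, b)
    return total, carry

# Binary 1 + 1 = 10: sum digit 0 and a carry of 1.
print(half_adder(True, True))    # (False, True)
```

Chain such stages and multi-digit arithmetic follows, essentially what Zuse's relay-based Z3 eventually demonstrated.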
Humanity’s Missed Chances

Flying machines, computers, kites for propulsion and elevation—similar spectacular non-appearances could be instanced in many branches of technology. Why? The many artefacts our predecessors left behind bear eloquent witness that they were not less intelligent than we are, nor lacking in ingenuity or imagination. Nor is it good enough to answer simply that
these things did not appear because nobody thought of them. Why not? This is no vain or idle question, because had these events gone differently all history would have changed.

In thinking up new products, processes or services, achieving success is critically dependent on the richness and range of mental objects available to the designer. If crucial intrapsychic entities are missing or misunderstood it is certain that the design will not happen. Yet the range of internalised entities designers can access is set by the totality of their life experiences; in short, what they can do is strongly determined by the culture they live in. So people cannot design more than a little way outside their cultural context: they lack the intrapsychic resources required.

There is nothing inevitable about the exploitation of techniques. Sadly, not all the possibilities inherent in a technical competence are immediately obvious when it first becomes available. To deploy techniques at all, let alone successfully, a design will also be needed, drawn from its own autonomous domain of thought. Without relevant designs, we cannot exploit whatever techniques we have. This book has argued that the design process is not wholly objective. An exhaustive yet objective approach, considering and evaluating each possibility against all the other competing possibilities, is impossible to complete. All but the most trivial designs could exist in many alternative forms, a number so large that checking out all of them is not realistic. Designing cannot therefore be made algorithmic, and designs cannot be derived by calculation alone. All involve, at some point, the exercise of human judgement outside any algorithmic frame.

If only mediaeval engineers had established the building of successful gliders, they could have learned to travel the world with them, as they did with sailing ships. The Victorians would surely have used steam engines to propel them. Yet neither of these things ever happened. In the nineteenth century, British and US telegraph engineers had mastered all the techniques needed for an electrical computer, but for lack of designs it was never instantiated. In total contrast, the arrival of general-purpose digital machines changed all that. During the twentieth century, crucial internet developments—Google, Skype and the rest—required no new techniques, only imaginative design of software. The internet and general-purpose computers were already there, and also software design tools.
Only technology creates social revolutions undiluted by compromise. Yet the control loop always closes, as in all the best control systems it must. As we have seen, at a particular time technology does not create all the possible worlds, but only those needed and urged by the society in which it is then embedded. We still have a choice. When, in the chaos of technological advance, it creates something outside the generally desired bounds, like Hero’s steam turbine, Leonardo’s flying machine or Forlanini’s helicopter, the event is without influence, treated as a curiosity or a toy.
The Influence of Culture

The mediaeval flying machine and the Victorian electrical computer never happened. They did not appear because neither flourished in the intrapsychic environment of those who might have designed them. In these and all the many other cases of missed technological chances, nobody was thinking productively about the right sort of things; nobody asked the right questions. Helen Haste put it definitively: ‘Progress in any field of knowledge derives from asking the right new questions, at the right time—when the field is receptive to them. The absence of new emerging questions not only stunts a field, it perpetuates existing frames of thought, the metaphors and models which constrain innovation’. Although techniques were in place that could have been effective, nobody created designs to utilise them.

The golden key to exploiting available techniques for realising our dreams is, and must always be, an enhanced capacity for innovative design. For the most part, designers are creatures of their own time and place, yet even so, there are always a few exceptional people who push beyond the bounds of their familiar world. They are empowered to do so by an untypical background, so their store of intrapsychic objects contains uncommon things, special to them. Alas, they are rare. Could their number be increased? Any life experience capable of being successfully internalised might conceivably yield that golden key, and there is no sure way of telling the most fruitful ones in advance. Important channels can include:
• Practising new skills, and having working interactions with differently skilled people in varied contexts, which can transform a whole worldview.
• Awareness of history, and not merely in the hope of discerning trends, but also as a rich source of metaphors for resolving present-day challenges.
• A penchant for new experiences through travel, because it offers some escape from purely local cultural restrictions, and may facilitate realising how arbitrary all cultural patterns are.
• Immersion in the arts and literature, since the creative artist has the same challenge as the technologist, of envisaging something outside the ordinary run of things yet communicable to other people.
• Conversations with a friend, spouse or partner with a disparate intellectual background.

All of these have helped facilitate design creativity. The most effective preparation for innovative design may well seem the least obvious. Innovative technologists stock their intrapsychic domain with varied experiences, which helps them evade constraints on imagination in their time and place. Fluency in the techniques they will use, and understanding of the science behind them, is necessary but not sufficient. Designers need a rich foundation for creating new metaphors in their inner world. Steve Jobs argued: ‘A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions … The broader one’s understanding of the human experience, the better design we will have’. Great designers are educated broadly and, at best, eccentrically. Albert Einstein once said ‘Creativity is intelligence having fun’.

Innovators who use the doxastic method also have to face the challenge of combining the attributes of the maverick and the ‘safe pair of hands’, so that their innovations are born and then get implemented successfully. They must reconcile the paradox that it is impossible to advance technology without being both of their time and place, yet also outside them.7 This is why partnerships, like Gooch and Brunel, have often done well, but only if the maverick is in command, as with them.
Optional Futures

Older by far than the sciences, technology and its doxastic method were our settled adaptation for millennia. They are us. We cannot annul what has happened since we made our fateful choice. We are too numerous, have learned too many of nature’s secrets, and become too adept at using them. For us, time will not run backwards. So we follow the doxastic method where it takes us, but steer its direction by our perception of pressing human needs.

Brought together, techniques and design have led us to dominance in our world. Our turn towards them ensured our unique place among living things, a star part in Earth’s drama. That initial turn led to great civilisations, transcendent works of art and scientific understanding of the universe. We could not divest ourselves of them if we wished; we have no option but to use them as best we can. By the doxastic method of collective trial and discovery we have built our theatre, the stage set on which the drama of life on Earth is being played out. The backdrop and scenery are ever-changing, yet the action moves inexorably in one direction, and we cannot guess at the ending. For the fortunate, it has already trebled their expectation of life in a supportive world.8 Other species survive now only by our consent or inattention. The planet itself is at our mercy. As it develops, the enterprise of humanity encounters profound challenges, some material, others moral, and their resolution must command our enduring attention.
Notes

1. Susskind, J. (2018) Future Politics Oxford U. Press (Oxford, UK).
2. Gimpel, J. (1988) The Medieval Machine 2nd ed. Pimlico (London, UK).
3. By then it was too late for the person-carrying kite; airplanes were around the corner.
4. Laurenza, D., Taddei, M. & Zanon, E. (2006) Leonardo’s Machines David & Charles (Newton Abbot, UK).
5. Wilson, E. (2014) The Meaning of Human Existence Kindle Books (online).
6. Boole, G. (1854) An Investigation of the Laws of Thought (facsimile) Dover Publications (New York, USA).
7. Egan, T. (2014) ‘Creativity versus Quants’ New York Times March 21 (New York, USA).
8. Rosling, H. (2018) Factfulness Hodder and Stoughton (London, UK).
Appendix: A Clutch of Designers
Design depends on human judgement, and cannot be reduced to an algorithm. Some people have more talent for this extra-rational process than others, which is why there have always been great designers as well as indifferent ones. In every age and technology, in every nation and continent, there have been star designers. Charles Babbage (1791–1871), first to design a digital computer, was one of the stars. A few more follow, just a very personal pick beginning arbitrarily in the late eighteenth century.

Samuel Finley Breese Morse (1791–1872), an American, was a portrait painter by profession. Wrongly thought by many to have invented the electric telegraph, he did something far more significant: Morse designed and demonstrated the first use of serial binary digital communication. Hardly guessing what was to come, he launched our digital age.

Isambard Kingdom Brunel (1806–1859) is mostly remembered for the Great Western Railway, and the ship Great Eastern, largest in the world for a generation, but he influenced and encouraged innovation much more widely. It was with his active support that Cooke and Wheatstone established the world’s first commercial telegraph line in 1839, from London to Slough.
William Morris (1834–1896), a polymath whose fabrics and wallpapers are still in demand, was a leader of nineteenth-century design. Father of the Arts and Crafts movement, he is influential to this day.

‘Coco’ Chanel (1883–1971) brought ethnic themes into haute couture and invented the ‘little black dress’, a fashion standard. French, from a humble background, bizarre in her views and lifestyle, she was one of the three or four most influential couture designers of all time.

W. O. Bentley (1888–1971) trained as a railway engineer, but made a world reputation designing rotary aero engines and cars. His design innovations, including light alloy pistons and short-stroke engines, feature in mass-produced cars today.

Edwin Armstrong (1890–1954) made sound broadcasting viable. In 1914 he patented regeneration, offsetting inadequacies of early valves (US: tubes) and tuning circuits. Practical super-het design, canonical after 1935, was also Armstrong’s (1918). He proved FM viable (1919). His super-regeneration (1922) made the VHF and UHF radio bands usable. In 1954, hounded by patent litigation, Armstrong jumped from his sixth-floor apartment window.

Thomas Flowers (1905–1998) researched for the UK Post Office at Dollis Hill from 1930. In 1943, supported by Alan Turing, Flowers designed the electronic computer, Colossus, used for breaking German codes from February 1944. The Mark 2, with 2400 valves (tubes), gave intelligence vital for the D-Day European landings of June 1944. It shortened World War II and saved many lives.

Tom Kilburn (1921–2001) designed, and successfully built, the first stored-program computer (‘Baby’) at Manchester University in 1948. Before ‘Baby’, computers did not store their programs, which were set up manually. ‘Baby’ was a crucial step towards the first commercial computer. Kilburn also led the design of ‘Atlas’, then the world’s most powerful computer and the first supercomputer (1962).

Alan Kay (b. 1940), an American, led the design of Alto, the ancestor of all personal computers as we presently know them, originating the graphical user interface. To a virtuoso grasp of computer science, Kay added insights into education and psychology all too rare among engineers, as well as outstanding personal creativity. He gave us the ideas that led to laptops, tablets and e-books.
Federico Faggin (b. 1941), born in Italy and educated at Padua University, was the first to put a complete and functioning computer design, the 4004, on a single silicon chip. He did it using his own self-aligned silicon gate technique, which proved so advantageous that it became canonical in the MOS designs which soon dominated electronics hardware.

Adele Goldberg (b. 1945), an American software designer, perfected the widely used Smalltalk language. In the 1970s, with Alan Kay, she developed object-oriented programming and also the templates used to create software designs. All this was crucial to the development of the graphical user interface for the computer as we now know it.

James Dyson (b. 1947) was born in England and educated at the Royal College of Art. He achieved fame with the design of the cyclonic vacuum cleaner, sold in volume from the late 1980s and thereafter becoming canonical. From 1993 his company manufactured domestic appliances to his unique designs. He has used his status, reputation and money to combat the mid-twentieth-century flight from design in engineering education.

Timothy Berners-Lee (b. 1955) is English, an Oxford physics graduate. He designed the World Wide Web whilst working at CERN. A precursor called Enquire was instantiated in 1980, and a proposal for a mature system finalised in 1990. A web site was operating a year later. The server and first web browser were to Berners-Lee’s design. He made the internet viable, keystone of our modern world.

Jonathan Ive (b. 1967), who became senior vice-president of design at Apple, demonstrated his creativity in all their products, from the iMac—the first he masterminded and Apple’s timely lifesaver—right through to the MacBook Air, iPod, iPhone and iPad. This remarkable Englishman, educated at Newcastle Polytechnic, changed all our lives.
Index

Note: Page numbers followed by ‘n’ refer to notes.

A
Agriculture, 1, 18, 27, 239
Animals, 1, 3, 18, 21, 25, 49, 62, 166, 214, 237–239, 242

C
Complexity, 3, 19, 20, 22, 59, 70, 73–75, 137, 138, 183, 190, 213
Culture, 2, 14n5, 20, 22, 67, 186–189, 196, 221, 238–242, 250–252

D
DELAG, 3, 33, 105
Determinism, technological, 2
Digital, 3, 10, 14n3, 19, 20, 50, 70, 90, 97, 102–104, 132, 134, 138, 161, 164, 184, 185, 189, 194, 205–207, 213, 249, 250
Doxastic, 6, 8–13, 14n6, 15n12, 15n13, 15n14, 35, 45, 55, 61, 64, 67, 71, 76, 79, 88, 89, 109, 150, 158, 164, 176, 203, 208, 215, 221, 239, 242, 243, 249, 252, 253

E
Evolution, cultural, 2, 21, 61, 239–241
Extra-terrestrials, 3, 240

F
Forces, social, 3

M
Machines, general-purpose, 3, 20, 193, 194, 250
Microelectronics, 2, 19, 22, 50, 110, 212, 245

P
Precursor, 3, 34, 37, 38, 56–58, 89, 99–101, 103–106, 111, 114, 117, 137, 189, 248

R
Revolution, 1–3, 17–19, 23n2, 25, 27, 31, 34, 38, 47, 48, 51, 61, 65, 90, 91, 132, 188, 208, 209, 213, 220, 221, 245, 251

S
Science, 2, 3, 5–7, 9–14, 20, 22, 30, 34, 35, 38, 41n1, 42n14, 54, 71, 72, 81n3, 89, 91, 92, 111, 112, 137, 175, 211, 212, 214, 220–222, 237, 241, 247, 249, 252, 253

T
Technology, 1–3, 5–14, 14n5, 14n6, 15n14, 17–19, 21, 22, 23n2, 25–41, 45, 46, 48–51, 57, 61–65, 67–81, 83, 88, 90, 91, 95, 98, 100, 102, 104–106, 115, 119, 120, 123, 147, 153n6, 158, 161, 170, 176, 179, 183, 186, 188, 190, 194, 195, 209–223, 225, 226, 231–234, 239, 241–243, 245–253