HISTORY OF TECHNOLOGY
Editor
Ian Inkster
Institute of Historical Research, Senate House, University of London, London WC1E 7HU

EDITORIAL BOARD
Professor Hans-Joachim Braun, Universität der Bundeswehr Hamburg, Holstenhofweg 85, 22039 Hamburg, Germany
Professor R. A. Buchanan, School of Social Sciences, University of Bath, Claverton Down, Bath BA2 7AY, England
Professor H. Floris Cohen, Raiffeisenlaan 10, 3571 TD Utrecht, The Netherlands
Professor Mark Elvin, Research School of Pacific and Asian Studies, Australian National University, Canberra, ACT 0200, Australia
Dr Anna Guagnini, Dipartimento di Filosofia, Università di Bologna, Via Zamboni 38, 40126 Bologna, Italy
Professor A. Rupert Hall, FBA, 14 Ball Lane, Tackley, Oxfordshire OX5 3AG, England
Dr Richard Hills, Stanford Cottage, 47 Old Road, Mottram-in-Longdendale, Cheshire SK14 6LW, England
Dr Graham Hollister-Short, Imperial College, Sherfield Building, London SW7 2AZ, England
Dr A. G. Keller, Department of History, University of Leicester, University Road, Leicester LE1 7RH, England
Professor Carlo Poni, Via Filopanti 4, 40100 Bologna, Italy
Dr Satpal Sangwan, National Institute of Science, Technology and Development Studies, Dr K. S. Krishnan Road, New Delhi 110012, India
History of Technology Volume 28, 2008
Edited by Ian Inkster
Bloomsbury Academic An imprint of Bloomsbury Publishing Plc LON DON • OX F O R D • N E W YO R K • N E W D E L H I • SY DN EY
Bloomsbury Academic An imprint of Bloomsbury Publishing Plc 50 Bedford Square London WC1B 3DP UK
1385 Broadway New York NY 10018 USA
www.bloomsbury.com BLOOMSBURY, T&T CLARK and the Diana logo are trademarks of Bloomsbury Publishing Plc First published 2008 by Continuum International Publishing Group Copyright © Ian Inkster, 2008 The electronic edition published 2016 Ian Inkster has asserted his right under the Copyright, Designs and Patents Act, 1988, to be identified as Author of this work. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers. No responsibility for loss caused to any individual or organization acting on or refraining from action as a result of the material in this publication can be accepted by Bloomsbury or the author. British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library. ISBN: HB: 978-0-8264-3875-1 ePDF: 978-1-4411-4121-7 ePub: 978-1-3500-1909-6 Library of Congress Cataloguing-in-Publication Data A catalogue record for this book is available from the Library of Congress. Series: History of Technology, volume 28 Typeset by Fakenham Prepress Solutions, Fakenham, Norfolk NR21 8NN
Contents
The Contributors vii
Editorial ix
Notes for Contributors xiii
Special Issue: `By Whose Standards? Standardization, Stability and Uniformity in the History of Information and Electrical Technologies'
Edited by James Sumner and Graeme J. N. Gooday

JAMES SUMNER AND GRAEME J. N. GOODAY
Introduction 1
LAURA DENARDIS
IPv6: Standards Controversies around the Next-Generation Internet 15
ANDREW L. RUSSELL
Standardization across the Boundaries of the Bell System, 1920–38 37
STATHIS ARAPOSTATHIS
Morality, Locality and `Standardization' in the Work of British Consulting Electrical Engineers, 1880–1914 53
CHRIS OTTER
Perception, Standardization and Closure: The Case of Artificial Illumination 75
JAMES SUMNER
Standards and Compatibility: The Rise of the PC Computing Platform 101

FRANK VERAART
Basicode: Co-Producing a Microcomputer Esperanto 129
KAREN SAYER
Battery Birds, `Stimulighting' and `Twilighting': The Ecology of Standardized Poultry Technology 149

Contents of Former Volumes 169
The Contributors
Dr Stathis Arapostathis
AHRC Research Fellow, Centre for History and Philosophy of Science, Department of Philosophy, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, United Kingdom
Email: [email protected]

Dr Laura DeNardis
Executive Director, Yale Information Society Project; Lecturer and Associate Research Scholar, Yale Law School, 127 Wall Street, New Haven, CT 06511, United States of America
Email: [email protected]

Prof Graeme Gooday
Professor, Centre for History and Philosophy of Science, Department of Philosophy, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, United Kingdom
Email: [email protected]

Dr Christopher Otter
Assistant Professor, Department of History, Ohio State University, 106 Dulles Hall, 230 W. 17th Ave, Columbus, OH 43210, United States of America
Email: [email protected]

Dr Andrew L. Russell
Assistant Professor, Program in History, College of Arts and Letters, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030, United States of America
Email: [email protected]

Dr Karen Sayer
Senior Lecturer, Department of Humanities, Leeds Trinity and All Saints, Brownberrie Lane, Leeds LS18 5HD, United Kingdom
Email: [email protected]
Dr James Sumner
Lecturer, Centre for the History of Science, Technology and Medicine, University of Manchester, 2nd Floor, Simon Building, Brunswick Street, Manchester M13 9PL, United Kingdom
Email: [email protected]

Dr ir Frank Veraart
Assistant Professor, Industrial Engineering and Innovation Sciences, Technische Universiteit Eindhoven, Postbus 513, 5600 MB Eindhoven, The Netherlands
Email: [email protected]
Editorial JOURNALS UNDER THREAT: A JOINT RESPONSE FROM HISTORY OF SCIENCE, TECHNOLOGY AND MEDICINE EDITORS
We live in an age of metrics. All around us, things are being standardized, quantified, measured. Scholars concerned with the work of science and technology must regard this as a fascinating and crucial practical, cultural and intellectual phenomenon. Analysis of the roots and meaning of metrics and metrology has been a preoccupation of much of the best work in our field for the past quarter century at least. As practitioners of the interconnected disciplines that make up the field of science studies we understand how significant, contingent and uncertain can be the process of rendering nature and society in grades, classes and numbers.

We now confront a situation in which our own research work is being subjected to putatively precise accountancy by arbitrary and unaccountable agencies. Some may already be aware of the proposed European Reference Index for the Humanities (ERIH), an initiative originating with the European Science Foundation. The ERIH is an attempt to grade journals in the humanities – including `history and philosophy of science'. The initiative proposes a league table of academic journals, with premier, second and third divisions. According to the European Science Foundation, ERIH `aims initially to identify, and gain more visibility for, top-quality European Humanities research published in academic journals in, potentially, all European languages'. It is hoped `that ERIH will form the backbone of a fully-fledged research information system for the Humanities'. What is meant, however, is that ERIH will provide funding bodies and other agencies in Europe and elsewhere with an allegedly exact measure of research quality. In short, if research is published in a premier league journal it will be recognized as first rate; if it appears somewhere in the lower divisions, it will be rated (and not funded) accordingly.

This initiative is entirely defective in conception and execution. Consider the major issues of accountability and transparency.
The process of producing the graded list of journals in science studies was overseen by a committee of four (the membership is currently listed at http://www.esf.org/research-areas/humanities/research-infrastructures-including-erih/erih-governance-and-panels/erih-expert-panels.html). This committee cannot be considered representative. It was not selected in consultation with any of the various disciplinary organizations that currently represent our field, such as the European Association for the History of Medicine and Health, the Society for the Social History of Medicine, the British Society
for the History of Science, the History of Science Society, the Philosophy of Science Association, the Society for the History of Technology or the Society for Social Studies of Science. Journal editors were only belatedly informed of the process and its relevant criteria, and were not asked to provide any information regarding their publications. No indication has been given of the means through which the list was compiled, nor how it might be maintained in the future.

The ERIH depends on a fundamental misunderstanding of the conduct and publication of research in our field, and in the humanities in general. Journals' quality cannot be separated from their contents and their review processes. Great research may be published anywhere and in any language. Truly ground-breaking work may be more likely to appear from marginal, dissident or unexpected sources, rather than from a well-established and entrenched mainstream. Our journals are various, heterogeneous and distinct. Some are aimed at a broad, general and international readership, others are more specialized in their content and implied audience. Their scope and readership say nothing about the quality of their intellectual content. The ERIH, on the other hand, confuses internationality with quality in a way that is particularly prejudicial to specialist and non-English language journals.

In a recent report, the British Academy, with judicious understatement, concludes that `the European Reference Index for the Humanities as presently conceived does not represent a reliable way in which metrics of peer-reviewed publications can be constructed' (Peer Review: the Challenges for the Humanities and Social Sciences, September 2007: http://www.britac.ac.uk/reports/peer-review). Such exercises as ERIH can become self-fulfilling prophecies.
If such measures as ERIH are adopted as metrics by funding and other agencies, then many in our field will conclude that they have little choice other than to limit their publications to journals in the premier division. We will sustain fewer journals, much less diversity, and impoverish our discipline. Along with many others in our field, this Journal has concluded that we want no part of this dangerous and misguided exercise. This joint Editorial is being published in journals across the fields of history of science and science studies as an expression of our collective dissent and our refusal to allow our field to be managed and appraised in this fashion. We have asked the compilers of the ERIH to remove our journals' titles from their lists.

IAN INKSTER (History of Technology)
HANNE ANDERSEN (Centaurus)
ROGER ARIEW & MOTI FEINGOLD (Perspectives on Science)
A. K. BAG (Indian Journal of History of Science)
JUNE BARROW-GREEN & BENNO VAN DALEN (Historia Mathematica)
KEITH BENSON (History and Philosophy of the Life Sciences)
MARCO BERETTA (Nuncius)
MICHEL BLAY (Revue d'Histoire des Sciences)
JOHANNA BLEKER (Medizinhistorisches Journal)
CORNELIUS BORCK (Berichte zur Wissenschaftsgeschichte)
GEOF BOWKER & SUSAN LEIGH STAR (Science, Technology and Human Values)
WILLIAM R. BRICE (Oil-Industry History)
MASSIMO BUCCIANTINI & MICHELE CAMEROTA (Galilaeana: Journal of Galilean Studies)
JED BUCHWALD & JEREMY GRAY (Archive for History of Exact Sciences)
VINCENZO CAPPELLETTI & GUIDO CIMINO (Physis)
CATHRYN CARSON (Historical Studies in the Natural Sciences)
ANNAMARIA CIARALLO & GIOVANNI DI PASQUALE (Automata. Journal of Nature, Science and Technics in the Ancient World)
MARK CLARK & ALEX KELLER (ICON)
ROGER CLINE (International Journal for the History of Engineering & Technology)
STEPHEN CLUCAS & STEPHEN GAUKROGER (Intellectual History Review)
HAL COOK & ANNE HARDY (Medical History)
LEO CORRY, ALEXANDRE MÉTRAUX & JÜRGEN RENN (Science in Context)
BRIAN DOLAN & BILL LUCKIN (Social History of Medicine)
HILMAR DUERBECK & WAYNE ORCHISTON (Journal of Astronomical History & Heritage)
MORITZ EPPLE, MIKAEL HÅRD, HANS-JÖRG RHEINBERGER & VOLKER ROELCKE (NTM: Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin)
PAUL FARBER (Journal of the History of Biology)
MARY FISSELL & RANDALL PACKARD (Bulletin of the History of Medicine)
ROBERT FOX (Notes & Records of the Royal Society)
MARINA FRASCA SPADA (Studies in History and Philosophy of Science)
STEVEN FRENCH (Metascience)
ENRICO GIUSTI (Bollettino di Storia delle Scienze Matematiche)
JIM GOOD (History of the Human Sciences)
WILLEM HACKMANN (Bulletin of the Scientific Instrument Society)
ROBERT HALLEUX (Archives Internationales d'Histoire des Sciences)
BOSSE HOLMQVIST (Lychnos)
ROD HOME (Historical Records of Australian Science)
MICHAEL HOSKIN (Journal for the History of Astronomy)
NICK JARDINE (Studies in History and Philosophy of Biological and Biomedical Sciences)
TREVOR LEVERE (Annals of Science)
BERNARD LIGHTMAN (Isis)
CHRISTOPH LÜTHY (Early Science and Medicine)
MICHAEL LYNCH (Social Studies of Science)
STEPHEN MCCLUSKEY & CLIVE RUGGLES (Archaeoastronomy: The Journal of Astronomy in Culture)
PETER MORRIS (Ambix)
IWAN RHYS MORUS (History of Science)
E. CHARLES NELSON (Archives of Natural History)
IAN NICHOLSON (Journal of the History of the Behavioural Sciences)
EFTHYMIOS NICOLAIDIS (Kritiki: Critical Science and Education)
KATHY OLESKO (Osiris)
LILIANE PÉREZ (Documents pour l'Histoire des Techniques)
JOHN RIGDEN & ROGER H STUEWER (Physics in Perspective)
JULIO SAMSÓ (Suhayl: Journal for the History of the Exact and Natural Sciences in Islamic Civilisation)
SIMON SCHAFFER (British Journal for the History of Science)
NORBERT SCHAPPACHER (Revue d'Histoire des Mathématiques)
JOHN STAUDENMAIER SJ (Technology and Culture)
CLAIRE STROM (Agricultural History)
PAUL UNSCHULD (Sudhoffs Archiv)
PETER WEINGART (Minerva)
MICHIO YANO & KEN SAITO (SCIAMVS: Sources and Commentaries in Exact Sciences)
STEFAN ZAMECKI (Kwartalnik Historii Nauki i Techniki)
HUIB ZUIDERVAART (Studium: Tijdschrift voor Wetenschaps- en Universiteitsgeschiedenis/Revue de l'Histoire des Sciences et des Universités)
Notes for Contributors
Contributions are welcome and should be sent to the editor. They are considered on the understanding that they are previously unpublished in English and are not on offer to another journal. Papers in French and German will be considered for publication but an English summary will be required. The editor will also consider publishing English translations of papers already published in languages other than English. Include an abstract of 150–200 words. Authors who have passages originally in Cyrillic or oriental scripts should indicate the system of transliteration they have used. Be clear and consistent.

All papers should be rigorously documented, with references to primary and secondary sources typed separately from the text, double-line spaced and numbered consecutively. Cite as follows for:

BOOKS
1. David Gooding, Experiment and the Making of Meaning: Human Agency in Scientific Observation and Experiment (Dordrecht, 1990), 54–5.
Only name the publisher for a good reason.

Reference to a previous note:
3. Gooding, op. cit. (1), 43.
Titles of standard works may be cited by abbreviation: DNB, DBB, etc.

THESES
Cite University Microfilm order number or at least Dissertation Abstract number.

ARTICLES
13. Andrew Nahum, `The Rotary Aero Engine', Hist. Tech., 1986, 11: 125–66, esp. 139.
Please note the following guidelines for the submission and presentation of all contributions:
1. Type your manuscript on good-quality paper, on one side only and double-line spaced throughout. The text, including all endnotes, references and indented block quotes, should be in one typesize (if possible, 12 pt).
2. In the first instance, submit two copies only. Once the text has been agreed, then you need to submit three copies of the final version, one for the editor and two for the publishers. You should, of course, retain a copy for yourself.
3. Number the pages consecutively throughout (including endnotes and any figures/tables).
4. Spelling should conform to the latest edition of the Concise Oxford English Dictionary.
5. Quoted material of more than three lines should be indented, without quotation marks, and double-line spaced.
6. Use single quotes for shorter, non-indented quotations. For quotes within quotes, use double quotation marks.
7. The source of all extracts, illustrations, etc. should be cited and/or acknowledged.
8. Italic type should be indicated by underlining. Italics (i.e. underlining) should be used for foreign words and titles of books and journals. Articles in journals are not italicized but placed within single quotation marks.
9. Figures. Line drawings should be drawn boldly in black ink on stout white paper, feint-ruled paper or tracing paper. Photographs should be glossy prints of good contrast and well matched for tonal range. Each illustration must have a number and a caption. Xerox copies may be sent when the article is first submitted for consideration. Please do not send originals of photographs or transparencies but, if possible, have a good-quality copy made. While every care will be taken, the publishers cannot be held responsible for any loss or damage. Photographs or other illustrative material should be kept separate from the text. They should be keyed to your typescript with a note in the margin to indicate where they should appear. Provide a separate list of captions for the figures.
10. Notes should come at the end of the text as endnotes, double-line spaced.
11. It is the responsibility of the author to obtain copyright clearance for the use of previously published material and photographs.
Introduction JAMES SUMNER AND GRAEME J. N. GOODAY
DOES STANDARDIZATION MAKE THINGS STANDARD?
Technical standards, so often designed to promote their own invisibility, have received increasing analytical attention in recent years.1 Successful acts of standardization may redistribute labour, risk and responsibility, mediate the boundaries of human and automated activity, and re-present socially motivated changes as objective necessities. With the growth of modernity theory, depersonalized systems and standardized artefacts have been identified as a key factor in the emergence of the present-day social order.2 Management historians perceive standards-based systematization as responsible for bringing corporate behaviour into a `rational' state – or, alternatively, for concealing its contingent aspects.3 Historians of science and technology, similarly, have focused on the role of standardization, established via expert authority and precision measurement, in directing the course of everyday life.4 In cases ranging from the Whitworth screw, through units of absolute electrical resistance, to the information communication networks so culturally prominent in the twenty-first century, the emergence of uniform standards and expectations serves as a focus of enquiry and explanation.5

Yet, this well-established historiography of uniformity sits alongside an equally familiar motif: the seemingly unending ability of heterogeneity and cultural localism to survive and flourish in the modern world.6 How can we reconcile the two? The first step is to note that standardization is widely understood as the enterprise of specifying standards in general, which is not the same thing as achieving universal acceptance for any standard in particular. As the old joke has it, `The nice thing about standards is that you have so many to choose from'.7 Increased standards-setting activity may actually frustrate convergence, by helping to articulate and stabilize a range of possibilities.
Standardization, moreover, may be sought on distinct and conflicting criteria: in a world where conditions and opportunities for production vary, insisting on uniform product standards can lead to variations in process, and vice versa. On top of this, there is the role of the technological consumer: when the uniformity of `modern' artefacts becomes conspicuous, individuals and groups often rebel, expressing their cultural identities by personalizing their preferred configurations or reclaiming abandoned options. Like mundanity and creativity, the homogeneous and the heterogeneous seem inextricably entwined.
This tension informs current work on standardization from a variety of disciplinary viewpoints. Economists formerly characterized standards-setting policy in terms of an optimal static balance between `freedom' and `order'; more recent work emphasizes the dynamic complexity of the acts of coordination involved.8 Studies of regulatory science, mass culture and European cities have found trends of globalizing homogeneity and local negotiation co-existing in striking tension over the long term.9 Some political scientists and management theorists favour understanding standards as a form of `soft law', potentially as coercive as formal directives, yet often derived from cultural norms, and subject to various forms of contest and revision.10 Accounts based on historically distant case studies, too, have begun to affirm the lesson that standards `are neither simplifying nor uniform in their effects'.11 The present collection, addressing standardization in broad terms across the nineteenth and twentieth centuries, is intended as a contribution to this literature.

A generation ago, historians of science and technology were closely influenced by social-constructivist accounts that typically saw trajectories of technical change ending in convergence to universal interpretations (consensus) or stable technologies (closure).12 Latourean analysis, in particular, saw metrological standardization as a foundation of `universal' technoscience.13 But historians find that their narratives rarely have such neat endpoints. Dissent and controversy are hard to erase altogether: steam locomotion, for instance, faced critics and competitors from its earliest beginnings until its (developed-world) obsolescence, while the technical arguments mustered against nuclear power seem unlikely to vanish in the foreseeable future. Convergence of material forms is correspondingly limited, as illustrated by the iconic case of the mains electrical power connector.
`Why isn't there a single plug that could be used everywhere?' is an oft-quoted condensation of the plea for uniformity: the answer lies not in any lack of a suitable standard, as the International Electrotechnical Commission (IEC) makes clear, but in the want of any determining reason for this standard to overcome an established, stable plurality.14 The technoscientific enterprise has, in fact, survived remarkably well for long periods without universal uniformity. Perhaps the only really famous victim of mixed standards was the catastrophic failure of a NASA orbiter in November 1999, resulting from a confusion between US conventional (`English') measurements and the metric system mandated by the project guidelines.15 Even here, humiliating as the loss was, the orbiter could be replaced; US engineers' commitment to feet and inches has not been, despite repeated efforts.16 This kind of exceptionalism may even be bolstered by such `modernizing' effects as globalization and large-scale manufacturing concentration, which permit parallel technical artefacts (as in the power-plug case) to be produced quickly and easily under multiple standards regimes.17 In previous work, one of the editors has drawn attention to the metrological fallacy: `... the view that well-defined universal standards and
units are somehow necessary and sufficient to facilitate the practice of measurement and thus that the history of measurement consists in explaining how past measurers overcome the lack thereof.'18 Here, we apply the same critical approach to uniform standards in general. If universal specifications are not a defining feature, necessary outcome or conscious goal of technoscience (or `modernity'), we must ask in whose interests, and for what purpose, the drive for homogeneity does arise – where, indeed, it does. Pressure for a `single plug that could be used everywhere' may reasonably be located among technological consumers (specifically, frustrated international travellers), but, elsewhere, a variety of interest groups and motivations may be at work: producers seeking monopoly advantage; mediators seeking a reliable basis for dealing; expert communities seeking to destroy rival communities' status by abolishing the demand for their expert judgement.

Moreover, when no standard is present, what prevails instead? Not chaos, by and large, and certainly not the cessation of technical change. One possible answer, as John Staudenmaier has observed, is negotiation.19 Standardization is itself a negotiated process, as all the accounts in this collection attest; indeed, standards have elsewhere been characterized as crystallized expressions of the coordinations (i.e. settled negotiations) between actors in technical systems.20 Yet, what an agreed standard provides, above all, is the means to replace some process of negotiation, formerly ongoing and mutable, with a settled understanding. Looking back from within a standardized culture, such negotiations often seem wilfully irrational: endless unloading and repackaging of goods to accommodate local spatial constraints; unreliable translations among a Babel of languages; incommensurable value definitions leading to heated, irresolvable disputes between traders.
It is the task of the enlightened historian to bring some symmetry to the account, considering also the ends of status, identity and integrity that negotiation may serve. Such is the aim of this collection.

STANDARDS STORIES
Historians tell stories: the seven accounts in this collection seek to characterize not states, but processes of standardization. All draw attention to the contested nature of these processes, but the controversies they focus on emerge at various levels of abstraction. Debate may concern whether standardization should be introduced at all, as discussed in Karen Sayer's chapter; or the appropriate conceptual arena for invoking standardization, as Stathis Arapostathis relates; or else which of the competing standards structures to accept, as demonstrated by Laura DeNardis. Some of the standards initiatives narrated here succeed, to varying degrees; some fail; others undergo so much conceptual change that the question becomes hard to judge or meaningless. What is common to all the accounts is that conforming to standards, or doing otherwise, is a conscious (and contingent) act on the part of nearly all those involved.
Of the seven chapters, DeNardis' perhaps has most in common with the `standards battle' narrative familiar from economic literature: two or more standards regimes, often incommensurable and mutually hostile, are presented as such to a community able to select between them, the result usually being that only one survives.21 For the historian, such accounts owe their interest to the variety of cultural and institutional meanings bound up in the points of technical distinction: the 7ft 0¼in rail gauge spoke for smooth running and Brunel's personal vision of an independent system; DIN specifications, in Cold War East Germany, for the validity of continued technical association with the West; the metric system in general, for the ethic of rational universalism.22 A variant is the `standards battle that never was', typified by the case of the Dvorak Simplified Keyboard. Here, a well specified but largely unadopted standard is given prominence in the narrative, counterpointing a dominant standard whose nature (often including serious alleged failures) would otherwise be invisible.23

What is distinctive about the case DeNardis presents is that it concerns competition not only between standards regimes, but between underlying modes of standardization. Open Systems Interconnection (OSI) offered an ethos of top-down prescription by an international authority; the opposing mentality, grounded among practising engineers in the Internet Engineering Task Force, saw legitimacy as constructed from the bottom up, striving to modify the established working norms by `rough consensus'. Being, in essence, professional standardizers, these internet architects were keenly attuned to the philosophies underlying successive proposals as they were distributed (in, for the time, an uncommonly systematic standard fashion) via the internet itself.
The OSI philosophy, though supported by `most Western European governments, the United Nations, influential vendors and user organizations', was ultimately rejected as the established advisory bodies endorsed the `bottom-up' approach – and, thereby, reinforced their own legitimacy. To some extent, this is a story about national style: there is a strong hint of `can-do' frontiersman rhetoric in the comments of members of the United States-focused IETF, and evident mistrust of the international(ist) bureaucracy promoting OSI. The senior standardizers responsible for the choice did much to conceal this element by portraying the issue as purely technical. We can, however, discern a species of technical advocacy that appears quite distinct from conventional political concerns, manifested above all in the IETF rejection of `kings, presidents and voting'. Authoritarianism, representative government and direct participatory democracy were all equally suspect, as liable to inhibit the established internet standardizers' paramount virtue: continued smooth running.

Andrew Russell's story, too, concerns a debate over which standards should apply, but introduces an extra degree of freedom. DeNardis' actors on all sides confined their battle to the conceptual territory of `the internet' (although the nature, governance and perhaps geographic scope of that internet were subject to negotiation). Russell, tracing the standards
activities of AT&T, shows the corporation's approach to be essentially unbounded: in doing so, he breaks new ground in characterizing the relationship between systems and their associated standards. The wonderful example of the hardware-store washers – useful, by pure happenstance, as `slugs' for the defrauding of AT&T's coin-operated telephones, and thus subject to sustained attack from the corporation's sophisticated standards lobbying – produces two revelations that deserve close attention. The first is that many at AT&T believed, in Russell's words, it would be `easier to change the world' outside the Bell System than to modify the coin boxes within it. The second is that they were partially correct: standards redefinition ultimately played a part in the diminution of the `slug racket', albeit after many years of wrangling, and with the aid of more conventional legislative and law-enforcement lobbying.

Russell frames this activity as extending across the boundaries of the Bell System (the boundaries, that is, of AT&T's switched telephone network). We might, however, prefer to interpret AT&T's business not (only) as telecommunications in general, but as standards-management in general. The AT&T engineers who pursued the washer campaign through ASA committees, though dealing largely with representatives of industries they did not know well, must have found the formalities and procedures of their task rather familiar. From a standards perspective, AT&T was not a monopoly: ranged against manufacturers in other sectors, it proved a powerful yet resistible combatant.
This perception may help us to avoid what Russell identifies as a caricature of the `sluggish, arrogant, and solipsistic' corporate monolith: further work may shed light on the role of the industry standards committee as an institutional space allowing competitors to cooperate, and collaborators to compete.24 Arapostathis' story, likewise, captures a battle between two communities, each seeking to legitimate and extend its activities through a process we may term `standardization'; but the distinction here is more radical. On one side, manufacturers of electrical equipment sought the kinds of regime addressed in most standards literature: agreed, uniform specifications of material dimensions, operating capacities, procedures and protocols. Opposing them were the consultants, a community whose status and livelihood depended, by definition, on the need for ad hoc measures in design and execution. Put another way, the consultant's role was to negotiate and achieve accommodation between the constraints of the site, the customer's expectations and the electrical system itself: if the manufacturers' standardization culture became dominant (extending to both site and expectations), this skill of negotiation would become redundant. In response to this situation, Arapostathis reports, the consultant electricians imposed an alternative standards rhetoric of their own: the public good, they argued, would be best served by regulation not of the practice of electrification, but of the profession, with agreed modes of negotiation affirmed by standard contractual formulations. In some cases,
Introduction
these consultants actively opposed uniform specifications as liable to promote a monopoly culture: far better that the skilled consultant choose, from among `many manufacturers with many standards', the best materials for his individual client. Crucially, this was promoted not as a subjective process, but as a disinterested selection among various precisely specified possibilities. In terms of the taxonomy offered by Timmermans and Berg, the consultants sought authoritative `procedural' and `performance' standards that would govern standards selection at the `design' level.25 Not all standardization, then, is even hypothetically part of a search for one standard. Arapostathis' account closely engages the themes of moral authority and trust, reflecting prevailing trends in the historiography of nineteenth-century British engineering and precision measurement.26 Trust is a form of codified expectation, as, of course, is all standardization; yet, trust is necessary only where there remains some process of negotiation. Whereas Arapostathis' conception of the consultant involves some innovation in the use of standards as an analytical category, Chris Otter's piece probes the limits of the approach. Readers accustomed to economic history and business strategy literature may question whether the issue of the gas mantle's eclipse or survival, for instance, is a `standardization' issue at all. Otter addresses no specifications, proprietors or systems formalisms, instead telling how the general principles of certain technical strategies were accepted, and displaced, at the cultural level. There is merit, however, in assimilating to standardization the processes generally described as `technological stabilization' and `closure'.27 In the ascendancy of the cultural norms associated with the mature gas mantle or electric filament bulb, what is (partially) closed is the same process of negotiation (over such questions as: what is acceptable? what is in keeping?
what modifications ought to be made?) seen to be curtailed in more generally recognized standards cases. Further, successfully stabilized technologies tend to resemble successful standard specifications in presenting the appearance of a unidirectional trajectory. Electric lamps, having been enshrined as the `basic illuminants in Western society', spread their light ever increasingly across that society and beyond. It is this trajectory that Otter deconstructs, drawing on the contextual approach that challenges linear or progressive narratives of technical development. Ostensibly `old' and discredited technical forms may actually represent the majority use pattern, be considered more appropriate to the context of their use and undergo innovations of their own; they may co-exist indefinitely with their `newer' rivals, and may even supplant them.28 As Donald MacKenzie has pointed out:

Instead of one predetermined path of advance, there is typically a constant turmoil of concepts, plans, and projects. From that turmoil, order (sometimes) emerges, and its emergence is of course what lends credibility to notions of `progress' or `natural trajectory'. With hindsight, the technology that succeeds usually does look like the best or the most natural next step.29
Such contingentism is now mainstream in the discussion of broad technical forms. For dominant standard specifications, however (especially as described by economic historians), a kind of determinism persists: indeed, the most striking feature of path-dependency theory is that the path has some determining agency in its own right.30 Whereas we may readily understand how electric lighting may, in some contexts, be challenged by the gas lamp, it appears difficult to conceive challenges to the standard rail gauge or QWERTY keyboard. If we interpret stable technical forms as `soft standards', however, we may realize that the paths associated with `harder', more formally specified standards are often meaningfully challenged by alternatives of lesser formalization. Standard-gauge rail systems are commonly considered `standard' as a result of their success relative to broader and narrower gauges; their crucial battles, in fact, have been against road and water transportation. To what extent has QWERTY displaced Cyrillic keyboards in Russia or Bulgaria? How has it fared against paper and pen, digitizing tablets and various voice transcription approaches in different periods? There is no reason why the alternatives in a standards debate must be equivalent in their mode of action or level of formal codification; and often much reason why they might not, as the narratives of DeNardis and Arapostathis in their different ways make clear. This shift to address the `softer' side of standardization, we may note, accords with ongoing analysis of recent innovations in public standard-setting practice, notably the ISO 9000 `quality management' initiative, which mandates in general terms such activities as goal-setting and record-keeping.
The growing body of literature concerned with such measures sees no problem in addressing as `standards' such entrenched cultural norms as the requirement to shake hands.31 One valuable insight from this literature is that the successful standardization of assertions and expectations about a practice may altogether fail to standardize the practice itself.32 Also germane to the `soft standards' perspective is the focus on classification as an explanatory category, elaborated in the work of Bowker and Star: classifications may be more or less rigorously formalized and, like standards, manipulate the boundaries of perceived objectivity to `valorize' or `silence' contested points of view.33 Notions of `harder' and `softer' standards are more fully explored in James Sumner's tale of the IBM PC-compatible computing platform. Sumner weighs up the explanatory relevance of a gamut of normative possibilities, from the specification of a microprocessor, through the contents of an instruction manual, to loose received notions of what a computer or its user interface ought to look like. The PC cannot be explained satisfactorily as a formal set of integrated standards: it is a `broad constellation of specifications, varying in their exactitude, across the levels of hardware, operating software, applications software and user culture'. Within this framework, Sumner is keen to stress that acceptance and rejection of standards are not the only possible responses to a standards agenda: compatibility deserves particular attention, being a form
of co-option of that agenda that nevertheless strives to retain some mode of working outside it, to subvert its terms, or perhaps to supersede it. Like Otter, Sumner frames his account as a defence of contingency: the hegemony of the PC was not as inevitable as is conventionally represented. Indeed, a narrative focusing purely on the success of IBM's promoted standards must be unsatisfactory, as its consequence was the opposite of `vendor lock-in'. IBM had enjoyed an almost AT&T-like dominance of large commercial computing systems, yet, within the `PC' culture it had ostensibly created, its hardware production activities declined to marginal status: the corporation could neither farm revenues from its standards agenda, nor determine its development. If there is a chief inheritor of IBM's cultural monopoly, it is Microsoft, a software corporation barely engaged in the hardware business. Microsoft reaped the rewards of proprietary control, to an extent that IBM did not, because computer users came to see software alone, not integrated hardware/software systems, as the fundamental defining characteristic of use. The significance of this kind of perceptual shift again demonstrates the value of tracing the interplay between asymmetrical standardization (or stabilization) agendas. Frank Veraart's approach, too, softens the boundaries of how standardization is seen to act: guidelines for the structuring of computer programs in the Basic language, though interpreted subjectively by users, were ultimately refined into algorithmic rigour and used as a basis for automated interconversion. Veraart aptly characterizes the Basicode specification as a `microcomputer Esperanto': a consciously designed universal language. Universalism, as an ethical principle in standards initiatives, is of course rather different from the drive to apply a given standard universally. 
Instead of seeking to `change the world' into conformity with one common standard (as did Russell's innovative monopolists), the universalist enthusiasts of Basicode hoped to evolve a common mode of action, suited to the world as it stood, using formalized translation to surmount divergence among the dialects of Basic. A similar ethos informed the UCSD `p-System' discussed briefly by Sumner; indeed, the approach has its precursors in mechanical cases, including the iconic railway break-of-gauge problem. Universalism, however, tends not to prevail: heterogeneous standards, though mutually incommensurable, have advantages particular to the contexts of their production and use. Esperanto, borrowing elements from many language communities, has a user base in none. Basicode's automatic translation came at the expense of denying all its programmers certain possibilities conventional in their proprietary Basic dialects. The price of obviating such restrictions, as seen in the functionally elegant p-System, is a degree of complexity that slows the system and may call for drastically improved technical resources – at which point, the object of universal running across available systems is defeated. By common consent, the world's second language is not Esperanto, but English; adoption of the p-System wilted in the face of Microsoft's PC-DOS. Was Basicode, correspondingly, trumped by the emerging dominance of a proprietary Basic? No. Much as in the PC hegemony case, perceptions shifted to the extent that the question no longer held meaning. Basic, with its familiar English-language borrowings, had achieved notability around the beginning of the microcomputer boom as the coding language of the wider public;34 by the 1990s, with microcomputers well established as software-playing consumer durables, the public no longer coded, while professional coders turned to other languages. The ultimate consequences for the Basicode standard are particularly intriguing. With the relevant mass culture gone, the hobbyist subculture that had created the standard re-appropriated it, directing it towards teaching and exemplification rather than applications development. Faced with their explicit avowal – `We do not strive for a system to make professional programs. We do not want a Basicode where everything is possible' – we cannot treat the undoubted marginalization of Basicode as an instance of failure. All six of the papers addressed so far engage cases with a principally electrotechnical or informatic dimension. While this is largely a consequence of the project's IEC sponsorship, it is true that the standardization of life processes was, until recently, somewhat overlooked; it is now a thriving field, however, and deserves to be better integrated with the literature studied by historians of the physical sciences and engineering. Russell, in his overview of the field, highlights the crucial role of standardized organisms such as Drosophila in the development of scientific consensus.35 We might add literature on the rise of evidence-based medicine, the standardization of vaccines and drugs, and the increasingly informatic character of genetics research.36 Sayer, who provides our seventh paper, offers a useful bridge towards this literature: her chickens and eggs are themselves profoundly electrotechnical.
Parallels with Thomas Hughes' classic account of the nature of technical systems, articulated with reference to electrical power, are unmistakeable.37 Chickens, barn, lighting and electrical supply form the interacting components of a system; the system's boundaries are also the boundaries of its maintainers' ability to control events directly. The system has a variety of inputs, and one valued output – egg supply – that serves straightforwardly as a metric for problem solving. The key purpose of the artificial lighting regime is to induce chickens to lay all year round, rather than seasonally: this is the quintessential engineer's project of levelling the load graph, bringing the output at any point in time as close as possible to the average. The chief advantage of electrical lighting over alternatives is not cost or convenience, but the controllability needed for this project of load-levelling. The spread of systematic egg production across Britain followed the spread of ready electrical inputs from the National Grid; the distribution of output eggs was similarly systematic in character, customer satisfaction resting on the standardizing initiative of the Lion Mark test process. At the same time, standardized life processes seem to provoke tensions
not found in the physical and informatic cases, particularly where, as in the case of the breakfast egg, they are routinely encountered by a mass consuming public. While the standardization of internet protocol packets is generally deemed a benign necessity (by those who have the remotest inkling that such entities exist), the standardized egg is suspect. Sayer highlights the vexed issue of consumer demand for the natural in agricultural production: `technology' and `nature' are seen to be opposed, and the marketers' guile is stretched to its utmost in reconciling hazily pastoral rhetoric with reliability guarantees founded on standards-based systematization. Equally prized, and similarly counterpointed to the `standard' in popular thinking, is the idea of the authentic: authenticity is a consideration as regards not only agriculture, but also such technological commodities as beer, furniture and `vintage' equipment of all kinds.38 Yet, some forms of standardization seem to count in favour of Nature and authenticity. Sayer points out that concerned consumers today are increasingly drawn to the organic sector – a phenomenon based on distinct standards of permitted production technology and backed by an extensive certification regime. The nature of `Nature', we may note in closing, is not a new concern for standardization theorists. In the 1920s, Albert W. Whitney, a mathematician who had become an influential consultant to the insurance industry, became concerned with the issue that frames our own investigation: the state of tension between stability and freedom. Whitney drew a direct analogy between planned technical development by humans and evolutionary nature.
Both produced not a continuum of forms, but a `discrete and actually enumerable ordered assemblage'; this `sameness' was, however, accompanied in both cases by `a strong flavour of variety and individuality'.39 Both tendencies were crucial to Whitney's conception of progress, and both were indefinitely sustainable in a relationship that Whitney, through explicit gendering, presented as a marriage:

Variation is creative, it pioneers the advance; standardization is conservational, it seizes the advance and establishes it as an actual concrete fact. Variation is primarily concerned with quality, standardization is primarily concerned with quantity . . . If nature had no mechanism for fixing and holding the type, she would have no way of capitalizing her discoveries . . . Variation is the active, creative, masculine force in evolution; standardization is the passive, brooding, conservational, feminine force out of which comes the potency of the next advance . . . Standardization is thus the liberator that relegates the problems that have already been solved to their proper place, namely to the field of routine, and leaves the creative faculties free for the problems that are still unsolved. Standardization from this point of view is thus an indispensable ally of the creative genius.40

Similarly, in human production, says Whitney, whereas the opponents of standardization characterize it as `producing a world of universal, dull mediocrity in place of the world of color and scintillating lights and
shadows and heights and depths that we have under the play of individual initiative', its stabilizing effect in fact protects us from the `mad, restless, wearying world of infinite but meaningless variety' that would otherwise ensue; it is nothing more than `kiln-dried custom', necessary for the creative act to function.41 Whitney, who as an insurance consultant focused particularly on safety issues, was himself professionally engaged in promoting standards and norms; most of the standards participants discussed in this collection, we suspect, would endorse his legitimation of the standardizer's role as part not only of `modern' life, but of life in general. Yet, his characterization of the marriage between uniformity and variation offers no direction on how the matrimonial responsibilities should be decided: how far should we standardize, and in what cases, and why? As our contributors demonstrate, this question admits of many localized responses, but no standard answer.

ACKNOWLEDGEMENTS
This introduction has benefited greatly from comments by the contributors, particularly Andrew Russell and Stathis Arapostathis, and by two anonymous referees.

Notes and References
1. Excellent surveys of recent and classic literature on technological standardization exist from a variety of disciplinary standpoints. The most influential texts in the history of science and technology are noted in A. Slaton and J. Abbate, `The Hidden Lives of Standards: Technical Prescriptions and the Transformation of Work in America', in M. T. Allen and G. Hecht (eds), Technologies of Power (Cambridge, MA, 2001), 95–143. A more detailed survey, by a historian of technology addressing standards practitioners, is A. Russell, `Standardization in History: A Review Essay with an Eye to the Future', in S. Bolin (ed.), The Standards Edge: Future Generations (Ann Arbor, 2005), 247–60. For the science and technology studies tradition in relation to economic and policy literature, see T. Egyedi, `A Research Autobiography from an STS Perspective', in J. Schueler, A. Fickers and A. Hommels (eds), Bargaining Norms, Arguing Standards (The Hague, 2008), 34–47. For economic theory, see K. Blind, The Economics of Standards: Theory, Evidence, Policy (Cheltenham, 2004), 14–54. For socially informed perspectives in political science, see N. Brunsson et al., A World of Standards (Oxford, 2000), 1–17. Some critique of the overall state of the field is offered in J. Yates and C. N. Murphy, `From Setting National Standards to Coordinating International Standards: The Formation of the ISO', Business and Economic History On-Line, 2006, 4: 1–25.
2. A. Giddens, The Consequences of Modernity (Cambridge, 1990); U. Beck, Risk Society: Towards a New Modernity (London, 1992). A useful review of some relevant literature is T. Misa, `The Compelling Tangle of Modernity and Technology', in T. Misa, P. Brey and A. Feenberg (eds), Modernity and Technology (Cambridge, MA, 2003), 1–30.
3. A. Chandler, The Visible Hand: The Managerial Revolution in American Business (Cambridge, MA, 1977); and, for a contingentist response, Y.
Shenhav, Manufacturing Rationality: The Engineering Foundations of the Managerial Revolution (Oxford, 1999).
4. Notable cases addressing state institutions in this context are P. Lundgreen, `Measures for Objectivity in the Public Interest', in P. Lundgreen, Standardization, Testing, Regulation: Studies in the History of the Science-Based Regulatory State (Bielefeld, 1986); D. Cahan, An Institute for an Empire: The Physikalisch-Technische Reichsanstalt, 1871–1918 (Cambridge, 1989).
5. A. E. Musson, `Joseph Whitworth and the Growth of Mass-Production Engineering', Business History, 1975, 17: 109–49; S. Schaffer, `Late Victorian Metrology and its
Instrumentation: A Manufactory of Ohms', in R. Bud and S. Cozzens (eds), Invisible Connections: Instruments, Institutions, and Science (Bellingham, WA, 1992), 24–55; J. Abbate, Inventing the Internet (Cambridge, MA, 1999).
6. G. Basalla, The Evolution of Technology (Cambridge, 1988); N. Oudshoorn and T. Pinch (eds), How Users Matter (Cambridge, MA, 2003); C. Geertz, Local Knowledge (New York, 1983).
7. This aphorism appears in Andrew S. Tanenbaum's textbook, Computer Networks, 1st edn (Upper Saddle River, NJ, 1981), 168 – itself a standard work – and is widely attributed to Tanenbaum, though the underlying sentiment is older. Tanenbaum adds that if you do not like the available standards, `you can just wait for next year's model'.
8. P. David, `Standardization Policies for Network Technologies: The Flux between Freedom and Order Revisited', in R. Hawkins, R. Mansell and J. Skea (eds), Standards, Innovation and Competitiveness: The Politics and Economics of Standards in Natural and Technical Environments (Aldershot, 1995).
9. H. Rothstein et al., `Regulatory Science, Europeanization, and the Control of Agrochemicals', Science, Technology and Human Values, 1999, 24(2): 241–64; T. Misa, Leonardo to the Internet: Technology and Culture from the Renaissance to the Present (Baltimore, 2004), 225–9; M. Hård and T. Misa (eds), Urban Machinery: Inside Modern European Cities (Cambridge, MA, 2008).
10. Brunsson et al., op. cit. (1), esp. 3–13; S. Krislov, How Nations Choose Product Standards and Standards Change Nations (Pittsburgh, 1997).
11. Slaton and Abbate, op. cit. (1), 136; and see also M. Hård and A. Jamison (eds), The Intellectual Appropriation of Technology: Discourses on Modernity, 1900–1939 (Cambridge, MA, 2003); B. Marsden and C. Smith, Engineering Empires (Basingstoke, 2005).
12. Characterizations of `closure' are usually informed by the account presented in T. Pinch and W. Bijker, `The Social Construction of Facts and Artifacts', in W. Bijker, T.
Hughes and T. Pinch (eds), The Social Construction of Technological Systems (Cambridge, MA, 1987), 17–50, on 44–6. Bijker later nuanced the account, preferring the term `stabilization' to refer to the semantic conventionalization of a technology within one social group, and `closure' for the shift to a conventional understanding across social groups (i.e. the decline of interpretive flexibility): W. Bijker, Of Bicycles, Bakelites, and Bulbs: Towards a Theory of Technological Change (Cambridge, MA, 1995), 84–8.
13. In particular, Joseph O'Connell, `Metrology: The Creation of Universality by the Circulation of Particulars', Social Studies of Science, 1993, 23(1): 129–73.
14. `Why Are There So Many Different Plugs and Sockets?', n.d., IEC website, available online at www.iec.ch/zone/plugsocket/ps_intro.htm, accessed 13 August 2008.
15. Arthur Stephenson et al., Mars Climate Orbiter Mishap Investigation Board Phase 1 Report (1999), available online at ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf, accessed 19 June 2008.
16. For the roots of anti-metrication activity in US engineering, see Shenhav, op. cit. (3), 60–1.
17. Krislov, op. cit. (10), 23.
18. G. Gooday, The Morals of Measurement (Cambridge, 2004), 11, emphasis added.
19. J. Staudenmaier, `The Politics of Successful Technologies', in S. Cutcliffe and R. Post (eds), In Context: History and the History of Technology (Cranbury, NJ, 1989), 150–71, on 157–61.
20. S. Schmidt and R. Werle, Coordinating Technology: Studies in the International Standardization of Telecommunications (Cambridge, MA, 1998).
21. For an overview of literature in this tradition, see V. Stango, `The Economics of Standards Wars', Review of Network Economics, 2004, 3(1): 1–19. Notable standards-battle case studies not cited by Stango are P. David and J. Bunn, `The Economics of Gateway Technologies and Network Evolution: Lessons from Electricity Supply History', Information Economics and Policy, 1988, 3(2): 165–202; M. Cusumano, Y.
Mylonadis and R. Rosenbloom, `Strategic Maneuvering and Mass-Market Dynamics: The Triumph of VHS over Beta', Business History Review, 1992, 66: 51–94.
22. Marsden and Smith, op. cit. (11), 151–5; R. Stokes, Constructing Socialism (Baltimore, 2000), 117–25; K. Alder, The Measure of All Things (London, 2002).
23. For further details, see Sumner's contribution to this volume.
24. A theme addressed briefly with respect to the railway case in Chandler, op. cit. (3),
122–44; and see also C. Shapiro, `Setting Compatibility Standards: Cooperation or Collusion', in R. Dreyfuss, D. Zimmermann and H. First (eds), Expanding the Boundaries of Intellectual Property: Innovation Policy for the Knowledge Society (Oxford, 2001), 81–101.
25. S. Timmermans and M. Berg, The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care (Philadelphia, 2003), 24–5. Modes of standardization that address practice, as opposed to output, have drawn increasing attention from management theorists, as formal standards-setting bodies have become increasingly engaged with the management process: Brunsson et al., op. cit. (1), 4–5, 71–84.
26. Gooday, op. cit. (18); T. Porter, `Precision and Trust: Early Victorian Insurance and the Politics of Calculation', in M. Wise (ed.), The Values of Precision (Princeton, 1997), 173–97.
27. A similar project was undertaken by Schmidt and Werle: inspired by Bijker's 1990s characterization of `stabilization' (note 12, above), they present stabilized artefacts, having semiotic as well as material `obduracy', gaining standard-like agency in the world. Schmidt and Werle's research focus, however, is on standards committees, and their account is therefore framed by the distinction between the actualities of stability and the potentialities of standards; the focus in our collection is on standardization in practice. Schmidt and Werle, op. cit. (20), 19–20.
28. See, in particular, D. Edgerton, The Shock of the Old (London, 2007). On the problematics of `technological failure' assessed in light of this kind of contingentist framework, see K. Lipartito, `Picturephone and the Information Age: The Social Meaning of Failure', Technology and Culture, 2003, 44(1): 50–81.
29. D. Mackenzie, `Introduction', in D. Mackenzie, Knowing Machines: Essays on Technical Change (Cambridge, MA, 1996), 6, emphasis added.
30. See the extensive literature proceeding from P.
David, `Clio and the Economics of QWERTY', American Economic Review, 1985, 75(2): 332–7.
31. Brunsson et al., op. cit. (1), 13.
32. Brunsson et al., op. cit. (1), 145.
33. G. C. Bowker and S. L. Star, Sorting Things Out: Classification and its Consequences (Cambridge, MA, 1999). In Bowker and Star's conception, standards arise when classification is applied, and themselves have the power to classify, in a manner that persists over time and/or spatial and cultural distance: see 5, 13–16.
34. `Basic is the people's language!' was jocularly presented as a campaign slogan in the foundational activist–enthusiast newsletter People's Computer Company, 1972, 1(1), October: back cover.
35. Russell, op. cit. (1). Russell draws our attention to the writings of Robert Kohler (on Drosophila), Karen Rader (on standardized mouse strains for biomedical research) and Daniel Todes (on the industrially inspired physiological research of Ivan Pavlov).
36. Timmermans and Berg, op. cit. (25); C. Bonah, `The ``Experimental Stable'' of the BCG Vaccine: Safety, Efficacy, Proof and Standards, 1921–1933', Studies in History and Philosophy of Biological and Biomedical Sciences, 2005, 36: 696–721; C. Gradmann and J. Simon (eds), Evaluations: Standardising Pharmaceutical Agents, 1890–1960 (Basingstoke, forthcoming); S. Rogers and A. Cambrosio, `Making a New Technology Work: The Standardization and Regulation of Microarrays', Yale Journal of Biology and Medicine, 2007, 80: 165–78.
37. T. Hughes, Networks of Power (Baltimore, 1983); T. Hughes, `The Evolution of Large Technological Systems', in W. Bijker, T. Hughes and T. Pinch (eds), The Social Construction of Technological Systems (Cambridge, MA, 1987), 51–82.
38. Addressing the brewery case, one of the editors has written on attempts to finesse the conceptual divide between systematization and authenticity, through the grounding of a rhetoric of `specialness' in the particularities of large-scale production. J.
Sumner, `Status, Scale and Secret Ingredients: The Retrospective Invention of London Porter', History and Technology, 2008, 24(3): 289–306.
39. A. Whitney, `The Place of Standardization in Modern Life', Annals of the American Academy of Political and Social Science, 1928, 137: 32–38, on 33. Our thanks to Andrew Russell for drawing our attention to this source. For some comparable past approaches, see Basalla, op. cit. (6), 1–25.
40. Whitney, op. cit. (39), 34–35.
41. Whitney, op. cit. (39), 34–35, 37.
History of Technology, Volume Twenty-eight, 2008
IPv6: standards controversies around the next-generation Internet
LAURA DENARDIS
INTRODUCTION
Information and communication technology devices can exchange information only if they adhere to agreed-upon technical rules, or protocols, that provide structure to the binary code underlying digital information exchange. On the surface, these technical standards might appear socially insignificant. Yet, as historian Ken Alder, writing about the political economy of the emerging metric system during the French Revolution, explains:

[A]t the core of `universal standards' commonly taken to be products of objective science lies the historically contingent, and further . . . these seemingly `natural' standards express the specific, if paradoxical, agendas of specific social and economic interests.1

This historical account describes the origin of Internet Protocol version 6 (IPv6), an Internet routing and addressing standard designed to exponentially expand the number of devices able to connect to the Internet. To communicate over the Internet, devices must use a globally unique IP address. In 1990, the Internet standards community identified a crucial design concern: under existing specifications, the supply of potential addresses would eventually become depleted. The solution, it was generally agreed, involved a redesign of the protocols involved; but approaches to the problem varied. Against a backdrop of Internet globalization and competing international protocol alternatives, the standards community selecting the new Internet protocol established a guideline to evaluate competing alternatives based on what they described as objective technical criteria independent of sociological considerations or market factors. This account of the origins of IPv6 describes how the issue of technical protocol selection was also an issue of institutional and international power selection between the prevailing Internet establishment and later Internet entrants positioned to change the global balance of power and control over the Internet's architecture.
The selection of IPv6 raises two historical themes. First, the issue of standards selection was also an issue of economic and political power selection among an entrenched institutional structure of trusted insiders, an internationally expanding sphere of stakeholders, dominant networking vendors, and newer market entrants. Secondly, the account demonstrates how standards selection can occur outside the realm of market economics, with the Internet standards community viewing the protocol selection as too complex for market mechanisms. The selection of IPv6 ultimately reinforces how technical standards are not only technical design decisions, but have inherent economic, institutional, and political implications.

History of Technology, Volume Twenty-eight, 2008

INTERNATIONALIZATION
In 1990, some members of the Internet governance institutions responsible for setting the Internet's architectural directions began raising concerns about a potential global shortage of IP addresses.2 Each device exchanging information over the Internet uses a unique binary number (IP address) identifying its virtual location, loosely analogous to a unique postal address identifying a home's physical location. The established addressing standard, Internet Protocol version 4 (IPv4), prevalent since the early 1980s, specified each Internet address as a unique 32-bit number such as 01101001001010100101100011111010.3 This address length provided 2³², or nearly 4.3 billion, unique Internet addresses. To ensure that assigned addresses were globally unique, addresses were, at the time, allocated by a centralized organization called the Internet Assigned Numbers Authority (IANA), located at the University of Southern California's Information Sciences Institute (USC-ISI). Internet governance institutions, especially the Internet Activities Board (IAB), had considerable influence over the Internet's direction. They were responsible for setting Internet standards, resolving technical issues, managing the Request for Comments (RFC) system by which standards and other proposals were published, performing strategic planning, and serving as an international technical policy liaison. The IAB had its roots in the Internet Configuration Control Board (ICCB), founded by the Defense Advanced Research Projects Agency (DARPA) in 1979, at a time when the US Department of Defense funded the bulk of networking development through a variety of US institutions. Reflecting this trajectory, the eleven men comprising the IAB in 1990 were primarily American and worked for corporations, universities, and research facilities.
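The address arithmetic above is easily checked. The following Python sketch (an editorial illustration, not part of the original account) computes the size of the 32-bit address space and converts the binary address quoted in the text into its familiar dotted-quad form:

```python
import ipaddress

# IPv4 specifies a 32-bit address, so the total address space is 2**32.
total_addresses = 2 ** 32
print(total_addresses)  # 4294967296, i.e. nearly 4.3 billion

# The 32-bit binary number quoted in the text, rendered as a dotted quad.
bits = "01101001001010100101100011111010"
address = ipaddress.IPv4Address(int(bits, 2))
print(address)  # 105.42.88.250
```

Each of the four octets of the dotted quad corresponds to eight bits of the binary string.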
The IAB had established the Internet Engineering Task Force (IETF) in 1986 as a subsidiary task force serving as the primary standards organization developing Internet protocol drafts. IETF working groups conducted the bulk of standards development. At an August 1990 IETF meeting in Vancouver, some participants projected that the current address assignment rate would deplete much of the Internet address space by March of 1994.4 Furthermore, IAB members acknowledged the `rapidly growing concern internationally' that address allocation lay primarily in the hands of the US-centric IANA. The two general assumptions were that the `IP address space is a scarce resource' and that, in the future, a more international, non-military, and non-profit institution might assume responsibility for address allocations.5 After several months of discussions, IAB chairman Vinton Cerf issued a recommendation to the Federal Networking Council (FNC), then the US Government's coordinating body for agencies supporting the Internet, that responsibility for assigning remaining addresses be delegated to international organizations, albeit with the IANA still retaining centralized control:

With the rapid escalation of the number of networks in the Internet and its concurrent internationalization, it is timely to consider further delegation of assignment and registration authority on an international basis. It is also essential to take into consideration that such identifiers, particularly network identifiers of class A and B type [i.e., those used for networks capable of including very large numbers of hosts, typically used by the largest organizations], will become an increasingly scarce commodity whose allocation must be handled with thoughtful care.6

The IAB believed that the internationalization and growth of the Internet warranted a global redistribution of remaining addresses, but also recognized that this institutional tactic alone was insufficient for meeting the requirements of the rapidly expanding Internet. The IAB held a difficult two-day meeting in January 1991, at the USC-ISI in Marina del Rey, California, to discuss future directions for the Internet.
The IAB pondered whether it could `acquire a better international perspective' by supporting international protocols, increasing international membership in the IAB, and holding some meetings outside the United States.7 The topic of Internet internationalization included the controversial issue of export restrictions on encryption products and the divisive issue of Open Systems Interconnection (OSI).8 At the time, interoperability between different vendors' computer networking systems was problematic. OSI, promoted by the International Organization for Standardization (ISO), was an international standards effort supported by numerous governments, and OSI protocols were in contention to become the standard for interconnecting diverse networking environments. Even the United States, in 1990, mandated that US Government-procured products conform to OSI protocol specifications.9 Yet OSI was incompatible with TCP/IP, the wider collection (suite) of protocols in which IPv4 was embedded, and the question of global adoption remained unsettled. OSI protocols had limited deployment relative to TCP/IP, but had the backing of international governments, the US National Institute of Standards and Technology (NIST), and increasing investment by prominent vendors such as Digital Equipment Corporation (DEC). TCP/IP, on the other hand, was the working set of protocols supporting the Internet, was increasingly being used in private corporate networks, had the backing of the Internet's technical community, and had well-documented specifications, active standards institutions, and working products. Within IAB deliberations, the issues of OSI and internationalization existed alongside concerns over Internet address space constraints. These issues surfaced together in a 1991 meeting attended by 23 prominent Internet technical contributors including Vinton Cerf, Bob Braden, Jon Postel, and Robert Hinden.10 The congregation was later described as `spirited, provocative, and at times controversial, with a lot of soul-searching over questions of relevance and future direction.'11 MIT's David Clark commenced the meeting with a presentation identifying problem areas in the Internet. The first area addressed the multiprotocol question of whether the Internet should support both TCP/IP and OSI protocols, a question Clark phrased as `[m]aking the problem harder for the good of mankind.'12 Clark identified a conflict between fulfilling technical requirements expeditiously and taking the time to incorporate OSI protocols within the Internet's architecture. He emphasized that any potential top-down mandates would not be as efficacious as grassroots approaches centred on working code. Other issues included the impact of the Internet's expansion and growing commercialization on routing and addressing architectures. The group generally failed to reach consensus on architectural directions, but the IAB decided to convene again in June for a three-day `Architecture Retreat', to attempt to achieve some consensus on the Internet's technical and policy directions. The promised retreat included 32 individuals from the IAB, the IETF-derived Internet Engineering Steering Group (IESG), and some guests. These individuals represented universities, research institutions, corporations, and the United States Government.
Five IAB members, including Clark and Cerf, published the outcome of the retreat as an informational RFC in December of 1991. This document, `Towards a future Internet architecture', outlined a blueprint for development over the next five to ten years, seeking discussion and comments from the Internet community. An uncontested assumption was that the Internet faced an inevitable problem of address space exhaustion: `The Internet will run out of the 32-bit IP address space altogether, as the space is currently subdivided and managed.'13 This possibility, along with concerns about the burdens growth would place on the Internet's routing functionality, was identified as the most urgent technological problem confronting the Internet. Rather than initiate incremental changes to address presumed address scarcity, the group believed it should ultimately replace the current 32-bit global address space.

DEFINING THE INTERNET
The participants identified some initial possibilities for extending the Internet address space. One option was to retain the 32-bit address format, but eliminate the requirement of global uniqueness for each address: instead, different Internet regions would be formalized, with each address available for use in multiple regions, and gateways automatically translating addresses as information traversed the boundaries between them. An alternative was to maintain global uniqueness but expand the Internet address size from 32 to, for instance, 64 bits. Simultaneously, with international pressure to adopt OSI protocols as a universal computer networking standard looming, those involved in the Architecture Retreat were obliged to consider whether the Internet should offer multiple protocol options, and whether the IAB should mandate certain protocols and, if so, which. Such questions encouraged a focus on what `the Internet' was (or might be). The technologists tackling these questions were based in the United States, and had been in control of Internet architectural directions for, in some cases, twenty years. They commented as follows:

The priority for solving the problems with the current Internet architecture depends upon one's view of the future relevance of TCP/IP with respect to the OSI protocol suite. One view has been that we should just let the TCP/IP suite strangle in its success, and switch to OSI protocols. However, many of those who have worked hard and successfully on Internet protocols, products, and service are anxious to try to solve the new problems within the existing framework. Furthermore, some believe that OSI protocols will suffer from versions of many of the same problems.14

The participants in the Architecture Retreat drew a sharp demarcation between the Internet as a communications system, and the Internet as a community of people and institutions.
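The `regions with translating gateways' option described above can be sketched in a few lines of Python. This is a hypothetical illustration (the region names, addresses, and mapping are invented, not drawn from the retreat documents): the same 32-bit address is reused in different regions, and a gateway rewrites it into a distinct globally routable address at the boundary.

```python
# Hypothetical sketch of the 'regions + translating gateways' option:
# global uniqueness is dropped, so the same address can appear in more
# than one region, and a boundary gateway translates it on the way out.
translation_table = {
    ("region-A", "10.1.2.3"): "192.0.2.17",  # region-local -> routable
    ("region-B", "10.1.2.3"): "192.0.2.18",  # same local address, other region
}

def translate(region: str, local_addr: str) -> str:
    """Rewrite a region-local address as traffic crosses the region boundary."""
    return translation_table[(region, local_addr)]

print(translate("region-A", "10.1.2.3"))  # 192.0.2.17
print(translate("region-B", "10.1.2.3"))  # 192.0.2.18
```

Under this scheme only the gateways' translation tables require coordination, a design later echoed in network address translation (NAT).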
Defining the Internet with regard to what they termed a `sociological' description (`a set of people who believe themselves to be part of the Internet community') was deemed ineffective.15 Having committed to defining `the Internet' architecturally, the group crafted a universal description of the Internet that served to maintain the status quo. Historically, noted the group, Internet connectivity had been defined by Internet Protocol connectivity. Those using IP were on the Internet, and those using another protocol were not. If someone could be `PINGed' (reached using the standard connection test tool for IP), they were on the Internet; if they could not be PINGed, they were not on the Internet. This definition, however, did not necessarily match the networking circumstances of 1991. Many corporations operated large, autonomous TCP/IP networks which, though based on IP, were isolated from the public Internet, accessible only to authorized business partners and customers: nonetheless, because such authorized users could PING each other, they fulfilled the standing criterion for `being part of the Internet'. Conversely, some companies were meaningfully connected to the public Internet, but through gateways which meant they were not using end-to-end IP, and so would be considered `not on the Internet'. In a bid to address this difficulty, the IAB grappled with a definition of the Internet tied to higher-level name directories, rather than IP addresses. Ultimately, though, the 1991 Future Internet Architecture document endorsed the principle that protocol homogeneity (meaning TCP/IP) is:

the magnetic center of the Internet evolution, recognizing that a) homogeneity is still the best way to deal with diversity in an internetwork, and b) IP connectivity is still the best basis model of the Internet (whether or not the actual state of IP ubiquity can be achieved in practice in a global operational Internet).16

This definition of the Internet would retain TCP/IP as the Internet's protocol suite rather than considering the possibility of an OSI protocol supplanting IP. This issue presented obvious institutional control repercussions because including OSI protocols would mean that the standards-setting organization ISO, responsible for OSI protocols, might encroach upon the existing standards-setting institutional structure. For the Internet Protocol to retain dominance as the homogeneous underpinning of the Internet (its `magnetic center'), it would have to meet rapidly expanding international requirements, in particular for more Internet addresses. At the November 1991 IETF meeting, held at Los Alamos National Laboratory, a new working group formed to examine the address depletion and routing table expansion issues and make recommendations.17 The group, known as the ROAD group, for ROuting and ADdressing, issued specific recommendations for the short term but failed to reach consensus about a long-term solution. The IESG synthesized the ROAD Group's recommendations and forwarded an action plan to the IAB for consideration. Part of the IESG's recommendation was to issue a call for proposals for protocols to solve the addressing and routing problems.
Some of the options discussed in 1992 included: `garbage collecting', the reclaiming of assigned but unused Internet addresses; segmenting the Internet into areas connected by gateways; or replacing IP with a new protocol that provided a larger address space. Some of these options never gained traction. Plans for other options proceeded, including short-term measures to conserve addresses and the development of a new Internet protocol. As the IESG chair summarized, `our biggest problem is having far too many possible solutions rather than too few.'18

THE IPV7 `FIASCO' OF 1992
In 1992, the Internet's technical community experienced an institutional controversy within the context of Internet internationalization, discordance about OSI versus TCP/IP, projected address space exhaustion, the growing economic importance of the Internet, and the identified need for a new Internet protocol. The IAB was at this time seeking to `move towards internationalization of IETF and IAB activities,'19 while several IETF working groups were developing alternative protocol solutions to address the issues of IP address space exhaustion and routing table growth. Also in this year, a group of Internet technology veterans led by Vinton Cerf established a new Internet governance institution, the Internet Society (ISOC), a non-profit, membership-oriented institutional home and funding source for the IETF. One impetus for the establishment of this new institutional layer was the emerging issue of liability. Would IETF members face lawsuits from organizations or institutions that believed Internet standards selection caused them injury? Other drivers included a decline in US Government funding of Internet standards activities, and an increase in commercialization and internationalization of the Internet. The ISOC board would consist of fourteen trustees, with greater international representation than previous Internet oversight groups, and paying corporate and individual members. Discussions within ISOC mirrored the IAB in highlighting the group's desire for greater international involvement, including a more formal relationship with the International Telecommunication Union (ITU) and the establishment of ISOC chapters around the world.20 Many characteristics of this new organization differentiated ISOC from traditional Internet standards activities within the IETF: links to international standards bodies, greater international participation, direct corporate funding, and formal paying membership. At the inaugural ISOC trustee meeting, held at ISOC's first annual International Networking Conference in Kobe, Japan, Lyman Chapin (the newly-appointed IAB chair, and also an ISOC trustee) presented a new IAB charter `which would accomplish the major goal of bringing the activities of ISOC and the current Internet Activities Board into a common organization.'21 The IAB was to be renamed the Internet Architecture Board, and formally constituted as a technical advisory group of ISOC.
The organizational integration of the IAB with the new incorporated, commercially- and internationally-funded entity would, it was intended, provide greater legal status and legitimacy for the group. Yet this new direction also brought the IAB into tension with the established ethos concentrated in its standardizing body, the IETF. One particularly controversial decision by the `new' IAB was to spark a conflagration which led members of the technical community to solidify the Internet's architectural direction, restructure the Internet's policymaking structure, and articulate the IETF's overarching philosophy and values. At the June 1992 Kobe meeting, the IAB reviewed the findings and recommendations of the ROAD group, and a similar report from the IESG, on the problem of Internet address space exhaustion and routing table expansion. The IAB referred to the problem as `a clear and present danger' to the Internet and felt the short-term recommendations of the ROAD Group, while sound, must be accompanied by an expectation that the IETF `aggressively pursue' plans for a successor to IPv4. The numerals 5 and 6 were (it was believed) in use for existing research networks, so the hypothetical successor was dubbed `IP version 7' (IPv7).22 Quite atypically, the IAB took the top-down step of proposing a specific protocol for this purpose. This proposal, `TCP and UDP with Bigger Addresses' (TUBA), would leave unchanged the `higher-level' protocols from the TCP/IP suite, responsible for directing the rate and ordering of data transmission between the two ends of a connection. IP itself, however, would be phased out wholesale, to be replaced by protocol structures from the Connectionless Network Protocol (CLNP), a standard ISO had specified as part of the OSI protocol suite, and which offered a much greater address length (up to 160 bits).23 Through this partial commitment to OSI, the IAB's decision promoted greater internationalization of the standards process. Several of its members were already involved in OSI integration. Ross Callon, an MIT and Stanford graduate, worked for DEC on `issues related to OSI-TCP/IP interoperation and introduction of OSI in the Internet.'24 Callon had previously worked on OSI standards at Bolt, Beranek and Newman (BBN); Lyman Chapin, the IAB's chairman at the time of the decision, who was then at BBN and himself involved in OSI work, noted the irony of using the TCP/IP-based Internet to communicate the formal ratification of an OSI-related standard. Another IAB member, Christian Huitema, had also participated in OSI developments. Vinton Cerf, Chapin's predecessor in the chair and among the most influential of Internet advocates, noted that `with the introduction of OSI capability (in the form of CLNP) into important parts of the Internet... a path has been opened to support the use of multiple protocol suites in the Internet.'25 The object, then, was to use TUBA as a lever towards greater integration of internationally-sanctioned protocols into the Internet environment. Rank-and-file participants in the primarily American IETF working groups were outraged.
Their dismay surfaced immediately on the Internet mailing lists, and in force at the IETF meeting held the following month. While the IETF mailing lists are noted for the expression of candid opinions, the reaction to the IAB proposal was unusually acrimonious, collectively displaying `shocked disbelief' and alarm at a recommendation which `fails on both technical and political grounds.'26 The following abridged excerpts from the publicly available IETF mailing list archives (2-7 July 1992) reflect the IETF participants' diverse but equally emphatic responses to the IAB recommendation:

Now, under ISO(silent)C auspices, the IAB gets to hob-nob around the globe, drinking to the health of Political Correctness, of International networking and poo-poo'ing [sic] its US-centric roots.

I view this idea of adopting CLNP as IPv7 as a disasterous [sic] idea... adopting CLNP means buying into the ISO standards process... As such, we have to face the painful reality that any future changes that the Internet community wishes to see in the network layer will require ISO approval too.

Procedurally, I am dismayed at the undemocratic and closed nature of the decision making process, and of the haste with which such a major decision was made.

The IAB needs to explain why it believes we can adopt CLNP format and still have change control.27

The IAB's proposal was controversial for several reasons. Its recommendation had circumvented procedural tradition within the standards-setting community: standards percolated up from the working groups, via the IESG, to the IAB, not the converse. Recommendations usually involved a period of public (i.e., IETF participants') review and comment. Some IETF participants suggested that the IAB had lost the legitimacy it had once possessed through being composed of experienced veterans from the early days of DARPA-funded networking, and that new IAB members were often not involved in direct coding or standards development. Others argued that vendors, especially DEC, with its heavy investment in OSI, had undue influence in standards selection. The greatest concerns related directly to the competition between the IETF and ISO as standards bodies and issues of standards control. Some IETF participants believed that adoption of an OSI-related standard meant relinquishing administrative and technical control of protocols to ISO. Would the IETF still have `change control'? IETF participants feared that protocol development would subsequently be subjected to the ISO's lengthy, top-down, and complex standards development procedures. From a technical and procedural standpoint, some questioned why there was no comparison to the other IPv4 alternatives IETF working groups were already developing. Many recommended that the community examine other alternatives for the new Internet protocol, rather than uniformly pursuing the OSI-inspired TUBA. The backlash over the IAB's recommendation was multifaceted, involving concerns about CLNP's association with ISO, questions about whether CLNP was the best alternative, concern about the influence of network equipment and software vendors, and alarm about what was interpreted as the IAB's top-down procedural manoeuvre.
These concerns pervaded deliberations at the twenty-fourth IETF meeting convened the following month in Cambridge, Massachusetts.28 Participating in the more than 80 technical working groups held during the IETF meeting were 687 attendees, a 28% increase over the IETF's previous meeting. Technical and procedural challenges associated with Internet growth were the predominant topics of discussion, and the culmination of the meeting was a plenary session delivered by MIT's David Clark. Within the IETF community, Clark was respected as a long-time contributor to the Internet's architecture; he had chaired the ICCB beginning in its inaugural year (1979), and had also previously served as the IAB's chair. Clark's plenary presentation, `A cloudy crystal ball: visions of the future,' reflected the angst IETF working group participants felt about the IAB's recommendation, and ultimately articulated the philosophy that would become the IETF's de facto motto. Clark described the IAB's current role as `sort of like the House of Lords,' advising and consenting to the IESG's proposals, which themselves should percolate up from the IETF working group deliberations. Clark suggested that more checks and balances would be advantageous. An enduring legacy of Clark's plenary presentation was an articulation of the IETF's core philosophy:

We reject: kings, presidents and voting.
We believe in: rough consensus and running code.29

In particular, the phrase `rough consensus and running code' would become the IETF's operating credo. The standards community, according to Clark, had traditionally succeeded by adopting working, tested code, rather than proposing top-down standards and making them work. The message was clear: reject what was considered the IAB's top-down mandate for a new Internet protocol. The IETF's resistance to the OSI-related TUBA was also evidenced by presentations and discussions on competing, non-OSI proposals to expand the Internet address space, as will be discussed below. The IAB formally withdrew its draft at the IETF conference, which concluded with several outcomes: 1) the IETF would continue pursuing alternative proposals for the next-generation Internet protocol; 2) the Internet's core philosophy of working code and rough consensus would remain intact; 3) the standards decision process and institutional roles required examination and revamping; and 4) the rank-and-file IETF participants had asserted a grassroots counterbalance to the influence of the IAB, the influence of (some) vendors in the standards process, and the government- and vendor-influenced momentum of the OSI protocols. One of the specific institutional outcomes of the Kobe affair and subsequent discussion, on the IETF mailing lists and at the Cambridge meeting, was a consensus decision on a procedure for selecting members of both the IAB and the IESG, which presented standards recommendations from the IETF-based standards-setting community to the IAB.
Immediately following the IETF meeting, Cerf, still ISOC president and responsible for the selection of many IAB and IESG members, called for a new working group to examine issues of Internet leader selection, as well as standards processes.30 Steve Crocker headed the working group, designated the Process for Organization of Internet Standards (POISED) Group, whose specific charter was to assess Internet standards procedures, IAB responsibilities, and the relationship between the IAB and the IETF/IESG. Some of the working group's conclusions included term limits for IAB and IESG members and a selection process by committees and with community input.31 An IETF nomination committee would consist of seven members, chosen randomly from a group of IETF volunteers, and one non-voting chair selected by the Internet Society.32 Borrowing a metaphor from the broader 1990s political discourse, Frank Kastenholz summarized the course of events on the IETF mailing list: `the New World Order was brought in when the IAB apparently disregarded our rules and common practices and declared that CLNP should be IP6. They were fried for doing that.'33 In short, the IAB recommendation and subsequent controversy resulted in a revamping of institutional power relations within the standards-setting community, an articulation of institutional values, and a demonstration of IETF institutional resistance to adopting any OSI-related protocols within the Internet's architecture.

ENGINEERS, CORPORATE USERS AND MARKET ECONOMICS
Discussions about the protocol to replace IPv4, now commonly referred to as Next Generation IP (IPng), dominated the IETF mailing lists and the following IETF meeting held in Washington, D.C., in November 1992. The meeting commenced with technical presentations on the four rival proposals then in contention to become IPng. TUBA, the OSI-friendly focus of the Kobe controversy, remained an option, but was challenged by three alternatives. The `P' Internet Protocol (PIP), championed by Bellcore's Paul Tsuchiya, would be a completely new protocol, developed within the Internet's standards-setting establishment. Steve Deering of Xerox PARC announced the Simple Internet Protocol (SIP), which would extend the IP address size from 32 bits to 64 bits. Sun Microsystems' Bob Hinden introduced IP Address Encapsulation (IPAE): this was actually a transition mechanism from IPv4 to a new Internet protocol, assumed to be SIP. The competing proposals, especially SIP and TUBA, were not radically different from a technical standpoint. The key distinguishing factor was the question of who would be developmentally responsible for the architectural underpinning of the Internet: the established participants within the Internet's traditional standards-setting format, or ISO. Notably, Hinden's presentation, in emphasizing how IPAE differed from TUBA, stressed as a key selling point that it would retain existing semantics, formats, terminology, documentation, and procedures, and would have `[n]o issues of protocol ownership.'34 At the following IETF gathering (July 1993) in Amsterdam, the first ever held outside of North America,35 a Birds of a Feather (BOF) group formed to discuss the decision process for the IPng selection. A BOF group is similar to an IETF working group, but has no charter, convenes once or twice, and often serves as a preliminary gauge of interest in forming a new IETF working group.
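The address lengths at stake among the rival proposals imply vastly different address spaces. A brief Python comparison (an editorial illustration, using the bit lengths given in the text: 32 for IPv4, 64 for SIP, and up to 160 for the CLNP format underlying TUBA):

```python
# Address lengths of the protocols discussed in the text; each additional
# bit doubles the space, so the differences are astronomical.
proposals = [("IPv4", 32), ("SIP", 64), ("CLNP/TUBA", 160)]
for name, bits in proposals:
    print(f"{name:10s} {bits:3d} bits -> about {2 ** bits:.3e} addresses")
```

Even SIP's 64 bits multiplies the IPv4 space by more than four billion; CLNP's maximum length dwarfs both.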
Two hundred people attended the IPng Decision BOF, also called IPDecide; consensus opinion suggested that the IETF needed to take decisive action to select IPng. Participants suggested that the marketplace already had an overabundance of protocol choices, and that `[t]he decision was too complicated for a rational market-led solution.'36 CERN's Brian Carpenter doubted that the general public was aware that solutions to the problem were being discussed or even that a problem existed. He believed it would take several years for the market to understand the problem, and agreed with those who suggested `we still need Computer Science Ph.D.s to run our networks for a while longer.'37 The standards-setting community accordingly created a new ad hoc working group to select IPng. The new working group was directed by two Internet veterans, Allison Mankin of the Naval Research Laboratory, and Scott Bradner of Harvard University's Office of Information Technology: both were members of the IESG. In December 1993, Mankin and Bradner posted as an RFC a formal `white paper' solicitation of proposed specifications for IPng.38 The document invited any interested parties to recommend requirements IPng should meet and to suggest evaluation criteria which should determine the ultimate selection. Submissions would, it was made clear, become publicly available as informational RFCs, and the IPng Working Group would take note of this input during the selection process. By this point, several potential sets of requirements for the new protocol were already circulating through the standards community. A comparison of available IPng proposals by Tim Dixon, secretary of Réseaux Associés pour la Recherche Européenne (RARE), the European Association of Research Networks, had been republished as an informational RFC in May 1993. The report concluded that the PIP, TUBA, and SIP proposals had minimal technical differences and that the protocols were too similar to evaluate on technical merit; a similar point had been raised by the IPDecide BOF in Amsterdam.39 Instead, Dixon's report suggested a political rationale for a formal selection process: `the result of the selection process is not of particular significance, but the process itself is perhaps necessary to repair the social and technical cohesion of the Internet Engineering Process.' Dixon highlighted the ongoing tension over OSI, suggesting that TUBA faced a `spurious ``Not Invented Here'' Prejudice'.
The new protocol, he warned, ironically faced the danger of what many perceived as the shortcomings of the OSI standards process: `Slow progress, factional infighting over trivia, convergence on the lowest common denominator solution, lack of consideration for the end-user.'40 The IETF BOF group raised another rationale for conducting a formal protocol evaluation process, citing the possibility of `potential legal difficulties if the IETF appeared to be eliminating proposals on arbitrary grounds.'41 Bradner and Mankin's white-paper solicitation arrived amid this context of technically similar proposals, ongoing anxiety about OSI, concerns about possible legal repercussions of the protocol selection, and rapid global Internet growth. It received 21 responses. Three submissions came from companies in industries considered, at the time, likely to become future `information superhighway' providers: cellular telephony, electric power, and cable television. Other submissions addressed specific military, security, or corporate user requirements. Several submissions were recapitulations of the actual protocol proposals. The established Internet standards-setting community in general was united on at least one point: `the IETF should take active steps toward a technical decision, rather than waiting for the ``marketplace'' to decide.'42 Yet some of the white-paper responses reflected market requirements of large corporate Internet users, which comprised a major marketplace sector of an increasingly commercialized Internet industry. Large
Laura DeNardis
corporate Internet users did not uniformly agree with the need for a next generation Internet Protocol. Historian of technology Thomas Hughes suggests that new technology advocates err severely in underestimating the inertia and tenacity of existing technological systems.43 Once developed and installed, technological systems acquire conservative momentum. In the case of a new Internet protocol, United States corporate users represented a conservative foundation for IPv4. US corporate Internet users generally had ample IP addresses, and substantial investment in IPv4 capital and human resources. Boeing Corporation's response to the white paper solicitation sought to summarize the U.S. corporate user view: `Large corporate users generally view IPng with disfavour.'44 Boeing suggested that Fortune 100 corporations, then heavy users of internal TCP/IP networks, viewed the possibility of a new protocol as `a threat rather than an opportunity.'45 In the early 1990s, large US corporations primarily operated mixed protocol network environments, rather than a single network protocol connecting all applications and systems. The prevailing trend was to reduce the number of network protocol environments rather than expand them: they possessed, in the words of the Boeing response, `a basic abhorrence to the possibility of introducing ``Yet Another Protocol'' (YAP).'46 Furthermore, the Boeing paper suggested that large corporate users could not identify with the issue of address depletion. According to Internet address distribution records, at the time, Boeing controlled 1.3 million unique addresses.47 Large American corporate Internet users generally had sufficient, if not superfluous, Internet address reserves; only a new `killer app' requiring IPng would motivate them to replace their current implementations with a new Internet protocol. 
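The arithmetic behind such classful address holdings is easy to verify. A minimal sketch, using the block counts reported in the period's address distribution records for Boeing (20 Class B and 80 Class C blocks, per the figures cited in the notes):

```python
# Classful IPv4 block sizes: a Class B block spans 2**16 host
# addresses, a Class C block spans 2**8.
CLASS_B_SIZE = 2 ** 16  # 65,536 addresses
CLASS_C_SIZE = 2 ** 8   # 256 addresses

# Block counts reported for Boeing in the Internet address
# distribution records of the early 1990s.
total = 20 * CLASS_B_SIZE + 80 * CLASS_C_SIZE
print(f"{total:,} addresses")  # 1,331,200 -- roughly the 1.3 million cited
```

The 20 Class B blocks alone account for over 1.3 million addresses, which is why large corporate users saw little urgency in the address-depletion argument.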
The extent of conservative momentum behind the IPv4 standard was reinforced by a white-paper response from the computing giant IBM: `IPv4 users won't upgrade to IPng', it stated, `without a compelling reason.'48 Similarly, BBN, the developer of the original switching nodes which had paved the way for Internet communication, noted that the IPng effort was `pushing' network technology. The BBN response stressed that marketplace demands should instead drive the development of IPng, and questioned whether IPv4 users would ever have a compelling justification to upgrade to a new protocol.49 In contrast, companies without significant IPv4 investment, positioned to profit from the availability of more addresses or the development of new products and services, embraced the idea of a new protocol. For example, cable companies envisioned opportunities to become new entrants into the Internet Service Provider market and providers of converged video, voice, and data services. This interest was reflected in Time Warner Cable's response to the solicitation, `IPng Requirements: A Cable Television Industry Viewpoint.'50 The response touted the potential for cable television networks to become the dominant platform for delivery of interactive digital services supporting integrated voice, video, and data. At the time, only a small percentage of American consumers had home
Internet access and there was no interactive network combining video and data transmissions. The purpose of the response was to position Time Warner, and the cable industry generally, as dominant future providers of converged `information superhighway' services, and to embrace IPng as a potential protocol supporting broadband interactive cable service. The cellular industry was another sector not involved in Internet services, but hoping to become competitive through the potential of converged voice and data services. Mark Taylor, the director of system development for McCaw Cellular Communications, Inc., responded on behalf of the Cellular Digital Packet Data (CDPD) consortium of cellular providers. The primary requirements of the digital cellular consortium were mobility, the ability to `operate anywhere anytime' and scalability, meaning `IPng should support at least tens or hundreds of billions of addresses.'51
THE SELECTION: ISO VERSUS IETF?
Upon completion of the white paper solicitation process, Bradner and Mankin would make the final recommendation to the IESG for approval. Additionally, the IESG established an `IPng Directorate' to function as a review body for the proposed alternatives which already existed prior to the public solicitation. The IPng Directorate, over the course of the selection process, included the following individuals: J. Allard, Microsoft; Steve Bellovin, AT&T; Jim Bound, Digital; Ross Callon, Wellfleet; Brian Carpenter, CERN; Dave Clark, MIT; John Curran, NEARNET; Steve Deering, Xerox PARC; Dino Farinacci, Cisco; Paul Francis, NTT; Eric Fleischmann, Boeing; Robert Hinden, Sun Microsystems; Mark Knopper, Ameritech; Greg Minshall, Novell; Yakov Rekhter, IBM; Rob Ullmann, Lotus; and Lixia Zhang, Xerox.52 This group represented numerous technical constituencies, spanning routing, security, and protocol architectures; in other important senses, however, it lacked diversity. The majority of members represented American software vendors or service providers: these corporations would presumably incorporate the new standard, once selected, into their products, and therefore had an economic stake in the outcome. There was no direct government representation on the IPng Directorate, no individual end-users, and only one large corporate end-user. By the final evaluation stage, three proposals were in contention to become the next generation Internet Protocol: TUBA, SIPP (Simple Internet Protocol Plus), and CATNIP (Common Architecture for the Internet). All would provide larger address fields, allowing for substantially more addresses, and each, if adopted, would become a universal protocol. Historian Janet Abbate suggests that `technical standards are generally assumed to be socially neutral. . .
but have far-reaching economic and social consequences, altering the balance of power between competing businesses or nations and constraining the freedom of users.'53 Although the proposals had technical differences, they were distinguished above all
by who was behind the development of the standard, and whether it would preserve IP or discard it. Protocol ownership and control remained a significant concern. The SIPP proposal was a collaborative merging of the earlier proposals IPAE, SIP and PIP, championed by longstanding IETF participants Steve Deering of Xerox PARC and Bob Hinden of Sun Microsystems. Sun was closely associated with TCP/IP environments, and had a vested interest in maintaining IP as the dominant protocol. SIPP was the only proposal to preserve IP, but would expand the address size from 32 bits to 64 bits. CATNIP would be a completely new protocol, intended to provide a convergence between the Internet, ISO protocols, and Novell products. The CATNIP proposal, authored by Robert Ullman of Lotus Development Corporation and Michael McGovern of Sunspot Graphics, was explicit in its endorsement of ISO standards and its assertion that convergence with ISO protocols was an essential requirement. CATNIP would use the ISO-specified NSAP address space; so would TUBA, which, as before, specified the OSI protocol CLNP, and represented an even greater endorsement of ISO. The IPng Directorate considered CATNIP inadequately specified, so the decision lay ultimately between SIPP, an extension of the prevailing IETF Internet Protocol, and the ISO/OSI approach of TUBA. The likelihood that the IP approach (offered by SIPP) would be selected permeated several aspects of the selection's lexicon and process. The very name of the future protocol – `IP next generation' – reflected the initial assumption that the new protocol would be an extension of the existing Internet Protocol, IP. Additionally, the IAB's 1991 `Towards the Future Internet Architecture' document (RFC 1287) had reaffirmed IP as the one defining architectural component of `being on the Internet'. If a different network layer were selected, in the minds of many, something fundamental to the Internet's being would be lost.
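The scale of the address fields under discussion can be computed directly. A short illustration comparing IPv4's 32-bit field, the 64-bit field proposed by SIPP, and the 128-bit field eventually adopted:

```python
# Number of distinct addresses provided by each field width
# debated during the IPng process.
for bits in (32, 64, 128):
    print(f"{bits:>3}-bit field: {2 ** bits:.3e} addresses")
# A 32-bit field yields about 4.3 billion addresses; a 128-bit
# field yields about 3.4 x 10**38.
```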
The overall selection process, and even the specific evaluation criteria, also featured an element of strategic demarcation of `technical' and `political' considerations. Bradner and Mankin recognized and acknowledged the political dimension, characterizing it as pressure for convergence with ISO versus pressure to resist ISO standards and retain protocol control within the IETF. As they described in their IPng Area Status Report at the IETF meeting in Seattle on 28 March 1994, the pressure for convergence with the ISO is something the Working Group should understand, yet must `dismiss as not a technical requirement.'54 The process, then, exhibited some asymmetry about what was considered political, with positions advocating technical convergence with the ISO standard deemed political but positions against convergence with the ISO standard (i.e. preserving IP) considered technical. The 1991 Internet architecture document had likewise characterized `powerful political and market forces' behind the introduction of the OSI suite, and this sentiment appeared to persist years later.55 The `technical' evaluation criteria also featured a subjective element. For example, the promoted criterion of
`simplicity' presupposes acceptance of the aesthetic judgment that simple protocols are preferable to complex protocols. This criterion also perhaps contradicts other technical criteria such as supporting a diversity of network topologies, operating over a range of media, and supporting a variety of transmission speeds. The factor which the IPng Directorate acknowledged as `political' related to control over the standard. The IETF wanted change control – managerial authority over future change – even if they selected the OSI-based protocol, TUBA. This issue represented an area of discord even within the TUBA Working Group, with some arguing that only ISO should control the standard and others believing the IETF should have authority to modify it. This issue permeated deliberations within the working groups and the IPng Directorate, was reflected in the mailing list forums, and even appeared in draft proposals issued by competing groups. For example, the proposed CATNIP alternative included the following statement:
The argument that the IETF need not (or should not) follow existing ISO standards will not hold. The ISO is the legal standards organization for the planet. Every other industry develops and follows ISO standards. . . ISO convergence is both necessary and sufficient to gain international acceptance and deployment of IPng.56
On the opposing side, one IETF participant declared that `the decisions of ISO are pretty irrelavent [sic] to the real world which is dominated by IETF and proprietary protocols.'57 This question of whether change control would be vested in the ISO or IETF remained a significant factor in the evaluation process. In July 1994, at the opening session of the thirtieth meeting of the IETF in Toronto, Bradner and Mankin presented their recommendation that SIPP, with some modifications, become the basis for IPng.
More than 700 people attended – a high attendance rate attributable to excitement about the protocol announcement and an increase in press representation.58 The numeral 6 (which, it had emerged, had been incorrectly assumed unavailable at the time of the `IPv7' discussion) had been assigned to IPng, so the new protocol would be named IPv6. Mankin and Bradner recounted how the IPng Directorate had identified major technical flaws in each proposal. CATNIP, as noted above, had been considered insufficiently developed. The general technical assessment suggested `both SIPP and TUBA would work in the Internet context', despite technical weaknesses in each approach; yet the assessment of TUBA was also stressed to be `deeply divided.'59 The Directorate identified some technical weaknesses in the CLNP protocol, the centrepiece of the TUBA proposal, but division also focused on the question of IETF ownership of the protocol. Two of the comments Mankin and Bradner cited in their presentation reflected this division: `TUBA is good because of CLNP. If not CLNP, it is a new proposal. . . [i]f TUBA becomes the IPng, then the IETF must own TUBA.'
Christian Huitema, an IAB member involved in the SIPP Working Group, later summarized his assessment of the reason TUBA was not selected: `In the end, this proposal failed because its proponents tried to remain rigidly compatible with the original CLNP specification.'60 Yet the TUBA proposal was seemingly caught in a lose-lose situation. If the IETF modified CLNP, some argued, this would negate the advantage of CLNP's installed base, and would diminish the possibility of meaningful convergence between ISO and IETF standards. If the IETF could not modify CLNP, it would lose control of the Internet. Accordingly, SIPP was approved by the IESG, and IPv6 became a `proposed standard,' in accordance with the IETF's conventional nomenclature, on 17 November 1994, concluding two years of deliberations over the choice of a new protocol. The address length was eventually settled as 128 bits, increasing the number of available addresses from approximately 4.3 billion under IPv4 to approximately 3.4 × 10^38. The selection retained IP, though modified, as the dominant network layer protocol for the Internet and settled the issue of who would control the next generation Internet protocol. The final rejection of the OSI-based protocol, CLNP, solidified the position of the IETF as the standards body responsible for the Internet's architectural direction. Bradner and Mankin closed their IETF plenary presentation recommending IPv6 with the following two quotations and a concluding sentiment:
`In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away.' – Antoine de Saint-Exupéry
`Everything should be made as simple as possible, but not simpler.' – A. Einstein
(IETF work is trying to find the right understanding of the balance between these two goals. We think we have done that in IPng.)61
CONCLUSION
The issue of technical protocol selection was also an issue of institutional power selection in the context of Internet globalization. Examining IPv6 against its discarded and historically overlooked alternatives reveals tensions of various kinds: those between established vendors, such as DEC, and newer entrants like Sun Microsystems; the Internet's grassroots rank-and-file establishment versus newer institutional formations like the Internet Society; trusted and familiar insiders versus newer participants; and the American-dominated IETF versus the more international ISO. The OSI protocols were backed by most Western European governments, the United Nations, influential vendors and user organizations, and were consistent with United States Governmental directives. The selection of IPv6, an expansion of the prevailing IPv4 protocol, solidified and extended the position of the Internet's traditional standards-setting establishment to control the nature and direction of the Internet itself.
The selection of IPv6 occurred outside of the realm of market economics, with the Internet's technologists describing the protocol selection as too complex for markets and suggesting that corporate users, many with ample IP addresses, were not even aware of the presumptive problem of address space exhaustion. The IPv6 selection process seemed to contain an inherent contradiction. The technical community was adamant about eliminating `sociological' considerations from what they considered a purely technical protocol decision. Years earlier, the IAB had drawn a demarcation between the Internet as a communications system and the Internet as a community of people: only its technical, architectural constitution could define the Internet. Yet the outcome of the IPng selection process appeared to address `the Internet', at least in part, as the community of people who would either retain or gain control of its architecture. In architectural decisions on the next generation Internet Protocol, a distinct consideration seems to have been the retention of the IAB-IESG-IETF institutional structure (and personnel), and avoidance of relinquishing control to a more international standards body. Despite the Internet standards community's strategy of excising sociological considerations from its architectural decisions, the history of IPv6 indicates that the definition of the Internet, ultimately, is people.
ACKNOWLEDGMENTS
A special thanks to colleague and friend Janet Abbate for her invaluable contributions to this research.
GLOSSARY
BBN  Bolt, Beranek and Newman. Commercial technology corporation, foundationally involved in networking innovations which led to the Internet.
CATNIP  Common Architecture for the Internet. Unsuccessful, OSI-friendly 1994 IPng proposal.
CLNP  ConnectionLess Network Protocol. Part of the OSI group of protocols.
DARPA  Defense Advanced Research Projects Agency. US Department of Defense agency responsible for funding and development of networking innovations which became the Internet.
DEC  Digital Equipment Corporation. Commercial hardware vendor.
FNC  Federal Networking Council (US). Coordinating group for federal agencies supporting Internet activity.
IAB  Internet Activities Board (1986–1992); Internet Architecture Board (1992–present). Committee with high-level responsibility for Internet engineering and architecture.
IANA  Internet Assigned Numbers Authority. Central body overseeing the assignment of IP addresses.
ICCB  Internet Configuration Control Board. Predecessor to the IAB, 1979–1984.
IESG  Internet Engineering Steering Group. Review body constituted of senior IETF participants.
IETF  Internet Engineering Task Force. Network engineering community which, since 1986, has performed the bulk of Internet standardizing activity through its working groups. Formally overseen by IAB.
IP  Internet Protocol. An addressing and routing standard for exchanging information over the Internet.
IPng  Next Generation IP. Used speculatively to describe the intended long-term successor to IPv4, eventually dubbed IPv6.
IPv4  Internet Protocol version 4. First widely used implementation of IP, dominant since the early 1980s. Concerns over its address space shortcomings prompted the search for a successor.
IPv6  Internet Protocol version 6. Designation given to SIPP following its 1994 endorsement as successor to IPv4.
IPv7  Internet Protocol version 7. Designation used for the hypothetical successor to IPv4 around 1992. `IPv6', which had been assumed unavailable, was later preferred.
ISO  International Organization for Standardization. Global nongovernmental body concerned with industrial standards-setting in general.
ISOC  Internet Society. International governance institution, founded 1992. Provides an institutional home for IAB and IETF.
ITU  International Telecommunication Union. Established telecommunications standardization body, an agency of the United Nations.
MIT  Massachusetts Institute of Technology.
NIST  National Institute of Standards and Technology (US). Nonregulatory governmental standards agency.
OSI  Open Systems Interconnection. ISO-supported networking standards initiative, with an international (and internationalizing) support base. Usually discussed as an alternative to TCP/IP.
RARE  Réseaux Associés pour la Recherche Européenne. Community of European networking organizations, largely pro-OSI.
RFC  Request for Comments. In networking parlance, a proposal announced to the user/developer community (the name indicating its origins in the open peer-review principle) as part of a formalized distribution service. An RFC may be purely informational or may specify proposed techniques, conventions or actions related to Internet standards.
ROAD  Routing and Addressing. IETF working group, formed 1991.
SIPP  Simple Internet Protocol Plus. Proposed successor to IPv4 (1994), combining elements of previous non-OSI approaches. Adopted as IPv6.
TCP  Transmission Control Protocol. Higher-level protocol within TCP/IP: regulates the overall flow and ordering of data transferred between the two ends of a networked connection.
TCP/IP  Internet Protocol Suite (historically, `Transmission Control Protocol/Internet Protocol'). Generality of the collection of networking protocols built on and including IP.
TUBA  TCP and UDP with Bigger Addresses. Proposed successor to IPv4 (1992–4), retaining higher-level elements of TCP/IP but including formats from the OSI-endorsed CLNP as a replacement for IP. IAB-endorsed, but unpopular with rank-and-file IETF participants.
UDP  User Datagram Protocol. An element of TCP/IP, performing a similar role to TCP.
USC-ISI  University of Southern California Information Sciences Institute.
Notes and References
1. K. Alder, `A Revolution to Measure: The Political Economy of the Metric System in France', in M. Norton Wise (ed.), Values of Precision (Princeton, 1995), 39–71.
2. Concerns about possible IP address exhaustion were raised during the Internet Activities Board (IAB) teleconference of April 26 1990 (meeting minutes archived at www.iab.org/documents/iabmins/IABmins.1990-04-26.html) and at the IAB quarterly meeting on 28–29 June 1990 (meeting minutes archived at www.iab.org/documents/iabmins/IABmins.1990-06-28.html).
3. J. Postel (ed.), `DOD Standard Internet Protocol', RFC 760, January 1980, documents the original Internet Protocol specification. See also J. Postel, `Internet Protocol, DARPA Internet Program Protocol Specification Prepared for the Defense Advanced Research Projects Agency', RFC 791, September 1981.
4. S. Bradner and A. Mankin, `The Recommendation for the IP Next Generation Protocol', RFC 1752, January 1995, 4, archived at www.ietf.org/rfc/rfc1752.txt.
5. Internet Architecture Board teleconference minutes, 26 April 1990, archived at www.iab.org/documents/iabmins/IABmins.1990-04-26.html.
6. V. Cerf, `IAB Recommended Policy on Distributing Internet Identifier Assignment and IAB Recommended Policy Change to Internet ``Connected'' Status', RFC 1174, August 1990, 1, archived at http://tools.ietf.org/html/rfc1174.
7. Internet Activities Board, Meeting Minutes, 8–9 January 1991, `Foreward' [sic], archived at www.iab.org/documents/iabmins/IABmins.1991-01-08.html.
8. See A. Russell, ` ``Rough consensus and running code'' and the Internet-OSI standards war', IEEE Annals of the History of Computing, July–September 2006.
9. The United States Federal Information Processing Standards (FIPS) Publication 146-1 endorsed OSI compliant products in 1990. In 1995, FIPS 146-2 retracted this mandate.
10. For a list of meeting attendees, see the meeting minutes archived at www.iab.org/documents/iabmins/IABmins.1991-01-08.html.
11. D.
Clark et al., `Towards the Future Internet Architecture', RFC 1287, December 1991, 2, archived at www.ietf.org/rfc/rfc1287.txt.
12. Internet Activities Board, Summary of Internet Architecture Discussion, 8–9 January 1991, Appendix A, David Clark's presentation, archived at www.iab.org/documents/iabmins/IABmins.1991-01-08.arch.html.
13. Clark et al., op. cit. (11), 4.
14. Clark et al., op. cit. (11), 2.
15. Clark et al., op. cit. (11), 9.
16. Clark et al., op. cit. (11), 10.
17. The formation and objectives of the ROAD Group are described in the Proceedings of the Twenty-Second Internet Engineering Task Force, Los Alamos National Laboratory, Santa Fe, New Mexico, 18–22 November 1991.
18. P. Gross and P. Almquist, `IESG Deliberations on Routing and Addressing', RFC 1380, November 1992.
19. Internet Activities Board, minutes of 7 January meeting, 1992, archived at www.iab.org/documents/iabmins/IABmins.1992-01-07.html.
20. Internet Activities Board, op. cit. (19).
21. Internet Society, minutes of Annual General Meeting of the Board of Trustees, Kobe, Japan, 15 June, 1992, archived at www.isoc.org/isoc/general/trustees/mtg01.shtml.
22. Internet Activities Board, meeting minutes from the INET 92 conference, Kobe, Japan, 18–19 June, 1992, archived at www.iab.org/documents/iabmins/IABmins.1992-06-18.html.
23. R. Callon, `TCP and UDP with Bigger Addresses (TUBA), A Simple Proposal for Internet Addressing and Routing', RFC 1347, June 1992.
24. According to RFC 1336, `Who's Who in the Internet, Biographies of IAB, IESG and IRSG Members', published in May 1992.
25. RFC 1336, op. cit. (24).
26. Jon Crowcroft, posting on the IETF mailing list, 2 July 1992; Marshall Rose, posting on the IETF mailing list, 7 July 1992.
27. Postings of Jon Crowcroft, 2 July; Craig Partridge, 2 July; Marshall Rose, 7 July; Steve Deering, 2 July; Deborah Estrin, 3 July 1992.
28. According to M. Davres, C. Clark and D. Legare (eds), Proceedings of the Twenty-Fourth Internet Engineering Task Force, MIT, Cambridge, Massachusetts, 13–17 July 1992.
29. From David Clark's plenary presentation, `A Cloudy Crystal Ball, Visions of the Future': Davres, Clark and Legare (eds), Proceedings, 539.
30. S. Crocker, `The Process for Organization of Internet Standards Working Group', RFC 1640, June 1994.
31. See the following RFCs: Internet Architecture Board and Internet Engineering Steering Group, `The Internet Standards Process – Revision 2', RFC 1602, March 1994; C. Huitema, `Charter of the Internet Architecture Board', RFC 1601, March 1994; E. Huizer and D. Crocker, `IETF Working Group Guidelines and Procedures', RFC 1603, March 1994; S.
Crocker, `The Process for Organization of Internet Standards Working Group (POISED)', RFC 1640, June 1994.
32. The process is described in Huitema, `Charter of the IAB'.
33. Frank Kastenholz, posting on the IETF mailing list, 24 March 1995.
34. Hinden and Crocker, `IP Address Encapsulation'.
35. 46 per cent of the 500 attendees represented countries other than the United States, whereas previously held meetings averaged between 88 and 92 per cent American attendees, according to the Proceedings of the Twenty-Seventh Internet Engineering Task Force, Amsterdam, Netherlands, 12–16 July 1993.
36. From the Minutes of the IPng Decision Process BOF (IPDECIDE) reported by Brian Carpenter (CERN) and Tim Dixon (RARE) with additional text from Phill Gross (ANS), July 1993, accessed at http://mirror.switch.ch/ftp/doc/ietf/93jul/ipdecide-minutes-93jul.txt on 12 August 2003.
37. Brian Carpenter, submission to big-internet mailing list, 14 April 1993.
38. S. Bradner and A. Mankin, `IP: Next Generation (IPng) White Paper Solicitation', RFC 1550, December 1993.
39. IPDECIDE minutes, op. cit. (36).
40. T. Dixon, `Comparison of Proposals for Next Version of IP', RFC 1454, May 1993.
41. IPDECIDE minutes, op. cit. (36).
42. Bullet point presented by the IETF chair in a meeting entitled `IPDecide BOF' at the 1993 IETF Amsterdam.
43. T. Hughes, American Genesis: A History of the American Genius for Invention (New York, 1989), 459.
44. E. Fleischman, `A Large Corporate User's View of IPng', RFC 1687, August 1994, 1.
45. Fleischman, op. cit. (44), 2.
46. Fleischman, op. cit. (44), 6.
47. Boeing held at least 20 distinct Class B address blocks and 80 Class C address blocks. Each Class B address block contains more than 65,000 addresses and each Class C contains 256 addresses, so Boeing controlled at least 1.3 million IP addresses. S. Romano,
M. Stahl and M. Recker, `Internet Numbers', Network Working Group RFC 1117, August 1989.
48. E. Britton and J. Tavs, `IPng Requirements of Large Corporate Networks', RFC 1678, August 1994.
49. J. Curran, `Market Viability as a IPng Criteria [sic]', RFC 1669, August 1994.
50. M. Vecchi, `IPng Requirements: A Cable Television Industry Viewpoint', RFC 1686, August 1994.
51. M. Taylor, `A Cellular Industry View of IPng', RFC 1674, August 1994.
52. Bradner and Mankin, op. cit. (4).
53. J. Abbate, Inventing the Internet (Cambridge, MA, 1999), 179.
54. From S. Bradner and A. Mankin, `IPng Area Status Report', archived at www.sobco.com/ipng/presentations/ietf.3.94/report.txt.
55. Clark et al., op. cit. (11), 2.
56. M. McGovern and R. Ullman, `CATNIP: Common Architecture for the Internet', RFC 1707, October 1994.
57. Donald Eastlake, posting on big-internet mailing list, 14 September 1993.
58. According to the Director's Message, Proceedings of the Thirtieth IETF, Toronto, Ontario, Canada, 25–29 July 1994.
59. S. Bradner and A. Mankin, `IP Next Generation (IPng)', 1994, text version of presentation made at the IETF meeting in Toronto on 25 July 1994, archived at www.sobco.com/ipng/presentations/ietf.toronto/ipng.toronto.txt.
60. C. Huitema, IPv6: The New Internet Protocol (Upper Saddle River, NJ, 1996), 5.
61. Bradner and Mankin, op. cit. (59), parentheses in original.
Standardization across the Boundaries of the Bell System, 1920–38
ANDREW L. RUSSELL
INTRODUCTION
Standardization provides a useful starting point for examining the development of technological systems. Its utility comes from the pivotal position of standards within a system: whether they take the form of consistent interfaces, commodified raw materials or regularized labour practices, standards are necessary for integrating a heap of parts into a functional whole. A classic case in point is the Bell Telephone System. A number of excellent studies have shown how the creation of the modern Bell System was the result of extensive technical, administrative and political efforts to combine a variety of disjointed units (including local and regional operating companies, Western Electric, Long Lines and, after 1925, Bell Labs) under the direction of AT&T executives. An ideology of standardization drove the successful creation of the monopoly Bell System – most clearly articulated in Theodore N. Vail's slogan `One System, One Policy, Universal Service'. As with other large technical systems, standards were both cause and consequence of systematization.1
In the formative years of the Bell System, executives and managers pursued a thorough and far-reaching programme of standardization. By 1929, AT&T had defined standards for an astonishing variety of functions, including telephone plant design, underground cables and raw materials; the manufacture, distribution, installation, inspection and maintenance of new equipment; business and accounting methods, and non-technical supplies (such as office furniture, janitors' supplies, cutlery and china); and provisions for safety, health, and even natural disasters such as sleet storms.2 This comprehensive programme of standardization, when combined with strategies in the market and political spheres, generated powerful momentum and created the foundations of AT&T's control over the American telephone industry.
Indeed, we might see the AT&T standards strategy as the centrepiece of its transformation from a small entrepreneurial venture into the source of American leadership in global communications and electronics. As the many histories of the Bell System
have shown, AT&T maintained this momentum during subsequent decades through a sometimes uneasy truce with federal antitrust regulators. This truce eventually began to unravel in the late 1960s and ended with the divestiture of the Bell System in 1984.3

It is unfortunate that historians have not studied standardization in the mature Bell System (after 1920) with the same level of scrutiny that they have devoted to standardization in the Bell System's formative years (before 1920). Given what we know from historians and economists of monopoly firms, we might assume that standardization in the monopoly Bell System occurred in a monolithic and almost petulant manner. Indeed, this is precisely the style of standardization that the Federal Communications Commission described in its highly critical 1939 investigation of the American telephone industry. 'Centralized control over engineering, standardization, and manufacturing,' the FCC declared, could provide opportunities for the suppression of inventions, the failure to take advantage of outside improvements, and the sale and installation of outdated or inferior equipment by Western Electric to the regional operating companies.4 The FCC concluded that AT&T used standards – and the process of standardization – to construct and protect its monopolistic autonomy. But, as a matter of public policy, monopoly was an acceptable compromise so long as the Bell System provided a high-quality telephone service to users and paid consistent dividends to investors.

It is my contention that this caricature of monopoly standardization – sluggish, arrogant and solipsistic – paints a distorted picture of the Bell System's struggle to achieve standardization. If we look more closely at what AT&T executives and engineers were doing, we will of course see a number of projects to develop standards that would bring greater efficiency through centralized control.
But, if we persist in viewing the history of the Bell System from the vantage point of standardization, we might be surprised to see dozens of standardization projects that spanned the boundaries of their monopoly system. In this chapter, I review two of these standardization projects in order to show how system engineers laboured to maintain the momentum of a technological system – despite its secure status as a monopoly – by influencing technical standards that were not under their monopolistic jurisdiction. In today's antitrust parlance, they leveraged their monopoly power to influence competitive lines of business. My first example is an instance of what we might term 'interfering infrastructures' – the problem of inductive interference that resulted from the close proximity of transmission wires used by telephone systems, electrical light systems, electrical power systems and electrical railway systems. My second example is AT&T's efforts to eliminate the use of illegal telephone slugs, some of which were metal washers manufactured to meet existing industry standards. An important conceptual question lies beneath the surface of these two examples: why did AT&T engineers invest time and energy into standardization projects that reached beyond the boundaries of the
AT&T monopoly? This paper considers three possible explanations. The first is economic: just as intra-firm standardization brought efficiencies to the operation of AT&T's telephone network, AT&T executives might have believed that inter-firm (and inter-industry) standardization would generate similar efficiencies, reduce costs and increase profits. A second explanation is jurisdictional: participation in industry standardization could have provided a channel for AT&T to influence the standards process to benefit its proprietary interests. A third explanation is cultural: by participating in industry and national standardization projects, AT&T engineers could embody the professed public service ethic of AT&T and, at the same time, enhance the prestige and vitality of professional organizations (such as the American Institute of Electrical Engineers) and stabilizing institutions (such as the American Engineering Standards Committee). These three explanations need not be mutually exclusive; indeed, my point is that we need to keep each of these three types of motivation in mind in order to reach a richer understanding of standardization within and beyond the boundaries of the Bell System.

INTERFERING INFRASTRUCTURES: THE PROBLEM OF INDUCTIVE INTERFERENCE
Throughout its early history, AT&T did not encourage its engineers to collaborate openly in technical societies and industry groups. To the contrary, AT&T leaders recognized that their competitive advantages flowed from the company's premium on secrecy and patent protection. These attitudes, however, began to change in the years before the First World War. One indication of changing attitudes may be seen in a speech at the 1915 conference of Bell System engineering and manufacturing personnel, in which H. F. Albright asked the Western Electric and AT&T engineers to reconsider the potential benefits of professional activities outside the Bell System. Albright suggested that individual employees would gain 'an enlarged circle of acquaintances' and learn about other engineering methods. The company as a whole would benefit, as well:

. . . through such associations the company obtains recognition for its principles and achievements; its worth and position in the community are better known; the quality of its scientific work and its efficiency in production becomes better known and our customers and friends learn to better appreciate our pioneer work in the development of the art of telephony.5

AT&T engineers quickly learned that 'outside' cooperation had more than a social function. A good example of the technical benefits of cooperation may be seen in AT&T's efforts to address inductive interference generated by the close proximity of other networks that utilized electrical current, such as electrical power lines, lighting systems and railroad equipment. Some information infrastructures, such as telegraph and railroad networks in the nineteenth century, grew in a largely complementary manner.6 The
electrical networks and infrastructures of the twentieth century, however, created new problems that engineers attacked by using both technical and organizational means. Telephone engineers had long been familiar with interference, such as 'crosstalk' (speech from one conversation was audible in another) and 'babble' (unintelligible background noise), that resulted from placing telephone circuits in close proximity.7 By the mid-1910s, however, Bell engineers became concerned with other sources of electrical interference that originated not within their networks, but instead from parallel and intersecting lines operated by power and light companies. This type of 'inductive interference' was deeply problematic because it undercut one of the central technical objectives of Bell engineers: to increase the efficiency and sensitivity of transmission equipment. As Bell engineers were lowering their limits for acceptable levels of interference in an effort to improve call quality, power companies were expanding their reach by building more (and more powerful) lines and transmission facilities.

The first efforts to address the problem of inductive interference systematically occurred in California, under the auspices of the California Railroad Commission. Between 1912 and 1917, the Commission's Joint Committee on Inductive Interference – consisting of and funded by representatives from the telephone, power and railroad industries – performed a number of field and laboratory tests and wrote dozens of technical reports, many of which were compiled in a 1919 final report. The report identified some 'guiding principles' for preventing interference, including standards for minimum distances between power lines and communication lines, as well as design and construction rules for apparatus, which were incorporated into Commission rules.
However, the report's authors also acknowledged the complexities of inductive interference and underscored the need to conduct further studies of the scientific and practical aspects of the problems at hand.8 Perhaps the greatest contribution of this effort was its demonstration that a cooperative approach could generate new solutions to technical problems. Despite this lesson, telephone and power companies around the country turned to litigation throughout the 1920s in an effort to deflect the costs of solving the problem onto their rivals.

As early as 1920, an internal Bell System conference dedicated an entire session to working through AT&T's approach to the problem of inductive interference. Although existing laws and precedents seemed to indicate that the first party to construct facilities had the right to exclude other parties, AT&T's Chief Counsel N. T. Guernsey stressed that most interference cases were not so clear. Accordingly, Guernsey, in comments echoed by AT&T's newly appointed Vice President and Chief Engineer, Bancroft Gherardi, articulated a preference to avoid litigation if possible. This preference had multiple sources, including the imperative to keep costs down, the 'necessity of avoiding controversy with our friends who are engaged in the power business' and the desire to maintain a favourable image in the eyes of regulators and the general public.9
Gherardi personally led AT&T's participation in the effort to settle through cooperation the problems generated by the interfering infrastructures. Beginning in 1921, Gherardi represented the Bell System in two ad hoc Joint General Committees: one with the Association of American Railroads and the other with the National Electric Light Association (NELA). Because of his training as an electrical engineer and his longstanding participation in the AIEE, Gherardi was the right man for a task that was part diplomacy, part engineering. His counterpart representing NELA was Robert Pack, a respected power engineer and active member of NELA.10

In 1922, NELA and the Bell Telephone System created a Joint Development and Research Subcommittee to investigate further the problems of inductive interference. By 1924, these groups joined with representatives from the electric and steam railroad industries to form the American Committee on Inductive Coordination. Gherardi was the group's Chairman; Pack was one of three Vice-Chairmen. Together, the two men presented a report on the Committee's work to the general session of the NELA convention in May 1926. In his remarks, Pack matter-of-factly noted three areas of effort. First, the committee had created 'Principles and Practices for the Joint Use of Wood Poles' and distributed it to NELA member companies and AT&T associated companies. Secondly, he reported some progress toward a statement on procedures for dividing the costs incident to inductive coordination. Thirdly, he noted the recent approval of funds for further development and research, which, to his regret, had not progressed as far as the first two areas.11 Gherardi departed from Pack's reporting style to relay his own personal reflections as a visitor to the NELA.
His diplomatic skills were on full display in his short speech: he noted the pleasure of 'wearing the badge' of the group at its convention (his sixth consecutive appearance) and spoke of 'a change in my attitude toward the meeting, and a change in the meeting's attitude toward me'. He continued:

I can feel that there has been a closer and closer bond between us . . . We have put further and further behind us the proposition that inductive coordination was a problem to fight about, and we have more and more fully accepted the view that inductive coordination was a problem to work out together, quite a different attitude from fighting it out.12

Reports from NELA's Inductive Coordination Committee at the group's meetings in 1926 and 1927 further indicate that earlier tensions between the power and telephone companies had been reduced to a matter of cooperative research and routinized solutions. The 1926 report by Howard Phelps noted, 'In contrast with the experience of previous years, the one now closing has been singularly free from controversy and threatened court actions'. He subsequently directed the rest of his report toward new problems with inductive interference from radio and automatic train control systems. One year later, J. C. Martin opened his report on the
committee's work by noting two 'outstanding facts' of the previous year. First, the committee had emerged 'finally and completely' from its reputation as a body that handled a controversial problem with the telephone industry to a body that dealt with a 'common electrical industry problem'. Secondly, he reported that relations between the staff and engineers of NELA and AT&T had been further strengthened. The remainder of Martin's report discussed what he felt were more pressing problems – again, inductive interference from radio and automatic train control.13

Gherardi himself, speaking from the audience at the 1928 AIEE meeting, confirmed that the group's turn from conflict to collegiality had borne fruit. Reflecting on the joint work between the Bell System and NELA over the past several years, Gherardi declared that 'we came to the conclusion that 10 per cent of our problem was technical and 90 per cent was to bring about between the people on both sides of the question, a friendly and cooperative approach'.14

Although these ad hoc committees generated standards and recommended practices (such as recommendations for satisfactory distances between electrical wires connected to the same poles), they did not solve the underlying scientific and technical problems associated with inductive interference. Indeed, telecommunications and electrical engineers continue to struggle with similar problems as they seek to use power lines as a delivery mechanism for broadband communications in the twenty-first century.15 Nevertheless, cooperative organizations such as the Joint General Committees and American Committee on Inductive Coordination created institutional means for defusing a potentially costly confrontation between some of the major forces in American high-tech industry.
Through this new approach – perhaps most visible in the rhetorical shift from 'inductive interference' to 'inductive coordination' – they redefined their confrontation as a problem that could be managed through collaborative research and inter-industry standardization.

BANCROFT GHERARDI AND THE AMERICAN ENGINEERING STANDARDS COMMITTEE
Gherardi's enthusiasm for this cooperative solution to a difficult technical and organizational problem foreshadowed his more substantial commitment to the cause of industrial standardization. By the late 1920s, Gherardi's faith in engineering cooperation, combined with his longstanding interest in technical standardization, led him to become closely involved with the activities of the American Engineering Standards Committee (AESC). A small group of respected electrical, mechanical, civil and mining engineers formed the AESC in 1918 as an institution that could negotiate solutions to the same types of inter-industry technical problems as Gherardi had been investigating through the ad hoc Joint General Committees. By the mid-1920s, the AESC had proven to be a productive venue for reaching a national consensus among engineers as well as representatives from government, academia, the
insurance industry, trade associations and safety groups. Three factors drove the rapid growth of the AESC in the early 1920s: the interest of engineers in the elite technical societies, increasing participation from trade associations, and the support of political leaders such as the highly regarded mining engineer and Secretary of Commerce, Herbert Hoover.16

At first, AT&T participated in the AESC in a very limited way. It did not contribute to any AESC projects until 1921, when it sent an engineer to only one committee, 'Symbols for Electrical Equipment of Buildings and Ships'.17 AT&T joined the AESC in earnest in 1922, when the Bell Telephone System formed the Telephone Group (together with its nominal partner, the United States Independent Telephone Association) and became a dues-paying Member Body of the AESC.18 By the end of 1927, dozens of Bell System engineers were involved in the work of 21 AESC sectional committees, such as the National Electrical Safety Code committee as well as committees that created standards for manhole frames and covers, tubular steel poles, methods for testing wood, direct-current rotating machines, induction motors and machines, and drafting-room drawings.19 Each of these projects dealt with technologies that lay at the boundaries between the telephone business and other industries. Each was important (or, in some cases, vital) for the operation of the Bell System but, unlike standards for the telephone network and equipment, not subject to AT&T's monopoly control. As the AESC formed new committees, it was very careful not to tread on AT&T's turf, and there is no evidence that AT&T submitted any of its internal standards for AESC approval.
The full name of an AESC committee responsible for standards for insulated wires and cables illustrates the point clearly: 'Wires and Cables, Insulated (Other than Telephone and Telegraph)'.20

Gherardi became personally involved in the AESC as the organization reached a turning point in 1928. In response to increasing amounts of interest from all aspects of industry – not just engineers – the AESC made fundamental changes to its structure and process, and reconstituted itself as the American Standards Association (ASA) in July 1928.21 Most of the organization's reforms were aimed at making it more welcoming and efficient for industry representatives of all stripes – passing control, as the New York Times noted blandly, from engineers and scientists to 'the executives of railroad, public utility companies and industrial concerns'.22 Indeed, the conspicuous omission of the word 'engineering' from the group's new title indicates the extent to which control over standardization had spread from the domain of scientists and engineers into the domain of corporate executives and trade associations. In the reconstituted body, engineers and scientists retained a smaller sphere of influence in the ASA Standards Council, while the industry executives formed a Board of Directors that assumed responsibility for the ASA's financial administration.23 Gherardi was a member of the Board of Directors from 1929 to 1935, and also played a key role in the ASA Underwriters' Fund, which
raised hundreds of thousands of dollars for ASA coffers by soliciting direct contributions from industrial firms.24 Gherardi's importance to the ASA – and the ASA's importance to Gherardi – was underscored by his election as ASA President for the years 1931 and 1932.25 Despite the potentially crippling effects of economic depression, Gherardi could boast by the end of his term in 1932 that the ASA consensus-driven standards process was alive and robust. During 1932, 2,700 individuals from 570 technical, trade and government bodies were involved in ASA projects – more people than ever before.26

In the standards committees of the ASA, AT&T found venues to leverage its status and power to extend its technical jurisdiction beyond the boundaries of the Bell System. A close look at AT&T's extended efforts to revise a single, seemingly mundane standard for lock washers illustrates how the company's engineers used the industry standards process to attack critical system problems that the monopoly Bell System could not solve by itself.

TELEPHONE SLUGS: A 'PETTY RACKET'
To understand why AT&T engineers thought the standardization of lock washers could help solve a critical system problem, it is necessary to take a slight excursion and consider some of the history of coin-operated telephones. The first coin-operated telephone was invented in 1888, but Bell companies did not immediately adopt coin-operated telephones on a large scale. When they first appeared, coin-operated telephones were well suited for two different purposes: for convenient on-the-go calls in busy public areas, and for residential customers or shops – particularly in Chicago – who preferred the option to pay on a per-call basis instead of a more expensive monthly subscription.27

From the perspective of Bell System engineers, these coin-operated telephones had a major disadvantage: they could be tricked. Instead of inserting nickels, dimes or quarters, some customers used metal objects – known as slugs – that were of a similar size and weight to the legal coins. Although the practice of using slugs was tolerated in some cases by local operating companies, in most cases slugs posed a costly problem. For example, one 1927 report suggested that in Detroit alone, over 15,000 slugs were found in coin-operated phones each month, which translated to $750 in lost revenue.28

As engineers from Western Electric, AT&T and the operating companies studied the problem, they realized that any exclusively technical solution to the slug problem would be costly and excessively difficult to engineer. One possibility they considered was to design coin boxes to use non-circular or octagonal tokens; but this solution would have triggered other substantial system problems, such as increased installation and maintenance costs.29 Bell System engineers also considered making changes to the slots used to filter and collect nickels, dimes and quarters, but these channels were already built to meet precise tolerances designed to allow legitimate coins to work.
In both cases – the introduction of
irregular tokens and the redesign of coin channels in existing telephones – the costs of fixing the slug problem within a system context were prohibitive, and both alternatives were rejected as short-term solutions.30

Unable to solve the slug problem through an internal technological fix, AT&T engineers chose to attack the problem by turning to institutions outside the Bell System. Between 1927 and 1938, AT&T cultivated relationships with two communities: private firms active in industry standards committees, and government officials who took an interest either in the standardization process or in connections between 'the slug racket' and other forms of organized crime. In their efforts with both communities, AT&T's strategy was based on a fascinating assumption: it was easier to change the world than it was to change a technology embedded deep within the Bell System.

In 1927, the Superintendent of the Michigan Bell Telephone Company alerted AT&T engineers that a significant portion of slugs discovered in coin boxes were in fact washers that were manufactured to conform to a particular industry standard. Many of the slugs that turned up in Bell coin boxes were, from a different perspective, simply standard iron washers that coincidentally had similar dimensions to nickels, dimes or quarters.31 Two of the leading engineering societies in the country – the American Society of Mechanical Engineers and the Society of Automotive Engineers – had separately published these washer standards in the early 1920s. Beginning in 1926, these two groups combined efforts under the auspices of the American Standards Association and formed ASA Sectional Committee B27, 'Standardization of Plain and Lock Washers'. Since this was a clear opportunity to eliminate the offending sizes of washers that were being used as slugs, AT&T sent one of its senior equipment engineers, George K.
Thompson, to participate on the B27 Committee beginning in late 1927.32 The pace of work in the ASA Committee was slow – so slow that when Thompson retired in 1930, the Committee had not even published a draft of the revised washer standards. Upon his retirement, Thompson left the AT&T washer standards campaign in the hands of Eliot W. Niles, an engineer in the Department of Development and Research. By early 1931, progress seemed imminent: the B27 Committee had prepared a tentative standard with revised dimensions for lock washers. However, in June 1931, the ASA Standards Council reviewed the Committee's work and discovered a violation of ASA rules that caused further delay. The problem was that ASA procedural rules required sectional committees to have an even representation of producers and consumers – in this case, manufacturers and buyers of washers. With 18 committee members designated as consumers and only 11 designated as producers, B27's membership failed to meet the ASA's procedural standard. It took the Committee another full year to canvass existing members for manufacturers who might be interested, convince six of these manufacturers to join the Committee and obtain the ASA's approval for this change. After these new members were approved, they needed several additional months to review the proposed specifications.33
As the standards process plodded along, AT&T also utilized a second, more aggressive tactic to recruit allies among other industrial firms. Thompson and Niles were eager to learn of companies that manufactured brass tags, commemorative coins or non-standard washers that could be used as slugs, and AT&T was not shy about dispatching company representatives to warn these companies about the damage their products were causing. This approach worked well with small companies, but larger manufacturers or Bell System suppliers – such as Bethlehem Steel – were less easily persuaded (or intimidated) by letters, calls or even visits from AT&T representatives.34

In 1933, a full six years after AT&T first identified the standard washers that were being used as slugs, AT&T officials finally found a strategy that helped them bring the work of the B27 Committee to completion. Upon discovering that washer dimensions specified in an Air Corps Standard contained the same specifications as some of the offending slugs, AT&T officials pressured Harry H. Woodring, the Assistant Secretary of War, to support a new standard. Woodring, spurred to action by letters and meetings with Niles and A. E. Van Hagen (an AT&T official based in Washington), persuaded the Army–Navy Standards Board to back the changes favoured by AT&T. This appeal, directed toward a high-ranking official, sparked a final surge of support that culminated in the publication of the revised washer specification as an ASA-approved 'American Standard' in 1934.35

The long-awaited victory was bittersweet. By itself, the new standard – a significant technical, organizational and political achievement that took seven years – was not a wholesale solution to the slug problem. ASA standards were used only on a voluntary basis, and the ASA, by design, had no authority to enforce compliance with its standards.
Even though AT&T had spent the last seven years building a strong network of partners through the standardization process, this alliance could not protect the Bell System from those elements of American industrial society who did not want to adhere to the consensus industry standard. The offending standard was eliminated, but the slug problem remained.

By the mid-1930s, exasperated AT&T executives appealed to regulators and law-enforcement officials for their help in stopping the fraudulent manufacture and use of telephone slugs. This political strategy began to pay dividends in 1936. In February of that year, the New York District Attorney arrested three men alleged to be responsible for manufacturing and selling a majority of the slugs used to defraud coin-operated boxes used by telephone companies, public utility companies and restaurants. As the arrest was announced, a representative from New York Telephone took advantage of the publicity to disclose the extent of the slug problem: he reported that, in 1935 alone, New York Telephone recovered 4,277,256 slugs, which amounted to $344,524 in lost revenue. This announcement was a shrewd public relations move, calculated to build a sense of indignation against the 'slug racket'. Twenty more suspects were arrested in an April 1936 sting, and 16 of them (including their 'spearhead') were
convicted by the end of June.36 Reflecting on these arrests, an outraged editorial in the Washington Post asked the public to rise above this 'petty racket' and suggested that a cultural standard could succeed where a technical standard did not:

Petty rackets in which the public at large is able to participate with slight danger of detection are not so easy to control. They constantly crop up in one form or another. The ultimate hope of exterminating them lies in elevating standards of personal conduct through education in the home and schools . . . For immediate relief from mass pilfering a great deal can be done by unrelenting pursuit of the individuals who earn a living by encouraging such practices.37

Buoyed by public support for police action against the slug racket, AT&T and the regional Bell Associated Companies pressed state regulators around the country to pass laws that made the use of telephone slugs a crime punishable by fine, imprisonment or both. In December 1937, the Washington Post reported the first arrest under the District of Columbia's new law prohibiting the use of telephone slugs. The article concluded by noting the financial benefits of such laws for the telephone company: 'In 38 states where similar laws have been enforced, company officials said losses had "dropped tremendously".'38 Of all the different tactics used by AT&T men since discovering the slug problem in 1927, this lobbying offensive – a political solution to a technical problem – yielded the best results by far.39

This brief history of AT&T's anti-slug efforts illustrates some of the more general features of AT&T's attitude toward industry standardization. Beginning in the 1920s, AT&T engineers joined dozens of consensus standards committees. Their experiences in these committees were as diverse as the standards they sought to influence.
In many of these committees, such as those that set standards for wood poles and acoustic terminology, work proceeded in a harmonious fashion.40 In other cases, such as the battles for control of radio transmission, the standards-setting process became a lightning rod for scientific, technical and political controversy.41 Sometimes, AT&T participated in more targeted and specific institutions, such as the American Institute of Electrical Engineers, the Institute of Radio Engineers, the American Society for Testing Materials and the National Electric Light Association; at other times, it participated in larger and more bureaucratic bodies such as the ASA and the International Electrotechnical Commission.42

AT&T's motivations for joining these committees also varied. In some cases, industry standards helped to improve the efficiency of operations in the Bell System. In other cases, standards work helped AT&T engineers to establish or enhance their personal reputations and professional status. In still other cases, AT&T strove to shape the industry consensus around solutions and technologies that it favoured. Amidst this variety, AT&T engineers learned a valuable overarching lesson: they could use industry standards committees to solve critical problems with the telephone system that AT&T could not solve on
its own. Moreover, standards committees provided avenues for AT&T to throw its weight around in American industry, politics and society. The standardization process could be painfully slow over the short term, but AT&T managers such as Bancroft Gherardi realized that, over the long term, they could leverage standards committees to extend their influence over separate, non-telephone lines of business. Through these standards committees, AT&T executives and engineers were able to expand their company's influence, even if they also learned that there were limits to the utility of the consensus standards process. It remains unclear whether regulators in the FCC and in the Department of Justice failed to notice this activity or simply accepted it as a normal feature of monopoly control.

CONCLUSIONS
In recent years, historians of technology and industry have, following the lead of Thomas Hughes and others, moved from a focus on individual technologies and companies as the central units of historical analysis to a broader focus on 'networked systems'. As we broaden our focus and follow the logic of networks, we may soon discover that we should not stop at the boundaries of any single networked system. Instead, we need to locate networked systems within a broader context – in this case, within a number of networked systems that provided the infrastructure for the American industrial economy and the foundations for American world power in the twentieth century.43 Two conclusions follow from this point. First, we can see that 'boundary technologies' (such as telephone slugs/lock washers) emerge as important sites for examining the growth of technological systems. These technologies are important, even in monopoly systems that we might expect to be self-contained or subject only to hierarchical managerial control. Secondly, as we follow Hughes's example and continue to study the development of large technical systems, we need to recognize that these systems – and their builders – were not always complementary, and rarely as symbiotic as the growth of the American rail and telegraph networks in the nineteenth century. When did conflicts arise? What were the subjects of dispute? How were such conflicts resolved? Asking these questions can help us extend and refine the important Hughesian concept of technological momentum, specifically by pointing out how momentum is neither effortless nor inevitable. In the history of American telephone networks, momentum did not follow automatically from the entrepreneurial efforts of AT&T's system builders; nor did it emerge naturally from any sort of path dependence. Instead, momentum was the consequence of continuous work and deliberate diplomacy in multiple venues.
The contingencies of momentum are especially evident when we examine standardization efforts that spanned the boundaries of the Bell System – that is, technologies that were vital for the functioning of the Bell System, yet not subject to AT&T's monopoly control. In the introduction, I posed a question: why did AT&T engineers invest time and energy into
standardization projects that reached beyond the boundaries of the AT&T monopoly? The examples discussed in this paper illustrate that we need to consider simultaneously three different motivations, namely efficiency, power and culture. When we consider standardization in the mature Bell monopoly, efficiency and power went hand in hand. The participation of Bell executives and engineers in standard-setting projects to diminish inductive interference and to eliminate illegal telephone slugs helped them to solve critical problems that threatened the continued growth and efficient operation of their monopoly network. At the same time, AT&T engineers also found within these standardization projects new opportunities to patrol the boundaries of their technological system. By considering Bancroft Gherardi, an executive whose career has been ignored by the numerous scholars who study the early Bell System, we can appreciate how power and efficiency can sometimes be achieved through cultural avenues. Gherardi emerges from these episodes as a skilful diplomat and engineer who worked within the Bell System as well as across the boundaries of the System. Even the most cynical readers might concede that he did so in a genuinely cooperative spirit. I do not wish to portray Gherardi as motivated solely by altruism – surely he was not. I do, however, wish to suggest that, by studying Gherardi and his fellow standards engineers, we may come to see with more clarity the cooperative social networks that sustained the American style of competitive managerial capitalism.

ACKNOWLEDGEMENTS
William D. Caughlin and George Kupczak at the AT&T Archives and History Center provided invaluable assistance and access to source material. Critiques from Michael Aaron Dennis, Richard R. John, Louis Galambos, Stuart W. Leslie and Kenneth Lipartito greatly improved this chapter; remaining errors are my responsibility alone. Finally, I am pleased to acknowledge the generous support of the John Hope Franklin Humanities Institute at Duke University and the IEEE Life Members Committee.

Notes and References
1. Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880–1930 (Baltimore, 1983). On AT&T's early history, see George David Smith, The Anatomy of a Business Strategy: Bell, Western Electric, and the Origins of the American Telephone Industry (Baltimore, 1985); Robert W. Garnet, The Telephone Enterprise: The Evolution of the Bell System's Horizontal Structure, 1876–1909 (Baltimore, 1985); Neil H. Wasserman, From Invention to Innovation: Long-Distance Telephone Transmission at the Turn of the Century (Baltimore, 1985); Kenneth Lipartito, The Bell System and Regional Business: The Telephone in the South, 1877–1920 (Baltimore, 1989); Milton Mueller, Universal Service: Competition, Interconnection, and Monopoly in the Making of the American Telephone System (Cambridge, MA, 1997); Richard John, 'Recasting the Information Infrastructure for the Industrial Age', in Alfred D. Chandler, Jr and James W. Cortada (eds), A Nation Transformed By Information: How Information Has Shaped the United States from Colonial Times to the Present (New York, 2000); Robert MacDougall, 'Long Lines: AT&T's Long-Distance Network as an Organizational and Political Strategy', Business History Review, 2006, 80: 297–328.
2. Harold S. Osborne, 'The Fundamental Role of Standardization in the Operations of the Bell System', American Standards Association Bulletin, September 1931, 3; and O. C. Lyon, 'Standardization of Non-Technical Telephone Supplies', American Telephone and Telegraph Company, Plant and Engineering Conference of the Bell System, New York City, December 6–10, 1920, Section IV, 97–103. Throughout the 1920s and 1930s, a number of AT&T engineers published comprehensive overviews of the Bell System and the important role of standardization. See J. N. Kirk, The Need for Standardization of Design, Construction and Maintenance Practices in Telephone Work and the Effect upon Service (AT&T Information Department, 1921); Harold S. Osborne, 'Standardization in the Bell System', Bell Telephone Quarterly, 1929, 8: 9–28; Harold S. Osborne, 'Standardization in the Bell System – II', Bell Telephone Quarterly, 1929, 8: 132–52; Bancroft Gherardi and Frank B. Jewett, 'Telephone Communication System of the United States', Bell System Technical Journal, 1930, 9: 1–100; and Frank B. Jewett, 'Some Fundamentals in Standardization', Bell Telephone Quarterly, 1938, 17: 17–27.
3. Accordingly, most histories that trace the creation of the Bell System stop around 1920, if not earlier. For a study of the Bell System's momentum throughout the twentieth century, see Louis Galambos, 'Looking for the Boundaries of Technological Determinism: A Brief History of the Telephone System', in Renate Mayntz and Thomas P. Hughes (eds), The Development of Large Technical Systems (Boulder, 1988). See also Hughes, op. cit. (1) and Thomas P. Hughes, 'Technological Momentum', in Merritt Roe Smith and Leo Marx (eds), Does Technology Drive History? The Dilemma of Technological Determinism (Cambridge, 1994).
4. United States Congress, Report of the Federal Communications Commission on the Investigation of the Telephone Industry in the United States (Washington, DC, 1939), 252.
5. H. F.
Albright, 'The Business Activities and Relations of Members of Engineering and Manufacturing Departments Outside the Western Electric Company', Manufacturing and Engineering Conference, 1915, 2.
6. John, op. cit. (1), 55–106.
7. M. D. Fagen (ed.), A History of Engineering and Science in the Bell System: The Early Years (New York, Inc.), 324–36.
8. Railroad Commission of the State of California, Inductive Interference between Electric Power and Communication Circuits: Selected Technical Reports with Preliminary and Final Reports of the Joint Committee on Inductive Interference and Commission's General Order for Prevention or Mitigation of Such Interference (Sacramento, 1919). The 1916 Transactions of the AIEE contain the transcripts of two lively discussions on these issues, including implications for the AIEE's standard wave forms. See A. H. Griswold and R. W. Mastick, 'Inductive Interference as a Practical Problem', Transactions of the American Institute of Electrical Engineers, September 1916, 16: 1051–94; and Frederick Bedell, 'Characteristics of Admittance Type of Wave-Form Standard', Transactions of the American Institute of Electrical Engineers, September 1916, 16: 1155–86.
9. Fagen, op. cit. (7), 336. For positions articulated at the AT&T conference, see Bancroft Gherardi, 'Introductory Remarks on "Our Legal Rights in Interference Cases"', N. T. Guernsey, 'Our Legal Rights in Interference Cases', H. S. Warren, 'Interference Problems', Frederick L. Rhodes, 'Remarks on "Interference Problems"', Harold S. Osborne, 'Inductive Interference' and D. H. Keyes, 'Inductive Interference Problems: Method of Attack', all in AT&T, Plant and Engineering Conference of the Bell System, 1920, Section IV: 2–55.
10.
Gherardi became an AIEE Association member when he graduated from Cornell in 1895, served as AIEE Vice-President from 1908 to 1910, was named an AIEE Fellow in 1912, and served on a number of AIEE committees as well as the AIEE Board of Managers from 1905 to 1908 and 1914 to 1917. He was later elected AIEE President for 1927–28. See Fagen, op. cit. (7), 336–7; Lewis Coe, The Telephone and its Several Inventors (Jefferson, NC, 1995), 158–9; and 'Bancroft Gherardi – Biographical Data', September 1949, Box 1133, 'Gherardi, Bancroft – Biography – 1873–1941', AT&T Archives, Warren, New Jersey. Pack became NELA President for 1926–27.
11. Bancroft Gherardi and Robert F. Pack, 'Report on Joint General Committee, Bell System and N. E. L. A.', National Electric Light Association Proceedings, May 1926, 83: 191–3.
12. Gherardi and Pack, op. cit. (11); American Committee on Inductive Coordination, Bibliography on Inductive Coordination (New York, 1925).
13. Howard S. Phelps, 'Report on Inductive Coordination Committee', National Electric Light Association Proceedings, May 1926, 83: 851–2; J. C. Martin, 'Report of
Inductive Coordination Committee', National Electric Light Association Proceedings, June 1927, 84: 625–6.
14. Bancroft Gherardi, 'Discussion at Pacific Coast Convention', Transactions of the American Institute of Electrical Engineers, 1928, 47: 50. Another example of this general lesson appears in the preface of a 1936 book on inductive coordination: '. . . a very minor amount of cooperative work in advance planning of facilities or in correction of existing unfavorable situations will in most cases enable both companies to serve the same customers – the public – at no greater cost.' Laurence Jay Corbett, Inductive Coordination of Electric Power and Communication Circuits (San Francisco, 1936), xiii. Corbett was a power engineer who had been involved with the study of inductive interference since the California Railroad Commission investigations. For a similar quote attributed without reference to Pack, see Fagen, op. cit. (7), 337. See also 'Symposium on Coordination of Power and Telephone Plant', Transactions of the American Institute of Electrical Engineers, June 1931, 50: 437–78.
15. Osborne, op. cit. (2), 151; 'IEEE Starts Standard to Support Broadband Communications over Local Power Lines', 20 July 2004, online at http://standards.ieee.org/announcements/pr_p1675.html.
16. See Chapter 2, 'From Industry Standards to National Standards: 1910–1930', in Andrew L. Russell, '"Industrial Legislatures": Consensus Standardization in the Second and Third Industrial Revolutions', 2007, Ph.D. dissertation, Johns Hopkins University.
17. Work of the American Engineering Standards Committee (Year Book) (New York, 1921), 20, 25.
18. American Engineering Standards Committee Year Book (New York, 1924), 17. At this time, there were 22 other AESC Member Bodies. AT&T engineers, consistent with their company's commanding technical and business position, were far more active and dominant than their colleagues from the independent companies.
For example, in 1927, AT&T sent engineers to 21 committees; USITA engineers participated in nine. American Engineering Standards Committee Year Book (New York, 1928), 57, 64.
19. American Engineering Standards Committee Year Book (New York, 1923).
20. AESC Year Book, op. cit. (18), 39.
21. 'Standards Group to Broaden Scope', New York Times, 1928, 8 July: 40; 'Scientific Events: The American Standards Association', Science, New Series, 1928, 68(1751): 53–4.
22. 'Executives to Direct Standards Body', New York Times, 1929, 8 July: 36.
23. American Standards Year Book (New York, 1929), 7.
24. They found immediate success: in 1929 alone, this fund was responsible for adding $74,000 to the ASA's annual income of $54,000. AT&T was one of a group of large American firms to contribute. The other contributors were Aluminum Company of America, Bethlehem Steel, Consolidated Gas, Detroit Edison, Ford Motor Company, General Electric, General Motors, Gulf Oil, Public Service Corporation of New Jersey, Standard Oil Corporation of New Jersey, US Steel, Westinghouse Electric and Manufacturing Company, and Youngstown Sheet and Tube Company. In 1930, the ASA announced that it had obtained the means to increase its budget by $500,000 over the next 3 years. 'Plan to Enlarge Standards Work', New York Times, 1930, 5 January: N21; William J. Serrill, 'President's Report', American Standards Year Book (New York, 1930), 9–10; and 'Milestones of the ASA', Industrial Standardization, 1943: 330.
25. 'Gherardi Heads Standards Group', New York Times, 1930, 12 December: 17.
26. 'Group Hears of Gain in Standards Work', New York Times, 1932, 1 December: 38; 'Industrial Standardization', Wall Street Journal, 1932, 2 December: 2. Ten days later, the AIEE awarded its prestigious Edison Medal to Gherardi, 'for his contributions to the art of telephone engineering and the development of electrical communication'.
'Bancroft Gherardi Wins Edison Medal', New York Times, 1932, 12 December: 11; and 'Award of the Edison Medal to Bancroft Gherardi', Science, New Series, 1932, 76(1981): 562.
27. Fagen, op. cit. (7), 153–6, 160–2, 170–1.
28. E. M. Gladden to L. B. Wilson, 11 July 1927, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives.
29. George K. Thompson to C. J. Davidson, 11 October 1927, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives.
30. AT&T Outside Plant Development Engineer to L. F. Morehouse, 7 November 1932,
Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives. Eventually, AT&T introduced new models of coin-operated telephones that implemented different designs – none of them completely successful – to detect and prevent the use of illegal slugs.
31. E. M. Gladden to L. B. Wilson, 11 July 1927, AT&T Archives. Gladden reported that the Association members 'did not appear to welcome' AT&T's suggested solution, which was to confiscate dies and commemorative coins of the offending sizes. Many Association members, it turned out, worked for or owned companies that used such equipment for legitimate purposes.
32. George K. Thompson to C. J. Davidson, 11 October 1927, AT&T Archives; F. J. Schlink to George K. Thompson, 19 November 1927, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives; George K. Thompson to W. F. Hosford, 20 December 1928, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives. Thompson had been involved with coin boxes for over 30 years, ever since he filed the first Bell patent for coin telephones back in 1895.
33. C. B. LePage to E. W. Niles, 9 June 1931, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives; American Standards Association, 'American Tentative Standard – Lock Washers', November 1931; C. B. LePage to P. G. Agnew, 12 July 1932, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives.
34. Correspondence in the AT&T Archives reveals at least three firms that cooperated with AT&T's direct approach: the Rome Brass and Stamping Company of Rome, New York; the Dennison Manufacturing Company of Framingham, Massachusetts; and Patterson Brothers of Park Row, New York City. Gladden to Wilson, op. cit.
(31); 'Fraudulent Use of Slugs in Coin Box Telephones (Confidential)', 9 October 1933, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives; AT&T Outside Plant Development Engineer to Morehouse, op. cit. (30).
35. Harry H. Woodring to A. E. Van Hagen, 13 October 1933, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives; E. W. Niles to Z. Z. Hugus, 22 December 1933, Location 482 07 03 08, 'American Standards Association Committee on Washers, 1927–1934', AT&T Archives.
36. 'Third Man Seized in Sale of Slugs', New York Times, 1936, 9 February: 24; 'Merchant is Guilty in Fake Coin Racket', New York Times, 1936, 24 June: 19.
37. 'The Slug Racket', Washington Post, 1936, 11 February: 8, emphasis added.
38. 'D. C. Property, Telephone Slug Measures Pass', Washington Post, 1937, 27 April: 15; 'Police Accuse Two in Phone "Slug Racket"', Washington Post, 1937, 3 December: 30; 'Two Tried in First Phone Slug Case', Washington Post, 1938, 10 February: 18.
39. There was no neat ending or systematic solution to the scourge of telephone slugs, which continued to pose a problem for much of the twentieth century. From the 1930s to the 1950s, the manufacture and sale of slugs was closely linked to organized crime. See '7 Indicted in Slug Racket', New York Times, 1941, 6 May: 23; and 'Slug Dropped in Phone Box Leads to Mobster's Arrest', Hartford Courant, 1952, 21 November. By the 1960s, however, the same technical practice – tricking network devices to pirate network access – took on a new set of cultural meanings and associations when adopted by Yippies and phreakers. A cultural history of slugs would be a fascinating project.
40. See the committee records and correspondence in Location 484 04 04 02, 'A.S.A. Sectional Committee on Wood Poles', AT&T Archives; and Location 419 01 02 16, 'A.S.A. Committee Z24 on Acoustic Terminology, 1932–1938', AT&T Archives.
41.
On AT&T's involvement with radio and radio standards more generally, see Reich, The Making of American Industrial Research, 170–238; and Hugh R. Slotten, Radio and Television Regulation: Broadcast Technology in the United States, 1920–1960 (Baltimore, 2000).
42. Osborne, op. cit. (2), 150–1.
43. Thomas P. Hughes, 'From Firm to Networked Systems', Business History Review, 2005, 79: 587–93.
Morality, Locality and 'Standardization' in the Work of British Consulting Electrical Engineers, 1880–1914

STATHIS ARAPOSTATHIS

In memory of my grandmother, Olympia Kaloussi

Assuming, then, that the buyer's engineer has come to stay, it would appear that a condition precedent to the standardisation of plant is the standardisation of the buyer's engineer. (R. Percy Sellon, 'The Standardisation of Electrical Engineering Plant', Journal of the Institution of Electrical Engineers, 1899–1900, 29: 294)

INTRODUCTION
In this chapter, I study the co-construction of engineering expertise, electrical technologies and professional identity in the practice and work of consulting engineers. I demonstrate that the issue of the standardization of the consultants' engineering practices was closely related to the formation of their identity, arguing that in the system-building process of electrical installations, the consultants' expertise and authority were vulnerable to local interests, strategies and local expertise. Local particularities played an important role in the stabilizing of technical decisions, framing the consultants' engineering practice and shaping their identity: as identities and reputations became contested, competing definitions of 'standardization' were invoked by rival interest groups. Recent studies of engineering cultures in the long nineteenth century by Graeme Gooday, and by Ben Marsden and Crosbie Smith, have focused on the ways in which knowledge-making procedures and technological practices related to existing social networks, and the social and cultural capital of the actors who made the cognitive statements or developed the engineering activities.1 This historiographical approach prioritizes the study of the co-construction of expertise and technoscientific knowledge
and practice. In what follows, I will apply this approach in addressing British electrical engineering consultants not only as system builders, but as experts. The late nineteenth and early twentieth centuries saw the rise of new professional classes in British society. Harold Perkin has argued that the emergence of the class of professionals, and the rise of their social ideals, influenced and transformed British middle-class society of the period.2 The professional class based its social legitimation and authority on the merit of its members' specialist training and occupational expertise.3 Professionals opposed both the traditional ideals of the aristocracy – social success and a solid social order based upon birth, wealth, land ownership or patronage – and the ideals of businessmen, which prioritized corporate competition and individual success in the marketplace as the driving forces of social change. Further, these professional norms were antagonistic to working-class ideals which stressed manual labour and collectivism as means to effect social progress and change.4 Geoffrey Searle has argued that the professionals sought to establish a moral authority and a new ethos that would secure British society from the hazards of individualism and utilitarianism.5 Terry Gourvish has pointed out that the intention of the several professions was 'to raise status, financial rewards and occupational security by means of differentiation, regulation and an emphasis on the gentlemanly virtues of education and a middle-class morality'.6 In the industrial world, the professionalization of engineering was a process that started in the early decades of the nineteenth century and developed through the late Victorian and early Edwardian period.7 The new local or national engineering institutions8 wanted to secure for their members a gentlemanly status.9 Angus Buchanan has argued that nineteenth-century engineers 'were consciously and unconsciously seeking
to fulfill the criteria of Gentlemen Engineers'.10 Consulting electricians confronted a variety of challenges to their professional conduct and practice as interest in electric light and power schemes increased. As mediators of electrical innovations, the consultants had to balance the interests of users against those of manufacturers. The latter group demanded guarantees of fair competition from the consultants while simultaneously pressing them to standardize their practice, on occasion publicly blaming freelance engineers for the lack of standardization in the British electrical industry. Further pressure was imposed on the practising consultants by general social changes and emerging professional ideals. To consumers, and to the municipal authorities, such issues as morality and impartial judgement became increasingly significant in assessing consulting engineers' activities. Allegations against the moral integrity of the consultant were potentially harmful not only to his professional activity, but also, I argue, to his engineering practices. The trustworthiness of the technologies went hand in hand with the trustworthiness of the advisors. In attempting to fashion themselves as independent innovators, the
consultants had to defend their ad hoc approach to system-building processes.11 Standardization of engineering practice would, the consultants recognized, involve the abolition of their control over the innovation process in the electrical installations they supervised, correspondingly handing the manufacturers and contractors a more active role in technological choices. Accordingly, the consultants promoted an ethos of professional activity standardized through the regulation of the profession, rather than through standardized practice.12 The dispute between these two competing models of standardization was a central issue in the formation of the consultants' identity.

TENSIONS OVER CONTRACTS AND SPECIFICATIONS
One of the early cases in which consultants' professional activity became the focus of public debate, carried on in the specialist press, arose during the early electrification of Dublin. In late 1888, the municipality decided to proceed with the electrification of the city and, by June 1889, it had acquired licenses for supplying electricity for public and private lighting. The Dublin authorities called in Edward Manville (1862–1933) to act as their advisor.13 He and the local engineer, Spencer Harty, supervised the whole process and were responsible for the selection of machinery.14 Initially, only two contractors submitted designs for the substation system – the Westinghouse Company and that of the English electrical engineer, J. E. H. Gordon (1852–93).15 Gordon's proposal influenced Manville and Harty to recommend the single-phase alternating current system with substations, rather than banking the transformers in the customers' premises.16 Wishing, however, to award the contract to an Irish contractor instead, the local authorities suggested that other competitors should amend their designs and adopt the substation principle: on this understanding, the authorities ultimately selected an Irish firm, the Electrical Engineering Company of Ireland. The decision irritated Gordon, who, when he found out that another firm had been awarded the contract on a design principle he had promoted, threatened the authorities with legal action.17 Faced with this, the authorities forced their consulting engineer to reconsider his proposal and to agree to the establishment of a system with house-to-house transformers.
After securing Manville's agreement, the authorities awarded the contract to the Electrical Engineering Company of Ireland, also approving the company's proposal to collaborate with an English electrical engineering firm, Messrs Hammond and Co., who would act as sub-contractors.18 The consulting engineer's role in this controversy drew criticism from the technical journal, The Electrical Engineer. An editorial entitled 'Reform in Consulting Engineering' brought into question the integrity and ability of the consultant, criticizing Manville for leaking technical information to competing contractors and accusing him of being unable to guarantee fair competition among the contractors.19 The leader emphasized that:
No consulting engineer has a legal or moral right to seek fishing plans, to put contractors to great expense, and then use the plans. He has no more right to give information to one contractor of the information supplied by another contractor, than he has to appropriate personal property to his own use.20
Further, the editors questioned the whole system of tendering, arguing that the Institution of Electrical Engineers, as the relevant professional association, should take measures against malpractice, and that legal procedures should be pursued to secure the intellectual property of the contractor. The proposed 'reform' was to revise the role of the consultant: rather than merely drawing up specifications of the general characteristics of projects, consultants should be required to contribute more actively to the specification of technical details.21 In spite of its harsh tone, The Electrical Engineer's approach was not a personal attack on Manville, and did not reflect any systematic opposition to consulting engineering. (Two years later, an editorial entitled 'Are Consulting Electrical Engineers Required?' would affirm that 'the consulting engineer was and would become more of a necessity.')22 Rather, it advocated clear patterns and rules in business activities to guarantee fair competition. The other leading journals, The Telegraphic Journal and Electrical Review and The Electrician, drew broadly similar lessons from the controversy, though neither accepted the hard-line implication of malpractice on the part of the consultant. The Telegraphic Journal, while strongly critical of The Electrical Engineer's inflammatory tone, acknowledged its aim of achieving 'honest and straightforward business relations between the various branches of the electrical profession'; the editors advised consultants to be more cautious in their business transactions in order to avoid allegations against them for professional misconduct.23 The Electrician, likewise, acknowledged that changes in the tendering system could prevent tensions between the contractors and the consultants.24 The reaction to the Dublin controversy was strong enough to initiate a debate, played out in the pages of these journals, about the qualifications and the qualities of the consultants.
The editors of these journals possessed a thorough understanding of the heterogeneous and complex character of the emerging industry.25 Charles Biggs, for instance, became editor of The Electrical Engineer in 1888, following 9 years as chief editor of The Electrician: an electrical researcher and patent-holder in his own right, Biggs was engaged in the fashioning of the electrical engineer as 'practical man', developing a practical and commercial orientation for his journals, whilst at the same time championing the mathematical and scientific approach of Oliver Heaviside.26 After Biggs' resignation, The Electrician became the responsibility of a string of editors and assistants, including William Henry Snell, Alexander Pelham Trotter, W. G. Bond and Edward Tremlett Carter: all had engineering training and were prolific commentators, networkers and authors of papers and books.27 Trotter, the central figure in the editorial team from 1890 to 1895, maintained a consulting practice
Stathis Arapostathis
alongside his journalistic work.28 Chief editor of The Telegraphic Journal and Electrical Review was Harry Robert Kempe (1852–1935), an engineer whose electrical practices combined mathematical perspective, mechanical ingenuity and a strong commitment to the emerging culture of accurate measurements and tests.29 Accordingly, the journals addressed the emerging electrical industry in broad technical, legal, political and commercial terms. As such, they served not only to circulate information among engineers, but to carry the latest news from the industry to municipal councils, the directors and shareholders of the electrical engineering companies, and the lay public. Though editorials had a key role in shaping the direction of debate, all sides tended to be represented, as discussion and commentary, solicited from the readership in general, made up a large proportion of the content. Whereas The Electrician tended to impose restrained standards of debate, The Telegraphic Journal and Electrical Review (which became simply The Electrical Review from 1891) was a venue for enthusiastic controversy between participants.30 During the 1890s, this journal, being more focused on industrial, commercial and professional matters than its rival, provided more comprehensive coverage of issues relevant to the professional conduct and role of the consulting engineers, although The Electrician was equally ready to intervene and comment in cases in which consulting practice was considered defective. The public debate that ensued in the journals' pages ensured that questions of fair competition and the morality of business transactions played an ongoing role in the development of electrical engineering identities, particularly as concerned the authority and professional status of the consultant.

LOCALITY, MACHINERY AND EXPERTISE: ARTEFACTS WITH NATIONAL IDENTITY?
The localist tendency in the selection of equipment, which played a role in the Dublin incident, recurred more noticeably elsewhere towards the end of the decade. By February 1899, the Glasgow Corporation had decided to electrify its whole tramway network, passing control of the whole undertaking to the Tramway Committee.31 They approached as consultant the American engineer, Horace Field Parshall (1865–1932), who was an authority in electric traction and three-phase transmission systems.32 By the end of July 1899, the Corporation advertised for tenders on Parshall's specified plant for four low-speed (75 revolutions per minute) engines that would drive three-phase generators for the supply of high-voltage current at 6,500 volts.33 The electrical part of this installation raised no complaints. Parshall's authority in three-phase plant design and transmission systems was unquestioned in a British electrical industry perceived to be lagging in these areas: the only established three-phase system, serving Dublin's electric tramways, had been designed by Parshall himself.34 Parshall's proposals regarding appropriate choice of engines, by contrast, attracted extensive controversy: in an area like the Clyde Valley,
The Work of British Consulting Electrical Engineers, 1880–1914
where marine industry was thriving, local expertise played a considerable role in the Committee's decisions. On the Committee's suggestion, the specifications for the engines comprised only general directions, instructions and construction details that were far from being compulsory for the contractors.35 Despite the open character of the specification, its guidelines were characteristic of the practice of Parshall and of the American experience that he was importing. In the United States, engineers preferred to reduce the revolutions and to increase the weight of the engines, and so used low-speed engines with heavy flywheels.36 This use of low-speed engines was likewise appropriated gradually in Continental Europe; British engineers, however, preferred to work with high-speed engines of low weight. The expertise, the practical experience and the know-how acquired from the flourishing marine industry were the main factors informing and preserving the idiosyncratic British practice. Marine conditions, where space was limited and where it was advantageous to have smaller, lighter and yet powerful engines, had made high-speed engines the dominant and established tradition in the British mechanical engineering manufacturing sector.37 Unsurprisingly, then, the selection of the machinery, the consultant's proposals and specification raised a `British vs. American' public dispute in Glasgow. Several competing contractors, as well as local councillors and the contemporary technical journals, opposed Parshall's proposals. Technology, expertise and politics comprised a whole. A negative comment on the technological part of the installation was also an attack on Parshall's authority and upon the Committee's policies. Several British and American companies submitted bids for the contract on the engines. The principal competitors were D. Stewart and Co. of Glasgow, Hick Hargreaves and Co. Ltd of Bolton, and the American companies E. P. Allis and Co., and Harvey and Williams.
The lowest tenders were those of the Glasgow and Bolton firms; Parshall, however, recommended the Milwaukee-based Allis – a recommendation that the Tramway Committee endorsed by a slender majority of seven votes to six. Parshall stressed that although he would have preferred to recommend the product of a British firm, the British manufacturers did not fulfil his requirements or his preferred design principles. He argued, further, that, in the event of a decision against his advice, he would accept no responsibility for any future problems.38 In a letter to the Manchester Courier of 16 August 1899, one of the unsuccessful British bidders, Hick, Hargreaves and Co., argued that: . . . we, and other British engine builders, may fairly complain of having been specially invited to incur the serious expense involved in the preparation of designs and estimates if it was a foregone conclusion that only the engines of one, and that an American, firm would satisfy the requirements of the committee and their engineer.39 The specification, they insisted, was excessively vague, while parts of the
plant specified were alien to established British practice; British machinery, moreover, was more durable and manufactured from better materials.40 The Committee's behaviour was presented as `un-businesslike' and immoral, as, by extension, was that of their consultant.41 In the proceedings of the City Council on 17 August 1899, the decision of the Tramways Committee was opposed by a large majority of councillors. The opposition was led by Bailie James W. Thomson, a professional engineer who had been with the J. and J. Thomson engineering company in Glasgow.42 Thomson asked for a delay to the decision-making process, with Parshall required to submit new specifications: his aim, he stressed, was `that Mr. Parshall should be asked to make a specification on which every firm would be able to tender on the same basis' (a comment noted in the record as being greeted by `Applause').43 Interviewed by the technical journal The Engineer, Thomson was intensely critical of Parshall's practices and straightforwardly questioned his authority. As he stated: The drawings of the American engine indicated to him distinctly that it was weak in many parts, and far too heavy where weight was not wanted. The bed-plate, for example, was much too light, as were the columns supporting the cylinders; while the crank pins and shaft generally were ridiculously heavy, and they varied in diameter, the smaller diameter pin being on its low-pressure crank.44 Thomson raised the possibility of employing a mechanical engineer to assist Parshall in drawing the new specification, in order to ensure both technical validity and fair competition.45 In so doing, Thomson questioned not only Parshall's engineering competence, but his impartiality. The Electrical Review and The Electrician, alongside other technical journals, gave extensive coverage to the Glasgow installation. Opinions were divided.
The Electrical Review and another leading periodical, The Engineer, condemned Parshall's technical advice, his proposals in favour of the American manufacturers and the procedure followed by the Tramways Committee. An editorial in The Electrical Review proposed a competing expert judgement of its own: We say nothing of the electrical plant at present, as the case may be less strong in this respect, but in the matter of large steam engines we do say that no American firm has yet produced better steam engines than are turned out in our English shops. As regards finish, the English engine is far better. In material generally, the English engine is fully abreast of its rival, while in the wearing quality of the cylinders, we possess an undoubted and considerable superiority. As regards finish, however, it is probable that neither in England nor America is it equal to that of some of the work done in Switzerland.46 The employment of an American expert, said The Engineer, was, above all else, unnecessary: `The first thing that strikes us as remarkable is that Glasgow could not find an efficient consulting or advising engineer in
Scotland, or failing that in England.'47 Again, it concluded by questioning Parshall's neutrality, impartiality and integrity. Either his honesty or his accuracy was at fault: `Mr. Parshall states that Hick, Hargreaves, and Co., Limited, guarantee a consumption of 15lb of steam per hour against 14lb guaranteed by Messrs. E.P. Allis and Co. But we have excellent reasons for saying that Hick, Hargreaves, and Co., Limited, guaranteed 13.5lb, or 1.5lb less than stated by Mr. Parshall.'48 The Electrician, in common with a further periodical, Engineering, adopted a different argument: although, with an American consultant, `it might be argued that the engineer is naturally predisposed to follow American rather than English practice', this did not translate into professional misconduct or engineering malpractice.49 Both journals concentrated their criticism on the procedures followed by the municipality, arguing that Glasgow's authorities had not secured the appropriate conditions for fair competition. In The Electrician's view: If it had been their intention to consider tenders from Glasgow firms only, it should have been clearly advertised beforehand; and since the firms in question might not have had wide experience in his work, the consulting engineer might have drawn up his specification on an educational basis. But to invite tenders on a broad basis, to obtain and publish detailed particulars and prices from firms who have already bought their experience, and then to refuse to accept any tenders, until the protégés of the City fathers have had a better insight into the methods of their trade competitors, is not acting up to a high standard of morality.50 Ultimately, after the decision of the City Council, as well as under public pressure, the Tramways Committee reconsidered, finally splitting the contract between Parshall's favoured contractor, Allis, and a Bolton company, John Musgrave and Sons.
Thus, they appeased the opposition led by Thomson whilst also securing the rapid delivery of the first engines – something only the American firm could guarantee51 – with a view to having them in place before the Electrical Exhibition planned to take place in Glasgow in 1901 – a consideration for the authorities from the start.52 This revised resolution was ultimately supported by the majority of the council.53 In the aftermath of the dispute, The Electrical Review praised the Committee's final decision to secure the services of two contractors, as it would result in a profitable combination of engineering practices: We believe that in working out the design it was desired to take the strong points of the English engine, and the strong points of the American engine, and embody them in a single machine. Therefore in preventing English makers from adhering to standard practice they were placed on the same footing as American makers. The final result will be that the bearing pressures will be about three-quarters of the considered common American practice and about a half of that common to ordinary English practice.54
History of Technology, Volume Twenty-eight, 2008
Finally, the power station supplying the Glasgow tramways network55 was built along the lines established during this episode, in the midst of public disputes, patriotic overtones, local politics and social pressures from several directions, both on the microsocial municipal stage and in the broader engineering world comprising the technical journals and the body of engineer-experts. The case of Glasgow shows that local interests and local expertise defined the character of the technical choices. Instead of the uniformity and standardization of the engineering plant, heterogeneity and bricolage56 were the outcome.

`MR. CONSULTING ENGINEER, WILL YOU PLEASE LET ME STANDARDISE'57
The practices of electrical engineering consultants were defined variously by the educational and professional background that informed their tendencies towards specific systems and technologies, and by the local particularities of the projects with which they became involved. The latter could include municipal politics and financial considerations, the material and demographic characteristics of the relevant urban space, and the power of pre-established local agency and expertise of various kinds. At the same time as adapting to these local concerns, the professional consultant had to prove himself as an `individual professing to be an unbiased and competent expert',58 an `impartial' system builder.59 The consultant's necessary reliance on local and contingent socio-technical networks for legitimation as a trustworthy innovator produced a variety of co-existent yet competing systems, often rather small in scale.60 Having played a prominent role in the development of the electric light, power and traction industries in Britain, electrical engineering consultants often found themselves blamed by manufacturers as one of the main causes of the lack of standardization in these sectors when the `standardization' of engineering practice in general started to be a matter of concern.61 The debate between the manufacturers and the consultants was not only a dispute about the meaning of `standardization', but also a conflict of vested interests that influenced the shaping of the consultants' professional identity.62 `Standardization' as an issue in electrical practice was brought to the fore in early 1898, with the setting up of a joint committee to agree the technical details of specifications by the Municipal Electrical Association, representing municipal electrical engineers, and the Electrical Engineering Plant Manufacturers' Association.
The initiative was inspired by the contemporary perception of the British electrical industry's backwardness in comparison to the Continent and the United States; at the same time, it was a move by the associations to secure the professional interests of their members. For the manufacturers, consensus on standards seemed to offer the promise of improved production and reduced costs; the risks for the buyer of machinery would be reduced, as would the legal and technical responsibilities of the supervising engineer in municipal works. However, the committee did not go further than drawing up some preliminary
guidelines for the specifications. The final outcome was merely an attempt to regulate professional relations between manufacturers or contractors and the buyer's (municipal or consulting) engineers.63 The outcome of the initiative disappointed the editors of The Electrician. They would have preferred measures toward standardization, not of legalistic clauses in technical specifications, but of the technical components of the electric power networks themselves. The journal supported a reduction in the powers and rights of consulting engineers, which it saw as a barrier to this material standardization. Consulting, the argument ran, was a competitive profession, and fierce professional antagonism resulted in the demoralization of professional activity and a degradation in quality of the electrical machinery and of the electrical industry in general. The increased educational institutionalization of the discipline, along with the fact that people from various educational and professional backgrounds could establish a consulting practice without qualifications, made the profession of electrical engineering almost chaotically competitive: [Y]outhful and inexperienced men have sought to wrest from the leaders of the profession the patronage of those about to employ contractors, partly by undertaking to perform their duties for ridiculously low remuneration and partly by earning a reputation for drafting sharper specifications and securing lower tenders than the more experienced and reputable engineers have the conscience to do.64 While it did not tar the whole discipline with the same brush, The Electrician's editorial as good as argued that the majority of individuals then active as electrical consultants lacked the moral authority to fashion a credible role for the engineer. The publication thus dimmed the public image of the professional consultant and deconstructed his authority to act as mediator and to contribute to the innovation process.
Discussions about standardization (or the lack of it) in Britain were thus accompanied from the outset by growing pressure, from the technical journals, manufacturers and contractors, for the reform of electrical consultancy practice. The Electrician supported a moderate programme of standardization that would not impede innovation and progress, while noting that the implementation of such a programme was difficult because of the competing interests and authorities that struggled for control of the installations.65 Discussion became more heated after a paper entitled `The Standardisation of Electrical Engineering Plant'66 was delivered by R. Percy Sellon (1863–1928)67 to the Institution of Electrical Engineers in February 1900. Sellon, a prominent figure in the British manufacturing world, with extensive experience in the Brush Electric Engineering Co. and later in the British Electric Traction Co., provided the term `standardization' with a specific meaning. In his words, standardization was: . . . the general acceptance, to a far greater extent than at present obtains in this country, of certain standards of output, quality,
efficiency, or other characteristics of electrical engineering plant, to meet ordinary requirements of usage for light, traction, and power.68 Sellon distinguished four main factors as contributing substantially to the success of a manufacturing undertaking. These were, first, the reduction of labour costs; secondly, the reduction of production costs, through a reduction of costs in raw materials; thirdly, the governance of the corporate environment, particularly the management of the production process; and, finally, the systematization and replication of production. Sellon distilled these concerns to a principle of the `repetition of the manufacture'.69 He believed that the importance of manufacture was overlooked or even neglected by manufacturers, managers, engineers and the users. His aim was to initiate a serious discussion on the meaning, the scope and the modes of standardization, and, if possible, to establish activities that would facilitate the quick and efficient implementation of standards in Britain. Sellon argued that, whereas the early years of a given industry were characterized by the experimental development of methodologies and technologies, standardization was a necessary characteristic of more mature industrial sectors: the British electrical industry, he said, had now made the transition to maturity.70 The advantages of a standardized engineering production would be capitalized on by manufacturers and users alike. Thoroughly standardized processes would secure `repetition' in manufacturing activities, leading to reductions in cost and the uniformity of products. 
On the users' side, standardization would provide advantages such as: `(a) Less capital outlay, (b) Prompt delivery, (c) Immunity from the risks attendant on novel designs, (d) Full manufacturer's guarantees.'71 Appealing to patriotic sentiment, Sellon asserted that the lack of standardization was the reason – or at least a prominent reason – for Britain's lack of competitiveness in comparison to the American or German electrical industries: `. . . the flood of American electrical engineering plant into Great Britain is swelling in spite of the fact that we have the advantages of lower wages, (on the whole) cheaper materials, and a longer experience of factory organisation.'72 The causes of Britain's lack of standardization, said Sellon, were twofold. On the one hand, he blamed manufacturers who pursued proprietary machinery and processes, aiming to bind users to their particular technologies and thus to acquire a larger share from the market.73 He argued that this was a short-sighted corporate strategy focused only on immediate profits: the establishment of standards would bring longer-term benefits. On the other hand, and at far greater length in his paper, Sellon criticized the practices of the buyers' engineers – that is, the corporate chief engineers, municipal electrical engineers and, above all, the consulting engineers. The organizational pattern followed in British electrical engineering works, and particularly in the public works, gave the `buyer's engineer' the authority to control the innovation process in the electrical system and to impose upon the manufacturers his own ideas and innovations, with no need to consider standardization of
either materials or practice.74 Sellon acknowledged the engineer's authority to specify and define the preferred outcome, but denied the engineer's right to get involved in the design and construction details of the machinery. These, he insisted, must be affirmed as under the manufacturers' control: That the practical operation of this system in the past has been primarily responsible for the relative absence of standardisation in this country, with its attendant evils alike to user and producer, appears certain. For the user's engineer has frequently been out of touch with manufacturers; and the temptation to strike out on new and `showy' lines, suggestive of individuality, ability or foresight, has, in the very nature of things, been great. The result has been the issue of specifications too often calling for wanton divergences from previous practice or existing manufacturers' standards.75 Moreover, Sellon stipulated, standardization should be applied not to the `means' or `constructional details' of the undertaking or the machinery, but rather to the `ends' or the `performance' required.76 Under this conception, the necessary role of the mediating engineer was reduced to a minimum. Throughout, Sellon promoted the causes of the manufacturing sector and straightforwardly attacked the practices and the roles of professional groups such as the municipal engineers and the consultants. The latter were Sellon's particular target, on account of the freelance and ad hoc character of their professional engagement. Examination of the published responses to Sellon's paper shows that his considerations and ideas were applauded by the majority of the engineering world. There was a consensus among practising electrical engineers, manufacturers, consultants and municipal engineers on the importance of standardization for the future of the electrical industry. 
Almost all parties concurred that the Institution of Electrical Engineers had to display some initiative, or at least promote discussion on the issue. At the same time, the various meanings of `standardization' and the ways of materially producing `standardization' were subject to fierce debate. Contributions were published from prominent figures in electrical engineering manufacturing, such as Rookes Evelyn Bell Crompton (1845–1940)77 and Sebastian Ziani de Ferranti (1864–1930).78 Crompton accepted, with Sellon, that the introduction of standards in the early stages of the electrical industry would have been harmful for its development. His critique was more moderate than Sellon's, as he argued that consultants were not the only party responsible for the lack of standardization in the electrical industry: he attributed their heterogeneous practice mainly to the variety of alternating current systems available in the electric power supply sector.79 Ferranti expressed admiration for Sellon's paper: had it been presented a decade earlier, he said, the British electrical industry would not have acquired its unwanted multiplicity of machinery, systems and practices.80 Like Sellon, Ferranti took a strong line against the broad definition of the consulting engineer's role:
. . . without desiring to be in the least harsh or unpleasant, I should like to suggest very respectfully to this Institution that the true function of the consulting engineer is fully to appreciate the scheme that he is carrying out, and, having appreciated that scheme, clearly to lay down the ends which he desires to accomplish, and to leave to manufacturers and people tendering the means which they think fit in best with his standards, and which they think best meet the conditions which that engineer desires to accomplish. Then his great work comes in in sitting [sic] in judgement on behalf of his client upon the various offers made under his requirements, and telling his clients not which is the cheapest, not which is right in this detail or that, but telling his clients what it will pay them best to buy.81 Faced with this concerted threat to their status, the consultants reacted by trying to deconstruct Sellon's representation of their practice as the main cause of the drawbacks of the British electrical industry. Stephen Sellon, a contemporary consulting engineer who specialized in electric traction (and was no known relation to Percy Sellon), commented ironically: The tone of the paper throughout seems to be like a voice crying in the wilderness, `Mr. Consulting-Engineer, will you please let me standardise – that is, if you have come to stop.' Come to stop, indeed! I hope that the author . . . has taken the trouble to look round the portraits which he will see in this theatre. If he has, can he then suggest that the consulting engineer is like some mushroom, which has just cropped up, some sort of noxious article which has suddenly been introduced into the electrical world, and that he is desirous that it should be immediately obliterated?
At any rate we consulting engineers have some little more regard for the men who have, to a considerable extent, aided and abetted in improving the trade and commerce of this country to the extent they have.82 Another vocal defender was Robert Hammond (1850–1915) of the aforementioned Hammond and Co., a leading consulting engineer who had entered the electric industry as a contractor, but, in 1893, decided to concentrate exclusively on his consulting practice.83 Standardization, said Hammond, should be supported in electrical engineering practice as far as this would result in the reduction of production costs and the improvement of the quality of the machinery, which were the buyers' main concerns. For Hammond, `standardization' meant a regulation and stabilization of the professional transaction between the buyers' engineer and the manufacturer or contractor: what mattered was the standardization of the formalities of the contracts, the establishment of a standard format of drawing up engineering specifications. Hammond's notion of `standardization' was far from being specified at the level of practices and design processes. His framework retained the consultant's duty and privilege to control the undertakings and make decisions on the basis of the local particularities, and according to the financial interests of his customers. Indeed, Hammond particularly worked
to establish the consultant's role as safeguarding the buyer's interests, serving impartially in a manner that the manufacturers, by definition, could not: I would say of the manufacturers that it is natural they should consider their children, their own children, as prize babies, and the best babies ever born. One little disadvantage that the manufacturer is under is that he sees so much of his own baby, and therefore he perhaps unduly prizes it. The one advantage of the consulting engineer is that he sees the babies of many, and he is able to compare them, and is able finally to decide upon what a model baby ought to be.84 Hammond was supported by General Charles Webber (1838–1904), a leading electrical engineer who had served as president of the Institution and was one of the founders of the Society of Telegraph Engineers, the precursor of the Institution.85 Webber had vast experience in public works both from his army career, which lasted more than 30 years, and from the consulting practice he had established after retirement. He argued that the standardization in the American electrical industry had resulted from the US market's domination by a few large companies such as Westinghouse and General Electric. By contrast, in Britain, the small and local manufacturer informed the structure of the market and settled the framework of technological change. The US system, said Webber, could guarantee standardization and prosperity for the large companies, but not the interests of the buyers and consumers: `The capitalist who has provided the money by which standardisation is possible does not do it for the user's benefit.'86 Conversely, the British system, by tending to deconstruct monopolies and promoting competition, so reducing prices, provided more guarantees for the consumer's interests. In this system, the consulting engineer had an important and crucial role to play as a guard of fair competition and the users' rights. Webber stressed that: . . .
although his choice [is] affected by price, [the consultant] naturally, for his own reputation's sake, specifies both quality of manufacture, endurance and accessories, so that his client may have an article which will be both efficient and durable. Herein lies the advantage of many manufacturers with many standards. The ability of the engineer, with such a variety of choice, shows itself in the knowledge of, and experience with, these numerous standards, and not in his cleverness in framing an ideal specification which fits into none of these standards.87 With the multiplicity of manufacturers' products available, consultants guaranteed the smooth functioning of a free trade; their removal would most likely result in the elevation of a handful of figures, semi-educated in industrial matters, who could readily be co-opted by monopolists, to the detriment of society and technical development. Echoing Hammond, Webber promoted the regulation of professional relations between the
Stathis Arapostathis
67
contractors and the buyer's engineer. In arguing for the standardization of contracts, he suggested the technical and legal documents in use by the Admiralty as a potential model.88 Both sides in the struggle for the control of electrical installations, then, sought to shape the concept of `standardization' to their own ends. The consultants, focusing on imposing standard forms on the character of commercial transactions, was in part formulated to deflect the alternative conception promoted by the manufacturers, with its focus on the engineering and innovation questions consultants still held to be their own. `WHAT IS A CONSULTING ENGINEER?'
In my accounts of the Dublin and Glasgow controversies, I have shown how extensively the advice and authority of `experts' were mediated by local politics, interests and social networks, decisions being subject to intervention from municipal councils, the committees with contracting responsibility, and the contemporary technical press. One of the core issues in all the contemporary disputes was the appropriate qualifications that a professional consultant should fulfil. `Morality', `neutrality' and `impartiality' were established as desirable qualities for the professional freelance engineer from the early phase of the supply industry. There was a considerable interest in the public sphere in the definition of the ideal professional consultant and the appropriate standardization of professional practice. In January 1893, an editorial in The Electrical Review posed a crucial question: `What is a consulting engineer?' The piece was motivated by a view of the common practice of the consulting engineer as characterized by partiality and immorality. British consultants involved in the infant electricity supply industry, the argument ran, maintained close relations with contractors and manufacturers. The majority of them drew up their specifications so as to ensure that only their favoured contracting firms would be able to tender ± a dissimulation that was apparently blatantly obvious: Is anyone prepared to deny that the foregoing is typical of the method of so-called engineers, who of recent years have become so plentiful as to bring disgrace and dishonor on what should be a noble and honorable calling? Is anyone prepared to deny that the words `consulting engineer' in the left hand corner of business letter paper do not call up a cynical smile on the face of the readers if they `know'? 
Are these two words not suggestive of hypocrisy and dissimulation in the vast majority of existing cases?89 The journal then tried to define what the function and practice of the consultant should ideally be. Consulting engineers, it said, should thoroughly grasp their clients' requirements, advise upon the best way of fulfilling these requirements, and put these requirements in a plain, simple and precise technical language in order to facilitate the contractors' History of Technology, Volume Twenty-eight, 2008
68
The Work of British Consulting Electrical Engineers, 1880±1914
activities and practices. The editors stressed that `it is, above everything a sine quaà non (we may be forgiven the pleonasm), that a consulting engineer's judgment and opinion should be unbiassed [sic] by even personal considerations'.90 This definition placed the client and the public at the centre of discussion. Any technological decision should be based on the appropriateness and the merit of specific design solutions or specific machinery, rather than the advisor's links with any manufacturer or contractor. The consultant's credibility should rest, in particular, on the long standing of his honest and disinterested dealing: even those recommendations that, inevitably in any consultant's career, appeared mistaken in hindsight must be honest mistakes, demonstrably free of favouritism. The editors particularly criticized engineers who combined contracting and consulting activities: `. . . it is difficult to conceive how an impartial opinion can be expected from a man who derives some pecuniary benefit by the sale, through his influence or advice, of any commodity.' The journal stressed the need for the regulation of the consulting profession, in terms of a necessary `winnowing of the wheat from the chaff'.91 The definition of who was the legitimate engineer for consulting practice remained an open question during the 1890s and the early years of the twentieth century. In 1903, the leading consultant Alexander Kennedy (1847±1928)92 was called upon to lecture the novice engineers of the City and Guilds Central Technical College, London, on the subject of `Consulting Engineers and their Work'.93 Kennedy drew a firm demarcation between manufacturing interests and consulting practice. At the core of his arguments was the need to regulate professional activity and to impose clear restrictions that should frame the practice of technologists in order to guarantee their impartiality and integrity. 
In particular, he stressed: In the first place it is a very dangerous thing for you to own any patents in your own line, no matter how ingenious they are, if you are going to take to consulting work. You cannot put a patent of your own in your specifications, and you cannot use it at all without disagreeable things being afterwards said.94 According to Kennedy, moral consultants should not be `the financial owners, or the beneficial owners, of such patents'.95 Kennedy noted the particular importance of the formal specifications drawn up by consultants: technical and legal documentation regulated relations between the contractor, the manufacturer and the municipal or corporate authorities for which freelance engineers worked. Kennedy stressed that a good specification should be `perfectly clear and perfectly complete, so that any person tendering to it may know exactly what he is tendering to and what he is expected to do'.96 Clarity and completeness would secure the clients' interests, would avoid the potential cost of further work and should serve to document the opportunity for all contractors to compete on a fair basis. Furthermore, he advised, the specification should provide room for History of Technology, Volume Twenty-eight, 2008
Stathis Arapostathis
69
innovation by the contractors without loss of the control of the installation. A system controlled by an impartial expert guaranteed not only the clients' interests, but the process of continuing technical improvement. Kennedy's strictures, reiterated 3 years later in his presidential address to the Institution of Civil Engineers, reflected the increasing concern within the engineering community to regulate and define the role and the practice of consultants.97 Above all, in the expanding electric light, power and traction industries, the definition of the legitimate consultant was an issue of widespread current debate reflecting broader social ideals on all sides. In constructing the appropriate professional characteristics, themes of morality and disinterest remained crucial. In the early twentieth century, there was an increasing demand for the standardization of the professional practice of consulting engineers that resulted in the establishment of an independent Association of Consulting Engineers. The recent work of Hugh Woodrow characterizes the establishment of the Association as an evolutionary process that lasted for 5 years from 1908 to 1913.98 The Association was fundamentally the initiative of electrical engineers. New developments in electro-technology, and increasing interest from municipal authorities in electric light, power and traction schemes, made electrical engineering consulting particularly attractive to practising engineers in other specialisms who were prepared to transfer; in this climate, the flexibility of consulting practice and the extent of competition between vested interests led to widespread perceptions of malpractice that, established members of the profession believed, needed to be addressed at the institutional level by the profession as a whole. 
All branches of engineering were represented at the Association's inaugural meeting on 13 July 1910.99 The establishment of the Association reflected not only the interests and needs of its members, but also the ideals promoted by the contemporary technical press and leading members of the profession. Its attempts at reform, moreover, were embedded in the general social climate of rising professional ideals in Britain at this time. The first set of governing rules, introduced in early 1912, were characteristic.100 The most important stipulations, and those that generated the most controversy, were that a consultant member must have no partnerships with manufacturing concerns; must not advertise his services; and must not receive `any royalty on patented articles or processes used on work done for his clients without authority from them in writing'.101 The Association, then, was principally concerned with drawing a line between professional and engineering activities on the one side, and commercial and `trade' activities on the other, thus establishing the consulting engineers as unbiased experts. Since the Association was not a rival or competing professional body to the existing institutions, members were required to hold memberships both of the Institution of Civil Engineers and of the professional bodies representing their particular branches of specialization.102 The Association's attempt to insulate the engineering practice from the `dirty' commercial side of the engineering History of Technology, Volume Twenty-eight, 2008
70
The Work of British Consulting Electrical Engineers, 1880±1914
profession sat easily with the gentlemanly ethos that the engineers of the Victorian and Edwardian period were so keen to embody.103 The Electrical Review, however, expressed reservations about the Association's ability to regulate the profession, treating its foundation as tending towards establishment of a monopoly in the consulting business that would be harmful for the industry in general.104 Furthermore, the editors argued, the rules would discourage consultants from engaging in inventive and patenting activities that were at the core of the engineering profession: `. . . such a restriction would have excluded men like the late Lord Kelvin and Prof. Ayrton, and would exclude such living celebrities as Edison, Tesla, Marconi, Sir Oliver Lodge, and quite a number of our most eminent professors of electrical engineering.'105 CONCLUSION
In this paper, I have unravelled the process of institutionalization and identity formation of professional consulting engineers. In the late 1880s, and mainly in the 1890s, increasing interest from municipal authorities in establishing electrical systems provided electrical consultants with a thriving job market; yet, at the same time, competition became stiffer, as the flexibility of the consulting practice allowed mechanical and other engineers to shift their activities into the electrical industry, while the institutionalization of electrical engineering education provided the new industry with young electricians attracted by the prestige of a consulting career.106 In decisions regarding electricity supply systems for cities and regions, consultants' expertise and authority proved to be malleable by local networks of expertise, local councillors and individual contractors: the consultant's mediating role was always under negotiation, and the trustworthiness of the technology was inseparable from that of the consultant himself. In this climate, the quest to define the `moral' consultant was part of an emerging ideal characteristic of British professional society in this period, but it was also a rhetorical construct that was used whenever technical choices were debated in order either to construct or to deconstruct the expert's authority and legitimacy, as happened in Glasgow. Standardization, under the competing definitions offered by the advocates of the consultants and of the manufacturers, played a key role in debates over the formation of the consultant's identity, in which control of the innovation process was a major concern. Whereas manufacturers strove to reduce the consultants' control over systems development, the consultants turned to regulation of the professional activity to fashion and display themselves as morally sound mediators. 
Thus, the Association of Consulting Engineers attempted to draw a line between contracting and consulting, setting the rules that would secure the `morality', `impartiality' and `independence' of the practising professionals (and also, of course, their rights, powers and privileges). Though clearly acting in their own interests, their action was not purely opportunistic: the consultants were part and parcel of an industrial culture in which creativity, manifested History of Technology, Volume Twenty-eight, 2008
Stathis Arapostathis
71
through the bricolage of a variety of artifacts and systems, was considered important for the social and cultural capital of engineers in general. In 1892, one of the leading consulting engineers in the electrical industry, John Hopkinson (1849±98),107 argued in his presidential address to the Junior Engineering Society: `The Engineer who can only do that which he has seen done before may be a practical man, but he will always belong to a lower grade of his profession.'108 The standardization of the electrical plant conflicted with this cultural trait because, as The Electrician noted, `perfect standardisation of central stations would inevitably tend to a more or less perfect standardisation of station engineers' salaries ± at a low figure ± and to lessen consulting work'.109 Standardization was a sociotechnical problem that caused tensions and conflicts in the engineering community, with substantial repercussions for the identity of the practising consulting engineers. ACKNOWLEDGEMENTS
I am grateful to Prof. Robert Fox, Prof. Graeme Gooday and Dr Aristotle Tympas for their comments on earlier drafts of the paper. I presented a short version of this work in 2006 at the annual ICOHTEC Conference in Leicester, and would like to thank the audience of that event for their comments, especially Dr Mats Fridlund. Also, I would like to thank the two anonymous referees for their comments as well as Dr James Sumner for his invaluable editorial assistance, instructions and suggestions. Research for this paper was initiated as part of my D.Phil. thesis in Oxford and continued as part of my research in the `Owning and Disowning Invention' project, which is based at the University of Leeds and funded by the AHRC. Notes and References
1. For recent historical works on trustworthiness, expertise and authority in technological systems, see G. Gooday, The Morals of Measurement: Accuracy, Irony, and Trust in Late Victorian Electrical Practice (Cambridge, 2004); B. Marsden and C. Smith, Engineering Empires: A Cultural History of Technology in Nineteenth-Century Britain (London, 2005); C. Smith, I. Higginson and P. Wolstenhome, ```Imitations of God's Own Works'': Making Trustworthy the Ocean Steamship', History of Science, 2003, 41: 379±426. 2. H. Perkin, The Rise of Professional Society: England since 1880, revised edn (London, 2002). 3. Perkin, op. cit. (2), 117. 4. Perkin, op. cit. (2), xxii±xxiii, 3±4, 6, 8, 117, 121±3. 5. G. R. Searle, Morality and the Market in Victorian Britain (Oxford, 1998), 130. 6. T. R. Gourvish, `The Rise of the Professions', in T. R. Gourvish and A. O'Day (eds), Later Victorian Britain, 1867±1900 (London, 1988), 13±39, on 32. 7. Gourvish, op. cit. (6), 29±31; R. A. Buchanan, `Gentlemen Engineers: The Making of a Profession', Victorian Studies, 1983, 26(4): 416±17. 8. R. A. Buchanan, The Engineers: A History of the Engineering Profession in Britain, 1750± 1914 (London, 1989), 88±103, 106±22, 125±42, 233±5 (appendix). 9. Gourvish, op. cit. (6), 30, 32; W. J. Reader, A History of the Institution of Electrical Engineers (London, 1987), 71±4; W. J. Reader, ```The Engineer Must Be a Scientific Man'': The Origins of the Society of Telegraph Engineers', History of Technology, 1991, 13: 112±18, esp. 112; Marsden and Smith, op. cit. (1), 254±8.
History of Technology, Volume Twenty-eight, 2008
72
The Work of British Consulting Electrical Engineers, 1880±1914
10. Buchanan, op. cit. (7), 410. 11. Ad hoc professional practices influenced also the character of management consulting, a professional activity that emerged in the twentieth century, see C. D. McKenna, The World's Newest Profession: Management Consulting in the Twentieth Century (Cambridge, 2006); C. D. McKenna, `Agents of Adhocracy: Management Consulting and the Reorganization of the Executive Branch, 1947±1949', Business and Economic History, 1996, 25 (1):101±111. 12. A comparable case is that of plumbers' (less successful) attempts to secure professional status as sanitary engineers in the United States around 1910: A. Slaton and J. Abbate, `The Hidden Lives of Standards: Technical Prescriptions and the Transformation of Work in America', in M. T. Allen and G. Hecht (eds), Technologies of Power (Cambridge, MA, 2001), 95±143, discussed on p.113. 13. For Manville, see `Obituary Notices: Sir Edward Manville', Journal of the Institution of Electrical Engineers, 1933, 72±73: 611. 14. `Analysis of Various Tenders Received for the Dublin Corporation Electricity Supply, 1890', Dublin Corporation Reports &c., 1890, 3: 550±74. 15. Dublin Corporation Reports &c., op. cit. (14), 566, 570. For Gordon, see `Obituary: James Edward Henry Gordon', The Electrician, 1892±93, 30: 417±18. 16. Dublin Corporation Reports &c., op. cit. (14), 573±4. 17. The Electrical Engineer (1891), 7: 189. 18. `Report of the Electric Light Committee', Dublin Corporation Reports & c., 1891, 3: 535± 6; The Telegraphic Journal and Electrical Review, 1891, 28: 814; The Electrician (1891), 26: 338±9. For Hammond and Co., see B. Bowers, `Hammond, Robert (1850±1915)', in David J. Jeremy (ed.), Dictionary of Business Biography, Volume 3 (London, 1985), 21±3; `Obituary Notices: Robert Hammond', Journal of the Institution of Electrical Engineers, 1915±1916, 54: 679±82. 19. The Electrical Engineer, op. cit. (17), 188. 20. The Electrical Engineer, op. cit. (17), 190. 21. The Electrical Engineer, op. cit. 
(17), 188. 22. The Electrical Engineer, 1893, 11: 408±9, esp. 408. 23. The Telegraphic Journal and Electrical Review, 1891, 28: 259±60, on 259. 24. The Electrician, op. cit. (18), 472. 25. For the role and the editorial strategies of The Electrician and The Telegraphic Journal and Electrical Review (later The Electrical Review) in the 1880s and 1890s, see P. Strange, `Two Electrical Periodicals: The Electrician and The Electrical Review, 1880±1890', IEE Proceedings, 132(8): 574±81. 26. P. J. Nahin, Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age (Baltimore, 2002), 103±8; B. J. Hunt, The Maxwellians (Ithaca, 1991), 69±72, 139±43; R. Sharp, `Biggs, Charles Henry Walker (1845±1923)', Oxford Dictionary of National Biography, 2004. 27. Strange, op. cit. (25), 575. 28. A. P. Trotter, Early Days of the Electrical Industry, and Other Reminiscences of Alexander P. Trotter, ed. F. W. Hewitt (London, 1948), 85±99, 104±9. [IET Archives, UK 0108 SC MSS066, Box 1]. See also B. Bowers, `Alexander Trotter: A Well Connected Electrician', IEE Proceedings, 1998, 136: 337±40. 29. `Obituary Notices: Harry Robert Kempe', Journal of the Institution of Electrical Engineers, 1935, 77: 893±4. 30. Strange, op. cit. (25), 578. 31. The Electrical Review, 1899, 44: 208. 32. For Parshall, see `Obituary: Horace Field Parshall', The Engineer, 1932, 154: 637; `The Late Dr. H.F. Parshall', Engineering, 1932, 134(2): 744±5. 33. Engineering, 1899, 68(I): 238; The Engineer, 1899, 88: 199; The Electrical Review, 1899, 45: 180. 34. H. F. Parshall, `Dublin Electric Tramway', Minutes of the Proceedings of the Institution of Civil Engineers, 1898, 133(3): 174±92. 35. Engineering, op. cit. (33), 238. 36. Engineering, op. cit. (33), 239. 37. P. Dowson, `The Supply of Electric Energy for Traction Purposes', The Electrical Review, op. cit. (33), 827±8, esp. 828. 38. Engineering, op. cit. (33), 239.
History of Technology, Volume Twenty-eight, 2008
Stathis Arapostathis
73
39. The Electrical Review, 1899, 45: 331. The letter was published in other contemporary technical journals: see `The Glasgow Tramway Engine Contract', The Electrical Review, op. cit. (32), 331±2; `Engines for the Glasgow Tramways Power Station', The Electrician, 1899, 43: 641±2. 40. The Electrical Review, op. cit. (33), 332. 41. The Electrical Review, op. cit. (33), 332. 42. The Engineer, op. cit. (33), 198. 43. Glasgow Herald, 1899, 18 August, 9. 44. The Engineer, op. cit. (33), 198. 45. The Engineer, op. cit. (33), 198. 46. The Electrical Review, op. cit. (33), 344. 47. The Engineer, op. cit. (33), 198. 48. The Engineer, op. cit. (33), 196. 49. The Electrician, op. cit. (39), 632±3, on 633; Engineering, op. cit. (33), 238±9. 50. The Electrician, op. cit. (39), 632±3, on 633; Engineering, op. cit. (33), 238±9. 51. The Electrical Engineer (1899), 24: 260, 273. 52. Engineering, op. cit. (33), 239. 53. `Glasgow Engine Contract', The Electrician, op. cit. (39), 708. 54. The Electrical Review, op. cit. (33), 746±7. 55. For a description of the installation, see The Electrical Review (1901), 49: 349±52, 393± 5, 435±8, 477±9. For the tramways industry in Glasgow, see J. P. McKay, Tramways and Trolleys: The Rise of Urban Mass Transport in Europe (Princeton, 1976), 173±84; Jubilee of the Glasgow Tramways, 1872±1922 (Glasgow, 1922). 56. Historians and sociologists of science have used the term `bricolage' to denote the scientists' ad hoc, albeit innovative and original, appropriation of existing cognitive models, knowledge, artefacts as well as social roles, practices and identities. The result of the appropriation process is a new creative socio-scientific regime; see M. Biagioli, Galileo Courtier: The Practice of Science in the Culture of Absolutism (Chicago, 1993); M. Biagioli, `Scientific Revolution, Social Bricolage, and Etiquette', in R. Porter and M. Teich (eds), The Scientific Revolution in National Context (Cambridge, 1992), 11±54; B. 
Barnes, Scientific Knowledge and Sociological Theory (London, 1974), 45±68, esp. 58, 146; D. MacKenzie, `An Equation and Its Worlds: Bricolage, Exemplars, Disunity and Performativity in Financial Economics', Social Studies of Science, 2003, 33(6): 831±68. 57. From Stephen Sellon's comments in R. P. Sellon, `The Standardisation of Electrical Engineering Plant', Journal of the Institution of Electrical Engineers, 1900, 29: 334 (discussion). 58. The Electrician, 1897±98, 40: 660. 59. The Electrical Review, 1893, 32: 59. 60. Thomas Hughes has pointed out the variety of systems that existed in the early twentieth century in London. T. P. Hughes, Networks of Power (Baltimore, 1983), 227±61. For the various technological styles developed by leading consultants and the reasons that shaped their practice, see E. Arapostathis, `Consulting Engineers in the British Electric Light and Power Industry, c 1880±1914', 2006, D.Phil. thesis, Oxford, MS.D.Phil. c.19846. 61. For the early initiatives and the history of the Engineering Standards Committee as well as the role of leading consulting electricians in its activities, see R. C. McWilliam, BSI: The First Hundred Years, 1901±2001 (London, 2001), 11±57. 62. Marsden and Smith have shown elegantly that, in the 1860s, the quest for the establishment of standards of measurements in telegraphy went hand in hand with the renegotiation of the identity of the telegraph engineer and the emergence of the scientist± engineer: Marsden and Smith, op. cit. (1), 216±20. 63. The Electrician, op. cit. (58), 661±2. 64. The Electrician, op. cit. (58), 661. 65. The Electrician, 1898, 41: 289±90. 66. Percy Sellon, op. cit. (57), 291±303, (Discussion) 304±44. 67. For Sellon, see `Obituary Notices: Robert Percy Sellon', Journal of the Institution of Electrical Engineers, 1928, 66: 1242. 68. Percy Sellon, op. cit. (57), 291. 69. Percy Sellon, op. cit. (57), 292. 70. Percy Sellon, op. cit. (57), 291±2, 296.
History of Technology, Volume Twenty-eight, 2008
74
The Work of British Consulting Electrical Engineers, 1880±1914
71. Percy Sellon, op. cit. (57), 292. 72. Percy Sellon, op. cit. (57), 292. 73. Percy Sellon, op. cit. (57), 291. 74. Percy Sellon, op. cit. (57), 293±4. 75. Percy Sellon, op. cit. (57), 293. 76. Percy Sellon, op. cit. (57), 295. 77. For Crompton, see B. Bowers, R.E.B Crompton: An Account of his Electrical Work (London, 1969). 78. For Ferranti, see J. F. Wilson, Ferranti and the British Electrical Industry, 1864±1930 (Manchester, 1988). 79. Percy Sellon, op. cit. (57), 304. 80. Percy Sellon, op. cit. (57), 312. 81. Percy Sellon, op. cit. (57), 313. 82. Percy Sellon, op. cit. (57), 334. 83. For Hammond, see `Obituary Notices: Robert Hammond', Journal of the Institution of Electrical Engineers, 1915±16, 54: 679±82. 84. Percy Sellon, op. cit. (57), (Discussion) 311. 85. For Webber, see `Major-General Charles Edmund Webber (1838±1904)', Journal of the Institution of Electrical Engineers, 1905, 35: 584±6. 86. Percy Sellon, op. cit. (57), (Discussion) 316. 87. Percy Sellon, op. cit. (57), 316±17. 88. Percy Sellon, op. cit. (57), 319. 89. The Electrical Review, op. cit. (59), 32: 58±9. 90. The Electrical Review, op. cit. (59), 59. 91. The Electrical Review, op. cit. (59). 92. For Kennedy, see A. Gibb, `Sir Alexander Blackie William Kennedy', Obituary Notices of Fellows of the Royal Society, 1938, 2(6): 213±23; G. F. Kennedy, History of Kennedy and Donkin (Liphook, 1988), 1±2; R. E. D. Bishop, `Alexander Kennedy: The Elegant Innovator', Transactions of the Newcomen Society, 1974±76, 47: 1±8. 93. A. B. W. Kennedy, `Consulting Engineers and their Work', inaugural address, 1902± 03 session, City and Guilds Central Technical College. The lecture was published in Engineering, 1903, 75: 428±30. 94. Kennedy, op. cit. (93), 430. 95. Kennedy, op. cit. (93), 430. 96. Kennedy, op. cit. (93), 429. 97. A. B. W. Kennedy, `Presidential Address', Minutes of the Proceedings of the Institution of Civil Engineers, Part I, 1907, 167: 2±27. 98. H. 
Woodrow, Tales of Victoria Street: A Story of the Association of Consulting Engineers (London, 2003), 1±13. 99. Woodrow, op. cit. (98), 6. 100. The Electrical Review, 1912, 71: 251. 101. The Electrical Review, 1911, 69: 1075. 102. The Electrical Review, 1912, 70: 141. 103. Buchanan, op. cit. (7), 407±29; Buchanan, op. cit. (8), 44, 195; Gourvish, op. cit. (6), 30, 32; Reader, op. cit. (9), A History, 71±2; Reader, op. cit. (9), ```Scientific Man''', 112; Marsden and Smith, op. cit. (1), 254±8. 104. The Electrical Review, 1910, 67: 124. 105. The Electrical Review, 1912, 70: 41. 106. For the hierarchical structure of the British engineering community in the long nineteenth century, see Buchanan, op. cit. (7), 412. 107. For Hopkinson's life and career, see B. Hopkinson, `Memoir', in B. Hopkinson (ed.), Original Papers by John Hopkinson, Volume 1 (Technical Papers) (Cambridge, 1901), ix±lxii; J. Greig, John Hopkinson: Electrical Engineer (London, 1973). E. Hopkinson, The Story of a MidVictorian Girl (Cambridge, 1928); M. Hopkinson, Memories of John Hopkinson, by His Sister Mary (Manchester, 1901). 108. J. Hopkinson, `Presidential Address to the Junior Engineering Society', in Original Papers by John Hopkinson, op. cit. (107), 254. 109. The Electrician, op. cit. (65), 289±90.
History of Technology, Volume Twenty-eight, 2008
Perception, Standardization and Closure: The Case of Artificial Illumination CHRIS OTTER
INTRODUCTION
The gas mantle (Figures 1 and 2), invented by Auer von Welsbach in 1886, usually occupies a marginal position in the history of artificial illumination technology. It is generally dismissed as either a doomed imitation of the electric filament lamp, or the brief final chapter in the history of gaslight.1 Yet, in the early twentieth century, the gas mantle (or incandescent gaslight, as it was sometimes known) was routinely described in very different terms. In 1902, the former metropolitan gas examiner William Dibdin argued that gas mantles made Liverpool Britain's best-lit city. `Probably in no other town,' he claimed, `has public lighting been more successfully and economically studied and carried into effect.'2 Liverpool's city engineer, C. R. Bellamy, thought that the extension of this system was `irresistible'.3 Reporting to Liverpool's Electric Power and Lighting Committee in 1901, he observed that the city had 152 electric arc lights, compared with 8,908 incandescent gas mantles. Twenty years later, Britain was using 60 million gas mantles annually, and Germany 90 million.4 Engineering texts were still recommending their use for street lighting in the 1950s.5 The gas mantle, then, was by no means `doomed' in 1902, and neither was its history a brief one. Dibdin thought it the most efficient artificial illuminant in existence.6 Its colour was easily manipulated, it came in many different shapes and sizes, and it had the backing of the powerful gas industry. Yet, the gas mantle has, for almost all practical purposes, ceased to exist. Along with other forms of artificial illumination (candles and oil lamps, for example), it has been replaced by electric light. Today, only two forms of illumination are generally used in the West: electric filament and vapour or fluorescent light. 
To adopt the parlance of the sociology of technology, there has been a process of `closure'.7 A period of tremendous technological fluidity appears to have given way to one of stabilization and consensus. Historians and sociologists of technology have explained this process of closure and stabilization in several ways. The first is by arguing that History of Technology, Volume Twenty-eight, 2008
76
The Case of Artificial Illumination
Figure 1 Gas mantle, 1917. Reproduced from R. F. Pierce, `Recent Developments in Gas Lighting', in Illuminating Engineering Practice: Lectures on Illuminating Engineering (New York, 1917), facing p. 166.
Figure 2 Inverted gas lamp with cluster of mantles, circa 1910. Reproduced from M. C. Whitaker, `Incandescent Gas Mantles', in Lectures on Illuminating Engineering Delivered at the Johns Hopkins University, Volume 1 (Baltimore, 1910), 231.
History of Technology, Volume Twenty-eight, 2008
Chris Otter
77
electric light had intrinsic properties (such as brightness, whiteness and cleanliness), which determined its success. There was also something intrinsic to candles, oil and gaslight (dimness, yellowness, dirtiness) that guaranteed their failure. This technological determinism assumes that electricity's success was inevitable. Wolfgang Schivelbusch, for example, argues that `the victory of the new technology [i.e. electric light] could not be put off for ever'.8 His position assumes that there is a physiologically `standard' way of seeing, which artificial illuminants either replicate or distort. A second causal mechanism is cultural. Electric illumination satisfied a `modern' cultural desire for radiance, spectacle and surveillance. Such arguments appear regularly in cultural histories of nineteenthcentury cities, particularly Paris, where electricity has been described as the `culmination' of a century's relentless drive towards the spectacular.9 Since electricity alone is assumed to have been capable of producing such spectacle, these arguments usually implicitly involve technological determinism. A third mechanism explains the success of electric light by invoking the eye's `natural' physiology. Schivelbusch argues that electric arclight, by engaging the colour-producing retinal cones of the eye, replicated the physiological effect of daylight. Gaslight, meanwhile, could only activate the retinal rods, which registered light, but not colour, thereby generating a visual field typical of moonlight: monochromatic and peppery.10 By producing brighter, whiter light, electricity could, quite literally, `turn night into day'.11 Where candles, oil and gas provided varying degrees of bilious, unreliable illumination, electric light was capable of standardizing and stabilizing the way people saw. 
These three forms of determinism, in which causal mechanisms reside within the illuminant, modern culture or the retina, respectively, have been critiqued from within the field of science and technology studies. In the social construction of technology (SCOT) model, the success or failure of particular technological forms is explained by the shifting interests of particular social groups.12 In his study of fluorescent lighting, Wiebe Bijker argues that the final, stabilized form of the high-intensity fluorescent lamp was the consequence of ongoing conflict and compromise between utility companies and lamp producers.13 This technology was not, in any objective sense, `better' than its competitors, nor was the perception it produced any more `natural'. In Thomas Hughes' `systems' approach, explicated most extensively in Networks of Power, the triumph of electricity networks has been the result of the various, interacting parts of those systems overcoming several technical problems (or `reverse salients') and acquiring semi-autonomy from surrounding environments (or `momentum').14 Like the proponents of SCOT, Hughes emphasizes the `heterogeneous engineering' of humans as well as technological artifacts.15 However, where, for SCOT arguments, closure is ultimately caused by social factors, in the systems approach, technological factors play a more substantial role. In this chapter, I will assess whether any of these models retain explanatory validity, at least for the history of illumination technology.
The Case of Artificial Illumination
Since this particular history of `closure' covers a long historical period, my focus is necessarily broad, running from the early nineteenth to the mid-twentieth century, although most of the discussion concerns the period from 1880 to 1930. I examine four particular uses of illumination: the lighting of streets, the projection of illumination over distance, the creation of white light, and the facilitation of visual acuity, or detail, in specialized professional activities. I concentrate primarily on the history of illumination in Britain, although I will also refer to America, Germany and France. Focusing on four distinct areas of use ensures that neither the idea of a single trajectory of development, nor the validity of a single model of closure, is assumed in advance. My studies of artificial illumination suggest two conclusions. The first is that no existing model of closure is individually adequate to explain the success of electric light. A systems model, for example, is more successful at explaining the momentum of street lighting systems than it is at explaining why British lighthouses retained oil lamps for far longer than might be supposed. The trajectory of electric light emerges as splintered and multiple. If standard forms appear, they are rather evanescent and never achieve universal use. Indeed, `closure' itself is too absolute a concept to be of much use when explaining the history of illumination technology. The second conclusion is that the literature on artificial illumination, and indeed that on technological standardization and closure more broadly, has paid insufficient attention to perception itself, or indeed bodily practice in general.16 This absence is not limited to SCOT and systems models.
Economics-focused literature on standardization usually models humans as rational actors whose choices are primarily determined by access to information mediated by the market.17 The inferiority or superiority of a given technology is obvious to those with requisite knowledge, and is hence determined in advance and largely unaltered by the struggle with other technologies.18 With illumination technology, however, the human user is not simply a rational, calculating actor, but also a perceiving subject, who, when faced with new forms of illuminant, often finds his or her perceptions unsatisfactory, strange or disturbed. Perception produces its own forms of `information' that often fail to correlate with more scientifically established types of information. Whenever new illumination technologies are introduced, perception must be reconfigured. A history of illumination technology that ignores the historical phenomenology of perception remains a history of illumination severed from human experience. Reconnecting illumination technology with vision demonstrates that perception itself can never be fully standardized. Eyes themselves have physiological idiosyncrasies, while viewing subjects have rather different psychologies, or `personal equations'.19 Environments vary dramatically, as do the uses of illumination within them. The standardization of illumination technology, therefore, never guarantees a concomitant standardization of perception. This obstinate refusal of perception to be fully standardized, to be technologically captured and instrumentalized, helps to explain why closure has been such a prolonged, apparently infinitely deferred, process.
STREETS: SECURITY AND SPEED
Britain was the first country in the world to use gas for large-scale street lighting. London's Pall Mall was gaslit in 1807; by the early 1820s, many towns, including Preston, Liverpool and Manchester, were using gas to light their streets.20 Seven hundred and sixty towns had a public gas supply by 1849.21 By the time of the gas mantle's invention in 1886, the gas industry had acquired, in Hughesian terms, enormous technological momentum. Its extensive infrastructure ensured that old open-flame burners could quite easily be replaced with gas mantles, especially following several innovations to the mantle, which was made sturdier, inverted and lit by high-pressure gas. In 1915, two illuminating engineers observed that `during the last few years the number of streets brilliantly lighted by this means in London has enormously multiplied'.22 In Westminster, for example, electric light had been replaced by high-pressure incandescent gaslight.23 When two major new streets, Kingsway and Aldwych, were built in early twentieth-century London, they were lit with mantles, each providing 1,000 candle-power.24 Berlin's Koeniggraetzerstrasse was illuminated by 4,000 candle-power gas mantles in 1910, and the municipality was preparing to replace some of its electric street lamps with gas mantles.25 Nonetheless, several factors, including poor quality of gas, badly situated lamp-posts and erratic maintenance, meant that many urban streets remained comparatively gloomy. In such circumstances, however, it was not an obvious move to replace gas with electric light, which was often expensive, abrasive and poorly focused.
In the 1880s, Wimbledon, for example, abandoned gas lighting and reverted to oil.26 By 1911, over 1,000 towns worldwide were using acetylene for illumination: the generating equipment could be placed in the base of the lamp-post.27 Irrespective of the particular illuminant, the purpose of street lighting was `to provide sufficient illumination to facilitate the movement of vehicles and pedestrians through all the streets of a city without danger of accident or fear of molestation'.28 Illuminating engineers often connected lighting with policing: `. . . it is a well-known fact that where streets and places of public resort are thoroughly well illuminated crime becomes a vanishing quantity.'29 Illumination, more basically, should provide the generalized illumination of road surfaces, allowing the discernment of moving objects and the focused illumination of salient objects (particularly signs). In central urban areas, it might also be expected to facilitate more detailed acts of perception, like the perusal of newspapers. In the early twentieth century, the automobile changed the parameters of discussion about street lighting, particularly in America, where debates about speed limits were common and nocturnal accidents were increasing.30 Although vehicles themselves were increasingly illuminated, it was acknowledged that street lighting needed improvement, to allow the position and speed of other objects to be accurately discerned by motorists.31 During the First World War, when London's streetlamps were extinguished or occluded to minimize air-raid damage, road accidents markedly increased.32 In 1921, the illuminating engineer, Alexander Trotter, noted how electric illumination allowed traffic to move `at a pace which a few years ago would have been condemned as reckless and furious'.33 By this time, illuminating engineers had developed a sophisticated calculus for establishing appropriate levels of street illumination for increasingly busy streets. Streets were divided into classes according to volume of traffic or crime risk, apposite positions of lamp-posts were established, and contour diagrams of illumination levels produced.34 In 1911, the American illuminating engineer Louis Bell argued that when illumination crossed a `critical point of 1 lux or more', details appeared more clearly and objects assumed `a more natural aspect'.35 He recommended this level for first-class streets: in third-class streets, 0.1–0.3 lux was sufficient to see vehicles in time to avoid them. Engineers were sensitive to the complex interactions of technology, practice and perception. Illuminating engineering manuals often included substantial explanations of ocular physiology, and some added discussions of the psychology of illumination.36 This more scientific approach to illumination might seem to have promised to resolve the question of which artificial illuminant was most appropriate for street lighting, and hence stabilize illumination technology and standardize perception. Perusal of early twentieth-century illumination manuals and periodicals, however, shows little sign of movement towards either standardization or closure. Engineers argued, for example, that topographical, spatial and architectural variations made standardizing either practice or perception impossible.
The sheer variety of streets was described by J. M. Waldram, former president of the Illuminating Engineering Society, in 1952:
Roads in Britain are tortuous, hilly, and irregularly intersected with T- and cross-junctions; their widths and surfaces vary; they are flanked by buildings and fences of which no two are alike and which may be changed without reference to the lighting engineer.37
While engineers agreed that increased volume of traffic travelling at greater pace necessitated a corresponding increase in illumination, most appeared to consider high-pressure incandescent gaslight capable of meeting the demand. Many engineers continued to advocate gas for street lighting. E. N. Wrightington, a Boston gas engineer, wrote in 1910 that incandescent gaslight, with its splendid distribution, was `an almost ideal illuminant' for streets, allowing clear and distinct perception of distant objects `beyond and between lamp standards' without glare. In tests, which were photographed (see Figures 3 and 4), gas and electric light of identical candlepower were placed on the same lamp-posts in the same street. Gas mantles, Wrightington concluded, more effectively illuminated the street, because during darkness, the eye became more sensitive to their bluish-green light (a phenomenon known as the Purkinje effect).38 A similar comparative test in Lewisham in 1930 showed gaslight providing `a perfectly safe and almost ideal illumination from the point of view of all road users, who are easily able to detect approaching vehicles, even if these vehicles are inadequately lighted themselves'.39 By 1930, however, gas was not in the completely dominant position it had been in 1900. The electrical industry had overcome technological problems and was developing large-scale infrastructure. In the United Kingdom, the National Grid was established in 1926. Electric power was, to adopt Hughes' language, acquiring momentum.40 In the gas industry, some were feeling socially marginalized: a 1933 report complained that gaslight was not being given `a fair chance on merits and economies'.41 Nonetheless, in Britain (as in France), the gas infrastructure was more extensive, and electricity more expensive, than in the United States.42 Hughes' analysis needs complementing by acknowledging the momentum of the gas industry. Gaslight continued to develop in the first half of the twentieth century, despite the corresponding momentum of electricity. The powerful inertia of the former clearly retarded the development of the latter. The continued momentum of the gas industry was not only due to the rising use of gas for cooking and heating. Innovation in British gaslight technology did not cease after 1900, as the pouch mantle (1925) and George Keith's 1937 self-intensifying gas mantle demonstrate. Meanwhile, new technologies of combination, manoeuvrability and control maintained gas lighting's popularity as a street light into the 1950s (Figures 5 and 6).43 In 1952, British streets were lit by filament, vapour and fluorescent lamps, and numerous forms of gas mantle.
Figure 3 Gas lamps on Marlboro Street, Boston, circa 1910. Reproduced from E. N. Wrightington, `Principles and Design of Exterior Illumination by Gas', in Lectures on Illuminating Engineering Delivered at the Johns Hopkins University, Volume 2, 1910, 879.
Figure 4 Electric Lamps on Marlboro Street, Showing Inferior Illumination to that of the Gas Lamps. Wrightington, op. cit. (Figure 3), 880.
Multiple illuminants, supported by two energy systems, each with significant technological momentum, typified the first half of the twentieth century, rather than closure in any simple sense.
SIGNALS, SEARCHLIGHTS AND BEACONS
Perhaps because of their relatively remote location (on railway tracks, battlefields and coastlines), signals, searchlights and beacons have received rather less historical scrutiny than the streetlights of Paris and London. This is a regrettable absence, since it was partly through such technologies that the long-distance circulation of goods, money, information and people acquired the speed and impetus necessary for something like `global capitalism' to exist. Here, I will briefly consider signals and searchlights, before concentrating on developments in lighthouse technology in the later nineteenth century. Early railway signalling was performed by hand, often by policemen, but a technical replacement soon appeared in the form of the hinged semaphore post (1841).44 Such signals were often telegraphically coordinated, through the `block system', but, when they were illuminated, oil, particularly paraffin, remained the prevalent illuminant well into the twentieth century, because it was easy and cheap to use. Electricity, then, was used to transmit telegraphic messages, but not to project visual signs. To distinguish such messages (`stop' from `go', for example), coloured illuminants were developed. The Liverpool–Manchester railway used red for danger from 1834, and the Great Northern Railway soon added green, which generated specific problems on account of colour-blindness.45 These oil lamps were regarded as highly reliable: one 1912 signal engineering textbook observed that some could burn `for about seven days continuously without attention'.46 Gas and electric illumination were, however, beginning to be adopted in greater numbers in larger towns, but contemporary descriptions of railway signals, which sometimes noted how oil lamps were maintained in case the gas ran out, hardly suggest that irresistible momentum was being generated.47 Military illumination might seem an obvious site for the precocious introduction of `modern' systems like electricity, but this statement is only partially accurate. In the nineteenth century, Morse code was transmitted over large distances by heliographs. Limelight, which produced a brilliant, concentrated light, was used to send messages after dark.48 Heliographs were combined with telegraphs in later nineteenth-century imperial wars, particularly in central Asia.49 Electric searchlights, meanwhile, were used by the French during the Crimean War, and by both sides during the Franco–Prussian War.50 By the early twentieth century, motorized vehicles could carry searchlights to the front lines of battle.51 They also had offensive uses: battleships were equipped with searchlights that could temporarily blind enemies, leading to `the retina becom[ing] overexcited'.52 When war broke out in 1914, military electrical engineers were distributed round the coast operating searchlights.
Figure 5 Gas lamp with mantles in line and adjustable reflectors, circa 1952. Reproduced from Minchin, `Lanterns for Gas Sources', in J. M. Waldram, Street Lighting, with Chapters on Gas Lighting by L. T. Minchin (London, 1952), 327.
Figure 6 Gas lamp with wing reflectors, circa 1952. Minchin, op. cit. (Figure 5), 328.
The First World War, however, was hardly an electric war, even at the level of basic military illumination. Oil lamps, visible at 2–4 miles, were the standard equipment for army signalling in 1916.53 A similar story is evident with British lighthouses. In the mid-nineteenth century, some 1,200 ships, and between 800 and 1,200 British mariners, were annually lost at sea, often because of poor visibility around coastlines.54 Lighthouses were usually lit by oil, rather than gas – a technological choice justified on the grounds of economy and simplicity by Trinity House, the institution responsible for administering lighthouses since Henry VIII's reign. The introduction of electric arc lights created another competitor to oil. Yet, there was no immediate replacement of `weaker' oil lamps with demonstrably more `potent' gas or electric lamps. The reasons for this can be discerned by examining two occasions on which these lamps were compared. In 1863, the inventor John Wigham devised a gas burner for lighthouses that he argued far outshone extant oil lamps. John Tyndall, the scientific advisor to Trinity House, tested the lamp, finding it 12 times brighter than comparable oil lamps. In tests in Norfolk, however, Trinity House found the oil lamp 35 per cent brighter than Wigham's lamp, and argued that equipping lighthouses with gas involved expensive, complicated plant that was inappropriate for such isolated, precarious places.55 The Trinity House report also argued that since Wigham's lamp had a larger point of illumination, its light was distributed in more directions and hence indicated direction more ambiguously than smaller oil lamps.56 Such illuminants had, to adopt the language of Bijker and Pinch, substantial `interpretative flexibility'.57 For Tyndall, a better lamp was a brighter lamp, measurable with the photometer. For Trinity House, issues of economy, facility and direction were more salient, although its photometers clearly produced different results to those of Tyndall.58 These debates were unresolved when new tests were undertaken in 1884–85, involving oil, gas and electric light, at the South Foreland lighthouse at Dover. The results were, again, inconclusive, providing all groups with support for their particular position. Electric arclight was acknowledged to have greater `penetrating power', but some local mariners found it dazzling.59 In most meteorological conditions, the lights' performance was similar, so on the grounds of efficiency, and simplicity, the experimenters concluded that `for the ordinary necessities of lighthouse illumination, mineral oil is the most suitable and economical illuminant', while `for salient head-lands, important land-falls, and places where a very powerful light is required, electricity offers the greatest advantages'.60 Experimental `facts' were palpably open to several interpretations.
Some observers noted that in fog, the bluish rays of the arclamp were more rapidly absorbed, and hence much of its power was wasted.61 This curious combination of penetration and lack of diffusion had also been noted with street lights: one 1902 report concluded that, with arclight, `the air is not filled with light as it should be', unlike the gas mantle, which `suffuse(s) the air with light'.62 Meanwhile, refinements to oil lamps, including the replacement of vegetable oil with petroleum and the addition of mantles, maintained their cost-effectiveness well into the twentieth century.63 In the case of lighthouse illumination, then, the degree of interpretative flexibility of the artifact, the inability of experiments to conclusively prove anything (the `experimenter's regress') and the clearly identifiable relevant social groups suggest that the SCOT model is highly applicable.64 The decision to maintain oil lamps in Britain was largely the result of the capacity of Trinity House to make its ideas of how lighthouses should be illuminated, and how mariners should see by them, triumphant. Lighthouse illumination was `socially constructed' in the sense that a powerful social group produced a relatively durable set of rhetorical truths about a spatially and technically specific form of illumination. Supporters of electric light drew attention to its `penetrative power', while advocates of oil and gas claimed arclight's colour made it unreliable in fog. The argument could not be resolved by appeal to standard perception, since there was no consensus as to what constituted standard perception. This flexibility of perception, and its inseparability from power and rhetoric, makes the SCOT approach more appealing than models that conceptualize the human agent as a rational being whose sensory systems are simple intermediaries between mind and world. Yet, perception has an obdurate structure of its own that cannot simply be seen as an effect of power, consensus or rhetoric. I will pursue this point in the second half of the paper.
WHITE LIGHT AND COLOUR PERCEPTION
Cultural histories of the modern city often describe `floods' of light saturating boulevards and department stores.65 The volume of this flood was impressive, but equally important was its quality. Electric light, some claimed, could truly replicate white daylight ± something registered in the `Great White Way' epithet, used to describe the radiant main streets of American towns.66 Schivelbusch, as noted earlier, argued that by reproducing the solar spectrum, electric light standardized perception by making the eye see colours as it did by daylight, creating an `electrical apotheosis'.67 In the SCOT literature, by contrast, colour perception is seen as thoroughly socially constructed. Bijker notes how the production of `white' light by the high-intensity fluorescent light was not inevitable, and it would have been perfectly possible to replicate the different `white' light of the older incandescent lamp. The meaning and perceptual experience of `white' light, then, are historically malleable, rhetorically constructed and defined primarily by those with interests (economic, political, cultural) in promoting particular kinds of light, not a pure physiological `fact' divorced from representation.68 Other scholars have noted how the desire for `white' light is not ideologically innocent but inseparable from ideas about race: photography, for example, used bright white light explicitly to privilege and reinforce the image of the white face, and not other skin colours.69 In the later nineteenth and early twentieth centuries, the effect of artificial illumination on colour was regularly discussed. One of the simplest techniques for comparing illuminants was the spectroscope, which recorded the pattern of lines produced by a given radiating body.70 Those promoting electric light used spectroscopy to demonstrate gaslight's intrinsic inferiority. 
In 1879, the electrical expert, Paget Higgs, argued that gaslight was insufficiently hot to produce lightwaves from the upper end of the visible spectrum: `It is impossible to add the indigo and violet, and this is the cause of [gaslight's] inferiority. The electric light is more complex . . . [It is] the same as that of the sun.'71 This position was supported by further tests, and also received substantial practical support from those using electric light in industries in which nocturnal colour perception was important, like drapery. A report from the Warehouseman and Drapers' Trade Journal of 1882 was typical:
Colours cannot be properly selected by gas or candlelight. Not merely do blues and greens get mixed up, but almost every tint and shade is altered by the yellow of the lamps and candles, and it is one of the great advantages of electric light that it enables us to see colours as they really are.72
This had implications for consumers: if one chose clothes by electric light,
one could guarantee they would look identical the following morning. In the 1890s, drapers were the most common traders using electric light in `small provincial towns', and this was not simply because of reduced fire risk.73 Here, it is tempting to suppose, we have reached solid facts: under electric light, colour perception was standardized. Colours looked as they `really were', while gas and oil distorted them. This position is, however, little more than crude technological and physiological determinism. It mistakenly assumes that every observer using every kind of electric light, in every environment, experienced an equivalent chromatic effect: it presumes that it is possible to perfectly choreograph vision and illumination, and thus standardize the act of perception. Many observers actually found electric illumination's chromatic effects unsatisfying. Arclight often appeared bluish: `. . . the colour unfortunately turns out to be one of the greatest objections against the general use of the light from the voltaic arc for interiors.'74 Wallpaper, carpeting, clothing and cosmetics had often been designed to be seen by oil or gaslight, and the new, bluer illumination had unsettling aesthetic effects. These were particularly felt in traditionally nocturnal spaces like the theatre. Actors at Chicago's Academy of Music walked out during the first night that the theatre used arclight, complaining that the illuminant distorted their make-up.75 Additionally, the visual experience of gaslight had acquired, or at least retrospectively acquired, certain aesthetic connotations irreducible to whiteness. In her memoirs, the actress, Ellen Terry, contrasted `the thick softness of gaslight', with its `lovely specks and motes', with the `naked trashiness' of electricity.76 Her response should be taken very seriously, since it highlights those affective and emotional dimensions of perception that cannot be dismissed as mere eccentricities. 
They were unquantifiable and idiosyncratic, perhaps, but not irrational: they are every bit as integral to human experience as intellectual calculation. `White' electric light, then, did not simply reveal a `natural' world in all its chromatic exactitude to eyes desperate to escape from behind a distorting veil of yellow. But it would be incorrect to view gaslight's `yellowness' simply as a `social construction' of the electric industry. Descriptions of gaslight's tendency to produce jaundiced light existed well before electric light was commonly used.77 This was also apparent in practice: those using gaslight for work requiring accurate colour perception often neutralized its yellowness by situating a blue glass between flame and eye.78 The gas mantle, meanwhile, was frequently described as imitating electric light precisely by purging itself of the yellowness of its predecessors. Eyes that had become accustomed to arc lights would find this illuminant more favourable. Mr A. Pollitt, speaking before the British Association of Textile Works Managers in 1906, observed that the best-lit workroom he had ever seen used 43 self-intensifying gas mantles. `The area of the room was 6,030 square feet,' he noted, `the whole was brilliantly lighted, and the work carried on was of a most exacting character, and in all colours.'79 Not all gas engineers, however, sought to reproduce the `white light' of
electricity, since `white light', as Ellen Terry's observation suggests, was not universally demanded. The gas engineer, William Sugg, argued that artificial illumination should imitate soft morning light, diffused through clouds, rather than bright white daylight. Noonday sun was too blue and bright; we draw curtains to protect our eyes from its intrusive astringency. Perfectly `scientific' arguments were used against those urging the adoption of arclight.80 Abandoning yellowness altogether, moreover, might seem wholly counterintuitive: after all, the sun was demonstrably yellow. William Gibbs exploited this fact in Lighting by Acetylene (1898):
Man for some thousands of years has had for his type of light the sun, and it is without doubt true that the sunlight is yellow. He takes most kindly to a yellow light, which is the reason the electric arc is so unpleasant, with its bluish tint and moonlight effect, and also the reason that the Wellsbach [sic, referring to Welsbach's gas mantle] seems green to most of us. As a matter of fact, the acetylene flame is very like sunlight, and its effects on the eye cannot but be beneficial, on account of its perfect steadiness.81
Other writers, however, lauded acetylene's whiteness: its colour, like the gas mantle's, simply did not appear the same to every observer.82 `Whiteness' was then, as Bijker notes, a flexible term, open to seemingly endless interpretations and redefinitions. When users found electric lights blue, they were sometimes informed that this was not because the illuminant was itself intrinsically bluish, but because their habituation to yellow gaslight distorted their perception. `Correct' perception by electric light took slow ocular, habitual and even emotional recalibration, not access to the latest and most objective information.
William Preece, Chief Electrical Engineer for the Post Office Department, expressed the frustrations of the rationalizer in 1892, and hinted at the `backwardness' of English observers, for whom `a whiter light . . . appeared to be blue. The Americans did not call it blue at all. When they had been accustomed to them the imaginary blueness rapidly disappeared'.83 Successful use of electric light, then, involved establishing a durable relation between illuminant, ocular sensation, states of perception and signifiers like `white'. Others pushed the point further, acknowledging the essentially relative nature of all colour perception. In a discussion at the Institution of Electrical Engineers in the same year, Major-General Festing noted that, by day, arc lamps seemed yellowish by contrast with whiter daylight, while morning light appeared harsh to those who had been `dancing all night', but fresh and beautiful to those waking from a `good night's rest'.84 Perception was situational, embodied, relational. It depended (and depends) on who, and where, you are. William Ayrton concluded pithily that `white light is what you see most of. Simply that'.85 The experience and definition of white light, then, were actively resistant to permanent stabilization. Contemporary manuals of dress and decoration often acknowledged this complicated illuminatory and chromatic situation by providing charts listing the impact of various forms of
Chris Otter
illuminant on different coloured surfaces. `Every lady,' intoned George Audsley in Colour in Dress (1912), `before deciding on a colour for evening wear, which may appear in every way suitable and effective by daylight, should carefully test its appearance and note the change it undergoes when seen by the description of artificial light under which it is likely to be worn.'86 Arclight and electric filament lamps altered daylight hues, reddening browns and darkening greens, while incandescent gaslight brightened crimson and made oranges warmer.87 Every illuminant, from candle to arclight, had numerous unsettling and strange chromatic effects. There was no simple dichotomy between electric light, which replicated daylight colours by stimulating retinal cones, and non-electric light, which distorted daylight colours by failing to activate them. As Bijker shows, this issue remained unresolved in the 1940s, when the debate was more often between incandescent electric light and various forms of fluorescent light. Newer systems of measurement, like colour temperature, were advocated to replace the older wavelength model, now dismissed as unsophisticated and misleading (since white light, by definition, is not composed of light of a single wavelength).88 When introduced into art galleries, fluorescent light produced rhetorical effusions reminiscent of those surrounding arclight in the 1880s. Recalling the first fluorescent light installation at the National Gallery in 1947, the curator, Philip Hendy, observed that `we really could see subtle differences in tone, and differences of the very kind which disappear under the exaggerated warmth of tungsten lighting'.89 Fluorescent light, in turn, failed to permanently stabilize perceptual conditions. The following decade, an Impressionist exhibition at the Tate drew complaints from the picture restorer, Helmut Ruhemann: `. . . 
warm fluorescent lamps of a sulphurous hue had turned the Impressionists' subtle blues to an inky blue, and slightly distorted many other colours. The pictures looked curiously jaundiced, like colour reproductions.'90 Ruhemann recommended cool fluorescent lamps as the only light apposite for masterpieces. The production of stabilized colour perception, shared by large groups of people across substantial tracts of time, has proved frustratingly difficult. Colour perception shifts gradually, along with collective visual habits, and is never heading towards, and never arrives at, a final, settled, standard or natural position. The eye's own irregularities, technological transformation and socio-cultural values all frustrate stabilization. Eyes and technology never perfectly interlock to produce `accurate' colour perception, unsituated and independent of time and place. In 1910, the psychologist, Robert Yerkes, acknowledged the friability of the connection between eye and illumination technology, and the relativity of individual perception: `. . . my individual preference for the light of the tungsten lamp over that of the mercury vapour lamp may at any time prove to lack a reasonable scientific basis.'91 This `personal equation', or `individual preference', was not essential or ingrained, but neither was it the product of capricious plasticity. It was a result of the obduracy of habit: As we have grown accustomed to see our world, so we wish to continue
to see it. Against this human tendency – and a valuable one it is – the innovator among illuminating engineers must work, for the majority of us have no special desire to learn to see things correctly in purple, or in green light.92
The reason why certain forms of colour perception have become relatively stable and normative in the West is a combination of the slow work of habituation with larger-scale processes of technological momentum and social consensus. It has taken much unspectacular, unconscious work to accustom eyes to seeing by filaments and fluorescent tubes. This work of habituation, which took place in factories, homes, shops and museums, was, as the discussions at the Institution of Electrical Engineers made clear, not independent of how light was named and represented, but neither was it a simple effect of discursive convention. Historically significant shifts in visual perception are not just discursive ruptures. They are slower and often subtler, involving bodily practice and environmental reorganization as well as technological innovation and new forms of representation.

VISUAL ACUITY, DETAIL AND DISTRIBUTION
Artificial illumination was not simply assessed in terms of its effect on colour perception. The degree of visual acuity, or `distinguishing power', that a given illuminant could provide was equally debated, particularly following the formalization of the concept of 20–20 vision.93 Comfortable scrutiny of the printed or written word was, however, dependent on contrast, focus and diffusion, rather than whiteness. Lighting for reading was primarily a muscular and only secondarily a retinal question: it involved different bodily systems. This made it physiologically, and technically, different from lighting for colour work, as Dibdin acknowledged: `. . . if we want colours, we use it [a lamp] at a high incandescence. If we want the greatest possible reading power, we shall get a maximum for the purpose at a low temperature.'94 The aim here was to allow the perception of detail without providing excessive glare or contrast, which generated eyestrain, as Clarence Ferree and Gertrude Rand noted in 1917: `. . . when the proper distribution effects are obtained, intensities high enough to give the maximum discrimination of detail may be employed without causing appreciable fatigue or discomfort to the eye.'95 In 1914, Ferree argued that indirect illumination, whereby the eye saw only illumination and not the light source, was the best technique for providing detail perception. Semi-indirect illumination, where the light source was partially visible, was admissible at lower illumination levels.96 The art of distribution, then, involved using globes and screens, appropriately reflective surfaces and more calculated positioning of lights. The electrical engineer, Augustus Noll, recommended one 16-candlepower lamp for every 100 square feet of ceiling.97 The entire visual environment was theoretically calculable. Manoeuvrability was also routinely engineered into lamp design: by the early twentieth century,
the universal swivel-joint was commonly used for office lamps.98 The gas mantle, originally so stiff and delicate, was produced in more flexible forms. In 1895, a Birmingham company produced a pendant that combined `free motion in all directions' with `the swinging balance principle', ensuring that `all vibration is obviated, thus ensuring long life to the mantle'.99 Given the importance of `distribution effects', the type of energy from which illumination was derived was seldom the most important question facing engineers. People continued to read by non-electric light for several decades. Although arc lights were permanently adopted in the South Kensington Museum and the British Museum reading room in 1880, this was by no means representative, as has sometimes been asserted.100 Achieving effective distribution from arc lights could prove difficult: their high intrinsic radiance could be fatiguing or even dangerous.101 Ventilating gas lamps, which minimized the heat and fumes of open flames, were introduced into several libraries at precisely this time. In 1890, the librarian William Greenhough described them thus: These regenerative gas-lamps possess several admirable features which render them most suitable for use in libraries and reading rooms . . . immense economy of gas; intensity, purity, and steadiness of the light; absence of downward shadow; practical completeness of combustion; the amount of radiant heat becomes hardly perceptible five feet below the burner; and most important of all, the lamp, acting as a ventilating light, not only removes the vitiated air of the room, but also carries at once away the gas fumes.102 In other spaces of individual contemplation, oil and gaslight continued to facilitate comfortable lucubration. 
The Railway Engineer reported in 1905 that toughened gas mantles produced `most brilliant' light in a Pullman carriage on the London, Brighton and South Coast Railway.103 The Pintsch system, using compressed oil gas and mantles, became very popular: around 61,000 such mantles were in use in Britain in 1910 (Figure 7).104 Oil lamps were being used on some minor railways into the 1930s, and even in the United States, occasional gas lights could be found in the 1940s.105 Meanwhile, for bedside reading, gentleness and convenience maintained candles' popularity at precisely the time at which one might assume electric closure was taking place. Alexander Trotter, the illuminating engineer cited above, noted in 1921 that `for domestic lighting, nothing can compare for comfort, beauty and efficiency with good candles . . . One candle is enough to go to bed by, and even to read in bed with'.106 The use of such illumination was not limited to those too poor to afford, or too ignorant to appreciate, electricity; nor was it an act of romantic resistance to the encroachments of modern technology. It was instead a rational, physiologically satisfying way to read comfortably: the demise of the practice probably has more to do with fire risk than efficiency of perception. Electric light did, however, become commonly used for some specific
Figure 7 Railway carriage lit by gas mantles on the Pintsch system. Reproduced from A. C. Humphreys, `Gas and Oil Illuminants', in Lectures on Illuminating Engineering Delivered at the Johns Hopkins University, Volume 1, 1910, 177.
practices requiring detailed perception. This was especially true for medical illumination, particularly of the inner cavities and orifices of the human body, where the heat and flames of non-incandescent light posed acute problems. These problems were not insurmountable: Philipp Bozzini developed a lamp using a wax candle for endoscopic purposes in 1806, while various forms of oil and gas lamp were used for rhinoscopy and laryngoscopy across the century.107 An incandescent electric dentist's light, using platinum, was created in 1864.108 The electric cystoscope, pioneered by the urologist Max Nitze, became the main endoscopic tool, with an
electric light situated at one end of a tube, an eyepiece at the other and several interposing lenses allowing examination of the bladder, urethra and larynx. This device was highly successful and could be equipped with a small camera, thus becoming a photocystoscope, `the ne plus ultra of cystoscopic delineation'.109 By 1917, the insides of the trachea, esophagus and lungs might all be scrutinized with suitably extended bronchoscopes.110 For observation of visceral space, then, electric light achieved salience relatively early, partly because of its physical coolness. `It gives rise to but little heat,' according to one rhinology manual.111 This should not be overstated, however: electric cystoscopes caused minor electric shocks and sometimes quite severe burns.112 Non-electric forms of illumination remained in use for other forms of detailed perception. Limelight was regularly used for microscopic and photomicroscopic lighting in the 1890s, and mobile gas mantles were often the preferred method of microscopic illumination in the early twentieth century.113 By the 1920s, American laboratories usually used either electricity or acetylene to light their microscopes.114 Here, of course, colour perception was as important as acuity and, again, electric light produced a peculiar array of responses. In 1902, the zoologist, Maynard Metcalf, grumbled that incandescent electric light was `too yellow' for microscopic work unless a high (50-volt) current was used.115 Thirteen years later, the histologist, Simon Gage, complained that nitrogen-filled tungsten lamps were generally too violet, distorting greens and reds in particular. In a reproduction of nineteenth-century optical practice, this necessitated the adoption of special filters to make them appear more like daylight.116

CONCLUSION
Donald MacKenzie has argued that the trajectory of technological artefacts is never predetermined and sometimes even incoherent. Instead of a clean line running from invention to widespread use, there is `typically a constant turmoil of concepts, plans, and projects. From that turmoil, order (sometimes) emerges, and its emergence is of course what lends credibility to notions of ``progress'' or ``natural trajectory'' '.117 With illumination technology, the period 1880–1930 is well characterized as a period of productive, fertile turmoil, rather than one of mounting electric ascendancy. It is possible to depict this period as the time at which the electric filament lamp became established and stabilized, but this trajectory would assume a coherence that is largely the result of historical hindsight. It would, moreover, ignore the rather different rates at which different kinds of electric light were adopted for different practices. I began this article by outlining five closure mechanisms used by historians and sociologists to explain how it was that, from such turmoil, electric filament and fluorescent lamps became the two basic illuminants in Western society. Clearly, no single closure mechanism can capture the complexity of this process. Monolithic forms of determinism – technological, cultural or physiological – offer explanations that are both crude and ahistorical. STS approaches – the SCOT and systems models – are more properly historical, in their emphasis on process, emergence and relationality, and their refusal to assume that anything essential about machines, cultures and bodies can act as an identical causal mechanism in many different times and places. These approaches can very successfully explain certain trajectories of artificial illumination. Oil remained the primary lighthouse illuminant in the early twentieth century, not because it was intrinsically better than gas or electricity, but because a powerful institutional structure persisted that preferred the easily transportable and perfectly bright oil lamps over more complicated technical systems. Here, it is easy to identify relevant social groups, whose choice, like that to continue using gas mantles in twentieth-century streets, cannot be seen as intrinsically irrational or obstructive, as some analyses imply.118 With street lighting, however, the multiplicity of relevant social groups – local councils, utility companies, consumer groups – makes a convincing SCOT analysis much less viable. Here, the systems model, with its focus on large-scale momentum acquired across time and operative through material systems as well as social or institutional strategies, is more fruitful. When commentators moaned that gaslight was not being given a `fair chance' in 1933, they were complaining that the momentum of the electrical system, from underground cables to pricing and management, was slowly marginalizing gaslight. But neither the SCOT nor the systems model provides an adequate methodology for exploring the dimension of perception, which, after all, is the real interface between human users and illumination technology. 
The systems model, with its emphasis on problem solving and the production of infrastructure, effectively ignores smaller-scale, and potentially less tangible, questions like human perception. The SCOT model, while admirably demonstrating the historical flexibility of perception and avoiding the concept of the purely rational actor, cannot grasp vision as anything other than a social construction. For a given illuminant to successfully secure particular forms of perception, macrosystemic momentum and rhetorical consensus cannot be the only explanatory mechanisms. Habitual, tacit practice, customs and spatial arrangements must also be considered, as must phenomenological and physiological dimensions of the eye that are not explicable in terms of power or social interests alone. Discussions of interpretative flexibility must be complemented by discussions of other modalities of flexibility, such as perceptual or spatial flexibility, which might be a way of reintroducing the agency of more marginal groups.119 Mikael Hård has argued, with reference to diesel engine technology, that an emphasis on local stabilization is more empirically satisfying than an emphasis on global closure.120 His conclusions are applicable to illumination technology. As William Preece of the Post Office admitted, it might be perfectly possible for the `same' technological artefact to produce `white' light for one group of users and `blue' light for another – a
difference more explicable in terms of bodily habit, experience and practice than rhetoric, politics or social interest. This is not to posit `practical closure' as some sort of universal: rather, it is to argue that closure has a local texture, and that local practices can explain what might, at first glance, seem irrational or inexplicable. Some users, like actors, struggled to use new technologies because their habitual ways of seeing were disturbed, while others, like surgeons, liked new illumination technologies because they promised to secure very specific forms of visual acuity. Electric light was adopted in surgery with more rapidity than for bedside reading because of many factors: the sensitivity of corporeal space to heat, the need for extremely small illuminants, and the relatively powerful and organized social groups involved. The penumbral viscidity of bodily organs generated a radically different visual environment from the monochromatic, planar page. But no general model can be inferred from this example, as the radically different case of lighthouse illumination demonstrates. The spatial particularity of the ocean (its boundless horizontality, its chromatic monotony), the need to perceive distance and direction (rather than detail) and the structures of habituation generated by ships, roiling seas and naval discipline made the sailors' perception demonstrably different from the surgeon's. Mariners operated in almost the polar opposite of laboratory conditions: their environment was uncontrollable, and their visual needs simple, primal and vital. To provide an adequate account of the success of electric illumination, the historian must combine multiple closure mechanisms with a richer, more materially and phenomenologically nuanced account of the variety of perceptual practice and its variegated change over time. The success of electric light, then, cannot be explained in terms of a single closure mechanism, however general. 
But this would surely be a rather unsurprising conclusion. As SCOT analyses have often suggested, closure is never completely permanent, and often never actually reached.121 Supposedly outmoded forms of illuminant are often resurrected: witness the return, in the 1970s, of sodium vapour lighting because of its softer light, in comparison to mercury vapour lamps.122 Changing environmental sensibilities have, of course, forced re-evaluation of energy use: high-intensity illumination has become rather less appealing than efficient illumination. The candle, meanwhile, remains in use in the West for ceremonial, religious, spiritual, romantic and atmospheric purposes: these uses are only marginal or residual if one regards worship, relaxation and dinner parties as themselves marginal and residual practices. Candles are not insignificant for aromatherapists or victims of blackouts. The gas mantle itself is not an entirely obsolete technology: it can be found, in revived form, in several British pubs and has also been used to illuminate certain upscale American neighbourhoods.123 One can even buy gas mantles on Amazon. Mantles usually function as historical technologies, capable of evoking the `feel' of the past: such technological `staging' of history is, again, hardly insignificant. To push the point further: a focus on the centres of a few rich Western
cities – London, Paris, New York – has entrenched an electrocentric view of illumination and an excessively coherent trajectory, or set of comparative urban trajectories, of the rise of electric light.124 A shift of focus, to the rural and the non-Western, is the most basic, effective way of unsettling this. The idea of electrical closure makes no sense when applied to, say, Afghanistan, where the exploitation of natural gas began only in the late 1960s.125 In much of Africa, one finds the use of kerosene lamps, photoelectric kits or electric lights plugged into car batteries: even in South Africa, with its relatively precocious electrification, 41 per cent of homes had no electricity in 1997.126 Examples from the `shadow cities', slums and shanty towns of the global South could be multiplied.127 As David Edgerton has recently shown, such places should not simply be dismissed as technological backwaters, where `residual' technologies linger prior to some indefinitely deferred modernization, but seen as spaces of innovative use, with variegated and complex technological ecologies of their own.128 Viewed thus, the idea of closure, whatever its cause, can be said to have only local and episodic relevance to the history of how humans have lit their societies.

ACKNOWLEDGEMENTS
I would like to thank James Sumner and Graeme Gooday for their generous advice, support and insightful comments throughout the writing of this article. I would also like to thank Kristina Sessa and the two anonymous referees for helpful comments.

Notes and References
1. The gas mantle was a conical structure of metal oxides, strengthened with collodion. See W. Schivelbusch, Disenchanted Night: The Industrialisation of Night in the Nineteenth Century, trans. Angela Davies (Oxford, 1988), 47–50; B. Bowers, Lengthening the Day: A History of Lighting Technology (Oxford, 1998), 127–34.
2. W. J. Dibdin, Public Lighting by Gas and Electricity (London, 1902), xviii.
3. Cited in Dibdin, op. cit. (2), 310.
4. T. Thorpe, Dictionary of Applied Chemistry, 3rd edn (London, 1922), 355.
5. For example, J. M. Waldram, Street Lighting, with Chapters on Gas Lighting by L. T. Minchin (London, 1952).
6. Dibdin, op. cit. (2), 425.
7. `Closure' refers to the process whereby an artefact acquires stable meanings, to the point at which an artefact acquires stable form, and the process whereby one artefact is chosen over another. An excellent analysis of the social shaping of closure is T. Misa, `Controversy and Closure in Technological Change: Constructing ``Steel''', in W. Bijker and J. Law (eds), Shaping Technology/Building Society: Studies in Sociotechnical Change (Cambridge, MA, 1992), 109–139.
8. Schivelbusch, op. cit. (1), 50.
9. V. Schwartz, Spectacular Realities: Early Mass Culture in Fin-de-Siècle Paris (Berkeley, 1998), 21.
10. Schivelbusch, op. cit. (1), 118.
11. This throwaway phrase appears regularly in primary and secondary sources. See, e.g. Dibdin, op. cit. (2), 18; A. Alvarez, Night: Night Life, Night Language, Sleep, and Dreams (New York, 1995), 21.
12. T. Pinch and W. Bijker, `The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other', in W.
Bijker, T. Hughes and T. Pinch (eds), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (Cambridge, MA, 1987), 17–50.
13. W. Bijker, `The Social Construction of Fluorescent Lighting, or How an Artifact Was Invented in its Diffusion Stage', in W. Bijker and J. Law (eds), Shaping Technology/Building Society (1992), 89. See also W. Bijker, Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change (Cambridge, MA, 1995), 199–268.
14. T. Hughes, Networks of Power: Electrification in Western Society, 1880–1930 (Baltimore, 1983).
15. J. Law, `Technology and Heterogeneous Engineering: The Case of Portuguese Expansion', in Bijker et al., op. cit. (12), 111–34.
16. Some historians of technology have paid close attention to the complexity of the body/technology relationship. See, e.g. essays in I. Morus (ed.), Bodies/Machines (Oxford, 2002); C. Lawrence and S. Shapin (eds), Science Incarnate: Historical Embodiments of Natural Knowledge (Chicago, 1998); S. Schaffer, `Astronomers Mark Time: Discipline and the Personal Equation', Science in Context, 1988, 2: 101–31; O. Sibum, `Reworking the Mechanical Value of Heat: Instruments of Precision and Gestures of Accuracy in Early Victorian England', Studies in History and Philosophy of Science, 1995, 26A: 73–106.
17. See, e.g. S. J. Liebowitz and S. E. Margolis, `Path Dependence, Lock-In, and History', Journal of Law, Economics, and Organization, 1995, 11(1): 214.
18. V. Stango, `The Economics of Standards Wars', Review of Network Economics, 2004, 3(1): 5–6, 10–11.
19. On the personal equation, see Schaffer, op. cit. (16) and G. Gooday, `Spot-Watching, Bodily Postures and the ``Practised Eye'': The Material Practice of Instrument Reading in Late Victorian Electrical Life', in Morus, op. cit. (16), 165–95.
20. B. G. Awty, `The Introduction of Gas-Lighting to Preston', Transactions of the Historic Society of Lancashire and Cheshire, 1975, 125: 82; M. E. 
Falkus, `The British Gas Industry before 1850', The Economic History Review, 1967, 20(3): 498.
21. J. Wilson, Lighting the Town: A Study of Management in the North West Gas Industry 1805–1880 (London, 1991), 36.
22. L. Gaster and J. S. Dow, Modern Illuminants and Illuminating Engineering (London, 1915), 19.
23. E. N. Wrightington, `Principles and Design of Exterior Illumination by Gas', in Lectures on Illuminating Engineering Delivered at the Johns Hopkins University, October and November 1910, Under the Joint Auspices of the University and the Illuminating Engineering Society (two volumes), Volume 2 (Baltimore, 1911), 871.
24. W. H. Y. Webber, Town Gas and its Uses for the Production of Light, Heat and Motive Power (New York, 1907), 169.
25. Wrightington, op. cit. (23), 868–70.
26. See, e.g. C. H. Cooper, `Progress in Oil Lighting at Wimbledon', Association of Municipal and Sanitary Engineers and Surveyors, Proceedings, 1889, 15 (28 September 1889): 21.
27. Acetylene gas is produced from the reaction of calcium carbide and water. See Bowers, op. cit. (1), 135–7.
28. W. Harrison, O. F. Haas and K. Reid, Street Lighting Practice (London, 1930), 10.
29. Dibdin, op. cit. (2), 1.
30. P. D. Norton, `Street Rivals: Jaywalking and the Invention of the Motor Age', Technology and Culture, 2007, 48(2): 338–9. K. Bolton, `The Great Awakening of the Night: Lighting America's Streets', Landscape, 1979, 23(3): 45.
31. On the need to improve street lighting, see J. M. Bryant and H. C. Hake, Street Lighting (Urbana, IL, 1911), 21.
32. Harrison et al., op. cit. (28), 15.
33. A. P. Trotter, The Elements of Illuminating Engineering (London, 1921), 93.
34. On classes of streets and zoning, see L. Bell, `Principles and Design of Exterior Illumination', in Lectures on Illuminating Engineering, Volume 2, 1911: 820–3; Harrison et al., op. cit. (28), 121–34, 145–60.
35. Bell, op. cit. (34), 820.
36. For example, L. Bell, The Art of Illumination, 2nd edn (London, 1912). 
On the psychology of illumination, see R. Yerkes, `The Psychological Aspects of Illuminating Engineering', in Lectures on Illuminating Engineering, Volume 2, 1910: 575–604.
37. Waldram, op. cit. (5), 129.
38. Wrightington, op. cit. (23), 844, 867, 879–81. On the Purkinje effect, see N. Wade and J. Brožek, Purkinje's Vision: The Dawning of Neuroscience (Mahwah, NJ, 2001).
39. Gas World, 29 March 1930, quoted in D. Chandler, Outline of the History of Lighting by Gas (London, 1936), 262.
40. Hughes, op. cit. (14), 350–62.
41. T. P. Ridley, `The Commercial Development of the Gas Industry in Great Britain', 1933, quoted in F. R. Jervis, `Gas and Electricity in Britain: A Study in Duopoly', Land Economics, 1948, 24(1): 35.
42. Harrison et al., op. cit. (28), 29.
43. L. T. Minchin, `Gas Light Sources and their Characteristics' and `Lanterns for Gas Sources', both in Waldram, op. cit. (5), 318–21, 324–31.
44. C. B. Byles, The First Principles of Railway Signalling, Including an Account of the Legislation in the United Kingdom Affecting the Working of Railways and the Provision of Signalling and Safety Devices (London, 1910), 6.
45. J. Bailkin, `Colour Problems: Work, Pathology and Perception in Modern Britain', International Labour and Working-Class History, 2005, 68: 93–111.
46. L. P. Lewis, Railway Signal Engineering (Mechanical) (New York, 1912), 83. (Italics original).
47. Lewis, op. cit. (46), 83.
48. F. C. Keyser, Lecture on `Visual Signalling' (Aldershot, 1891), 5.
49. Keyser, op. cit. (48), 16.
50. R. L. Hippisley, Lecture on `Electricity and its Tactical Value for Military Operations' (Aldershot, 1891), 3.
51. F. Nerz, Searchlights: Their Theory, Construction and Applications, trans. Charles Rogers (London, 1907), 42–4.
52. Nerz, op. cit. (51), 59.
53. E. J. Solano (ed.), Signalling, Morse, Semaphore Station Work, Despatch Riding, Telephone Cables, Map Reading (London, 1916), 212.
54. R. MacLeod, `Science and Government in Victorian England: Lighthouse Illumination and the Board of Trade, 1866–1886', Isis, 1969, 60: 7–8.
55. MacLeod, op. cit. (54), 16–19.
56. `Lighthouse Illuminants', Science, 1885, 5(105), 6 February: 112.
57. 
`Interpretative flexibility' means that identical artefacts mean different things to different social groups, making them effectively different artefacts. See Bijker, `Social Construction of Fluorescent Lighting', op. cit. (13), 76.
58. On photometry, see S. F. Johnston, A History of Light and Colour Measurement: Science in the Shadows (London, 2001).
59. `Lighthouse Illuminants', Science, 1886, 7(166), 9 April: 332.
60. `Lighthouse Illuminants', op. cit. (59), 333.
61. H. M. Paul, `The Electric Light for Light-Houses and Search-Lights', Science, 1885, 5(107): 202.
62. J. Naysmith, `High-Pressure Gas Lighting', Manchester Association of Engineers, Transactions, 1902: 19.
63. On petroleum, see R. F. Bacon and W. A. Hamor, The American Petroleum Industry (New York, 1916), 881. On oil lamps with mantles, see B. Cunningham, The Dock and Harbour Engineer's Reference Book (London, 1914), 301.
64. On the experimenter's regress, see H. Collins and T. Pinch, The Golem: What Everyone Should Know About Science, 2nd edn (Cambridge, 1998), 97–8, 101–3.
65. See, e.g. Schivelbusch, op. cit. (1); Schwartz, op. cit. (9).
66. D. Nye, Electrifying America: The Social Meanings of a New Technology, 1880–1940 (Cambridge, MA, 1990), 61.
67. Schivelbusch, op. cit. (1), 50.
68. Bijker, Bicycles, op. cit. (13), 246.
69. R. Dyer, White (London, 1997), 89–90.
70. On spectroscopy, see S. Hong, `Theories and Experiments on Radiation from Thomas Young to X Rays', in M. J. Nye (ed.), The Cambridge History of Science. Volume V. The Modern Physical and Mathematical Sciences (Cambridge, 2002), 282.
71. P. Higgs, The Electric Light and its Practical Applications (London, 1879), 5.
72. Quoted in The Electrician, 1882, 8, 13 May: 418, emphasis added.
73. P. Scrutton, Electricity in Town and Country Houses (London, 1898), 138.
74. K. Hedges, Useful Information on Electric Lighting (London, 1882), 77.
75. H. Platt, The Electric City: Energy and the Growth of the Chicago Area, 1880–1930 (Chicago, 1991), 30.
76. Cited in M. Dillon, Artificial Sunshine: A Social History of Domestic Lighting (London, 2002), 183.
77. See, e.g. S. Chandley, `The Combination of Colours', The Builder, 1873, 31, 18 January: 40.
78. G. Harlan, Eyesight, and How to Care for It (New York, 1887), 97–8.
79. Cited in Webber, op. cit. (24), 163.
80. W. Sugg, The Application of Gas to the Lighting of Open Spaces and Large Buildings (London, 1882), 10–13.
81. W. Gibbs, Lighting by Acetylene: Generators, Burners and Electric Furnaces (New York, 1898), 13.
82. J. M. Crafts, `A Lecture upon Acetylene', Science, 1896, 3(63): 384.
83. Comment by W. Preece in discussion section to A. P. Trotter, `The Distribution and Measurement of Illumination', Institution of Civil Engineers, Minutes, 1892, 90: 142, my emphasis.
84. Cited in R. Appleyard, The History of the Institution of Electrical Engineers (1871–1931) (London, 1939), 142.
85. Cited in Appleyard, op. cit. (84), 142, emphasis in original.
86. G. A. Audsley, Colour in Dress: A Manual for Ladies (London, 1912), 36.
87. Audsley, op. cit. (86), 38.
88. Bijker, Bicycles, op. cit. (13), 242.
89. P. Hendy, `Discussion', in W. E. Rawson-Bottom and J. B. Harris, `Artificial Lighting as Applied to Museums and Art Galleries', Transactions of the Illuminating Engineering Society, 1952, 23(1): 24.
90. H. Ruhemann, `Experiences with the Artificial Lighting of Paintings', Studies in Conservation, 1961, 6(2/3): 83–4.
91. Yerkes, op. cit. (36), 602.
92. Yerkes, op. cit. (36), 601.
93. J. Barr and C. 
Phillips, `The Brightness of Light: Its Nature and Measurement', The Electrician, 1894, 22: 525. On 20±20 vision, see R. Carter, A Practical Treatise on Diseases of the Eye (Philadelphia, 1876), 57±9. 94. Dibdin, op. cit. (2), 17. 95. C. E. Ferree and G. Rand, `Lighting in its Relation to the Eye', Proceedings of the American Philosophical Society, 1918, 57: 5, 446. 96. C. E. Ferree, `The Problem of Lighting in its Relation to the Efficiency of the Eye', Science, 1914, 40(1020): 89. 97. A. Noll, How to Wire Buildings: A Manual of the Art of Interior Wiring, 4th edn (New York, 1906), 113. 98. W. T. Sugg, The Domestic Uses of Coal Gas, as Applied to Lighting, Cooking and Heating, Ventilation (London, 1884), 104; W. Grafton, A Handbook of Practical Gas-Fitting: A Treatise on the Distribution of Gas in Service Pipes, the Use of Coal Gas, and the Best Means of Economizing Gas from Main to Burner, 2nd edn (London, 1907), 104. 99. `An Incandescent Gas Light Pendant', Chemical Trade Journal, 1895, 16: 227. 100. Rawson-Bottom and Harris, op. cit. (89), 7. 101. W. H. Greenhough, `On the Ventilation, Heating and Lighting of Free Public Libraries', The Library, 1890, 2: 426. 102. Greenhough, op. cit. (101), 427. 103. Anon, `Incandescent Gas Lighting for Railway Carriages', Railway Engineer, 1905, 26: 216. 104. A. Humphries, `Gas and Oil Illuminants', in Lectures on Illuminating Engineering, Volume 1, 1911: 165. 105. K. Kichenside, 150 Years of Railway Carriages (London, 1981), 29; J. White, The American Railroad Passenger Car (Baltimore, 1978).
History of Technology, Volume Twenty-eight, 2008
100
The Case of Artificial Illumination
106. Cited in Bowers, op. cit. (1), 22. 107. D. Margolin and J. Marks, `History of Endoscopic Surgery', in C. Andrus, J. Cosgrove and W. Longo, Minimally Invasive Surgery: Principles and Outcomes (Amsterdam, 1998), 5±7; F. Semeleder, Rhinoscopy and Laryngoscopy, their Value in Practical Medicine, trans. E. Caswell (New York, 1866), 9. Gas mantles for rhinoscopic use were discussed in G. MacDonald, A Treatise on Diseases of the Nose and its Accessory Cavities, 2nd edn (London, 1892), 27. 108. E. H. Fenwick, A Handbook of Clinical Electric-Light Cystoscopy (London, 1904), 8±9. 109. Fenwick, op. cit. (108), 89. 110. H. W. Loeb, Operative Surgery of the Nose, Throat, and Ear for Laryngologists, Rhinologists, Otologists, and Surgeons, Volume 1 (St Louis, 1917), 192±3. 111. H. Lack, The Diseases of the Nose and its Accessory Sinuses (London, 1906), 34. 112. J. P. Blandy, `Urological Instrumentation', in A. Mundy (ed.), The Scientific Basis of Urology (Oxford, 1999), 295. F. Bierhoff, `The Cystoscope and Ureter Catheter in the Diagnosis and Prognosis of Surgical Diseases of the Kidney', American Journal of Surgery, 1905, 19(3): 60. 113. L. D. McIntosh, `The Portable Lime Light', Proceedings of the American Society of Microscopists, 1891, 13: 41. S. Simon Schwendener and K. W. NaÈgeli, The Microscope in Theory and Practice, 2nd edn (New York, 1892), 270. For gas mantles in laboratories, see C. D. Marsh, `The New Biological Laboratories of Ripon College', Journal of Applied Microscopy and Laboratory Methods, 1901, 4(2): 1152. 114. W. H. Walmsley, `Acetylene Gas as the Illuminant in Photomicrography', Transactions of the American Microscopical Society, 1897, 18: 138; E. M. Chamot, Elementary Chemical Microscopy, 2nd edn (New York, 1921), 158. 115. M. M. Metcalf, `An Electric Lamp for Microscope Illumination', Science, 1902: 937. 116. S. G. Gage, `Artificial Daylight for the Microscope', Science, 1915, 42(1085): 534. 117. D. 
MacKenzie, `Introduction', in Knowing Machines: Essays on Technical Change (Cambridge, 1996), 6. 118. See, e.g. J. Diamond, Guns, Germs and Steel: The Fates of Human Societies (New York, 2005), 249. 119. The `relevant social groups' discussed by SCOT are invariably powerful ones, which has generated criticism for ignoring power relations and broader social structures. See, e.g. H. K. Klein and D. L. Kleinman, `The Social Construction of Technology: Structural Considerations', Science, Technology, and Human Values, 2002, 27(1): 28±52. 120. M. HaÊrd, `Technology as Practice: Local and Global Closure Processes in DieselEngine Design', Social Studies of Science, 1994, 24: 551, 577±8. 121. Bijker, Bicycles, op. cit. (13), 85. 122. J. Jakle, City Lights: Illuminating the American Night (Baltimore, 2001), 89. 123. Jakle, op. cit. (122), 258. 124. I have been guilty of this: see C. Otter, `Cleansing and Clarifying: Technology and Perception in Nineteenth-Century London', Journal of British Studies, 2004, 43(1): 40±64. 125. C. Johnson, Afghanistan (Oxford, 2004), 37. 126. A. C. Revkin, `Turning on the Lights where Electricity Is Rare', New York Times, 2005, 15 February. South African data from in L. Thompson, A History of South Africa (New Haven, 2001), 283. 127. See, e.g. M. Davis, Planet of Slums (London, 2006); R. Neuwirth, Shadow Cities: A Billion Squatters, a New Urban World (London, 2005). 128. D. Edgerton, The Shock of the Old: Technology and Global History since 1900 (Oxford, 2007).
History of Technology, Volume Twenty-eight, 2008
Standards and Compatibility: The Rise of the PC Computing Platform JAMES SUMNER
INTRODUCTION
Most mass-market computers today follow a single technical archetype: what used to be known as the `IBM PC standard', but has extended so far beyond its original specifications and become so ubiquitous as scarcely to require definition. A computer for individual use, typically, is a box with separate keyboard and monitor, or else a flat notebook with a fold-out screen; employs a Microsoft operating system, on an International Business Machines (IBM)-derived hardware architecture, using an Intel-derived microprocessor; and is primarily set up for use as an office tool, though it can be coaxed into performing many other tasks. Compatibility or interoperability with these norms and expectations is a principal consideration for office, educational and home users. The `PC' did not reach its commanding position through legislative intervention or formalized regulatory guidance. It emerged as a de facto standard in the early to mid-1980s, well before such developments as Microsoft's series of Windows operating systems. Its trigger was the 1981 entry62 of IBM, the globally pre-eminent producer of large computers for commercial use, into the personal microcomputer market, which at that time belonged to numerous producers committed to a diverse range of incompatible systems. IBM's tremendous capital resources, reputation for market dominance and pre-existing penetration of the white-collar world led commentators to assume that its product, the Personal Computer (PC) 5150, would succeed over its established competitors. This prophecy was self-fulfilling. The IBM PC sold far beyond expectation, causing many (eventually most) other producers either to leave the desktop computer market altogether, or to produce `IBM-compatible' or `clone' machines. In certain computing fields, rarely glimpsed by non-experts, alternative archetypes have endured: powerful workstations for specialized commercial and scientific applications, for instance, and large multi-user systems.
In the mass market, however, the only arguable challenge to the PC is the Macintosh platform promoted by Apple Incorporated. Into the 1990s, as other alternatives faded, Apple based its marketing increasingly on a
rhetoric of defiant difference from an undesirable PC hegemony, portrayed as buttoned-down, user-hostile and monopolistic. Yet, today's Macintoshes, though promoted in similar terms, are themselves built around Intel processors, and run Microsoft's operating systems alongside Apple's: the strategy is not to reject the PC norm, but to co-opt it.63 In characterizing the rise of the PC as a shift away from heterogeneous development, and towards convergence on a uniform standard, we may be tempted to see the two positions as highly distinct, mutually exclusive states. On this reading, the PC's triumph seems an inevitability. Popular histories usually present it as a positive development, banishing a confused Babel of unworkably incompatible devices that made users overly dependent on their proprietors.64 Counter-narratives are occasionally voiced by partisans of Apple and various extinct proprietary systems, along the lines of the `VHS/Beta' and `QWERTY/Dvorak' fables familiar from path-dependency literature.65 Such accounts accuse IBM of battering a sub-optimal standard into market dominance by dint of something other than technical merit (capitalization, sales force, exploitation of the ignorant), stifling a technically superior standard that would have been better for everyone; or else, by sluggish monopoly development of a dominant design, of frustrating diligent competitors keen to innovate.66 In both `pro-PC' and `anti-PC' accounts, an either/or dichotomy is assumed. This paper aims to develop a more subtle approach. 
Standardization cases of the QWERTY variety invoke a straightforward acceptance/rejection binary, but the more complex case of a computer system offers diverse possibilities of co-option, assimilation and compromise.67 The PC was not (and is not) a single standard, but a broad constellation of specifications, varying in exactitude, across the levels of hardware, operating software, applications software and user culture.68 At all of these levels, possibilities exist ranging from conformity, through compatibility and interoperability, to outright rejection. When we examine the rise of the PC in more detail, we discover a complex range of strategies played out in the service of a variety of interest groups. The PC itself, moreover, was (and is) highly mutable over time. In the taxonomy promoted by David and Greenstein, de facto standards may be `sponsored', promoted by specific proprietary interest groups (producers, mediators, users), or `unsponsored', accepted by general consensus (usually in the `general good').69 The PC's trajectory cuts repeatedly across these categories. When first introduced, predominantly as a hardware specification, it was strongly IBM-sponsored, but, as various competitors routinized the cloning process, it became essentially unsponsored by the mid-1980s. This very `unsponsoring' focused attention on the software elements, with the result that the PC became increasingly Microsoft-sponsored – a situation that endured as IBM's role faded. Application software producers, meanwhile, began by expecting the level-playing-field opportunities of an unsponsored standard, only to find that Microsoft's dominance at the operating system level gave it the financial and technical advantages to gain control of various applications sectors.70
I begin my study with a brief overview of IBM's entry to the microcomputer market. This is a familiar case in the secondary literature, but there has been a tendency to treat IBM's manoeuvre as a logical, almost predestined success story – a problematic reading, given that IBM lost control of its platform within a few years. I highlight the contingency in the situation, stressing that the corporation entered a mature market with an existing de facto standard, built around Digital Research's CP/M system, and was obliged to choose one of various available responses to it. In the next section, I outline analytically the wide range of possible responses to standardization initiatives, all of which were economically or culturally `rational' to some producers or users at some point. The following two sections offer a counterpoint to the established (mostly US-oriented) literature with a focus on the British case, including some complexities thrown up by the `jilted' DR's attempts to retain or regain elements of its product's standard status. Finally, I reassess the nature and decline of heterogeneity in personal computing culture. The `PC hegemony' did not arise inevitably, through some technologically determined industry logic. Its development mutated across space and time, with economic, cultural and political factors clearly at work.

STANDARDS VERSUS STANDARDS: THE ENTRY OF IBM
The very notion of a `standard business microcomputer' requires historical explanation: at the end of the 1970s, the `standard' business approach was not to use microcomputers at all. Large commercial organizations had mostly automated their information-processing activities with what seemed the appropriate tools for the job: central mainframe computers running batch operations, often on specialized sites, with a distinct labour force, separated from the executives and clerical staff by mutual consent. Devolving computing power to the level of the individual desktop not only implied a mammoth retraining exercise, but seemed to fly in the face of the economies of scale enjoyed by a large firm. Smaller businesses with only a few staff might use no computer equipment at all, except perhaps through shared access to large-scale equipment provided by a service bureau. Often, as in the information-intensive banking sector, personal computers were seen to lack the power and flexibility of older, specialized equipment.71 Indeed, the popular-culture fashion for personal microcomputing that flourished around 1977–82 was sometimes seen as prejudicial to established business needs. In the United Kingdom, especially, what David Skinner termed the `millennial' position – information technology, embodied by the microcomputer, somehow holding the key to the struggling nation's wider fortunes – enjoyed support at national policy level and was seized upon by a number of established and start-up manufacturers.72 Consultants with experience of pre-`micro' business computing spent much of their time damping what they saw as hype, dissuading customers from buying inappropriately basic systems or assuming that a computer
would `solve the problem' without extensive (and expensive) custom coding and user training.73 Nonetheless, particularly in the United States, microcomputers were put to use in various office roles. Word-processing presented a relatively easy transition, as the stand-alone computer replaced the stand-alone typewriter. Conventional book-keeping tasks (payroll, sales ledger, purchase ledger) had been heavily standardized as information management tasks in the pre-computer age, and were accordingly easy to transfer.74 By around 1977, a de facto standard configuration of hardware and software was emerging to meet these needs in the United States. Its most visible component was the software operating system, CP/M (`Control Program/Monitor', later `Control Program for Microcomputers'), developed from 1973 by the computer scientist Gary Kildall and marketed through his company, Digital Research (DR). CP/M allowed use of the expensive, but fast and versatile, `floppy disc' storage systems that were invaluable for information-heavy commercial tasks. A piece of application software developed on one CP/M computer could typically – though not always – be run on another, of different manufacture, with little or no adjustment. This allowed the growth of a sector of office-oriented software, notably the word-processor WordStar (from 1978) and the database manager dBASE (from 1980), among the first microcomputer applications to be tagged as `industry-standard'.75 CP/M was barely fully commercialized, however, when, in 1977, the US corporations Apple, Commodore and Tandy RadioShack began, almost simultaneously, to push microcomputers as volume commodities into a market that was rapidly growing out of its hobbyist beginnings to encompass family homes, education and small business. Cheaper and more basic than CP/M computers, these machines were highly incompatible with CP/M, and also with each other.
Notably, in unexpanded form, they relied on cheap cassette-based storage rather than floppy discs. To the manufacturers and most users, however, these were not significant disadvantages. The new machine architectures found enthusiastic support communities of their own, partly thanks to the inexpensive volume manufacturing made possible by closed proprietary standards.76 IBM's 1981 entry to the business microcomputer market, then, took place at a time when that market was well developed, and the concept of a microcomputer widely familiar in the United States. IBM had played little part in this culture; while its dealership networks were superbly effective, they had been built around long-term support relationships, not the high-volume retail box-shifting that now dictated the movement of most individual computers. IBM's lack of an established microcomputing presence was reflected, too, in the 5150's heavy reliance on off-the-shelf components from third-party manufacturers (most conspicuously, the Intel microprocessor). This approach, which contrasted strongly with the corporation's norm of closed internal production, allowed rapid, low-risk development and straightforward third-party expansion. (It has even been
characterized as an `open-standards initiative' on the part of IBM, though this is surely an overstatement: as we shall see, the PC architecture became fully `open' only later, as the original proprietary intentions were subverted by other producers.)77 One feature setting it apart from both the established CP/M base and the 1977 proprietary machines was the choice of a new-generation (`16-bit') microprocessor. Even here, the model used was the slowest and least powerful of the new breed, chosen largely on grounds of established third-party support and concern about drawing trade from IBM's larger-computer divisions.78 Technically speaking, the machine was not only unoriginal, but underwhelming given a list price above $1500 for the most basic (non-disc) specification. Yet, the PC was never intended as an `advanced' machine – as its creators freely admitted79 – but as a robust one: its defining advantage lay in the IBM name underwriting a solid overall specification.80 In one crucial respect, however, the IBM PC ignored convention and, in doing so, ultimately redefined it. CP/M, as we have noted, dominated the established business market for machines using older (`8-bit') processors. DR was then in the process of engineering a 16-bit version, CP/M-86, to be compatible with the IBM PC's processor. Following the strategy already used for hardware components would have led IBM simply to co-opt CP/M-86. Yet, IBM also addressed its enquiry to Microsoft, a much smaller software company, then known mainly for its version of the hobbyist programming language, Basic. Microsoft had little experience in developing operating systems, and no product to offer, but was able to buy out a system from another Seattle-area firm. With some swift modifications, this became `PC-DOS' for the purposes of the IBM deal, while Microsoft retained the right to adapt it for sale to third parties as `MS-DOS'.
It was under the latter name that this most unexpected system – which bore some resemblance to CP/M, but offered no direct compatibility or interoperability – endured for over a decade as the most widely used software product in personal computing, single-handedly securing Microsoft's long-term future. Digital Research and CP/M, meanwhile, descended slowly into obscurity. Accounts of this episode often assume the character of folk legend, presupposing for IBM a single moment of decision between Microsoft and DR in which happenstance details of corporate etiquette acquired vast significance.81 In fact, IBM endorsed both CP/M-86 and PC-DOS at the PC's launch (as well as a third option, the UCSD p-System), and negotiations with DR rumbled on for months afterwards. Indeed, the influential market report, Billion Dollar Baby, released the month of the launch, praised IBM for the uncharacteristic step of throwing its weight behind the `largest and best defacto standard which is the CP/M software base'.82 Nonetheless, CP/M-86 was unavailable for a few crucial months after the machine's launch, and retailed at $240 as against $40 for the immediately available Microsoft system. It appears that DR's Gary Kildall attached significant royalty demands, quite reasonably assuming a
position of strength for his industry-standard product. IBM, however, while paying lip service to the importance of this installed base, seems deliberately to have fostered the smaller and (ostensibly) more pliable Microsoft, correctly reasoning that its own unique scale and reputation would inspire a sufficient user base.83 Microsoft's second-hand system, as the standard that killed a standard, has inevitably been controversial. Its initial value to IBM, like that of the PC's hardware, lay not in technical sophistication, but in rapid availability. The sense that corners were cut has fuelled long-standing criticism both of its limitations – it constrained users, for instance, to running a single application at any one time – and of its similarities to CP/M. To DR partisans, MS-DOS was nothing more than `an unauthorized clone of CP/M', and even Microsoft's successor products such as Windows 95 and 98 `are still Digital Research CP/M at their essential core'.84 For this reason above all, the meaning of the PC was open to question and redefinition. As presented to buyers, it was simply a logical extension of IBM's large-computing provision into microcomputing. Members of the PC's development group sometimes spoke in different terms: their distinctly un-IBM-like machine was an almost independent development, for which IBM merely served as `venture capitalist'.85 Bill Gates of Microsoft, meanwhile, moved remarkably fast to emphasize his company's contribution to the design – a manoeuvre that would eventually lead, with the development of the clone industry, to Microsoft's eclipse of IBM as principal gatekeeper of the archetype.86 Concurrently with this shift, a birthright mentality incubated within Digital Research. Kildall and others believed that CP/M not only should but practically could regain its `industry standard' role, if only the IBM-endorsed complex supporting Microsoft could be effectively challenged at the machine archetype level.
This determination played a significant role in stirring up the heterogeneous platform developments of the 1980s, as we will see in the following sections. First, however, it is necessary to clarify some of the ways in which a `platform battle' might be fought. The issue at stake is not, in fact, standards-compliance, but the rather broader concept of compatibility.87

THE COMPLEXITIES OF COMPATIBILITY
Archetypally, an engineering standard is something in the nature of the Whitworth screw, or the standard rail gauge.88 On the face of it, such standards can only be accepted or rejected; once accepted, they tend to persist, because the price of switching to an incompatible standard is top-to-bottom re-engineering. Microcomputers display a broader range of possibilities, not because they feature `more complex' technology – the securing of Whitworth's standards was a project of awesome complexity – but because they are highly underdetermined.89 Computer hardware is typically able to interpret code, or software, designed for purposes unguessed by the hardware designers. One consequence is that software compatibility may be engineered between hardware systems designed in isolation from each other, and having few or no elements in common. This principle was elegantly demonstrated by the UCSD p-System (Figure 1), the third option announced for the IBM PC in 1981, and also engineered for the Apple II and the most popular mini- and mainframe computers. The software performed a remarkable feat of translation, persuading each of the very different hardware platforms to behave like a common `pseudo-machine' (`p-Machine'). Developing this complex system was time-consuming, however, and the advantages to the user might be outweighed by the need for additional resources such as memory, and a loss of speed or convenience compared to a machine's `native' behaviour.90 This was especially true where the hardware bases were radically different. CP/M had succeeded as a software lingua franca where the p-System did not, because it was limited to machines with one specific processor architecture.

Figure 1 Idealized schematic of the compatibility principle underlying the UCSD p-System. Despite the dissimilarity of the hardware, users should be able to use applications software (e.g. a word-processor) identically on both machines.
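The pseudo-machine principle is, in modern terms, that of a bytecode interpreter or virtual machine: application software targets a common abstract machine, and each hardware platform supplies its own interpreter. The following is a loose illustrative sketch only; the opcodes and the function name are invented for illustration and are not actual UCSD p-code:

```python
# Loose illustrative sketch of the pseudo-machine idea (invented opcodes,
# not real UCSD p-code): the application targets a common abstract machine,
# and each hardware platform supplies its own interpreter for it.

def run_pseudo_machine(program):
    """Execute a list of (opcode, argument) pairs on a tiny stack machine."""
    stack, output = [], []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            output.append(stack.pop())
        else:
            raise ValueError("unknown opcode: " + op)
    return output

# The 'application' is pure p-code: identical whether the host is an
# Apple II-style or IBM PC-style machine, so long as each implements
# run_pseudo_machine on its own hardware.
app = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
print(run_pseudo_machine(app))  # -> [5]
```

The interpretive overhead the article describes is also visible here: every pseudo-instruction passes through the dispatch loop, the modern analogue of the memory and speed costs that counted against the p-System.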
Figure 2 Idealized schematic of the SoftCard's compatibility principle. Here, there are, in essence, two distinct machines, from the hardware level up, within the same physical box.
Many attempts to engineer compatibility, for this reason, followed a hybrid hardware and software approach: the most practical way to run CP/M on a machine without an appropriate processor was usually to plug an appropriate processor into it. Shortly prior to the PC-DOS deal, Microsoft – primarily motivated by the desire to expand the market for its CP/M Basic – launched the Z-80 SoftCard, a device by which CP/M could be run on the popular Apple II computer (Figure 2).91 The resulting entity, with two microprocessors in one box, could operate both the `industry-standard' CP/M business packages and the considerable range of Apple titles, with its greater focus on education and leisure software. The SoftCard illustrates an important ambiguity. It extended the functionality of Apple's machines, but reduced their users' dependence on the Apple-proprietary software base (`lock-in'). It was not at all obvious whether the ultimate consequences for the survival of Apple's proprietary platform would be positive or negative.92 Manufacturers' attempts to answer this kind of question are key to understanding the heterogeneity of microcomputer systems in the mid-1980s, as we shall see later. The marketing advantages of servicing multiple platforms lay not only in the breadth of software available: hardware manufacturers could play on customers' fears of `backing the wrong horse', ending up with heavy investment in a marginalized platform. The now familiar term `futureproof' was apparently coined in 1982 by a small British producer, Tycom Corporation, in advance publicity for a system known as Microframe. Aimed at research and business users, the device was underdetermined to the point of having no default configuration at all.
Interchangeable processors mounted on standard circuit boards would run all the major commercial and research-oriented operating systems of the day, including MS-DOS, the entire CP/M range and p-System, to cover `over 85 per cent of currently available software'.93 An important element of cross-compatibility rhetoric was the `switch' metaphor (Figure 3). In the mechanical world, ingenious attempts to engineer flexible or dual-running approaches to incompatible systems are frequently proposed, as in the attempts to surmount breaks of rail gauge using large-wheeled `compromise cars', or wheels capable of sliding on their axles; yet, such approaches are often found unwieldy or impractical.94 In microcomputing, this should not be a problem. With the material form of the machine unchanged, the flick of a switch (or some relatively simple keyboard command) might set the charges dancing through another processor, or perhaps merely in another pattern, to effectively give the user a different computer. Though seductive to users, the position-switch concept may mislead as an account of compatibility. It not only distracts attention from more subtle forms of compatibility that operated by other means, but also overplays the simplicity of making use of a `compatible' system in most practical cases.95 A system advertised as compatible with, say, CP/M might in fact display any of the following behaviours:
- software products advertised for CP/M systems, and sold in a standard format (such as the 8-inch floppy discs popular in the late 1970s), could be run directly
- standard distributed copies of CP/M software could be bought directly, but required a relatively simple and uniform `translation' process before use
- the machine could not run any CP/M software from the open retail market, but `translation' of existing products could be undertaken quickly and straightforwardly by the hardware manufacturer's agents, or other niche suppliers.

Figure 3 The position-switch concept. First published in Future Computing Illustrated for October 1980, before IBM's Personal Computer plans were generally known. The presentation is tongue-in-cheek, playing on the contrast between IBM's dominance in large computing – the 1401 was an old mainframe warhorse, produced 1959–71 – and the unfamiliar markets opened up by new microcomputers such as the TRS-80. Reprinted in Isaacson and Juliussen, Billion Dollar Baby, 39. Reproduced by kind permission of Portia Isaacson Bass.
In addition, variations in visual display, memory and various other factors meant that some applications would fail to run on some `compatible' systems, while others would behave idiosyncratically. Often, these problems were resolvable by software methods, but suppliers were by no means obliged to resolve them. Such complexities received increasing attention from market analysts with the rise of the IBM PC and its compatible-by-design imitators. The
business information agency Future Computing distinguished operationally compatible machines, the true `clones' that could run software sold for the proprietary PC, from those set up to be functionally compatible.96 The latter approach, like the pseudo-machine, worked by inserting a compatible layer above an incompatible base, but at a different level: the machine would run technically distinct (but functionally similar) versions of the software packages used on true IBM machines, and could thus exchange data files with them (Figures 4 and 5). Functional compatibility was a minority option because of the capital and time investment needed to rewrite the applications, and obviously varied in its degree. What was less obvious, on account of the widespread promotion of the `clone' metaphor, was that operational compatibility, too, varied greatly: different machines would run varying proportions of true-IBM software given varying degrees of hardware reconfiguration, software modification and user tolerance of limitations or surprising effects (Figure 6).

Figure 4 Operational compatibility. Applications for the `true' IBM PC will run identically on the compatible machine. Packaged `IBM PC' software off the retail shelf can therefore be used directly by users.

Figure 5 Functional compatibility. The two users here can exchange data freely, although the technical bases of their computer use need have little or nothing in common.

Figure 6 Partial operational compatibility. Here, although it is impossible to provide a truly equivalent hardware/operating system base, some applications will `fit' because the points of difference are not relevant to their functioning. This was often the case with text-only software that did not use the IBM PC's graphics facilities. Applications requiring very high compatibility, such as Microsoft Flight Simulator, were enshrined as compatibility benchmarks.

Less closely analysed in the technical press, but equally important, were the similarities and associations, strong and weak, that would never have been identified as `compatibilities' yet clearly operated in the same way. MS-DOS, for instance, was rather like CP/M in its behaviour. There was no operational compatibility at all between the two, but the similarities made it relatively easy for users to switch between them, and also for software producers to rewrite their CP/M software for MS-DOS. Tim Paterson, the originator of MS-DOS, firmly denies that the product was based directly on CP/M code, but there is no secret that it was partly inspired by a CP/M instruction manual.97 Such features as documentation, user support, suppliers' pre-existing reputations in other fields and the physical form of the computer must not be written off as `external' if we are to understand why users accept, reject or partially assimilate systems.98 A non-standard keyboard layout, in fact, was the most controversial feature of the PC 5150 after launch.99

Given this understanding, we are in a position to address the concept of niche.100 In the large-computing era, when IBM consistently held over 70 per cent of the US market, its competitors commonly depended on catering to the needs or expectations of specialized groups. This niche approach might as easily be social or geopolitical as technical: whereas Control Data Corporation specialized in fast-processing machines adapted to the scientific markets that IBM's range did not address,101 European producers such as ICL, Philips and Olivetti capitalized on local knowledge and national flagship status. As microcomputing became possible, Apple,
History of Technology, Volume Twenty-eight, 2008
The Rise of the PC Computing Platform
Commodore and Tandy RadioShack had successfully built a 'home' niche in the United States, enticing non-corporate users with relatively low pricing, a preponderance of educational and leisure software, early availability of colour graphics and direct retail sales.102 Evidently, the archetypically white-collar IBM could not attack this market as easily as it could the CP/M base.

The undeniable rise of the IBM PC platform placed niche producers in a precarious but interesting position. By continuing to promote non-IBM-compatible hardware and software, these producers could hope to maintain the 'lock-in' of their users; but there was an ever increasing chance that the users would rebel, preferring even a difficult migration across platform boundaries to the risks of a fading standard. To go to the other extreme and embrace the PC wholeheartedly – to join the 'clone' manufacturers – was even more problematic: the existing 'lock-in' would be thrown away, and the producer would become the newest entrant to a bear-pit of vendors offering near-identical products at minimal profit margins. Unsurprisingly, producers looked to the complexities of compatibility as a means to avert this dilemma. The goal, usually, was to retain the original niche appeal whilst securing some or all of the advantages of the increasingly generic PC – an approach I will refer to as niche-compatibility (Figure 7). Compatibility does not necessarily imply simple acceptance of a standard: it may mean toleration, co-option or even subversion. Such choices were greatly complicated by the number of elements in play – hardware, operating system, applications software, user cultures – and, specifically, by the looming spectre of the jilted CP/M.
Figure 7 Schematic of typical niche-compatibility approach in the early PC era. In essence, this approach differs from the SoftCard (Figure 2) only in that the hardware elements are supplied as a pre-assembled unit. In some cases, both operating systems employed the same hardware.
James Sumner
THE BRITISH CONTEXT
It is particularly instructive to address the case of the United Kingdom, where (as throughout Europe) the 'PC' platform was never wholly tethered to IBM moorings, and where the unusual influence of indigenous manufacturers illustrates a wide range of both non-compatible and niche-compatible approaches.

By the time of the IBM PC's international launch, the British computer market was not only well established, but qualitatively different from that in the United States. In a mood of severe post-industrial decline, 'millennial' assumptions informed both the fledgling Thatcher government's scheme for a post-industrial 'information revolution' and the more humanistic initiatives clustered around the publicly owned British Broadcasting Corporation (BBC)'s Computer Literacy Project.103 1982, designated 'Information Technology Year', was marked by a round of public events, education initiatives and state-funded research commissioning.

At this time, the Commodore, Apple and TRS machines of 1977 were available through local dealers, but were promoted at higher than dollar-equivalent prices to a public with significantly less disposable income. Correspondingly, though CP/M disc systems were likewise readily available, established small businesses were less likely to spend heavily on computerization. Public notions of microcomputing were more patterned by the indigenous Sinclair firm's radical strategy of design (in Leslie Haddon's phrase) 'down to a price point', pursuing low-cost architecture with little regard to conventional expectations.104 In 1980, when the 1977 US models were retailing for around £350–£700 at their minimum specifications,105 Sinclair launched the ZX80, a tiny vacuum-moulded microcomputer with a membrane keyboard, at £99.95 pre-assembled.
The following year's ZX81 offered an improved system below £70 and was a conspicuous success, selling over 300,000 units by January 1982.106 This cost-cutting crucially relied on assembling as many hardware functions as possible on large custom-made chips – the opposite of the IBM PC's design philosophy of standard parts, large-box design and conservative pricing. Though technically limited, even by comparison to the 1977 microcomputers, Sinclair's machines were hugely popular home purchases and attracted much attention in schools education, fuelled by the enthusiasm of practitioner advocates such as Eric Deeson.107

Sinclair dominated Britain's direct-retail computer market for around two years, ceding ground only when Commodore, apparently directly inspired by Sinclair's example,108 began to follow similar price-point tactics. Commodore carved a large chunk of Sinclair's 'home computer' market in the United Kingdom and practically defined analogous sectors in other European countries, notably West Germany.109 Meanwhile, the indigenous Acorn Computers, which shared its Cambridge roots with Sinclair, focused on more robust and expensive machines; Acorn was contracted to provide the Computer Literacy Project's officially endorsed machine (dubbed the 'BBC Micro') and subsequently took most of the schools trade, settling into a long-lasting niche based on education and specialized research needs.
Across the first half of the 1980s, a variety of established electronics manufacturers, computer service firms and start-ups were similarly inspired to promote distinctive, usually incompatible microcomputers. Several in the 'home' market, notably Dragon Data and Tangerine/Oric, achieved sales above 100,000 units and exported their machines to France, the Netherlands and elsewhere.110 Governmental exhortations to 'Buy British', a long-established factor in information-technology appropriation decisions, were undimmed.111 Union Flags sprouted incessantly in marketing graphics, while national identity was manifest in such producer names as British Micro and Dragon (of Port Talbot, Wales).

In the context of evolving computer standards, the very mention of this episode may require justification. The United Kingdom's early-1980s production culture, though remembered with much affectionate nostalgia, is commonly viewed through a presentist lens that makes its differences into foibles or errors.112 Against the looming challenge of the IBM PC, Britain seemingly responded with an uncoordinated patchwork of eccentric proprietary technology, most of it drastically underpowered for the great information adventure evoked in political rhetoric. The early Sinclair machines' cut-price positioning (the very cause of their success) forced the exclusion of most of the plug-in components necessary for broad compatibility, while the alternative approach typified by Acorn is stereotyped – in a microcosm of the 'British problem'113 – as engendering hardware that, though innovative and reliable, was expensive, esoteric and commercially unappealing.

Yet, this perception ignores the fact that many low-cost models in the British market were designed with a high degree of expansibility in mind: not as closed proprietary machines, but as niche-compatibles. That the contrary is widely assumed is a consequence of the tremendous initial impact of the cheap Sinclair models.
Yet, Sinclair was also the birthplace of an expansible computer, the NewBrain, ultimately commercialized by Grundy Business Systems in 1982. Like the early Sinclairs, the NewBrain was ostensibly a 'quirky' home computer: a tiny portable unit priced at £267, it gave output when not plugged into a monitor through an idiosyncratic, calculator-style fluorescent matrix. The base machine's modest 32-kilobyte memory was, however, designed as expandable to two megabytes – an incomprehensible figure for any but a research or commercial application (compare the original-series IBM PC's limit of 640 kilobytes), and it was always intended to run CP/M.114

We find another demonstration of the niche-compatible philosophy in the complex formed by Acorn and its partner/competitor, Torch Computers, which had emerged from the same Cambridge University-oriented context with the initial intent of selling Acorn-derived models to the business market.115 Acorn's BBC Micro, on the received view, epitomizes the 'eccentric' ethic of lofty indifference to the compatibility agenda: incompatible with CP/M and other US standards, reliant on Acorn-specific operating and file systems, with a specially written dialect of the Basic programming language. This characterization, however, ignores
the machine's tremendous inherent expansibility. Among its multifarious connector sockets sat a port cryptically labelled 'the Tube' – a connector for an external unit housing a second microprocessor. In an elegant variation on the SoftCard principle, the base unit would continue to handle mundane input/output tasks, but the computer overall would take on the character of the added processor, running its associated software; and it could readily be swapped for an alternative unit.116 The acquisition of a second processor was promoted as a probable routine aspect of the microcomputer experience: early user-group literature, addressing new purchasers, describes the base machine as 'only really half a computer'.117 The separation of parts was a positive advantage, not only to break up the expenditure, but because processor choice would depend on intended use (what would shortly be labelled 'future-proofing').

Acorn's initial second processor, bundled with a version of CP/M licensed directly from Digital Research, was slow to arrive, but Torch stepped into the breach. The Torch Disc Pack combined a Z80 second processor and extra memory with twin disc drives in a flat rectangular unit that sat beneath the computer, and ran a reasonably effective CP/M clone.118 Torch marketed this unit, mainly to small business, as part of a compatible range of personal computers, workstations and communications systems; machines at the higher end combined three processors in a single box and could flip between CP/M and UNIX, the dominant system in multi-user research computing. The full range was launched on 4 July 1983, with advertisements celebrating 'Independence of America Day'.119

In assessing the British response to US developments, it is crucial to note that at the end of 1982 – Information Technology Year – the IBM PC was absent from British soil.
Whereas the 5150 had been available through US dealers from the autumn of 1981, limitations on production capacity and a priority for supplying the home market meant that it was not formally launched or supplied internationally, barring limited and irregular exports, until early 1983. The first PCs to roll off IBM UK's Greenock production lines encountered a diverse, well entrenched proprietary culture; several competitive CP/M suppliers at the higher end of the market; and direct home-grown competition, in the shape of Applied Computer Techniques (ACT).

Established in 1965 as a computer time-sharing bureau, ACT built on its commercial contacts to distribute, from 1982, a machine named the ACT Sirius 1. This was a licensed equivalent to the Victor 9000 produced by Sirius Systems Technology, founded by the Commodore PET's designer, Chuck Peddle; its usual operating system was MS-DOS, although, like several other such machines, it had no underlying IBM compatibility. Launched in the United States shortly before the IBM PC, it made little headway there; but, in Britain, ACT scooped up a substantial portion of the emerging market for 'business' microcomputers in the year of grace before the PC's arrival, later developing a successful Sirius-compatible machine of its own, the Apricot. Also pressing into the 1982 gap was Olivetti, the Italian national flagship with a background in
pre-micro business computing, with the M20 – yet another proprietary system.

The rapid growth of IBM PC sales in the United States, therefore, did not lead Britons to assume automatically that a new hegemonic standard was in prospect. In 1982, even literature aimed purely at business emphasized CP/M, reflecting the strength of the installed base of 8-bit machines; sometimes, CP/M was noted as the '8-bit standard', with the question of the new 16-bit machines unresolved. IBM, of course, was highly recognizable: it had established a formidable large-computer presence in the United Kingdom, as in most of Western Europe. Yet, while its entry into microcomputing was duly noted, its favoured system was understood as merely one possibility, with DR's CP/M-86 at least equally worthy of attention.120

When, then, did the PC hegemony reach Britain? Unfortunately, we cannot assess the progress of the switch from available sales data, because the IBM machine's late arrival meant that it was vulnerable to diverse clone competitors almost immediately. An impression can be gained from editorial comments in the computer press: Practical Computing (launched for hobbyists but, by around 1983, largely oriented to business and professional users) waited until its June 1984 issue to declare the tipping-point reached. Its indicators were Olivetti's shift to PC-compatibility (carefully euphemized in advertising copy as 'industry standard'), the growing list of clone models more generally and the US Victor/Sirius liquidation (a point of doubtful relevance, given ACT's independent success).121

In a national market without underlying IBM compatibility, MS-DOS could still be challenged by CP/M-86; both were offered on the ACT Apricot, and advertising copy for Ferranti's Argus Pro-personal described its CP/M-86 operating system as 'industry standard' well into 1984.
This characterization (presumably invoking the large surviving 8-bit CP/M base) was perhaps slightly disingenuous even for Britain, but suggests a surviving uncertainty that had altogether expired in the United States.122

United Kingdom-based columnists watching the well established IBM PC transfer to its new market showed a conspicuous lack of enthusiasm for a machine most of them were now advising their commercial readers to buy. In a Practical Computing survey of competing models, the terse encapsulation of the case in favour – 'It is the IBM PC' – indicated weakness as well as strength.123 The benefits of the new de facto standard were spelt out, but the machine itself was stressed to be conservatively designed (inevitably, given the standardization strategy), relatively expensive (not inevitably) and, most importantly, not the only machine that could be claimed, or at least suggested, to 'be' the PC.

Unsurprisingly, the establishment of the PC standard produced new versions of the niche-compatible modification approach. In 1984, for instance, Torch followed up its CP/M Disc Pack with the Graduate, which made a BBC Micro approximately IBM-compatible by means of a second processor.124 Perhaps the most interesting approach came from Ferranti,
an old-established electrical engineering firm that had pioneered some of the United Kingdom's earliest commercial computing endeavours. Ferranti's Advance 86, announced in 1983, was available in two models. The 86a, costing £399, was the size and weight of a large home computer and presented firmly as such, distributed alongside Commodore, Sinclair and Acorn machines by W. H. Smith, the retail stationery chain.125 The 86b, at a guide price of £1,500, was available both from Smith's and through specialist dealers, and was more specifically promoted as IBM-compatible. All that distinguished the two was a top-mounted unit, much like the Torch Disc Pack, containing twin disc drives and expansion ports; owners of the 86a could easily upgrade by attaching this unit. The strategy, then, was to enter the established 'home computer' and 'IBM-compatible' markets simultaneously, in a manner that allowed the gap between the two to be easily bridged.

DR'S REVENGE?
As CP/M wilted in the face of the PC-DOS/MS-DOS complex, Digital Research sought ways to reverse its unexpected change of fortune. One possibility was to exploit surviving areas of heterogeneity. Apple was then opening a new niche with the Macintosh, released in 1984 with a blaze of marketing presenting it as a corrective to the hegemonic IBM. By contemporary US standards, the Mac was almost aggressively PC-incompatible, yet its impressive mouse-driven graphical user interface (GUI) commanded immediate attention: at the time, Microsoft and IBM offered nothing comparable. DR's Graphical Environment Manager (GEM) was a similar, if more limited, GUI system, initially for CP/M, but with a PC version available from 1985. GEM may be regarded as a 'conceptual compatible': it was not designed to offer any Macintosh-compatibility in technical terms, yet offered the user an environment similar to the Mac's GUI (sufficiently so to spark litigation from Apple). Its appeal was not, however, strong enough to overcome the very widespread lock-in to the text-based MS-DOS.

The alternative strategy, which DR pursued in parallel, was to achieve some form of niche-compatible synthesis between CP/M and MS-DOS. This development track ran through various elaborations of CP/M-86 to DOS-Plus (1985), which could run CP/M-86 and MS-DOS 2.11 software simultaneously, and ultimately to DR-DOS (1988), positioned purely as an alternative to MS-DOS, which it battled on grounds of supplementary features into the early 1990s. So far as the United States was concerned, DR's primary goal in these developments was survival. In the less settled European markets of the mid-1980s, however, it could realistically aim – briefly – for something more: killing MS-DOS through compatibility.

A new opportunity arose following the 1984 entry into the microcomputer market of Amstrad, a British electronics firm known previously for volume retail hi-fi equipment.
Led by the charismatic Alan Sugar, by reputation a walking archetype of the hard-headed entrepreneur, Amstrad
took Sinclair and Commodore's price-point philosophy as virtually its sole development criterion, promoting a cheerfully cynical take on the business of box-shifting. Sugar is on record as endorsing the marketing principle of the 'mug's eyeful': Amstrad's CPC 464, positioned as an affordable home micro, was not to be small and sleek like the Sinclair machines, but as bulky and impressive as possible to the inexpert first-time buyer.126 Amstrad secured unusual success in breaking into the mature and apparently saturated market by offering the appearances of a 'real computer', complete with monitor, at the home-user price of £350. There was no room for technological purity in the Amstrad vision. Legendarily, certain of the firm's models featured a technically redundant fan, included purely to assuage fears of overheating and meet received notions of how a personal computer should be.127

Amstrad's price-point philosophy, like Sinclair's before it, militated against standards-compliance. Sugar followed Sinclair in favouring custom chips over standard components; he also made a habit of buying up technologies that, though perfectly serviceable in isolation, were being dumped at bargain prices as the wider industry embraced their rivals. Most notably, Amstrad adopted Hitachi's 3-inch disc drive specification at the point of its evident eclipse by 3.5-inch models in the United States and Asia. Analogously, on the software side, Amstrad secured an inexpensive licence on CP/M (not CP/M-86: the machine used the Z80A processor of the old 8-bit generation). Typically, the machine would be used in proprietary 'home micro' mode, but Amstrad saw at least rhetorical advantage in connecting the machine to the long-established business software base. The connection established here was maintained with the follow-up PCW8256, marketed as a dedicated word-processor.
As Sugar's biographer, David Thomas, notes, there was no significant market for such machines in the United Kingdom until Amstrad single-handedly created one through cut-price tactics, selling 350,000 PCWs within eight months of the 1985 launch.128 Though addressed to an almost disjoint market from the CPC 464's, the £399 PCW was another Z80A machine supplied with CP/M, and was, in fact, widely used outside its core word-processor configuration to run established CP/M software. The suggestion that this was somehow technologically inappropriate or 'backward', in a climate of IBM-compatible hegemony and 16-bit processors, was rebuffed by Sugar in characteristic style:

We had some very funny comments from the snobs in the market about the Z80: 'Doesn't Mr Sugar know there is such a thing as an 8086 or even a 286 processor available for such applications?' What they'd missed is that the people who bought them didn't give a shit whether there was an elastic band or an 8086 or a 286 driving the thing. They wouldn't know what you were talking about. It was bringing computing to people who never even thought they would use a computer.129
Specifically, the machine was to displace hundreds of thousands of manual typewriters from home and office desks across Britain and continental Europe. The result was a late-flowering island of Z80A-CP/M (and 3-inch disc) users, for whom technical 'advancement' and IBM compatibility were simply not meaningful considerations. To insist on excluding this group from 'real' computing culture is circular, as indeed it is for the contemporaneous home micros.

Of course, none of this implied an aversion to the IBM standard on the part of Amstrad: the next logical step was a cut-price PC. What almost followed was an ironic reversal of the crucial 1980 decision, with Amstrad rather than IBM standing as kingmaker in Europe. Microsoft, buoyed by its deals with the US clone-makers, and beginning to assert itself as the arbiter of the PC standard, offered MS-DOS at a price Amstrad was not prepared to pay; DR, whose UK operatives had an established relationship with Amstrad through the two CP/M machines, stood ready with DOS-Plus, the MS-DOS (near-)compatible that maintained a bridge back to the CP/M base. Amstrad duly licensed DOS-Plus, adding in GEM on the rationale that the graphical environment offered a useful 'mug's eyeful' for buyers loosely familiar with the Macintosh.

The result could, we must appreciate, have been the long-term entrenchment of DOS-Plus in Western Europe, DR surviving as a US software corporation with a largely overseas base (as for Commodore in the hardware market). The similarities between MS-DOS and DOS-Plus would have permitted extensive de facto worldwide conventions on operating system behaviour, but this would not have prevented a persistent market duopoly.

It was not to be. The very plausibility of this vision induced Microsoft to offer MS-DOS on more favourable terms, and Amstrad's machine was announced at £499 with both systems included.
Only at this point did the ascendancy of MS-DOS in Europe become an inevitability.130 Yet, the runaway success of the machine was also a crucial blow for the value of 'true IBM' equipment, and thus contributed to Microsoft's eclipse of IBM.

By this point, established IBM PC purchasers were moving to the more powerful PC-AT launched in 1984. The AT was based on the Intel 80286 microprocessor, strongly compatible with the earlier 16-bit chips but offering new features. DR saw in the 80286 another opportunity to assault MS-DOS through compatibility. A product named Concurrent DOS 286 was intended to emulate MS-DOS perfectly, whilst also running GEM and making use of advanced possibilities of the 80286 not seen on the IBM/Microsoft platform.

This development coincided with moves by Acorn to re-address the commercial and research workstation sectors it had once delegated to Torch, through a range of niche-compatibles derived from the expansible BBC Micro principle and pre-announced under the designation 'ABC' (for 'Acorn Business Computer').131 Acorn and DR were reported to be working closely together to engineer Concurrent DOS for ABCs as well as for PC-AT clones, permitting full data interchange capability between the two communities.132
This approach – compatibilism promoted so as to retain some distinctness of form – was probably the most rational available to a company in Acorn's position. Its established platform was increasingly dependent on a rhetoric of difference, like Apple's, and was, if anything, more strongly identified with a specific niche area. To suggest that such a firm should have moved to embrace the obvious emerging de facto standard, by developing a straightforward PC clone, is to miss the point. There would have been nothing to lead customers to favour such a machine over the IBM PC itself. Acorn did not have the price-cutting instincts of an Amstrad or Sinclair; indeed, it had grown by supplying specifically those areas that did not respond to Sinclair's approach. The ABC range represented an attempt to co-opt the emerging IBM-compatible generation whilst maintaining a link to its strong established niche.

THE END OF HETEROGENEITY?
The collapse of a proprietary niche does not necessarily indicate that the user community is not viable: if the producer ceases to produce, for whatever reason, there may be little the users can do. Over the course of the 1980s, Britain's various non-clone computer producers were steadily driven out of business by a variety of factors, often as simple as managerial inexperience or unavoidable economic happenstance. Contrary to received opinion, it is difficult to confirm naïve incompatibility as a significant factor: designers were closely attuned to the emerging standards (hence the niche-compatibles), and usually ignored them only in reasonable expectation of building an alternative market (as for the low-end Sinclair machines). We may, however, perceive a tendency to underestimate the work needed to engineer a successful niche-compatible, and its often fatal consequences in terms of poor performance or product delay.

Grundy Business Systems, producer of the NewBrain, went into liquidation in mid-1983, before the promised expansion units to take advantage of CP/M compatibility could be brought to market, after overexpanding production against optimistic sales projections.133 Sinclair suffered a notorious fall from grace in 1984 with its new model, the QL, aimed at small business and more 'serious' home users but maintaining the Sinclair ethic of unusual proprietary design. Again, it is important to realize that this was not irrational. Boasting a new-generation 32-bit processor and priced at £399 (compare the almost simultaneous UK launch price of £1,795 for the Apple Macintosh), the machine could have followed previous Sinclair models in building a new user market.
It failed principally through embarrassing supply delays,134 which, alongside costly failures in Sinclair's non-computer activities, triggered a loss of shareholder confidence, resulting ultimately in the sale of Sinclair's computer business (including the still successful Spectrum home micro) to Amstrad in 1986.135

Ferranti's Advance (offering expansion from 'home micro' to fully
fledged PC) was a quiet failure, illustrating the limitations of the expansible approach. Whereas Commodore and Sinclair's cheap, complete home systems were supported by an extensive, well marketed (if less than 'serious') range of software, the Advance 86a offered a limited gateway to software possibilities that, though similarly extensive, were barely promoted at all to home-user audiences, particularly outside the United States. If W. H. Smith had hoped to rely on clip-on expansibility as a selling-point for a 'small business' market intersecting with the home field it knew best, no such market emerged.136 Subsequent Ferranti micros were PC clones of conventional design. ACT/Apricot, likewise, made the jump to straightforward PC-compatibility in 1986.

The cheap, non-compatible 'home user' niches, in fact, present the commercial success stories of this era in Britain. When first-generation Sinclair and Commodore users upgraded, in the mid-to-late 1980s, it was generally not to IBM-compatibles, but to new, 16-bit proprietary models offered by Atari, an established US videogames and home micro specialist, and again by Commodore. In another example of the DR push into non-established markets, Atari's proprietary operating system, TOS, was in large part an adaptation of GEM.

The principal survivor among 'serious' adaptable niche producers, and arguably in the British microcomputer industry in general, was Acorn. Holding the core education market with its BBC Micro, the Cambridge firm continued to find purchasers, often with school-aged children, who valued its reputation and software base. Moderately enhanced versions of the machine appeared until 1986 (prompting bewilderment from some technically minded reviewers, as the processor and core architecture were now very old indeed; this, of course, as for the Amstrad machines, was to miss the point).
Acorn suffered the same classic crisis as Grundy, however, overstocking heavily on a new, cheaper 'home' machine that proved unsaleable. In February 1985 – after the ABC pre-announcement, but before the new machines, and the fruits of the DR partnership, could be brought to market – Acorn was rescued, on an arrangement amounting virtually to acquisition, by Olivetti of Italy.

Far from representing a consolidation of the microcomputer industry, this development gave rise to one of the most interesting turns in its twisting saga. Acorn maintained its identity, and was not so much allowed to continue heterodox development as positively constrained to do so. Olivetti, now manufacturing PC clones under its own badge, could support a small, research-oriented manufacturer, whose ideas could serve local niches and might be of long-term benefit; not so an expansionist operation targeting Olivetti's existing, congested demographic.137 The niche-compatible ABC series was dropped and rapidly forgotten, and Acorn thereafter focused on its existing home and educational markets, alongside a number of high-specification, low-volume research niches.

Non-compatible approaches, then, could be resilient; yet none of the platforms discussed here endured indefinitely. In the mid-1990s, Commodore collapsed and Atari left the market, while even Acorn had faded
gradually from existence by decade's end. A factor in these developments, we must admit, was the growing cheapness of generic PC hardware thanks to the mass-production feedback effect (rising demand and falling production costs reinforcing each other): ultimately, some users decided that the relative cost of niche loyalty was too high. Such are the complexities of computing platforms, however, that this economic factor must be treated as double-edged. Mass-production logic may, to some degree, determine component homogeneity: virtually all microprocessors in small computers today follow the Intel archetype. At the platform level experienced by users, however, this development may facilitate either uniformity – more use of standard PCs – or heterogeneity. Apple's 2006 co-option of Intel processors to run Microsoft Windows alongside its own proprietary operating system is a classic niche-compatible manoeuvre, which so far seems viable. As ever, a range of social, economic and happenstance factors will decide the case.

CONCLUSION
The hegemony of the IBM PC archetype was not inevitable. It has always faced sustained competition from niche alternatives, presented at various technical levels through both incompatible and compatible approaches. Its uniformity in practice has been widely overstated, especially as regards the early years of its incubation and markets outside the United States. Since the PC's origination, its form has been contested and transmuted; its explanatory power as a category now seems to be waning. If so, of the numerous heterodox platform proprietors, Apple at least has survived it; others might plausibly have done so but for various contingent developments. A relatively unexamined area in my study is the issue of applications software, which became increasingly crucial with the eclipse of IBM and the fast-growing hegemonic role of Microsoft. There is an argument to be made that Microsoft enjoyed not so much a path-dependent apotheosis as a sustained run of luck in repulsing challenges to its special status. Beyond Digital Research, we might consider software applications such as WordStar or Lotus 1-2-3, which showed clear market dominance in the CP/M or early PC eras, and whose proprietors sought to make them integral to the PC standard.138 Perhaps the strongest example arose much later: the Netscape web browser, around 1995–97, presented a non-Microsoft hegemony in a field that seemed, to many, to be fundamentally redefining the personal computer. Microsoft's deflection of this challenge, though successful, drew on its monopolist advantages to an extent that directly triggered the antitrust civil actions of 1998, which nearly saw the corporation disaggregated into separate operating systems and applications arms. I do not deny, of course, the genuine uniformity that was established in the wake of IBM's intervention. The PC platform, after all, displays the signal feature of uniform standardization: it has become invisible.
To the rising generation of non-expert users, the artefact is merely 'a computer'. Computers, with the possible exception of Apple Macs, do not possess
kinds, and 'PC' is merely an initialism, its specific association with the 5150 long forgotten. Yet, this uniformity has its limits. Microsoft today offers the dominant product in some, but by no means most, applications sectors; sometimes, there is no dominant product. Microsoft's Windows operating system, offered in a limited range of configurations, is challenged by a variegated panoply of open-source alternatives, while user customization through software 'extensions' is a key promotional feature of the Firefox browser. The personal computer's material form, meanwhile, is vastly more heterogeneous than it was in 1981. What defines the limits of uniformity? The mistaken belief that convergence is inevitable rests on the assumption that it is generally desirable – an uncontroversial matter of convenience, evidently helpful in the eyes of producers, mediators and users alike. If we instead locate particular and contingent causes for convergence (such as mass-production logic or commercial monopoly-seeking), we may expect uniformity to be absent where these features cannot gain traction. This is often the case where specialisms present strong niche identity, where heterogeneity has little apparent cost or where high-volume commercial activity cannot easily demonstrate gains. To give just one example, the idea of a hegemonic standard coding language is unknown. Partisans of competing languages still enthusiastically debate their efficiency, intelligibility, fitness to specific purpose and aesthetics, while languages differing only moderately from their predecessors continue to be incarnated under new names. While this element of personal computing activity may be unfamiliar to most users, it is hard to think of a case more obviously crucial to the way computer systems evolve. Homogeneity, as noted, has a habit of becoming invisible; but so, too, does heterogeneity, if we make the mistake of reducing the computer to its most uniform elements.
The uniform and the diverse, as ever, stand in complex and creative tension.
ACKNOWLEDGEMENTS
Various iterations of this paper have benefited from careful reading by members of the Physical Science and Technology Reading Group, Centre for the History of Science, Technology and Medicine, University of Manchester; by Tom Lean, Andy Russell, Graeme Gooday and Leucha Veneer; and by the referees. My thanks also to Portia Isaacson Bass for making available some invaluable material.
NOTES AND REFERENCES
1. While it is frequently noted that IBM had already released single-user microcomputers, the 5100 Portable (1975) and 5110 (1978) range, these models had little direct influence on later developments. Costing in the region of $10,000–$20,000, they sold in small numbers and were not addressed to the emerging personal microcomputer sector.
2. A brief investigation of this shift is given in J. Sumner, 'What Makes a PC? Thoughts on Computing Platforms, Standards and Compatibility', IEEE Annals of the History of Computing, 2007, 29(2): 87–8.
History of Technology, Volume Twenty-eight, 2008
The Rise of the PC Computing Platform
3. J. Schofield, 'Happy Birthday to the PC, a Tool that Changed the World', The Guardian, Technology supplement, 2006, 17 August, 6.
4. P. A. David, 'Clio and the Economics of QWERTY', American Economic Review, 1985, 75(2): 332–7, presents the hegemony of the QWERTY keyboard as the result of path-dependent standardization on the 'wrong system' (336, David's italics). David's concern was to demonstrate the historicity of economic processes, rather than to advocate for unadopted alternatives. Such advocacy is, however, a persistent feature of both professional and enthusiast technical literatures: see, e.g. R. Parkinson, 'The Dvorak Simplified Keyboard: Forty Years of Frustration', Computers and Automation Magazine, 1972, November: 18–25, available online at http://infohost.nmt.edu/~shipman/ergo/parkinson.html, accessed 31 January 2008. The validity of the technical inferiority claim is challenged in S. J. Liebowitz and S. E. Margolis, 'The Fable of the Keys', Journal of Law and Economics, 1990, 33: 1–26. A typically intense response from the technical advocate position is M. W. Brooks, 'Dissenting Opinions', 1996, rev. 1999, available online at www.mwbrooks.com/dvorak/dissent.html, accessed 31 January 2008.
5. Andrew Russell's chapter in this volume critiques a similar 'sluggish monopoly' characterization applied to AT&T.
6. My concern to address the systems context inevitably draws on the theoretical framework of Thomas P. Hughes. For present purposes, this is best seen via his treatment of computer networking: T. P. Hughes, Rescuing Prometheus: Four Monumental Projects that Changed the Modern World (New York, 1998), 255–300.
7. My approach here bears similarities to the concept of system modularity found in some of the more use-oriented economic literature: R. Langlois and P. Robertson, 'Networks and Innovation in a Modular System: Lessons from the Microcomputer and Stereo Component Industries', Research Policy, 1992, 21: 297–313.
This literature, however, addresses only the more readily specified hardware and software levels.
8. P. A. David and S. Greenstein, 'The Economics of Compatibility Standards: An Introduction to Recent Research', Economics of Innovation and New Technology, 1990, 1: 3–41, on p. 4.
9. M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry (Cambridge, MA, 2003), 231–66.
10. My thanks to Ian Martin for his comments on this point.
11. D. Skinner, 'Technology, Consumption and the Future: The Experience of Home Computing', Ph.D. thesis, Brunel University, 1992, 32–69.
12. See in particular L. Antill, 'The Systems Approach', in D. Olney (ed.), Microcomputing for Business: A User's Guide (London, 1982), 9–82, esp. 11–13.
13. For the pre-digital and early digital systematization of corporate information management and exchange, see J. Yates, Control through Communication: The Rise of System in American Management (Baltimore, 1989); J. Yates, Structuring the Information Age: Life Insurance and Technology in the Twentieth Century (Baltimore, 2005). On pre-digital systematization as a gateway to computerization in general, see also J. Agar, The Government Machine: A Revolutionary History of the Computer (Cambridge, MA, 2003).
14. Campbell-Kelly, op. cit. (9), 215–21.
15. The essence of the TRS-80 user community is well characterized in C. Lindsay, 'From the Shadows: Users as Designers, Producers, Marketers, Distributors, and Technical Support', in N. Oudshoorn and T. Pinch (eds), How Users Matter: The Co-Construction of Users and Technology (Cambridge, MA, 2003), 29–50.
16. P. Grindley, Standards Strategy and Policy: Cases and Stories (Oxford, 1995), 131–55. I accept Grindley's characterization of the subsequent PS/2 architecture, which I do not address here, as an unsuccessful attempt by IBM to 'reclose' the standard.
17. J. Chposky and T. Leonsis, Blue Magic: The People, Power and Politics behind the IBM Personal Computer (London, 1989), 23–4.
18. 'IBM's Estridge' (interview with Don Estridge), Byte, 1983, 8(11): 88–97, esp. 89.
19. Note also the machine's conservatism at the aesthetic design level: P. Atkinson, 'The (In)Difference Engine: Explaining the Disappearance of Diversity in the Design of the Personal Computer', Journal of Design History, 2000, 13(1): 59–72, on 65.
20. One widely distributed version of the legendary account is R. X. Cringely, Accidental Empires: How the Boys of Silicon Valley Make their Millions, Battle Foreign Competition and Still Can't Get a Date, 2nd revised edn (London, 1996), 128–9.
James Sumner
21. P. Isaacson and E. Juliussen, 'IBM's Billion Dollar Baby: The Personal Computer', 1981, 1. (Not currently available online. Richardson, Texas: Future Computing Inc.)
22. The episode is covered in Chposky and Leonsis, op. cit. (17), 39–53; Cringely, op. cit. (20), 127–34; Campbell-Kelly, op. cit. (9), 239–40.
23. 'CP/M, the First PC Operating System', Digital Research website, n.d., available online at www.digitalresearch.biz/CPM.HTM, accessed 27 September 2007. Tim Paterson, author of the project that became MS-DOS, repudiates this characterization: see P. Ceruzzi, A History of Modern Computing, second edn (Cambridge, MA, 2003), 270–1.
24. R. Langlois, 'External Economies and Economic Progress: The Case of the Microcomputer Industry', Business History Review, 1992, 66(1): 1–50, on 21.
25. Gates to Portia Isaacson, 29 September 1981 (personal archives of Portia Isaacson Bass).
26. By 'compatibility' in this paper, I generally intend what David and Bunn term 'substitute' compatibility (as between an IBM and a clone PC, one of which may be used in place of the other). 'Complement' or 'vertical' compatibility (as between the PC and its software) is, however, a crucial element of system or platform definition, as will become clear. P. David and J. Bunn, 'The Economics of Gateway Technologies and Network Evolution: Lessons from Electricity Supply History', Information Economics and Policy, 1988, 3(2): 165–202, on 171.
27. R. C. Brooks, 'Standard Screw Threads for Scientific Instruments', History and Technology, 1988, 5: 59–76; D. Puffert, 'Path Dependence in Spatial Networks: The Standardization of Railway Track Gauge', Explorations in Economic History, 2002, 39: 282–314.
28. A defining influence in the development of electronic digital computing technology was the concept of the universal machine, a tool whose purpose and behaviour, understood in terms of symbolic manipulation, are not determined by its form or circumstances but could (within various material constraints) be extended to encompass anything. For the universalizing shift, see, e.g. A. Hodges, Alan Turing: The Enigma, revised edn (London, 1992), 290–305.
29. Frank Veraart's contribution to this volume discusses a rather different version of the universal translation principle. 'Basicode' achieved acceptable speed and convenience by deliberately constraining the possibilities of the system, in a manner that suited hobbyists rather than professional users.
30. The Apple II, often noted as a 'closed-architecture' machine, was in fact designed from the outset to be 'open' to modular expansion possibilities such as the SoftCard: Langlois and Robertson, op. cit. (7), 307.
31. Compare Apple's dilemma over the licensing of its Macintosh user interface in 1987, in which the lock-in existed at the hardware level: Grindley, op. cit. (16), 152.
32. 'Has the Future-Proof Computer Arrived?', New Scientist, 1982, 23/30 December: 800; 'Tycom Offer a Free Computer' (advertisement), The Times, 1983, 27 September: 21A. Tycom claimed 'future-proof' as a trademark; the earliest documented use of the term traced by the Oxford English Dictionary occurs in an advance report on the Microframe from February 1983.
33. G. R. Taylor and I. D. Neu, The American Railroad Network, 1861–1890 (Cambridge, 1956), 59–60.
34. Cf. David and Bunn, op. cit. (26), 170–1. David and Bunn's characterization of the 'gateway' goes beyond most economic literature in capturing the looseness of compatibility in practice, but does not address less technically formalized modes of 'compatibility'.
35. Summarized in R. Ward, 'Levels of PC Compatibility', Byte, 1983, 8(11): 248–9.
36. Ceruzzi, op. cit. (23), 270.
37. An excellent study addressing physical form is P. Atkinson, 'Man in a Briefcase: The Social Construction of the Laptop Computer and the Emergence of a Type Form', Journal of Design History, 2005, 18(2): 191–205. See also Atkinson, op. cit. (19), esp. 64–67, 70.
38. 'IBM's Estridge', op. cit. (18), 90.
39. Grindley, op. cit. (16), 142–4, offers some useful comments on Apple's niche presence from an economic perspective.
40. M. Campbell-Kelly and W. Aspray, Computer: A History of the Information Machine, 2nd edn (Boulder, CO, 2004), 121–2.
41. Schmidt and Werle have applied the social constructivist's concept of interpretive
flexibility to the microcomputer, noting its various meanings as perceived by different user groups. Their principal distinction, between the 'stand-alone' and 'networked' machine concepts, is not addressed here, as it only became significant in the mass market towards the end of my study's timeframe. S. Schmidt and R. Werle, Coordinating Technology: Studies in the International Standardization of Telecommunications (Cambridge, MA, 1998), 74.
42. N. Selwyn, 'Learning to Love the Micro: The Discursive Construction of "Educational" Computing in the UK, 1979–89', British Journal of Sociology of Education, 2002, 23(3): 427–43. A useful contemporary source on the emergence of this ethos is P. Kriwaczek, 'BBC Television's The Computer Programme: Evolution and Aims', in 'The BBC Computer Literacy Project: Has It Succeeded?', papers of Professional Group Committee C6 of the IEE, December, 1982, copy in British Library.
43. L. Haddon, 'The Home Computer: The Making of a Consumer Electronic', Science as Culture, 1988, 2(1): 5–51, on 30. For Sinclair as a national success story, see R. Dale, The Sinclair Story (London, 1985); and for the problematics of this characterization, I. Adamson and R. Kennedy, Sinclair and the Sunrise Technology: The Deconstruction of a Myth (Harmondsworth, 1986). See also T. Lean, '"What Would I Do with a Computer?" The Shaping of the Sinclair Computer, 1980–1986', MA thesis, University of Kent, 2004, 14–34.
44. For instance: [Microcomputers Etc. advertisement], Computer Age, 1(9), August 1980: 2.
45. Dale, op. cit. (43), 107.
46. For instance, E. Deeson, 'The Sinclair ZX-81: Toy or Treasure?', Physics Education, 1981, 16: 294–5. Veraart's chapter in this volume examines the hobbyist cultures that also helped to sustain such machines.
47. B. Bagnall, On the Edge: The Spectacular Rise and Fall of Commodore (Winnipeg, 2005), 152, 159.
48. Haddon, op. cit. (43), 36–42, 45–9.
49. These products await serious analysis, but are often already the subject of chronicles by insiders or enthusiasts. See, e.g. D. Linsley, 'A Slayed Beast: History of the Dragon Computer', revised edn, n.d., available online at www.dragon-archive.co.uk/index.php?option=com_content&task=view&id=8&Itemid=27, accessed 25 September 2007; J. Haworth, Oric: The Story So Far (Cambridge, 1992), available online at http://oric.ifrance.com/oric/story/contents.html, accessed 3 October 2007.
50. The assertion of thorough-going 'Britishness' in parts and manufacture was a running theme in the punched-card era: M. Campbell-Kelly, ICL: A Business and Technical History (Oxford, 1989), 79–81, 100–2. Governmental promotion and subsidy of British equipment are perhaps most closely associated with the Wilson administration's 1960s Ministry of Technology: op. cit., 246–8.
51. G. Laing, Digital Retro (Lewes, 2004), back cover, provides an evocative, rather tongue-in-cheek characterization: 'Long before Microsoft and Intel ruled the PC world, a multitude of often quaint home computers were battling for supremacy ... Products from established electronics giants clashed with machines which often appeared to have been knocked together in a backyard shed by an eccentric man from Cambridgeshire. Plenty actually were. Compatibility? Forget it!'
52. For which (defined with respect to the earlier, large-computer case), see J. Hendry, Innovating for Failure: Government Policy and the Early British Computer Industry (Cambridge, MA, 1989).
53. T. Lloyd, Dinosaur & Co.: Studies in Corporate Evolution (London, 1984), 106, 109–10, 114–15; Dale, op. cit. (43), 85. Specifications as given in Laing, op. cit. (51), 102–5, referring to the higher-specified of two models launched.
54. R. Woolnough, 'Starting Young', The Times, 1983, 7 June: 23G. From a founding basis of direct collaboration with Acorn, Torch moved to be a largely independent OEM client.
In 1984, however, Acorn acquired what amounted to a controlling stake in the smaller firm.
55. Lloyd, op. cit. (53), 113–14, presents the Tube concept as emerging as a means to avoid conflict among the broad range of expert constituencies within Acorn.
56. Beebug [BBC Micro user group newsletter], 1982, October, 1(6): 22.
57. For example, [Torch Disc Pack advertisement], Micro User, 1983, April, 1(2): 3.
58. [Torch advertisement], The Times, 1983, 4 July: 25A.
59. For instance, J. Derrick and P. Oppenheim, What to Buy for Business (London, 1982), 122–3 and passim.
60. Practical Computing, 1984, 7(6), June: 67.
61. [Ferranti Argus Pro-personal advertisement], Practical Computing, 1984, January, 7(1): 74–5.
62. Personal Computer World, 1984, April, 7(4), Supplement: 15.
63. R. Cullis, 'To the BBC by Bus and Tube', Practical Computing, 1984, December, 7(12): 81–5.
64. M. May, 'New Micro for Small Business', The Times, 1984, 8 May: 23H; P. Liptrot, 'W H S Goes for the Advance', Home Computing Weekly, 1984, 15 May: 5.
65. D. Thomas, Alan Sugar: The Amstrad Story (London, 1990), 123–4.
66. Thomas, op. cit. (65), 230.
67. Thomas, op. cit. (65), 184.
68. Quoted in Thomas, op. cit. (65), 176.
69. Thomas, op. cit. (65), 224–6.
70. G. Kewney, 'Newsprint', Personal Computer World, 1984, October, 7(10): 86.
71. 'New Acorn Micro', Acorn User, 1984, October, 3(3): 7–8; G. Kewney, Personal Computer World, 1984, 7(12): 100–1; P. Bright [ABC 310 benchtest], Personal Computer World, 1985, 8(4): 121–6.
72. C. Cookson, 'Micro Boom's First Victim', The Times, 1983, 31 August: 1H.
73. Dale, op. cit. (43), 133–42; Adamson and Kennedy, op. cit. (43), 153–82.
74. Adamson and Kennedy, op. cit. (43), 217–24; Thomas, op. cit. (65), 188–206.
75. 'Smiths: Advance Backs Out', The Times, 1984, 24 October: 34B.
76. M. Banks, 'Making Acorn Fit the Space', The Times, 1985, 6 August: 23A.
77. Campbell-Kelly, op. cit. (9), 216–19, 251–66.
Basicode: Co-Producing a Microcomputer Esperanto
FRANK VERAART
INTRODUCTION
Hundreds of thousands of Dutch viewers must have seen the demonstration of Basicode on the television show Horizon in 1981 (Figure 1). The broadcast showed a group of five computer hobbyists, with different personal computers, who had worked together on a system to exchange software between their distinct machines. The system used a special language and formats, which the hobbyist designers called Basicode. Software written in Basicode was distributed by broadcasts on Dutch national radio; within three years, Basicode was used on British, German and other foreign radio stations. Basicode was a successful collaboration of computer amateurs and broadcasting professionals that came to an end with the rise of a de facto standard for personal computers in the late 1980s. Studying the history of Basicode's development raises interesting questions about the cooperation of amateurs and professionals, producers and users, and their co-construction of technology. The role of users has received much attention in the history of technology in recent years.1 In the past decade, Dutch history of technology has gained significant momentum through two projects focusing on technological developments in the Netherlands in the nineteenth and twentieth centuries. These projects, coordinated by Harry Lintsen and Johan Schot, resulted in a series of 13 volumes on Technology in the Netherlands (TIN).2 Through this work, a significant new approach in studying technological development evolved: not only invention and production, but also the adoption of technology played an important role. Whereas the nineteenth-century series addressed the question of a supposed technological lag, as seen in the minor role steam technologies played in the Netherlands in the early nineteenth century, the twentieth-century series paid attention to the technological development of seemingly less industrial sites, such as households, offices and streets.
The latter showed how technology became intertwined with a very broad range of aspects of our daily life. Accordingly, research focused on the interplay between the developers and the users of technology.
Figure 1 Basicode as demonstrated on the Dutch television show Horizon in 1981. A Basicode computer program, broadcast by radio, was captured on a cassette tape (right of image) and loaded onto several different computer platforms. Reproduced from H. Janssen (ed.), Basicode (Hilversum, 1981), copy in Klaas Robers' personal collection.
Dutch historians of technology adopted from women's and consumer studies a focus on the roles of intermediary actors, who mediate between developers and users of technology: this mediation, in the words of Schot and Albert de la Bruhèze, is 'a process of mutual articulation and alignment . . . [in which] product characteristics, the use, the users and the user's demand become defined, constructed and linked'.3 In these studies on mediating actors, much attention is given to the different 'users' in the process of design and mediation. Designers construct an 'envisioned' user for their product. Intermediary actors, such as pressure groups, 'represent' users during usability testing and at expert panels; they also actively construct a 'projected' user, a vision of the ideal user. Mediators promote this projected view of use in their activities and publications. These studies show the importance of the various activities of mediating actors.4 This chapter seeks to extend elements of this approach into the history of computing – a field in which sustained attention to the user is a relatively new trend, as is discussion of personal computing.5 Until recently, the preponderance of studies focused on designers, production and industrialization, in a variety of internal-technical, economic and business history contexts. At present, 90 per cent of Dutch people use computers at home. While sociologists are addressing present-day computer use, computer literacy and differences between social groups, there has been little systematic research into the historical development of computer use in the home, and even less about how users have shaped computer use.6 Another concern is to address differences in approaches to development
between 'amateurs' (enthusiasts pursuing the technology outside the context of any formalized commercial or public-service commitment) and 'professionals'. Identification of specific characteristics of Basicode due to the systematic involvement of amateurs may shed light on the development of computer use, technological development and standards formation by users more broadly, notably in the case of the present-day open source movement. In her 1984 ethnographic work, Sherry Turkle studied hobbyist culture, pointing out the close relationships computer hobbyists had with their machines. Turkle distinguished two different styles of relation. In the first style, associated with the early computer hobbyists who defined the field, transparency, predictability and reassurance played an important role. What these users strove for above all was a firm understanding of the electronics of the computer. In the second style, the focus was more on computer performance and programming, the more 'mystical' elements of computer use, associated with computer 'hackers' and users whom Turkle termed 'wizards'. For these users, understanding of the electronic workings was less important. With the use of 'higher-level' programming languages, more elegant than raw machine code, these users focused on finding solutions for new ideas and problems.7 The working practices of hackers were studied also by Pekka Himanen in his philosophical work, The Hacker Ethic. Himanen analysed hackers and computer enthusiasts whose work led to the Linux operating system, finding that the motives of hackers were reinforced by passion and craftsmanship.
Crucial to the development process was an open system for information sharing, created by the enthusiasts themselves, which enshrined a clear concept of peer recognition.8 Other scholars have nuanced these typologies, pointing out that there is not one single hobbyist or hacker 'culture', but rather a mix of internally heterogeneous and historically dynamic cultures. Hobbyist culture has a lot of ambiguity, and even a sharp boundary between design and application seems rather problematic. For hackers, the production and consumption of technology was very much intertwined, and the two are not necessarily distinct categories.9 Finally, this chapter seeks to address the consequences of cooperation between consciously distinct 'amateur' and 'professional' groups for the ultimate fate and reputation of the Basicode technology. As Graeme Gooday has pointed out, 'success' and 'failure' are not particular to technologies themselves: they are judgements, constructed by the members of specific interest groups, which need not be accepted outside those groups, may be applied only in specialized senses, and are liable to vary or even to be reversed over time.10 As we will see, what might conventionally be portrayed as the 'failure' of the Basicode approach was distinctly not so from the perspective of its creators and chief proponents. The technology underwent a distinctive full-circle trajectory from beginnings among enthusiasts, through co-option for mass-audience use by broadcasting professionals, to a final reformulation that firmly restated the original enthusiast ethic.
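The division of labour that Basicode proposed, namely a portable program written in a restricted common subset of BASIC that calls a small set of standard machine-specific subroutines supplied separately for each computer, can be sketched in a short listing. The listing below is an invented illustration, not a transcription of any broadcast; the convention of reserving the line numbers below 1000 for the per-machine routines, with the portable program beginning at line 1000, follows commonly published descriptions of the Basicode scheme.

```
10 REM MACHINE-SPECIFIC LAYER: DIFFERENT ON EACH COMPUTER,
20 REM TYPED IN OR LOADED ONCE, THEN REUSED FOR EVERY PROGRAM
30 GOTO 1000
100 CLS : RETURN
1000 REM PORTABLE PROGRAM: COMMON BASIC SUBSET ONLY
1010 GOSUB 100
1020 PRINT "BASICODE DEMONSTRATION"
1030 END
```

On a machine whose BASIC dialect lacked CLS, only line 100 would differ (on a Commodore machine, for instance, it might read 100 PRINT CHR$(147) : RETURN); everything from line 1000 onward could be distributed unchanged to every platform.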
PREPARING FOR THE 'INFORMATION SOCIETY'
The development of Basicode must be understood in relation to awareness-raising activities surrounding the arrival of the 'information society' and early personal computer use in the late 1970s. In this period, the Netherlands, in common with most of the industrialized West, was preparing itself for the arrival of the 'information society'. A governmental commission of 1979, set up to assess the possibilities of the new microprocessor technology, resulted in action at the level of industrial policy, educational programmes and public awareness campaigns.11 With the price of microprocessors falling, pocket calculators and then fully fledged personal computers began to come within the financial means of individuals. The growing use of small computers by various technical hobbyists, however, received little or no attention from professionals in large-computing cultures. Television and telecommunication firms, when promoting information technology to home users, thought primarily in terms of service provision from large, centralized computer systems, connected by the telecommunications network to home consoles without processing power of their own, as later exemplified by the Dutch Viditel system, Britain's Prestel, or the more successful Minitel in France.12 In the second half of the 1970s, however, a growing group of technical enthusiasts had become interested in assembling their own computers from loose components. Electronics hobbyists were among the first to extend their hobby into computing. Radio amateurs and other technology-driven hobbyists soon followed. These first hobbyist computer users were primarily interested in the internal working of their computers.
Members of the first Dutch computer club, established in 1977, focused on building machines around the MOS Technology 6502 processor, so-called 'KIM' computers.13 In its drive for transparency and predictability, the club exemplified the 'tinkering' style associated with early computer hobbyists in case studies elsewhere.14 As the secretary of this club explained, 'These people had nothing: no software, only a little board they built into a computer by knitting on more memory. Some of them eventually bothered to build a genuine cover for the computer'.15 The founders of the Dutch Hobby Computer Club (HCC) took another approach to the new technology. A few months after the foundation of the KIM group, mathematics students at Leiden University founded the HCC. These students were inspired by American computer clubs of the late 1970s, and by Britain's Amateur Computer Club, founded in 1973. The club started with 15 members; within a year, this figure had grown to 767. In contrast to the KIM club, the HCC was a generalist community, not specifically aimed at a single type of computer. To support model-specific activities, however, the HCC founded user subgroups for the different computer models. An overview of user groups in 1985 showed that 69 per cent of all computer user groups were affiliated to the HCC.16 A useful facility for the computer hobbyists was a published members' list, which enabled members to find peers with the same model of computer and
similar computing interests. A version of this list published in 1979 revealed that about 40 per cent of the hobbyists had an interest in tinkering, and had built computers themselves or from kits. Others showed a more 'wizardly' orientation, with an interest in the programming and application of computers. When the respondents were asked to list their top three modes of use for their machines, the most mentioned overall was 'games'; when users' primary modes of use were listed, however, 'computer studies' (16 per cent), 'system software development' (13 per cent) and 'programming studies' (12 per cent) all preceded 'games' (9 per cent).17 This suggests that HCC members had a particular interest in using the microcomputer as a tool for computer education. In the late 1970s, computing and information technology courses mushroomed at Dutch technical institutes, and also in the public media. In autumn 1978, the Television Academy (TELEAC) broadcast a course that involved personal computers on Dutch national television. The course, 'Microprocessors 1', was followed by almost 13,000 people. It gave a general overview of the uses of computers in businesses, the technical layout of a machine, its working and some elements of programming. The follow-up course, 'Microprocessors 2', first broadcast in 1981, was totally focused on programming, in BASIC and in machine code. This course was taken by almost 10,000 people. The course used the specific dialect of BASIC used with the Apple II computer.18 The television courses and club activities helped to popularize personal computing. In the early 1980s, these activities became more commercial. Established businesses in electronics and other sectors began producing computers for the hobbyist market. These firms also established and supported clubs that used their computers. In association with the US corporation Commodore, manufacturer of the PET microcomputer, the PET Benelux Exchange club (PBE) was founded in 1979.
A year later, the Philips P2000 Computer Club (P2C2) opened. P2C2 was a successor to a closed-membership club operated by the Philips research laboratories in Eindhoven, which had tested the P2000 computer designed by the Austrian branch of Philips in Vienna: unlike its predecessor, it was open to the general public, reflecting the public launch of the P2000.19 The foundation of clubs, the television courses and extensive publicity in magazines and books created an atmosphere of excitement about this `technology of the future'. To prepare the new generations for things to come, computer-enthusiast teachers introduced computers in primary and secondary school classrooms. Sometimes, they were backed by school boards or enthusiastic directors, but very often the process was driven by personal commitment. The courses often included the basics of programming and demonstrations of home-made programs created by the teachers. In 1979, hobbyist teachers founded the Didacom (for didactical computer use) group to support the use of computers in schools. Information and software were exchanged at regular meetings. A year later, the Teachip foundation was established by educationalists. This foundation worked together with schools for teaching. Both organizations supported computer
use at schools by informing teachers and through practical support, such as by developing means to exchange software.

HOBBYIST CULTURE AND THE DISTRIBUTION OF SOFTWARE
As Himanen has pointed out, free exchange of knowledge and ideas is a key element in hobbyist culture; the activities of the early Dutch user community were focused firmly on the exchange of information and technology. Group meetings, fairs and magazines were all part of an infrastructure built by hobbyists. Though activities embraced both hardware and software, the increasing shift of attention to computer applications and programming meant that software played a particular defining role: the state of available software became closely associated with the status of the community overall, and, in line with Himanen's argument, status within the community could be gained through software development.20 New generations of computer hobbyists added further elements to the hobbyist culture: not only the production, but also the possession and distribution of software became more important. This led to the collecting and `cracking' of software as two new features of the hobbyist culture. Some hobbyists started to build up large collections of software, while others also started to break copy-protection and adjust the code of commercial software products in order to run them on their own machines. The amount of available software became a status issue for members of the community, as well as for the popularity of computer models. This culture was supported by an accepted practice among members of copying each other's software. User groups and clubs were the meeting places at which hobbyists shared and traded software. Hobbyists had various other ways and means to distribute software: one of the easiest was to distribute printed versions of computer programs. This could be done in higher computer languages such as the dominant microcomputer language BASIC, but also in machine code. In hobbyists' magazines, these printed programs – so-called `listings' – were omnipresent. As interest in programming increased, storing the programs became an issue.
Back in 1975, American hobbyists had worked together with computer developers and the computer magazine BYTE to create a standard for the storage of digital information in an audio format. In this so-called `Kansas City' standard, the digital 0 and 1 were represented by audio signals of 1200 and 2400 Hz respectively, and a transfer speed of 300 baud was agreed upon.21 In the first issue of the HCC magazine, this standard was introduced, together with the idea to store programs on cassette tapes in this fashion.22 The Dutch electronics magazine Elektuur started a `software service' in 1978, issuing LP records with programs in Kansas City encoding for the computer the magazine supported. Cassette audiotapes, however, turned out to be the most popular option for storing computer software. The cassettes themselves were cheap, and copying software using ordinary audio installations, already widely distributed and familiar, was easy. A
few computer models, including the Commodore PET, had a built-in cassette recorder to store and play the programs, but most computers used an external tape drive. The more expensive computers in the early 1980s changed from cassette to 5.25-inch floppy discs, which were also used in business-oriented (mini)computers. By 1984, most new personal computers were equipped with discs. However, this by no means meant the abandonment of tapes: it remained possible to use both tapes and discs on most computer systems, and many users continued using tapes – the most generally accessible medium – to store, trade and share software for their systems, with user-group meetings continuing to serve as a hub for this activity. The use of audiotapes for software storage only declined sharply with the growing popularity of IBM-compatible systems from around 1987 onwards.

TOOLS FOR SOFTWARE TRANSFER AND THE CREATION OF BASICODE
The distribution and availability of software were important issues in selecting which of the various, largely incompatible, models of personal computer to buy: what determined a model's popularity, by and large, was not so much its stand-alone performance, as the presence or absence of an effective hobbyist infrastructure. Computer manufacturers, unsurprisingly, quickly became active in the setting up and fostering of user groups – a process that underscored the importance of software and its distribution in the hobbyist community. The manufacturers' interests and ideas, however, tended to contrast with those of others keen to promote use. Computer companies were interested in developing the largest market share, and so supported initiatives that created niches for their computers. To achieve the best performance on their specific hardware assemblies, these producers promoted distinct, machine-specific versions of the Basic programming language. This practice constrained (and sometimes prevented) the exchange of software with other types of machines, which hardened competition between the platforms of rival manufacturers. However, actors with a general interest in computer use, such as the hobbyist teachers mentioned earlier, viewed the differences in Basic as one of the negative consequences of competition, and sought ways to remove differences and permit more efficient software distribution. This led to the unique innovation of the radio broadcasting of software using Basicode, whose development saw hobbyists and professionals working together. Hobbyist–professional collaborations were established under the auspices of a radio show, Hobbyscoop, broadcast by the Dutch national public radio station, NOS. Since 1968, this weekly show had dealt with the latest in electronics and radio experimenting for an audience of primarily amateur enthusiasts.
The show itself contributed to various radio experiments, such as the experimental broadcast of a stereo signal from 1974 onwards, and the radio transmission of telex, facsimile and videotex pages. In the spirit of these experiments, broadcasts of software for
microcomputers began in 1978. Radio and computer amateurs worked together with Hobbyscoop producer Hans G. Janssen in a couple of these experiments. The first software broadcast was a joint project between Hobbyscoop and an amateur radio magazine, Radio Bulletin. The experiment of broadcasting software using audio signals received additional impetus from Chriet Titulaer, course manager at TELEAC. Titulaer approached Janssen to broadcast the software used in the television course `Microprocessors 2'. Janssen agreed to support the television course, and the radio broadcasting of software became a regular feature of the Hobbyscoop show. Since the course used Apple II computers, the software was broadcast in this format. After the completion of the television course, software broadcasts continued and were widened in scope. By broadcasting software for the four most popular computer systems, the Apple II, Tandy TRS-80, Commodore PET and Exidy Sorcerer, Hobbyscoop tried to reach out to the broad community of computer hobbyists. Programs were broadcast for each of the systems independently, and software for only one of these systems was broadcast each week: therefore, it took a whole month before all four systems were served. This situation was far from ideal in the eyes of the broadcasting professionals of the NOS.23 The exchangeability of software among hobbyists was also a widespread concern within the Teachip foundation, which distributed software written by the participating teachers, and among other actors working on educational software development and distribution. To improve exchangeability, the Teachip group advised those developing software to avoid computer-specific instructions as much as possible, and to use an agreed convention for the overall structure of programs. 
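The kind of conversion list Teachip circulated can be pictured as a simple lookup table. The sketch below is a hypothetical Python reconstruction, not anything Teachip actually distributed; the clear-screen idioms for the three machines are real dialect differences, but the table and function names are invented for this illustration.

```python
# A toy, table-driven dialect converter in the spirit of the Teachip
# conversion list. The clear-screen idioms below are historically
# accurate; the code itself is an illustrative sketch, not Teachip's tool.
CLEAR_SCREEN = {
    "apple2": "HOME",            # Applesoft BASIC (Apple II)
    "pet": "PRINT CHR$(147)",    # Commodore PET
    "trs80": "CLS",              # Tandy TRS-80 Level II BASIC
}

def convert_clear_screen(line, source, target):
    """Swap one machine's clear-screen idiom for another's."""
    return line.replace(CLEAR_SCREEN[source], CLEAR_SCREEN[target])

print(convert_clear_screen("100 HOME", "apple2", "pet"))  # -> 100 PRINT CHR$(147)
```

A table like this handles single-statement substitutions; constructs with no equivalent on a given machine were exactly what the Teachip protocol told programmers to avoid in the first place.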
As an aid to the teachers involved in programming, they published a list of the general BASIC commands that could be read by all computers, together with a conversion list for instructions that differed in various cases. This standardized program structure was based on a scheme made by the British association, Microcomputer Users in Secondary Education (MUSE). In this standardized format, the program was divided into blocks in which every block had compulsory line numbers and functions; the program was thus more easily accessible and adaptable for programmers on other computer systems.24 Initially, the Teachip protocol was established as a matter of user etiquette, rather than as part of an automated interface. By definition, programs written in accordance with the protocol could make only limited use of any given computer's capabilities.25 Hans Janssen also raised the problem of exchangeability with Klaas Robers, a friend with a background in amateur radio. Robers was struck by the similarities of the situation to a problem he had recently tackled in his job at the Philips research facilities in Eindhoven. There, Robers was involved in exploring the possibilities of the Philips microcomputer, P2000; but he had previously been researching the use of videodisc systems in education, and had developed a control system to access and play parts of the videodisc using his own Apple II computer. When the P2000 machines
came to Eindhoven, Robers was asked to adjust his set-up to the P2000 computer to make a complete Philips system for demonstrations. In order to do this, he was concerned to develop a means of using the existing Apple software on the `incompatible' P2000.26 Robers' hobbyist interest – he spoke in interview of a `sense of solving one of the major hobbyists' problems' – here combined with his professional interest as a Philips employee. While he felt no commitment to the NOS needs, whatever he came up with `needed to work for the P2000 as well', so the Philips computer could profit from the software tool he was working on.27 Robers came up with the idea of making computer-specific software interfaces to process software signals broadcast in a common medium. Three major elements needed to be specified: the broadcast signal, the software interfaces and the nature of the programs themselves. It was to the first two elements that Robers contributed the most. For the audio signals, he extended the Kansas City standard, adding other characteristics such as a seven-bit word length, start and stop bits, an introduction tone and a transfer speed based on the slowest computer then available. He also introduced a checksum procedure, which made evaluating the accuracy of reception possible. A separate, computer-specific program would translate the signal for the various computer types. Robers contacted Hans Janssen after he had succeeded in getting the system to work for his own Apple and P2000 computers. In the early summer of 1982, Janssen organized a meeting of computer amateurs using different systems, at the NOS's Sandbergen education institute in Hilversum. Robers explained his Basicode system and the need for the computer-specific translation programs to the hobbyists. Over the summer, the amateurs developed the machine-specific pieces of software. In the autumn of the same year, the various results were tested at a second meeting.
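The signal side of Robers' scheme can be sketched in a few lines. The 1200/2400 Hz tones and the 300 baud rate come from the Kansas City standard as described above; the exact framing (one start bit, seven data bits, one stop bit), the XOR checksum and the 44100 Hz sample rate are assumptions made for this illustration, not the published Basicode specification.

```python
import math

# Parameters from the text: binary 0 -> 1200 Hz, binary 1 -> 2400 Hz,
# at 300 baud. Framing, checksum and sample rate are illustrative
# assumptions, not the actual Basicode signal specification.
FREQ = {0: 1200, 1: 2400}
BAUD = 300
RATE = 44100

def frame_byte(b):
    """Wrap seven data bits (LSB first) in a start bit (0) and a stop bit (1)."""
    data = [(b >> i) & 1 for i in range(7)]
    return [0] + data + [1]

def checksum(payload):
    """XOR all payload bytes together (an assumed, simple scheme)."""
    c = 0
    for b in payload:
        c ^= b
    return c

def to_audio(bits):
    """Render each bit as one bit-period of its tone."""
    per_bit = RATE // BAUD  # 147 samples per bit at 300 baud
    samples = []
    for bit in bits:
        f = FREQ[bit]
        samples.extend(math.sin(2 * math.pi * f * n / RATE)
                       for n in range(per_bit))
    return samples

program = [ord(ch) for ch in "10 PRINT A"]
bits = []
for byte in program + [checksum(program)]:
    bits.extend(frame_byte(byte))
audio = to_audio(bits)
```

The machine-specific translation program on each computer would perform the reverse operation, turning the received tones back into bytes and verifying the checksum before handing the listing to the BASIC interpreter.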
Robers recalls: `When I entered, I saw a room, and on the tables were the various computer types, but I heard they made same noises and saw they displayed the same listings, and I knew it was all right.'28 The last step was to set the rules for programming. Since the various computers used different dialects, and some were more limited than others in their capabilities, only a restricted version of BASIC could be used. Where variations existed, instructions still needed to be changed to the appropriate machine-specific versions by the users of the programs. To increase accessibility, a protocol was agreed upon, similar to the Teachip standard. This included rules on the use of variables and a recommendation to avoid the GOTO statement, inspired by the debate on structured programming and the use of BASIC in programming courses in schools. Like most of the other elements, this protocol was a joint effort of representatives of different computer associations, including Teachip, HCC and various user groups. Given the name `Basicode', this Esperanto for the microcomputer was introduced to a broader audience in August of 1981. The result of the hobbyists' hard work was publicly televised on Horizon. In September 1982, the first Basicode book was published, in English and Dutch. It
included instructions on how to use Basicode, a specification of the protocol and instructions for software translation for the different computer types. The designers of Basicode had computer hobbyists in mind as users for their set-up. Users were presumed to be technologically interested people with deep knowledge of their own machines – much like the designers themselves. According to this philosophy, it was no problem that users still needed to adjust the broadcast programs, to accord with their own machines, by rewriting individual lines of code: this was well within their presumed capabilities. The main purpose of the software broadcasts, the designers felt, was to spread enthusiasm for programming. When a program was selected for broadcast on the Hobbyscoop radio show, the hobbyist programmers received their 5 minutes of fame in a short radio interview about their program. The team behind the development and broadcasting was also involved in program selection. They were careful only to select programs that were written in accordance with the protocol; thus, the broadcast programs also served as examples of good programming practices, under the definitions adopted by the designers.29

BASICODE-2: `THIS HAS GOT COMPLETELY OUT OF HAND! AND SO IT SHOULD!'
The Hobbyscoop broadcasts of software extended the popularity of computer use in the Netherlands. Of the growing number of people involved in the hobby, inevitably, not all were willing to explore the computer as fully as the Basicode developers might have hoped. For such people, the computer was gradually turning into a `software player', rather than a device to explore.30 The developers, who expected users to undertake a thorough study of the techniques underlying the computer's operation, as (from their perspective) befitted a genuine interest in the subject, were at first surprised to encounter complaints, largely focusing on the work of performing the necessary software adjustments. In a September 1982 evaluation meeting, however, Klaas Robers presented a scheme for a new implementation of Basicode. This would avoid the need for manual adjustment, and would be more suitable for a broader audience of computer users and newcomers. In the development of this second version, Robers worked with Jochem Herrmann, a student at the Eindhoven University of Technology. They came up with an idea for a new programming structure that used subroutines. In this fashion, a program written in the Basicode format would consist of general statements and subroutines. For each of the different microcomputers, a set of subroutines was added to the already existing computer-specific translation programs. Other alterations were a change in the programming order and some additional rules, including a ban on instructions that were not possible on all the computer systems involved. Over the winter and spring of 1983, the hobbyist designers of Basicode worked hard to generate the appropriate computer-specific subroutines,
whilst developing new translation programs for computer models that had appeared since the original specification. In July 1983, `Basicode-2' was announced: a book specifying the new protocol was accompanied by a cassette that contained the Basicode-2 programs and subroutines for the various computer types.31 Both book and cassette could be ordered from the NOS broadcasting organization in Hilversum. These changes meant that, after loading the computer-specific Basicode program, the users could run the broadcast Basicode programs immediately, without further adjustments to the programs to fit their machines. At this level of automation, Basicode began to resemble the `pseudo-machine' concept incarnated in the UCSD p-System, also developed for cross-platform interoperability.32 The p-System, however, was a far more complex endeavour, commercialized with a view to `serious' (`software-player') computer use; Basicode-2, in its simplicity and constraint, continued to reflect an educational and hobbyist ethos. As before, hobbyist programmers were encouraged to send in programs. Before the actual broadcasts, these were selected and checked by Jacques Haubrich, a high-school mathematics teacher, hobbyist and member of the Basicode design team. To do this efficiently, Haubrich had built a screening program to check that the programs did not contain computer-specific instructions. This checking underlined the importance that the development group placed upon the broadcast programs as examples of good programming style, and on establishing Basicode as a true general language for home computers.33 The second version of Basicode gained considerable media attention and magazines called it a huge success. In 1985, out of 200,000 home computer owners, 30,000 listened to the Hobbyscoop broadcast on a regular basis. The radio show's airtime was extended to two shows per week, one on FM and one on AM.
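The kind of automated check Haubrich's screening program performed can be sketched as follows. The idea of rejecting machine-specific instructions comes from the account above; the particular keyword list, and the rule that Basicode program lines start at 1000 (with lower numbers reserved for the translation program), are assumptions made for this illustration rather than details from the source.

```python
import re

# A sketch in the spirit of Haubrich's screening program: reject
# listings that use machine-specific instructions. The keyword list and
# the reserved line-number rule are illustrative assumptions.
FORBIDDEN = {"PEEK", "POKE", "USR", "SYS", "CALL", "HOME", "CLS"}

def screen_listing(listing):
    """Return (line number, problem) pairs for a BASIC listing."""
    problems = []
    for raw in listing.strip().splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", raw)
        if not m:
            continue
        number, body = int(m.group(1)), m.group(2)
        if number < 1000:
            problems.append((number, "line number reserved for translation program"))
        for word in sorted(FORBIDDEN):
            if re.search(r"\b" + word + r"\b", body.upper()):
                problems.append((number, "machine-specific instruction: " + word))
    return problems

listing = """
1000 GOSUB 100
1010 PRINT "HELLO"
1020 POKE 53280,0
"""
print(screen_listing(listing))  # -> [(1020, 'machine-specific instruction: POKE')]
```

A check along these lines enforces the protocol mechanically, which is precisely what let the broadcast programs double as examples of good programming style.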
From 1984 onwards, Hobbyscoop also offered the Basicode programs via Viditel, the Dutch Viewdata/Videotex system; Basicode was in fact promoted as a means of popularizing Viditel (which, like its British and German equivalents, struggled to gain subscribers).34 The Dutch World Service, a sub-division of NOS, also made Basicode software broadcasts on a regular basis: the Basicode-2 book was, like its predecessor, made available in an English-language version, with World Service listeners in mind. Interest from outside the Netherlands came not only from individual users, but from national broadcasting organizations, through liaisons developed by Jonathan Marks of the NOS World Service. In Britain, the BBC took up Basicode in association with Radio 4's computer-enthusiast show, Chip Shop (the software transmissions taking place in the early hours of Sunday mornings, in order not to disturb the general listeners); Germany's WDR3 and broadcasters in Belgium, Denmark and Australia likewise adopted the Basicode-2 format to send software through the air.35 Having sold out within a year, the Basicode-2 book was reprinted in 1984. In a copy that Klaas Robers received from Hans Janssen, Janssen enthusiastically wrote: `This has got completely out of hand! And so it should!'36
Yet, the boom in attention also turned out to have a downside, in its effect on the relationship between the developing hobbyists and professional broadcasters. Negotiations with foreign broadcasters were performed by the NOS, which sold the idea and programs commercially. The legal department of the NOS found it necessary to make arrangements about ownership and copyrights for the Basicode software. On the initiative of Hans Janssen, on 24 January 1984, the developing team established the Basicode Foundation as a representative body for negotiations with the legal department of NOS. The board of the Foundation consisted of hobbyist developers and NOS affiliates; its aim was the promotion of computer use, and especially the use of Basicode and its development. To do this, members envisioned a third version of Basicode, with improved subroutines for graphics and sound, in order to keep up with the advanced possibilities of new computer systems. Besides this formal goal, the Foundation was involved in negotiating the terms of a contract regarding the copyrights of the programs. Negotiations between the Foundation and the legal department of the NOS progressed slowly, and were surrounded with much suspicion. Within the Foundation, the atmosphere of the hobbyist culture was noticeable, particularly in a disinclination to consider the monetization and proprietary protection of software. The developers, according to Robers' recollection, considered that `software development [was] something you did for fun': regarding reproduction, `the more copies, the better'.37 Hobbyists also placed a high value on respect for the individual developer, and this was brought to the fore in the negotiations. The foundation focused on two elements: the rights of the developers and control over the development of Basicode. Above all, obedience to the protocol and avoidance of computer-specific instructions were considered important. 
In a memo to the developers of translation programs, concerning the difficult negotiations, Robers and Herrmann wrote: `It seems to us, that it is doubtful if in the current state of affairs your rights are fully respected. Furthermore we also worry about the "purity" of Basicode in all its different forms (especially the foreign ones). Interchangeability of Basicode programs is the cornerstone of Basicode, especially internationally. It might be sensible to work together regarding the copyrights, to avoid others showing off with our common effort, completed in our leisure time and without payment.'38 The disagreements had started with negotiations undertaken by the NOS, before the formation of the Foundation. The NOS had sold the rights on Basicode and its translation programs to the German publisher Ravensburger, as a package, only afterwards informing the individual developers of the translation programs. Further, Ravensburger had subsequently altered parts of the software, adding computer-specific commands to allow a `save' function and the use of colour in a text-editing program. BBC Radio also adjusted the programs, in order to make possible specific instructions for the popular British ZX Spectrum computer.39 These elements frustrated the developers; for future cases, the Foundation
sought a role as guardian of the development of Basicode. The NOS, meanwhile, saw the Foundation's job as serving as the software developers' legal representative; yet, the Foundation tended to consider the rights of the developers in individual terms. During the negotiating process, the NOS had also claimed the exclusive rights on broadcasting and the free use of programs sent in by hobbyists, including those sent to the Foundation. In October 1985, the developers in the Foundation came to the conclusion that the NOS was demanding too much: the rights it offered the developers did not seem to extend beyond those they could, in any case, assert without a formal contract. As tensions due to the two contrasting visions continued, the NOS representatives (Herrmann, Janssen and Marks) left the board, and the Foundation – now composed only of amateur designers – broke with the NOS.

BASICODE-3: DEVELOPMENTS UNDER THE CONTROL OF THE BASICODE FOUNDATION
In the course of the conflict with the NOS, the hobbyists developed a third version of Basicode. Much to the surprise of the NOS and Hobbyscoop, the Foundation immediately found another Dutch broadcaster, TROS, willing to broadcast in this format without raising the same legal obstacles. The third version again responded to developments in hardware, and had additional possibilities in the use of music, graphics and colour. As a result of the conflict, the NOS continued to broadcast in the Basicode-2 format, while, in July 1986, the TROS started Basicode-3 broadcasts. There were thus three broadcasts of Basicode software on Dutch radio per week.40 The introduction of Basicode-3 was again accompanied by a new book, this time published only in Dutch. The publisher was Kluwer, a well-known Dutch producer of technical books. The volume served as a manual and explained all the new subroutines and instructions for new computer models. Two elements were different from the previous books: an appendix contained two pages detailing the copyrights of both the publisher and the developers of the translation programs, and the individual efforts of the designers were emphasized: all the developers of Basicode-3 had their pictures in the book. Both elements underscored the Foundation's hobbyist view, and what it had learned from the NOS conflict.41 The Foundation also wanted total control over Basicode developments, as became clear in its negotiations with DDR Radio. Like many other countries in the 1980s, the German Democratic Republic (DDR) was working on programs popularizing computer use. Together with programs about computing on television, the state broadcasters had also begun experimental broadcasts of software by radio in October 1986.
The response was tremendous: in January 1987, Joachim Baumann of DDR Radio Schulfunk (schools broadcasting) started broadcasting software for a 20-lesson BASIC course, set up by Professor Horst Völz, and received 25,000 letters from enthusiastic listeners. Most were concerned with the
receiving and reading of the software; more broadcasts and further tests followed to improve the transmission technique. The software was, however, aimed exclusively at the East German-produced KC microcomputer. Translation possibilities for users of other models were not addressed until Friederich Dormeier, a listener in West Berlin, wrote, pointing out the possibilities of the Dutch Basicode, which he had encountered by picking up the TROS broadcasts on AM radio.42 Interested, DDR Radio staff contacted the Basicode Foundation in Eindhoven at the end of 1987. At an initial meeting, held in Eindhoven in May 1988, preliminary contractual agreements were prepared that specified DDR Radio's rights to use the existing translation programs. With some help from the Dutch developers, the East Germans would develop translation programs for the local computers themselves. The Foundation monitored the development and diffusion of Basicode by exercising quality control on the translation programs, control over translations within the programs and a requirement for Foundation clearance for the first broadcast of Basicode programs. In November 1988, Klaas Robers and Jacques Haubrich travelled to East Berlin to help with the development of the translation programs. In December 1988, in line with the Basicode Foundation's hobbyist ethos, a collective for the authors of translation programs was established to guard the East German hobbyist programmers' rights. Further arrangements were made concerning the publication of a German Basicode-3 book, the rights of the groups to use each other's translation programs and measures to make sure that Basicode rules would be acknowledged. A month later, the use of Basicode was announced by the DDR radio show, REM: das Computermagazin. Test programs were broadcast in April 1989, and were monitored by computer hobbyist volunteers in Copenhagen and by Friederich Dormeier, the instigator of the contact, in West Berlin.
In September 1989, DDR Radio started broadcasting Basicode software once every 2 weeks as part of REM.43

BASICODE-3C: FULL CIRCLE TO THE HOBBYIST BASICS
With Basicode-3, the Foundation regained full control over the development of the system. In the late 1980s, however, the `hobbyist' standard of Basicode gradually gave way to the `business' standard of IBM, as IBM-compatible machines running MS-DOS became the dominant de facto standard for personal computing.44 Within the Netherlands, the diffusion of IBM-compatibles into private homes was aided by national-level support programs such as the `PC-private' initiative, which allowed employees to buy computers for home use tax-free through the businesses they worked for. The hobbyist developers of Basicode reacted to this change much as they had previously when new computers entered the scene: they
developed new translation programs, as well as an input/output device for cassettes for the PC, to fit the PC into the Basicode system. Yet, the need for Basicode was fading as the popularity of the PC grew; with the majority of users possessing operationally compatible machines, the problem of software exchange was increasingly considered to be solved. The developers, however, not only persisted with conversion work, but envisioned a further new specification of the project. Since 1989, they had been working on new routines and items that would be included in an improved version of Basicode-3. The resulting version could handle colour better, and was labelled Basicode-3C. Alongside the introduction of this new version in March 1991, the basic tenets underlying the Basicode ethos were formulated in four points:

- Basicode is for hobby and study purposes. We do not strive for a system to make professional programs.
- Simple computer users have to enjoy the broadcast programs. Interested persons can use the programs as an example or help at programming courses.
- Basicode programs need to be short and readable due to the educational character. We do not want a Basicode where everything is possible through an incomprehensible array of subroutines.
- We only develop a thing that answers a true need of our public. It concentrates on the principles.45
This was a forceful reiteration of the aims and ambitions that had grown up around Basicode in the mid-1980s. The developers had refused to adapt to changing developments in large-scale software, and returned to an ethic of programming as a hobby and for educational purposes. From a mainstream position in the spotlight of the Dutch media, they returned to a hobbyist subculture. The final blow for the Basicode broadcasts came when the Dutch broadcasting system underwent reorganization in September 1991: faced with new commercial competition, the patchwork-style public broadcasting system was seen to be in need of more structure. As a daily format was introduced, weekly programs were seen as too irregular. With many users moving to the PC-compatible standard, and with the general decline of hobbyist programming, the professional broadcasters saw no reason to find a means of retaining the Basicode broadcasts, and both NOS Hobbyscoop and TROS Basicode disappeared from Dutch radio.

CONCLUSIONS
This article has described how hobbyist users' culture mattered in co-constructing the Basicode standards with broadcast professionals. It is intended, furthermore, to shed a different light on the development of personal computing technologies from that seen in the narratives constructed by business and economic historians. These historians understand the process of massive diffusion and adoption of personal computing
technologies as a process of `technology push' by American and Asian software and hardware firms. By changing the focus to users in a small European country, a rather different story emerges. This story, of which this article presents only a small part, elaborates some of the broad range of actions that, together, secured the adoption of the new computer technology in the home.46 On this more nuanced account, hobbyist users are seen to have been profoundly involved in the adaptation of technology to their needs. For them, the exchangeability of software was an important problem to overcome. The problem was created by their use of different computer types with type-specific computer languages, mostly dialects of Basic: to tackle these problems, hobbyists used computer-independent means of exchange, such as program listings printed in their magazines. User groups agreed upon rules and structures of programming, to increase the readability and accessibility of computer programs. The development of Basicode, formalizing and automating these procedures in software, appeared to them as a logical next step. Basicode became a successful system because of the collaboration of both hobbyists and professionals in broadcasting. Radio broadcasting was a convenient way to convey software to students of televised courses and other early users in the early 1980s; yet, because of the lack of one dominant computer system in the Netherlands, only a portion of these early users were addressed by the early broadcasts. To reach as many people as possible was the collective goal of both the hobbyist users and the broadcasting professionals. Hobbyists developed the technology and the structure of the Basicode system, while the broadcasters of the NOS provided a platform on which this system ran, created publicity and diffused content.
This collaboration led to a popular system used by thousands of Dutch computer hobbyists, and also caught the attention of foreign professional broadcasters, who copied the system with the help of the NOS. It was principally these negotiations between the NOS and foreign professionals that brought to the surface differences of opinion over copyrights and the individuality of contributions (a central reward mechanism in hobbyist culture), differences that ultimately led to a break between the initiators of Basicode. The later negotiations between the hobbyists and DDR Radio illustrate clearly how the hobbyist designers wanted control over the development and diffusion of the system: the designers effectively enforced the hobbyist culture by the creation of a collective, guarding the individual rights of East German hobbyist developers. The diffusion of the IBM PC architecture as a de facto standard, in the judgement of most users, demolished the need for the Basicode system; for lay users, the need for a `microcomputer Esperanto' was gone, and the designers of Basicode lost contact with the mass community that had formerly supported their work. In redefining their goals, they confirmed their target audience as consisting of programming enthusiasts such as themselves, even as this group shrank in size and returned to the status of a subculture. Yet,
both hobbyists and professionals continued to foster the Basicode system until 1991; the system lost its broadcaster support only with the restructuring of Dutch radio. With the rise of the PC standard, it could successfully be portrayed as an irrelevance, and Basicode broadcasts were silenced.

Notes and References
1. N. Oudshoorn and T. Pinch, How Users Matter: The Co-Construction of Users and Technology (Cambridge, MA, 2003). 2. The projects resulted in the series Geschiedenis van de Techniek in Nederland, 1800–1890 (Zutphen, 1995) and Techniek in Nederland in de twintigste eeuw (Zutphen, 2000) and the book, H. W. Lintsen (ed.), Made in Holland, een techniekgeschiedenis van Nederland, 1800–2000 (Zutphen, 2005) (all in Dutch). For more on the background to these series, see J. Mokyr, `High Technology in the Low Countries', Technology and Culture, 2001, 42(1): 133–7. 3. J. W. Schot and A. A. Albert de la Bruhèze, `The Mediated Design of Products, Consumption and Consumers in the Twentieth Century', in Oudshoorn and Pinch, op. cit. (1), 230; more on mediation (in Dutch) in A. A. Albert de la Bruhèze and O. de Wit, `De productie van consumptie, de bemiddeling van productie en consumptie en de ontwikkeling van de consumptiesamenleving in Nederland in de twintigste eeuw', Tijdschrift voor Sociale Geschiedenis, 2002, 28(3): 257–72; O. de Wit and A. A. Albert de la Bruhèze, `Bedrijfsmatige bemiddeling, Philips en Unilever en de marketing van radio's televisies en snacks in Nederland in de twintigste eeuw', Tijdschrift voor Sociale Geschiedenis, 2002, 28(3): 347–72. 4. Albert de la Bruhèze and de Wit, op. cit. (3). 5. One treatment of personal computing exists, along these analytical lines, for a US case: C. Lindsay, `From the Shadows: Users as Designers, Producers, Marketers, Distributors and Technical Support', in Oudshoorn and Pinch, op. cit. (1), 29–50. In the more general literature, histories of the early development of personal computing and early users have been written for several countries. The United States is well covered: P. Ceruzzi, `From Scientific Instrument to Everyday Appliance: The Emergence of Personal Computers, 1970–1977', History and Technology, 1996, 13(1): 1–31; P. Ceruzzi, A History of Modern Computing (Cambridge, MA, 2003); M. Campbell-Kelly and W. 
Aspray, Computer: A History of the Information Machine (New York, 1996). There is a detailed study on the United Kingdom by Leslie Haddon: `The Roots and Early History of the British Home Computer Market: Origins of the Masculine Micro', University of London Ph.D. thesis, 1988; also L. Haddon, `The Home Computer: The Making of a Consumer Electronic', Science as Culture, 1988, 2: 7–51. There is also a study on Finland: P. Saarikoski, `The Role of Club Activity in the Early Phases of Microcomputing in Finland', in J. Bubenko, J. Impagliazzo and A. Solvberg (eds), History of Nordic Computing, Proceedings: IFIP WG9.7 First Working Conference on the History of Nordic Computing (New York, 2005). 6. Centraal Bureau voor de Statistiek, `ICT gebruik van personen naar persoonskenmerken', 2007, available online at www.cbs.nl/nl-NL/menu/themas/vrije-tijd-cultuur/links/ict-gebruik-van-personen-naar-persoonskenmerken.htm. 7. S. Turkle, The Second Self: Computers and the Human Spirit (London, 1984), 197. 8. P. Himanen, The Hacker Ethic: A Radical Approach to the Philosophy of Business (New York, 2001), 51. 9. B. Coleman and A. Golub, `Realizing Freedom: The Culture of Liberalism and Hacker Ethical Practice', presentation at the Society for Social Studies of Science Annual Conference, 2004; T. Hapnes, `Not Their Machines: How Hackers Transform Computers into Subcultural Artefacts', in M. Lie and K. H. Sørensen (eds), Making Technology Our Own? Domesticating Technology into Everyday Life (Oslo, 1996); K. R. Fleischmann, `Do-It-Yourself Information Technology: Role Hybridization and Design–Use Interface', Journal of the American Society for Information Science and Technology, 2006, 57(1): 87–95. 10. G. Gooday, `Re-Writing the ``Book of Blots'': Critical Reflections on Histories of Technological ``Failure''', History and Technology, 1998, 14(3): 265–91. See also K. Lipartito, `Picturephone and the Information Age', Technology and Culture, 2003, 44(1): 50–80. 11. G. W. 
Rathenau, Rapport van de adviesgroep Micro-electronica (Gravenhage, 1980). 12. Ceruzzi, History of Modern Computing, op. cit. (5), 205–66.
History of Technology, Volume Twenty-eight, 2008
Basicode: Co-Producing a Microcomputer Esperanto
13. The MOS 6502 was an 8-bit microprocessor designed by MOS Technology in 1975. The KIM (Keyboard Input Monitor) computer was developed by MOS Technology and was sold as a kit. 14. Turkle, op. cit. (7), 197. 15. My translation. The original Dutch: `[D]eze mensen hadden niks. Geen software, alleen een plankje die ze lieten uitgroeien tot een computer door er meer geheugen aan te breien. Sommige namen dan nog de moeite een echte behuizing voor de computer te bouwen.' Ruud Uphoff, secretary of the KIM users group Holland, quoted in `KIM Gebruikers Club Nederland', Personal Computer Magazine, 1984, 2(1): 76–7. 16. `Clubs', Personal Computer Magazine, 1985, 7: 85. 17. The membership list is published together with the HCC-Nieuwsbrief (1979), held in the Dutch Royal Library, deposit TE 3519. 18. Attendees of this course were offered special rates on the purchase of an Apple II or a DAI, a Belgian clone of the Apple II. Because of the huge interest in this course, DAI encountered production problems and went bankrupt. `Van automatisering tot microprocessors', De Microcomputer, 1979, 12: 40–3; `De baby-boom breekt door in Pascal', Computemarkt, 1983, 12(9): 16–19. 19. `PET Benelux Exchange', Personal Computer Magazine, 1983, 1: 82; `PTC verwelkomt 15.000-ste lid', PTC Print, 1988, 18: 31; `Van de bestuurstafel', PTC Print, 1990, 2: 2. 20. A more elaborate analysis of the ethics of the first computer users appears in Himanen, op. cit. (8). 21. The standard was the outcome of a 2-day symposium organized by BYTE magazine in Kansas City in November 1975. M. and V. Peschke, `BYTE's Audio Cassette Standards Symposium', BYTE, 1976, 2: 72–3. 22. `Cassette-interface: hoe en waarom?', HCC Nieuwsbrief, 1977, 1: 2. 23. Teleac also had the idea of using parts of its television courses for software broadcasting by television, but this idea was abandoned because the quality of the television audio signal was inadequate, and not many television receivers were capable of audio recording. 
`NOS gaat uitzenden in Basic ``esperanto''', Automatiseringgids, 1981, 19 August: 3. 24. B. Vetter, `Software Werkgroep Teachip dd 17 maart 81', n.d., and A. Davidse, `Programmeerstandaard', n.d., both in Klaas Robers's personal collection; interviews with Klaas Robers on 13 April and 16 July 2004 in Delft. Various forms of compatibility are addressed in James Sumner's contribution to this volume. 25. Sumner, in his contribution to this volume, discusses a similar scenario under the heading of `partial operational compatibility'. A machine is usually judged partially compatible if, when tested against a range of pre-existing software, it is found to work with some but not all applications – usually those that happen only to operate within certain technical limits. The Teachip protocol represented an attempt to ensure broad compatibility by methodically imposing such limits before the software was written. 26. The system was also introduced in an article in an educational magazine: `Een ``microcomputergestuurd'' beeldplaatsysteem voor zelfstudie', Onderwijsinformatie, 1978, 65: 4–5. 27. Interviews with Klaas Robers, Delft, 13 April and 16 July 2004. 28. Interviews with Klaas Robers, Delft, 13 April and 16 July 2004. 29. Interviews with Klaas Robers, Delft, 13 April and 16 July 2004. 30. The `software player' concept is defined in Haddon, `Roots and Early History'; Haddon, `Home Computer', both op. cit. (5). 31. H. G. Janssen (ed.), Basicode-2 (Hilversum, 1983). 32. See Sumner's contribution in this volume. 33. `Basicode project vol in ontwikkeling', Automatiseringgids, 1982, 20 October: 13; Janssen, op. cit. (31); interview with Klaas Robers, 16 July 2004. 34. `Hobbyscoop boekt succes in binnen- en buitenland', Viditelgids, 1984, August: 80–1; Peter van Diepen, `Hobbyscoop Basicode', HCC-Nieuwsbrief, 1984, 61: 27–8; Peter van Tilburg, `Presentatie nieuwe Basicode-2 pakket', HCC-Nieuwsbrief, 1984, 64: 31; Ben Rintjema, `NOS Basicode', HCC-Nieuwsbrief, 1985, 74: 67. 35. 
Janssen, op. cit. (31), 9–17; `Hobbyscoop boekt succes in binnen- en buitenland', 80–1 (documentation and library of Dutch National Broadcasting Organization); S. Fox, `A Look Back at Basicode', Beebug, 1984, 3(2): 8–9. Marks was not successful in marketing Basicode to Finland: `Basicodes taivaalta', MikroBitti, 1985, 4: 66–7.
Frank Veraart
36. Handwritten comment inside copy of Janssen, op. cit. (31), held in Klaas Robers' personal collection. My translation. The original Dutch: `Deze zaak is geheel uit de hand gelopen! En zo hoort het ook!' (emphasis in original). 37. Interview with Klaas Robers. 38. Memorandum by Klaas Robers and Jochem Herrmann to the Basicode developers, 1 November 1984, Robers' personal collection. My translation. The original Dutch: `Het lijkt ons echter allszins twijfelachtig of bij de gang van zaken zoals die nu loopt Uw rechten voldoende tot uiting komen. Daarbij maken wij ons ook zorgen over de ``reinheid'' van Basicode in zijn diverse vertakkingen (speciaal in het buitenland). Onderlinge uitwisselbaarheid van basicode-programma's is immers de belangrijkste steunpilaar waarop Basicode rust, juist ook international. Het is misschien verstandig voor wat betreft de auteursrechten op programma's de handen ineen te slaan, om daardoor te voorkomen dat anderen gaan pronken met ons gezamelijke werk geheel in de vrije tijd en zonder betaling verricht.' 39. J. Haubrich (ed.), Het BASICODE-3 boek (Deventer, 1986), 15; `Ir Klaas de uitvinder van de NOS-piepshow', Computer Plus, 1985, 1: 66–8. 40. The broadcasts took place on Mondays between 21.00 and 21.30 and Wednesdays from 18.10 to 18.20 on an AM frequency, and from 19.00 to 19.30 on FM. Interview with Klaas Robers; `Hobbyscoop snerpend door de ether', Personal Computer Magazine, 1984, 3: 81–3; J. Haubrich, `BASICODE op de ST', ST28, 1990, 11: 36–7, 45 (Robers' personal collection); `Hobbyscoop boekt succes in binnen- en buitenland', op. cit. (35), 80–1; K. Koopman, `Een Esperanto voor computers, Hans G. Janssen over de opkomst en ondergang van Basicode', Aether, 2001, January, 58: 8–9; K. Steglich (ed.), Deutsche Übersetzung Basicode Bulletins (East Berlin, n.d.): Dir. für Computerliteratur und Software, Ministerium für Kultur, 30. 41. Haubrich, op. cit. (39). 42. L. Leppin and T. 
Schnabelt, Informatik und Rechentechnik in der DDR (Studienarbeit, n.d.), Chapter 6, `Informatik in den Medien', paragraph `Computersendung im Rundfunk der DDR', available online at http://robotron.informatik.hu-berlin.de, accessed 17 August 2004. 43. Correspondence between Radio DDR and Basicode Foundation, Friedrich Dormeier and Basicode Foundation, in Robers' personal collection; E. Hermann, `Der DT64 Computerklub' (regarding Joachim Baumann), n.d., available online at http://hometown.aol.de/oberlandmann/. 44. For a survey that addresses the established literature on this case, see James Sumner's contribution to this volume. 45. Memo: J. Haubrich, `Uitbreiding op BASICODE-3', 27 March 1991. In Robers' personal collection. My translation. 46. I discuss the activities of intermediaries in popularizing computer use in the Netherlands in greater depth in F. Veraart, `Vormgevers van Persoonlijk Computergebruik, de ontwikkeling van computers voor kleingebruikers in Nederland, 1970–1990', Ph.D. thesis, Eindhoven University of Technology, Stichting Historie der Techniek, 2008.
Battery Birds, `Stimulighting' and `Twilighting': The Ecology of Standardized Poultry Technology

KAREN SAYER
INTRODUCTION: ANIMAL MACHINES
[H]istories of the essentials of human life, such as food and drink, may, in their association with that pristine myth of purity, the `organic', and the romance of a simpler past untouched by chemicals in either agriculture or the processing industries, lose sight of the complex constructions of quality and trust that facilitate the growth of all commodity chains.1

Standardized commodities sit uneasily on the domestic dinner-plate. Though Peter Atkins is writing about milk, much the same may be said of another food produced directly from a live animal: the egg. Typically sold to us as coming `straight from the bird', the egg undergoes a succession of mediating processes to become a food object and a reliable commodity. As with milk, the boundaries of the acceptable egg have been defined, policed and contested through commercial, official/governmental and scientific discourses, even as the rhetoric of `natural' production has grown in sophistication. The systems and standards behind egg production, like all effective infrastructure, only become noticeable when defects are alleged – as has happened repeatedly in public controversy over the ethics, hygiene and food quality standards of battery-farmed eggs. Sales rely on consumer confidence: today, fears of salmonella must be assuaged, and an ethos established of consummate care in the rearing and keeping of hens. Packaging is a crucial tool of communication. Safety is asserted via the Lion Mark (an old guarantor of `Britishness', and now also of health and safety standards), while box illustrations depicting chickens in the open air, typically with chicks and sunlight, draw on conventional images of rural felicity to suggest the hens' well-being, particularly in the growing
free-range and organic sectors. Yet, in characterizing the interplay of systematization and `Nature', so crucial to the identity of the modern egg, we must recognize that, to quote Donald Worster, agriculture `is a rearrangement, not a repeal, of natural processes' – something William Boyd has demonstrated with reference to the US broiler industry.2 Even the most intensive farming represents what Worster terms an agroecosystem, `an ecosystem reorganized for agricultural purposes'.3 In Britain, the poultry industry was slower than most branches of agriculture to experience the rapid growth associated with the coming of industrial methods.4 When it did so, from the early twentieth century, the move towards standardization, specialization and technology-dependent intensification, characteristically observed in modernization cases, was swift and evident. This chapter charts the growth of the movement, promoted strongly by a range of technocratic agriculturalists such as Harry R. Lewis, Edward Brown and W. P. Blount; the ensuing backlash, notably developed in the writings of Ruth Harrison, the Quaker animal rights activist; and the consequent tensions and ironies of a modern-day `niche' production sector that balances sophisticated systematization against forceful appeals to `Nature' and the rejection of the battery. The contested move to standardization was not, of course, only a phenomenon of British egg farming. International research fed into innovations in practice, the trade literature represented worldwide concerns and, by the 1960s, specialist producers of feed and equipment were selling their products across the globe. My account explores one technological element of this global phenomenon – `photoperiodization', the timed artificial lighting of sheds to abolish the natural winter reduction in laying – to characterize the role played by imported technologies in this complex shift. Another key focus is the significance of gender to this story. 
The industrial standardization of egg production involved a shift from low-tech women's labour to high-tech male management, as the move to mechanization and then automation gradually came to deskill and finally replace human labour; yet, marketing rhetoric continued to invoke images of `farmer's wife' figures, playing on received understandings of `traditional' egg production. I begin by surveying the problematics of small-scale, unregimented egg farming. Crucially, and perhaps counter-intuitively, consumers before the Second World War did not typically trust the `natural' product. Limited systematization at the distribution level, and the hen's tendency to cease laying during the winter months, led to major concerns about the freshness and availability of eggs: this was a factor in the drive for quality initiatives, including the use of electric light to examine eggs prior to sale. Next, I consider how various technocratic interventions – in particular, electric lighting – were deployed to ameliorate the `natural' variation in egg production, allowing fresh eggs to be produced all year round. Finally, I explore the ways in which the ever more standardized egg was marketed increasingly with the imagery of uncaged hens in a bucolic idyll – an incongruity that was not lost on critics such as Harrison. The most striking
feature of this narrative is the way in which consumer demand for the `natural' product grew as the poultry industry came to interpret `Nature' as essentially malleable. The consequences of this process are evident today in the routine, increasingly invisible application of large-scale industrial technology to guarantee the standardized quality of the `natural' egg.

FROM `POULTRY-KEEPING' TO INTENSIVE PRODUCTION
At the start of the twentieth century, poultry-keeping in the United Kingdom was typically conducted on a small-scale, often casual footing. While production had expanded on the continent, especially in France and Denmark, in Britain, it had actually been in decline for most of the period from around 1750.5 Specialized interest was limited: poultry might be kept on a mixed farm, or at the very smallest scale as a `backyard' supplement to other activities. To the extent that dedicated `poultry-keepers' existed, they were not considered livestock producers and, in most of the official census returns, were subsumed into wider agricultural categories.6 Poultry-keeping, moreover, was commonly seen as women's work – a source of `pin money' rather than serious commercial activity. Yet, at the turn of the twentieth century, there was also considerable concern, expressed in government reports and specialist agricultural publications, that Britain was not self-sufficient in eggs. In part, this was presented as an opportunity: if the British farmer could be persuaded to produce more eggs, then he (for the farmer, in this discourse, was characteristically assumed male) might capitalize on growing consumer demand and find a way out of ongoing economic depression. This project was to be achieved through a combination of breeding stock choices, new feeding regimes and methods of management. It entailed the disassociation of poultry from small-scale production, mixed agriculture and, symbolically, women. The Departmental Committee on Poultry Breeding in Scotland, reporting in 1909, specifically criticized women's methods of handling poultry. `The management of poultry,' it observed, `is generally relegated to the women members of the family, and the methods adopted are, in the majority of cases, very antiquated.' Though the Committee suggested that `the servant girl class' ought to be educated in the new methods in order that `poultry-keeping on the larger farms . . . 
be extended and improved', it associated good business practice with the (male) `farmers and crofters themselves' who were, as yet, indifferent to the possibilities. In other words, until the men could be persuaded that it could pay, the industry would not evolve.7 Much the same was said in Ireland. `Rearing poultry,' notes Joanna Bourke, `. . . was one of the most important occupations of the farm woman. Indeed, despite the impassioned debates and controversial decisions concerning the poultry industry from the 1890s, one thing was agreed: for better or (more commonly) for worse, the poultry industry was dominated by women.'8 Those who wished to intervene in the Irish industry, in fact, ultimately had to accommodate this state of affairs through the use of female
instructors.9 Though Bourke rightly argues that Irish farm women found it increasingly difficult to participate in the management of poultry, as the new methods involved moving the birds away from the farmhouse,10 it is striking that, in 1937, all of the County Poultry Instructors in the Irish Free State were women. In Britain, however, women's participation varied. Whereas all of the County Poultry Instructors in the North of Scotland, as in the Irish Free State, were women, in the West of Scotland, they made up 60 per cent of the instructors, in Wales about 50 per cent and in England 27 per cent.11 These reports also evidence increasing official concern to improve domestic production, distribution and marketing throughout the United Kingdom. In 1926, a national standard of grading by size was proposed, as was a system of stamping the eggs as a mark of quality (indicating freshness and cleanliness), on a voluntary basis.12 This was a complex task, in part because British poultry production was affected to a high degree by regional and organizational differences. In the industrial heartlands and seaside resorts of Lancashire, for instance, small farms continued to be economically viable at least until the Second World War: by the 1930s, as Michael Winstanley notes, Lancashire accounted for 14 per cent of the hens in England.13 In cases of small-scale specialization such as this, farmers' wives remained involved in the marketing of perishables; eggs were sold alongside milk and vegetables, out of farm carts and at local markets.14 We should note, though, that even in such areas, smaller farmers took on board many of the new methods associated with large-scale poultry farming. 
This trend was promoted concertedly from the 1890s onwards by a variety of means: the dissemination of information through the local press; courses run by county councils (which, in Lancashire, were mostly attended by farmers' daughters) and agricultural colleges;15 trade journals such as The Feathered World and Poultry Science; advice literature published by specialists and trades bodies, and the meetings of poultry societies. It is clear from this specialist literature that both men and women were initially expected to manage and work with poultry. The photographic evidence also records the kinds of training that students received at the agricultural colleges (Figure 1). In 1948, however, of 259 images in a leading textbook, only nine show women involved in poultry or egg production, as against a large number of men at work.16 Many agriculturalists, we should note, remained sceptical of the benefits of specializing in a particular branch of poultry production. Often, this was grounded in a perception that `Nature' was peculiarly unbending in the case of breeding birds for profit or changing their habits. As late as 1929, for instance, Edward Brown, one of poultry farming's great advocates, observed of what he called `Special Egg Farms':

On these there is usually a greater or lesser amount of intensification, in many instances much more than is justified. There is abundant experience that many failures have arisen from disregard of natural factors.17
Figure 1 Swanley Horticultural College, Kent: student feeding chickens, 1930s.18
The task of promotion was further complicated by concerns that, as specialized commercial production spread after the First World War, so did disease among fowl, probably due to the deep-litter methods of rearing popular at the time.19 As disease peaked in the 1930s, so voluntary registration of breeders was introduced to improve the quality of the stock. Under these various initiatives, estimates of the number of eggs produced annually per laying bird rose steadily. From 72 eggs in 1908, the published figure rose to 100 in 1925, clearing 180 (around two eggs per person per week) in 1937–38.20 During the Second World War, though the rationing of poultry feed limited the possibility of large-scale change, action both from government and from within the trade continued to encourage large-scale and intensive production. A key aim was to increase yield in line with the international `nutritional policy' agreed at the 1943 United Nations Conference on Food and Agriculture at Hot Springs, Virginia, attended by representatives of almost all the major non-Axis agricultural producers. The policy stated that all nations should produce `the food for which their soil and climate are best suited', which, in the United Kingdom, meant `milk, eggs, fruit and vegetables'.21 Output was seen to dip slightly around the outbreak of war, but rapidly recovered and rose to just over 196 eggs per annum per bird by 1944–45. This figure was to surpass 200 in the 1960s.22
SEASONAL PRODUCTION, MARKETING AND STANDARDIZATION
The main problem for those who wanted to increase egg production was the laying season. As Edward Brown pointed out in Poultry Breeding and Production (1929), the

. . . normal period of laying eggs is during the Spring and early Summer, which form the natural breeding seasons of the year. Profit in this branch, however, is largely determined by the eggs laid in the other months, when prices are much higher.23

Or, as stated earlier in the Journal of the Board of Agriculture (1904), in an effort to encourage better management of birds, `if eggs could always be produced in winter, poultry-keeping would, under almost all circumstances, be profitable'.24 It was this seasonality of production, coupled with poor distribution, that explains why, in this period, all good household manuals outlined how to test eggs for freshness. Mrs Beeton's, for instance, recommended that eggs be `broken in a cup, and then put into a basin' if they were purchased, as `by this means stale or bad eggs may be easily rejected, without wasting the others'.25 Such advice encouraged consumers to set their own standards and apply them at point of use: Victorians, certainly, did not trust the `natural' product. The question of freshness was one of the key issues whenever government departments investigated egg production. Not only were eggs often held back by farmers until higglers (itinerant middle-men) came to collect them: the higglers themselves would keep them until they had enough to ship to the wholesaler. `Fresh' eggs in Scotland might be between 10 days and 6 weeks old, and were rarely tested for freshness before they were dispatched.26 E. G. Warren, secretary of the Framlingham and District Agricultural Co-operative Society, Ltd, one of many organizations set up in 1903 to improve the marketing of eggs, considered that `[o]ur English farmer is too careless as to whether an egg is fresh or not . . .. 
Often he will send us as ``new laid'' all the eggs collected, without troubling to keep back those that have been partially hatched'.27 In the post-war period, the government, building on its wartime controls, with egg production rates secured, and with both eggs and feed taken off rationing in 1953–54, worked increasingly to improve distribution and marketing. These efforts led to the creation, in 1957, of the British Egg Marketing Board (BEMB). The BEMB oversaw the introduction of the British Lion mark, stamped on individual eggs as a guarantor of quality and national origin; this brand was to last until the demise of the Board in 1971. Establishing trust in the product was at the heart of the BEMB's campaigns. In order to build that trust, the Board had to provide, materially, a reliable standardized `natural' product – something that relied on the technology of egg production and distribution. In order to benefit from the Lion Mark, producers had to send their eggs to registered packing stations, clean, fresh, in good condition and conforming to standard weights. These requirements in themselves acted as drivers for
Karen Sayer
155
the adoption of specialist breeds, new poultry feed and technologies. At the same time, specialist packing companies began producing egg boxes designed to appeal visually to the consumer. The idea of woman as the expert on eggs, we should note, persisted at the level of consumption: eggs were sold to `the housewife'. In a 1963 advertisement aimed at the trade, the BEMB made a splash of hiring 13 women – `(and only one man!)' – as its regional sales promotion team, `after all, it's women who do all the buying!'.28 In 1961, the BEMB ran a campaign in the national and provincial press with the following text:

You can't trust every egg the hen lays. Even if she's a well-bred hen, and even if you do know just where she laid it! The only way to make sure an egg is in perfect condition is to do what the lion does – have it examined over a strong light.29

This body text ran beneath a strap line – `The best eggs are farm-fresh, and only the best of the nest will be stamped with the lion' – and a photograph of a young boy in a barn, holding up an egg to the light. Together, photograph and text create a synthesis of Nature and technology. Technology, guaranteed by the BEMB, is used to test the egg so that the housewife no longer needs to; it is also there to ensure that the hen is `well bred', and that the farmer knows where she laid the egg. But that test (normally, in reality, carried out by women at a packing station under an electric light) is symbolically handed over to a child, in sunlight, and translated into a recognizably rural setting. The process draws the spectator into the countryside. `Nature' is discovered as if by chance, captured in an image and offered to the spectator, who, guaranteed its reliability through technology, can bring it home in the form of a boxed egg. As Douglas Sackman puts it, `Advertisers have shown us . . .
how to use their products in ways that will fulfil expectations for a wholesome and happy home life, setting the table at which we consume nature as food'.30 Food is, indeed, as much a social as a natural product. As eggs were originally a seasonal provision, however, in the winter, they were traditionally sold preserved – in water-glass or lime-water, and later also by refrigeration – or imported. All these processes were seen to compromise the `naturalness' or `goodness' of the egg. One response had been to capitalize on the `Britishness' of Empire imports: noting that the Empire accounted for only 3 per cent of the extensive importation into the United Kingdom in 1926, the Imperial Economic Committee advocated standardization of grades and packing across the United Kingdom and the Empire, alongside increased domestic production. A natural pattern of seasonal variation was implied: domestic eggs were the proper thing in the summer, Empire eggs from November to March. Advertisements exhorting the consumer to buy South African (rather than `foreign') eggs ran regularly in the press during the winters of 1930–38.31 This trope of seasonal variation endured in the promotion of eggs to consumers until at least 1959, when a BEMB advertisement announced: `One of the few nice
156
The Ecology of Standardized Poultry Technology
things about February is CHEAP EGGS.'32 The approach, however, sat alongside far more extensive efforts to solve the problem by improving domestic winter production.

THE DEVELOPMENT OF STEP LIGHTING AND THE VALUE OF ELECTRICITY
The seasonal variation in laying is due in large part to environmental changes in photoperiod and light intensity, as an effect of the length of the day and the weather. The increase in natural daylight that takes place after mid-December in the northern hemisphere stimulates sexual maturity in growing birds; it can even over-stimulate birds that are being reared, so that they peck and sometimes cannibalize each other. It also influences growth rate by allowing the birds to feed for longer. Patterns of change in day length, rather than hours of daylight overall, seem to direct the rate of egg production in laying birds: decreasing day length, correspondingly, depresses ovulation. Other factors include changes in ambient temperature, while response to seasonal change also varies slightly by breed.33 Some of the earliest experiments in the application of artificial illumination in egg production took place at the turn of the twentieth century, as poultry-keepers were increasingly exhorted to produce eggs `when they are scarce'.34 Much of the early work was American, reaching Europe when Harry R. Lewis discussed `American Methods of Lighting and the Associated Feeding Problems' at the First World's Poultry Congress in The Hague in 1921. Lewis referred to the work of Dr E. C. Waldorf, who, `some twenty years ago', tried `to determine the commercial possibilities of influencing egg production through the use of artificial illumination'.35 By 1928, `practical winter lighting programmes for layers' had been developed by Harper Adams College. In the latter half of the 1920s, while no fully elaborated mechanism underlay the practical results, artificial lighting began to be widely promoted as a valuable way of tackling seasonal production within the nascent UK poultry industry.36 Initially, it was widely thought that providing extra light during the winter months enabled the birds to eat more, and that it was this additional food that allowed them to produce more eggs.
Edward Brown, for instance, in his aforementioned text, Poultry Breeding and Production, disseminated American research and advice about its application. Through artificial lighting, argued Brown, `winter production of eggs is considerably increased', the reason being that `through the hours of darkness [the birds] are consumers rather than producers', though yield through the rest of the year might decline, especially when the birds were bred for high winter yield.37 Similarly, a more `hands-on' publication, The Feathered World Year Book, in its 1937 edition's section on winter lighting, explained that it `simply implies a late ``supper'' or early breakfast for layers, given by artificial light . . . in order to increase or encourage production'.38 However, by 1948, Leonard Robinson was able to state authoritatively that `artificial lighting . . . does not increase egg production because the birds consume more food. Light stimulates the ovary, but an
increased food intake is necessary to develop the eggs'.39 Explaining the science behind this with reference to research originally disseminated via the World's Poultry Congress of 1936, Robinson went on to recommend various lighting systems to the farmer.40 Subsequently, there was a shift away from the use of artificial light as a supplement to daylight (what Robinson referred to as a `light ration')41 to the continuous use of artificial light. Robinson, for example, recommended that where birds `are kept in confinement the windows should be so arranged that the house may be flooded with direct light',42 while W. P. Blount's Hen Batteries of 1951 – one of the first textbooks on the system – assumed that artificial light would be used `to extend the hours of daylight, so that a combined 13–14 hour daylight plus artificial light' would be used.43 By 1964, in contrast, Poultry International was able to state that `[a]lmost every new laying and rearing house erected in Britain today is windowless so that both daylength and the intensity of light in the house can be accurately controlled'.44 It is in this context that we should read Ruth Harrison's critical observation in Animal Machines, published the same year, that `[c]hickens, like other animals, are fast disappearing from the farm scenery. Only 20 per cent are now on range, whilst 80 per cent have gone indoors'.45 This change in understanding, alongside the earlier introduction of vitamin D in poultry feed, was crucial to the intensification of the commercial poultry sector. The shift was, however, far from straightforward. There were widespread debates within the industry throughout the 1960s about the most appropriate lighting pattern to use, and whether or not daylight should be included in the programme. The influence of American research which favoured stimulighting – `. . .
short, constant daylengths during rearing, followed by a step-up lighting pattern during lay' – was seen in the building of windowless poultry houses in the United Kingdom. Research undertaken at Reading, however, favoured the alternative approach of twilighting: `. . . a step-down daylength programme in the rearing house and natural lighting conditions (providing they do not decrease during production).'46 The only clear point of agreement was that electric light was best. Experiments on the impact of artificial illumination had begun after the introduction of electricity in the United Kingdom and United States, and predominantly used electric light. Most of those disseminating the findings explained that other forms of lighting were also practical, but they were nearly always represented as inferior to electricity. When Harry Lewis presented his `American Methods of Lighting' in 1921, he argued that `electric lights are far superior in efficiency, in labor cost and in cost of operation to any other method of providing illumination' and recommended that the farmer with 500 birds get a lighting unit `where public current is not available'. He recognized that `ordinary barn lanterns have been, and are today, used to some extent'. But these, he argued, were of poor quality in terms of their light and difficult to manage, while gasoline lanterns were as bad, being labour-intensive and easily clogged by the dust
of the poultry house: although the light was `fine', they were `rather unsafe and expensive to use'. His preferred alternative was therefore acetylene gas, which, `while not equal to electricity in efficiency or safety . . . seems to show much promise'.47 In such discussions, the choice of technology was typically linked to the size of operation. With reference to Britain some years later, The Feathered World recommended that the `backyarder' with six birds might use `a couple of storm lanterns, a small acetylene lamp or a 40-watt electric bulb'.48 The last of these was preferable, because the light could be switched on automatically: this was the key attraction of electric lighting. Walter Brett, in Poultry Keeping Today, aimed at large and small-scale poultry keepers, argued that commercial farms were always best off using electric lighting; like Lewis, he suggested that if not yet on the Grid, they should use a generator. Nonetheless, even `backyarders' would find electricity best, though they could use acetylene effectively (as it would allow the poultry keeper to dim the light by limiting the amount of water supplied), or, if nothing else, hurricane paraffin lamps.49 Where many lights needed to be dimmed, as in large-scale operations, electricity was distinctly preferred. By the time of Blount's Hen Batteries, it was assumed that the commercial poultry farmer would be interested in knowing that 100-watt bulbs were more effective than 40-watt bulbs, and that red-orange light was more stimulating than blue. Blount discussed the best type of reflectors for electric light, and the merits of fluorescent as compared to `normal' (i.e. incandescent) electric light, in terms of the installation and running costs.50 As always, concepts of safety were important, drawing on wider and long-standing debates about the value of electricity.51 What was key, however, was the commercial impact of the light source chosen.
Electricity was seen as more cost-effective for the large-scale producer, particularly because it reduced labour costs, often couched in terms of convenience to the farmer by way of time spent caring for the birds. Electricity was also favoured, as research made it increasingly clear that it was crucial for the birds to experience a consistent pattern of lighting.52 As Blount put it, if `the lights fail or are used indiscriminately the stimulus given to [the pituitary] gland becomes erratic and any fluctuation in hormone production is reflected directly in terms of a sudden fall in egg production. This fall is quite dramatic'.53 In this respect, `standardized timing' was key to the effective commercial use of photoperiod lighting and, because electricity came to be perceived within the technical literature as more controllable than other sources of artificial illumination, it also came to be seen as the best way to achieve that standardization.54 At the same time, the 1926 Electricity (Supply) Act harnessed electrical distribution in the countryside to the National Grid and, in 1928, the promotion of electricity to stimulate demand in the countryside began. Texts such as Borlace Matthews' Electro-Farming (1928) highlighted the value of electric lighting for the poultry producer: any problems with the use of lighting were put down to poor practice by the farmer.55 The British
Electrical Development Association released a `Practical cinematograph film on rural electrification: showing how a public supply of electricity in rural areas can be, and is being, used', which demonstrated the use of electricity on the farm and in the farmhouse. The Association further encouraged demand by finding new uses for electricity on the farm, disseminated in handbooks including Electricity in Poultry Farming (1932). As John Weller argues, the introduction of the new power source changed the spaces of the farm, altering existing livestock housing, and later changing the design of new buildings, as technologies and farming methods that depended on electricity were introduced.56 This growth, however, must be understood in the context of electricity's limited availability and reliability. In 1926, perhaps 10 or 20 farms in the United Kingdom had an electrical supply.57 In the 1930s, some 35,000 farms received electricity through the National Grid, with around 6,500 farms added annually from 1948. However, as Leslie Newman has shown, these Grid supplies were connected mainly to farmhouses, and did not necessarily imply that electricity was being used on the farm for agricultural processes. Many farmers, in fact, used their own generators: there were around 200,000 static engines on British farms by 1950. Only in 1953 was a planned programme of electrification put in place, in which it was determined to connect 85 per cent of all farms over the following 10 years. In all, the proportion of farms served by public companies increased from 11 per cent in 1939 to 80 per cent in 1960.58 References to alternative forms of lighting in the technical literature should therefore be seen as accommodations of this limitation. 
In the 1950s, Blount, for one, noted that `coal gas, calor gas, or oil lamps' could be used if there was no access to electricity, or in a crisis such as a power cut.59 Nonetheless, as the electrical supply increased in reach and reliability, the character of specialized poultry farming was profoundly changed. Standardized lighting became only one element in a wider system of electrically powered, fully automated battery cage systems, employing mechanized feeding, watering, cleaning, collecting and packing systems, and dependent on new methods of disease control, feeding regimes and breeding programmes. Intensification was the inevitable result. Whereas only 11 per cent of British farmed chickens were in flocks greater than 500 in 1948, by 1988 70 per cent of layers were in flocks of over 20,000. As flock sizes increased, many smaller holdings went out of business; the number of holdings declined from 250,000 in 1957 to 44,000 in 1988.60 The new intensive approach had the guaranteeing of reliability at its very core; yet, as we will see in the next section, it also generated profound consumer unease, manifested in growing appeals to the sanctity of the once-suspect `natural' egg.

ANIMAL MACHINES
Published in 1964, Ruth Harrison's Animal Machines: The New Factory Farming Industry was a vivid indictment of intensive farming methods,
targeted at a meat-eating public that had hitherto known little of their existence. Arriving in the wake of Rachel Carson's hugely influential Silent Spring, and bearing a foreword by Carson, the book took up proto-environmentalist concerns over the safety and sustainability of agricultural technology, and married them to a new focus on the welfare and suffering of farmed animals. Harrison's writings and public campaigning led ultimately to the establishment of what became the Farm Animal Welfare Council, and to the passage of the 1968 Agriculture (Miscellaneous Provisions) Act, a regulatory framework for the welfare of farm animals.61 As the title suggests, Harrison's key argument was that farming had taken on the character of an industrial production culture not grounded in, and placing no value on, concern for living things. Battery cages in intensive rearing houses, she noted, had `the appearance of immense machines, which is indeed what they are'.62 The account draws sharply on the contrast between this systematized reality of intensive egg production and bucolic rural imagery. This had rapidly – and without apparent irony – become an advertising convention.63 In 1973, Poultry International devoted a special edition to egg marketing. `[T]he industry,' it noted, `must learn to merchandise eggs . . . being prepared to employ every available gimmick that will tempt the housewife to leave the supermarket with more eggs in her shopping basket than she originally intended to buy.'64 Typical responses to this imperative may be seen in the visual design of a number of egg boxes from this period.
A Griffin Farm box shows stylised eggs and wheat, emphasizing the commonalities of rural production; on the Deans Farm box, a woman holds a basketful of eggs, playing directly on commonplace nostalgic associations of small-scale production and women as poultry-keepers.65 While Deans had indeed begun life as a small-scale family producer, it was now part of a vertically integrated industrial operation, having been bought out in 1969 by the feed manufacturer Dalgety; small-scale production remained significant in the United Kingdom, but the single-handed `farmer's wife' was largely a thing of the past. While marketers strove to make their preferred interpretation of the egg more visible, the realities of production had become correspondingly invisible, as the new technical regime necessitated moving the birds behind locked doors. Harrison's account focused on changing relations in space and trust as specialization for profit took hold. Her tone was often satirical:

In the windowless houses there are automatic switches which can be set to give the birds the exact amount of light they need for maximum egg production and a complicated system of lighting patterns exists to guide the farmer to this happy state.66

Harrison's detailed account was underpinned by extensive reference to the trade literature, in which discussions of possible cruelty were already well established. Blount, for instance, had included a chapter entitled `Hen Batteries: Are They Cruel?' in his Hen Batteries 13 years earlier. The Royal Society for the Prevention of Cruelty to Animals (RSPCA), he noted, believed that
they were, but had nevertheless brought very few prosecutions. Their hostility he characterized as due to a focus upon the hen's confinement and removal from fresh air and sunshine, the lack of a nest to lay in, inability to express behaviours such as scratching and dust bathing, and inability to mate, fly, walk or run. Blount answered the charge of cruelty by arguing that the system was actually superior to the natural scheme of things: the birds were given a regular balanced diet, were safe from predators, free from external and internal parasites (lice and worms), and did not experience bullying. Mortality was lower than in other systems, the hens' weight higher and their laying greater (over 200 eggs each), which he interpreted – along with the birds' apparently contented chirps – as a sign of happiness.67 Blount characterizes the egg producer alone as fit to judge the issue: these matters needed no explanation for the `average person familiar with this method of poultry keeping [who finds that] the results, in terms of egg production, are highly satisfactory'. It is the `townsman [sic], who is not familiar with either hens or batteries',68 who questions the system:

Now, you who eat an egg for breakfast, or like scrambled eggs for tea, or egg and bacon for supper, may never have given a thought as to whether your eggs were produced by healthy, happy hens. Yet any system of poultry husbandry which gives the attendant the chance of seeing each and every bird every day is better than that which allows him only to view them en masse – i.e. as a flock.69

Blount's ignorant `townsman' is a purely rhetorical device: it is most unlikely that those not engaged in the trade would have read or been aware of his specialized textbook.
Blount's argument flies in the face of his own evidence that behavioural problems did arise in his system: if the birds are not given well filled troughs, they become frustrated and panic.70 Moreover, much of the force of the argument depended on the supposed skill of the `poultryman', a figure whose relevance declined with increasing systematization and rising labour costs.71 Published in 1951, Blount's exposition may actually be read as standing on the tipping-point of the change in agriculturalist assumptions about Nature's malleability. While supportive of the battery, it is in many ways grounded in a much older discourse, in which the most efficient way to manage poultry for human purposes is sought in the light of knowledge of the birds' natural habits, behaviour and needs. In The Poultry Book (1873), William Bernard Tegetmeier, a leading expert in his day who was cited at least until 1930,72 stated explicitly that the mass rearing of poultry for profit would always fail because the birds need a large range – at least an acre per 100 fowls – while winter confinement results in `the want of fresh air, natural green [sic] and insect food, [which produces] unfortunate results'.73 For Tegetmeier and those writing in the period immediately after him, Nature was essentially immutable. We should not imagine that this position was immediately, or even swiftly, overturned by artificial lighting and related technical interventions. The early innovators and
disseminators of these principles stressed that their very artificiality was a source of danger. As Harry R. Lewis put it:

. . . when the birds are put under lights, they are kept under a more or less artificial condition, an unnatural and unseasonable condition at least. Hence any faulty method of management, or even very simple mistakes in their care, due to carelessness or thoughtlessness, will react immediately in a very disastrous way.74

Despite the history of domestication, which Lewis Holloway argues `makes an understanding of [farmed] animals' existence outside of variable human–nonhuman relationships impossible',75 this sense that Nature was to some degree non-malleable persisted through the first half of the twentieth century, throughout the initial institution of electrification. Edward Brown, for instance, observed that birds `lit' for the first time in a given year often failed to perform as well in the year following, and that this `indicates that the system imposes a strain upon the body and its functions . . .. This is a further example of the limitations imposed by Mother Nature'.76 Similarly, when, in 1938, Walter Brett stated that the `beauty of winter night-lighting is that it produces the extra eggs without forcing the birds or, indeed, without doing them any harm whatever',77 this was on the basis that the birds' health mattered to the farmer: because it impacted upon productivity, it could not be ignored. Ten years later, Leonard Robinson, in recognizing artificial light as `the most powerful of all so-called stimulants', stressed that:

. . . great care must be taken not to give the birds too much of it. Lighting can be overdone especially with prolific layers, resulting in thin-shelled eggs, shell-less eggs, prolapses and layer's cramp.78

Harrison's argument drew extensively on related observations in the technical literature.
However, much of the force of Animal Machines came from its criticisms of the shift in agro-industrial thinking that had increasingly emerged in the few years since the writings of Robinson and Blount. With automated intensive production established and expanding, concerns about the limitations of `(Mother) Nature' receded, and the birds came to be interpreted purely as managed components. Harrison cites H. R. C. Kennedy of Farmer and Stockbreeder, who, in a 1962 discussion on the sudden death of laying birds, observes that

. . . the modern layer is, after all, only a very efficient converting machine, changing the raw materials – feedingstuffs – into the finished product – the egg – less, of course, maintenance requirements.79

The fact that Harrison was able to stoke public concern simply by quoting the trade's own literature emphasizes the extent of the discursive shift within the industry. Deborah Fitzgerald and Donald Worster have observed that farmers in North America and Europe underwent a shift to specialization as their products became standardized. Despite the emergence of new hybrid plant and animal breeds, the number of species actually farmed has
decreased sharply as agriculturalists, along with wholesalers and packers, answer market demands for consistent, reliable industrial output.80 Birds selected from this limited range of standard options have consistently been treated, within the trade, as both products and components of a technological system. For example, a 1963 Poultry International advertisement for the WELPLINE 937 hybrid strain, which carried the Registered Trademark symbol, documents the strain as having `topped all other nationally recognized strains' with a production average of 233 eggs per bird annually, `growing livability' at 96.9 per cent and `laying house livability' of 92.4 per cent.81 The financial profits to be made by the producer were increasingly highlighted in this type of advertisement. Hence, a 1967 advertisement in Poultry International asks: `What can KimberCHIKS do for me?' `KimberCHIKS' were products of the US corporation Kimber Farms' specialized breeding programmes designed to create the ideal hen for egg producers, amenable to step lighting patterns; `me' was the specialist farmer, as indicated by the accompanying photograph of a man outside a large-scale intensive concern. Answering its own question, the copy states:

Plenty – If you're an egg producer. The poultryman with KimberCHIKS in his houses has the advantage of one of the longest intensive research and development programs in the industry . . .. KimberCHIKS lay premium quality eggs months longer than most strains.82

In 1967, the Canadian breeder Shaver's campaign for its Starcross 288 told stories of what farmers might spend their profits on, highlighting the breed's world-wide reach and independence of Nature:

A bird that came from Shaver put an Irish lass on wheels. Kitty O'Hara's father is an egg producer in Ireland. His flock? Shaver Starcross 288. The result? His profits have increased sufficiently to buy Kitty a new bike, his son a new motorcycle, and his wife a new washing machine.
And that's not just the luck of the Irish. In more than 50 other countries around the world the Shaver 288 is a proven money-maker. Regardless of climate.83

`The farmer', we should note, may be addressed as the head of a family rather than a company representative, but is assumed male: by the 1960s, this assumption was quite general in breeders' and specialists' approaches to potential customers. Women's presumed role, as in other automated industries, lay in unskilled work associated with the mass production of eggs via battery systems – work such as egg packing and candling. The hens themselves were no less changed. KimberCHIKS, 288s and their competitors were standardized birds, bred for a standardized and specialized industry, within which Nature was now assumed to be something that could be controlled. This change, above all, fuelled the unease that drew such fiercely critical responses as Animal Machines.
CONCLUSION
As Deborah Fitzgerald notes, `commodity crops and livestock' have tended to define agricultural history writing, while less prestigious animals and crops have languished largely unstudied.84 By focusing on the initially marginal case of egg production, this chapter has drawn attention to the role of status, representation and expectations in agricultural industry. As `poultry-keeping' gave way to `poultry farming', the activity was reorganized, reinterpreted and re-gendered. Increasingly, to the marketers, egg producers were presumptively male technocrats, responsive to a rhetoric of standardizing efficiency; rather neatly, at the same time, the consumers were presumptively female (`the housewife'), responsive to `Nature' in a way not previously identifiable. In the nineteenth century, the consumer did not trust the `natural' egg (hence the injunction to test for freshness). To producers, Nature was seen chiefly as a limiting force: the production process in practice showed a lack of malleability that rendered the promotion of `modern' farming, standardization and quality control doubtful – a view echoed well into the twentieth century, but finally abolished in the bright early days of the all-electric battery. Standardization, as it took hold, was married to a rhetoric of trust and consumer confidence, as the BEMB encoded its guarantee in the visible form of the Lion Mark. The key aim of the egg-marketing initiative, of course, was to stimulate demand, assuming a reliable constituency of housewives to be served by systematic supply. Eggs were now trustworthy artefacts, undeviating from the norm of stable year-round production grounded in electrotechnical manipulation. Nature, the presiding assumption now ran, could be manipulated and transformed to meet economic need.
The shift was a classic example of the denaturing processes of industrial production, or what Goodman, Sorj and Wilkinson have termed the `appropriation' of Nature.85 This appropriation served well the marketing initiatives of the BEMB and others; yet, the unpalatable `animal machine', which underpinned it, plainly did not. In its place were substituted appeals to the unappropriated nature of a bygone age. In recent years, we have begun to see more critical consumer engagement with the `natural'. Recurrent public health scares (most notably surrounding bovine spongiform encephalopathy and, as specifically regards eggs, the UK salmonella outbreak of the late 1990s) have eroded consumer trust in industrial processes in the production of food. There is also increasing ethical concern for the welfare of the animals involved in capital-intensive agriculture: in some ways, Harrison's precepts are being acted upon more than 40 years after the publication of Animal Machines. In viewing animals purely as components, moreover, the agroindustrial approach abolishes the local and familiar: hybrid breeding typically draws heavily on plants and animals not indigenous to the regions in which they are to be cultivated.86 Consumers have turned to considerations not only of the means of production, but of provenance ± the `local' being seen as authentic and trustworthy, and perhaps suggestive History of Technology, Volume Twenty-eight, 2008
of small-scale practice – to guarantee the safety of what they are eating.87 The result has been an increasing market for `niche' (organic, free-range, locally sourced and occasionally rare-breed) eggs, and a decline in sales of the standardized, homogeneous product that epitomized trustworthiness in an earlier period.88 Though price is still the primary consideration for many consumers (depending to a great extent, of course, on socio-economic position), others now demand the material reality of the `Nature' they have long seen in advertising imagery. One of the ironies of this situation is that today's marketing and distribution systems, part consequence and part creator of the standard egg, play a crucial role in rendering this change visible and sustainable. Supermarkets trumpet their `niche' credentials, achieved through the locking-in of producers whose inevitable goal is to service the wayward, systems-averse consumer as systematically as that consumer will tolerate.89 The Lion Mark was revived in 1998, more to secure connotations of healthy, salmonella-free flocks than of Britishness; concerns over `food miles' do, however, play a role in its retention, while increasing customer access to more specific source information – in some cases, to the level of the individual farm – relies on the tremendous informatic sophistication of present-day retail logistics. This, of course, is not to deny the fundamental point: in a highly standardized society, a `niche' egg is valued more highly than a conventional egg (not only culturally, but in straightforward monetary terms) to the extent that it can successfully be presented as non-conforming, un-engineered and marking a departure from the standard `ideal'. Technological standardization, whatever its reputation in other areas, is (and will remain) profoundly double-edged as far as the representation of Nature as food is concerned.

ACKNOWLEDGEMENTS
I wish to acknowledge the help and advice of the following in the preparation of this chapter, and for reading early drafts: Prof. Martin Hewitt, Dr Di Drummond, Prof. Graeme Gooday and Dr James Sumner.

Notes and References
1. P. Atkins, `Laboratories, Laws and the Career of a Commodity', Environment and Planning D: Society and Space, 2007, 25(6): 967–89, on 975.
2. D. Worster, `Transformations of the Earth: Toward an Agroecological Perspective in History', Journal of American History, 1990, 76(4): 1087–106, on 1094; W. Boyd and M. Watts, `Agro-Industrial Just-in-Time: The Chicken Industry and Postwar American Capitalism', in D. Goodman and M. J. Watts (eds), Globalising Food: Agrarian Questions and Global Restructuring (London, 1997).
3. Worster, op. cit. (2), 1093.
4. M. E. Turner et al., Farm Production in England 1700–1914 (Oxford, 2001), 115.
5. J. Thirsk, Alternative Agriculture: A History from the Black Death to the Present Day (Oxford, 1997), 189–91.
6. `Poultry keepers' first appear in the decennial census records for 1861, but are quickly moved, with `poultry feeders and fatteners', into the broader category `dog, bird, animal keeper, dealer', and then to `farmers and graziers', a yet broader category mainly devoted to crop-growers. Only in 1921, reflecting increasing specialization, does poultry farming regain its own category.
7. Departmental Committee on Poultry Breeding in Scotland, `Poultry Breeding in Scotland', 1909, xxxvi, cd. 4616: 24, para. 23.
8. J. Bourke, `Women and Poultry in Ireland, 1891–1914', Irish Historical Studies, 1987, 25: 293.
9. Bourke, op. cit. (8), 289.
10. Bourke, op. cit. (8), 310.
11. Feathered World Year Book and Poultry Keeper's Guide for 1937 (London, 1937), 11–14.
12. Royal Commission on Agriculture, 1895, xvi, c. 7623; `Poultry Breeding in Scotland', Report on Egg Marketing in England and Wales, MAFF, Economic Series number 10, report, 1926.
13. M. Winstanley, `Industrialization and the Small Farm: Family and Household Economy in Nineteenth-Century Lancashire', Past and Present, 1996, no. 152: 157–95, on 180.
14. Winstanley, op. cit. (13), 175–8.
15. Winstanley, op. cit. (13), 180, 182–3.
16. L. Robinson, Modern Poultry Husbandry (London, 1948).
17. E. Brown, Poultry Breeding and Production, Volume 2 (London, 1929), 633. For Brown, see B. A. Holderness, `Intensive Livestock Keeping', in E. J. T. Collins (ed.), The Agrarian History of England and Wales, Volume 7 (Cambridge, 2000), 487–94.
18. `Feeding Poultry at Swanley Horticultural College in Kent: Female Student at Work', Museum of English Rural Life (hereafter MERL), Voysey collection: MERL P DX281 PH3/408, n.d.
19. Robinson, op. cit. (17), 2–4; S. H. Gordon and D. R. Charles, Niche and Organic Chicken Products: Their Technology and Scientific Principles (Nottingham, 2002), 8.
20. Marketing and Preparing for Market of Foodstuffs: Sixth Report: Poultry and Eggs, Imperial Economic Committee, 1928, x, Cmd 3015, 17; Robinson, op. cit. (17), 5.
21. Robinson, op. cit. (17), 7.
22. Robinson, op. cit. (17), 5; K. E. Hunt and K. R. Clark, Poultry and Eggs in Britain, 1966–1967 (Oxford, 1967), 55.
23. Brown, op. cit. (18), 645.
24. `Reprint of Leaflet (No. 31) Issued by the Department of Agriculture and Technical Instruction for Ireland', Journal of the Board of Agriculture, 1904, 10: 529.
25. I. Beeton, Mrs. Beeton's Book of Household Management (London, 1861, facsimile 2000), 823.
26. `Poultry Breeding in Scotland', op. cit. (12), 95–7.
27. E. G. Warren, `Co-operation in Relation to Marketing . . . in Britain', in E. Brown (ed.), Official Report, Second National Poultry Conference, Reading, 1907 (London, 1907), 321.
28. History of Advertising Trust Collections (hereafter HAT), OM(L) 21: BEMB advertisement, `Self Service and Supermarket', 1963, February: 23160/1.
29. HAT, OM(L) 21: BEMB advertisement, Glasgow Daily Record, 1961, 13 January: 18254/3.
30. D. C. Sackman, `Putting Gender on the Table: Food and the Family Life of Nature', in V. J. Scharff (ed.), Seeing Nature through Gender (Lawrence, Kansas, 2003), 187.
31. Marketing and Preparing for Market of Foodstuffs: Sixth Report: Poultry and Eggs, Imperial Economic Committee, 1928, x, Cmd 3015; HAT, OM(S) 15, South African Eggs 12; 1931 for Evening Standard, 12 and 13 November.
32. HAT, OM(S)08; BEMB advertisement, Aberdeen Evening Express, 1959, week ending 21 February: 5909/15. `A Statement of What the Board Does for the Wholesale and Retail Trades', 1960, 14 May: 12912/4; `The Egg Industry', 1961, 18 February: 15700/3.
33. Gordon and Charles, op. cit. (19), 79–83, 184, 190–1.
34. C. A. Flatt, Poultry Keeping Do's and Don'ts (London, 1925), 42.
35. H. R. Lewis, `American Methods of Lighting and the Associated Feeding Problems', Transactions of the First World's Poultry Congress at the Hague-Scheveningen, Vol. 1: Papers and Communications (Hague, 1921), 77.
36. Gordon and Charles, op. cit. (19), 4, 10.
37. Brown, op. cit. (18), 648–9.
Karen Sayer
38. Feathered World Year Book, op. cit. (11), 75. It then went on to lay out the practical details for farmers.
39. Robinson, op. cit. (17), 241. Gordon and Charles, op. cit. (19), 7.
40. Robinson, op. cit. (17), 242–3.
41. Robinson, op. cit. (17), 256.
42. Robinson, op. cit. (17), 375.
43. W. P. Blount, Hen Batteries (London, 1951), 30.
44. `What Light Is Right?', Poultry International, 1964, July: 60.
45. R. Harrison, Animal Machines: The New Factory Farming Industry (London, 1964), 37.
46. `What Light Is Right?', op. cit. (44), 60–1.
47. Lewis, op. cit. (35), 80–1.
48. Feathered World Year Book, op. cit. (11), 76.
49. W. Brett, Poultry-Keeping Today: Pictured and Explained (London, 1938, reprinted 1941), 121–2.
50. Blount, op. cit. (43), 31–8.
51. See, e.g. G. J. N. Gooday, The Morals of Measurement: Accuracy, Irony, and Trust in Late Victorian Electrical Practice (Cambridge, 2004).
52. R. Borlace Matthews, Electro-Farming: Or the Application of Electricity to Agriculture (London, 1928), 302.
53. Blount, op. cit. (43), 33–4, emphasis in original.
54. Blount, op. cit. (43), 33.
55. Borlace Matthews, op. cit. (52), 295–8.
56. J. Weller, History of the Farmstead: The Development of Energy Sources (London, 1982), 169–73.
57. Weller, op. cit. (56), 169.
58. Weller, op. cit. (56), 164, 169–71; N. Harvey, A History of Farm Buildings in England and Wales (Newton Abbot, 1984), 211, 216; L. T. Newman, `The Electrification of Rural England and Wales', unpublished Master's thesis, Institute of Agricultural History and Museum of English Rural Life, Reading, 1991, 200.
59. Blount, op. cit. (43), 31–8.
60. Gordon and Charles, op. cit. (19).
61. R. Carson, `Foreword' to Harrison, op. cit. (45); R. Carson, Silent Spring (London, 1963).
62. Harrison, op. cit. (45), 42.
63. Harrison, op. cit. (45), 56–9.
64. Poultry International, 1973, March: 6.
65. MERL 76/216/1-6. An image of these boxes is available online at www.rhc.rdg.ac.uk/olib/images/objects/70s/76_216.jpg. The boxes are recorded as `used up to 1976'.
66. `About Us', n.d., 2003, Deans Foods website, archived at web.archive.org/web/20050207180143/http://www.deansfoods.com/pages/aboutus.htm. The vertical aggregation was reversed in a management buyout, resulting in a newly independent Deans Foods in 1991; the company now trades as Noble Foods, specializing in niche production that includes a large free-range and organic component.
67. `About Us', op. cit. (66), 43.
68. Blount, op. cit. (43), 245–8.
69. Blount, op. cit. (43), 245.
70. Blount, op. cit. (43), 247.
71. Harrison, op. cit. (45), 52.
72. Harrison, op. cit. (45), 44–7.
73. His influence within the poultry industry also extended to the United States. W. Gilbey, Poultry-Keeping on Farms and Small Holdings (London, 1904), iii; Brown, op. cit. (18), 777; M. A. Jull, Poultry Husbandry (New York, 1930), 28.
74. W. B. Tegetmeier, The Poultry Book: Comprising the Breeding and Management of Profitable and Ornamental Poultry (London, 1873), 9–10, 384.
75. Lewis, op. cit. (35), 77.
76. L. Holloway, `Subjecting Cows to Robots: Farming Technologies and the Making of Animal Subjects', Environment and Planning D: Society and Space, 2007, 25(6): 1041–60, on 1044, and see also 1045.
The Ecology of Standardized Poultry Technology
77. Brown, op. cit. (18), 649.
78. Brett, op. cit. (49), 121.
79. Robinson, op. cit. (17), 242, 243.
80. Harrison, op. cit. (45), 50, original emphasis.
81. Worster, op. cit. (2), 1101–3; D. Fitzgerald, `Eating and Remembering', Agricultural History, 2005, 79: 393–408.
82. Poultry International, 1963, October: 1.
83. Poultry International, 1967, January: 33.
84. Poultry International, 1967. The advertisement added that the birds, through consistency of lay in random sample tests, produced `the highest net income' for 1963–64 and 1964–65 according to the USDA.
85. Fitzgerald, op. cit. (81), 393–4.
86. D. Goodman, B. Sorj and J. Wilkinson, From Farming to Biotechnology: A Theory of Agro-Industrial Development (Oxford, 1987).
87. Worster, op. cit. (2), 1093–4, 1100.
88. J. Murdoch, T. Marsden and J. Banks, `Quality, Nature, and Embeddedness: Some Theoretical Considerations in the Context of the Food Sector', Economic Geography, 2000, 76(2): 107–25.
89. Cf. Murdoch et al., op. cit. (88), 110, 120.
90. At the time of writing, the website of Noble Foods (formerly Deans Foods) (www.noblefoods.co.uk) is particularly instructive in demonstrating all the major niche categories as volume commodities underpinned by sophisticated marketing, notably as regards box design.
History of Technology, Volume Twenty-eight, 2008
Contents of Former Volumes

TWENTY-FIFTH ANNUAL VOLUME, 2004 (9780826471871)
MARTIN WATTS and D. JOHN LANGDON, An Early Tower Windmill? The Turweston `Post Mill' Reconsidered.
NICK HAYES, Prefabricating Stories: Innovation and Systems Technology after the Second World War.
JAN VAN DEN ENDE, Impacts of Technology Reassessed: A Retrospective Analysis of Computing Technology.
IAN INKSTER, Introduction: Indisputable Features and Nebulous Contexts: The Steam Engine as a Global Inquisition.
MARK ELVIN, Some Reflections on the Use of `Styles of Scientific Thinking' to Disaggregate and Sharpen Comparisons between China and Europe from Sòng to Mid-Qing Times (960–1850 CE).
ROB ILIFFE, Comments on Mark Elvin.
H. FLORIS COHEN, Inside Newcomen's Fire Engine, or: The Scientific Revolution and the Rise of the Modern World.
ALESSANDRO NUVOLARI, The Emergence of Science-Based Technology: Comments on Floris Cohen's Paper.
GRAHAM HOLLISTER-SHORT, The Formation of Knowledge Concerning Atmospheric Pressure and Steam Power in Europe from Aleotti (1589) to Papin (1690).
KENT D. DENG, Why the Chinese Failed to Develop a Steam Engine.
DAVID WRIGHT, Response to Kent Deng.
CHUN-YU LIU, Response to Kent Deng.
RICHARD L. HILLS, The Development of the Steam Engine from Watt to Stephenson.
IAN INKSTER, The Resources of Decisive Technological Change: Reflections on Richard Hills.
NATHAN SIVIN and Z. JOHN ZHANG, Steam Power and Networks in China, 1860–98: The Historical Issues.
R. BIN WONG, A Comment on Sivin and Zhang.

TWENTY-SIXTH ANNUAL VOLUME, 2005 (9780826489708)
HOWARD DAWES and CHRISTOPHER DAWES in collaboration with GERRY MARTIN and ALAN MACFARLANE, Making Things from New Ideas.
NICHOLAS GARCÍA TAPIA, The Twenty-One Books of Engines and Machines Attributed to Pedro Juan de Lastanosa.
RICHARD L. HILLS, Richard Roberts' Contributions to Production Engineering.
MICHAEL PARIS, Promoting British Aviation in 1950s Cinema.

Special Issue: Engineering Disasters
R. ANGUS BUCHANAN, Introduction.
DAVID K. BROWN, Maritime Disasters and the Law.
DEREK PORTMAN, Suspension Bridges.
R. ANGUS BUCHANAN, The Causes of the Great Sheffield Flood of 1864.
P. R. MORRIS, Semiconductor Manufacturing and Chemical Contamination within Silicon Valley.
BRENDA J. BUCHANAN, Gunpowder: A Capricious and Unmerciful Thing.
JOHN H. BOYES, Engineering Disasters: Thoughts of a Factory Inspector.
PETER STOKES, Fatigue as a Factor in Aeronautical Disasters.
DAVID ASHFORD, Design Compromises in the Space Shuttle.
HENRY PETROSKI, Past and Future Bridge Failures.

TWENTY-SEVENTH ANNUAL VOLUME, 2006 (9780826495990)
HANS ULRICH VOGEL, The Diffusion and Transmission of the Rotary-Fan Winnowing Machine from China to Europe: New Findings and New Questions.
DAVID PHILIP MILLER, Watt in Court: Specifying Steam Engines and Classifying Engineers in the Patent Trials of the 1790s.
ÁNGEL CALVO, Business and Geopolitics in the International Transfer of Technology: Spanish Submarine Cables, 1849–1930.

Special Issue: The Professional Identity of Engineers, Historical and Contemporary Issues
IRINA GOUZEVITCH and IAN INKSTER, Introduction: Identifying Engineers in History.
ANDRÉ GRELON, French Engineers: Between Unity and Heterogeneity.
MARIA PAULA DIOGO and ANA CARDOSO DE MATOS, Being an Engineer in the European Periphery: Three Case Studies of Portuguese Engineering.
ANTONI ROCA-ROSELL, GUILLERMO LUSA-MONFORTE, FRANCESC BARCA-SALOM and CARLES PUIG-PLA, Industrial Engineering in Spain in the First Half of the Twentieth Century: From Renewal to Crisis.

Ordering in the UK/Rest of the World
For information on ordering, please contact: Customer Services
Tel: +44 (0) 1202 665432
Fax: +44 (0) 1202 666219

Ordering in North America
For information on ordering, please contact: Customer Services
1 800 561 7704 (toll-free number)