Information Technology in Government: Britain and America [1 ed.] 0415174821, 9780415174824, 9780203267127

This book explores the huge impact of information technology on the governments of the UK and US over the last 20 years.


English. 225 pages. 1998.



Information Technology in Government

Information technology has been largely ignored in approaches to the study of government. This ground-breaking study establishes information technology as a vital feature of public administration, exploring in detail the real impact it has had on the central governments of Britain and America over the last 20 years. It reveals the hidden world of the two governments’ information systems, the struggle to keep pace with technological development, and the battle to fulfil the grand promises of their political masters.

Helen Margetts resituates information technology at the centre-stage of public policy and management. It is now a vital part of any government organisation, opening new policy windows and enabling a vast range of tasks to be carried out faster and more efficiently. But it has introduced new problems and challenges. Four in-depth case studies demonstrate how information systems have become inextricably linked with the core tasks of governmental organisations. The key organisations examined are:

• the Internal Revenue Service and the Social Security Administration in the US;
• the Inland Revenue and the Benefits Agency in the UK.

Information Technology in Government separates the rhetoric of politicians from the reality of the ‘information age’. Using the evidence of the last two decades, Helen Margetts questions those who attribute radical transformational powers to information technology, pointing to a more ambiguous, complex role for it to play in the state of the future. The response of both American and British governments to this complexity has been to contract out information technology development. This is the first book to cover this process, which has drawn a new array of major players—global computer services providers—into government. Control over these new players, who themselves control the new ‘lifeblood’ of government, will be a major task of governments in the future.
Helen Margetts is Lecturer in Politics at Birkbeck College, University of London.

Routledge Research in Information Technology and Society

Books published under the joint imprint of LSE/Routledge are works of high academic merit approved by the Publications Committee of the London School of Economics and Political Science. These publications are drawn from the wide range of academic studies in the social sciences for which the LSE has an international reputation.

1. Reinventing Government in the Information Age: International Practice in Public Sector Reform
   Edited by Richard Heeks
2. Information Technology in Government: Britain and America
   Helen Margetts

Information Technology in Government Britain and America

Helen Margetts

London and New York

First published 1999 by Routledge
11 New Fetter Lane, London EC4P 4EE

This edition published in the Taylor & Francis e-Library, 2003.

Simultaneously published in the USA and Canada by Routledge
29 West 35th Street, New York, NY 10001

© 1999 Helen Margetts

The right of Helen Margetts to be identified as the Author of this Work has been asserted by her in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
Margetts, Helen.
Information technology in government: Britain and America / Helen Margetts.
p. cm.
Includes bibliographical references.
1. Information technology—Government policy—United States. 2. Information technology—Government policy—Great Britain. 3. Administrative agencies—United States—Data processing. 4. Administrative agencies—Great Britain—Data processing. 5. Administrative agencies—United States—Communication systems. 6. Administrative agencies—Great Britain—Communication systems. I. Title.
HC110.I55M248 1999
303.48′33′0941—dc21 98–24908 CIP

ISBN 0-203-02094-4 Master e-book ISBN

ISBN 0-203-20803-X (Adobe eReader Format) ISBN 0-415-17482-1 (Print Edition)

Para Pedro Mascuñán Pérez, con amor

Contents

List of tables  viii
Acknowledgements  ix
Abbreviations  x
Introduction: information technology and a dream of the future  xiii
1 Computerising the tools of government? The spread of information technology  1
2 Innovation, expenditure and control: governmental responses to information technology  31
3 Computerisation in the UK Benefits Agency  52
4 Systems modernisation in the US Social Security Administration  71
5 Computerisation in the US Internal Revenue Service  89
6 Computerisation in the UK Inland Revenue  109
7 New players: government contracting of information technology  125
8 The ambiguous essence of the state of the future  162
References  187
Index  204

Tables

2.1 Federal information technology obligations by agency, 1982–90  38
2.2 Information technology obligations as a percentage of discretionary budget in the US federal government, 1993  39
2.3 Information technology expenditure as a percentage of running costs in the UK central government departments, 1993 and 1995  40
3.1 Estimated administrative cost of main benefits in the Benefits Agency  65
3.2 Cumulative costs and savings for the Operational Strategy  66
3.3 Errors in income support payments, 1989–94  66
4.1 The SSA’s information technology budget versus other agencies, 1991  85
7.1 Commercial services as a percentage of IT expenditure in the US federal government, 1982–93  136
7.2 Commercial services as a percentage of IT expenditure across major departments in the US federal government, 1982–92  136
7.3 Defence expenditure as a percentage of IT expenditure in the US federal government, 1983–93  137
7.4 Commercial services as a percentage of IT expenditure by department for British central government, 1993–5  146

Acknowledgements

This book is based on research carried out for H. Margetts (1996) Computerisation in Central Government in Britain and America 1975–1995: Policy-Making, Regulation and Contracting in Information Technology, PhD thesis, London School of Economics, which won the Walter Bagehot Prize of the UK Political Studies Association for the best thesis in Government in 1996. The research involved 61 interviews with government officials in the US and the UK. Some of these interviews are quoted here, anonymously.

I would like to give warm and heartfelt thanks to the following people: Christopher Hood and Patrick Dunleavy for being the best imaginable supervisors, always perceptive, patient, generous and encouraging; all the US and British officials who gave time so generously to be interviewed and made the research a pleasure; Patrick Proctor, from Routledge, for being so patient; Janet Goodwin, for noble proof-reading; Douglas Moncrieff, without whom the thesis would never have been finished; Klaus Goetz, Anni Parker, Brian Barry, Matthew Smith, Joni Lovenduski, David Margetts, Richard Taylor and Stephen Lowenstein, for advice, friendship and distraction; my colleagues at Birkbeck College for being excellent people to work with when you are trying to finish a book; my students at Birkbeck College for prompting new ideas and lively discussions; members of EGPA’s permanent study group on informatization in public administration, 1991–1998, for stimulating comment and debate; Leslie Willcocks, William Heath, Nicole Kroon, Ivan Horrocks, Guy Fitzgerald and Paul Frissen for generous supply of expensive or otherwise unobtainable documentation; Martin and Elenore Ross, who became my US family during the course of the research and made me welcome in the most beautiful apartment in Washington; and finally, Pedro Mascuñán Pérez, to whom this book is dedicated.


Abbreviations

ADP  automatic data processing
ASSIST  Analytical Services Statistical Information System, UK
BT  British Telecom
CALS  computer-aided acquisition and logistics support
CAT  computer-aided transcription
CBMS  Central Budget Management Systems, US
CCA  Central Computer Agency, UK
CCTA  Government Centre for Information Systems (formerly CCA), UK
CGO  Committee of Governmental Operations, House of Representatives, US
CICA  Competition in Contracting Act, US
CIO  Chief Information Officer
CITU  Central Information Technology Unit, UK
CODA  Computerisation of Schedule D, UK
COP  Computerisation of PAYE, UK
CRS  Congressional Research Service, US
CSC  Computer Sciences Corporation
CSD  Civil Service Department, UK
DHHS  Department of Health and Human Services, US
DHSS  Department of Health and Social Security (now DSS), UK
DISSC  Departmental Information Systems Strategy Committee, UK
DITA  Departmental Information Technology Authority, UK
DSS  Department of Social Security (formerly DHSS), UK
DTI  Department of Trade and Industry, UK
DVLC  Driver and Vehicle Licensing Centre, UK
DVOIT  Information Technology Arm of the Department of Transport, UK
EDI  electronic data interchange
EDS  Electronic Data Systems
EPA  Environmental Protection Agency, US
ESRC  Economic and Social Research Council
FBI  Federal Bureau of Investigation, US
FCO  Foreign and Commonwealth Office, UK
FIRMR  Federal Information Resources Management Regulation, US
FTS2000  US Government data and telecommunications network
GAO  General Accounting Office, US
GCN  Government Computer News, US
GDN  Government Data Network, UK
GSA  General Services Administration, US
GTN  Government Telecommunications Network, UK
HMSO  Her Majesty’s Stationery Office (now The Stationery Office), UK
HUD  Department of Housing and Urban Development, US
IBM  International Business Machines
ICL  International Computers Limited
ICT  information and communications technology
IDC  International Digital Communications
IRM  information resource management
IRMS  Information Resource Management Services, General Services Administration, US
IRS  Internal Revenue Service, US
IS  information systems
ISD  information systems development
ISM  information systems management
ISP  Information Systems Plan, US
IT  information technology
ITO  Information Technology Office, Inland Revenue, UK
ITSA  Information Technology Services Agency, Department of Social Security, UK
JANET  Joint Academic Network
JURIS  Justice Retrieval and Inquiry System, US
MAFF  Ministry of Agriculture, Fisheries and Food, UK
MoD  Ministry of Defence, UK
NAO  National Audit Office, UK
NARA  National Archives and Records Administration, US
NASA  National Aeronautics and Space Administration, US
NIST  National Institute of Standards and Technology, Department of Commerce, US
NPR  National Performance Review, US
OIG  Office of the Inspector General, US
OIRA  Office of Information and Regulatory Affairs, Office of Management and Budget, Executive Office of the President, US
OIRM  Office of Information Resource Management, US
OMB  Office of Management and Budget, Executive Office of the President, US
OPM  Office of Personnel Management, US
OPS  Office of Public Service (formerly OPSS), Cabinet Office, UK
OPSS  Office of Public Service and Science, Cabinet Office, UK
OTA  Office of Technology Assessment, US
PAC  Public Accounts Committee, UK
PFI  Private Finance Initiative, UK
PICT  (ESRC’s) Programme on Information and Communications Technology
PITCOM  Parliamentary Information Technology Committee, UK
PNC  Police National Computer, UK
POST  Parliamentary Office of Science and Technology, UK
PRA  Paperwork Reduction Act, US
PRP  President’s Reorganization Project, US
SCSS  Service Centre Support System, US
SEC  Securities and Exchange Commission, US
SMP  Systems Modernization Project, Social Security Administration, US
SSA  Social Security Administration, US
SSC  Social Security Committee, UK
SSI  supplemental security income
TMAC  Treasury Multiuser Acquisition Contract, US
TSIS  Taxpayer Service Integrated System, US
TSM  Tax Systems Modernization Project, Internal Revenue Service, US
TTP  trusted third party
TUPE  Transfer of Undertakings (Protection of Employment) Regulations
UKCOD  UK Citizens Online Democracy
VFM  value for money

Introduction Information technology and a dream of the future

When we look into the ambiguous essence of technology, we behold the constellation, the stellar course of the mystery. Martin Heidegger

Information technology has been heralded as a new fairy godmother for government. Politicians in the 1990s compete to associate themselves with the magical effects of her wand, which they claim will wave in the new age of government and an end to the ills of administration. In the United States, Vice President Al Gore, who first coined the phrase ‘information superhighway’ in 1977, announced: ‘With computers and telecommunications, we need not do things as we have in the past. We can design a customer-driven electronic government that operates in ways that, 10 years ago, the most visionary planner could not have imagined’ (NPR, 1993:121–2). He promised to connect every classroom, library, hospital and clinic to a ‘national information infrastructure’ by the year 2000.

Politicians in Britain rushed to follow suit. In 1995 both Ian Taylor, Conservative Minister for Trade and Industry, and Chris Smith, then Shadow Minister for National Heritage, described themselves publicly as the ‘British Al Gore’. Michael Heseltine, Deputy Prime Minister, claimed that information technology would bring ‘the nervous system of a new order’ to education (Guardian, 14 November 1995). After the 1997 election, the new Minister for the Civil Service, David Clark, was quick to promise that ‘Information technology will underscore our new Government programme. I look on information technology as the vehicle of opportunity for realising our aims in a coherent and dynamic manner’ (Computing, 26 June 1997). At the first party conference after the 1997 election, the new Prime Minister Tony Blair announced that within five years, 25 per cent of dealings between the public and the government would be done electronically, through television, telephone or computers.
And in 1998 he followed up an earlier promise that ‘the information superhighway’ would form a key plank of educational policy (Herald, 21 March 1997) with the announcement of a deal with the head of Microsoft, Bill Gates, to place computers with Internet


connections in each of the 32,000 schools in Britain by 2002 (Sunday Times, 19 October 1997).

These promises of politicians fitted neatly into a more general theme of modernisation running through politics in both countries. Politicians’ speeches were peppered with the words ‘new’ and ‘modern’, and with dazzling images of the twenty-first century. In the US, President Clinton proclaimed, ‘Now we work to modernize government…so that again it is a force for progress and freedom in a new era’ (speech at the 75th anniversary of Time magazine, 3 March 1998). With the National Performance Review in 1993, Clinton and Al Gore committed themselves to a major programme of administrative reform to ‘reinvent government’. Launching the reforms, Al Gore stated: ‘It is time to get rid of the old way of managing the federal government.… What’s needed instead is an entirely new model of leadership based on clear sets of principles, flexibility, innovation, accountability and customer service.’ In Britain, the Conservative government of the 1980s introduced a series of radical management reforms, summarily described as ‘the New Public Management’, which placed the Civil Service in a state of ‘permanent revolution’, as one senior civil servant put it. And somewhere during the 1990s, the Labour party became ‘New Labour’. At the first party conference after the 1997 election, Tony Blair told delegates:

Modernisation is not the enemy of justice, but its ally.…Progress and justice are the two rocks upon which the new Britain is raised to the heights. Lose either one and we come crashing down until we are just another average nation, scrabbling around for salvation in the ebbing tide of the 20th century. That is why we changed the Labour party—to make New Britain.
(Press Association, 30 September 1997)

The new Labour government was soon being described as ‘the government that promises in almost every breath to modernise Britain’ (The Times, 11 March 1998).
Transformation through information technology was at the heart of this modernising theme, as politicians from both countries invoked images of the ‘information age’ to furnish their vision. They presented a picture of a new technological era, in which ‘old-style’ government was increasingly anachronistic and inadequate. In the US, Al Gore proclaimed: ‘We are determined to move from industrial age government to information age government.… the failure to adapt to the information age threatens many aspects of government’ (NPR, 1993:121–2). Clinton inserted a eulogy to the benefits of information technology into most of his speeches: ‘The high-tech information age means that all large bureaucracies will be restructured, that more decisions will be pushed down to the grassroots’ (speech on the Education Technology Initiative, 15 February 1996). As the twenty-first century approached, such claims increased in scope: ‘The new century is now barely


700 days away. It will be many things new: a time of stunning leaps of science; a century of dizzying technology; a digital century; an era in which the very face of our nation will change’ (speech on the 75th anniversary of Time magazine, New York, 3 March 1998). In Britain, enthusiasm for the computer revolution and disparagement of existing styles of government were fuelled by an influential pamphlet written by the Head of Labour’s Business Unit, Information Age Government: Delivering the Blair Revolution (Byrne, 1997), which outlined a cornucopia of benefits to be gained from information technology and castigated the Civil Service for being ‘quite unfit’ to deliver the Prime Minister’s vision of an ‘information age’ society.

The source of these claims that information technology has the power to transform government can be traced to both academic and popular writing on the subject. The wider impact of information technology on society has been called a major transition to the information age, a ‘revolution’, a ‘transformation’, a ‘third wave’. Ever since 1973, when Daniel Bell heralded The Coming of Post-Industrial Society, attributing a ‘decisive role to information technology in transforming the industrial structure’, there has followed a burning rash of texts describing the transformation of society due to the development and use of information and communications technologies. Popularist writers such as Alvin Toffler claim that technological innovation will bring the end of government, where ‘power has shifted away from the old hierarchs [sic], creating a far more fluid, confusing system, with continually shifting centres of power’ (Toffler, 1990:255). Management gurus enthusiastically pronounce the coming of the Intelligent Enterprise (Quinn, 1992) and The End of Bureaucracy and the Rise of the Intelligent Organization (Pinchot and Pinchot, 1994).
In contrast, more pessimistic observers believe that information technology will lead us through the ‘Control Revolution’ (Beniger, 1991) to the ‘Computer State’ (Burnham, 1983): a technology-induced totalitarian centralisation, with malign governments using massive databanks to control the people. All the above are united in their belief that information technology will bring transformation, to government as to all other areas of life.

This book challenges the view that information technology will transform government, even bring the end of government. It establishes information technology as a vital, policy-critical feature of contemporary public administration. It demonstrates that not only does information technology shape government; government also selects and shapes information technology. Only by studying this complex two-way interaction over a substantial period of time will the effects on government of the computer revolution be established. In this sense the book is historical, covering the period from the 1970s to the 1990s and examining the implications of information technology decisions of the past for policy decisions of the future. It investigates the reality of what has happened, rather than accepting the wild predictions of politicians and commentators. It compares two central governments: Britain and the United States. In both countries, expenditure


on information technology by central government has risen from virtually zero to around 2 per cent of total budget over the last 40 years.

The book contains eight chapters. The first chapter charts how information technology has spread across the departments and agencies within both governments. Information systems now lie deep in the heart of the tools of government policy, transferring money, authority and information, replacing organisational functions and creating new requirements for technical expertise. The second chapter looks at governmental responses to this influx of technology: innovation, expenditure and internal control and co-ordination. Information technology brings new pressures for government to innovate, especially in response to counter-innovations developed outside government. Such innovation is expensive, leading to new pressures for internal controls and a new need for co-ordination between departments.

The following four chapters focus on two policy sectors in the two countries, taxation and the delivery of social security benefits, where information technology has been used since the early days of computers, where its use has increased dramatically over the time period and where strong claims have been made for its transformative power. In these two policy sectors, a significant chunk of the civilian state is covered in detail, illustrating the new range of risks involved when central bureaucracies embark on large-scale, technology-based projects. The four distinct stories of organisations battling to modernise themselves in line with the dreams of their political masters show that information technology brings problems as well as solutions to public administration. Chapter 7 looks at an important element of both governments’ development of information systems: the contracting out of large tranches of information technology work.
The search for ‘organised expertise’ which information technology necessitates has drawn new players into government, in the form of huge global private sector computer services providers. Finally, the last chapter reviews the arguments of those who attribute radical transformational powers to information technology. It finds that the modernisation theme of government in the 1990s has its origins in the modernist tradition: the belief that government is in a state of continual progress towards some identifiable point. Modernist predictions of how government will operate in the ‘information age’ are re-examined in the light of evidence presented in the intervening chapters. This chapter presents an alternative perspective on the ‘state of the future’, challenging assumptions of previous work. It argues that information technology has brought government to the ‘ante-postmodernist’ era, where information systems form a vital, ever-changing part of the state, but where no overarching transformation may be identified. Information technology may not be suited to play the part that Clinton and Blair have written for it in their vision of the government of the future. Past experiences of governments in the US and Britain suggest a more mysterious role for information technology, as the new ‘ambiguous essence’ of the state (Heidegger, 1977:33).

1 Computerising the tools of government? The spread of information technology

Any consumer of newspapers, television, even of government documentation would know more about the potential for information technology in government than the reality. The most futurist developments are the most newsworthy. In America and in Britain, publications called Government Computer News and Government Computing pump out stories of the latest developments and profiles of entrepreneurial systems managers. It would be difficult to guess from either publication that the UK Foreign Office and Treasury still use telegrams; that in 1995 there were few linkages by electronic mail across UK departments; or that in 1997 the US Internal Revenue Service was still served by some of the crumbling systems developed in the 1960s.

The aim of this chapter, therefore, is to chart the actual, rather than potential, information technology developments within the two governments up until the 1990s. Widespread usage of the term ‘computer revolution’ has obscured the extent to which actual change has been, and continues to be, incremental, with implementation of only a fraction of the potential changes now possible due to technological advance. This chapter investigates the centrality of information technology to the basic functions of government by considering its relationship with the four ‘tools’ of government policy identified by Christopher Hood (1983): nodality, authority, treasure and organisation. Policy implementation depends upon information technology; computers also play a role in policy formulation, as new technological developments open up new policy possibilities. In the first two sections of the chapter, differences in the extent to which information systems have shaped the functions across the four ‘tools’ are explored. Information technology cannot by itself change the inherent nature of the tools of government policy. But by using information technology, organisations may change the way that they use these tools in two ways.
First, by re-engineering the way that existing tasks are carried out; and second, by creating new tasks and opening up policy opportunities that were not previously possible. Changes in government use of computer technology have depended upon technological developments in general, broadly defined as follows. The 1950s marked the pioneer stage, where computers were used for scientific


calculations and massive routine administrative tasks. The 1960s brought the development of large mainframe computers: large centralised computer systems, with the main processors held at regional computing centres, usually communicated with by ‘dumb’ terminals without processing power. The 1970s was an era of applications development, with more variation in function, so that programmers, systems designers and developers played an important role. Terminals spread across departments in the 1970s; when prices of computers decreased dramatically in the 1980s, these were replaced by personal computers, with their own processing power and storage capabilities. Database technologies were also developed, providing a structured store of data, eliminating duplication and reflecting the nature of the data, rather than the needs of particular applications that processed them. By the mid-1980s automation in the manufacturing and service sectors was still largely composed of discrete applications of information technology, but the technological trend was very strongly towards integration. In the 1990s the capacity of personal computers increased dramatically and networks to link them together became available to most organisations. Private and public telecommunications networks played an especially important part in these developments, bringing changes to the availability and flexibility of information technology. These changes mean that the possibilities that information technology offers in the 1990s are considerably greater than merely providing a faster, larger, automated filing system. Organisations of all kinds have taken on new functions that would not be possible without such technology.
Information technology in private sector companies is now widely recognised as a crucial element in a company’s business strategy: ‘It is now a truism that information technology has transcended its established administrative support functions and has moved towards playing a more central role in business operations’ (Loh and Venkatraman, 1994). Recent notable developments have included Electronic Data Interchange (EDI): the exchange of trade transactions such as orders and invoices between trading partners, potentially reducing transaction costs and eliminating the duplication of effort involved in re-keying documents generated by computer in the first place. The 1980s brought a dramatic increase in use of the Internet, a global research network consisting of a loose confederation of interconnected networks providing services such as file transfer and electronic mail. Finally, virtual reality is an interactive, computer-generated, three-dimensional, immersive display. At the time of writing most applications are in the early stages of use (Schroeder, 1995). Within government there are very few applications, and virtual reality is most often used as a romantic analogy to describe the effects of existing types of information technology on public organisations (see Frissen, 1994a, 1994b, 1996b; Sabbagh, 1994). But research and development efforts have spread widely, and the potential for specific, discrete applications, especially within health and education, is likely to be dramatic in the future.


Effecting tools: how have they changed?

In The Tools of Government, Christopher Hood (1983) attempted to answer the question ‘What does government do exactly?’ Hood’s approach was to describe the tools that government continually draws upon, focusing on the point where government comes into contact with ‘us’, the world outside. He identified two basic distinctions between a government’s tools for dealing with the world outside government. First, he distinguished between ‘detectors’ (the instruments that government uses for taking in information) and ‘effectors’ (the tools that government employs to make an impact on the world outside). Second, he defined the four basic resources that government possesses by virtue of being government, and on which it can draw for detecting and effecting tools:

• nodality, denoting the property of being in the middle of an information or social network;
• treasure, denoting the possession of a stock of moneys or ‘fungible chattels’;
• authority, denoting the possession of legal or official power; and
• ‘organisation’, denoting the possession of a stock of people with whatever skills they may have.

This section tracks the progress of computers through each of the ‘resources of government’ identified by Hood (1983) and examines the role that computers have played in their utilisation from the 1980s to the 1990s. Effecting tools are those which the government uses to influence the world outside. They may involve the use of nodality, authority, treasure or organisation (further sub-divided here into organised expertise), or any combination of these.

Nodality

Hood (1983:21) described nodality as the property of being in the middle of a social network, enabling government to obtain a store of information or a panoramic picture. Government agencies are in a unique position both to demand information from citizens and to dispense information to them.
It might be expected, therefore, that information technology, as its very name suggests, would have a vital impact on government’s nodal resources, providing the potential to reshape the nature of citizen-government interactions. Effecting tools based on the nodal resource include bespoke messages, group-targeted messages and broadcast messages. Clearly, by reducing constraints of volume and distance, information technology increases the potential for broadcast messages. One way in which information technology has provided greater potential for government to divulge information to the outside world is through the Internet. The Internet is not strictly speaking a government-owned network; it is not owned or
administered by any organisation. But there is a voluntary body, the Internet Architecture Board, which approves shared standards and a voluntary users’ group, the Internet Engineering Task Force. Moreover the Internet still receives some funding from the US Department of Defense. By 1994 the Internet had over 4 million host computers, 69 per cent of which were in the United States (Sabbagh, 1994:21). By 1997 the number of people using the Internet was estimated at 60 million worldwide (Guardian, 19 July 1997), up from 25 million in 1994, a more than tenfold increase from 1990 (Business Week, 26 June 1994). Europe was calculated as at least one year behind the US in Internet take-up in an IDC survey (Computing, 23 October 1997) and a Tandem survey of nearly 100 of Europe’s largest companies found that the average UK spend on Internet was 2 per cent of the IT budget compared with 10 per cent in Germany and France (Computing, 2 October 1997). In parallel, the US government is considerably further advanced than the UK government in Internet usage. In the US by the mid-1990s over 100 separate federal government networks were linked up to the Internet and a vast range of government publications was available on-line, including the 1996 federal budget documents. The US House of Representatives made the full text and status of bills and resolutions being considered in Congress available on the World-Wide Web. Citizens could take a trip to the White House site, furnished with detailed photographs of its occupants, and e-mail the President or Vice President with their views on government administration (although it should be noted that such a communication is likely to receive a short e-mail message in return, stating that the President is pleased to have received the communication and will consider it in due course, with no promise of further response). 
In 1995, as part of the National Performance Review administrative initiative, the US Internal Revenue Service made all its forms and publications available to taxpayers through the Internet (2.5 million forms and publications were downloaded in 1996), thereby receiving a ranking by PC Computing Magazine as one of the top 101 Internet sites (NPR, 1995b:17). The US Social Security Administration in 1996 launched a service to allow customers to request a statement of their social security earnings on-line, along with an estimate of their future benefits. In 1997, the judge in the much-publicised ‘Louise Woodward’ case, in which a British nanny was accused of murdering her US employers’ child, caused much controversy by announcing that his awaited verdict (an overturning of the jury’s decision) would be released first on the Internet. When the verdict was announced, the website was unavailable for three hours after it crashed in response to massive usage. In Britain technology-aided changes to government’s use of the nodal resource have been slower to develop. Only in 1995 did the Prime Minister’s Office begin trials to connect Downing Street to the Internet (Press Association, 6 October 1995). Then Leader of the Opposition, Tony Blair, and several shadow ministers already had e-mail addresses in 1995, although a survey
by a Sunday newspaper received no replies from any of them (Independent on Sunday, 5 March 1995). A similar survey during the 1997 election campaign received replies from around 60 per cent of MPs (Guardian, 15 March 1997) and by this time www.royal.gov.uk would put you through to the Queen’s website. In 1997 David Clark agreed that a White Paper on freedom of information, to be published in 1998, would be accompanied by a discussion forum on the Internet, hosted by UK Citizens Online Democracy (UK COD); an unusual stage for consultation over legislation. However, there was to be no formal contract and no government funding for UK COD. By the end of 1997 Blair was considering launching an Internet version of the TV show Question Time, allowing citizens to question him on policy by e-mail for half an hour each month, although Computing magazine suggested that ‘there are concerns that Blair’s computer skills may not be adequate’ (Computing, 11 September 1997). Efforts to increase the nodality of the Labour party, as opposed to the government, have more chance of success. In 1998, the Central Strategy Unit at No. 10 Downing St announced the piloting of a computer database, called Agenda, to be continually updated during the day, to tell ministers what to say about government policies and to give instant rebuttal to negative stories. The system was based on the ‘Excalibur’ system used by the Labour party to counter-attack negative press reports during the run-up to the 1997 election; in government it was intended to allow ministers to have policy discussions via e-mail and avoid formal meetings minuted by civil servants. A Downing Street source said that Blair was determined to use the system to keep all his ministers ‘on message’ (Daily Telegraph, 23 January 1998). Thus in the US, information technology has been used more heavily and more enthusiastically to enhance the nodal resources of government, although the UK is starting to catch up.
But there is clearer evidence in the US for transformation in government’s communication with private sector organisations than with government-citizen interactions, a point noted by Clinton in 1997: ‘One of the most significant uses of the Internet is in the world of commerce’ (President’s message to Internet users, 1 July 1997). From 1996 the US Business Advisor became a one-stop Internet site for businesses to access information, services and transactions from the federal government. The US Census Bureau, which has produced publicly available printed reports since 1790, has produced census data on computer tapes since 1960, which have been made available to the public since 1980, but they could only be used by those businesses or citizens in possession of the complex tools necessary to analyse them. By the 1990 census the Bureau of the Census considered that the production of the census on laser disks would ‘revolutionize the analysis of local markets in the 1990s’ (Schwartz, 1989:1). According to the Bureau, the ‘most revolutionary technology’ of the 1990 census was the TIGER digital map boundary file, a digital street map of the country, with enormous potential for businesses to use for a variety of purposes, from market research to site planning and logistics: ‘The commercial
uses of TIGER will outweigh all of the commercial value of the census data itself’ (Schwartz, 1989:2). A file containing ZIP code data has been produced since 1980, paid for by a consortium of private companies, allowing opportunities for sophisticated marketing based on household type, race, sex, age and marital status. The reshaping of government’s relationship with commercial enterprises was evident in other computer developments in the US. The most ambitious federal information system under consideration in 1986, EDGAR, permitted the Securities and Exchange Commission to collect, process and disseminate over six million pages of securities filings each year by electronic means. All documents were to be made available to outside users through interactive computer networks operated by private sector companies. The SEC Chairman of the time observed that, at a cost of over $50 million through to 1991, EDGAR would increase the efficiency and fairness of the securities markets; accelerate access to the capital markets; enhance SEC’s ability to protect investors and to maintain fair and orderly markets; and accelerate SEC processing of corporate filings (CGO, 1986:3). Partly because increasing sophistication in information dissemination has been geared towards the private sector, the increased volume of information that is available through more sophisticated technology can actually detract from citizen accessibility. In 1988 the US Office of Technology Assessment (OTA) observed that:

At a fundamental level, electronic technology is changing or even eliminating many distinctions between reports, publications, databases, records and the like, in ways not anticipated by existing statutes and policies. A rapidly growing percentage of Federal information exists at some point in an electronic form on a computerized system as part of a ‘seamless web’ of information activities.
(OTA, 1988a:8)

As the production of comprehensible and customer-relevant data from the ‘seamless web’ (rather than obtaining the data itself) became the activity involving significant expense, the accessibility of government information for citizens was affected in two ways. First, the very surfeit of information and the ‘seamless’ aspect of the web made it more difficult for people to know what was available or how it might be translated into what they wanted to know. Second, commercial companies increasingly acted as an interface in presenting information in an accessible format. The contractor implementing the EDGAR system was expected to generate revenue through the sale of the data to external users, raising concern that public data previously available free or at minimal cost would become prohibitively expensive to the public. Other concerns were that dissemination arrangements would give companies monopoly control over public information. Much of the data on public expenditure previously published in printed form was now reprocessed and
repackaged by private sector management consultancies, for example ‘Federal Sources Inc.’, available only at a high cost. At a practical level, the dissemination problems presented by electronic data systems were significantly different from the problems presented by the distribution of government information in paper or other hard-copy formats. Paper documents could be reproduced and used relatively easily by anyone who could read, but the advent of electronic dissemination raised new equity concerns, since, to the extent that electronic formats had distinct advantages (timeliness and searchability), those without electronic access were disadvantaged. This problem raised the question of federal involvement in ensuring that citizens and public institutions had access to terminals and networks, for example to the Internet. In the US it also caused the enthusiastic modernist Republican Newt Gingrich, with a political programme otherwise notable for slashing social provision, to propose that a free lap-top computer be provided to all US citizens. The entry of UK governmental departments on to the Internet might have been expected to increase government’s ability to produce bespoke messages in response to direct enquiries from citizens. In April 1995 the minister for OPSS publicly announced his e-mail address in response to a written parliamentary question asked by Graham Allen, the Labour MP for Nottingham North (Information Technology and Public Policy, vol. 13, no. 3, Summer 1995). However, an altercation at a conference (the ESRC/PICT Conference on Information Technology and Social Change) in May 1995 illustrated how the apparent transformation of citizens’ interaction with government that comes when a citizen can contact the government directly by electronic means may be incomplete. 
A journalist two months earlier had sent perhaps the first electronic message to a British minister from a member of the public, in response to the publication of the e-mail address of the Minister for Public Services, John Horam. At the conference he asked the head of the CCTA, Roy Dibble (who had delivered a speech), why he had not received a reply; ‘now that citizens were talking to government, when was government going to talk to citizens?’ Roy Dibble replied:

Your questions are currently sitting on my desk. When the minister received your e-mail message, it was printed off and sent to me, by post. One of my staff has written to the relevant agency heads with a request for information. Their staff will prepare the information and send it to my office where it will be collated and returned to the Minister’s office, also by post. He will check the information and one of his staff will type it on to e-mail and transmit it to you.

Similar examples of the ‘skin deep’ nature of the transformation of government-citizen interactions can be found throughout the British government. An employee of one Next Steps Agency observed that all electronic messages sent to the agency were, in 1995, printed off and filed as
a matter of procedure. Furthermore, it should not be assumed that computerisation of the nodal resource automatically increases the quality of information produced. The Customs and Excise Department’s new Intrastat system for gathering EC trade statistics was implemented in January 1993, but was criticised by the Treasury Committee who said that UK trade statistics had become unreliable since the system had been introduced (Kable, 1994:161). However, there is a wide variation across departments in the extent to which technological transfer of information is used, and there is no doubt that for some tasks the transformation of the nodal resource has taken place. For example, the UK Department of Education has been innovative in Electronic Data Interchange (EDI). The first use of EDI for education was the transmission of teachers’ records from education authorities to the Ministry using Dialnet, a network dedicated to education which by July 1992 linked all 117 education authorities and by September had already cut errors in records. The head of development at the Ministry’s information technology department said that ‘if the number of grant-maintained schools jumps from the present 250 to, say, 2,500, the collection of statistics would become impossible using manual methods’ (The Times, 18 September 1992). The Central Statistical Office also used EDI to transmit data for retail price indices, production indices, trade figures and demographic data to the international statistical offices (The Times, 18 September 1992). In 1984 the higher education sector was the first to obtain dial-up access to the Internet with JANET, the UK’s largest computer network linking around 180 universities, higher education and research bodies to the Internet. Such developments are expensive, however.
And, especially with devolved financial accountability since the 1980s, smaller public organisations are sometimes unwilling or unable to make long-term investments in centrally operated technical initiatives to increase nodality. As one head teacher observed of a scheme to create a public network for schools (enthusiastically endorsed by Michael Heseltine, then Deputy Prime Minister): ‘With a budget of £85,000, a school like mine with only 40 pupils can’t afford to spend anything. This scheme won’t really help us’ (Guardian, 14 November 1995). In addition, as technological innovations become available, possibilities increase and so does the cost. The first version of JANET, the higher education network, was funded by higher education funding councils and was free to universities, but they will have to pay to use SuperJANET Phase III (16,000 times faster than the original JANET), scheduled for operation in February 1998 to cope with increased and more sophisticated workloads. In general, therefore, the development of information technology networks has not extended government’s nodal resource in either country as much as the claims of the modernist politicians might lead us to expect, at least for citizens. In 1995 citizens in both the US and Britain were far more likely to hear from their bank or other commercial organisations through technology-aided means than they were from their government. And differences across
countries remain resilient to change: the political culture of the US has traditionally been more open, with more communications between government and citizens and more printed information available; and this difference remains. In the US, with its culture of ‘open government’, intensive data collection and widespread dissemination are treated extremely seriously.

Authority

Authority denotes the ‘ability to command and prohibit, commend and permit, through recognised procedures and identifying symbols’ (Hood, 1983:54). Therefore, the key organisations wielding authority in the two governments are: the police forces; the immigration agencies; agencies with the authority to levy and collect taxes; and those that enforce licensing controls, such as the Driver and Vehicle Licensing Agency (Britain) and the Farmers’ Home Administration (US). Authority is most often wielded through the form of tokens: orders, bans, requisitions, vouchers, warrants, coupons, licences, quotas, certificates and tax forms. Information technology does not alter authority itself, a distinctive resource which some government agencies hold. But there is great potential for information technology to shape the exercise of authority by government agencies. In the US the Department of State, Customs Service and Immigration and Naturalization Service created IBIS, a shared system to assist law enforcement officials at US borders. A Justice Retrieval and Inquiry System, JURIS, is a nationwide automated legal research system accessed by 75 per cent of the federal justice community from over 700 sites. In the UK, in 1997, computerising the whole process of litigation, involving Net links and Web sites, e-mail and video-linked virtual courts was at the top of the new Master of the Rolls’ agenda to make justice ‘faster, cheaper and fairer’ (Guardian, 12 June 1997).
Quite simple techniques to use computer-aided transcription (CAT) to record court processes and provide judges with a continually scrolling, searchable stored record of all that was said have made a particularly great difference in the legal field, where recording techniques had not changed since the nineteenth century. Lord Woolf commented that there were now cases which could not be managed satisfactorily without CAT, which allows judges and solicitors to match the evidence being given to all other relevant documents and look for discrepancies, a process inconceivable in a paper-based case. Use of the authority resource by immigration agencies has also been greatly enhanced by information systems. The US Immigration Agency in 1995 created INSPASS, an automatic teller machine for frequent international travellers to insert an ID card, put their hands in a slot and (if recognised by the system) be cleared in 30 seconds. Each $35,000 inspection machine replaced 4.2 inspectors. In 1995 some 50,000 US travellers were cleared in this way on entering the US from abroad (Washington Post, 9 October 1995). In Britain before 1995 the process of immigration control was almost entirely
manual, involving every immigration officer checking the name of every non-EU passenger arriving at the control desk against an index of suspect persons in book form containing some 10,000 entries. The number of entries had to be limited to make the index relatively easy to handle, meaning that those included represented less than 2 per cent of the available information on offences and potential offenders. Following a survey in 1988 of information held on passengers from selected nationalities, the Immigration Service estimated that, in theory, approximately 5,000 more passengers might have been scrutinised more closely if the full range of information had been available at the control (NAO, 1995a:28). After ‘several years’ of trying to computerise the system (NAO, 1995a:27), one was finally implemented towards the end of 1995. By 1997 it was rumoured that scanners that could identify individuals from their hands or the iris of the eye, which were currently being developed by IBM and tested in Bermuda, might replace the passport as a border control (Observer, 20 April 1997). Computer systems have also had a dramatic impact upon policing strategies. By 1985 in Britain the Police National Computer (PNC) contained a list of all 33 million vehicles and car owners in Britain and details of all reported stolen cars, available to police throughout the country via a network of 900 computer terminals (BSSRS, 1985:10), allowing traffic police to have any car checked immediately (via a radio call to a headquarters with a terminal). The details on the PNC were, during the 1980s, updated from the primary source of the computer system at the DVLC, a special courier bringing the updated information via magnetic tape from the DVLC in Swansea to the PNC in Hendon. Thus the advent of the computer system meant that ‘most adults are now in effect legally required to register their address with the police and notify them of any change’ (BSSRS, 1985:13).
Another index on the PNC, the Stolen and Suspect Vehicle Index, contained a record of any vehicle that at some point had been identified by the police as suspicious. By 1983, of some 195,000 records about half were in a range of ‘suspect’ rather than ‘stolen’ categories, which included: suspected of being used in a crime incident; of long-term interest to the police; vehicle used for police purposes; blocked (information available over secure channels); seen or checked in noteworthy circumstances (BSSRS, 1985:14). BSSRS concluded in 1985 that:

It is a fair presumption that among the vehicles which find their way into these categories are a substantial number resulting from the surveillance of labour movement and political activists…information which can include rumour, hearsay and unsubstantiated suspicions.

The Wanted and Missing Persons Index is linked to the Criminal Names Index, enabling the police to have full access to a suspect’s Criminal Record Office file, something not normally available to a court of law until after a verdict has been given. In 1996, the Scottish Chief Inspector of Constabulary
said that computer systems were now crucial for successful policing and that new systems had helped push detection rates in Scotland up from 31 per cent in 1991 to 39 per cent in 1995–6 (Computing, 5 September 1996). In the US policing is a state-level affair and the impact of computer technology there has also been dramatic:

The computer’s impact in public organisations is perhaps best exemplified by its contribution in the law enforcement area. In a survey of 79 cities Dial found that law enforcement was the single most recurring computer application area, accounting for 15 per cent of all computer applications.
(Tien and McClure, 1986:553)

The 1990s brought a further collection of innovative information technology projects aimed at strengthening the use of the authority resource. In Britain in April 1995 a system to check DNA became operational, collecting the ‘genetic fingerprints’ of people charged with burglary, serious assault and sexual offences and matching them against blood, mouth cells, hair roots or semen collected from the scenes of crime. The system, due to match 135,000 samples in its first year, was the first of its kind in the world and was described by the Home Secretary as the ‘most significant scientific advance in crime-fighting since the introduction of fingerprints’ (Financial Times, 12 August 1995). By 1996 the system had been responsible for more than 100 successes (Computing, 30 May 1996). By February 1997, City of London police had implemented a surveillance camera system that read the number plates of all vehicles entering the City of London at specified entry points and automatically checked them against records at the central Police National Computer. Officers in a control room are immediately alerted if the car is suspect or if the occupants are wanted for questioning. The system can deal with up to 300,000 vehicles an hour.
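The essential logic of such a plate-checking system can be sketched in outline. This is an illustrative sketch only, not the PNC software itself; the plates, categories and function names below are invented for illustration:

```python
# Illustrative sketch: matching recognised number plates against a
# stolen/suspect vehicle index, as in the camera system described above.
# All index contents here are hypothetical.

def normalise(plate):
    """Strip spaces and upper-case a plate so camera reads match index entries."""
    return plate.replace(" ", "").upper()

# A tiny stand-in for the central index of stolen and suspect vehicles.
SUSPECT_INDEX = {
    "A123BCD": "stolen",
    "E456FGH": "occupants wanted for questioning",
}

def check_plate(plate):
    """Return the alert category for a plate, or None if the vehicle is clear."""
    return SUSPECT_INDEX.get(normalise(plate))
```

At a throughput of 300,000 vehicles an hour the critical requirement is that each check is a constant-time lookup against the central index, which a hash-keyed store such as the dictionary above provides.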
In 1995 the Home Office started spending money on a national automated fingerprint system which had been under discussion for 10 years and is now due to come on-line at the turn of the century. The system was the object of considerable controversy, receiving criticism from local police forces, 37 of which formed an independent consortium with IBM and Morpho to develop its own fingerprinting systems. Their system was operational for two years, until the consortium became disillusioned with IBM’s fulfilment of its contractual obligations and terminated the contract, with legal action ensuing on both sides. Local police stations issued a writ against IBM and reverted to a manual system for matching fingerprints, leaving themselves with a vastly reduced capability to trace fingerprints at a national level. Criticisms in the news media illustrated how, in the 1990s, once a technical solution is known to be available, it is felt that it should be used. Information technology has also facilitated innovation in the use of authority for tax collection. By 1995, the US Customs Service allowed people
to pay duties on imported goods by credit card. Five per cent of US citizens paid their taxes electronically in the 1990s and in 1996 some 26 million taxpayers phoned in their returns (although, because Congress required that all tax forms be signed with a pen, taxpayers still had to mail in paper returns to be processed in the old way (Business Week, 26 June 1995)). The UK Customs and Excise Department too has been innovative. The Department was one of the pioneers of remote electronic input from port traders into its computer systems in the 1970s, before electronic data interchange (EDI) got its name. The Department also developed fully automated VAT collection and drug control systems during the 1980s and started to develop new systems needed for changing trade and travel rules in the Single European Market. Other potentially transformative authority-based innovations have failed to fulfil expectations, due to the resilience of existing social and political relationships. In Britain there have been several attempts to introduce electronic tagging for prisoners, a move which some believed would revolutionise surveillance capabilities and ‘pave the way for the widespread use of curfew orders in Britain for the first time’ (Guardian, 6 May 1994). Prisoner tagging is an example of information technology being built into the surveillance process. Curfew orders for adult offenders were introduced under the 1991 Criminal Justice Act but were never used because, without electronic tagging, they were impossible to implement. Trials in 1989 ended after a steady stream of problems and equipment failures. The trials used electronic ankle bracelets which emitted a signal transmitted along a telephone line to a computer at the local police station. If the offender left the house and was more than a specified distance from the telephone, the circuit was broken and the computer recorded the ‘violation’. 
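The monitoring logic just described, in which the station computer records a violation whenever the tag's periodic signal fails to arrive, can be sketched as follows. The five-minute interval and the data shapes are assumptions for illustration, not details of the trial equipment:

```python
# Hypothetical sketch of the curfew-monitoring logic described above: the
# ankle bracelet sends a periodic signal down the telephone line, and the
# police station computer logs a violation for every expected check-in that
# fails to arrive (the circuit is broken once the wearer is out of range).

def expected_checkins(start_hour, end_hour, interval_minutes=5):
    """Minutes past midnight at which a signal is expected during curfew."""
    return list(range(start_hour * 60, end_hour * 60, interval_minutes))

def violations(expected, received):
    """Expected check-in times (minutes past midnight) with no signal."""
    received_set = set(received)
    return [t for t in expected if t not in received_set]
```

Note that the computer never observes the offender leaving; it can only infer absence from missing signals, which is one reason equipment failures in the trials registered as apparent violations.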
More than half the 50 defendants involved had committed further crimes or otherwise violated their bail conditions while tagged (Guardian, 31 May 1995). A spokesman from the National Association of Probation Officers stated:

Despite all the technical problems of preventing reoffending, Ministers continue to be obsessed with this technological wizardry…Ministers have failed to realise then and now that a punishment which might act as a deterrent for them does not work for many offenders who lead chaotic lives, may have mental health problems and drug and alcohol addictions, and have unsettled home lives.
(Guardian, 6 May 1994)

A Home Office report observed that there had been rapid growth in tagging in the US but warned that ‘in the US the need to sell electronic monitoring has led to aggressive marketing of competing devices with consequent claims of effectiveness which have not been substantiated’ (Guardian, 16 May 1994). Another potential authority-enhancing innovation is a national identity card, planned by the then Home Secretary Michael Howard in the mid-1990s. Civil liberties organisations were alarmed by
plans to use the Driving and Vehicle Licensing Agency’s database to implement such a card, described by a representative of Liberty as ‘the least accurate of those existing databases which cover a large part of the population’ (Computing, 29 August 1996). Dreams of using information systems to transform the political process through decentralisation (much favoured by President Clinton) have also brought disappointments in practice. Between 1980 and 1995 the US federal government spent nearly $2 billion helping the 50 states computerise their enforcement of child support laws. Many states suffered major cost overruns and some of the systems developed were inoperable: ‘I think it is one of the saddest disappointments I’ve ever seen. It is a huge amount of money and very little has been accomplished’ (Division Director of General Accounting Office quoted in the Washington Post, 15 October 1995). In Maryland efforts to computerise child support and welfare programmes were disastrous, with development costs rising to $100 million and a further $30 million payment to a contractor who threatened a lawsuit after being dismissed. In California the state’s auditor reported that the new welfare computer systems alone would cost about $1 billion ($455 million more than the original estimate) and might never manage to accommodate the high volume of transactions and records they needed to handle. Most state bureaucracies failed because they ‘lacked technological expertise, were burdened by outmoded procurement rules and had fragmented decision-making processes’ (Washington Post, 15 October 1995). The problems did not relate solely to child support; for example, many states experienced similar set-backs with the development of vehicle registration systems. In 1994 an auditor estimated that California had wasted $50 million on a Division of Motor Vehicles computer system. 
Thus information technology has special potential to increase use of the authority resource, although not all innovations have been successful. For those who fear the rise of the authoritarian state through technologically enhanced use of authority, developments such as fingerprint systems, DNA databases and the Police National Computer may be alarming. Undoubtedly there is a potential for vastly strengthened political control through the use of increasingly sophisticated policing technologies. But it should be noted that in general, innovations have occurred in discrete applications and have largely developed in isolation from each other. The PNC, for example, contains only information of national interest: local police forces often tailor criminal information to their own requirements and there is little standardisation across local systems.

Treasure

Treasure denotes government’s stock of ‘fungible chattels, in the sense of anything that may be freely exchanged’ (Hood, 1983:40); moneys or money-like substances. The treasure-processing systems of government were among

14 Information technology in government

the earliest to be computerised: 'In a pattern characteristic of most government organisations as well as private businesses, initial non-science applications were primarily in the area of finance and fiscal operations' (PRP 1978b:5). During this period the Internal Revenue Service (IRS) computerised their tax collection procedures, the Bureau of Accounts started automatically disbursing cheques, and one of the first federal payroll systems was installed by the Federal Bureau of Investigation (FBI). By the 1990s the massive central database of the SSA was used in conjunction with a freephone telephone network that allowed any citizen anywhere in the US to make a claim by telephone. In Britain also, the earliest developments were in financial applications: in payroll sections, followed by accounting systems. The public utilities' customer suites became large-scale financial applications which followed the same pattern and became databanks for use by other departments. The Business Statistics Office of the Department of Trade and Industry established a register of businesses in 1970, with 'great potential for the role of governments in managing the economy and as a means of developing models to aid the Treasury's economic planning functions' (Lamb, 1973:127). Such models have become ever more sophisticated through to the 1990s. Information technology has also introduced completely new ways of processing treasure through the creation of electronic markets. For many private sector companies, the best thing about the Internet was its facilitation of electronic commerce: carrying out transactions between companies and consumers. Concerns over security and control, however, meant that by 1995, while 35 per cent of companies were planning to sell products on the Internet, only 5 per cent were actually doing so (Business Week, 26 June 1995). In the financial sector, banking procedures were transformed during the 1980s.
By 1995 a raft of companies were developing their own forms of electronic money, known as 'e-cash' (Business Week, 12 June 1995); that is, money that moves along multiple channels largely outside the established network of banks, cheques and paper currency overseen by the Federal Reserve: 'Digital money is the ultimate—and inevitable—medium of exchange for an increasingly wired world…the biggest revolution in currency since gold replaced cowrie shells' (Business Week, 12 June 1995). Such a development was predicted to loosen governments' and central banks' control of money flows, weakening governments' ability to monitor and tax: 'over the long haul, this is going to lead to the separation of economy and state' (Bill Frezza, President of Wireless Computing Associates, Business Week, 12 June 1995). E-money can be easily sent in and out of a country undetected, facilitating money laundering on a grand scale and setting up enormous potential problems for the Bank of England or the Federal Reserve, largely responsible for traditional money regulation. In 1997 Barclays became the first UK bank to introduce electronic money when it launched its Internet shopping mall, BarclaySquare. The electronic money was for small-value purchases where the transaction costs of using credit cards were too high: Barclaycard customers

Computerising the tools of government?

15

are able to charge up to £10 on their credit card and have an equivalent value downloaded on to an electronic wallet on a personal computer. In comparison with the private sector, however, transformation of government’s treasure-processing capabilities through information technology has been undramatic. The lack of automated handling of treasure by government organisations highlights a growing gap between the potential and the actual in government use of information technology. The 1990s brought a realisation that the extent to which treasure was dispensed automatically was diverging from the methods used by other financial organisations with huge customer bases, such as banks. In the US by 1994 the National Performance Review highlighted the electronic transfer of funds as an area under-utilised by government agencies: For 15 years, electronic funds transfers have been widely used. They cost only 6 cents per transfer, compared with 36 cents per check. Yet each year, Treasury’s Financial Management Service still disburses some 100 million more checks than electronic funds transfers. We still pay about one federal employee in six by check and reimburse about half of travel expenses by check. Only one-half of Social Security payments— 60 per cent of all federal payments—are made electronically.…Only 48 per cent of the Veterans Affairs Department’s payments are made electronically. Fewer than one in five Supplemental Security Income payments and one in ten tax refunds are transferred electronically. We have only begun to think about combining electronic funds transfers for welfare food stamps, subsidies for training programs, and many other government activities. (NPR, 1994:121) The world’s first cash dispenser had been launched nearly thirty years earlier, in London in 1967 (Computing, 26 June 1997). Such findings gave the modernist practitioners of the Clinton administration reason to be disappointed with government practice. 
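The scale of the gap the NPR identified can be checked with a back-of-envelope calculation from its own figures (36 cents per cheque, 6 cents per electronic transfer, and some 100 million more cheques disbursed each year than transfers); the sketch below is purely illustrative:

```python
# Back-of-envelope estimate of the annual saving from shifting the
# "excess" 100 million cheques to electronic funds transfer, using
# the unit costs quoted in the NPR report (1994).
COST_PER_CHEQUE = 0.36        # dollars per cheque, per NPR
COST_PER_TRANSFER = 0.06      # dollars per electronic transfer, per NPR
EXCESS_CHEQUES = 100_000_000  # cheques disbursed beyond the number of transfers

saving_per_item = COST_PER_CHEQUE - COST_PER_TRANSFER
annual_saving = EXCESS_CHEQUES * saving_per_item
print(f"${annual_saving:,.0f} per year")  # → $30,000,000 per year
```

On the NPR's own unit costs, simply shifting those excess cheques to electronic transfer would save on the order of $30 million a year, before counting the many payments that remained paper-based.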
The NPR report observed that while private financial transactions with banks had been increasingly enhanced and widened through electronic transfer, government agencies still contained many examples of 'paper chases'. The Food Stamp Program involved 'billions of bits of paper that absorb thousands of administrative staff years'; each month 3 billion food stamps were printed and distributed to 10 million households, who redeemed them through 210,000 authorised food retailers. The retailers carried stacks of coupons to 10,000 participating financial institutions, which then exchanged them with Federal Reserve banks for currency. The Federal Reserve banks counted the coupons (already counted several times) and destroyed them; the administrative cost of this system was almost $400 million per year. Criticism was also directed at the Veterans Benefits Administration by Congress and Veterans' groups for investing $140 million in a computer modernisation programme that had


not shortened the agency’s five-month lag in processing benefits claims (Washington Post, 9 October 1995). In Britain, as in the US, there has developed a realisation that further innovation possibilities for the processing of treasure exist, although the realisation has not been so humbly acknowledged in governmental documentation. Dramatic possibilities for policy innovations, in benefits distribution and the collection of taxes, were under consideration by the 1990s and the possibility of introducing smart cards was a frequently recurring news item; however, the introduction of ‘treasure’-based innovations such as ‘one-stop’ shops, integrated benefit systems and the merging of the tax and social security systems were all restrained by the existence of the discrete benefit-specific computer systems developed in the 1980s (described in Chapter 3). By the mid-1990s the British government announced plans to automate post offices and introduce a benefit payment card to replace order books (Information Technology and Public Policy, vol. 13, no. 2, Spring 1995:139), but by 1998 the Benefits Agency’s attempt to introduce such a system had still not succeeded. As in the US there were still many technically available yet currently unrealised possibilities for electronic transfer of treasure. However, there are examples in both countries of treasure-processing computer systems which facilitate new policy options not previously possible. Electronic data interchange (EDI) can be used to create markets in areas where market relationships were hitherto restricted by transactional limits. Privatisation of the electricity company, creating for the first time a market among rival suppliers on a single grid, can be viewed as a policy that could not be implemented without information technology. 
Foster (1992:73) argued that when the UK national electricity grid was set up in the 1920s there was no alternative to monopoly provision: only the complex computer systems then unavailable could have provided the basis for a market in electricity on a common grid (Hood and Margetts, 1993:21). Similarly, in the US, the General Services Administration has replaced previous long-term purchasing agreements with electronic bulletin boards (enabling suppliers to check the deals negotiated between other vendors and government agencies and adjust their prices accordingly), using information technology to get closer to the characteristics of a spot market. By February 1995, 12 UK departments had also used electronic data interchange for procurement, although principally for advertising tenders. Government organisations do not just dispense treasure; since all government organisations are funded through public moneys, they all process treasure with accounting and budgeting systems, and these systems are now computerised in all but the smallest organisations. From the 1950s to the 1990s financial systems spread throughout the bureaucracy in both countries until each department and agency was peppered with financial management, budgeting and accounting computer systems. Information technology has thereby introduced completely new risks into the management of treasure, even in agencies where technology plays only a minor role in the


organisation’s core activities. Even a small computer system can have a highly significant impact when it ceases to operate. The UK Foreign and Commonwealth Office, one of the least automated of government departments, was severely incapacitated when its small accounting system crashed disastrously in 1989, causing the FCO to produce the most serious qualification of an appropriation of accounts the Comptroller and Auditor General had ever made, a net imbalance of £5.3 million and £26.4 million posted to ‘dump accounts’. The system transpired to have been without backup during the eight years in which it had been in operation (Margetts and Willcocks, 1993:15). Similarly when the US Pension Benefit Guarantee Corporation’s new accounting system crashed in 1988, after an unsuccessful modification aimed at implementing legislatively mandated changes, it was inoperative for two years. The system was crucial, accounting for all incoming premiums, and without it the US General Accounting Office would never have been able to audit their financial statements (as they are statutorily required to do); a GAO official observed that without the accounting system the corporation had ‘just been in too big a mess to even audit—so it has never been done.… Information systems like this are the lifeblood of agencies now.’ Furthermore, any less dramatic problems that occur in the processing of treasure tend to be more systemic than with manual systems. Mistakes in the calculation of benefits, for example, are more likely to be due to a software error than a mis-calculation by an individual member of staff, allowing one mistake to be replicated many thousands of times. Organisational capacity Organisation is the fourth tool of the government—‘a label for a stock of land, buildings and equipment, and a collection of individuals with whatever skills they may have, in government’s direct possession’ (Hood, 1983:72). 
The organisational resource is treated distinctly from the other three: ‘It is perfectly possible to derive nodality, treasure and authority from organisation rather than the other way round’ (Hood, 1983:72). As might be expected, the role that information technology plays in the utilisation of this tool of government is also rather different from its relationship with the three resources discussed so far. With organisational capacity, information technology to some extent takes over and reshapes the role that organisation formerly played. First, information technology has ‘replaced’ bureaucracy through the reduction in staff necessary to carry out operations. In both countries staff savings have been used since the 1960s to justify expenditure on computer projects. In Britain up until the mid-1970s, when the volume of services provided publicly was growing, new services were being developed and procedures becoming more complex, computers were seen as a way of overcoming the problems of recruiting trained clerical staff. In fact the early use of computers often produced large savings in addition: 2,500 staff saved by departments by March 1968, with a saving of 13,000 anticipated


from systems already in operation or at an advanced state of planning (CSD, 1971:11)—although the CSD observed that 'these figures must be treated with great reserve, and we would doubt whether in the event they will be fully realised'. By 1980 the emphasis had switched from increasing capacity to reducing staff, and nearly all major computer projects within British government during the 1980s were at least partially justified through the money they would save by reducing the number of staff the organisation subsequently needed to employ, although after initial reductions had been made, promises of further staff reductions became increasingly difficult to realise (see, for example, the case of the UK Benefits Agency described in Chapter 3). Another effect of information technology on organisation is the ability to link spatially distinct locations, removing the logic of situating an organisation's administrative operations on one site. Many transformative effects have been attributed to this facility by modernist writers (see Taylor and Williams, 1990, 1991; Bellamy and Taylor, 1994; Frissen, 1994a, 1994b, 1995). Thus the front office and back office functions of some UK social security offices have been separated and back office functions relocated to areas of high unemployment, although the economic and social benefits of this arrangement (also described in Chapter 3) have been disappointing to date. All the impacts of computer technology described so far have been effected through computer systems used in civilian agencies. But the organisational resource is perhaps epitomised by the operation of military bureaucracies; in fact, Hood (1983:72) summarised 'organisation' as betokening capacity and capability: 'armies in government's own service instead of mercenaries'. The military agencies are where we have seen the wildest dreams of the information society advocates, in the form of advanced technological development and space-age fantasies.
Military agencies in both countries were enthusiastic users of all the possibilities and impossibilities that computers offered. This development was in line with military dependence on other types of technological development, from the horse to the tank to gun weaponry to nuclear weapons to ‘star wars’ technology. But some commentators have highlighted computers as particularly applicable to military operations: If any technoscience is crucial to postmodern war, it is not nuclear, but computer.…Computers are simply the right tools for military tasks.… Computers not only make nuclear technoscience possible, they form the underlying basis and rationale for most postmodern war doctrines, politics and weapons. (Gray, 1989:44) By the 1990s, simulation exercises had almost completely replaced live exercises in the training of soldiers (Government Computer News, 27 May 1996).


Since the First World War the US Pentagon has given priority to technological innovation in a 'strategy of high technology', involving futurists, scientists and science-fiction writers as well as military and civilian government staff. Expert systems were developed to predict enemy activity, to track and filter information, and even to issue orders. Satellites were used to direct individual artillery rounds and send messages between commands. Pilot state monitoring was intended to include systems that read the pilot's brain waves, followed eye movements and tested the conductivity of sweaty palms. By gauging the pilot's mood, the computer would know how to communicate with the pilot, or even when to take over the plane if deemed necessary. Information systems are now attached to every kind of military tool: 'Smart artillery shells, smart torpedoes, smart depth charges, smart rocks (scavenged meteors collected and then "thrown" in space), smart bombs, smart nuclear missiles and brilliant cruise missiles' (Gray, 1989:54). It should be noted, however, that Gray also points out that 'utility alone explains neither the urgency nor magnitude of the US effort in computing' and that fascination with war games has repeatedly led Defense Department planners to contemplate computers for inappropriate roles. While military agencies continue to develop innovative possibilities for high-technology war, the computer systems actually used in military operations are not always so sophisticated. Head (1982) pointed to an article in Computerworld (23 June 1980:24):
Indeed Kennedy (1989:112) has noted that in spite of the US defence agencies' major role in the establishment of the US information technology industry, one embarrassing aspect of government-sponsored research into weapons systems was 'the very mediocre record military research and development had in actually delivering working weapons systems that in any way fulfilled their original mission'. And in May 1996 the Pentagon hosted a training exercise involving 53,000 British and American troops and 144 heavy-lift aircraft, at a cost of nearly $17 million, in order to overcome the concern of senior commanders that 'troops will lose touch with the physical dimension of war fighting' (GCN, 27 May 1996). In Britain the defence agencies have also developed both high-technology instruments of war and less dramatic administrative support systems. The MoD initiated one of the most important public sector EDI projects, the computer-aided acquisition and logistics support system (CALS), which originated in the US Department of Defense and aimed to cut out paper from the entire arms procurement process. However, as in the US, many of the systems actually used by the defence agencies have been disappointing,


with claims from a high-ranking army officer that 'MoD projects are often so slowly completed that equipment is in danger of being obsolete before entering service' (Computing, 21 August 1997). In 1997 a vital Royal Navy communications system was revealed to have been delayed for more than seven years by problems over procurement. The more prosaic administrative support computers in the defence agencies also illustrated the risks associated with huge-scale technical projects which increasingly have to incorporate a wide range of systems developed throughout a long history of computing. By 1991 the MoD operated some 25 mainframe computer centres, hundreds of other systems and thousands of microcomputers, responsible for about 30 per cent of central government expenditure on information technology products and services (NAO, 1991b). The Department's expenditure on buying in computer hardware, software, maintenance and consultancy services for support information technology rose from £24 million in 1980–1 to £130 million in 1989–90, with additional salary costs for 4,000 information technology staff of about £75 million. In 1984, a report by the Department's Chief Scientific Adviser stated that the Department had 'long been at the forefront in the use of computing and telecommunications' (NAO, 1991b:7), but was critical of the extent to which the Department extracted maximum benefit from the technology of the 1980s, especially in the ability of systems to communicate, blaming the Department's method of planning isolated solutions to meet particular local needs. Efforts to overcome this problem in the 1990s included a logistics system called 'Chots' which linked 36 previously stand-alone logistics systems used by 30,000 service personnel (Kable, 1994:43).
The MoD was also developing an automation scheme for the head office, consisting of over 11,000 terminals linked to 37 sites via a network: 'It has taken so long to implement that its technology…now inevitably looks outdated' (Kable, 1994:43). In 1992 the Public Accounts Committee criticised the project for spiralling costs and security problems, and in 1995 an MoD newsletter conceded the system was cumbersome and out of date. At one point the Ministry was dealing with a 'virtual boycott from users who had become used to the ease and user-friendliness of off-the-shelf systems' (Computing, 30 May 1996). The Secret Services have also been both attracted to and disappointed by information technology. The cost of MI5's new headquarters rose from £85 million to £227 million in 1996, largely as a result of massive information technology expenditure. MoD sources claimed that the Defence Intelligence Service's Trawlerman intelligence system, which cost the MoD £65 million, was 'absolutely useless' and that millions were also wasted on a high-technology spy ship, HMS Challenger, which cost some £211 million but was only operational for three years between 1981 and 1990 (Computing, 12 September 1996). In spite of massive investment in satellite systems, MI6 missed the peace talks between the PLO and Israel, even though a BBC team was filming them. GCHQ never succeeded in breaking high-level Kremlin ciphers.


Organised expertise

Some writers have broken down the organisational resource into a further category, 'organised expertise': 'the development of highly professionalised staffs and acquisition of knowledge-related resources and sophisticated organisational competencies for handling information and processing it to produce successful solutions' (Dunleavy, King and Margetts, 1996; Dunleavy and Biggs, 1995:4). Overall, the use of information technology has occasioned a shift from organisational capacity as a resource to organised expertise: the introduction of a new army of technically skilled personnel into governmental organisations. In 1983 about 41 per cent of the federal data-processing budget was allocated to personnel resources (Grace Commission, 1983:ii). Ten years later the federal government employed 113,300 information technology staff, at a cost of $5.5 billion (OMB, 1994:15). In Britain the supply of information technology staff was recognised as a 'recurrent problem' (CSD, 1978:29) as early as 1978. In 1972 there were 4,640 skilled and trained information technology staff (excluding supporting operators such as data preparation staff) in the central government, rising to 7,018 by 1977 (CSD, 1978:50). By 1993 £500 million was being spent annually on information technology staff costs across central government departments, 22 per cent of information technology expenditure (Kable, 1994:16; 1995b:46). From the early days of government computing, agencies in both governments have also relied on external support and advice from private sector companies for the development of computer systems as well as for the procurement of hardware. This development is described in detail in Chapter 7, but we can note here that such a tendency increased dramatically during the 1980s.
By 1994 the US federal government was using private sector personnel (in a huge variety of contracting arrangements, ranging from the hiring of individual contractors to the contracting out of whole operations departments) and nearly 50 per cent of its information technology work was accounted for by commercial services (OMB, 1993b:15). The cost of information technology staff as a percentage of information technology expenditure had dropped from 41 per cent to 22 per cent over ten years. In Britain by 1995 central government was contracting out 30 per cent of its information technology expenditure, a rise of 7 per cent from 1993 to 1995, which explains why expenditure on information technology staff had fallen to £411 million in the same period (Kable, 1995b:43–6). Therefore, the drawing-in of organised expertise to central government through the use of information technology occurred in two stages in each government. First, during the 1970s both governments amassed large numbers of technically skilled staff to develop their own computing applications. Second, during the 1980s these staff were gradually replaced in part by private sector computer companies. Both stages have represented a change in the type of organised expertise required to carry out administrative operations,


drawing into the civilian bureaucracies similar actors to those who had long participated in the running of military agencies.

Detecting tools

Each of government's four resources may be used for detection as well as for effecting (Hood, 1983:87). Nodality may cause government to receive information in the same way as it may give government a reason to be listened to. People may give government information simply because of its social centrality and visibility; information of such a kind is free to government and Hood describes the detectors that pick it up as nodal receivers. Similarly, government's resource of treasure can be used to buy information, through the tool of 'rewards': information which government gets in return for any kind of tangible quid pro quo. Third, government can use legal authority and demand information. Fourth, government can use organisation to acquire information, in the form of physical or mechanical contrivances for scrutiny which largely by-pass human motivation; these are called ergonomic detectors. A key task at the core of government's 'detecting role' is the operation of a census, and the development of census technology illustrates the incremental way in which computers have infiltrated government's detecting capabilities. The first US census was carried out in 1790, and for nearly a hundred years census data were tabulated by clerks adding columns of figures. In 1880 a tabulating machine (a wooden box in which a roll of paper was threaded past an opening where a clerk marked the tallies in various columns and then added up the marks when the roll was full) made tabulating 'at least twice as fast as before' (Bureau of the Census, 1988:10). In 1890 a punchcard tabulating system was developed which counted holes in punchcards using an electric current that passed through each hole and tripped a counter: this system could count 250 items a minute, increasing to 2,000 by 1950.
In 1951 the first large-scale electronic computer in the US, UNIVAC I, was designed and built specifically for the Bureau, able to tabulate 4,000 items per minute. Input devices developed from writing to punch cards to a film optical sensing device developed in the 1950s, increasing the transfer of data to 70,000 items a minute and, in the 1980s, transmitting the data over long distances to the computers at Bureau headquarters. Thus the Bureau of the Census and the computer systems that it uses have grown up together. In Britain, too, information technology has increased the potential for nodal detecting tools. The first computer was used for the census in 1961 (although punch cards and mechanical sorting were introduced in 1911). The 1991 census used a mainframe with 50 gigabytes of disc storage and around 400 personal computers, while the 1981 census was based on the use of magnetic tapes. For information storage, as computer systems have replaced administrative operations, change has been dramatic. The Police National Computer, the Criminal Records


System, the Vehicle Licensing System, the Social Security System, MI5, and the Inland Revenue Computer System are the main systems that hold data about a significant subset of the population. Undoubtedly the government's ability to collect, store and process information about citizens has increased dramatically. In some cases government pays for information rather than receiving it by virtue of its nodal position. As well as reducing the cost of producing the same information, widespread use of information technology has increased the potential for collecting information and has therefore in general increased the potential cost. The distillation of available information can be as expensive as collection itself: 'The only cost is the screening process to separate the gold from the dross' (Hood, 1983:93), but such a process can have significant cost when it is technically feasible to collect so much information. Thus information technology has occasioned the transfer of detecting tools from nodal- to treasure-based receivers, through the use of commercial enterprises to market repackaged or enhanced government information. OTA (1988b:9) observed that such companies in particular benefited from access to electronic formats. Requisitioned information is information requested by government from citizens or the private sector under threat of sanction for non-compliance: that is, detection that relies on authority. Identity cards, vehicle registration and income tax returns all fall into this category of detecting tool. The most important technological development likely to increase the quantity of information requisitioned by the government has been that of electronic card technology. In 'on-line' applications, a card or related device is used to communicate with a database to record data; identify individuals; verify eligibility for privileges, services or benefits; post transactions; provide access; and reconcile data.
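At its simplest, the 'on-line' pattern amounts to this: the card merely identifies the holder, while eligibility checks and transaction posting happen against the host database in real time. The sketch below is a toy illustration of that pattern; the account structure and card numbers are invented, not taken from any real card system:

```python
# Minimal sketch of an 'on-line' card application: every use of the card
# is checked and recorded against a central database at the moment of use.
# All identifiers and fields here are invented for illustration.

accounts = {"CARD-001": {"eligible": True, "transactions": []}}

def use_card(card_id, amount):
    record = accounts.get(card_id)
    if record is None or not record["eligible"]:
        return False                       # identify the holder, verify eligibility
    record["transactions"].append(amount)  # post the transaction centrally
    return True

print(use_card("CARD-001", 25.0), use_card("CARD-999", 5.0))  # → True False
```

Because every use is mediated by the host database, the on-line pattern leaves a complete central record of each interaction, which is precisely what makes it attractive as a requisitioning tool.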
In off-line applications an integrated circuit card (ICC) or a smart card with a microprocessor performs similar functions by itself, without direct, immediate linkage to a host computer program or database. Periodically, transaction information recorded by the off-line device is transferred to the central computer database. Such applications have been popular in the US since the mid-1980s. By 1988 there were 51 entries in a Department of Treasury publication entitled Applications of Computer Card Technology; by 1989 there were 86 and by 1990 there were 184 (Department of Treasury, 1990:xx). Before and during the Iraqi War in 1990, card technology systems to dispense fuel, account for use, and track and monitor inventories were being established. Ergonomic detectors rely on the organisation resource: equipment or trained staff. Equipment that automatically records information became increasingly available; for example, the automation of immigration checks involves the possibility of collecting huge amounts of information automatically. Smart cards, if used, would allow governments to record automatically all government/citizen interactions in the same way that bank/customer transactions are recorded. However, in neither country have smart


cards been introduced for the delivery of benefits or the collection of taxes at the level of central government. Thus information technology has fuelled government’s capacity for detection using each of the four ‘tools’. But Hood suggests that with government detection it makes less sense to distinguish between particular and general applications. Instead he emphasises the distinction between ‘active’ and ‘passive’ modes of government information-gathering, where the difference between the two lies in the degree of initiative or mobility that government requires to obtain the information in question: ‘Thus when government observes us from a fixed watch tower, it is passive. When it knocks on our door or stops our car in the street to pursue its inquiries, government is active’ (Hood, 1983:89). The examples of detectors provided above are largely active. Passive detection is not something that governments have traditionally been very good at, especially in Britain where tightly vertical lines of authority prevent government agencies sharing information or developing profiles of citizens. However, an investigation of the relationship between detecting and effecting tools shows that technology brings a change to government’s passive detection capability, as a by-product of government’s use of information technology for ‘effecting’ tools. Organisations in both governments now routinely collect, store and maintain vast databases of information about citizens as part of their core administrative tasks, as computer systems have replaced manual operations. In 1986 a US Office of Technology Assessment report concluded that ‘the widespread use of computerized databases, electronic record searches and matchings, and computer networking was leading rapidly to the creation of a de facto national database containing personal information on most Americans’ (OTA, 1988b:15). 
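The 'electronic record searches and matchings' the OTA report refers to reduce, at their simplest, to joining separate agency files on a shared identifier such as the social security number. The sketch below illustrates the mechanism with invented identifiers and fields:

```python
# Illustrative computer matching: two separate agency databases joined on
# a shared identifier, yielding a combined profile that no single agency
# holds on its own. Identifiers and fields are invented for illustration.

tax_records = {"123-45-6789": {"income": 30000}}
benefit_records = {"123-45-6789": {"benefit": "disability"},
                   "987-65-4321": {"benefit": "pension"}}

# Intersect the key sets, then merge each matching pair of records.
matched = {
    ssn: {**tax_records[ssn], **benefit_records[ssn]}
    for ssn in tax_records.keys() & benefit_records.keys()
}
print(matched)  # → {'123-45-6789': {'income': 30000, 'benefit': 'disability'}}
```

The ease of this operation, once records are held electronically under a common identifier, is what makes a 'piecemeal' national database possible without any single agency ever building one.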
Use of the social security number as an electronic national identifier was facilitating the development of this database. A de facto national database was ‘actively being created, although in a piecemeal fashion’ (OTA, 1988b:15). Both the US and British central governments, therefore, possess de facto national databanks. These databanks increase possibilities for ‘passive’ detection and reduce the need for ‘active’ detection. Government’s capacity to gather information is a function of social relationships rather than technology and has not necessarily increased through technological advance; totalitarian governments of the past, for example, have conducted sophisticated surveillance operations without information technology. The East German police, for example, employed 500,000 secret informers, 10,000 of whom listened to and transcribed citizens’ phone calls (Wright, 1998:10). But it is certainly the case that information that is gathered in government’s de facto national databanks becomes available for further processing and can be scanned far more easily using computer technology. For example, databases can be scanned to identify citizens falling into a certain category, such as those reaching pensionable age, and another process can then be automatically activated. Other agencies—the intelligence agencies and the

Computerising the tools of government?

25

police, for example—can scan such databases as and when required, restricted only by technical expertise and privacy laws. One effect of computerisation was to enable the police to direct their resources at chosen sections of the population. Police could now target intelligence-gathering at selected communities living in high-crime areas, for example by identifying high crime rates and concentrating resources in these areas, causing the BSSRS report to describe the change as 'a policing policy which identifies the people living in particular working-class areas as criminals in waiting' (BSSRS, 1985:22). The BSSRS report also suggested that another effect of the computer was a reverse process of a normal criminal investigation, where the police start with a crime and look for names; the computer enabled the police to start with a name or a car registration number and look for crimes. Wright (1998:4) argues that such effects have greatly increased and consolidated since the 1980s; 'Police telematics and their use of databanks…facilitate prophylactic or pre-emptive policing.'

The policy criticality of information technology

Information systems are now central to the tools of government. The developments described above show how all the most basic tools of government are conducted through a myriad of information systems and databanks, with enormous variation in size, impact and purpose. By the 1980s both governments had recognised that computers were playing a vital role in administrative tasks.
In the US the Grace Commission (1983:1) observed ‘At present, there is no function of Federal Government—administrative, scientific, or military—that is not dependent on the smooth functioning of computer hardware and software.’ In Britain, the Civil Service Department (CSD) observed in 1978 ‘how greatly the main administrative operations of central Government, for example social security, defence, taxes, now depend upon computers; this dependence has increased over the last ten years and will increase further’ (CSD, 1978:1). The centrality of information technology to the tools of government establishes such technology’s potential to be critical to policy. But are the new ‘nervous systems’ of government more or less critical to policy than traditional administrative methods? At the most basic level, if many of the computer systems mentioned above ceased to function, then the tools of government policy could not be implemented. It could also be argued that disasters are more likely to occur in a computerised environment, with computers contributing to the ‘normal accident’ scenario identified by Perrow (1984) by increasing the possibility for administrative systems of high complexity. Certainly, information technology forces governments to undertake large, technologically based projects in the civilian sectors, where formerly they did not. As administrative functions are replaced by information systems, a significant part of government’s organisational capacity involves a specialist task that policy-makers know little about.

Evidence suggests that government organisations have some way to go before they overcome the difficulties involved. In the US, when Gore and Clinton announced their ambitious programme of administrative reform in 1993, the National Performance Review (noted in the introduction), information technology was given a central role as a facilitator of innovation. But one central government player with a long experience of studying federal government information systems, the General Accounting Office (GAO), viewed information technology's rise to centre stage with concern, having become increasingly aware that expenditure on information technology and the way in which it was being managed was presenting as many problems as solutions. GAO, largely enthusiastic about the findings of the National Performance Review report and disagreeing with only one of its 228 recommendations, expressed strong doubts as to the ability of the federal government to undertake the changes based on information technology, given its record on managing existing projects:

One overriding concern with NPR's recommendations is that they are highly dependent on the effective deployment of information technology and information systems. Many agencies simply do not have the capacity in terms of management skills, management continuity, and technical ability to develop the fundamental underpinnings needed to implement the recommendations.
(GAO, Netresults, npr-gao 2025, 6 January 1994, para. 1)

Furthermore, GAO provided the following gloomy summary of the past 20 years of federal computing:

Despite heavy investments in computer technology, executive agencies still lack essential information for managing their programs and resources effectively, controlling expenditures, and achieving measurable results. Moreover, many agencies are not using information technology strategically to simplify and streamline their organization, management, and business processes to improve service to the public and reduce costs. As a result, projects have consistently run into serious trouble—they are developed late, fail to work as planned, and cost much more than expected. The results, in missed benefits and misspent money, can be found throughout the government. Dramatic benefits in cost savings, productivity, and service rarely materialize. Rather, some improvements are gained at the margins—but often at a high cost.
(GAO, Netresults, 1994)

Equally cold water was thrown at the computing capability of the British government in the 1990s when the Public Accounts Committee (PAC) took an unusually proactive role with a 'special interest in failed computer projects' (Independent, 3 February 1994) after a series of high-profile disasters. In 1994
the PAC published a report entitled The Proper Conduct of Public Business (PAC, 1994), setting out 'a number of failings on which we have reported in key areas of financial control, compliance with rules, the stewardship of public money and assets, and generally getting value for the taxpayers' money' (PAC, 1994:v). The report referred to nine bodies, six of which had major problems with computer systems. Specific incidents fell into three categories: systems breakdown (the Foreign Office and the Property Services Agency); mismatch between computerised records and manual records (the Insolvency Service and the Department of Social Security); and massive overruns on failed computer projects (the Department of Employment and the Wessex Regional Health Authority). Under the heading 'Inadequate Financial Controls', five of the eight cases had experienced serious problems with computerised financial systems.

In addition to the inherent risk of implementing computer systems, information technology has a tendency to 'scale up' problems. Centralised computer systems tend to have a similarly centralising effect on information. At a World Congress on the Internet in Medicine, one specialist advised that 'more and more public health information is being sucked from clinical systems towards the centre. The effect is that we have a terrible aggregation of sensitive data to which more and more staff have access' (Guardian, 6 November 1997). Most importantly, any risk associated with information technology can be multiplied throughout government administration, because any governmental organisation that used to operate a bureaucracy now uses computer systems. A notable example is the 'Millennium-bug muddle' (The Economist, 4 October 1997).
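The failure mode behind the 'Millennium bug' is simple enough to show directly: systems that stored years as two digits produced date arithmetic that goes wrong once the century boundary is crossed. A schematic illustration, not code from any actual government system:

```python
def years_elapsed(start_yy, end_yy):
    # Two-digit year arithmetic, as used by storage-saving legacy systems.
    return end_yy - start_yy

# A record opened in 1965, evaluated in 1999: correct.
assert years_elapsed(65, 99) == 34

# The same calculation in 2000: the year is stored as 00, which the
# program in effect reads as 1900, so the result is -65 rather than 35,
# and any downstream logic built on it breaks.
assert years_elapsed(65, 0) == -65
```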
Because early systems designers wished to save storage space, most dates until the 1980s were stored in computer programs with only two digits for the year; therefore, unless altered, most computer systems will crash or malfunction in the year 2000, which they will mistake for the year 1900. Fear of chaos meant that an alert was raised across public and private sectors, and most organisations were instigating code-reworking initiatives from the mid-1990s, which may eventually mean that chaos is averted. The cost of the government's millennium problem was estimated at 'around the order of magnitude of £1 billion' by the head of CCTA in 1997 (Computing, 25 September 1997). The UK police were so concerned about the problem that in 1997 they were planning to switch off all their information systems on 31 December 1999 (Computing, 11 September 1997). The systematic nature of the potential disaster is not something that seems possible in a purely bureaucratic system; it is difficult to imagine a government-wide threat of this kind. While the ability to implement policy can be affected in any agency that uses computer systems (as the examples given earlier of accounting systems failures in the UK Foreign and Commonwealth Office and the US Pension Benefit Guarantee Corporation demonstrated), in some agencies technological innovation has made possible policies that could not otherwise have been implemented, for example the privatisation of electricity. Unless electronic
prisoner tagging in the UK achieves a higher level of success, some aspects of the Criminal Justice Act cannot be introduced. While computer systems in the 1970s and early 1980s largely computerised administrative tasks, replicating existing manual systems, developments of the 1990s and beyond seem likely to have greater policy criticality, as new innovations allow previously impossible policies to be implemented. As well as opening up windows for policy innovation, information technology also introduces new constraints. In 1997, the Chief Executive of the CCTA (the government centre for information systems), Bob Assirati, warned that if the UK decided to join the European Monetary Union (EMU) with a single currency in the first wave, central government's information technology resources would be stretched to 'breaking point' and the government 'would have very little time to get financial systems sorted out'. When asked if there would be enough time, he said: 'It's difficult to say. I would be very concerned' (Computing, 25 September 1997). A Labour MP (Andrew Miller) suggested that the difficulties of handling the changes necessary for the 'Millennium bug' problem at the same time as the money change partly prompted the decision not to join the first wave of European Monetary Union in 1999: 'This was an issue which swayed the Chancellor' (Computing, 30 October 1997). Chancellor Gordon Brown's statement on entry into EMU blamed the previous Tory government for failing to undertake essential preparatory work.

Conclusions

Information systems are now central to the tools of government. The developments described in this chapter show how all the most basic tools of government rely heavily on information systems. No uniform 'transformation' may be identified, however. Differences may be observed in the impact of information technology on each of the four 'tools' of government policy.
The transformation of government’s nodal resource is clearer in interactions between government and commercial organisations than in interactions between government and citizens. While computerised links between members of the public and government are now in place in both countries, when the line of communication disappears into governmental organisations, it is often brought to an abrupt halt, and the interaction takes very much the form it had before computerisation took place. Information technology runs to the heart of government’s processing of ‘treasure’. All moneys processed within large governmental organisations have, since the 1960s, been processed on computer systems, introducing new risks and new dangers into financial management. But information technology is not fuelling a trend towards ‘cheque-book’ government. Where policy innovations become possible through data matching and data integration, they are most likely to be used for increased economy in the
provision of welfare benefits, through more sophisticated ‘fitting’ of provision to recipients’ needs. And there is a growing bifurcation between what it is possible to achieve in transforming the way treasure is collected from and dispensed to the public, and the reality of government information systems in this area. While the bifurcation is now recognised in both countries, moves to address it will be indelibly shaped by the history of information technology in the two governments, as systems already in place determine what is possible in the future. Government’s ability to exert authority over the outside world has increased through computerisation. Authority-wielding organisations in both governments appear to have been innovative in the use of information systems to take on new authority roles. In this sense the extent to which information technological development has facilitated the transformation of the authority resource is evidenced more clearly than for the nodal resource. Some of the larger ‘authority-wielding’ agencies have been among the most innovative users of information technology. In contrast to the other resources, tasks based on authority tend to be unique to government, so the type of systems used is more likely to be at the forefront of technological development. In contrast, any organisation processes treasure or information and so innovations are more likely to be developed in these areas for the private sector. Information technology has the potential to facilitate an increase in political control, through innovation in policing strategies, for example. A ‘worst-case’ scenario is presented by, for example, Wright (1998), through the creation of a ‘massive machinery of supervision that can be retargeted fairly quickly should the political context change’ (Wright, 1998:9). However, various factors may prevent this potential being realised. New tasks can be accompanied by new problems. 
The relationship between computerisation and military capacity illustrates the most extreme case of problems introduced through computerisation. Research and development funding, on a scale undreamed of within civilian agencies, continues to be pumped into the realisation of ever wilder military fantasies, yet there are signs that the reality of military computing is untrustworthy, outdated and headed towards spiralling heights of complexity. Potential moves towards a ‘control state’ are more likely to be facilitated through the creation of de facto national databanks, an almost accidental by-product of the relationship between effecting and detecting tools, than through deliberate efforts of authority-wielding agencies. For the more general organisational capacity of government, information systems have played a key role in replacing tranches of government bureaucracy and introducing new armies of technical specialists into government. Traditionally government has marshalled its organisational resources through the operation of large-scale bureaucracies. Bureaucracy has always been seen as something that government ‘knew about’ and was ‘good at’. Now information technology is used by government to marshal the other resources; as bureaucracy used to do, but information technology
is not something that government has a reputation for ‘knowing about’ or ‘being good at’. The evident difficulty in the design, development and maintenance of information systems engenders a transfer from organisational capacity to organised expertise. All government bureaucracies must now maintain a division dedicated to the development of new technology-based projects, with new risks and new dangers.

2 Innovation, expenditure and control
Governmental responses to information technology

The wave of information systems rolling across all aspects of government administration has brought change in its wake. This chapter looks at governmental responses to the change, by examining four consequences of using information systems to carry out governmental tasks. First, it examines the pressure that information technology places on governmental agencies to innovate. The pressure to innovate bears a cost: the cost of carrying out previously impossible tasks, or existent tasks to a greater capacity. The second section examines trends in government expenditure on information technology. The third section examines the central government agencies that have been used to regulate and control the use of information technology by departments and agencies. Finally, information technology has been promoted by enthusiastic modernists as an integrative tool which facilitates inter-agency working; the remainder of the chapter examines efforts by both governments to develop a technological infrastructure, through the coordination of separately developed computer applications.

Innovation in information technology in government

The spread of computers throughout government places great pressure on government bureaucracies to innovate, fuelled by the enthusiasm of politicians for the benefits of the 'information age'. Governmental organisations have never been seen as natural innovators. Indeed, economic analysts of bureaucracy have argued that bureaucrats have little incentive to innovate (or to remove barriers to technical progress) because they cannot appropriate profits personally from cost-saving, labour-reducing innovations (see, for example, Peacock, 1979:112–13). Such arguments have been used to explain perceived differences in productivity and labour-intensiveness between business and governmental organisations.
In fact, perhaps surprisingly, in the early days of computer technology government agencies in both America and Britain were innovators in information technology development. During the 1950s and 1960s many governmental organisations were using information technology to the maximum capabilities known in either private or public sector. To quote one
source: ‘In the early days of computing, the Federal Government as a user was a principal stimulus to the development of the field’ (OTA, 1981:20). And again: ‘In the earliest days of data-processing the federal government was a leader and an innovator in computer usage’ (Head, 1982:5). Many current developments in information technology use in general can be traced back to governmental research from this period. Computer technology itself was incubated through US government support of Second World War research efforts which produced prototypical stored program computers. The first successful large-scale data-processing installation anywhere was developed in the early 1950s at the US Census Bureau and the initial impetus towards programming languages for business applications came from the US Department of Defense support of the COBOL programming language in the 1960s (Head, 1982:5). The origin of virtual reality technologies can be traced back to research on interactive computing and head-mounted displays for Air Force pilots during the 1960s by Ivan Sutherland, partly funded by the US Department of Defense. The Internet began life as a US defence against a perceived military threat from the Cold War: ‘It was believed that a non-centralised network as opposed to a single computer centre would be much harder to eliminate’ (Sabbagh, 1994:21). Until the 1960s the US armed forces were the single most important influence on the development of digital computers. The Air Force funded biocybernetic research into artificial intelligence throughout the 1970s, including work in its own laboratories that ‘puts computer chips into dog brains in experiments aimed at giving pilots an extra sensing organ’ (Gray, 1989:53). Similarly, in Britain some public organisations were innovative in their early use of computers. 
The former Ministry of Public Building and Works had even in 1958 adopted ‘a very ambitious proposal for a computer-based system embracing all the routine work of the department, involving linking payroll, bill payment, stock control and vote and repayment accounting to facilitate the management of their resources’ (Lamb, 1973:125). In 1971 the Civil Service Department reported recent research which suggested that the Post Office was in ‘a unique and favourable position in data transmission which would enhance [the UK’s] use of computing equipment and set an international lead in this important and prestige activity’ (CSD, 1971:23). The Employment Services Agency set up a vacancy-matching scheme, CAPITAL, in the mid-1970s that was ‘highly advanced in concept and design’ and ‘in some respects…ahead of any other similar system’ (CSD, 1978:8). In some cases governmental organisations were forced to be innovative. As size and large-scale administrative operations were then the characterising features of government agencies in both countries, the technology of the time suited them well, but the size of operations rendered unique the applications that government departments needed: ‘The DHSS had, in the early days, to design and implement its own computer operating system as one was not available from the manufacturer’ (CSD, 1978:8). The taxation and social security agencies of the two governments deal with a significant
subset of the population, unlike most private sector companies. This characteristic forced them, along with some other departments, to innovate. By the 1970s, however, departments across both governments began to lose their role as innovators in information technology development. In 1978 President Carter commissioned a study of the entire federal government, the President's Reorganization Project, which paid particular attention to information technology and concluded that 'The federal government is, in general, mismanaging its information technology resources' (PRP, 1979:2). This condition manifested itself in problems such as public complaints about delays and inaccuracies at many service delivery points; an inability to protect the rights and privacy of individuals from intrusive practices of government agencies; the growing obsolescence of equipment, systems and personnel; increasing economic threats accelerated by the availability of technical information and by products flowing in a free and uncontrolled manner from the United States into competitor nations; and 'a military enterprise which is operationally vulnerable as a consequence of obsolescent equipment and undeveloped technical personnel' (PRP, 1979:2). In 1981 the Office of Technology Assessment also recorded a lag between public and private sector use of technology:

Although a few instances of Federal expertise at the leading edge of computer applications remain, the Federal Government is rapidly falling behind the private sector in its use and management of up-to-date computer and information technology.
(OTA, 1981:21)

In 1983 the Grace Commission, Reagan's survey of federal government activities, observed that with 'over 17,000 computers and a workforce of more than 250,000, Federal systems operations dwarf those of even the largest private sector users' (Grace Commission, 1983:I), but many of these huge systems dated from the 1960s.
There had been some gains in productivity and rapid increases in expenditure for automatic data processing (ADP), but major systems upgrading during the 1970s had encountered problems such as ‘large and undocumented software programs’, ‘bureaucratic inertia’ and a focus on building the new ‘ultimate system’ (Grace Commission, 1983:39). Awareness of the sound economic and technological reasons for replacing ADP systems had ‘not led to the successful implementation of new or replacement systems except in a few instances’ (Grace Commission, 1983:39). By the 1990s even these ‘few instances’ were showing signs of decay. The US State Department, which almost alone among federal departments was praised by the Grace Commission for its ‘attentiveness to ADP concerns’ (Grace Commission, 1983:40), was by 1993 running equipment observed by the OMB to be so old-fashioned that ‘worldwide systems could suffer from significant downtime and even failure’, so obsolete and incompatible that ‘employees
often have to re-enter data several times…. These problems jeopardize our ability to meet our foreign policy objectives’ (NPR, 1993:121). In Britain, both central agencies and politicians have always been more circumspect in criticism of administrative operations than in the US, and the CSD tended to be protective of government computing; but similar evidence of a switch from government’s former role as innovator may be gleaned from the documentation available. In 1971 the Civil Service Department admitted that ‘there is some validity in the criticism that departments have slipped from their earlier position as national leaders in computer development’ (CSD, 1971:10). In 1988, looking back on the recent history of government as a user of computing technology, the Commons Trade and Industry Committee reported consultants’ evidence that ‘some of the Government implementations are without doubt the best practice anywhere in the world… but most are not’ (Trade and Industry Committee, 1988:xxxii). Another noted that ‘by any standard, Government departments are still very cautious and unimaginative users of IT’ (Trade and Industry Committee, 1988: xxxiii). From the evidence presented, the Committee concluded that ‘the government is not leading the way in the use of IT’. Thus the 1980s brought the reversal of a previous trend: the gap was widening between what government could potentially do with computers and what it was actually doing. Both governments had lost an early role as innovators, except in a few discrete instances (outlined in Chapter 1), and developments in the public sector were in general lagging behind those of the private sector. The widening gap between public and private use of information technology caused political concern in both countries. The fastmoving pace of technological innovation in general ensures that new policy options are constantly opening up. 
Combined with the enthusiasm of modernist politicians, this feature of information technology ensures that government remains under constant pressure to innovate. As the Chairman of the Information Technology Association of America’s Federal Systems Integration Task Force put it in 1996: ‘Technology is looked upon as a silver bullet. Congress is looking to technology for more efficient and effective government’ (Government Computer News, 1 April 1996). By the 1990s it was noticeable that if there were a computer solution available to solve a policy implementation problem, departments were criticised for not using it. In 1994 the Home Office launched a £270,000 computerised data system designed to help reunite families through the identification of missing persons, which would record ‘vulnerable’ missing people who had not been accounted for after 28 days. Before the new bureau went on-line, families had to rely on the ad hoc services of individual police forces and on poorly funded charities. The new database was criticised for failing to link up with the new Police National Computer until 1997 at the earliest (Guardian, 19 March 1994). In 1994 the Central Veterinary Laboratory, part of the Ministry of Agriculture, Fisheries and Food (MAFF), secretly abandoned a computer system designed to integrate results of tests on animal
blood and organs, part of an initiative aimed at improving the Laboratory’s efficiency at monitoring the spread of diseases including Bovine Spongiform Encephalopathy (BSE) (Independent, 3 February 1994). In 1998 the clear benefits of such a system became apparent when European bans on the sale of beef from the UK were lifted with respect to Northern Ireland, because the province had a computerised method of tracking BSE cases. MAFF was publicly criticised for the lack of such a system in Britain. Another pressure to innovate is the growth in counter-initiatives developed outside government in response to government innovations in the use of authority. Thus motorists now use radar scanners to watch for police radar technology. Hackers are able to change their identity to avoid ‘cross-matching’ by computer technology. Such ‘counter-innovations’ have a tendency to multiply. If electronic money (‘e-cash’) were to become widely used, the Internal Revenue Service could find itself in a spiral of technical innovation to counteract new sophisticated forms of tax evasion. Already the sale of goods through the Internet has posed serious problems for US states wishing to impose sales tax; as one newsreader put it, ‘trying to tax Internet purchases is like trying to bottle smoke’ (Radio Four, Today programme, 12 January 1998). And as US agencies stepped up their use of the World Wide Web to disseminate government information, agencies were ‘finding a new nemesis in cyberspace: the copycat Web site’ (Government Computer News, 18 November 1996). 
For instance, any of the 600,000 monthly visitors to the Web site of the National Park Service Park Watch for information on US national parks in 1996 might have been surprised to find that the National Park Service had been taken over by radical environmentalists bent on destroying the parks and turning them over to the United Nations—from a fake but official-looking site with a similar address, developed by the Property Rights Alliance of Washington. As one Internet consultant pointed out, problems of shadow sites were exacerbated by most people's 'tendency to believe anything we see in print as accurate'. The use of information technology in the public domain to commit new types of crime puts further pressure on governmental agencies wielding authority to innovate. In the UK police are now looking at the possibility of new legislation that would allow them to monitor and intercept Internet communications, after the discovery that 'terrorists found the Internet an obvious choice for their activities' (Guardian, 9 April 1997) and that the IRA had developed a sophisticated computerised intelligence bank using databases in Ireland, the US and France, used to identify potential murder targets (Sunday Times, 9 March 1997). A detective from the National Criminal Intelligence Service observed that 'investigations would require a rapid exchange of information and seizure of evidence across national boundaries…. in many cases the Internet was being used for new forms of old crimes.' New crimes also become available; for example, 'hacking' into high-security computer systems. In February 1998, the Pentagon received 'the most organised and systematic attack [it]…has seen to date' (Guardian,

36 Information technology in government

26 February 1998), during which all the services based in the Pentagon were penetrated to some degree. In response to increasing numbers of such assaults, in October 1997 a US commission on security published a five-year plan to increase the security of government systems in defence, power, banking and telecommunications, with a planned expenditure of $1 billion a year on computer security by 2004 (Guardian, 26 February 1998). Some commentators have suggested that dealing with computer security is a role that the UK and US military security agencies, searching for a post-Cold War role, could actually relish. At a Cambridge symposium on economic crime in 1997, Dr R. Anderson observed that 'There's a sense in which information warfare is a big security marketing exercise' and suggested that government agencies might actually be stoking up press-induced fears of cyber-crime (Computing, 25 September 1997). Thus perceived threats can have some of the same counter-innovation-inducing effects as actual ones. Perceived or actual, the ease with which electronic identities can be assumed and the possibilities for skilled hackers have led to a burgeoning industry in encryption techniques. This technology allows anyone trading on the Internet to carry an electronic 'digital signature' created using encryption technology, which turns text into an apparently meaningless sequence of characters, rendering it useless to unauthorised users. The intended recipient of the protected information uses a key to turn these characters back into text. Some 'Trusted Third Party' (TTP) organisation then holds a list of registered keys. The computer industry and the government tend to differ as to whether the Trusted Third Parties (TTPs) should be licensed and regulated by the state, or whether the industry should be self-regulating. In the UK, the DTI rushed to produce a blueprint for TTPs in June 1996 before any unregulated TTPs were launched. But issues of legal liability remain unresolved.
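The signing-and-verification cycle described above can be sketched in outline. The sketch below is a toy illustration only: it uses Python's standard-library HMAC (a shared-secret scheme) rather than the asymmetric public-key cryptography that real digital signatures employ, and key distribution via a TTP is simply assumed; all names are illustrative.

```python
import hashlib
import hmac
import secrets

# A key known to the sender and (via some registration scheme such as a
# Trusted Third Party) to the recipient. Real TTP schemes register public
# keys; this sketch assumes a shared secret instead.
key = secrets.token_bytes(32)

def sign(message: bytes, key: bytes) -> str:
    """Produce an 'apparently meaningless sequence of characters' bound to the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    """The intended recipient uses the key to check the signature matches the text."""
    return hmac.compare_digest(sign(message, key), signature)

msg = b"Transfer approved"
sig = sign(msg, key)
print(verify(msg, sig, key))                  # the genuine message verifies
print(verify(b"Transfer denied", sig, key))   # a tampered message fails
```

The point the passage makes survives the simplification: without the registered key, the string of characters is useless, which is precisely why governments worry about who holds the keys.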
And again, counter-innovation occurs as many governments, including those in the US and UK, fear that terrorists, criminals and tax dodgers will use encryption to hide their operations. They therefore place export restrictions on encryption, which effectively means that many applications cannot be used across borders, causing real problems for larger bona fide users. As an ICL representative put it: 'The real debate is whether government can still control technology. As networks become more diffuse and sophisticated, picking up a message while it is being transmitted will become more and more difficult. There are going to be lots of ways of hiding from governments' (Computing, 29 August 1996).

Counting the cost of information technology

The new pressures for innovation that technological development brings to government are expensive. Expenditure on computing resources in both

Innovation, expenditure and control


governments rose steadily during the 1980s. By the 1990s information technology formed a significant proportion of most departments' operating budgets.

Expenditure on information technology in the US federal government

In the US expenditure on information technology (including staff costs, consultancy, hardware and software) rose from 3.4 per cent of the federal operating budget in 1982 (a total cost of $9.1 billion) to 5.7 per cent in 1993 (a cost of $16.9 billion in 1982 dollars) (calculated from OMB, 1992b:I–3; 1993c:I–4). By 1996 there was no line item in the budget for government-wide spending on information technology, so figures are difficult to compare, but industry analysts predicted the federal IT budget for 1996 would drop for the first time in memory, from around $27.3 billion in 1995 to $25.6 billion. These figures disguise a wide variation across departments. During the 1980s the Army, Navy and Air Force remained at the top of the expenditure tables (see Table 2.1). After 1975 the military agencies had continued to carry out research and development in information technology, and even by 1989, 70 per cent of all university research in computer science in the US and nearly 80 per cent of direct government funding for information technology research and development was funded by the Department of Defense (Kennedy, 1989:111). During the 1980s, with SCI and SDI, the Pentagon began a deliberate attempt to regain control over certain sections of leading-edge computer research in America and to guide them in particular directions. The consistently high expenditure of the Air Force, Navy and Army throughout the 1980s reflects the high cost of their intensive experimentation with computer technology, even though the costs of computer systems embedded in weapons technology were excluded from the figures.
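As a rough check, the cited shares and totals imply the following back-of-envelope figures; the implied operating budgets are derived here for illustration and are not taken from the source.

```python
# Back-of-envelope check of the cited US federal IT spending figures.
# The shares and dollar totals come from the text; the implied operating
# budgets are derived, not sourced.

it_1982 = 9.1    # $bn: 3.4 per cent of the federal operating budget in 1982
it_1993 = 16.9   # $bn in 1982 dollars: 5.7 per cent of the budget in 1993

implied_budget_1982 = it_1982 / 0.034   # roughly $268bn
implied_budget_1993 = it_1993 / 0.057   # roughly $296bn in 1982 dollars

# The predicted first-ever drop in the federal IT budget, 1995 to 1996
drop = (27.3 - 25.6) / 27.3             # roughly 6.2 per cent

print(round(implied_budget_1982), round(implied_budget_1993), round(drop * 100, 1))
# prints: 268 296 6.2
```

The derived totals show that IT spending grew both absolutely and as a share of a budget that itself grew only modestly in real terms over the period.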
As defence spending on information technology fell in line with defence cuts at the beginning of the 1990s, the US Department of Transport started to challenge the Army, Navy and Air Force as the highest-level user of information technology. As Table 2.2 shows, by 1993 the Department of Transport was the highest-spending user for the first time (if defence agencies are broken down to component forces) at $2.3 billion for the year, with a growth rate of 64 per cent. The most expensive (and most complex) examples of transport computer systems were in the Federal Aviation Administration, for example the leased Telecommunications-Operational Enroute Communications system linking air route traffic control centres and remote communications air/ground facilities, budgeted at an average of $147.5 million per year over the five years 1992–6.

Table 2.1 Federal information technology obligations by agency, 1982–90

Source: Data provided on disk by OMB March 1992. Notes: Calculated at 1994 prices using composite deflator 1.55240. Table is sorted by total obligations in 1990. Ranks indicate absolute spending in designated year.

Information technology expenditure in British central government

In Britain the increase of computer expenditure during the 1970s was relatively modest. The number of major administrative computer installations in government rose from 103 in 1970 to 140 in 1978, excluding processor-controlled keying equipment. Between 1972 and 1978 annual expenditure on the hire and purchase of computers had risen from £24 million to £38 million and the annual maintenance bill for the equipment from over £4.5 million to over £12 million. With these increases came a rise in the number of systems and programming staff from 3,000 to 4,500, with comparable figures for operations staff rising from 1,150 to 1,800 (CSD, 1978:5–6; 48–50). During the 1980s expenditure growth was much more rapid. By 1986 the annual expenditure of the UK government on computers and software services had reached £1.41 billion and increased to £1.65 billion the following year (The Times, 18 August 1987). By the 1990s the rise in


Table 2.2 Information technology obligations as a percentage of discretionary budget in the US federal government, 1993

Source: Calculated from OMB (1993c:I–16) and OMB (1994:238). The figure for operating budget refers to the actual discretionary budget (outlays) for 1993.

expenditure had been even more dramatic than expected, with estimated total expenditure stabilising at around £2.3 billion per year. Wide variations across departments were visible from as early as 1969, when the Department of Health and Social Security and the Navy, Army and Air Force employed almost half the 2,000 specialised computer staff employed across the government (CSD, 1971:5). Table 2.3 shows information technology expenditure for UK central departments in the fiscal years 1993 and 1995. Again the defence agencies are at the top of the table. In Britain, however, by the 1990s the Department of Transport's main function was to encourage private investment and the Department conducted few of the massive nationwide, high-technology systems carried out by the US transport agencies. Transport's expenditure on information technology was dropping faster than that of any other UK department, declining on average 15 per cent a year between 1990 and 1994 (Kable, 1994:83). The more innovative users of technology also tend to be among the highest spenders on information technology. The Home Office spends £73 million per year, reflecting some of the authority-based innovations described in the previous chapter, as does the Department of Trade and Industry, having acted as a catalyst in the early years of electronic data interchange (EDI) by launching the VANGUARD initiative, an awareness campaign for networking in general and EDI in particular. The wide range of the proportion of running costs spent on information technology across central government departments in both countries (evident in the tables above) supports Strassman's (1990:136) assertion that public sector

Table 2.3 Information technology expenditure as a percentage of running costs in the UK central government departments, 1993 and 1995

Source: Kable (1994:27; 1995b:8, 12), the most comprehensive summary of available data. Notes: MoD running costs are estimated from data in Kable (1994), as the Supply Estimates 1995–6 (from which the running-costs figures in the table are taken) provide only a figure for operating costs of £18.8 billion.

agencies tend to exhibit a wider variation than private sector firms. Strassman found that, in general, staff in administrative functions received higher levels of support from information technology than operating personnel who deliver products or services, which he suggested might explain the wide range of per capita spending on information technology in the personnel and computer budgets for 524 state government agencies (Strassman, 1990:157; Caudle and Marchand, 1990). Differences across departments seem likely to increase. The consistency of rankings of departments across expenditure tables suggests that agencies with high expenditure on information technology will continue in this vein and that differences may even be reinforced over time. Departments with inadequate expertise and weak technological infrastructures may be unable to take advantage of future developments. For example, the Office of Management and Budget observed of the US Department of State:


One of the major issues facing the Department is the restoration of its IT infrastructure. Obsolete technology throughout the Department's current IT infrastructure inhibits its ability to exploit the availability of the more powerful computing systems and access to electronic mail networks. (OMB, 1993c:IV–89) In the 1990s expenditure is a greater constraint on further development than perhaps at any other time in the previous 20 years. Innovation is expensive, as the expenditure figures for the more innovative agencies demonstrate. As new technological innovations become available, the potential for expenditure becomes even greater. In the US the Department of Treasury observed in 1990 of computer card technology: In order to implement a card system that will work and prove cost effective, it might prove necessary first to overhaul and upgrade a jurisdiction's existing computer hardware, management and accounting systems, databases or staff skills. The infrastructure for a card technology system must be in place. Providing the necessary underpinning for a card system in some instances might become so complex and costly that establishment of the system prematurely would not be advisable. More than one Federal or State agency has arrived at this sober conclusion. (Department of Treasury, 1990:xxiv) As the Minister for the Office of Public Service in Britain put it in 1995: 'The main determinant of progress is price. If we cannot bring down the price of communicating electronically, progress will falter' (Robert Hughes in The Times, 16 February 1995). While new technology can be viewed as the 'handmaiden of open government' (ibid.), it also tends to ensure that open government is more expensive government. The UK Benefits Agency was developing touch-sensitive screens in 1995 which would speak in minority languages about entitlements, but such screens could not be placed in Citizens Advice Bureaux or local libraries because no agency would pay for the installation.
A Business Link plan to create a network of 200 one-stop shops for businesses seeking information, advice and government-supported consultancy, supposed to become financially independent within three years, proved unsustainable without long-term government funding (Sunday Telegraph, 26 November 1995).

Central control of information technology decision-making

Like the people whom the computers have to some extent replaced, information technology has become the subject of internal regulation. Central governments in both the US and Britain have traditionally perceived a need


for a network of agencies, bureaux, divisions and sub-divisions to control and standardise the hiring, firing and paying of government staff, perhaps the most obvious point of comparison. As might be expected, especially given the high cost and necessity of technological innovation noted above, analogous mechanisms have been set up to control and steer information technology decisions. But in common with all central agency functions, the extent to which information technology decision-making is centralised has waxed and waned over time. Thus both governments employed centralising techniques until the 1980s, followed by a period of decentralisation.

Control of information technology in the US

In the US, no specific agency was created to oversee, co-ordinate or manage the departmental use of information technology. There has never been an equivalent of the Office of Personnel Management for Information Technology, which is perhaps the nearest point of comparison. Instead, regulatory responses to the spread of information technology throughout the federal bureaucracy were slotted into existing agencies, meaning that a proliferation of agencies has been involved in government information technology. Thus the National Institute of Standards and Technology (NIST) was set up in 1966 within the Department of Commerce to devise standards for hardware and software across the government. A division of OMB, the Office of Information and Regulatory Affairs (OIRA), was established in 1980 and given various statutory responsibilities over the management of information and budgeting for information technology projects. A division of the General Services Administration (GSA), the Information Resource Management Service (IRMS), was given regulatory functions over agencies' purchase of information technology goods and services.
The General Accounting Office (GAO) and the Office of Technology Assessment (OTA) are periodically requested by Congress to make detailed studies of agencies’ information technology efforts. During the 1970s, legislative efforts at controlling information technology usage reached a peak due to the activities of the House Government Operations Committee, chaired ‘aggressively’ (as one official put it) by Jack Brooks, who took a special interest in information technology and had earlier given his name to the Brooks Act. However, this involvement relied on the Chair’s interest and after his departure in the 1980s the Committee had little involvement in the issue. From the legislative side, powerful appropriations committees are the most likely to intervene in agencies’ information technology plans and projects. This diffusion of responsibilities has meant that although successive Presidents have instigated government-wide information technology initiatives, there has been no central institutional driver to carry them out. In 1963, President Kennedy initiated a study of the management of automatic data-processing across the federal government and the Brooks Act was passed in response to Congressional concern over the management of federal information resources,


restricting the capability of an agency to carry out single-source procurement for large systems. In 1976 an inter-agency committee on automatic data-processing produced 42 recommendations on data-processing, but at a review meeting in 1978 only two had been implemented entirely and there had been no action taken on 36 of them (Head, 1982:7). In 1978 President Carter instigated the President's Reorganization Project, which observed little improvement in management of information technology decisions since 1959 (PRP, 1978a:a–48). Carter's project made several recommendations, none of which was implemented. In 1980 the Paperwork Reduction Act included a unique attempt at an 'informatization policy' for the federal government, creating an Office of Information Resource Management (OIRM), buried in the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (OMB), which was supposed to implement the policy by controlling agencies' information technology expenditure on behalf of OMB. But OIRM was provided with few resources (11 staff) and questionable authority (in fact OIRA had no Congressional authorisation during the 1980s) to carry out its task. The act also mandated every government department to create an information resource management (IRM) official, but although departments complied, these officials had no authority and no financial controlling powers. In 1983 Reagan's Grace Commission also turned attention to information technology, criticising previous reform efforts including the work of OIRM and identifying federal ADP leadership as a crucial weakness in ADP 'revitalisation'. But few of the Grace Commission recommendations concerning information technology were implemented, and by the 1990s OMB's responsibility for information technology policy was still not clearly defined. In the 1990s attention to the regulation of information technology activity stepped up a gear under the National Performance Review (NPR).
One of the several inter- and intra-agency teams of federal personnel made a study of federal information technology and the General Accounting Office was able to produce a ten-page document of 'IRM-related' action items within the Review (GAO, 1993e). In 1996 the Brooks Act was pronounced 'dead' (Government Computer News, 5 February 1996). The General Services Administration's oversight role in systems buying, put in place by the Brooks Act, was ended and OMB became the chief overseer of information technology procurements, budgets and policies. The replacement legislation required the appointment of Chief Information Officers (CIOs) in all Cabinet agencies. The White House formed a team of mid-level managers to resuscitate troubled information technology projects, which was to give hands-on help with building systems and integrating technologies, rather than criticising agency efforts. Thus, once again, in the US information technology is at the forefront of presidential attention. But US administrative reform has suffered from a tendency to be high on recommendation and low on implementation and it remains to be seen whether the National Performance Review represents a decisive break with the past.


Control of information technology in Britain

In the UK until 1984, the administrative body with responsibility for information technology was the Central Computer Agency (CCA), created in 1972 with a brief to centralise and focus the computing skills and experience of central government departments. The name was changed to the CCTA when telecommunications was added in 1978. The CCTA at the time actually 'owned' around 80 per cent of administrative computers in government; no department could acquire or use computers without CCTA approval. During the early 1980s, the agency was involved in overseeing all large information technology projects, authorising expenditure, procuring goods and services, and at times providing more direct practical assistance and a central core of expertise (CSD, 1978:24). Since that time, the CCTA's role has steadily diminished. Financial responsibility was transferred back to the Treasury expenditure divisions in 1985. Responsibility for procurement was delegated back to departments. After this time Whitehall officials conceded that the government had no overall strategy for implementing information technology in the Civil Service (Financial Times, 9 June 1988; Trade and Industry Committee, 1988). Paul Freeman, then Director of the CCTA, stated: 'The pace and direction which information technology takes is up to government departments. There is no super-strategy for IT' (Financial Times, 9 June 1988). By 1995 the CCTA's staff had fallen to 300 from 540 in 1984 and the agency was mandated to recoup most of its cost (90 per cent) from its 'direct services' customers; hence its role became increasingly reactive rather than proactive. At the end of 1995 the new Deputy Prime Minister Michael Heseltine created the Central Information Technology Unit (CITU), a division of around ten people largely seconded from the private sector and run along the lines of a company board within the Cabinet Office.
The CITU was intended to secure 'a strategic approach to information technology across the Government' (Financial Times, 8 November 1995), representing a new recognition that government information systems were in need of attention, although its responsibility relative to the CCTA remained unclear. With the election of a new Labour government in May 1997, the control of information technology decision-making was further complicated by the creation of the CSSA, a central government group on information technology. The CCTA was again the loser, with its central subsidy of £10 million (1995–6) reduced to £3.5 million, under which it was to move to a full cost-recovery basis, with further staff cuts to 175. Thus most information technology decision-making remained decentralised, devolved to individual agencies and departments. By the 1990s the Treasury was the only central actor with controlling powers over individual departments and agencies with respect to information technology. But the degree of Treasury control over departments' information technology expenditure was debatable. On the one hand, fear of the Treasury


undoubtedly played a major part in putting departments under pressure to proclaim their projects a success and to conceal mistakes, for fear of endangering future budgets. On the other hand, it was difficult to see how the Treasury could retain the technological expertise necessary to control departments' computer expenditure effectively. A comment from the head of one expenditure division illustrates the necessarily hands-off nature of Treasury scrutiny: 'We are not IT experts. We ask questions like "Have they considered the full range of options; have they used standard Treasury techniques?"—intelligent laymen's questions…but when it comes to detailed questions like "Are they getting state of the art solutions?", we just have to check whether they have consulted the right experts.…As it happens, the person working for me now has got quite a good background in covering systems in MoD, something he specialised in at his previous job. He does happen to know something about IT, but it is almost an accident that this is the case. Most people in the Treasury don't have any great professional knowledge of information technology.' Such a tendency was reinforced by drastic cutbacks in Treasury staff in the Department's 'biggest review in living memory' (Guardian, 18 October 1994), in which it lost a third of its under-secretaries and a fifth of its assistant secretaries. By this time no governmental agency collected government-wide information technology expenditure figures. The National Audit Office (NAO) had some responsibility for the evaluation of individual projects, but treated them on a case-by-case basis with no government-wide analysis. Although the Public Accounts Committee had seriously questioned efficiency in agencies' development of information systems in the 1980s (see PAC, 1989) and again in the 1990s (PAC, 1994), its capacity to ensure improvements were made was extremely limited.
There was a Parliamentary Office of Science and Technology (POST), funded from 1989, but it was poorly resourced and had no controlling or overseeing powers.

Creating a technological infrastructure and the integration of computer systems

This section looks at governmental attempts to develop a technological infrastructure through the integration of separately developed computer systems. Integration is often heralded as the cornerstone of the information age: 'New forms of technological integration are also a key feature of the network society. Digital technology gives rise to convergence between computers, telecommunications and television, for example' (Taylor, 1997). Information technology is cited as bringing what Mulgan (1997) describes as 'connexity'; bringing 'new forms of relatedness, perhaps loose-textured, even anarchic, on the one hand, or highly structured on the other'. Malcolm


Bradbury ascribes this feature to the integrative nature of the technology itself: New technological systems not only freely cross borders; they leak into each other. The emergence of digital television, the increasing penetration of all private or domestic space by programmed output or systems of interactive contact, mean the new exposure of most individuals in most countries to the information superhighway and the communications melting-pot. (Guardian, 9 December 1995) However, the integrated nature of technological innovation does not, in spite of what such views would suggest, happen automatically. Computer systems in government have grown up independently, produced and consumed by individual departments and agencies and there is nothing automatic about their integrative power. Such a tendency has been exacerbated in the US and the UK by the decentralisation of responsibility for computer systems, outlined in the previous section. Many of the latest innovations rely on links with pre-existing computer systems, but up until the 1990s neither government paid much attention to creating any kind of technological infrastructure. Developments described in the first chapter were initiated within rather than across departments. An exception in both countries was the establishment of a departmental telecommunications network during the 1990s. In the US the federal data and telecommunications network (FTS2000) was established to provide a telecommunications infrastructure for shared departmental use. Usage of the FTS2000 was made compulsory, except for the FBI which demanded a higher level of security than was guaranteed. Benefits, considerably less than originally anticipated (Buckingham and Wyatt, 1992:2–3), included a freephone service for the Internal Revenue Service and Health and Human Services, although this ‘service to the citizen’ benefit was a post facto justification of an initiative aimed at improving economy and efficiency. 
In the main, users were satisfied with the voice service, but rather limited use has been made of the data services. Communicating inter-agency data on the network has been limited. In Britain the Government Data Network (GDN) was initially shared between four government departments—Inland Revenue, Customs and Excise, Social Security and the Home Office. The GDN was ‘not technically sophisticated’ (Buckingham and Wyatt, 1992:8) and usage was voluntary. The police refused to use it because of the political sensitivity of their data and many departments have developed their own local area networks. Usage in general has been much lower than expected. Thus neither network could accurately be described as a government-wide technological infrastructure. Additional innovations in information technology are made possible through the matching and integrating of existing computer systems. In fact,


even before the proliferation of local and wide area networks of personal computers, the integration of government computer systems was seen as providing the greatest potential for innovative use of computers. As early as 1973 Lamb had observed for Britain: The main development that can be foreseen is the linking up of several computer systems that are serving related purposes. The information relevant could be analysed in a way that would have been a flight of the imagination in the pre-computer age…The major problems of the next 10 to 15 years could well lie in establishing the organisational and managerial structures in which such developments could economically occur. (Lamb, 1973:43) As in the private sector, the demand for integration since this time has far outstripped the integration actually in place. In Britain examples of initiatives aiming to integrate systems were still sparse by 1996. The tightly vertical organisational tradition of the UK Civil Service works against such initiatives. Where such initiatives did occur, the advantages were clear, for example the 'co-ordination of the computerization of the criminal justice system' (Muid, 1994:124), in which the Home Office, Lord Chancellor's Department, Crown Prosecution Service, the courts, police and probation services combined to examine opportunities for overall service improvements (although replacement of paper transfers with electronic transfers was still putative by 1997; see CITU, 1996:17). The drawbacks of their absence were also clear. As noted in the previous chapter, the Criminal Justice Act of 1994 contained curfew orders which relied on electronic prisoner tagging, all attempts at which had until that time been unsuccessful.
When asked if his division had been consulted on the implementation of prisoner tagging before the legislation was passed, a senior official in the relevant division replied: 'The short answer is "no" but the longer answer is that "they should have been".' At the end of 1996, some signs of recognition of the lack of integration between government computer systems became apparent. The CITU produced a Green Paper, government.direct (CITU, 1996). While not proposing any particular reversal of the devolution trend in information technology organisation, the document proposed a front-end linking system which would provide a united front to citizens endeavouring to communicate electronically with the government, to be managed by existing agencies: viz. the CITU and CCTA. It outlined a future in which government services would be delivered to individual terminals and desktops, although the use of vague tenses in the document suggested that plans were not clear. Services 'would' be provided at easy-to-use public access terminals with touch-sensitive screens, in places such as post offices, libraries and shopping centres. Some services 'may' be available over the telephone. As commercial interactive services are increasingly delivered into the home, via media


such as cable and digital television, the same media 'could' be used to deliver government services (CITU, 1996:15). While generally welcomed, the Green Paper was criticised for lack of vision about the greater possibilities of information technology. As one consultant pointed out, while the promise of government.direct was substantial, carrying it out would be a vast technical task: 'If we don't change attitudes within the Civil Service so that we can do things across departmental boundaries, nothing is going to change' (Computing, 25 September 1997). In September 1997, after the general election, the new Minister for Public Services promised a White Paper covering the whole issue of using information technology to run government services more efficiently, with the full backing of Tony Blair. However, concerns remained (as with government.direct) that there would be 'little advantage in providing the public with electronic access to government services unless the links between and across such services are fundamentally rethought' (Computing, 25 September 1997). In the US the governmental environment seems to be more conducive to inter-departmental initiatives and there is a longer tradition of information technology projects developed through cross-agency initiatives. The US Department of Commerce has been at the forefront of inter-agency initiatives and has participated in the development of several highly complex systems. Of its constituent bureaux, the National Technical Information Service conducted a pilot project, FedWorld, which provided on-line access to the public via 118 databases throughout the federal government. The Patent and Trademark Office set an information dissemination objective to allow users throughout the country to access their databases to conduct patent- and trademark-related searches.
The Minority Business Development Agency’s computerised system for obtaining performance data from agency-funded organisations was made available for full operational use by more than 100 organisations. There were also attempts to overcome problems of a lack of co-ordination within departments, as well as across them, especially in the processing of financial data. Because financial systems have been developed over a long period of time and are needed in almost every part of the bureaucracy, over time they tend to become complex and overlapping. For example, the Department of State discovered around 20 discrete subsidiary systems processing financial data which seemed appropriate for integration with the Department’s primary accounting system: ‘The standardization of financial data and the system integration effort is a monumental task that will be conducted in stages over the next several years’ (OMB, 1993c:IV–89). In 1996 the Director of the National Institute of Standards and Technology announced that it was going to concentrate on a government-wide focus: ‘We still want to help with things that are cross-agency-based or generic in nature. But a lot of systems problems are unique to a single agency, and we have to strike a proper balance on where we focus our resources’ (Government Computer News, 29 April 1996). Thus in the US in the 1990s the integration of computer systems received
a higher administrative profile than in Britain and there were some examples of integrated computer initiatives being used to standardise administrative operations at a government-wide level. A Government Financial Management System was being developed, integrating departmental financial systems with those of central agencies: ‘The integration of agency primary systems, the Treasury’s system and OMB’s system will form a single government-wide system providing financial information to all levels of management and improving control of the government’s resources’ (OMB, 1991a:1–8). In addition OMB were developing a government-wide budgeting system, ‘Maximum Information in the Year 2000’ (MAX), to replace the existing arrangement for budgeting: the original (and antiquated) computer system used to print the President’s budget documents (the Budget Preparation System, BPS) and a system to track Congressional interventions (the Central Budget Management System, CBMS), which were unable to merge financial data from the Congressional Budget Office and the Treasury. In 1993 the National Performance Review announced the commitment to create a National Spatial Data Infrastructure, integrating data held by central agencies, state and local governments and private companies on geo-physics, the environment, land use and transportation into a single digital resource accessible to anyone with a personal computer (NPR, 1994:125). The National Performance Review set up an Information Infrastructure Task Force, which produced a ‘Strategic Vision’ document in 1994 and was joined by a ‘Government Information Technology Board’, the tasks of which included ‘developing shared government-wide information infrastructure services to be used for innovative, multiagency information technology projects’ and ‘creating opportunities for cross-agency cooperation and intergovernmental approaches in using information resources’ (Executive Order 13011, 16 July 1996). 
In the US enthusiasm for inter-agency projects was fuelled by a ‘growing consensus as to both the importance and the general characteristics of an improved national information infrastructure’ (OMB, 1992:III–5). OMB outlined an aim to build a consolidated infrastructure that operates across agency boundaries so that ‘a person will be able to receive personalised, multi-media job and health care information and screening, pay bills, and apply for a birth certificate 24 hours a day, seven days a week, from home or at a public terminal’ (OMB, 1992:III–5). By 1996 the Federal Information Center, developed by GSA to provide integrated government-wide information to the public, was up and running, providing service to all 50 states with a toll-free number. Another example was the border law enforcement system noted earlier, IBIS, which according to OMB would improve border law enforcement by ‘encouraging co-operation in the development of future border information systems’ (OMB, 1992:I–9). Finally, the Information Technology Management Reform Act of 1996 specifically made provision for agencies to enter into contracts which would then be used by other
agencies. Also in 1996, a government-wide, one-stop, electronic bulletin board system was created to improve links among agencies. In summary, neither country can be said to have developed a technological infrastructure, although considerably more moves in this direction have been made in the US. Both government.direct and the NPR indicate a realisation that such infrastructures matter. However, in spite of frequent mention in both documents of ‘infrastructure’ and ‘rationalisation’, both initiatives really involve attempts to link together currently separate systems rather than any tangible, government-wide system of transferring information.

Conclusions

The increasing use of information technology by government departments brings new pressures to innovate. New policy windows open. And as government agencies innovate, counter-innovations developed outside government provoke further innovations from government. Given the already wide gap between what government does and could do with information technology, these pressures seem unlikely to slacken.

Such developments do not come cheaply. Both governments have steadily increased their budgets for information technology, both in absolute cost and as a percentage of overall expenditure. Differences across departments noted by Nora and Minc in 1981 seem likely to continue, reinforced by the increasing variation in the extent to which technology is used for core tasks. In some agencies information technology has assumed a role in the core task of the agency. Thus in the UK Ordnance Survey agency, geographic information systems have become the agency’s principal task, feeding into all the work the agency carries out: as one senior official observed, ‘information management is our core business now’. The census agencies in both countries are other examples of agencies where the development of innovative and robust computer systems is a central activity.
Other departments have found it more difficult (or less necessary) to relate information technology to their primary functions. In the Foreign and Commonwealth Office and the Treasury in Britain, telegrams are still used for communication between bureaux, and information technology plays a less than central role.

Internal regulation is an obvious response to a new and expensive administrative tool. Although both governments have experimented with various regulatory responses over time, in general central control over information technology decision-making remains weak. In particular, the two central bodies with responsibilities for controlling expenditure—the Treasury in the UK and the Office of Management and Budget in the US—seem to lack resources (authority, expertise and organisational capacity) to control information technology expenditure in comparison with other budgetary items.
Finally, governmental attempts to use information technology as a government-wide tool have been minimal. In general applications are developed in isolation with few links between them. Both governments now plan to create national information infrastructures, but their histories of computer systems development render such a task difficult and expensive. The lack of computer-based communication or transfer of resources across departments and agencies calls into question the integrative power of information technology heralded by enthusiastic modernists.

3 Computerisation in the UK Benefits Agency

This chapter investigates computerisation in the UK agency with responsibility for delivering social security benefits: the Benefits Agency, formerly within the Department of Social Security. The Benefits Agency was the largest agency formed out of the Department of Social Security (DSS) under the government’s Next Steps programme in 1991. It is responsible for the delivery and administration of the major social security benefits in the UK: income support, pensions, invalidity benefit (the three largest benefits), child benefit, family credit, disability living and working allowances and the social fund. The operations of the organisation are huge and, in relation to total population, of comparable size to those of the equivalent organisation in the US: the UK Benefits Agency maintains records for 60 million people and the US Social Security Administration for 260 million. By 1994, the Benefits Agency’s total yearly expenditure on benefits was £32,400 million, administered to some 15 million beneficiaries. Operations were conducted by around 90,000 staff through nearly 500 local offices (DSS, 1993; ITSA, 1995). The proportion of benefit expenditure spent on administration was 5 per cent, while information technology accounted for 19 per cent of administration costs.

The Department of Health and Social Security (the predecessor of the DSS) introduced the use of computers for social security tasks in 1959, developing huge systems in batch mode, held in central offices in Newcastle, North Fylde and Livingston and updated from information sent on paper from local DSS offices. The computer systems were benefit-specific; that is, separate systems existed for child benefit, supplementary benefit, pensions and other allowances. By the 1970s the Department had become the largest and most experienced user of information technology in government service in Great Britain.
Although no study assessing the efficiency achieved by these systems has been found, the size of the work performed daily was impressive (Avgerou, 1989). By the end of the 1970s, however, the computer systems of the DSS were widely recognised as outmoded and in need of attention. In recognition, the Department embarked on a large-scale and ambitious project to modernise them via the Operational Strategy. Uniquely among UK government computing projects, considerable academic attention
has been focused on the computerisation of the DSS and specifically the Operational Strategy Project (O’Higgins, 1984; Willcocks, 1987; Avgerou, 1989; Dyerson and Roper, 1990a, 1990c, 1992; Margetts, 1991; Margetts and Willcocks, 1992, 1993; Bellamy, 1993a, 1993b; Bellamy and Henderson, 1991; Collingridge and Margetts, 1994). Previous research has focused on the organisational aspects of computerisation, so these sources are summarised and analysed in this section rather than primary sources, which are used for those areas that have received less attention (for example, contracting, assessment and developments in the 1990s).

The birth of the Operational Strategy, 1977–87

By the end of the 1970s DHSS operations had become, as an Operational Strategy Review politely noted, ‘heterogeneous and relatively uncoordinated’. There were no computers in local offices and the intertwining of manual processes with those that had been automated was inefficient and cumbersome. Claimants were dissected into ‘social slices’ by the benefit-specific computer systems (O’Higgins, 1984:107). Personal data on individual clients were held, on average, in five different places. There was a profusion of non-standard forms: 8,000 internal and 12,000 external (Dyerson and Roper, 1990a:5). Data transfer relied on paper-shifting by administrative assistants: linking letters to casepapers, retrieving documents or filing them. Casepapers were often lost in their travels from one part of a local office to another or between the DE and the DSS, and error rates were high. An early and ambitious attempt at modernisation, started in 1977 (CAMELOT), failed; the £12 million costs were written off to experience and the project’s name vanished into orchestrated obscurity when the project was abandoned in 1981. In 1977 plans were conceived and developed to reform completely the use of information technology in the DSS.
The Operational Strategy was a combination of projects with the aim of introducing computer terminals into local offices, linked to four area computer centres which were connected to the departmental central index at the existing central computer installation in Newcastle. Facing the choice between a centralised computer system (with the problems of heavy communications and transaction loads, highly complex software and the serious threat to the whole social security system in the event of a major systems failure) and a decentralised system (higher capital and running costs, the use of advanced, previously untested micro-computer facilities and the problems of keeping software uniform and up to date), the DSS opted for a compromise three-tier structure. The key computer centre maintaining the General Index at Newcastle would remain, while further details on each claimant would be held locally in one of the new computer centres. The original proposals showed a ‘creditable awareness of difficulties’ (Willcocks and Mason, 1986:3), with a modular approach permitting autonomous development of the 14 proposed
computerisation schemes over 15 years. As O’Higgins pointed out in 1984, ‘the technical line of development proposed in the strategy seems eminently reasonable’ (O’Higgins, 1984:118). The aims of the Operational Strategy were threefold: better service to the public, including reduction of payment errors; economy, facilitated by reducing the numbers of staff needed to administer social security benefits; and more satisfying jobs for staff by automating repetitive or tedious tasks (SSC, 1991b:6). The Department planned to rationalise data through the ‘whole person’ concept, so that data held and services given were based on clients rather than individual benefits. This design would enable one member of staff to deal with a client, rather than a client having to speak to pensions staff using a pensions system, in addition to income support staff using an income support system. Therefore the original documentation of the project specified that the principal method of organising the systems should relate to functions rather than benefits (DHSS, 1982:13). A functional analysis was carried out, identifying the common functions across benefits: to collate evidence, determine entitlement, and implement decisions; ‘this cross-benefit approach should be carried forward from the current analysis phase into the detailed design of all new systems’ (DHSS, 1982:13).

The early years of the Operational Strategy were devoted to planning and design of the constituent projects. In 1984 the Department was criticised by the PAC for lack of financial control, especially in the light of a recalculation of the savings from an intermediary local office project within the Operational Strategy. The savings had been recalculated by the DSS as £66 million, £310 million less than the original estimate, provoking shock and concern among members of the PAC (PAC, 1984:9–11) and casting doubt on the eventual savings from the Strategy, predicted at £1,900 million.
The PAC later concluded (PAC, 1989:ix) that the Department did not operate adequate arrangements for financial control until July 1988, six years after the commencement of the Strategy.

1987–91: implementation and results of the Operational Strategy

In 1985 the government announced plans for a major reform of social security (Secretary of State for Social Services, 1985). The changes meant that project plans had to be altered within a relatively brief period of two years up to April 1988. In 1987 it became clear that many of the projects had slipped behind their original target dates. However, that year Eric Caines took over as Director of Social Security Operations and under his direction the Department resolved that no further delays would be tolerated. Implementation after 1987 proceeded at a furious pace and subsequent target dates for individual projects were largely met. However, in January 1989 an unusually critical report from the National Audit Office brought the project into the public and parliamentary eye.


The NAO found that to recover the time lost from slippage on main projects up to 1987 and to ensure delivery of the systems at the earliest opportunity, ‘the Department had deferred low priority functions; postponed less important projects indefinitely; employed an increasing number of consultants; and made extensive use of overtime’ (NAO, 1989a:11). In fact the ‘less important’ projects were largely those concerned with improvements in quality or additional functionality. After 1987 the driving rationale for the project had become the speed of implementation in order to achieve the staff savings on which its financial viability depended in the eyes of the Treasury.

By 1991 there were terminals in all local offices, nearly 35,000 in total, and the pension and income support systems were operational, processing 18 million benefit enquiries per year. The DSS was again the ‘biggest user of information technology by a long way’ (SSC, 1991b:13) but biggest does not necessarily mean best in this context. The project was reputed to be the largest civil computing project in the world and the resulting systems were unique by virtue of their size. To have attained the aim of linking all local offices to a central system could clearly be regarded as a major feat. But commentators at the time and since have raised concern over the financial viability of the project, the reliability and accuracy of the systems themselves, and the quality of service offered. Accuracy has also been a concern, as the Comptroller and Auditor General has qualified the accounts of the DSS every year since 1989.

The ‘whole person’ concept was never operationalised in the Operational Strategy systems. After implementation clients were still treated separately according to the benefit being claimed, and the pension system and income support system were developed separately with few links between them.
In 1994 users of the current systems had to log off the income support system before entering the pensions system. The DSS funded a study on the ‘whole person’ concept in 1988 (Adler and Sainsbury, 1988), after the design phase of the project was complete. However, this study took the form of an analysis of what the ‘whole person’ concept might mean to interested parties: worthy, but late, considering some claims that the concept was to be a ‘key plank in policy formulation’ (Burgess and Clinton, 1989:61) of the strategy. For the concept to have been operationalised it would have needed to be at the core of design plans from the start. At the end of the 1980s the Department claimed it was starting to use information technology to make major organisational changes: ‘The increasing ability to separate physically the front and back office functions is beginning to provide the Department with a new geographical identity’ (PAC, 1989). In 1988 a report entitled The Business of Service (DSS, 1988) proposed the relocation of administrative functions to Glasgow, Belfast and Wigan for offices in London that were deemed to be delivering an unsatisfactory service. These local offices remained as reception points with terminals for viewing while the assessment of claims was carried out in the social security centres.
Relocation began in 1989; a total of 21 offices across London were selected as likely to benefit. The Relocation of Work Project was much heralded by the proponents of the ‘information polity’ (Taylor and Williams, 1990:158), who cited the DSS as a good example of where new information resources had been part of a ‘profound reconceptualisation of overall organisational design’, in which ‘information resources are also perceived at the strategic level as a force for horizontal organisational integration’. But the Relocation of Work Project was disappointing even from the point of view of economy, its primary objective. Although the DSS never stated publicly that relocation failed to achieve its objectives, ‘the Chief Executive of the Benefits Agency stated on several occasions that no further relocation exercises are planned’ (SSC, 1991d:11). In 1991 the Secretary of State for Social Security was asked the following question: ‘The break-even point on the savings seems to be receding two years for every year we go on, so I understand it is now at 2020; is that correct?’ He replied ‘I am afraid I could not speak to the year 2020’ (SSC, 1991e:105).

The 1990s: agencification and the revival of the ‘whole person’ concept

In 1990 all the information technology services divisions within the DSS were brought together to form the Information Technology Services Agency (ITSA), a Next Steps agency which thereafter accounted for all the Department’s expenditure on information technology. At first ITSA largely consisted of the in-house staff employed on the Operational Strategy and the agency’s primary responsibility was to administer the projects that went to make up the Operational Strategy.
During the 1990s ITSA grew, as it provided information technology services and managed procurement of both hardware and information technology services for all agencies in the former DSS, working under Service Level Agreements and Service Development Agreements, negotiated between agencies. ITSA’s budget in 1992 was £422 million (ITSA, 1992:32). At the same time as ITSA was created, the Department was further split into the Benefits Agency, the Contributions Agency and three smaller, client-group-based agencies (Resettlement, Child Support and War Pensions).

In the 1990s there was a re-emergence of interest in the ‘whole person’ concept with a new discussion document launched in 1992 (Benefits Agency, 1992). The Benefits Agency initiated the Common Information Systems project, which was intended to make the original dreams come true. Bellamy (1993a:10) has argued that the new plans have ‘real political clout’. But customer information will, for many years, continue to be held on different benefits systems: ‘Integration is to be achieved wherever and however the systems interact with customers’ (Bellamy, 1993a:10). Bellamy and Henderson (1991) linked any change towards developing computer systems that allow
different benefits to be calculated by the same computer user to a change in the benefits system per se, indicating that such moves were as yet entirely notional. And in 1998, the DSS was defining its needs for better integrated ‘corporate data’ in terms that were reminiscent of descriptions of the benefits systems in the 1970s:

The separate systems used for keeping records and handling benefits mean there are many duplicated fields about the clients of the DSS’s various agencies. An individual’s name and address will often appear in several different systems, and the information held is often inconsistent. Local office staff often have to move in and out of several systems to deal with a single transaction or change a record. (Kable Briefing, February 1998)

During the 1990s computer technology within the Benefits Agency became a newsworthy issue as the subject of smart cards for beneficiaries and various other policy innovations came under debate. Yet a far higher degree of computerisation than had been achieved remained technically feasible. The Agency’s methods for calculating and delivering payments were still a long way from those of major banks, with high reliance on paper processes. For example, in 1994 beneficiaries were still paid through the postal transfer of girocheques and order books from Area Computer Centres (NAO, 1995e), in spite of the DHSS having noted ten years earlier that modern payment methods were used in almost every other developed country (Secretary of State for Social Services, 1985:62). Postal and courier services were the principal method of transferring information between agencies and divisions within the DSS; if Benefits Agency staff needed updated information about a claimant’s national insurance contributions record, they obtained the information by dispatching a form by post to the Contributions Agency in Newcastle, normally received ‘within ten days’ (NAO, 1995d:8).
Administration of the Social Fund had been computerised but with considerable difficulty, giving rise to serious concerns over accuracy and resulting in extraordinarily high administrative costs of over 60 per cent of the amount paid out in benefits in 1995 (DSS, 1995:58). In 1994 the Social Security Secretary signalled moves to introduce computers in all post offices, which could then be used to replace benefit books with new payment cards. Ministers claimed that up to £155 million could be saved on order-book and girocheque fraud by introducing a computerised system, making the new system self-financing. But Home Office figures suggested that introducing such a system would cost up to £475 million to develop, with a subsequent annual cost of up to £100 million (Kable Cuttings, 16 May 1994). The contract for the system (known as POCL) was signed in 1995 with a planned completion date of 1998, but by 1997 the project had been set back again until 2000, with only 205 of 19,000 post offices automated, and then only to distribute child benefit rather than the full range of benefits and pensions promised under POCL (Kable Briefing, January 1998).

While discussion of fraud-reducing innovations continued, the Benefits Agency still gave away significant sums in overpayments through inaccuracies in the existing systems. Errors in income support in 1992–3 resulted in overpayments to claimants of £465.2 million (underpayments were only £136.2 million) (NAO, 1993b:9). Equivalent figures in 1994 were £540.1 million (overpayments) and £76.6 million (underpayments). Thus the price of the computers now in use in social security offices has been high. The remainder of this section investigates whether the cost may be justified and examines the extent to which computerisation of benefits delivery has achieved the objectives stated in the original proposals for the Operational Strategy.

Organisation of information technology in the DSS and the Benefits Agency

The command structure within the DSS was and is organised by benefit, and benefits commands were responsible for their own systems within the Strategy projects. This organisational structure is the primary explanation of why the resulting computer systems were also benefit-specific. In spite of the functional analysis carried out and outlined in 1982 (DHSS, 1982:12–13), systems analysis relied more on the automation of existing work processes than the radical overhaul of working practices that the ‘whole person’ concept would have required. As one official who was present at the time put it, ‘The supplementary benefit clerks came and told the systems personnel what they did at present, and these processes were automated.’

Management of the Operational Strategy was influenced by concern over industrial relations. The initial phases of computerisation were endangered by striking staff. Moreover the anxiety of top DSS management to ensure that such an experience was not repeated to some extent shaped the organisational arrangements during the course of the project.
A strike at Newcastle in 1984 was particularly disruptive and expensive, at a cost of some £150 million on government figures (Willcocks and Mason, 1986:6) for emergency payments by the Post Office and extra staff and overtime payments. By January 1985 management and unions had still not reached agreement over the proposed staff cuts through computerisation. A confidential Whitehall report in March 1985 recommended a revision of policy, intended to minimise the risk of industrial power concentrations building up in large computer installations. In 1987, when Eric Caines took over as Director of Social Security Operations, he decreed that half the internal programmers working on the project be moved to other areas and replaced with consultants, because they were seen as a strike risk, thereby eroding the skills base that had started to build up at the development centre in Lytham St Annes. After implementation of the Operational Strategy and agencification, the responsibility for information technology management activities was ‘shared between ITSA, the Department’s Headquarters, and
the other Agencies of the DSS group’ (ITSA, 1992:13). The remaining core of the DSS was organised into three groups: Policy; Resource Management and Planning (including some responsibility for information technology); and Legal Services. A Departmental Information Systems Strategy Committee (DISSC) was formed out of the original Operational Strategy steering committee on which the various constituencies interested in information systems within the DSS were represented. Thus formal authority over information technology rested within the core department, while most knowledge and understanding of information systems lay elsewhere, particularly in ITSA and the Benefits Agency, where Bellamy (1993a:32) observed ‘the development of countervailing authority over IS by senior mainline management in user commands’ who had ‘built their own networks of advisers and consultants’. After agencification, then, management of information technology had become diffused across organisational boundaries within the DSS, causing problems for co-ordination. In early 1992 the Department perceived the need for some kind of central control, earmarked 5 per cent of the information technology budget for regulatory activity, and created the Departmental Information Technology Authority (DITA) within ITSA to carry it out. DITA was initially a small group (six people), located in London rather than Lytham St Annes where the rest of ITSA was based, with the brief to ‘maintain and develop IS strategy, to regulate information technology standards, technical architecture and policies, and regulate IT procurement within the group’. It was also charged with providing a departmental focus on information technology as opposed to the interests of individual agencies. During the 1990s the DSS increased the proportion of information technology work that was outsourced and the confusion of responsibilities across proliferating information technology projects worsened.
Kevin Caldwell, head of DITA, ordered that a clear philosophy for information technology management, especially procurement, should be developed. Three hundred staff were transferred from ITSA to DITA and a review process was conducted to identify what structures would be required. DITA would be partially funded by ITSA, yet continue to act as its regulator: procurement services were placed with DITA because it could deal with ITSA in a ‘market fashion’, as one official described the relationship. At its peak in June 1994 DITA had 344 staff. This figure was reduced in 1995 to 264, with 134 staff helping senior IS strategists and planners develop their own information technology strategies (directly funded through charging business divisions) and the remainder regulating procurement, spending around £450 million per year. After ITSA’s creation, agencies bid for the information technology services they received. But ITSA argued the bidding regime ‘caused user agencies to treat IT as a free good: the fact that users bear no direct costs stimulates demand which is potentially unlimited, and creates few incentives for agencies to consider non-IT solutions for their problems or to find ways

60 Information technology in government

of using IT more efficiently’ (Bellamy, 1993a:27). The Department responded by introducing a hardcharging regime in 1993, after which ITSA’s services were provided to its users in return for the notional transfer of the information technology budget. In 1994 ITSA had some 40 service level agreements with user branches in the Department, their agencies and the employment service. ITSA’s ‘Mission Statement’ for 1992/93 stated that outside DITA, ‘the operational services remainder of ITSA will become, within three years, the first-choice supplier of IT services for the DSS group’ (ITSA, 1993:6). But there must be some doubt that if agencies were to use ‘real money’, they would voluntarily continue to use ITSA rather than a private sector consultancy. Thus by 1994 the management structures for information technology were unstable. It seemed probable that the buildup of expertise in user commands noted earlier would expand, further duplicating ITSA’s role; and ITSA’s future remained uncertain. ITSA’s reputation was not enhanced by suggestions that it had wasted £35 million on what is termed ‘rework’: the cost of replacing or rectifying ‘poor quality products and equipment and inaccurate or incomplete implementation’ (Independent, 9 May 1994). Ironically by 1994 one of the constraining factors remaining to the eventual implementation of the ‘whole person’ concept was the agencification of the DSS. The spirit of the Next Steps programme worked against the central control of data about people and payment information. Yet if calculating engines did not remain standardised, it would become impossible to ensure that distinctly developed programmes would work to exactly the same rules. Central control over information systems would facilitate the integration of currently benefit-specific information, ensuring that ‘the IT is not a constraint for the organisation, whereas it is at the moment, because that is it how it has been built’ (interview with an ITSA official). 
But central management of information technology ran against the notion of executive autonomy. In this way Next Steps constrained any movement away from benefit-specific systems towards function-specific systems, in a reflexive causal relationship. Because the Operational Strategy systems were organised by benefit, they worked against co-operation across DSS agencies, which were also organised by benefit. By 1994 the immense political upheaval that would now be necessary to implement the ‘whole person’ concept caused many systems staff to believe ‘it will never happen’. Even initiatives to introduce common screens across benefits (an ‘about the person’ screen showing personal details) caused arguments between different categories of benefit officer and different benefit offices. There was a strong feeling among the ‘customers’ of ITSA that the technological ‘tail’ should not ‘wag the dog’, and there were complaints that ‘that is not the way we do things’ rather than an acceptance that there were benefits to be attained from commonality. Although the top-level staff of the Benefits Agency were enthusiastic about the new screen, it was never implemented. Local managers had sufficient autonomy to resist the change; DITA did not have sufficient authority to resolve the problem. The notable executive autonomy that had developed within the Benefits Agency worked against the idea of standardised, centralised computer systems.

Computerisation in the UK Benefits Agency

Contract relationships

At the outset of the Operational Strategy project the Department planned on the basis that their own staff would undertake most of the computer development work and that outside consultants would supplement this as required. In reality, consultants were involved in the conception of the project and were used to an increasing extent throughout development. Arthur Andersen, CAP and Computer Sciences Corporation were all involved in the development of a Local Office Project in the early 1980s, and in 1984 the major tender for the development of the system software went to a consortium consisting of ICL, Logica and four universities. In 1986–7 the Department employed around 150 computer consultants at a cost of £12.1 million (NAO, 1989a:20). A DSS representative detailed some 123 contracts spread among 60 consultancy firms for the DSS in response to questioning in 1987 (PITCOM, 1988:67). After 1987, when Eric Caines took over as Director of the project, the number of consultants increased to 235, at an annual cost of £22 million (NAO, 1989a:20).

In 1989 the entire Livingstone Computer Centre was contracted out to Electronic Data Systems (EDS) for a preliminary period of five years, as part of a programme for ‘making my [the Secretary of State’s] computer installations less vulnerable to disruption from industrial action’ (Guardian, 20 September 1989). Since that time EDS has taken over two more of the four area computer centres. In 1994 the Livingstone Centre was market tested and the contract awarded to ISSC (IBM’s Facilities Management Division), involving a ‘complex transition’ of responsibilities (ITSA, 1995:22), later reversed when EDS won the contract back again in 1995 (see Chapter 7). 
The DSS was notable for its extensive ad hoc use of management consultants, starting during the Operational Strategy and continuing throughout the 1990s. ITSA’s expenditure on consultancy staff costs was £82 million in 1990–1 and £97 million in 1991–2. ITSA spent more on consultants (over £100 million) than it did on its own full-time staff during 1994–5 (Kable, 1994:135). Thus the DSS spent over 8 per cent of its information technology budget on consultancy (excluding facilities management or major outsourcing deals), a far higher proportion than any other department; the next highest was the MoD at 4 per cent.

During the 1990s ITSA undertook a bewildering array of contracts, with the number, size and co-ordination of contract awards arousing comment from officials in other agencies and presenting distinctive management problems. As one official put it:

There is a big difference between contract management, which is actively managing contracts, and just creating a contract and putting it in a drawer and only getting it out when something goes wrong. A lot of the contracts will have conditions in them and terms in them which can constantly be changed, so there needs to be active looking at contracts to see if charging levels should be changed. That is active contract management and the other one is creating contracts and putting them in a drawer and dusting them down from time to time…with the sheer weight of the number of contracts we had that is what was happening.

An embryonic contract management team was built up, with the emphasis on proactive contract management. This move met with some resistance from senior DSS staff:

A lot of people in the DSS are very naïve. They don’t like people saying you have to be careful with contracts and paperwork.…Some of the senior people seem to think why do all of that, these are our friends. It’s interesting…if you look at the track record of contracts that have come to fulfilment, it’s quite a trouble to find one successfully concluded.

Several smaller contracts have ended in legal action, and in 1994 the DSS were taking a consortium of ICL and Hoskyns to court over the Analytical Services Statistical Information System (ASSIST) contract, a ten-year contract to design, develop and maintain a complex statistical computer system, which failed in 1994 amid claims of a lack of consultation and changing specifications. By 1997, media reports claimed that the DSS had still failed to win significant compensation or even to recover legal costs of more than £1 million (Computing, 19 June 1997).

Relationships with central agencies

The organisation outside the DSS with the most impact on the Operational Strategy was undoubtedly the Treasury, and speed of implementation remained the Treasury’s chief point of criticism throughout. It seemed that decision-makers, senior civil servants and consultants were invulnerable to criticism from all but the Treasury, through the NAO and the PAC. 
In the early days of the Operational Strategy a ‘plethora of committees’ (Burgess and Clinton, 1989:59) was set up to oversee the project; membership embraced ‘all those with a vested interest’, including representatives from the CCTA. Until 1984 the CCTA also exercised overall financial control, but after these arrangements had received criticism from the PAC (1984; 1989) its role was advisory only and the CCTA was not mentioned in the NAO report of 1989. Caines’ autocratic direction after 1987 was not conducive to outside intervention. Criticisms and suggestions from the PAC were noted but ignored. The impact of union responses to the Strategy was minimal, with formal mechanisms for consultation replacing any substantive role in the decision-making process.


After the CCTA’s move to Norwich, while ITSA continued to be based in Lytham St Annes, communications between the two were largely limited to security issues, although the Benefits Agency sometimes used the legal department of the CCTA, and the CCTA had representatives on the management forum of DITA and on a steering committee concerned with procurement. Roy Dibble, the head of the CCTA, sat on the DISSC. In the 1990s the CCTA was perceived by ITSA as being useful for guidance. One official described the organisation as ‘a useful postbox for information about technological developments’, but because of ‘the sheer size of the DSS now, we know more about doing this stuff than the CCTA do, so we don’t go to them for advice’.

Policy considerations

During the social security reforms of 1985 the divisions developing the Operational Strategy systems were not consulted and were given little warning of the reform, yet the legislative changes meant that project plans had to be altered within a relatively brief period of two years up to April 1988. According to many sources, those involved in planning the Operational Strategy were not consulted until the Green Paper was published. Eric Caines, Director of the Strategy at the time, indicated to the Public Accounts Committee in 1989 that the urgency of modifying supplementary benefit software into income support software in time for the introduction of the new benefit gave rise to many of the problems experienced: ‘We rewrote software to be used on the same machine. It had to be rewritten in a great hurry to meet the April 1988 start date’ (PAC, 1989:8). The lack of consultation, rather than causing the Department concern, seems almost to have been welcomed: a junior social security minister stressed that the Strategy was about operations, not about the benefit system itself (DHSS, 1983), and information technology was apparently regarded strictly as an operational matter. ITSA from its creation was little consulted on policy matters. 
However, the extent to which information technology personnel were consulted depended partly on the personalities involved. When the Jobseeker’s Allowance was introduced in 1995, for example, DITA was consulted, but only because a key figure had previously worked in ITSA and deemed it worth investigating the systems implications. In this case the costings and the implications of the various options influenced the eventual policy decisions.

In general, the Operational Strategy systems have influenced the Department’s speed in dealing with policy changes, clumping the implementation of legislative changes together to coincide with software upgrades:

A circular cannot be sent out to do something in a fortnight’s time, it has to wait for the next release of software and there are two per year. Sometimes it is booked up eighteen months or two years in advance. (SSC, 1991d:28)


Assessment of information technology in the Benefits Agency

Overall, it is worth noting that the only external commentators fulsomely to praise the Operational Strategy have been management consultants with a strong vested interest in enhancing the project’s reputation (see Burgess and Clinton, 1989). Other enthusiastic praise came from Eric Caines, leader of the Operational Strategy from 1987 to 1990, who referred to the project as ‘an outstanding success’ (Computer Weekly, 11 November 1993). Given that the potential savings of the Operational Strategy were the objective upon which the organisation concentrated throughout the project’s life, cost was the main standard against which the success or otherwise of the project could be evaluated. Quality of service can also be considered. Increase in job satisfaction, although stated as a primary objective, was never taken seriously in the organisation of the project. Indeed, in 1991 when Sir Michael Partridge (then Permanent Secretary of the DSS) was asked by the Chair of the Commons Social Security Committee whether staff savings were ‘the point of having the whole information technology programme’, he replied simply ‘Yes’ (SSC, 1991b:20).

In 1989 the National Audit Office expressed concern about the financial viability of the Operational Strategy (NAO, 1989a). Between 1982 and 1988 the estimated costs of the Strategy from commencement to 1998–9 rose from £713 million to £1,749 million in real terms, while net savings fell from £915 million to £414 million in real terms. Originally it was planned that 20,186 staff would be saved upon implementation of the technology, but the number of Benefits Agency staff continued to grow throughout the 1990s. After the NAO report in 1989 the project became diffused around the organisation, so that costs and benefits became hard to assess. Workload increased during the Operational Strategy project and in 1991 the Treasury gave an additional £30 million to the DSS in compensation. 
Agencification further complicated the analysis of staff savings. The DSS seemed to welcome the cloud of confusion surrounding estimates of costs and savings. At the Social Security Committee in 1991 DSS representatives spoke of a reduction in staffing levels of 5,000 personnel, based on £115 million savings on the Operational Strategy, while claiming: ‘There are so many policy changes which have an impact on staffing that it is actually very difficult to say, “We are reducing the number of staff”’ (SSC, 1991b:103). The last formal cost-benefit review of the Strategy was the National Audit Office report of 1989, and more recent documents refer to these costings (see SSC, 1991b:6).

Throughout the 1990s the Social Security Committee expressed concern over the administrative costs of benefit delivery. The administrative costs as a percentage of benefit expenditure varied significantly between benefits, being at their highest in income support and family credit (the two most expensive benefits delivered by the Benefits Agency in terms of expenditure). To establish the administrative costs for the Benefits Agency over time is problematic, because information technology costs were not apportioned until 1993. But the administrative costs of the main benefits computerised under the Operational Strategy, as calculated by the DSS, are shown in Table 3.1.

Table 3.1 Estimated administrative cost of main benefits in the Benefits Agency

Source: DSS (1991:39; 1992:22; 1993:20; 1994:46; 1995:58). Notes: Operational Strategy systems came online in July 1991. Figures for 1988–9 and 1989–90 already contain capital costs for investment in computerisation (£97 million and £215 million respectively).

The Benefits Agency responded to Social Security Committee concern over savings from computerisation by claiming that ‘the real savings will start to emerge from 1991–2 onwards’ (SSC, 1991a:4). However, as Table 3.1 shows, even by 1994 the average weekly administration cost per beneficiary for both income support and retirement pensions was still higher than in 1988–9, when figures already included capital costs for computerisation. To some extent administrative costs were formula-driven rather than based on workload: staff were allocated across local offices using a formula provided by the Treasury, based on the rate of change of income support claimants.

However, as noted earlier, the Benefits Agency was expecting a reduction in costs from the Operational Strategy, and the assortment of published figures available casts doubt upon this expectation. Table 3.2 shows DSS estimates of savings from the Operational Strategy from 1992 and should be read in conjunction with the fact that the running costs (including capital costs) of ITSA were £380 million in 1990–1, £414 million in 1991–2, £436 million in 1992–3 and £548 million in 1993–4 (DSS, 1992:30; 1993:31; 1994:67). In 1994–5 ITSA was operating the Net Running Cost Regime for the first time: the Benefits Agency was charged £240 million for ITSA’s services (ITSA, 1995:45). As ITSA’s work for the Benefits Agency consisted almost entirely of completing and running the Operational Strategy network of computer centres and local office terminals, and maintaining the pension computer system, the income support system and the departmental central index, a substantial proportion of this figure should be included in the running costs of the project. 
Table 3.2 Cumulative costs and savings for the Operational Strategy

Source: DSS in response to parliamentary questions on 23 July 1991 (Hansard, 195, col. 533).

Certainly the projected yearly gross costs of the Strategy of around £95 million per year for the years 1993–9 (see column 2 of Table 3.2) are understated, and the DSS’s projected net savings (column 4) seem unlikely ever to materialise.

With regard to quality of service, accuracy has been a notable problem for the Operational Strategy systems. The Benefits Agency claimed at the Social Security Committee (SSC, 1991a:6) that accuracy had improved by a couple of percentage points, but this improvement was not borne out by published figures. The government’s Chief Adjudication Officer reported that almost six in every ten decisions on claims for income support were defective in one or more respects in 1990–1 (Independent, 5 November 1991). The National Audit Office qualified the accounts of the DSS for income support payments every year from 1989 to 1994. The DSS itself admitted that it made more than one million errors in benefit assessments during 1990–1 (Daily Telegraph, 30 May 1992), that fewer than one in 20 were corrected, and that accuracy for income support was a problem recognised by the Benefits Agency (DSS, 1994:62). The error rate for income support payments from 1988 to 1994 is shown in Table 3.3.

Table 3.3 Errors in income support payments, 1989–94

Note: Figures are taken from NAO (1989b:vi; 1990:vii; 1991a:vii; 1992; 1993b:9; 1994:8). Nearly 90 per cent of local offices were awarding income support under a computerised system by April 1991 and all offices were operating the new system by July 1991.

In 1994, in six of 21 offices visited by the NAO, at least one in four cases scrutinised was in error.

Turnaround does seem to have improved. Some offices on the system were achieving clearance times of three days for tasks connected with income support that previously took five or six days (SSC, 1991a:6). The NAO found that there was a 20 per cent improvement in clearance times for pensions from 1990–1 to 1991–2, with a further 19 per cent improvement from 1991–2 to 1992–3 (NAO, 1995d:17). The Department claimed that the national average clearance time stood at 4.7 days compared with 6.2 days immediately before computerisation for income support, and 22.7 days compared with 28.2 days for pension claims (SSC, 1991a); the DSS (1994:62) gave 3.5 days for income support clearance times in 1992–3. But there were problems with order books being sent to the wrong places and delays in sending them. As a NACAB witness put it to the Social Security Committee:

I suppose in the old days we used to hear the phrase ‘It is in the post’, now we hear the phrase ‘It is the computer’s fault’.…‘we have sent it through to the computer centre’ and ‘we are waiting to hear’, that is quite frustrating. (SSC, 1991d:26)

Furthermore, the clearance times for UK pension administration in 1994 compared unfavourably with those for old age and survivors insurance in the US (the two benefits are comparable). The National Audit Office compared pensions administration in the UK with that carried out by the US Social Security Administration (NAO, 1995d:15) and noted that from 1992–3 the SSA’s main target had been to clear all old age and survivors insurance applications before the first regular payment was due, or within 15 days from the effective date of filing the application, if later. The Benefits Agency’s target for clearance of pensions claims dealt with by District Offices was 20 days in 1994 (NAO, 1995d:13). In the year to September 1993 the SSA had met its more ambitious target in 79.5 per cent of cases; the equivalent figure for the Benefits Agency was 71 per cent (NAO, 1995d:14).

One of the principal ways in which the quality of service was to be improved was through the ‘whole person’ concept, and the concept has since been used to justify claims of future (Treasury-driven) cuts in the 70,000 staff (The Times, 11 August 1992). 
But such benefits had a mirage-like quality, as the agency realised anew that to operationalise the concept fully would be a huge and expensive task, given the existing configuration of hardware and software within the agency. Other organisations shared the perception of systems staff (noted earlier) that the integration of staff expertise remained unimplementable: ‘We remain unconvinced that current or envisaged technology will allow any one person to assess all benefits. This view is shared by voluntary organisations working with people claiming benefits’ (NUCPS and CPSA, 1991:4). When the much publicised ‘POCL’ contract to introduce computers and payment cards into all post offices was set back in 1997, the supplier, ICL, blamed a lack of communication between Benefits Agency systems: ‘The Benefits Agency has to turn lots of little databases into one database. What we are doing is dependent on them doing that’ (ICL spokeswoman quoted in Kable Briefing, January 1998).

The failure of the ‘whole person’ concept illustrates how the history of decisions (or the side-stepping of decisions) over information technology policy can have a critical bearing on future policy decisions. Future rationalisation of the benefits system relies on the concept being implementable. Further innovations rely on equally problematic technical operations. The merging of the tax and social security systems has been mentioned on several occasions. A document outlining the possibilities of such a move (Clinton, Yates and Kang, 1994) makes only passing reference to the computer systems maintained by the two agencies, yet a previous Director of ITSA has stated publicly (at the London Business School, July 1990) that the merging of the two systems would be ‘physically, technically and conceptually impossible’ without complete redesign, due to the way the Operational Strategy systems were designed.

Conclusions

The DSS opted for a major computerisation project. It must be viewed as a considerable feat to have installed machines in all offices; indeed, a former permanent secretary once observed that computerising Social Security benefits would take more lines of code than NASA needed to put a man on the moon (The Times, 16 February 1995). However, the DSS has not achieved any of the other objectives it set itself. Cost was the objective accorded priority throughout the life of the project; therefore, it is primarily the cost-effectiveness of the project by which its success must be measured. Yet the evidence presented here suggests that the project will never be cost-justified. Improvements in quality are restricted to increases in turnaround: additional projects to provide new facilities for clients were dropped from the plans at an early stage. Job satisfaction for staff was given little attention as an objective and hence had little possibility of materialising: there is plenty of research to suggest that an increase in job satisfaction is not an automatic result of computerisation. 
The success of the Operational Strategy systems therefore rests on the extent to which they established a solid technological base, with the organisational capability to modernise further and to build on current expertise. But as the failure of the ‘whole person’ concept demonstrated, the computer systems resulting from the Operational Strategy do not bring the Benefits Agency much closer to being able to use information technology for a strategic approach to policy innovation in the future than it was under the manual system. Decisions over information technology policy made 15 years ago have a critical bearing on future policy options. Information technology decisions currently under discussion, that is, whether the investment to integrate systems will be made or whether a series of superficial ‘add-on’ fixes will be applied to the systems in place, will have a similar bearing on policy decisions in future decades.

Although the Benefits Agency can claim no measurable cost benefits from computerisation, it does not seem to have used any of the wide range of careful pre- and post-investment evaluation practices among the most advanced users of information technology across a spectrum of service industries found by Quinn (1992). Most used a three- or four-level classification system which included ‘cost saving’ or ‘new product’ investments, ‘infrastructure or required’ investments, and ‘strategic’ investments. The first is widely regarded as being straightforward to evaluate: ‘Most managers were interviewed and could easily illustrate that these early investments had been dramatically profitable in reducing unit costs’ (Quinn, 1992). When companies move on to infrastructure investments, for example core telephone, fibre-optic, satellite interconnection, record-keeping and transaction-processing systems, justification becomes more difficult. And when firms undertake strategic investments, which change the firm’s basic position in a marketplace or ensure its very viability, it becomes extremely difficult to make precise calculations of the specific causes of changes or their impacts.

It seems that to justify its expenditure on computerisation, the Benefits Agency would need to show that an infrastructural or a strategic investment was made. However, if the Benefits Agency was trying to make an infrastructural investment, the constant pressure for cost reduction throughout the life of the project has worked against this (never stated) aim. By employing a changing cast of contractors throughout every phase and level of the project and by moving all the technologically experienced staff into another agency, technical knowledge has been marginalised rather than brought to the heart of the organisation. By relegating improvements in quality and new functionality (with the aim of giving clients a better service) to the bottom of the list of objectives of computerisation, neither can the Benefits Agency plausibly be seen to have made a strategic investment. 
In a climate that pushes the Benefits Agency into organising itself like a private sector company, it must find justifications for computer systems without concentrating primarily on often illusory cost savings. The creation of flexible, responsive computer systems which offer potential for policy innovation in the future seems the most likely justification. However, the Agency will still have to contend with the modernist tendencies of policy-makers, who assume that computerisation will automatically bring measurable cost benefits. The DSS’s experience shows how UK government departments can carry out large-scale projects with very little intervention from central agencies, apart from the Treasury, which exerts considerable influence to ensure that cost savings become the prima facie justification for projects of this kind.

The Operational Strategy is a case of changes which go far beyond incremental ones. Collingridge and Margetts (1994) have argued that the Strategy illustrates how computer projects of this kind can satisfy the conditions of ‘inflexible technology’, characterised by a long lead-time, large unit size, capital intensity and infrastructural requirements. Its failure to deliver its promised benefits to the original choosers has been taken as a corroboration of the normative theory of incrementalism, holding that all decision-makers get better returns (regardless of their interests) from changes that are short-term, local and self-co-ordinating, rather than long-term, wide-scale, and co-ordinated by some master plan from the centre.

The lack of incremental approaches in the project design seems likely to introduce a measure of inertia into policy decisions of the future, deriving from the introduction of an ‘administrative inertia’ similar to Rose and Karran’s ‘political inertia’ account of tax policy development (Rose and Karran, 1987; Rose and Davies, 1994). In the future, computing and policy innovation in the Benefits Agency will be shaped by the systems in place and the investment already carried out. Rose and Karran (1987:100ff.) define the characteristics of inertia policy-making as spanning a long duration, with extensive organisational commitments, decisions difficult to reverse, and unforeseeable consequences, with the scale of change large and potentially destabilising. Conversely, incremental policy-making results in year-to-year planning, with individual choices, more easily reversible decision-making, short-term consequences and a small but regular scale of change (Rose and Davies, 1994:32).

Decisions taken over the Operational Strategy appear to fall within the former category rather than the latter. The project took place over a long duration, involving the type of planning that the DSS would never have embarked upon before computers became widespread. The project involved an organisational commitment to modernisation, the accumulation of many individual commitments, rather than disjointed and serial choices. Reversibility was evidently never an option; it is impossible to tell what disputes may have taken place at the technical level, but there was never any question of abandoning the project. The consequences of the project would have been difficult to foresee ten years ago; the importance of developing integrated rather than benefit-specific computer systems was never highlighted as an important, policy-related decision. 
Given the reliance of future policy change on the information systems in place, this ‘administrative inertia’ seems likely to result in a policy inertia. Already, the time taken to implement some types of policy change has increased. The technological infrastructure of the Benefits Agency works against, rather than in favour of, the type of innovations desired by policy-makers, such as one-stop shops, the ‘whole person’ concept and electronic service delivery. This conclusion does not indicate that such innovations are impossible, merely that they are obstructed rather than facilitated by the information systems developed through the 1980s.

4 Systems modernisation in the US Social Security Administration

This chapter investigates computerisation in the US agency with responsibility for delivering social security benefits: the US Social Security Administration (SSA) within the Department of Health and Human Services (DHHS). The Social Security Administration succeeded the Social Security Board in 1946. In 1953 it was transferred to the Department of Health, Education and Welfare, remaining within Health and Human Services when President Carter split off the Department of Education. From the 1970s the SSA argued the case for its own independence, a case that was finally won under the Clinton administration in 1994.

The SSA administers a national programme of contributory social insurance whereby employees, employers and the self-employed pay contributions that are pooled in trust funds. For social security purposes the US is divided into ten regions, each headed by a Regional Commissioner. A nationwide field organisation of ten regional offices, six service centres and over 1,300 local offices guides and directs all aspects of the cash benefit programme operations of the SSA. In 1994 the SSA employed 64,000 staff. The major programmes administered by the SSA by the 1990s were:

• the Federal Old-Age, Survivors and Disability Insurance Program (title II), which provides income to individuals and families when workers retire, die or become disabled;

• the Supplemental Security Income (SSI) Program (title XVI), which provides a federally financed floor of income for aged, blind or disabled individuals who meet income and resource requirements.

Like the UK Department of Social Security, the SSA was regarded at the beginning of the 1970s as the most experienced departmental user of information technology in the US federal government, but by the end of the 1970s it was experiencing major problems. During the 1980s the SSA undertook a major programme of upgrading its computer systems, the Systems Modernization Project (SMP). 
During 1994, expenditure on the SSA's main benefits (the Federal Old-Age, Survivors and Disability Insurance Program and Supplemental Security Income) was $265,000 million, distributed to 45 million beneficiaries. The percentage of benefit expenditure spent on administration was 1.5 per cent (compared with 5 per cent in the UK Benefits Agency), while the percentage of administration expenditure spent on information technology was 4.6 per cent (compared with 19 per cent in the UK Benefits Agency). These figures suggest that the SSA's administration costs in 1994 were considerably lower than those of the Benefits Agency, and that its information technology costs represented a considerably lower percentage of administration costs. Such data must be used cautiously; there are incalculable but crucial differences in the complexity of the benefit systems, the location of clients and the accounting methods of the two governments. But at first glance these figures suggest that, while the SSA deals with five times as many beneficiaries as the Benefits Agency, it is achieving greater administrative efficiency from its information systems (DSS, 1993; SSA, 1994:ES10–ES11; ITSA, 1995; OMB, 1995).

The 1970s: a period of crises

The SSA was an early pioneer in using automated systems, attaining a reputation for management excellence during the 1960s: 'The SSA had been an early and successful user of automated data-processing techniques' (Derthick, 1990:184–5) and 'the agency's information systems were held out as a model for other automated data-processing users' (GAO, 1991a:2). In 1971 'systems improvement "saved" the equivalent of 2,022 employees and $19.9 million for SSA' (OTA, 1986:96), under pressure from the Revenue and Expenditure Control Act of 1968 and President Nixon's insistence that total federal employment be reduced by 5 per cent.

During the 1970s, however, the SSA faced a series of problems. The first and most important was the implementation of SSI, a federal programme for supporting the income of needy, blind, aged or disabled persons. In 1972 legislators gave the SSA 14 months to implement SSI.
This was an enormous task involving the transfer of 3 million beneficiaries from 1,300 state and local offices to federal rolls; the implementation of complex eligibility and benefit criteria; the automation of the new process using data from all the individual states; the hiring and training of 15,000 new employees; and the opening of new social security field offices. After 14 months the SSA was not ready for the task. Payments were made in error and many people failed to receive payment at all. There were widespread public and Congressional charges of unfair and insensitive treatment, caused by aggressive efforts to recover debts from beneficiaries who had received overpayments and by moves to remove disabled clients from the beneficiary rolls. Fifteen years later, SSI was 'still remembered inside the agency as a disaster' (Derthick, 1990:5) and 'some employees and some outside observers think that morale at the agency never fully recovered' (OTA, 1986:105).

Memories of the SSI disaster were strongly related to the SSA's computer development, both in the effect this had on the agency's confidence in its computer systems and in the effect such high-profile problems had on the agency's subsequent reputation for handling information technology. The legislative changes had required the SSA to establish a complex communication and computer system, and it was afterwards judged impossible to have developed the necessary software in the time available (House Committee on Ways and Means, 1976:9–10). Indeed, sending any cheques out at all in 1974 was later pronounced a 'remarkable achievement' (SSI Study Group, 1976:201). Subsequent reviews also revealed an over-optimistic assessment of the SSA's capabilities during the 1960s: 'policy-making officials credited its [SSA's] capacity to perform extraordinary feats' (Derthick, 1990:185). After SSI it appeared that the SSA's previous reputation had been exaggerated: 'as of the early 1970s the SSA was not in fact on the frontiers of data-processing' (Derthick, 1990:186). A report of the Office of Technology Assessment concluded in 1986 that

SSA's problem was not that its computers were 'old' or 'obsolete'. It was that the workload had become too large, too complex, and too dependent on automated processing to be handled by SSA's existing workforce with existing technology. In this situation every addition to the workload became a potential crisis.
(OTA, 1986:14)

Furthermore, the saving of staff through information technology and the resource limitations of 1969–72 were judged to have left the agency 'in what turned out to be a seriously weakened position for the expanded operating demands and the reduced ADP support that were to unfold in the middle to late 1970s' (OTA, 1986:97). In addition to the introduction of SSI, the SSA was faced with 15 new laws in the period 1972–81. Changes were made in the Retirement and Survivors Insurance Program and the Disability Income Program, four of them significantly altering the determination of entitlements and benefits.
Following the tradition begun in the 1930s:

Presidents and Congress continued to reject the concept of universal flat benefit payments such as many other nations used, with minimum administrative complexity, in favour of a mixed insurance and welfare system, with highly complex entitlement and benefit formulas.
(OTA, 1986:103)

After 1972 benefit levels embodied both automatic cost-of-living adjustments and periodic adjustments such as the Social Security Amendments of 1980, the Reagan debt collection initiative of 1981 and the Omnibus Reconciliation Act of 1981, all of which involved reprogramming of computer systems. To implement the annual cost-of-living increase required changes in 600 software programs because, as written, they could not accept four digits; the 1980 Disability Amendments involved changes to over 880 programs. The complexity of such changes was continually unanticipated and underestimated. The time provided by Congress for the SSA to respond to legislation proved 'again and again to be inadequate' (OTA, 1986:104).

The 1980s: continuing crises and plans for change

The problems of the 1970s lasted well into the 1980s, with the SSA suffering from a crisis of confidence in its ability to manage information technology, both inside the agency and among the oversight community. By 1982 'the combination of inadequate personnel and inadequate or even counterproductive ADP systems were compromising basic delivery of services' (OTA, 1986:108). According to the SSA itself, its systems were faced with a 'disaster of epic proportions' (GAO, 1991a:2). The SSA had a 'hodge-podge of software programs developed over a 20-year period'; consequently, any one of a 'thousand possible small foul-ups' could cause the whole system to come down (OTA, 1986:121). By this time the hardware was also obsolete, outdated and inadequate, and mostly no longer supported by manufacturers. Each month 150,000 tapes had to be handled manually. Staff operated systems on an overtime basis to process critical workloads, while backlogs continued to mount (SSA, 1982:1–3). The posting of annual wage reports was four years in arrears and there were numerous complaints about the time it took to issue a social security card. To run a cost-of-living increase took three weeks of solid processing power. The mean time between systems failures was only 57 hours, and a failure of one of the mainframes would bring the whole system down. Administrative costs rose by 35 per cent from 1980 to 1982, while the total workload dropped by 25 per cent (GAO, 1987:21).

In 1982 the SSA produced a plan for change, the Systems Modernization Plan, which was intended to 'restore excellence' in its ADP systems.
The plan consisted of four parts: the development of new computer software, database integration, data communications and a computer capacity upgrade. The improvements were to take place over a five-year period at an estimated cost of $479 million, although by 1986 the projected costs through to 1988 had risen to $643 million, including some additional improvements (OTA, 1986:49). OMB proposed a plan to reduce the workforce by 17,000 full-time equivalents, or 21 per cent of the 1984 staff levels, by 1990.

The plan was to divide the modernisation effort into three stages. The first stage, labelled the 'survival phase', was 'to deal with the systems crisis and to bring control and stability to the systems environment' (SSA, 1984:1). The second, the 'transition phase', would position the SSA for 'transition to modern automatic data-processing operations' (SSA, 1984:5). These phases were each to take 18 months and to be completed by 1985. The 'state of the art' phase, the final two years, would develop the new software, databases, communications and the hardware configuration to facilitate final testing of the redesigned system. By 1988, with this phase complete, SMP would evolve into a continuing five-year planning and enhancement cycle. The emphasis was on the necessity to proceed in an incremental and evolutionary manner while the existing system was being stabilised to survive the crisis: 'The plans are made up of manageable increments that allow for the evolutionary development of the new system without abandoning the old system and starting over again' (SSA, 1984:3).

During the 1980s some limited benefits of the systems development started to appear. The agency acquired significantly greater storage capacity through hardware upgrades and converted much of its data stored on tape to direct-access storage devices. It also started to move some of the batch processes to an on-line system accessible to computer terminals throughout the country. Processes such as those supporting the Title II programme were automated, and nearly 40,000 computer terminals were connected directly to the National Computer Center (NCC) in Baltimore for processing claims and queries. In 1988 the agency started a national toll-free number which required on-line access to NCC computers for responding to public telephone queries. The SSA created around 55 'telecentres' with a 1–800 number, and calls were routed via FTS2000 to the least busy teleservice centres. Around 80 per cent of calls could be handled to completion, with the remaining 20 per cent referred to the local district office. An appointment referral database allowed appointments to be made over the telephone. Most field offices could now conduct initial claims interviews by telephone; any office would be able to call up information about a client, regardless of where the original claim had been processed. Paper files still existed, but there was sufficient electronic information for them to be necessary only in the event of a major systems failure.
By 1994 it took ten days to receive a social security card (where in 1982 it had taken six weeks), 24 hours to process an annual cost-of-living increase (rather than three weeks), and emergency payments were received in five days (instead of 15 days in 1982) (SSA, 1994:1–4). However, the price of achieving such benefits was higher than originally expected. From 1982 to 1990 the SSA spent over $4 billion on operating and modernising its systems under the auspices of the Systems Modernization Project (GAO, 1991a:2).

By the 1990s the arrangement of hardware was still entirely centralised: all systems users had to communicate, often over great distances, with the NCC computers in Baltimore before they could perform basic functions. The system was costing around $400 million a year to operate and maintain. Second only to personnel costs was telecommunications, which amounted to about $122 million in 1990, including about $58 million for on-line data transmission and the toll-free telephone service. Each time an employee in one of the SSA's field locations used a computer terminal, a series of telecommunications transactions had to be made between the terminal and the NCC computers in Baltimore. These transactions (over 11 million a day) placed a significant demand on both the NCC computers and the telecommunications network.

Organisation of information technology in the Social Security Administration

The Systems Modernization Project was affected by inconsistency in top leadership in the SSA throughout the 1980s. Between 1977 and 1987 the SSA experienced a succession of leadership changes: seven Commissioners or Acting Commissioners, with an average tenure of 17 months. The agency was managed by Acting Commissioners for almost half that time. These short tenures, along with Commissioners' differing priorities and management approaches, according to senior SSA executives interviewed by GAO, 'hampered SSA in its dealings with DHHS and OMB' and 'contributed to SSA's computer modernization problems, interrupted actions being taken on major initiatives, and resulted in disruptive and harmful reorganizations' (GAO, 1987:41). New leaders tended to bring their own ideas about how the SSA should operate, and the SSA underwent a series of reorganisations (in 1979, 1983, 1986 and 1990). The Office of Systems underwent concurrent changes, with six different leaders between 1976 and 1987, the larger proportion of whom lacked any substantial technical expertise. GAO criticised the SSA for not assigning overall responsibility for managing SMP to any one person other than the Deputy Commissioner for Systems, who was responsible for managing all other computer-related activities (GAO, 1987:51).

However, continual reorganisation had one beneficial, if unintended, result for computerisation: the SSA came to be subdivided along functional rather than programmatic lines. In 1979 the SSA was rearranged in a functional structure, with ten offices each headed by an Assistant Commissioner. SSA operations were thus grouped around general administrative functions rather than around major programmes or benefits, so that the same types of administrative procedures would be conducted by a specialised unit for all SSA programs.
This arrangement ran counter to 40 years of SSA experience, dividing up program segments even more than the 1975 reorganisation had, and scattering them through functional offices. The change made it difficult to account for the status of various benefits but 'paved the way for agencywide automation and system redesign in the 1980s' (OTA, 1986:115). The Office of Systems was able to bring all systems to work together and to play an integrating role, liaising with each of the functional divisions, rather than systems analysis being carried out within the programmatic bureaux as had been the case before 1979.

Throughout the 1980s the SSA Deputy Commissioner for information systems oversaw a massive systems development team: six major offices employing a total of nearly 3,000 personnel. An Office of Systems Requirements, responsible for interpreting policy and user needs, contained 550 staff; an Office of Systems Development of 750 staff worked out how to apply technology to the requirements. The maintenance of the old systems during the development of SMP required a Systems Operations Department of nearly 800 staff. Smaller offices dealt with information management and telecommunications. An Office of Systems Planning and Integration produced the Information Systems Plan (ISP), an annual document detailing how the Office of Systems intended to support the aims of the agency outlined in the 'Strategic Plan' for the whole of the SSA (SSA, 1991b; 1994). Originally this plan laid out the progress of SMP (SSA, 1982), describing both what systems had been developed under the auspices of the project and what was planned for the future. Later in the 1980s the ISP was restricted to a more precise description of what computer development the SSA would perform in the coming year, and was used as a 'workplan' by the systems division rather than as a political appeal for funds from Congress, as in the early days of SMP.

A further component of the SSA's computer-related organisation was an Office of Information Resource Management (OIRM), consisting of 12 staff in isolation from the massive Office of Systems and reporting to an IRM official who reported directly to the SSA Commissioner. Such an arrangement was reminiscent of the requirements of the US Paperwork Reduction Act of 1980, but the Act mandated the creation of an IRM official only at the core departmental level (i.e. in Health and Human Services), and therefore the SSA's creation of this office was voluntary. The OIRM 'prepared the Agency's annual IRM Plan, promulgated IRM-related policies and procedures and reviewed Agency compliance with applicable IRM regulations' (SSA, 1991a:54). The office saw its own role as 'integrating, coordinating and overseeing IRM, ensuring that the management of IT fits into the agency's direction'. However, the IRM plan produced by the OIRM was only a formal duplication of the Information Systems Plan. While the latter was recognised as a working document throughout the SSA and received widespread attention, mention of the OIRM plan met with bafflement from SSA officials outside the OIRM.
The OIRM also produced a report entitled Begin to Turn SSA into a…Paperless Agency (SSA, 1992a), but systems officials viewed this initiative with scepticism: 'If we implement a paperless office it won't be through George Failla's Office.' The OIRM seemed to be viewed as a 'withered arm' of the SSA, lacking any of the necessary resources to influence information technology policy.

Contract relationships

Traditionally, the SSA had a policy of using in-house personnel to carry out information technology work where possible. The agency tried to resist OMB's circular A-76, a document requiring that any function that was not inherently governmental be made available for tender by private companies. The SSA's management maintained in 1987 that applying A-76 to SSA operations would not necessarily result in contracting out these services, because 'the Systems Modernization Project has, or will eventually, make the agency's performance so highly efficient that SSA could become the lowest possible bidder' (OTA, 1986:70). Such optimistic predictions did not always materialise, given that over half of SSA information technology costs went to outside contractors, but the agency retained a policy, unusual for a US federal agency, of developing in-house expertise where possible. The agency received support for this view from the Office of Technology Assessment, which suggested that the question of whether SSA operations should remain in the public service or be contracted out should be seen as a matter of social policy rather than as a narrow question of competitive bids (OTA, 1986:71).

Prior to SMP, the SSA's awarding of contracts had been plagued by legal battles and accusations of fraud. Up until 1965, the SSA had developed a 'special relationship' with IBM (OTA, 1986:11). As more and more IBM computers were installed at the SSA, the compatibility of new computer acquisitions with existing operating systems became a key requirement of procurements, leading to the adoption of still more IBM computers and further reliance on the company. IBM provided 'first-class' briefings, plans and justifications with which to approach Congress on expenditures (OTA, 1986:99). In 1978, at the request of the House Government Operations Committee, the GSA put a hold on SSA computer acquisitions until they could be reviewed; subsequently 300 out of 500 new orders were cancelled (OTA, 1986:120).

In 1981 the SSA's largest information technology contract so far (for $115 million) was awarded to Paradyne Corporation, which supplied the agency and its field offices with approximately 1,850 programmable micro-computer systems, a vital part of the first stage of the SMP. All 16 systems failed to complete ten days' testing in 1981, and the company did not begin to meet performance requirements consistently until 1983. In 1983 the Securities and Exchange Commission filed a civil suit against Paradyne, which had rigged the tests using dummy equipment made by competitors and had sold the SSA an untested prototype.
Problems with procurement continued to plague the SSA during the 1980s. By 1982 OTA believed that procurement practices were at best inept and that IBM was the only competitor. Furthermore, while some of these problems with procurement could be attributed to procurement regulations, 'SSA's poor procurement procedures rather than the legal requirements for maximum competition caused it serious troubles and opened the way both for defective systems and for fraud and abuse by SSA officials' (OTA, 1986:16).

However, the focus on the SSA's procurement procedures was in part due to the agency's innovativeness in overcoming the barriers created by the legislation of 1984 (the Competition in Contracting Act, described in detail in Chapter 7) to introduce competitiveness into the procurement process. This legislation allowed unsuccessful vendors to 'protest' against a contract award, through a legal procedure enforced by the GSA Board of Contract Appeals. In 1987 the SSA advertised a contract for thousands of personal computers; 35 firms put in bids for the contract. After winning three time-consuming pre-award protests, the agency was ready to award the contract, but several companies pointed out that their product lines and prices had changed during the 17 months of protest and evaluation, so the agency delayed the contract again to give the companies time to update their bids. The contract was awarded in September 1987, after which one company made a protest that was not settled until December. The personal computers eventually arrived in March 1988, two years after the original request for bids. After this experience the SSA decided to protect itself by assuming there would be a protest with every single ADP award. The agency built a protest file from day one of every tender and added 90 days on to all project timetables to allow for delays caused by protests. The Assistant Commissioner for Systems, Rene DiPentima, observed ruefully (Washington Post, 16 July 1988): 'as a result, we have won all but one, but it's a hell of a way to do business'.

Relationship with central agencies

The Systems Modernization Project, while welcomed by Congress as a response to the crises of the early 1970s, was subject to 'a small army of people' (OTA, 1986:58) committed to monitoring and auditing the SSA to assist either Congress or the Administration in the overseeing function. GAO produced 19 reports on the SSA between 1978 and 1985, maintaining a continual presence in the organisation. In 1987 GAO carried out a major management review of the SSA. The resulting 250-page report was largely critical, especially of the SSA's implementation and planning process, as was a further report in 1991, which criticised the SSA for having 'no overarching, guiding vision' (GAO, 1991a:3). The SSA also received criticism from the House Government Operations Committee when Jack Brooks was at the height of his chairmanship; one official described how the committee held the SSA in a 'gridlock', when expenditure on information technology was 'literally frozen', with even replacement of equipment prohibited.
The Office of Technology Assessment produced a detailed and understanding report on the SSA in 1986 (OTA, 1986), which analysed the SSI disaster and attempted to assess the basic strategies of the SMP. OTA officials, in contrast to other members of the oversight community interviewed, were markedly supportive of the SSA's efforts.

OMB's role was exercised almost entirely through its budgetary functions, principally by insisting on reducing the debt carried by the SSA due to erroneous payments and by demanding staff reductions. The SSA managed to negotiate downwards an original staff reduction target of 19,000 in three years to 17,000 in six years (by 1990). In the early 1980s, in the Reagan/Stockman era, OMB required the SSA to identify and contract for administrative functions that could be more efficiently carried out by the private sector, in opposition to the SSA's preference for in-house work noted earlier. As GAO remarked in 1987: 'These were major operational changes that have required SSA's management to take a reactive rather than pro-active approach to design and implementation' (GAO, 1987:29). The Office of Personnel Management (OPM) policies and procedures on salary levels also inhibited the SSA's information technology policy, making the recruitment and retention of computer specialists a problem for the SSA throughout the 1980s and early 1990s (SSA, 1994:7–16). From 1981 GAO reported the SSA's serious shortage of computer specialists and linked this shortage to the fact that OPM-established federal salary rates were not competitive with those of private industry.

The SSA also had to contend with its parent department, the DHHS. The IRM chief of the DHHS saw his role as bringing together diverse groups under the same umbrella, with standardised software, hardware and communications, and considered that the DHHS was very decentralised regarding information technology: 'The operating divisions do their own thing' (Government Computer News, 27 April 1992). However, at several points through the course of SMP, the Office of the Secretary in the DHHS joined the oversight legions, channelling government-wide circulars from, for example, GSA and OMB, and adding considerations of its own. The Inspector General of the DHHS found numerous deficiencies in the Claims Modernization Project during 1983 to 1984 and in 1985 criticised the SSA for wasting over $1 million on the procurement of useless software.

However, the Systems Modernization Project illustrated the inherent difficulties involved in the oversight of major modernisation plans. In the short term it was very difficult to compare the SSA's performance with that of several years before; as work was reorganised and automated, measures of performance were necessarily redefined.
In the longer term, oversight became more difficult because administrative decisions became more technical, involving issues of technological capability, multi-year investments and systems management strategy that laymen (which included most Congressmen and their staff) found difficult to understand (OTA, 1986:58–9). The OTA report of 1986 and GAO officials interviewed raised strong questions about the reliability and completeness of the information that the SSA provided to GAO, Congress and the public about its progress in SMP implementation. In 1991 the SSA's information systems plan outlined a system connecting personal computers in the field offices to computer centres; it would also integrate voice, data and video communications. The plan set forth a total of 23 initiatives. It was criticised by GAO for containing no cost estimates: consequently GAO obtained cost data for four of the initiatives, the capital costs of which were estimated to be nearly $1 billion, but could obtain no cost data for the remaining 19.

Policy considerations

Many changes to social security law have provided very short implementation lead times, hindering the SSA's ability to develop and test computer systems properly to automate the changes (GAO, 1987:27). The SSI, noted earlier, was an obvious example, but there were many others. For example, the Social Security Disability Amendments of 1980 had 14 provisions with lead times ranging from 23 days to 18 months, only one of which was automated by the effective date. The Omnibus Budget Reconciliation Act of 1981 had 13 provisions with lead times ranging from 17 days to 16 months, only three of which were automated by the effective date (GAO, 1987:27–8). The Act contained a provision to round benefits to the lower whole dollar, with an effective date of 18 days. In fact this provision could not be implemented by any means for nine months, because no manual process was feasible. Modifications had to be made to all Title II automated operations, including the interface with the SSI and pensions computer systems. Even GAO admitted that 'service to the public could be improved and SSA's resources could be more efficiently used if SSA were given sufficient time to properly automate legislative changes' (GAO, 1987:130). GAO recommended that Congress should consult with the SSA on the time needed to automate legislative requirements promptly, efficiently and effectively, and should consider the information provided by the SSA in establishing effective dates.

The SSA attempted to deal with legislative change through its Division of Programs, Policy and External Affairs, which interpreted legislation as it related to service delivery systems. This office liaised with both Congressional staff and the Office of Systems Design and Development, which researched the implications for systems design. When legislation had short effective dates, the Office of Systems began its work based on oral, informal statements of policy rather than waiting for a formal statement (GAO, 1987:28). When the SSA issued final operating instructions that differed from the initial informal policy statements, the Office of Systems would often have to redo its completed software development work.
Final operating instructions would often be delayed by high-level negotiations over policy. In the case of SSI, 33 policy issues delayed the final development of the ADP systems requirements and specifications, which contributed to delays in automation of the change. Officials agreed that it would be useful to know of high-impact legislation earlier than they did, and when told of the 'Systems Impact Reports' that legislators in Sweden and Holland are mandated to produce, they commented, 'we could only wish for such a thing'.

Legislation passed in the 1980s required the SSA to classify a type of claimant that it had not previously uniquely identified, and involved 'finding' seven million people who were hidden in its files: 120 million records for Title II and Title XVI had to be identified, updated and matched across the two benefits. The task involved tremendous cost and time, as the SSA had to mail questionnaires, often more than one to the same person, because it was unable to identify which records were duplicated across the two systems. This example illustrates how an inadequate high-level view of what policy changes will mean to a system can exacerbate the SSA's problems. A senior systems manager might have said, 'We've been doing repayees for 20 years, of course we can do that', without being aware of the actual implications. Thus a meaningful impact statement would have to take the form of a detailed cost-benefit analysis. In any case, under current arrangements, whether feasible or not, when legislation is passed, the SSA has to implement it. The Office of Systems does not get involved in negotiation, and those interviewed saw no possibility of such a process within the legislative framework.

It must be noted that the potential for saving in an organisation as vast as the SSA is enormous. A change implemented in the 1990s required 66 programming hours but saved 80 personnel, so it would have been carried out whatever the implications for the systems division. Rounding legislation passed in 1981 came at a time when Congress was searching for ways to supplement the trust funds. So although every computational program had to be changed to implement the legislation, a saving of on average 50 cents on each of the 34 million cheques paid a month amounted to $17 million a month: and 'that is a lot of systems people, for ever', as one official put it. So whatever the cost, the pay-back period tends to be short and the pressure for fast implementation will be strong, no matter what the implications for SSA software.

The legislative process for social security is additive, further complicating the SSA's task. At the IRS and at other agencies there is a statute of limitations, meaning that after a certain period of time legislation no longer applies. The SSA must continue to take account of all legislation passed since 1937. An example of the problems this requirement can cause is the rounding legislation passed in 1981. Before then, the SSA's computational formula specified a point in the calculation where the amount would be rounded up to the nearest dime.
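The additive, date-conditional computation described here, in which every program must carry both the pre-1981 rule (round up to the nearest dime) and the post-1981 rule (round down to the lower whole dollar) for ever, can be sketched as follows. This is purely an illustrative sketch in modern Python, not the SSA's actual mainframe code; the exact cut-off date is a hypothetical stand-in, since the text gives only the year 1981. The pay-back arithmetic quoted in the text is reproduced at the end.

```python
from datetime import date

# Hypothetical cut-off: the text says only that the legislation passed in 1981.
ROUNDING_CHANGE = date(1981, 9, 1)

def round_benefit(amount_cents: int, computation_date: date) -> int:
    """Apply the rounding rule in force on the computation date.

    Because social security legislation is additive, both rules must be
    retained indefinitely and selected by date, as the text describes.
    """
    if computation_date < ROUNDING_CHANGE:
        # Pre-1981 rule: round up to the nearest dime (10 cents).
        return ((amount_cents + 9) // 10) * 10
    # Post-1981 rule: round down to the lower whole dollar (100 cents).
    return (amount_cents // 100) * 100

# The pay-back arithmetic quoted in the text: an average saving of
# 50 cents on each of 34 million monthly cheques.
monthly_saving = 34_000_000 * 0.50  # $17 million a month
```

Each further legislative alteration adds another date-guarded branch of this kind, which is how the software becomes progressively more complicated to maintain.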
After the day the legislation was passed, the amount was to be rounded down for all computations. Every program that did computations had to be changed and remain changed, using code that indicated ‘if it is a computation prior to this period, do it this way; if it is a computation after this period, do it that way’. Thus with every legislative alteration, the software has become more complicated. In a desperate attempt to speed up automation of legislated program changes, the SSA took several software development shortcuts over the years, including ‘patching’ existing software rather than rewriting it, inadequately documenting software changes and incompletely testing such changes. Over time these shortcuts rendered the SSA’s now undocumented and outdated software difficult to maintain and modify, causing more problems for the future.

Assessment of information technology in the Social Security Administration

In 1987 GAO (1987:3–4) agreed with the SSA that the survival phase had been completed more or less successfully with considerable hardware improvements, but it repeated concern (first expressed in 1985) over the

Computerisation in the US SSA


SSA’s progress in upgrading its software. In commenting on the SSA’s April 1986 semi-annual briefing on the SMP, GAO again concluded that the SSA had not shown a lot of progress in the software area, citing its inability to follow a consistent course of action. One problem frequently mentioned by oversight agencies was the SSA’s decision to continue to use 65 to 70 per cent of the 12 million lines of code then in use, without carrying out any studies to validate the estimate (OTA, 1986:138): the decision was ‘drawn out of thin air’. Ironically the distinguishing features of the SSA’s approach, of which the agency is proudest, have received considerable criticism from oversight agencies. First, GAO has consistently called for the SSA to make more innovative use of technology. Yet the Office of Systems insists that all hardware must be custom built and field proven. GAO in its 1991 report commented that

the DHHS disagreed with our suggestion regarding the use of scanning machines to process applications for new or replacement social security cards.… We were trying to point out to the agency an example where innovative technological solutions could be applied to an area where we observed a labor-intensive, paper-driven process. (GAO, 1991a:27)

Given the still massive costs associated with plans to use scanning machines for these purposes (see Chapter 5 on the Internal Revenue Service, who have made such plans but are already shelving them), the SSA’s reluctance may have been prudent. The SSA’s reaction to the idea of smart cards was similarly cautious, arguing that all the information held on the card would have to be centrally retained in case of loss or fraud and therefore savings would be small in relation to the cost. One official described smart cards as a ‘technological solution looking for a problem’.
Second, GAO continued to criticise the SSA for ‘incremental, piecemeal changes’ and the fact that many of its operations continued to be labour-intensive and paper-driven. In all the SSA’s published documentation, however, there is a proud emphasis on their incremental and piecemeal approach: the Information Systems Plan ‘provides for evolutionary rather than revolutionary changes…the ISP provides a methodical, incremental approach for addressing the Agency’s requirements’ (SSA, 1982); ‘The plans are made up of manageable increments that allow for the evolutionary development of the new system, without abandoning the old system and starting over again’ (SSA, 1984:3). GAO (1991a:3) criticised the SSA’s approach for having no ‘overarching, guiding vision’. They pointed out that the Systems Modernization Project had been revised four times in the five years after 1982, replaced in 1988 by a new plan known as ‘SSA 2000: A Strategic Plan’, subsequently replaced in 1991 by the Information Systems Plan, which was revised annually. GAO complained that none of the revisions


of the plan was in place long enough for its objectives to be completed. For the SSA, however, the ISP became a management tool for use on a day-to-day basis and they were proud that it kept changing:

The ISP, which supports the Agency Strategic Plan, is SSA’s long-range plan for managing its information systems. It documents important systems planning decisions, and as such, is a primary systems management tool from which shorter-range plans, budgets and tactical decisions are developed. In order to ensure that management decisions are clearly communicated, addenda to the ISP are issued as major changes are approved. The entire plan is updated and republished annually to ensure that an up-to-date, comprehensive reference is available to SSA managers and external monitoring authorities. (SSA, 1994:ES–1)

By 1994 the SMP was not deemed to be complete, so only intermediate costs and savings could be calculated. Changes in workload make cost savings due to computerisation difficult to assess. Furthermore, the SSA has been criticised by GAO for not making available data ‘to fully and accurately compute and compare the effects of work complexity, mix or operational improvements on staff needs’ (GAO, 1987:23). Such problems remained unresolved by 1991, when the SSA was unable to provide a figure representing either its investment in the SMP or its total systems costs since 1982 (GAO, 1991a:26). A figure of $4 billion was derived by GAO through analysis of DHHS records, representing total expenditure on both maintenance of existing systems and modernisation between 1982 and 1991. The DHHS stated in its comments on the report that of this figure $3 billion represented maintenance and staff costs (GAO, 1991a:22). Such estimates do not suggest that the SMP has been a financial success. In 1987 the SSA had projected costs of modernisation through 1988 to be about $643 million (GAO, 1987:135).
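GAO's scepticism can be roughly reconstructed from these figures. A sketch of the index arithmetic (indicative only: the projection covered spending through 1988, while GAO's derived total runs to 1991, so the two are not strictly comparable):

```python
# GAO's derived total for SSA systems spending, 1982-91 ($ billion),
# and the DHHS's own estimate of how much of that was maintenance and staff.
total_spend_bn = 4.0
maintenance_bn = 3.0

# The residual attributable to modernisation, set against the SSA's 1987
# projection of about $643 million for modernisation costs through 1988.
modernisation_m = (total_spend_bn - maintenance_bn) * 1000  # $1,000 million
projected_m = 643

overrun_ratio = modernisation_m / projected_m  # roughly 1.6 times the projection
```

Even on the DHHS's generous reading, modernisation spending ran well past the agency's own projection, which is why such estimates ‘do not suggest that the SMP has been a financial success’.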
Between 1980 and 1986 the SSA’s six most labour-intensive workloads (amounting to about 50 per cent of work years), including initial claims, declined in volume to the point where in 1985 they were 24 per cent lower than in 1980. There was a 6.5 per cent reduction in the work years associated with the six workloads. But the SSA’s overall administrative expenses increased by 51 per cent, 45 per cent for the expenses directly associated with the six workloads. Thus by 1987 unit costs of the SSA were above their 1980 levels (GAO, 1987:20). There has been a consistent element of ‘jam tomorrow’ about the savings from the SMP. In the 1990s the SSA adopted the practice of showing annually the projected savings and cost for the service level improvement portion of the Information Systems Plan, rather than referring directly to SMP projects. In 1991 the cumulative projected net ISP savings were predicted at $237 million with a break-even period of 3.5 years (SSA, 1991b:8–13). In 1994 the


Table 4.1 The SSA’s information technology budget versus other agencies, 1991

Source: (SSA, 1991b:8–6)

SSA claimed a break-even point would be achieved in late 1998, with cumulative projected net ISP savings of about $3.5 billion through the year 2005. It seems likely that by this time the SSA will have further savings to predict; it is noticeable that ISPs of the 1990s (SSA, 1991b, 1992b, 1994) do not refer directly to the Systems Modernization Project, although the project has never officially finished. On the other hand, administrative costs in the SSA remain impressively low. In 1994 they represented only 1.5 per cent of the total SSA outlays (SSA, 1994:ES–11): ‘This relatively small administrative overhead has held constant for decades’ (SSA, 1994:ES–11). Furthermore, as the introductory section and the above Table 4.1 show, information technology costs as a proportion of administrative costs are also low in comparison with the UK Benefits Agency or other agencies in the US. The changes have resulted in improvements in service, such as reducing the time taken to issue a social security card from 42 days to 10 days, reducing the error rate on retirement payments by 60 per cent, eliminating the wage posting backlog, and reducing the time needed for cost-of-living allowance calculations from as long as three weeks to about 24 hours. The initial claims process has been enhanced by conversion of files from tape to disk and development of new software. There is a new system to make fast payments in emergencies. In addition, quality has been improved through the introduction of the 1-800 number service, which allows claimants to request information, to make an appointment and to transact some business, such as reporting a change in annual earnings. The volume of calls grew from 31.6 million in 1989 to 53 million in 1990, and 61 million in 1991, but it was difficult to assess (without a customer survey) whether this increase reflected satisfaction with the teleservice or dissatisfaction with local office provision.
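The call volumes repay a moment's arithmetic: growth in teleservice demand was dramatic but sharply decelerating, a pattern consistent with either reading of the figures — pent-up demand being absorbed, or early enthusiasm levelling off. A quick check of the cited volumes:

```python
# 1-800 teleservice call volumes cited in the text.
calls = {1989: 31_600_000, 1990: 53_000_000, 1991: 61_000_000}

growth_1990 = calls[1990] / calls[1989] - 1  # about 0.68: a 68% jump
growth_1991 = calls[1991] / calls[1990] - 1  # about 0.15: growth slowing sharply
```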
GAO considered that ‘the evidence is inconclusive’ regarding the effect on service to the public. Union representatives and field office personnel said that service declined; they reported longer waiting times, a ‘less caring attitude’ on the part of employees and increased error rates. Whenever the SSA claimed that service had improved, GAO would counter that SSA


performance data were incomplete. For example, during the 1980s the SSA did not collect any data on waiting times for clients or on client satisfaction (OTA, 1986). From September 1984 there were nine surveys of how well the SSA served the public, the first two by GAO and the remainder by the Office of the Inspector General (OIG) in the DHHS. A 1993 OIG report (DHHS, 1993) found a decline in performance on the objectives for office waiting times and mail response time and no progress on five others, including referrals to related social service programmes, access to the national 800 telephone number, courteous service, understandable mail, and informing the public about SSA programmes. The satisfaction of disabled clients (48 per cent of respondents) was notably lower; while the satisfaction rating for non-disabled clients was 86 per cent it was 68 per cent for those who were disabled (DHHS, 1993:7). The SSA continued to respond to such concerns by asserting that it was ‘too early in the implementation of its strategic plan to reasonably expect significant progress toward its long-range goals’ (DHHS, 1993:ii). Commenting on the OIG report, the agency said that it had only recently been allocated sufficient resources to start implementation of the goals outlined in its Strategic Plan, although no clear indication was provided of when such goals would be achieved. While no cost benefits can be clearly attributed to the SMP, the SSA’s consistently low cost of administration remains the best advertisement for the Agency’s computerisation efforts.

Conclusions

The SSA’s approach to modernisation through computer systems has been indelibly marked by the history of information technology within the agency. The agency’s terrible experience of computing in the 1970s was undoubtedly a crucial factor in its choice of a far less radical option than the UK Benefits Agency, using an incremental process and taking as few risks as possible.
What remains to be seen is whether they are headed for another crisis, whether they have established a solid technological infrastructure that will allow them to proceed to the next stage of implementation, or whether they are locked into paralysis, unable to proceed for fear of more mistakes, and unable to keep up with technological developments. Ironically, what in the UK is called the ‘whole person’ concept, although not a stated aim of the Systems Modernization Project, has come close to reality in the US system. Claimants can receive advice about any benefit over the telephone and the systems used are not inhibited by benefit demarcation lines. Other comparisons also favour the SSA. Their systems are labour-intensive, but compared with those of other federal agencies or of the Benefits Agency in Britain (see the introduction to this chapter) they are accurate and relatively cheap to run. The ratio of administration cost to total expenditure by the SSA is noticeably lower than in the UK: 1.5 per cent in 1994 against 5 per cent for the UK. The system costs around $500 million


a year to maintain, considerably less in cost per claimant than its UK equivalent. However, the SSA cannot claim the credit for its administrative economy from Systems Modernization: as with the UK Operational Strategy, the project cannot be cost-justified. In realising few cost benefits from computerisation, the SSA and the UK Benefits Agency are not alone. In a study of 292 private sector companies, Paul Strassman (1990) found no correlation between levels of investment in information technology and sales growth, profits per employee, or returns to shareholders. Nevertheless, service companies have continued to invest heavily in information technology. Quinn (1992) sees the reason as straightforward, noting of both Strassman and Roach (1991) (another pessimistic student of return on information technology investment) that the type of macro measurements they use ‘cannot always capture the quality gains and—more importantly—the “opportunity costs” of demise that technology investments may avoid’ (Quinn, 1992:422). Executives evaluating company performance face the same problems, especially that of reflecting in internal productivity measures the fact that the entire institution—bank, hospital or airline—could not exist without its technology investments. Most government agencies are faced with a different conundrum. They both could and did exist before widespread computerisation. Before the 1970s, the SSA appeared to be deriving benefits for relatively modest expenditure on the systems they had—yet does not receive measurable cost savings from its current systems. The need to justify expenditure through ‘infrastructural’ or ‘strategic’ investments (noted in the previous chapter) becomes crucial, but government bodies are not encouraged by central oversight agencies to view expenditure in this way. The SSA has battled against modernist zeal and a barrage of reports from the oversight agencies. 
The Systems Modernization Project also, however, demonstrates the difficulties that oversight agencies face when scrutinising technologically based projects of this kind. In fact the SSA has proceeded along its course with much criticism but little interventionist power from central agencies. The agency’s reluctance to revolutionise their system and fight against labour intensity as urged by the oversight agencies may reflect their critics’ view that the ‘SMP will fail because of SSA’s “organisational culture”, its long history of mismanagement, interference from outside, political pressure and its sheer size’ (OTA, 1986:50). Alternatively, it may turn out to be the cheapest, most reliable option. The SSA’s incremental route, especially in comparison with the experiences of the Benefits Agency detailed in the previous chapter, seems to support existing evidence that large-scale computer projects bring disappointments. Quinn (1992:370) also reported that, in part because of the difficulties of evaluation, many respondents said they had ‘broken away from “galactic” or “mega projects” crossing multiple divisions as being too long-term, too difficult to implement and uneconomic’. Instead they break large-scale projects down into smaller, more discrete units, each of which can be


justified individually. In using such an incremental approach, where each project has finite timing and pay-off, both risks and total investments can be decreased. Initial feedback from early projects can be used to guide those projects later in the sequence. Major large-scale projects require so many different participants and the accommodation of so many divergent long- and short-term criteria that they become unmanageable. The General Services Administration concurred with this view in their report Alternatives to Grand Design in Systems Modernization (GSA, 1993). Finally, the SMP illustrates the particular impact that policy-making styles can have upon systems development. Iterative policy-making and overambitious policy-makers who do not understand the technological implications of policy change have consistently caused problems for the agency’s system development team. Such problems are exacerbated by the particular pressures under which government agencies operate. The SSA deals with a significant chunk of the 250 million population of the US, undertaking 12 million transactions per day. The magnitude of savings that can be made through shaving time from one commonly performed transaction is rarely found in any other organisation and poses a unique challenge to the development of its computer systems.

5

Computerisation in the US Internal Revenue Service

In the United States the collection of individual income tax was an early candidate for computerisation, and the US Internal Revenue Service (IRS) developed innovative computer systems during the 1960s. After experiencing problems in the 1970s, they embarked on a major computerisation project during the 1980s; its name, Tax Systems Modernization, gives a clue to the modernist dreams that initiated its conception. The IRS still battles to implement its Tax Systems Modernization project in the 1990s, with costs spiralling out of control while operations are jeopardised by the crumbling systems still in use. This chapter investigates the story behind this result. The Internal Revenue Service is responsible for administering and enforcing the internal revenue laws of the United States. The IRS ‘mission’ is to collect the proper amount of tax revenue at the least cost to the public, and in a manner that ‘warrants the highest degree of public confidence in the Service’s integrity, efficiency and fairness’ (NARA, 1989:490). Organisationally the IRS reports to the Treasury department and collects 93 per cent of all government revenue. All US citizens are legally required to file a tax return every year by April 15; the IRS does not undertake to calculate citizens’ liability automatically. Indeed, in the US only 1.08 per cent of individual returns were examined in 1994 (Business Week, 31 July 1995). The IRS has a National Office in Washington DC and over 900 offices in cities throughout the US, seven regional offices, 63 district offices and 10 service centres. In 1994 the IRS employed around 115,000 staff. The regional offices co-ordinate, direct and review operations of the district offices and service centres within the regions. The district offices carry out operational functions, while the service centres make up the data-processing arm of the IRS, providing support to the district offices.
There are two national computer centres, at Martinsburg and Detroit; the former maintains and updates the master file of all individual and business tax accounts while the latter is responsible for all other automated work, for example, the payroll processing for all IRS employees and for preparation of fiscal personnel reports, tax research and other statistical and budget work. The Internal Revenue Service collects around $1,200,000 million every year, with a total administrative cost (in 1994) of $7,105 million, 0.6 per cent of total revenue


(GAO/AIMD-94–120, IRS’ Fiscal Year 1993 Financial Statements, US Federal Budget Appendix, 1993:806–810).

1968–86: a period of stagnation

The IRS had been a very early pioneer in information technology development. As early as 1955 it launched an effort to automate tax return processing by establishing a ‘service centre’ on an experimental basis in Kansas City District Office. During 1958–9 the IRS began to convert some of its administrative procedures to automated systems, removing manual payroll operations from its regional offices initially to the Kansas office. Gradually other centralised service centres were set up. In the early 1960s a national administrative, service and training centre for automatic data-processing was created at Detroit, with over 1,000 personnel. Throughout the 1960s the IRS was considered the epitome of a modernised organisation, taking full advantage of the technology available at the time. During 1968–78 the IRS made its first attempt to redesign the entire system to take advantage of more advanced technology (GAO, 1990b:3). In 1976 the IRS testified to Congress that the system in place was ‘conceptually the system which began operations in January 1962’ and that the point had been reached where ‘the opportunity for further improvements has diminished’ (OTA, 1977:11). The IRS proposed a redesign of the system, the Tax Administration System, which in its design concepts was ‘on the leading edge of the state of computer art’; it was also to be ‘the largest data-processing project ever undertaken by the federal government’ (OTA, 1977:17) at a total estimated cost of nearly $1 billion. But the IRS was unable to resolve Congressional concerns about the cost of redesign and the security and privacy of taxpayer information.
OTA identified 18 discrete problems with security and privacy in the proposed redesign of the system and the Chairman of the Oversight Subcommittee of the House Ways and Means Committee expressed concern that ‘without safeguards, the new TAS system could become a system of harassment, surveillance and political manipulation’ (OTA, 1977:1). OTA observed that management issues which were previously confined to the Executive Branch had ‘with the advent of new generations of equipment, surfaced as key public policy issues of potential concern to Congress and numerous interest groups’ (OTA, 1977:15). Therefore during the 1970s, computer systems in the IRS changed remarkably little. Data input and data-processing were carried out in spatially separate locations, with no automatic links between them: information entered into a Cincinnati computer had to be copied on to huge tapes and shipped to the main IRS data-processing facility in Martinsburg. Data input and retrieval often took several weeks and the service to taxpayers was slow and unreliable. Oversight agencies and the IRS itself recurrently expressed concern over the system’s accuracy, responsiveness, reliability (OTA, 1977:11) and privacy (OTA, 1977:28–31).


In 1982 the IRS embarked upon another major modernisation attempt, the Tax System Redesign Project. During the following four years, the IRS sequentially pursued three different approaches for redesigning the system. The first involved giving private industry the opportunity to develop, refine, demonstrate and implement a new design concept. The second involved employing a computer company to act as the prime contractor for all redesign activities, ranging from holding a design competition to implementing the new system. The third approach was to combine new tax-processing system hardware with software to modernise the master file through an in-house effort. Although in the fiscal year 1986 the IRS obligated and spent $15.5 million to establish and operate a redesign office, develop the redesign approaches and test new technologies, none of the plans conceived was enacted. According to GAO, the problems were repeated changes in leadership at the IRS and Treasury, a lack of clear management responsibility for the programme, and the need for enhanced technical and managerial expertise within the agency’s executive ranks (GAO, 1988:10; GAO, 1990b: 3). According to one GAO official, the fundamental problem was that ‘the technologists were driving the project’. By the end of 1986, therefore, the IRS was still using the systems developed in the 1950s and 1960s. Its reputation had been severely damaged by a disastrous tax filing season in 1985: ‘perhaps its worst year ever’ (Business Week, 14 April 1986), which was largely ascribed to the changeover to a new computer system and inadequate staffing. The IRS installed a new mainframe system which proved to have insufficient capacity and no backup. By 5 April about 60 million returns had been received and the IRS had processed only 60 per cent of them. It was a year of ‘delayed refunds, of improper dunning notices, and of suggestions that overworked employees had chucked tax returns in the trash’ (Kelman, 1990:153). 
The IRS paid $15.5 million in interest to taxpayers because refunds were delayed (GAO, 1994:65). All the top management of IRS were removed. A Republican Senator described the Philadelphia service centre as a ‘quirky, error-prone, even hopeless high-tech sweat shop where the choice if you were an employee was either to quit or try to do an impossible job’ (Associated Press, 25 March 1988).

Back to the future: the Tax Systems Modernization project

The Tax Systems Modernization (TSM) project was intended completely to reform this sorry state of affairs. Initially called, in common with previous attempts, ‘Tax Systems Redesign’, the project started in 1986, and in 1988 the IRS described its management plan as a ‘living document’ (GAO, 1988:15). The overall cost through to the turn of the century was estimated at $21 billion. The objectives of TSM were to ‘modernise’ tax document input processing; minimise paper processing; provide an information resource that was readily accessible and responsive to IRS employees; reduce the overall


costs of tax administration; increase information resource security; and assure judicious application of information system technology, adequate capacity, and flexibility to meet the IRS’s future needs (IRS, 1992:1). TSM rapidly evolved into an ever-shifting series of projects, with continual renaming, consolidating and splitting of project plans. By 1989 TSM consisted of five projects; by 1990 there were 17. By 1990 the IRS had spent around $120 million on TSM since 1986, yet did not expect to complete its Design Master Plan until the autumn. GAO started to express doubts over the project: Without the road map that this plan is intended to provide, it has been difficult for IRS to articulate to the Congress what projects will comprise TSM and how these projects will come together to meet IRS’ stated long-range goals for modernization. (GAO, 1990b:1) The IRS proceeded with several independent projects before demonstrating how they would be connected into an integrated system that would enable IRS employees to share data electronically. Seven projects that were classified as TSM in 1990 were dropped from the 1991 programme and many of the projects were difficult to relate to the goals of TSM (GAO, 1990b:6). Indeed, the goals of some of the projects contradicted those of others. Two projects were planned to replace the paper-intensive input processing system: an electronic filing system, where tax returns would be transmitted over communications lines directly to the IRS, and a document-processing system, which would convert tax returns, correspondence and other tax information submitted on paper to electronic images for subsequent processing. Electronic filing became available to taxpayers nationwide during the 1990 tax filing season, with about 4.3 million electronic tax returns filed through to May 1990 out of the 200 million total, expected to increase to 13 million by 1998 (GAO, 1990c:3). 
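For scale, the filing volumes just cited imply that electronic returns were, and were projected to remain, a small share of the total — the root of the tension between the two input-processing projects. A quick check (the 200 million total is held constant for simplicity, which the text does not strictly warrant):

```python
total_returns = 200_000_000          # annual returns filed (approximate)
efiled_1990 = 4_300_000              # electronic returns, through May 1990
efiled_1998_projected = 13_000_000   # projection for 1998 (GAO, 1990c:3)

share_1990 = efiled_1990 / total_returns            # about 2.2 per cent
share_1998 = efiled_1998_projected / total_returns  # about 6.5 per cent
```

Even on the 1998 projection, well over 90 per cent of returns would still arrive on paper.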
But the percentage of returns filed electronically rather than on paper critically affected cost justification of the document-processing element of the initiative. Document-processing was a huge investment calculated at around $380 million (GAO, 1990c:2). But if enough returns were electronically filed, this figure would never be paid off; such an outcome was almost planned for, as the same report specified that the document-processing system was not scheduled to be fully operational until 1998. Research and design for document-processing had already begun by 1990, with a pilot project in 1993, even while discussion over its abandonment continued. To abandon the document-processing initiative in favour of electronic filing raised serious issues of burden on the taxpayer. If electronic filing were to become the sole method of filing taxes, taxpayers would have to pay tax consultants to file returns.


By 1994 plans for both projects continued, in spite of the Director of GAO’s IMTEC division testifying to Congress that ‘IRS’ input processing strategy appears to be a high-tech, high-risk, and high-cost venture for which IRS has not yet done the necessary homework to justify committing nearly three billion dollars’ (GAO, 1992f:1). In 1996 all software programming for the Document Processing System had been terminated because of system instability, lack of system capacity and a number of software application problems. GAO commented that ‘after expending over a quarter of a billion dollars on the project to build a Document Processing System, IRS has now suspended the effort and is re-examining some of its basic requirements’ (GAO, 1996:7). Meanwhile, the IRS did ‘not yet have a comprehensive strategy to maximize electronic filings’ (GAO, 1996:5). Privacy issues were also a problem for TSM, as they had been for previous redesign efforts. In 1991 GAO pointed out that, while the IRS’s draft Design Master Plan provided for developing security features, it did not recognise privacy as a separate issue or show how it was to be addressed; this was a serious omission since the IRS intended to allow public access to information. By 1992 the IRS was giving priority attention to ensuring that security and privacy issues were addressed (GAO, 1992c:2), although concern over the lack of a firm deadline for resolving such issues continued until the IRS developed a Privacy and Security Action Plan in 1994 (GAO, 1994:64). Contract decisions in US tax collection were much more constrained by privacy concerns than in Britain.
When the UK Inland Revenue contracted out its entire information technology operations in 1994 (see Chapter 6), the Dow Jones of 9 September commented ‘Imagine the reaction if the Internal Revenue Service turned over its computers, and access to the financial secrets of all its companies and citizens, to a foreign concern.’ Meanwhile, throughout the course of the project, increasing resources had to be devoted to maintaining the old systems and developing ‘interim’ systems: stand-alone systems that did not share data with other systems, developed to satisfy IRS needs in the short term while TSM was under development. The IRS experienced problems and project slippages with almost all the interim systems. One such project, to develop electronic filing for use during the 1988 and 1989 seasons, was implemented ‘even though its prime contractor believed the interim system might well fail during 1988 because there was not enough time to adequately design and test it’ (GAO, 1989:2); the software never worked correctly and stop-gap manual operations during the 1988 season were necessary to correct errors. Replacement software was not ready in time for the 1989 season and the IRS again had to print paper copies of thousands of electronic returns in order to correct errors and issue refunds. By 1993, as a result of delays with interim systems, GAO observed that the later phases of several interim systems would not significantly overlap with development of the long-term TSM systems that would replace them (GAO, 1993a:4).

94 Information technology in government

Meanwhile the existing systems were becoming increasingly complex and difficult to maintain, as problems were patched up with short-term solutions: a ‘rat’s nest of logic’ as one official put it. In fact the productivity of the IRS declined during the 1980s. The agency’s staff increased from 84,700 in 1981 to 112,000 in 1990, yet over the same time period audits declined from 18 per 1,000 tax returns to 8 per 1,000. Even in the early 1990s, if a taxpayer contacted the IRS with a problem and had no account at the particular service centre called, IRS personnel would have to communicate manually with the Martinsburg centre, necessitating at least a week’s delay in the IRS’s response. Originally the IRS had considered that any organisational changes should be kept to a minimum during the course of TSM. But in 1992 the IRS began its first major review of the IRS organisation for 30 years, in response to recommendations from oversight agencies (GAO, OMB and Congressional committees) which by now considered radical change critical if the IRS were to receive maximum benefit from its investment in TSM. After the review the IRS commenced work on a major reorganisation which would necessitate changes to the TSM architecture and Design Master Plan. By 1993, seven years after the beginning of the project and 25 years after the first attempt at modernisation, the IRS was still using the systems developed in the 1950s and 1960s, by this time a source of serious and recurrent concern for the continuing operation of the tax system and its cost. In 1992 the IRS Commissioner, asking Congress for $124 million for further work on interim non-TSM systems, stated: ‘the current potential for breakdown during the filing season greatly exceeds acceptable business risk’ (Government Computer News; 16 March 1992). Many of the ‘interim’ projects were still in the prototype and pilot stage. 
Most of the long-term systems, the ‘real TSM’, were still in the early stages of design and development. Oversight agencies were seriously questioning the $1.3 billion already spent (GAO, 1994:65). By 1995 the Director of GAO told a Congressional subcommittee that the agency had still not developed the necessary technical or organisational infrastructure needed to implement TSM and ‘after eight years and [an] investment of almost $2 billion dollars, the IRS has made minimal progress towards its vision’ (Dow Jones, 22 February 1995). In the face of such criticisms, the IRS continued to plan the details of its transition to TSM up to the year 2000 (IRS, 1992:L30–L33) and to exhibit strong faith in as yet untested technological innovations. In fact technological innovation looked likely to exhibit a spiralling effect in the IRS, as tax evasion and money transfer became more sophisticated. Increasing sophistication was sparked off in part by the Internal Revenue Service’s own automation efforts. In 1994 a report sponsored by a Congressional subcommittee claimed that the IRS was defrauded annually of up to $5 billion by electronic tampering with tax records, theft of social security numbers and other schemes: this represented 1 per cent of its yearly individual income tax collection. The report estimated that while far fewer taxpayers filed electronically, the fraud

Computerisation in the US IRS

95

dollar amounts from electronic and paper filing were about the same. The Treasury official responsible for enforcement claimed that the IRS would be introducing new technology to detect fraud, and that submission of fingerprints would become part of the application process for electronic filing (Associated Press, 5 October 1994). But in reality, while TSM remained under development, the interim systems in use had few safeguards. In response to concerns over employees who used computer access to snoop (1,300 employees were investigated between 1989 and 1994), a GAO representative told the Senate Governmental Affairs Committee that ‘there were virtually no controls programmed into the system to limit what employees can do once they are authorized…access’ (Associated Press, 19 July 1994). Furthermore, if ‘e-cash’ were to become widely used (see Chapter 1; Business Week, 12 June 1995), the IRS could find itself in a further technological spiral to keep up with the companies using it. As the co-founder of First Virtual, the first secure payment system on the Net, put it: One of the hardest questions merchants ask us is ‘Where do we owe taxes?’ That’s not a trivial question: with E-money, the merchant could be in Guam and the buyer in Canada, while First Virtual’s computers are located in Ohio. So whose sales tax do you pay?…I tell them to consult a lawyer. Electronic money could be easily sent in and out of a country undetected, meaning that ‘tax evasion could become a matter of pushing a button’ (Business Week, 12 June 1995). At the end of 1993 the National Performance Review (NPR) identified TSM as a key area for the future, due to the opportunities for electronic government-citizen transactions in tax collection. The NPR report did not acknowledge the disappointments experienced so far, but promised to ‘support the agency’s investments in new hardware and training’ (NPR, 1993:123). 
The Clinton administration requested a 25 per cent increase over the 1993 TSM budget of $572 million (Government Computer News, 24 May 1993). However, GAO was reluctant to recommend that Congress fund such a deployment, questioning whether the IRS would actually be able to install the systems in 1994 (Government Computer News, 24 May 1993). In November 1994 the IRS complained that TSM had been thrown off track by budget cuts, while a report from the National Research Council claimed: ‘TSM will simply slip farther into the future and remain basically “as is”: that is, the IRS appears to be compensating for current budget reductions by simply waiting until “next year”’ (Dow Jones, 21 November 1994). By 1996, even the modernists of the Clinton administration had lost patience. In budgetary negotiations through 1996, Congress approved only $336 million for TSM in 1996–7, whereas the Treasury had originally requested $800 million (scaled down to $600 million after severe Congressional criticism). IRS officials complained bitterly that the funding
cuts ‘would prove lethal’ (Government Computer News, 23 September 1996) and started to plan the lay-off of 1,500 systems staff, which, they claimed, would result in programme disruptions and additional ‘losses from an already limited cadre of talented information systems staff’. Congress demanded that the IRS contract out the bulk of TSM work, with an appropriations bill restricting the service from spending any TSM money if it did not have a final request for proposals ready by 31 July 1997, nearly a year earlier than the IRS had planned. In the appropriations bill for 1997 spending, Congress noted that the IRS had spent $289 million on document processing, yet was now ‘re-examining its basic requirement for document processing’ (Government Computer News, 21 October 1996). Meanwhile, the scale of the agency’s problems was highlighted by its own Chief Information Officer’s analysis of the ‘Year 2000 problem’, describing the reprogramming of legacy systems to accommodate date changes as a ‘massive problem’ with 62 million lines of code that needed to be rewritten (Federal Computer Week, 7 October 1996). In response, Congress cut TSM’s budget almost in half, to $420 million. The remainder of this chapter investigates the story behind this result, exploring the ‘high expectations and unanswered questions’ (GAO, 1990c:1) that have characterised the progress of the Tax Systems Modernization project.

Organisation of information technology in the Internal Revenue Service

The Information Systems Division was located in the Washington headquarters of the IRS and was headed by a Chief Information Officer, a position created in 1989 in line with many private sector companies and for which an incoming Commissioner, Fred Goldberg, had ‘scoured the world’ (Dow Jones, 18 October 1989). The internal structure of the Information Systems Division reflected the two major tasks of the organisation: to develop TSM and to maintain the existing systems upon which the tax system depended. 
It was divided into two major components: Information Systems Development (ISD), which was responsible for the leadership of TSM, and Information Systems Management (ISM), which maintained the existing systems and developed the software for TSM. TSM was controlled and steered by a network of Information Systems control groups led by Assistant Commissioners who oversaw TSM projects. These projects were redefined many times, causing the infrastructure of the IRS to suffer. Organisational learning, hardware, software, planning and people sometimes got lost amid the diffusion of responsibilities across constantly changing organisational boundaries. The rising costs of TSM engendered organisational controversy. The budget attracted the attention of the Chief Financial Officer during the 1990s. From 1986 to 1990, as one official put it,
it was just the IS organisation planning and fiddling around with their money and their budget. Now it has got to such a magnitude that it is their budget that is in danger, so they are starting to take more of a leadership role. In addition, the budget for TSM became the focal point of conflict between the technical and business sides of the organisation. At the time, OMB specified that information technology initiatives requiring an investment ‘above normal operating costs’ had to be justified by productivity savings in advance (OMB, 1993a:6), and nearly all TSM projects came into this category. So the budget for any project within TSM was calculated on the staff years that the project was predicted to save; these productivity savings had to be given up in advance. User organisations pushed hard to keep the expected benefits down because they did not want to give up more staff years, especially in the later years of TSM, when earlier hopes had already been dashed. When presented with a choice, therefore, they tended to choose the least ambitious of projects and would try to minimise the expected benefits. Thus the Information Technology Division was pulled in two directions when predicting the savings on a project, both up and down, a process which one official described as follows: So it is no longer a scientific analytic exercise—it is a political exercise in the will with them saying ‘no, we are not going to give up any staff years’ and us saying ‘if you want the system you have got to’ and the true cost-benefit ratio has very little to do with reality any longer, it has all to do with the pain they are willing to take to get the system. We may ‘show’ a 1–1.5 return on investment. And they are saying 1.01. So it’s lost the reason it ever came to be, in a sense. It is possible to understand, therefore, the pressures acting on the technical staff. 
With OMB pushing them to cost-justify projects which they felt could not be cost-justified, they would put pressure on user organisations to strive for greater benefits. But the user organisations had every motive to argue the case for interim projects with lower benefits, which had then to be implemented quickly. Again, the reaction of the user organisations was understandable, given the limited benefits they had received from TSM so far. Such pressures acting upon the constituent units of the IRS partially explained consistent criticism from oversight agencies that the project was driven by technologists moving too fast for their capabilities or for the business units. The Chair of the House Government Operations Subcommittee on Commerce, Consumer and Monetary Affairs in 1990 criticised the IRS once again for moving too fast, suggesting that the IRS’s business planning for TSM was in an ‘embryonic state’ compared with its technical planning (Government Computer News, 25 November 1991:6). Similarly, GAO commented:
If current hardware acquisitions are justified in terms of their possible future role as part of TSM, then IRS risks creating a design to fit the newly-purchased hardware rather than allowing long-range business goals to drive the design. In other words, IRS could be limiting its future options in order to meet near-term needs. (GAO, 1990b:8) The technical staff frequently exhibited over-enthusiasm to keep abreast of technological developments. A revealing GAO report noted in 1991 that the IRS’s requirements analysis for the Automated Taxpayer Service System had not considered non-automation options, a vital omission considering that ‘many call-sites that were not automated nevertheless significantly improved in accuracy between 1989 and 1990’ (GAO, 1991b:7). While the automated pilot site’s accuracy improved by 21 percentage points, three other sites that did not have the Expert System experienced similar accuracy gains. The view of the systems staff was that they must constantly battle against the near-sightedness of operations staff: ‘The business people are running just as fast as they can to catch up with what the technology allows them to do.’ In 1993, at a meeting of IRS technical and business personnel, a consultant asked the participants to design a new IRS: their fantasy of how the IRS could operate in an ideal world. The business personnel designed a system fairly similar to the existing one with slightly more sophisticated telecommunications. The systems personnel designed a system in which the IRS consisted entirely of information systems, where all taxes were collected and paid automatically and the IRS organisation was reduced to an Assistant Commissioner with a nominal staff of 1,000. Yet the systems that were being developed under TSM would not reach anywhere near this ideal. Many of them were appearing ‘dated’ by 1994; electronic filing itself could be regarded as an interim step towards the automatic deduction of taxes at source. 
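The cost-justification arithmetic behind these budget disputes can be sketched in a few lines. The staff-year valuation and project cost below are purely hypothetical, chosen only to show how contested savings estimates move the claimed return on investment between figures like the 1.5 and 1.01 quoted by the official above:

```python
# Illustrative sketch of the OMB-style cost justification described in this
# chapter: predicted productivity savings (staff years given up in advance)
# had to justify any IT investment 'above normal operating costs'.
# All figures below are hypothetical, not drawn from IRS budgets.

def roi(staff_years_saved, cost_per_staff_year, project_cost):
    """Return the benefit/cost ratio claimed for a project."""
    return (staff_years_saved * cost_per_staff_year) / project_cost

PROJECT_COST = 50_000_000       # hypothetical TSM project budget ($)
STAFF_YEAR_VALUE = 60_000       # hypothetical fully loaded staff-year cost ($)

# The systems side presses for an ambitious savings estimate...
technologists_claim = roi(1_250, STAFF_YEAR_VALUE, PROJECT_COST)
# ...while user organisations, unwilling to surrender staff years, argue it down.
users_claim = roi(850, STAFF_YEAR_VALUE, PROJECT_COST)

print(f"technologists: {technologists_claim:.2f}")  # 1.50
print(f"users:         {users_claim:.2f}")          # 1.02
```

As the sketch suggests, once the staff-year estimate becomes a bargaining chip rather than a measurement, the resulting ratio says more about organisational politics than about the true cost-benefit position.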
Contract relationships

A project as huge as TSM was obviously extremely attractive to the vendor community. The huge contract awards within TSM were announced cheerily in the computer press: With the award of the $300 million integration support contract to TRW Inc. this month, the Internal Revenue Service kicked off what promises to be a series of awards for major buys under the massive Tax Systems Modernization effort. (Government Computer News, 23 December 1991:3) A vendor liaison office was created within ISM from the beginning of the project to deflect an incessant barrage of queries from hopeful vendors.
One particular contract placed an indelible stamp on the course of TSM. When assigned the responsibility of producing the Design Master Plan, the ‘road-map’ of TSM, the Systems Integration Division recruited the Midah Corporation, a quasi-governmental organisation licensed to carry out research. Midah produced a document at a very high level of functionality, designed at three levels (‘corporate database’, ‘workload management’ and ‘case processing’) which related to three technical components of the project (mainframe systems, file servers and work stations). Because the plan was designed at such a high level, yet on a hardware basis, it was extremely difficult to relate the document to software ‘projects’. A systems analyst in the IRS described the problem as follows: When project managers look, they can’t tell where their system begins and ends so no-one knows how to do work planning, no-one knows what it is they are supposed to deliver because it is just taken in terms of end functionality.…When we sit down and look at requirements, they run vertically through an architecture, not horizontally. Users don’t think ‘That is everything I want to do on a work station, and that is everything I want to do on a mainframe’…They think ‘That is what I need.’ Involvement of contractors in an issue so central to the project caused bitterness among other parts of the IRS, expressed by one official: It was just the Systems Integration Division and the Midah organisation’s conception of what the IRS should do. Not the end users, they weren’t involved in the discussion.…There is a very strong internal perception that it is their document and not an IRS document. The design of this document fuelled an existing tendency towards open-ended contracts under TSM. 
For example, the specifications for the Service Centre Support System (SCSS) related purely to the functions of the system and did not outline specific system configurations at the service centres (Government Computer News, 5 August 1991:100). Another contract, entitled ‘Federal Systems Research and Development’, consisted merely of a brief to assess open-ended technology issues. The IRS was criticised continually throughout the project for its lack of control over procurement of contracts such as these and for a tendency to contract out work before systems were specified. The IRS’s anxiety to progress procurements could be attributed in part to the extent to which it suffered under the protest system noted in Chapter 4 (and covered in detail in Chapter 7), which stipulated that agencies were not allowed to work on contracts under protest until a decision had been reached. Thus it was difficult for the IRS to know when a contract would be awarded, and delays sometimes vitally affected later implementation plans. For example, the TMAC project was crucial to TSM. It was to supply ADP
equipment and services to all Treasury organisations nationwide: estimated at 3,200 multi-user computers, 7,000 terminals and 49,538 work stations, together with associated software, over five years at a cost of $1.4 billion. Most of the items under the contract would be used in subsequent projects. The contract was advertised in January 1989 and awarded in 1990. But the GSA Board of Contract Appeals upheld protests filed by IBM Corp. and Lockheed Corp. which charged that the IRS had concealed its true basis for evaluating the tenders and had failed to justify paying nearly $700 million more for the successful vendor’s (AT&T) proposal (Government Computer News, 10 May 1993). The Board ruled that the IRS did not analyse fully the trade-offs between price and technical merit and therefore failed to justify its stress on technical factors in selecting AT&T. The Board directed the IRS to prepare a suitable price/technical trade-off analysis that would either confirm the previous award or justify a new selection. The IRS hired a contractor to carry out such an analysis; this procurement itself was subject to investigation by GAO, as the IRS had failed to find out whether such expertise was available inside the agency. Eventually, in March 1992, the IRS confirmed the award to AT&T; the award was again protested by IBM and Lockheed, but this time the protest was denied. Four years after the specifications had first been released, work on the project could start. The IRS claimed that procurement problems as serious as the Treasury Multiuser Acquisition Contract (TMAC) were exceptional: between October 1990 and March 1992 the agency successfully awarded 21 information processing contracts in excess of $1 million, using the same procedures, without encountering similar problems (GAO, 1992b:14). But the delay of a few crucial infrastructure projects such as TMAC delayed other projects within TSM. 
By the time the TMAC contract was finally awarded, other projects had been designed around other hardware and had to be converted to the technology purchased through TMAC. The Under-reporter System, for example, had to be redesigned for the purpose, which meant extending the development cycle by another year and buying a new set of hardware for all users at the cost of another $10–20 million. As one IRS official commented, ‘That is something the project doesn’t budget for and jacks the cost up out of sight.’ The Check Handling Enhancements and Expert System (CHEXS) contract, initiated in 1984, was cancelled in 1992 after eight years of planning and preparation (GAO, 1992a:4), with further problems for dependent projects. In September 1990 the IRS attempted to overcome its procurement problems by elevating the procurement role organisationally, creating an Assistant Commissioner for procurement services under the IRS’s Chief Financial Officer, a move praised by GAO (Government Computer News, 6 May 1991:4). There were problems with recruiting experienced procurement staff, reminiscent of the difficulties in recruiting systems staff during the 1980s: 27 people were hired during 1991, yet ten people left during the same period. The IRS response was to establish a Procurement Career Development
Centre, a move approved by the Treasury; the Centre was eventually established as an executive agent serving the rest of the Treasury. During 1991–2 the IRS increased its procurement staff from 170 to 239 (GAO, 1992a:3) and in 1991 the Treasury Department raised the procurement review threshold for the IRS from $1 million to $3 million, in response to a Treasury oversight review which recognised improvement in the management of acquisitions (GAO, 1992b:15). In 1997, Congress’s insistence that the IRS must contract out the bulk of TSM was greeted with enthusiasm by the private sector. Lockheed Martin, EDS and Andersen Consulting formed a team to ‘pursue IRS Tax Modernization Program’ (LM press release, 24 July 1997). Lockheed Martin proclaimed proudly that it would ‘lead the team’s pursuit of what will be one of the largest systems integration undertaking in the world’. GAO, however, remained sceptical: ‘Increasing the use of contractors, for example, will not automatically increase the likelihood of successful modernization because IRS does not have the technical capability needed to manage all of its current contractors’ (GAO/HR-97–9, February 1997). GAO cited the example of the Cyberfile project, which ‘exhibited many undisciplined software acquisition practices as well as inadequate financial and management controls’. Eventually the IRS cancelled the Cyberfile project after spending over $17 million and without fielding any of the system’s promised capabilities.

Relationship with central government agencies

The oversight agencies were initially extremely supportive of TSM and the project was regarded from the beginning by GAO as having a greater chance of success than previous modernisation efforts. 
Officials described the relationship with GAO as beneficial to the IRS: ‘On the whole they do a lot of good, they have given pretty fair reports to the Congress.’ The GSA carried out a general management review of the IRS which was also well received: the IRS sought to assimilate the information gleaned from the four or five GSA staff who worked on the reviews into its own review process. Congress was also perceived to be positive while remaining critical where necessary: the Assistant Commissioner thanked the House Government Operations Committee publicly for keeping the IRS on its toes through criticism and praise (Government Computer News, 17 February 1992). By the 1990s, however, doubt had started to set in among the oversight community. In December 1990 OMB denied the IRS’s 1992 budget request for $41 million for the Taxpayer Service Integrated System (TSIS), a project to automate the taxpayer inquiry programme. This project, planned to cost $250 million through to fiscal year 1998, was intended to enable employees to find the correct answers to taxpayers’ questions; to order requested forms electronically; to set up an automated process to research taxpayer inquiries and call taxpayers back later; and to access other databases containing taxpayer
information. OMB’s refusal to continue funding and GAO’s discovery that the IRS did not have the information it needed to decide whether to install TSIS (two years’ systems testing had been inconclusive and the system could not be demonstrated as cost-effective) caused the IRS to change its plans, agreeing with the report recommendation that the benefits needed to be clearly demonstrated before nationwide installation. As TSM projects proliferated, GAO increased its oversight role over the Information Systems Development Division and by the 1990s GAO had 15–20 people working on TSM-related audits at any given time, supplemented in 1992 by 60–70 staff working on financial statements (GAO, 1994). Congressional anxiety also rose with the costs of the project, and in 1991 Congress insisted that all costings for TSM be separated out into a single budgetary appropriation, viz. ‘Appropriation 4’. Since that time everything relating to information technology has been included in this appropriation, including staff costs, hardware, software, site preparation, training and travel. Even office space was related back to TSM projects. Before 1991 computer support was combined with the collection activity, which is the way most agencies continued to budget for information systems. Having consolidated the costs, it would be extremely difficult to reallocate them across cost centres in the future. As a member of the Planning, Budgeting and Review Division pointed out, many systems are of a genuinely corporate nature (such as the master file) so it may be impossible to determine who is the user. Therefore the division can end up being regarded as ‘the owner’ because there are so many owners. Despite the internal dissatisfaction over the Design Master Plan, noted earlier, it fulfilled one important function in that it allowed the IRS to back up its budgetary requests to Congress. 
In 1991 GAO pointed out (GAO, 1991e:3) that the projects included in the IRS’s $451 million budget request for TSM in 1992 were not based on an ‘approved modernisation plan’ as the IRS’s Design Master Plan was still in draft. However, the effect of confirming the Design Master Plan was to solidify an unimplementable document as an implementation plan, putting systems staff in a difficult position: ‘What has happened…is that it has now become a budget document, which it shouldn’t have been, because now all of a sudden we are locked into that vision of how it is supposed to be.’ The oversight agencies, especially GAO, pressurised the IRS to make plans for reorganisation in 1992, by which time it was evident that TSM could no longer be justified with mere efficiency improvements. In the eyes of all overseers, dramatic change had to take place. At the start of TSM the IRS had made a conscious decision to minimise the political impact of TSM by changing the employment structures of regional and local offices as little as possible. Congress guarded these offices jealously because of their importance for local employment, and the agency feared that changing them might be ‘the last straw for TSM’. Instead it planned to retain the ten service centres but to change their functionality: ‘Congress doesn’t mind what is
going on under that roof as long as it is still there.’ However, in 1991 GAO stated in testimony that the Design Master Plan does not provide a corresponding vision of how the new technology could enable the agency to transform its future organisational structures and business operations.…IRS has not recognized the opportunities presented by new technology to consolidate its field office structure. (GAO, 1994:67) OMB and various Congressional committees voiced similar concerns over TSM, as the IRS itself noted: They believe our modernization effort does not include key components imperative to the successful accomplishment of the modernization objectives and feel our current plan will simply provide more information to our employees faster, with little consideration as to how we may do our job better and improve the quality of our interactions with taxpayers. (IRS overview to GAO, 1994:68, contained in report) In response to these criticisms, the IRS undertook several major studies in 1992 to develop its new ‘business vision’ (GAO, 1994:68). In 1994 it announced a reorganisation of the system of regional and district offices that dated from 1952, when limitations on transportation and communications had made a widely decentralised agency more logical. By 1994 the IRS proclaimed, ‘We feel we can manage with fewer headquarters operations. We don’t need as many support operations’ (IRS Commissioner, Washington Post, 4 May 1995). The plan included closing 30 of its 63 district offices around the country and cutting the number of regional offices from seven to four. Thus the oversight agencies, both through public criticism and budgetary threat, forced the organisation into a restructuring that, ironically, further hampered the progress of TSM. As one official put it: ‘For the systems division, it is as frustrating as hell. 
We feel that we should be rolling our sleeves up and saying “let’s do it!” Now we’re back to a redefinition phase.’ In 1996 relationships with central agencies reached an all-time low. Senate and House members complained that the ‘IRS never seems to implement or finish any of the items recommended by GAO’ (Government Computer News, 23 September 1996). GAO repudiated IRS claims that budget cuts would paralyse the agency: ‘The IRS can survive cuts in its Tax Systems Modernization just fine.’ The Chairman of the Senate Governmental Affairs Committee lashed out with ‘We’ve been going at this for 12 years and every year we get a similar report. Do we have to wait another 12 years so the American people and Congress have confidence in the IRS?’ GAO produced eight reports on TSM during 1996–7, none of which was complimentary. In summary, the central agencies provided guidance to the IRS, fulfilling an almost managerial role and dedicating considerable resources to the
project. In specific incidents their interventions prevented expenditure spiralling even further out of control. However, their ability to restrict expenditure was limited. The complexity of the project, the limited extent to which the IRS itself maintained control over the constituent components, and the time and resources invested in the procurement process for the maze of projects under way, all pushed the General Accounting Office, Congress, OMB and GSA into a support role and reduced their oversight capabilities. The high dependence of tax administration on computer technology had by 1994 forced the oversight agencies into an anomalous position, since the knowledge that TSM must work constrained their ability to intervene. As one IRS official put it: It is a truly and deeply held belief by a lot of people in IRS and our oversight agencies that if we don’t do TSM we won’t be able to collect taxes by the year 2000, our systems are that bad and that old, so we have a kind of survive or die thing behind us. In 1996, the IRS became the first agency to use the Presidential Technology Team set up by OMB to provide hands-on support to agencies with troublesome projects: ‘IRS has told OMB it needs people with experience in contracts, systems engineering, program control, requirements analysis and testing and evaluation’ (Government Computer News, 2 May 1996).

Policy considerations

Initially Congress made a conscious effort to avoid legislative change during TSM. Consequently, after the first two years of planning, the Assistant Commissioner thanked Congress for doing so. During the 1980s the possibility of major legislative change to the tax system was raised on two occasions. Opposition was voiced both times by the IRS, and the necessity of protecting the organisation against change during TSM was mentioned. 
In 1986 bills in the House and the Senate required the IRS to study the feasibility of a return-free system in which the government would compute the taxes of people with relatively simple returns, based on information from employers and other income sources. Participants would send postcards to the IRS, giving their name, address and social security number. Documents would be matched by computer systems and the agency would compile a return for each individual; any individual who disagreed could send a copy back to the IRS seeking an adjustment. The government would then send taxpayers a refund or a bill. The IRS examined the options available and concluded that such a plan could not be carried out at a reasonable cost under current law and that return-free filing seemed 'an idea ahead of its time', adding to compliance problems. The agency would no longer be able to match information from income sources against information from taxpayers, so it might produce inaccurate results (Washington Post, 19 January 1988). GAO agreed that in

Computerisation in the US IRS

105

IRS's current technological environment, erroneous information problems would pose a major barrier to the kind of return-free system proposed (GAO, 1992e:5). In 1991 Congress proposed another reform, to check whether corporations were reporting all their income, as it already did for individuals. GAO released a report saying that even in a limited form, such a programme could catch $1 billion in unpaid taxes by 1995 at an annual cost of $70 million. The programme would have required companies to send a report to the IRS whenever they paid another corporation any of five types of income: interest, dividends, rents, royalties and capital gains. The IRS would take the report and match it automatically against the company that received the income. This would be similar to a matching programme for individuals developed in the 1970s, which covered more than 30 types of income. This programme received more than one billion information returns in 1991 and matched them with individual tax returns (Dow Jones, 10 June 1991). The IRS Commissioner Fred Goldberg opposed the plan, stating that it would bring in only about half of what GAO estimated and that the agency 'doesn't need another big complicated task while it is in the midst of a multibillion-dollar overhaul of its computers' (Dow Jones, 10 June 1991). At the micro-level the IRS evolved through the 1980s a process of integrating information systems maintenance and development with the policy-making process. Congress asked divisions to inform them of the impact on computer systems (or TSM plans) of legislative changes before bills were passed, so technical staff would be aware of possible changes well in advance. Every Assistant Commissioner designated someone to work with the Legislative Affairs Division, which in turn worked with the Treasury, Congressional standing committees, OMB and other central agencies.
Thus the Legislative Affairs Division could channel comments about the relationship between tax bills, operations and especially information systems, and if systems staff considered there was insufficient time before the effective date to make changes, then this consideration would be incorporated in a total agency position that was communicated to Treasury management. In some cases the Legislative Affairs Division was successful in having retrospective dates in bills made prospective, thereby giving the IRS more time to make changes and allowing the clumping together of changes to coordinate with an annual systems development cycle.

Assessment of information technology in the Internal Revenue Service

TSM has become a label for a seemingly eternal development process rather than a specific project. Evidently it is impossible to provide an assessment of a project for which completion is not planned until 2010, but the very extent to which it is impossible is a phenomenon in itself. The total cost of TSM was predicted from the beginning to reach $21 billion. This figure had risen
to $23 billion by 1992, and in 1993 GAO observed that the phase-out costs of the old systems were not being budgeted, recorded, or reported as TSM costs (GAO, 1994:36). By 1992 the IRS had spent about $831 million on TSM since its inception in 1986, of which about $290 million was on long-term (rather than interim) projects. The IRS's 1994 budget request included $145 million and 190 full-time equivalent staff for TSM, over half of which was questioned by GAO because it was intended for the implementation of a project for which critical plans had not been completed (GAO, 1993b:2). In reality, it is difficult to see how Tax Systems Modernization ever will be evaluated. The IRS itself still lacks any systematic method of tracking the individual components of the project. In 1990 OMB requested that the IRS develop a tracking system so that actual costs, benefits and schedules could be checked against the budget for TSM and the Design Master Plan. The IRS claimed that it was developing such a system but in 1991 implementation was delayed until 1994 (GAO, 1991c:6), by which time GAO was still observing that the IRS had no single system capable of managing and controlling changes to the estimated costs and benefits of TSM (GAO, 1994:36). The speculative nature of some of the projects means that the dollar figures sprinkled across GAO reports, IRS planning documents and the computer press must all be regarded with a cellar of salt. In one report GAO gave the total likely costs of the input processing initiative as 'over $390 million' (GAO, 1990c:1), while the life-cycle costs of two constituent projects were given as $379 million (for the document processing system) and $198 million (for electronic data filing).
This cavalier approach to expected costs suggested that the oversight agencies' insistence on the IRS providing 'measurable objectives for each of the modernization systems' (GAO, 1991c:15) caused IRS officials to translate uncertain ideas into hard plans while economic feasibility remained unclear. As noted earlier, most TSM figures were as much the result of inter- and intra-organisational battles as of cost-benefit analysis. Unsurprisingly, the IRS was criticised by GAO for lack of accountability (GAO, 1991c:15). The IRS's draft information systems plan in 1991 contained development milestones for 10 out of 11 major modernisation initiatives, but the Program Management Initiative, on which all the other systems depended, contained no such milestones. The plan did not even specify the IRS organisations responsible for the 11 major initiatives and although GAO eventually identified responsibility in most cases, neither they nor the IRS was able to identify the organisations responsible for two key projects. In 1991 the IRS was asked to take a $49 million cut in its data-processing account for TSM. However, GAO later pointed out (GAO, 1991e) that none of the cuts that the IRS made related to projects identified as TSM; instead, they were made by cutting or cancelling projects within current operations, making it 'difficult to understand just what the modernization program is'. Oversight problems of this magnitude for GAO rendered the IRS literally
unaccountable. Congress was reliant on GAO for information, and if GAO could not disentangle the true expenditure, then no organisation could. Some interim benefits were gained. The IRS received more than 103,000 tax returns under a Telefile test, which allowed a taxpayer to file a tax return using a push-button telephone. This pilot scheme was to be extended to 26 million taxpayers in 1996. The error rate for the electronic returns during the 1992 season was estimated at 2.8 per cent compared with 18 per cent for paper returns and the IRS expected to receive about 14 million electronic returns in 1993. But initial enthusiasm for electronic tax-return filing appeared to be waning. The number of returns filed electronically totalled 10.2 million in 1994–5, down from 12.7 million at the same point in the previous year, a decline of more than 19 per cent. Non-paper filing overall was down 17.5 per cent (Washington Post, 13 April 1995). Furthermore, electronic filing fraud had become a growing problem and IRS statistics for 1992 showed that the IRS had identified 12,725 electronically filed returns involving fraud (totalling $33.6 million) and that one-third of the refunds were issued before the IRS could stop them (GAO, 1993b:8). Thus computer technology has brought problems as well as solutions to the IRS. Starting as a pioneer in technological development, by 1994 the organisation was locked into a succession of ambitious projects, the costs and benefits of which will remain only vague estimates for many years. But there is no going back. The functioning of the IRS is dependent upon the success of a project that continues to hover between fantasy and reality, with spiralling expenditure raising serious issues of control and accountability. The oversight agencies intervene where possible, but for the most part can only remain faithful to the belief that the project will succeed.
Conclusions

Since the conception of Tax Systems Modernization in the 1980s, the Internal Revenue Service has been battling to achieve the dreams of its political masters. This picture has continued into the 1990s. The IRS struggles with the dilemmas of document processing and electronic filing, which have drawn it into a spiral of innovation as computerised collection methods engender computerised tax fraud, which in turn requires yet more sophisticated tax collection methods. The day-to-day operations of the IRS are still jeopardised by the crumbling systems in place. Meanwhile, Tax Systems Modernization has brought conflict to the organisation of the Internal Revenue Service, between the technical and business sides of the organisation, exacerbated with every year that TSM failed to be implemented, causing additional problems for any projects in the future. Such conflict illustrates a major challenge that can be introduced to government organisations by information technology: the difficulty of realistically cost-justifying necessary computer systems development, a challenge also experienced by the US Social Security Administration and the UK Benefits Agency.

TSM also demonstrates the difficulties in co-ordinating a large number of multi-task contracts, each of which collects its own supporters in staff and vendors with a strong vested interest in its continuation. But in the 1990s the IRS seems to have recognised the importance of contract management in computerisation projects of this kind. With a history of procurement littered with problematic decisions, it has attempted to develop organisational structures for contract management, elevating the procurement role and attempting to learn lessons from the past. After initial enthusiastic support, the Internal Revenue Service has suffered from a diminishing reputation with the oversight agencies, culminating in a barrage of criticism. But TSM illustrates the challenge presented by information technology to government organisations: how to exert central control over administrative processes. GAO, the OMB and Congress were all relatively powerless to track, let alone halt, the financial excesses of TSM. Such a weakness seems likely to continue into the future, with further resources being directed to the project in a desperate attempt to attain the promised benefits and keep the tax collection system in operation.

6 Computerisation in the UK Inland Revenue

Like the other three organisations covered in this book, the UK Inland Revenue developed information systems in the early days of computer technology, during the 1960s. Also like the other government departments that were early to computerise, the Inland Revenue suffered problems with its systems in the 1970s and undertook a programme of computerisation to eliminate manual procedures and to redesign existing systems. But in contrast to the other projects investigated in this book, the Inland Revenue's technically modest project, the Computerisation of PAYE (COP), was implemented in 1987 with only a slight increase in budget and has been operating successfully ever since. In 1993 the Inland Revenue employed 71,000 staff and collected around £78,000 million with administration costs of £1,660 million, around 2.1 per cent (Inland Revenue, 1992:48–58; OPSS, 1994b). Such figures show that information technology as a proportion of administrative budget is considerably lower in Britain than in the US (where it is 19 per cent). This contrast is in spite of the fact that the Inland Revenue collects less revenue than the IRS by a factor of ten, but employs more than half the number of staff. However, it should be noted that, until 1995, there was a fundamental difference underlying the basis by which the British and American income tax systems operated.

In the US, general rules are laid down by government for income tax liability: individuals determine their own tax liability, and the government limits itself to a checking-up role. On the other hand, in Britain individuals are obliged only to supply information about their finances to government: government then has to transform that information into a directed token in the form of an individual tax assessment signifying the sum for which each taxpayer is liable.
(Hood, 1983:69)

This characteristic has been a contributory factor to the comparatively high cost and labour intensity of the British income tax bureaucracy.
Although self-assessment is currently being implemented (to a limited
extent) in the Inland Revenue, the difference still existed during the period covered here. The UK Inland Revenue is responsible for the administration of income tax, corporation tax, capital gains tax, petroleum revenue tax, inheritance tax and stamp duties; also for advice on tax policy and for valuation services. The Inland Revenue was a centralised organisation until 1992, when the national organisation of the department was disaggregated into executive offices: 'The Heads of the Offices have enhanced management responsibilities and accountability for the work of their offices' (OPSS, 1994b:49). By 1994 the Inland Revenue employed around 72,000 staff, including 10,000 part-time workers. Central services consisted of 6,000 staff (9 per cent of the total), of whom 2,650 worked on information technology development. They were employed in 29 executive offices overseeing 600 branches across the country.

Early computerisation in the Inland Revenue

During the 1960s the first Inland Revenue computer system was based at Worthing, handling straightforward data-processing tasks, with 'pockets of computing' for various purposes starting up around the country. In the mid-1960s the first plan to computerise the main tax system was drawn up: a batch system for PAYE which would run from nine computer centres across the country. The first, 'Centre 1', was implemented in Scotland in 1968 and a second in Liverpool was built and staffed. However, the incoming Conservative government of 1970 had plans for radical changes to the tax system to include a tax credit system, with a single interface between the government and the citizen in the giving and taking of money. This policy change would have meant a merger of revenue, national insurance and benefits. The planned computer system was deemed inappropriate for the policy changes and Centre 2 was never opened.
The Inland Revenue commenced work on a new system, which quickly became extremely complicated and threatened to be extremely expensive to implement. In 1974 an incoming Labour government abandoned the plans for tax credit. For a time there were no plans for computerisation, with a degree of scepticism partly caused by problems at Centre 1 in Scotland. The mid-1970s brought mounting concern over the accuracy and cost of tax administration. Administration costs were higher than in many other countries and by 1978 the Permanent Secretary of the Inland Revenue, Sir William Pile, 'felt that the manual system was close to breakdown' and that 'service would decline with the sheer weight that is being put on it' (Dyerson and Roper, 1992:304).

The Computerisation of PAYE (COP) project 1977–87

In 1977 the Inland Revenue returned to the plan of automating the tax system. The remnants of the 1960s plan were not resuscitated, for the technology was
now out of date and public expenditure cuts dictated that a more modest plan be adopted. Steve Matheson, having recently emerged from private office in the Treasury, was given the task of investigating anew the possibilities for computerising the PAYE system. Approval was given for a 'very limited on-line system' (Matheson, 1984:92) and a feasibility study (involving 450 person-months of effort) took place from 1978 to 1979. No decision was taken until late in 1980 because of 'political arguments', described by one official as follows:

It involved a lot of different departmental and ministerial interests—the procurement interest in the computer supplier, the Treasury interest, and the Civil Service Department interest in avoiding risk. It went to Cabinet three times.…The Treasury was opposed to it because it was high-risk and they didn't believe it would work. Remember that we had a series of spectacular failures of government…DVLC, for example, and we hadn't had a good experience from the Centre 1 programme so the feeling was that it was too high risk.…And the argument over suppliers was a complete split.

The debate lasted throughout 1980, with senior civil servants remaining suspicious about the project's chance of success. The objectives of the Computerisation of PAYE (COP) project were to increase the efficiency of PAYE, in particular through a reduction in staff costs; to improve the service to the public through greater accuracy, reliability and speed of response to communications; to provide up-to-date facilities for staff and offer greater job satisfaction; and to create a system offering greater flexibility for the implementation of future changes either within the present tax structure or in more far-reaching reforms of personal taxation (NAO, 1987a:7). The original staff savings were estimated at 6,800. The project was a modest one, seeking mainly to reproduce paper processes electronically.
The project outlined the development of 12 processing centres across the country and 600 local offices were converted. The processing systems were initially 'stand-alone': that is, there were 44 operational ICL configurations which did not communicate directly with each other in the first system (Matheson, 1984:96). Worthing was developed into the largest IBM site in civil government in Britain and 37,500 computer terminals were installed throughout the Inland Revenue, half the 'population' of terminals in Whitehall at the time (The Times, 16 January 1989). In April 1984 the COP project was extended to include assessment of Schedule D tax (CODA), the counterpart for the self-employed to the PAYE tax. The National Tracing System, a central referencing system for both COP and CODA, was also added to the overall programme; originally planned for 1992, it was built by 1986. This system was especially valuable when Nigel Lawson, as Chancellor, introduced independent taxation and
'didn't want the computer system to hold it back': the Revenue were able to comply with this demand (which would have been prohibitively expensive without the national computer system) partly because of the tracing facility. Permission to extend the project to computerise Schedule D taxation was easily obtained, with ministers impressed by the first tranche of savings from the project. The implementation of COP was widely regarded as a considerable achievement. Since implementation the COP/CODA system operated at 99 per cent reliability with no major problems reported during installation (Dyerson and Roper, 1992:309). The administrative costs of the Inland Revenue after the systems were installed compared favourably with both the Internal Revenue Service (see Chapter 5) and the UK Department of Social Security (see Chapter 3). Indeed, a member of the UK Parliamentary Social Security Committee commented in 1991 that expenditure on administrative costs for income support was 'quite a high percentage and way out of line with any percentages on tax—one always thought it was easier to give away money than to collect money actually!' (Social Security Committee, 1991a:4). The final expenditure figures of the project were estimated to have been £340 million at 1987 prices, a modest 9 per cent increase on the budgeted cost in real terms. The 6,800 staff savings were achieved, although this achievement owed something to the addition of CODA and the National Tracing System to the original plans.

The 1990s: the rise and fall of the Information Technology Office

In April 1991 the Information Technology Division of the Inland Revenue, built up to oversee COP, became an Executive Office of the Inland Revenue, known as the Information Technology Office (ITO). After this date the ITO provided information technology services to the Department, including its Executive Offices and the Valuation Office Agency.
The ITO was responsible for procuring, developing, maintaining and operating all the Inland Revenue computer systems. During 1991–2 the Department agreed and implemented plans for the ITO to operate on Next Steps lines. By 1992 the Office employed 2,600 staff, had a budget of £250m per year and operated 13 regional computer centres. After COP had been installed, the ITO maintained a policy of splitting major information technology developments into smaller, more manageable projects to reduce complexity and risk. It observed that ‘the use of smaller modules is also central to the modernisation of existing systems as they become obsolescent. These smaller individual systems will need to be capable of integration to maintain efficient and coherent total support systems’ (Inland Revenue, 1992:42). Research continued into the possible uses of developments in optical character recognition, electronic data matching and imaging technology. But there was no impulsive leap to implement any of these technologies. Any
advances were carried out at a decentralised level. Document-processing technology was tested but problems with the technology discouraged any nationally implemented plan. By the 1990s there was widely available software to help fill out tax forms: for example, Quicktax for Windows, which showed a copy of a tax return complete with the Inland Revenue's own guidance notes. Tax consultants were using computerised versions of tax returns instead of paper forms for about a million personal taxpayers (Independent, 21 April 1994), but the sending of such forms to tax offices electronically was not yet possible. In general, computer development in the Inland Revenue was characterised by modest technological development and the technology supplemented rather than replaced the agency's operations. The Department was always ready to undertake pragmatic mixtures of automatic and manual processes. For example, in 1994 the Revenue implemented a computerised file management and storage system to control four million files from the Department's former records centre, allocating a bar code reference to each file which allowed it to be precisely located through an on-line database. Inland Revenue staff were to send their requests by modem from remote workstations; retrieved files would then be delivered in a distribution van. The COP project was implemented relatively free from either legislative or organisational change. The Department took the decision that 'major and far-reaching changes in the tax system cannot be implemented at the same time as the computer system for PAYE is itself being implemented' (Matheson, 1984:95) and it managed to ensure that such change did not take place.
But the introduction of the taxation of unemployment benefit, the introduction of Mortgage Interest Relief at Source (MIRAS), changes in basic rates and allowances, and the annual effect of budget changes (in all some 7,000 smaller changes to the specification during the project) were all absorbed without problems. After implementation at the beginning of the 1990s the Inland Revenue embarked on the biggest tax reform in half a century: the adoption of a self-assessment system of tax administration. The system under consideration was similar to that used in the United States; taxpayers estimate their own tax liability and submit their tax forms to the Inland Revenue along with payment. Such a system was planned to apply to nine million higher-rate taxpayers and the self-employed, in an attempt to cut red tape, reduce costs and make the system more accurate. As the NAO later noted, the Inland Revenue decided that although the computer systems developed during the 1980s were still providing good service, they had become more complex over the years and ‘They cannot now be enhanced to provide the functionality needed to support all the changes the Department wants to make’ (NAO, 1996a:19). The Department drew up a new information technology strategy in 1993, intending to redesign its information technology infrastructure to cope with self-assessment and the replacement of terminals in local offices.

Meanwhile, as plans for the introduction of self-assessment continued, in July 1992 the Director of the ITO announced that with the agreement of the Board and Ministers, the ITO was exploring the feasibility of entering into a contractual partnership with one, or possibly two, 'world-class private sector computing organisations' (Inland Revenue, 1992:42). Such an arrangement was to provide for a private sector partner to supply information technology products and services, provided that it could be justified on a clear value-for-money basis. A vastly reduced ITO would continue to manage the provision of information technology for the Inland Revenue. Such a contract was awarded to Electronic Data Systems Limited (EDS), signed on 23 May 1994. Since that date all the computers and information systems of the ITO have been owned by EDS. Thus the period between the 1970s and 1990s has seen the rise and fall of information technology expertise within the Inland Revenue. The remainder of this section examines the build-up of expertise within the Department until 1993 and the factors that contributed to the apparent success of the COP project.

Organisation of information technology in the Inland Revenue

The COP project structure brought together the 'pockets of computing' that had existed throughout the Revenue during the 1960s in what became known as M3, the third management division. This division grew from 350 staff at the beginning of the 1980s to almost 3,000 by 1987, at least half of whom were located in Telford. In addition, 12 computer centres were set up around the country to support the regions, with around 400 staff. The staff on the COP project worked in a 'single project environment' and the management of the project was regarded as a considerable achievement by the staff who worked on it. The 1980s are remembered with affection by the staff as the 'heyday of the organisation'.
There were severe shortages of skilled labour during the development of the Inland Revenue system, in response to which local office users were trained as programmers. Such a strategy was costly, but had the advantage of improving connections between end-users and designers. There was ‘next to no’ staff turnover on the project (Morris and Hough, 1987:165). Furthermore, the involvement of staff at the highest level of the organisation was evident from the beginning: ‘Unusually for a government IT initiative, senior management at Inland Revenue displayed a high level of personal commitment on the COP project’ (Dyerson and Roper, 1992:313). A committee structure was set up to manage the project and a co-ordinating committee chaired by the Project Director met monthly to monitor progress. The two successive Permanent Secretaries of the Inland Revenue during the life of the project were on the steering group and ‘both were interested and committed’ (Matheson, 1984:101). Steve Matheson, who directed the project
from the beginning of a feasibility study in 1978 until 1984, took the role of 'project champion', which many authors have identified as crucial to the success of long-term information technology projects. In spite of press comments such as 'No data-processing manager in the market sector could hope to start eight-year projects and keep his job' (Guardian, 12 March 1987), Matheson stayed with the project until well after completion. After the project was completed, he made Civil Service history when a building was named Matheson House in his honour; by 1995 he was the Deputy Chairman of the Inland Revenue itself. After implementation of the COP project, as subsequent computer systems became more complex and new projects developed, it was decided that the 'single project' environment had become too large to manage and would have to be disaggregated. Unlike the strictly benefit-demarcated structure of the Operational Strategy project in the DSS, functional (rather than tax-specific) groups were built up around various technical platforms. Any new project would tend to draw on most of these groups. By the 1990s it was recognised that information technology was at the core of Inland Revenue operations. In 1980–2 the business side of the organisation had been opposed to COP, owing to doubts as to its feasibility, so boards and committees controlling the project were run by technical personnel. Gradually top management in the Revenue became increasingly involved in project boards, sponsoring projects from the business side, to the extent that one official noted: 'Over the last few years we have seen an almost total shift from information technology projects being driven by the information technology organisation to being driven by the business.' The decision to contract out the core functions of the Information Technology Office brought dramatic changes to the management structure, which had been carefully fostered during the 1980s.
The Department did not 'invite or encourage' (NAO, 1995c:9) an in-house bid, a decision which was criticised by the Inland Revenue Staff Federation; and the prospect of the EDS deal was 'unwelcome news to many staff in the Information Technology Office' (NAO, 1995c:20). About 55 per cent of staff participated in a one-day strike and there was a programme of non-cooperation with management from April 1993 to June 1994. Under the terms of the contract, 2,000 Inland Revenue staff were transferred to EDS. In 1993 around 300 staff in the Information Technology Office remained to carry out the retained functions, which comprised 'technical strategy, planning and finance' and the maintenance of the contract, with predicted running costs of £18 million per year. This figure was 'expected to reduce over time' (NAO, 1995c:9).

Contract relationships

The aim of the Inland Revenue during the 1970s was to use in-house staff for information technology work as far as possible, but to supplement them as necessary with consultants or contract staff. During the COP project,
expenditure on consultants increased from £2.1 million in 1983–4 to £8.2 million in 1985–6 and £11.5 million in 1986–7. Consultants accounted for about 12 per cent of the staff based at Telford in 1986 (NAO, 1987a:15). Support contracts with two outside firms, Computer Sciences and Pactel, were signed at the beginning of 1981 to review the implementation plan prepared during the feasibility study and to pronounce judgement on it: 'Both firms, having reviewed this detailed plan, said it was tight but achievable. They put their reputation on the line, and this was a useful mechanism for preventing slippage' (Matheson, 1984:100–1). Most of the consultants were employed through contracts placed in 1978 with these two firms of consultants and with ICL, the major supplier of computer hardware for COP. The COP project was highly reliant on a unique and close relationship with ICL. The start of the project coincided with the end of the government's preferred procurement policy, the Alvey Programme, scheduled for 31 December 1980. Matheson had recommended in a feasibility study that the project should be let for open tender. Several US firms were lobbying hard to be allowed to bid for the project (Morris and Hough, 1987:162). The study recommended an amalgam of a full mainframe system and a full distributed system. This would be a formidable task owing to the necessity of keeping the local system synchronised with the big one, even with a 24-hour delay. The problem was that no company at that time was capable of undertaking such a task. As one key official put it:

ICL certainly couldn't do that. They were offering to develop it but it would be brand new hardware, brand new software that they would have to develop from scratch.
They had no experience of distributed systems at all at the time and the big interface in the middle was horrendous, no robust systems recovery software, so you would have no protection against database corruption, no guarantee of integrity.…So I didn’t want to go to ICL, no doubt about it.

The Department of Trade and Industry, however, wanted to use ICL, on the basis that it was unthinkable that the government would place such an order with a foreign supplier, especially for a system containing data as sensitive as the tax records of the British population. So, in spite of these difficulties, the Inland Revenue was told to refashion the specification as something that ICL could do (‘It wasn’t presented in that way, but that was the truth of the matter’). The lack of storage capacity was to be overcome with the replication of stand-alone machines, communications between which relied on the manual transfer of magnetic tape. However, the mutually dependent relationship that grew up between ICL and the Revenue proved beneficial to both parties. ICL nearly collapsed in 1981, but the subsequent change of top management meant that the company prioritised the Revenue systems, bringing in Fujitsu as a partner when they identified a desperate need for more processing power than they


themselves could provide. The future of ICL was clearly tied up with the success of the project and Matheson had access from the beginning to whatever strategic thinking ICL was involved in. Experience of ICL machines before the project had been poor, with ‘minutes between failures rather than years—If we had an hour without a problem then we were doing well.’ Previous problems actually helped in the design of COP, causing systems designers to build an intermediary software system.

[This] forced us to think through the relationship between the application and what was available on the system in a way that perhaps hadn’t been done before.…Because we had done things in that particular way, we could see from the beginning some inherent flexibilities in the design of this system as we went forward, forced on us by the relationship with the supplier and the nature of the application.

Up until the 1990s, therefore, the Department appears to have maintained a number of successful contracting relationships. An NAO examination in 1987 suggested that the Inland Revenue was ‘now inextricably dependent on the two consultancy firms involved and [is] likely to remain so for some time’ and expressed concern over the rising costs of consultants (NAO, 1987a:17). However, the fees paid were broadly in line with those paid by other departments for the use of consultancy services and ‘the advantage of continuity of consultants was likely to outweigh any marginal savings that might arise through open competition’ (NAO, 1987a:17). In fact the Inland Revenue seems to have used consultants effectively during the life of the project by capturing the expertise it initially lacked, yet maintaining control within the Department. Consultants were made full members of project teams, directly answerable to Inland Revenue project managers.
Internal morale was maintained by vigilant monitoring of consultants, who ‘were sacked if they did not “fit in” to the project methods adopted at Telford’ (Dyerson and Roper, 1990b:11). The Inland Revenue saw continuity as a key feature of the arrangements and some individual consultants worked with the Inland Revenue for more than four years (NAO, 1987a:17). During 1988 and 1989 the contracting-out of the entire information technology operation was proposed by Steve Matheson. But the suggestion was received with nervousness by ministers and central agencies. Matheson suggested a formal partnership with an organisation that was 51 per cent owned by government, for example the original British Telecom, with a major player taking the other stake and a transfer of staff from the Revenue to the new organisation. Ministers shied away from the inevitable fact that the other major player would be a foreign firm (probably Electronic Data Systems or Computer Sciences Corporation). By the 1990s, however, perceptions of foreign involvement in domestic projects had changed. Consequently, following the government’s White Paper Competing for Quality in 1991, the Inland Revenue committed itself to


a ‘major initiative’ to market test services. Information technology was cited as the first of the main new areas on which attention would be focused for the immediate future (HM Treasury, 1992:63) and over half of the Department’s expenditure in 1991–2 on information technology development and operations was bought in, nearly £80 million in all, including £17 million on consultancy support. Information technology procurement therefore started to assume a more important role during the 1990s. Contract and Finance Divisions grew up to meet the revised role of the Information Technology Office (ITO). A Finance Division consisting of two staff throughout the 1980s had grown to 50 staff by 1994, and a Contract Division expanded similarly to reach 150. This development caused conflict with the General Contracts and Procurement Division of the Inland Revenue, with the head of Purchasing and Procurement arguing that such a key area as information technology should come under his jurisdiction. To a professional purchasing manager, the important specialism is the specifying and placing of the contract. However, the Contract and Finance Division within the ITO argued that contracting of information technology goods and services had become an increasingly significant part of the ITO’s role and that ‘if that doesn’t involve being responsible for procurement and purchasing, then what does it involve?’ They argued that for the ITO to develop any relationship with information technology suppliers, they needed to retain control of the contract. The Contract Division of the ITO won the argument, and responsibility for information technology decisions rested with them throughout the 1990s. The decision to enter into a strategic alliance was supported by the ITO for one principal reason. Market testing pushed the ITO into a contract model rather different from that developed under COP, with multiple suppliers rather than the delicate intertwining of consultants and in-house staff that had previously existed.
The plan to enter into an alliance with a private sector supplier was partially a response to the problems engendered by the management of multiple contracts. The ITO was keen to avoid the situation which it had observed in the DSS Information Technology Services Agency (ITSA), where work was split up into many contracts: one official commented, ‘We could not imagine how you could effectively manage a different supplier doing different parts of the IT.’ Other reasons for considering a major outsourcing deal included the realisation that the ITO would be unable to develop as a competitive information technology organisation, partly because it was prohibited from competing for business outside the Inland Revenue under the parliamentary vote system, and also because the Board of the Inland Revenue considered that if there were other customers, they would not have the ITO’s full attention. A further stated reason for considering a major outsourcing deal was the consistent difficulty in retaining the best information technology staff, still a constraint even with pay delegation. Before the EDS deal, information technology work in the Inland Revenue was centralised in the Information Technology Office. In 1993, after the


contract had been signed, it was further centralised, with more clearance points for any user in the organisation wishing to make changes or request work. EDS was allowed to communicate with the ‘customers’ of the ITO, namely the other divisions and the executive offices. But any piece of work that was negotiated with EDS without being routed through the Contract and Finance Division of the ITO was not to be paid for. This rule was built into the contract to ensure that the ITO obtained as much work as possible at the basic rate. But it increased the separation between those delivering the technology and the users, and seemed likely to constrain the ‘enhanced management responsibilities and accountability’ (OPSS, 1994a:49) of the heads of executive offices, which the establishment of the offices was intended to create.

Relationship with central agencies

The Inland Revenue enjoyed a consensual relationship with the Treasury after the initial controversy over the COP project. Matheson attributed this consensus to the building up of a reputation over the years, rather than to any special status deriving from the hierarchical relationship between the two departments. A Treasury official concurred: ‘I think that broadly speaking the Inland Revenue have got a track record on IT. They have huge complicated systems but on the whole they seem to produce them on time and to budget and they seem to run budgets well.’ The normal problems of technical oversight played a part in this relationship; one official observed that ‘The Treasury believe that they understand a lot more than they do.’ They were described as ‘smart people, they do ask probing questions’ but ‘they are not in a position to ask the right questions, they never ask the questions that we fear that they will ask.’ An official from the Treasury Audit Division spoke highly of the Inland Revenue Internal Audit Division and implied that for this reason he did not feel the necessity of visiting them very often.
The Expenditure Division overseeing the Inland Revenue was especially small, with only five staff including a secretary and administrative officer, and little professional knowledge of information technology. The Treasury took an interest in the framing of the ITO contract during the early 1990s, but by September 1994 had ‘pretty much lost interest’, according to one official. The ITO was required to feed information to the Treasury Expenditure Division about the EDS deal on a regular basis, but it was information that they would have prepared in any case for their own purposes. Greater concern was more likely to come from the Inland Revenue’s Finance Division because ‘the greater visibility the Treasury has with respect to EDS, in a sense the less flexibility the Department has to move money around so they are a little bit nervous about that’ (interview with Inland Revenue official). The NAO was a more important central actor as far as the Information Technology Office were concerned: as one senior official put it, ‘If you lose


any sleep, you lose sleep over the NAO rather than the Treasury.’ But the NAO produced only two reports on Inland Revenue computing, in 1987 and 1995 (NAO, 1987a; 1995c). The first was a generally positive report on the COP project and the second a sanguine report on the tendering and sale of the ITO. However, the NAO did participate in the tendering process and its views were evidently taken into account. In summary, the Inland Revenue’s relationship with the oversight agencies was one of trust, built up during the COP project. Regardless of how efficient and effective the systems of the Inland Revenue really were, their excellent reputation aided the Department in its dealings with oversight agencies. The Treasury and the National Audit Office intervened only to a minimal extent. This trust seems to have been transferred to the EDS partnership, with neither the Treasury nor the NAO showing significant concern.

Policy considerations

Matheson maintained throughout the course of the COP project that changes in the tax system were incompatible with the project: ‘The view we have taken (and ministers have acknowledged) is that major and far-reaching changes in the tax system cannot be implemented at the same time as the computer system for PAYE is itself being implemented’ (Matheson, 1984:95). The manager of any major computerisation project might make such a statement, but it is more difficult to ensure such arrangements are maintained. Matheson’s approach was crucial in obtaining acknowledgement from ministers. He argued that changing the tax system during systems development would mean a period where there was computerised working on PAYE in one part of the department and manual working under different tax rules in another; this would add risk, cost and time and would defer savings from the completion of the system.
Matheson even ensured that a prospective Labour government in the wings did not make plans for major legislative change either, by talking to Denis Healey on the telephone: normally civil servants could only communicate with the Opposition in a very formal way, but Matheson knew Healey personally, having previously been his Private Secretary. The Inland Revenue is unusual compared with many other departments in that its ministers have traditionally had a regular annual cycle of legislation, which further aided the programming of system changes in line with legislation. By 1994 there were four designated legislative staff working for ITO, who were consulted on any possible changes to the budget. They worked in separate accommodation, carrying out discussions under ‘lock and key’, and responding to queries by consulting carefully with other divisions within ITO. A major policy change, for example the introduction of a lower rate of income tax, involved a preliminary feasibility study by the Inland Revenue Directorate, the Operations Division and the Information Technology Office.


The results were incorporated into policy advice from the Personal Tax Division, including information on Exchequer and resource costs, analysis of the effects of the proposed change on particular groups of taxpayers, and comparison with other possible policy options. The implications for employers’ payroll systems (manual and computerised) would also be assessed. By the end of the 1980s it seemed to be accepted that legislative changes were to some extent programmed in line with the agency’s computer systems: ‘radical reforms of married couples’ personal taxation are to be phased in over a three-year period, in line with new Inland Revenue computer systems’ (The Times, 4 February 1988). The computerisation of PAYE created enormous flexibilities in some areas of legislative change. The introduction of independent taxation would not have been possible, at acceptable cost, without the national computer system and the national tracing system. After implementation, tax rates or bands could be changed overnight. In other areas, however, the computer systems introduced new inflexibilities. The programme for legislative change had to be planned further ahead than before, which has meant that Chancellors have since announced changes 18 months in advance. If policy changes necessitated significant modifications to computer systems, then it was deemed impossible to keep them secret while the systems were adapted, especially when large numbers of consultants or computer companies were involved. Major changes could take up to five years. In the 1990s the Inland Revenue embarked on the major legislative change which Matheson had been so anxious to avoid during the 1980s while the COP project was under way: a change towards self-assessment along the lines of the American system. The project was initiated in 1991 and completion was scheduled for 1996–7.
From that time onwards taxpayers became responsible for the information on which an assessment is made, either by calculating their own tax or by submitting a form containing all the information from which an assessment could be processed automatically. Such a change represented major organisational upheaval for the Inland Revenue.

Assessment of information technology in the Inland Revenue

After the modernisation projects of the 1980s, the Inland Revenue computer systems were considered (both by officials inside the Revenue and officials in other departments) to be the ‘Rolls Royce of government computing’, with all that this statement entails. That is, while the COP and CODA projects were regarded as successful, resulting in reliable systems, there was also a sense in which they were deemed to be expensive, perhaps unnecessarily luxurious. But the COP project finished within budget. Although Inland Revenue staff grew from 69,408 in 1986 to 71,300 in 1994, productivity in terms of receipts per member of staff rose steadily through the 1980s, from £1,436 in 1982 to £1,879 in 1991 (at 1990–1 constant prices) (HM Treasury,


1992:65; 1993:65). The operational performance of the Inland Revenue computer systems since the implementation of COP has been impressive, continuing at 99 per cent availability throughout the 1990s. There appeared to be a consensus that the resulting management structures for dealing with information systems were successful. Certainly the history of COP and CODA presents an extremely different picture from the experiences of the DSS, the SSA or the Internal Revenue Service. The careful building up of a committed and skilled information technology workforce over 15 years, the modest approach to systems modernisation, and the continuing commitment of the project’s ‘champion’, Steve Matheson, appear to have paid dividends. It is far too early to assess the success or otherwise of Inland Revenue computing since the signing of the EDS contract. One of the Department’s stated key objectives in forming the alliance with EDS was a ‘significantly improved speed of response in the development and enhancement of systems’ (NAO, 1995c:26). However, the contract contained no explicit requirement for EDS to deliver a specific improvement in service levels or development times. Instead the contract provided for the Department and EDS to agree targets and to build them into the Master Service Level Agreement; future service levels will therefore rely on tight and effective contract management. The contract terms were based on the notion of ‘existing levels of service’, which were taken to be those of the six months before contract signature (May 1994) or, if higher, the level of service in the twelve months before the start of that six-month period. As both of these periods included part of the period when staff were not co-operating with management (from April 1993 until June 1994), it is likely that service levels were not at their potential maximum and that the Department settled for a lower than normal standard of service.
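The baseline rule just described amounts to a simple calculation. The sketch below is a minimal illustration only: the function name and the availability figures are hypothetical, not drawn from the contract; it shows why the ‘if higher’ clause offers only partial protection when both measurement windows overlap a period of depressed service.

```python
def contract_baseline(six_months_before_signature, prior_twelve_months):
    """Baseline 'existing level of service': the six months before
    contract signature or, if higher, the level in the twelve months
    before the start of that six-month period."""
    return max(six_months_before_signature, prior_twelve_months)

# Hypothetical availability figures (per cent). If non-cooperation
# depressed service in both measurement windows, even the higher of
# the two sits below the system's normal potential (say 99.5).
baseline = contract_baseline(97.0, 98.5)
print(baseline)
```

Under these assumed figures the contracted baseline is 98.5 per cent, a full point below the system’s notional potential, which is the effect the NAO criticism points to.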
Furthermore, the contract excluded ‘EDS from any liability for consequential loss arising from a breakdown in computer services’, meaning ‘the Department would not be able to recover from the supplier either lost tax revenue or interest payments which they may be required to make on delayed refunds of tax’ (NAO, 1995c:31). The NAO’s legal advisers considered that it was not always the practice to exclude liability in outsourcing deals of this kind: ‘Some purchasers of information technology services were in a sufficiently strong position to insist upon compensation—if only up to a limited sum which can be covered by insurance’ (NAO, 1995c:31). Thus the 1970s–90s saw the rise and fall of technological expertise within the Inland Revenue. The COP project established a basic technological infrastructure and a management structure for the development of subsequent enhancements. The 1990s brought a raised awareness of the problems of procurement and contract management. But the contribution of information technology to tax administration now relies on a complex partnership that seems likely to last far into the future. It is widely accepted that the Inland Revenue will never develop or run their computer systems again.


Conclusions

The Inland Revenue’s modest plans appear to have paid off. The COP project was successful, especially when compared with the Internal Revenue Service’s Tax Systems Modernisation Project. The Inland Revenue systems continue to show excellent service levels. Organisationally, at least during the years of the COP project, information technology seems to have brought the two sides of the organisation closer together, with information technology recognised as a vitally important resource of the organisation and accorded high priority. The Inland Revenue appeared successful in creating a delicate balance between technical and business expertise during the COP project, employing collaborative teams of consultants, users and skilled information technology staff. At the time they were criticised by oversight agencies, but in retrospect their strategy seems eminently sensible. The Inland Revenue has proceeded almost free from oversight, its good early reputation, built up over the years, placing it in a position of trust with the oversight agencies. This trust now seems to be accorded to Electronic Data Systems: it is difficult to see which central government agency will intervene if serious problems with the EDS contract emerge in the future. It is possible to envisage a scenario where the oversight agencies, the Treasury and the National Audit Office, with their limited expertise in information technology and their reliance on legal and financial advisers, are impotent to intervene. Thus, through the radical outsourcing of their entire information technology operations, the Inland Revenue placed its trust (and to some extent the carefully established trust of the oversight agencies) in Electronic Data Systems. The future success of computing in the organisation will depend upon their handling of the contract relationship.
Even in Quinn’s (1992) euphoric account of radical outsourcing, strong emphasis is placed on the importance of control structures:

New internal structures—notably, vastly improved logistics systems, information systems that extend in depth into suppliers’ operations, sophisticated technical and strategic monitoring capabilities, and improved top-level expertise to craft and manage contractual relationships in detail—will become crucial.
(Quinn, 1992:80)

There is every indication that the Inland Revenue will have to heed such warnings in the future. Private sector companies calculate as a rough estimate that 5 per cent of a contract value should be devoted to subsequently managing the contract; for the Inland Revenue/EDS partnership the percentage is around 0.4 per cent (based on figures in NAO, 1995c:11), which would seem to increase the risk inherent in the contract. Furthermore, only half of one Inland Revenue staff member’s


time was devoted to the task of tracking future technical developments. The Inland Revenue, after its smooth ride over the last 20 years, may find technology more of a challenge over the next 20, with constant pressure to keep ‘ahead’ of EDS and to retain the technological understanding necessary to manage one of the largest information technology outsourcing contracts in the world.
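The arithmetic behind that comparison can be made explicit. This is an illustrative sketch only: the 5 per cent and 0.4 per cent rates come from the text above, but the contract value used here is hypothetical, not the NAO figure.

```python
def management_budget(contract_value_m, rate):
    """Spend implied by devoting a given share of contract value
    to managing the contract (both in £ million)."""
    return contract_value_m * rate

value = 1000.0  # hypothetical contract value, £ million

benchmark = management_budget(value, 0.05)   # private sector rule of thumb
actual = management_budget(value, 0.004)     # roughly the Revenue/EDS level

print(benchmark, actual, benchmark / actual)
```

Whatever the true contract value, the ratio between the two rates is fixed: the private sector rule of thumb implies roughly twelve to thirteen times the contract-management provision that the Revenue made.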

7 New players: government contracting of information technology

An increasing proportion of the information systems in both the US and British governments are now developed and maintained by private sector companies. Previous chapters have provided evidence of this trend. As the first chapter showed, information technology engendered a need for organised expertise, drawing teams of technically skilled personnel into government. A high proportion of such personnel have since been replaced by private sector computer companies, with the US government having contracted out around 50 per cent of information technology work by the 1990s. As Chapters 3 and 6 have shown, the UK social security and taxation agencies both embarked on major outsourcing contracts in 1994. The purpose of this chapter is to draw together government-wide evidence on information technology contracting to examine the history, nature and implications of this trend. Contracting out information technology services was a trend that started in private sector companies during the 1970s and accelerated rapidly during the 1980s and 1990s; private practice is examined in the first section. The second section describes the extent to which both private sector practice and technological developments have shaped the resulting computer service company markets in the US and Britain. The third section examines the regulation of the contracting out of information technology in the two governments, with an investigation of the relationship between contracting and administrative reform. The concluding section examines some of the distinctive features of government contracting of information technology which seem likely to shape the future of the computerised state. Private sector practice: changes in outsourcing relationships The contracting out of some information technology operations was a feature of information technology development generally in both private and public sectors. 
From the early days of computing it was recognised that there were economies of scale to be achieved if some companies specialised in the production of computer systems to carry out functions common to most large


companies; consequently a market developed for standardised payroll and pension systems and for financial management and accounting systems packages. These packages would vary in the amount of maintenance and support provided. They ranged from a payroll package in which a company’s data would be processed by the vendor company on its own premises to a word-processing package for which a telephone helpline would probably be the only support provided. More recently the practice of ‘outsourcing’ or ‘facilities management’ became widespread. Both terms are used to refer to the hiving-off of in-house computing and data-processing operations to third-party specialists. Sometimes a distinction is drawn between facilities management, which suggests simple and well-defined tasks and the choice of contractor based solely on price, and outsourcing, which emerged during the 1980s and involved contracts based on a range of less tangible benefits with a stronger relationship between client and contractor. But often the words are used interchangeably. Broad types of facilities management are:

• on-site management, where the company takes over the whole computer operation in situ and runs it;
• transferred facilities, where the entire computer operation is transferred to the company’s data centre;
• the use of an FM company to set up computer operations for the first time; and
• bridging services, where companies are reorganising their information technology operations.

The types of contracting of information systems may be broadly categorised as follows (although the terms continue to be used almost randomly within the printed media and even within the computer press):

• Ad-hoc consultancy means bringing in private sector consultants as and when required at any stage during a computer project. The contractor supplies skilled personnel who will be managed by the client company.
• Project tendering includes purchasing strategies which subject each decision to competitive tendering and includes specifying a development project which the contract company will undertake to supply by a certain deadline.
• Facilities management involves a supplier taking over a specific operation or function on a long-term basis, for example a computer centre.
• Systems integration has come to mean anything that involves a company ‘knitting together’ an alliance of software and hardware, sometimes already existent within the customer’s organisation and sometimes bought in from third, fourth or fifth parties.
• Partnership agreements include ‘preferred supplier’ arrangements (sometimes called the ‘Japanese model’, see Willcocks and Fitzgerald, 1994:15), where


the client turns to a given contractor first for information technology needs, as and when they arise, or ‘strategic alliance’ arrangements where a supplier will be chosen to supply all or a subset of a client’s information technology needs and information technology is seen as the supplier’s responsibility in a partnership arrangement. The closer relationships between vendors and clients implied by the last two categories of outsourcing arrangements are the most recent developments. Systems integration, described by some commentators as ‘a market born out of computer confusion’ (Business Week, 25 April 1988), became prevalent in the 1980s. Often it fulfilled the need of a company that had carried out prolonged and extensive outsourcing to co-ordinate systems already developed. The concept was popularised after 1985 when United Airlines Inc. offered IBM $300 million to patch together United’s disparate reservations computers into a more coherent hardware and software system and IBM labelled the contract ‘systems integration’. Systems integration contracts tend to take one of two forms. Some companies choose a ‘prime contractor’, to whom other suppliers are subcontracted; the prime contractor charges a premium for taking on a share of the risk of the project. Other users contract a number of suppliers in parallel, one of whom may be charged with project management. On the basis of survey evidence, European organisations in the early 1990s outsourced on average between 6 and 7 per cent of their information technology expenditure: 47 per cent of organisations outsourced some or all of their information technology (Willcocks and Fitzgerald, 1994:10). These average figures contain a wide range of contract relationships across the categories indicated above. In the early 1990s UK-located companies were much more likely to be outsourcing information technology than their European counterparts. 
However, by 1997 the European market seemed to be catching up, representing 26 per cent of the total worldwide outsourcing market (Computing, 27 November 1997). The trend away from ‘spot contracting’ and towards strategic alliances and partnership arrangements was a factor in causing information technology contracts of all kinds to increase in size and variety, with customers asking for long-term contracts which varied over time with demand. By the 1990s the average life of an information technology contract in Britain and Europe was five to seven years; in the US it was longer, with ten years not uncommon. Systems integrators often aimed to develop expertise in a wider range of their clients’ ‘systems’ than information systems alone, seeing information systems as intertwined with other service functions. Some of these contracts were seen as real partnership arrangements by the participants, with an element of risk-sharing, where those in charge of business operations will concentrate on building alliances and partnerships which bring specific skills and solutions to


support their plans…this is already moving facilities management beyond IT-centred deals to relationships with third party suppliers to provide and support specific processes.
(Ronald Bain of EDS-Scicon, Financial Times, 21 October 1992)

Thus a new outsourcing model developed: ‘a vertical cut of business process rather than the horizontal cut of information technology’ (partner of Andersen Consulting, Financial Times, 21 October 1992). Suppliers started to talk of partnerships with customers rather than service contracts, and contracts were being negotiated on new terms. For example, the Computer Sciences Corporation (CSC) entered an 11-year agreement with the retail company British Home Stores under which it took over the company’s computing and 115 staff but also worked with it to sell the combined expertise of the two organisations to the retail industry. By the 1990s the company Electronic Data Systems (EDS) was signing contracts under which it took on system development at no cost; in return it took a percentage of the business gains made by the customer. Its first contract of this type in Europe was in Sweden with the retail group Kooperativa Forbundet (KF), which EDS expected to be worth $1 billion over the following 10 years (Financial Times, 21 September 1993). The contract involved a traditional facilities management arrangement paid for by KF at normal market rates, under which EDS took over KF’s existing computing and 530 staff. But it also involved an agreement for the two companies to work together on a new information technology strategy. The Managing Director of EDS claimed that this type of contract would account for 70 per cent of EDS’s growth over the next three to five years (Financial Times, 21 September 1993). Thus when EDS bids for an information technology deal, the company’s long-term aim will be to gain control of a wider range of functions and some measure of profit sharing.
The core competencies argument

Through the 1970s organisations used outsourcing largely to remedy their lack of the necessary in-house expertise and technology to run their own computer operations. But as information systems formed a larger proportion of operational work and grew in complexity, companies came to realise that to carry out this function they would have to be specialists in information technology as well as in their original ‘core’ function. Concurrently the notion of ‘core competencies’ was developed by writers on management technique, notably James Quinn (1992):

Each manufacturer needs to evaluate every service activity in its value chain and its staff overheads, determine if it is ‘best in world’ at that activity, and, if it is not, consider outsourcing the activity to the best-in-world supplier.…As it selectively outsources its less efficient activities,

Contracting of information technology

129

it may leverage its own unique resources and talents to a much greater extent. (Quinn, 1992:208–9)

Following this strategy, Nike, the sports clothing ‘producer’, became primarily a research, design and marketing organisation. Quinn ascribed its compound annual growth rate to the fact that, even when number one in its market, Nike outsourced 100 per cent of its footwear production, thereby owning no production facilities and seeking to provide its greatest value at the preproduction (research and development) and postproduction (marketing and sales) levels, while ‘closely overseeing the quality and responsiveness of its production units’. Quinn did not specifically identify information technology as a key area for contracting out; indeed many of his examples were of private sector companies developing as core competencies the technological infrastructure that they had found necessary to set up to enhance their previously core activities. However, it was evident that Quinn’s argument might lead companies to consider whether information technology could form one of their core activities, at which they could be ‘best in world’, and many private sector companies looking for core competencies came to regard large parts of information technology development as non-core. Indeed, the core competency argument joined the other primary reason given for outsourcing: the difficulty of creating and retaining within an organisation the skills and resources necessary to carry out large-scale, technically based projects. A Computer Management Group survey in 1992 concluded that for most users of facilities management the motivating issues were the lack of internal skills or resources (36 per cent) and the ability to concentrate on ‘core business’ (27 per cent) (Financial Times, 21 October 1992).
Other benefits identified for outsourcing were cost savings (23 per cent) and better value for money, since information technology specialists could bring economies of scale, specialised expertise and the ability to release management time for ‘more important issues’. By 1997, a survey of the largest 500 firms, 60 per cent of which had some involvement with outsourcing, found that reducing the cost of the service being outsourced was ‘easily the most popular short-term gain’, although greater flexibility and access to outside expertise were other popular reasons (Computing, 28 August 1997).

Dangers of outsourcing

By 1992 there were some signs that the seemingly unstoppable trend towards outsourcing of information technology in the private sector had begun to cause concern. The Financial Times of 5 October 1992 reported evidence from US researchers Frost and Sullivan suggesting that before contracting out an activity, companies should ask themselves how it affected their competitive strength in their main markets: ‘Those activities which are unique to the company or based on company-specific skills provide competitive advantage and should be kept in-house.’ Such considerations highlighted the point that contracting out in this area was not as simple as it might be for office cleaning or canteen facilities. Another report, from Organisation and Technology Research in 1994, suggested that although the trend for outsourcing was continuing to increase, it could go into reverse after 1995: ‘Other European countries consider the UK distinctly over-enthusiastic about the practice, and warn that companies are throwing the technological baby out with the bath water’ (Independent, 17 January 1994). However, the growing size of the European outsourcing market, noted above, suggests that such warnings went unheeded.

Problems associated with the strategic alliance type of contracting are difficult to establish and relatively unexplored. In a CMG survey (Financial Times, 8 September 1993), 40 per cent of those interviewed said that no real cost reductions could be proved. Many of the large-scale contracts had not run their course and there were few documented examples of companies trying to staff and run a computer centre after it had been run by a facilities management supplier. Once companies have divested themselves of in-house expertise, they have no choice but to continue to outsource. This increasing reliance on supplier relationships explains why market commentators continued to suggest that outsourcing would grow, even as it was recognised that it could cause problems. As Ronald Yearsley, former Deputy Chairman of BIS Information Systems, explains:

Information technology reminds me of the story of the asthmatic woman with a dog. She gave it away, then found a cure for her asthma and wanted it back. Its new owner said it was his now, and she couldn’t have it back. The IT phenomenon is like that; you can’t have it back.
Once you’ve lost it, you may say that you’re losing control of your business as a function of this. (Independent, 17 January 1994)

There were signs that recognition of such problems caused some companies in the US to move away from such radical outsourcing. For example, by 1992 the number of new contracts being signed by US banks with companies like EDS, IBM and Andersen Consulting had stopped rising (Economist, 12 September 1992). Although the benefits originally perceived continued to exist,

[m]anagers lose responsibility for what has become a large chunk of their banks’ costs. And outsourcing leads banks to discard in-house technological expertise just when banking is becoming an increasingly high-tech business. When banks were scrabbling to restore profits and rebuild capital, the cost-cutting mattered more, because its gains were immediate. Now that the industry is in a better financial condition, bank managers are looking harder at the drawbacks. Like other service firms, banks have to be low-cost, high-quality suppliers. Technology now drives both improvements in customer service and new products. (The Economist, 12 September 1992)

Lacity and Hirschheim concluded from detailed studies of outsourcing in 13 companies that public information sources often portrayed an overly optimistic view of outsourcing. Reports were often made during ‘honeymoon periods’ when clients first signed contracts, and tended to report projected savings rather than actual savings, under-representing outsourcing failures because few companies wish to advertise a mistake (Lacity and Hirschheim, 1993:256). In 1997 a report from the consultant Compass Analysis reported that ‘companies were pouring too much money into over-priced outsourced services’ and that organisations should be improving their benchmarking procedures in order to increase their control over information technology contracts (Computing, 11 December 1997).

Overcoming the problems: retaining control

Companies attempted to minimise the dangers of outsourcing by identifying those areas indispensable to competitive advantage. As the European IS Manager of ICI Paints revealed:

In each of the areas where we have talked internally about outsourcing we have highlighted some system or application there is no way we want to give away control of. This is either because key, sensitive information comes from it or because it gives a competitive advantage, probably temporary, that therefore we really do not want to give away. (Willcocks and Fitzgerald, 1994:51)

In this way, rather than dismissing information technology as ‘non-core’ (in Quinn’s terms), companies identified those parts of the information technology operation that were core (what Willcocks and Fitzgerald call ‘a strategic differentiator’, 1994:56) and those that were not.
P&O European Ferries, for example, considered that its central reservation system provided an obvious example of a ‘strategic differentiator’ that should not be outsourced (Willcocks and Fitzgerald, 1994:62). But any such classification would be temporary, as the comment from ICI suggests, and the possibility of identifying information technology activities as candidates for outsourcing in the future would remain. Companies tried to guard against the problem that if information technology is identified as non-core and outsourced completely, future opportunities to derive strategic advantage from information technology are reduced. Furthermore, Willcocks and Fitzgerald (1994:54) found that the general rule ‘never outsource a problem’ held up well in detailed case studies: ‘Unless an organisation is experienced in aligning IT with its business strategy, it may find at a later date that it has inadvertently lost control of a key competitive resource: inexperienced companies should be very cautious.’

It is not possible to produce a clear generalisation of private sector doctrine. As might be expected, different organisations have chosen different solutions at different times. What were viewed by some companies as ‘useful commodities’, and therefore prime candidates for contracting out, were viewed as strategic by other organisations. While ‘total outsourcing’ deals received the most publicity, the number of private sector organisations that spent a high percentage of their total information technology budget on outsourced services was relatively low. Only 12 per cent spent 70 per cent or more in 1993, and 65 per cent spent 20 per cent or less: ‘partial outsourcing is the norm: total outsourcing is rare and likely to become more so’ (Willcocks and Fitzgerald, 1994:6). It was evident that successful companies remained watchful of the inherent dilemmas in all types of outsourcing: project tendering, buying in, preferred supplier arrangements and strategic alliances. The most significant risk seemed to be the loss of control over those information technology activities that form a vital, integrated part of a company’s core business.

Technological developments and the changing shape of the information technology industry

In response to a growing outsourcing market, US corporations pioneered the provision of facilities management in information technology. Companies like Computer Sciences Corporation (CSC) and Electronic Data Systems (EDS) adapted their operations to technological developments, maximising the markets available and further fuelling the trend towards outsourcing. EDS started in the 1960s as a data-processing company, running payroll packages for other companies.
As the market expanded, suppliers proliferated, ranging from these and similar software and computer service companies to the consulting arms of the big accounting firms such as Andersen Consulting. Until the 1980s outsourcing in the US had suffered from ‘the stigma of being a desperation measure used by companies in financial trouble’ (IDC, 1991:3.14); client companies often waited until they were in dire straits and then looked for an easy way to cut or contain costs. During the 1980s this perception of outsourcing reversed, with facilities management viewed more as an investment than a last-ditch attempt to achieve profitability. By 1990 the revenue that US-based companies derived from the US market was about $3,500 million, twice the size of the European FM market. By 1994 the size of the total US outsourcing market was estimated at $12.7 billion, about 30 per cent of the global total (Veronis, Suhler and Associates quoted in Willcocks and Fitzgerald, 1994:10). EDS was the top outsourcing vendor with 40 per cent of the market (Lacity and Hirschheim, 1993:14). The next three top outsourcing vendors were IBM, Andersen Consulting and Computer Sciences Corporation, all with over 10 per cent of market share, followed by DEC with 7 per cent, KPMG with 4 per cent and AT&T with 4 per cent. Within this total, EDS was the top facilities management vendor, with 50 per cent of the facilities management market.

The British outsourcing market developed later, but grew from a level of virtually zero in 1984 to around £650 million by 1992 (Independent, 17 January 1994). In 1990 the British market was assessed as ‘about four or five years behind the US’ and seven times smaller (IDC, 1991:3.14). The same comparison in 1986 would have revealed a much less developed UK market compared to its US counterpart. But the use of facilities management was growing rapidly in Europe by the 1990s and the UK accounted for more than a third of the total (Financial Times, 21 October 1992). By 1996 outsourcing for software and services had reached a total of £1.7 billion (a 45 per cent increase from 1995) and was expected to grow by an average of 24 per cent over the next three years, reaching a value of £4 billion (Computing, 30 May 1996). In Britain by 1992 there were over 100 companies claiming to be facilities management suppliers. In 1993, the leading vendors in the UK computer services market were Hoskyns (bought by Cap Gemini Sogeti), with an estimated revenue of £103 million and 33 per cent of the market share, followed by AT&T Istel and EDS with 12 per cent each and Sema with 8 per cent (IDC, 1993; Willcocks and Fitzgerald, 1994:154). Some companies, such as Cadbury-Schweppes and Barclays Bank, had set up their internal information technology operations as semi-autonomous facilities management suppliers. Barclays went further than any other UK clearing bank in attempting to sell to other companies the computer expertise of its trading business, Barclays Computer Operations, gained over 30 years of running one of the country’s biggest computer systems. However, increasingly the leading vendors in the British market were not UK based.
By 1996, GEC-Marconi S31 was the only UK-owned company in the software and services industry top ten, compared to seven in 1985 (Computing, 30 May 1996). In both countries, vendors previously providing only hardware and software, such as IBM and DEC, also entered the market, expanding at a rate that caused Dataquest to predict in 1992 that ‘within five years, facilities management revenue will exceed mainframe revenue in the UK’ (Financial Times, 21 October 1992). Established hardware manufacturers turned to systems integration as a growth area while hardware sales slumped. The long-accepted figure of 80 per cent of information technology spending on support and maintenance stayed constant (Financial Times, 16 March 1994). In this sense systems integration meant a move for the largest computer companies from goods (in the form of hardware or software) towards services supply. Many of the former giants of mainframe sales were rescued by this trend; IBM, DEC, Groupe Bull, ICL and Unisys all expanded systems integration functions during the period from the mid-1980s to 1995. Sema, for example, aimed in 1989 to achieve a 60:25:15 revenue split between systems integration, facilities management and products. By 1993 the ratio was 70:20:10 and the company announced a 28 per cent increase in pre-tax profit in December 1993 (Financial Times, 1 March 1994). Unisys was generally perceived to have rescued itself from heavy losses in 1989, reaching a profit of $565m in 1993, with services and systems integration revenues growing by 19 per cent in 1993 (Financial Times, 4 May 1994).

The development of computer networks and their reliance on telecommunications engendered a further change in the nature of companies providing information technology goods and services. Telecommunications companies began merging with computer companies in global alliances that could deliver services in all those countries where multinational companies had offices: ‘The revolution in the world’s information and communications industries is like a dance of giants, with corporate heavyweights trying a range of partners in the hope of finding the perfect match’ (Financial Times, 17 May 1993). Electronic Data Systems, by 1992 the largest computer consultancy company in the world, sustained a protracted flirtation with Sprint, the telecommunications group which is America’s third largest long-distance carrier, but the alliance never took place. Companies providing services around the Internet also grew rapidly, and 1997 saw the buyout of CompuServe by AOL, making AOL Europe’s largest Internet service provider, with over 1.5 million European subscribers (Computing, 11 September 1997). Thus a combination of technological change and the increased complexity of organisations’ information technology needs fuelled a tendency for successful computer companies to increase in size. Research and development in information technology now takes place almost entirely within the private sector and only the largest companies possess the resources to carry it out.
Acquisitions have fuelled this trend, with AT&T acquiring ISTEL, ICL acquiring CFM, and Cap Gemini Sogeti acquiring Hoskyns and 27 per cent of Sema. Although vendors of information technology services almost doubled in number between 1987 and 1990 (IDC, 1993: report quoted in Willcocks and Fitzgerald, 1994:155), with many existing US companies entering the European market, some commentators cast doubt on the market’s competitiveness. Competition was fierce for smaller companies, but less so for large, general vendors who could compete in all sectors. Willcocks and Fitzgerald (1994) observed the development of a two-tier market with ‘a few very large, increasingly global players offering the whole range of outsourcing services, and secondly, an increasing number of smaller, niche vendors’. Thus as the market continued to grow in terms of the number of companies, the biggest got bigger. By 1995 ‘even IBM and Digital Equipment Corp., out of favour with investors for most of the 1990s, are Wall Street darlings’ (Business Week, 28 August 1995). IBM’s estimated five-year earnings growth of 12.6 per cent and Digital’s of 10.3 per cent caused Business Week to conclude: ‘In computer services, it seems, bigger really is better.’

The growing strength and size of the leading computer vendors distinguish the computer services market in the 1990s. A CIRCIT policy research paper (Houghton, 1991:20) suggested that multiple vendors are not necessarily a sufficient condition for market efficiency and that it is possible in oligopolistic markets for large players to ‘exercise market power and reap economic rents’. The author pointed out that, using the rule of thumb that a market is an oligopoly if four companies account for more than 40 per cent of it, the US and UK computer services markets are both oligopolistic. In addition, in the US especially, the information technology industry has established itself as a serious political force. In 1996 the industry gave $7.3 million in campaign contributions, at a time when Congress was set to consider 13 pieces of proposed legislation affecting information technology provision. Several of the major companies had set up their own Political Action Committees, the US method of legally channelling funds into politics. The most significant funder was Electronic Data Systems (EDS), which spent $238,000 in 1996 through its Political Action Committee, followed by Texas Instruments and Computer Sciences Corporation. Far greater sums were spent on direct lobbying through specialist lobbying firms: IBM spent $4.9 million on lobbying, Texas Instruments spent $3.6 million and EDS spent $1.8 million. While the political effectiveness of such expenditure is in general debatable, there have undoubtedly been some successes. In 1995, for example, Silicon Valley firms united to defeat Proposition 211, a proposed Californian state law that would have made it easier for investors to bring lawsuits against IT firms if they failed to perform as predicted (Computing, 11 September 1997).
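The four-firm rule of thumb cited above amounts to a simple concentration-ratio calculation. The sketch below applies it to the approximate 1994 US vendor shares quoted earlier; the share list is illustrative and rounded, not precise market data.

```python
def cr4(shares):
    """Four-firm concentration ratio: the combined share of the four
    largest firms, in percentage points."""
    return sum(sorted(shares, reverse=True)[:4])

# Approximate 1994 US outsourcing shares cited in the text:
# EDS 40; IBM, Andersen and CSC just over 10 each; DEC 7; KPMG 4; AT&T 4.
shares = [40, 11, 11, 11, 7, 4, 4]
ratio = cr4(shares)
print(ratio)       # 73
print(ratio > 40)  # True: oligopolistic by the rule of thumb
```

On these rounded figures the top four firms alone hold nearly three-quarters of the market, well beyond the 40 per cent threshold the rule of thumb uses.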
Information technology contracting in the US federal government

In line with private sector practice, a growing proportion of the computer systems within the US and British governments was maintained and developed by private sector companies. This trend was shaped by administrative reform and the regulatory approach within the two governments, so the contracting out of information technology by American and British central governments is examined separately here. In the US, government by contract has been a distinctive feature of federal administration throughout the twentieth century. While this characteristic may have originated in the huge increase of defence expenditure during and after the Second World War and the Cold War (Weidenbaum, 1969; Margetts and Dunleavy, 1994), it also fits with the resistance to ‘big government’ and antagonism towards bureaucracy in general that has always prevailed in American political culture. Bureaucratic expenditure is less unpopular if it is directed at expanding private sector business. The outsourcing of government computing has been at the forefront of this trend, fuelled by the equivalent development in private sector companies

Table 7.1 Commercial services as a percentage of IT expenditure in the US federal government, 1982–93

Source: Calculated from data provided on disk by the Office of Management and Budget, 1993. Notes: The total commercial services budget for 1993 was $12 billion. ‘Commercial services’ indicates money paid to private sector companies for information technology-related work.

outlined above. As Table 7.1 shows, the proportion of commercial services to in-house information technology services over the period 1982–93 started at 39 per cent and grew modestly but consistently, settling at around 48 to 49 per cent from 1988. Commercial services continued to make up the largest part of the federal government’s information technology obligations when compared with personnel, capital equipment and operating costs (OMB, 1992:I–5). The proportion of information technology work contracted out by the US federal government was thus consistently higher than the average figure for private sector companies in both the US and Britain noted earlier. Table 7.2 shows that within the major departments the figures have been fairly consistent over time, while varying greatly across departments, indicating that most departments have become accustomed to

Table 7.2 Commercial services as a percentage of IT expenditure across major departments in the US federal government, 1982–92

Source: Calculated from data provided on disk by the Office of Management and Budget, 1993.


a certain level of contracting out which is inclined to grow over time rather than fall.

The federal government computer consultancy market

From the 1970s onwards computer companies clustered around the federal government agencies in Washington, often somewhat derisively termed ‘Beltway Bandits’ (see, for example, Garvey, 1993). By 1990 the federal government accounted for 38 per cent of the worldwide facilities management revenue of US-based vendors, compared with 35 per cent for US commercial operations and 7 per cent for state and local government (IDC, 1991:3.8). Many of these organisations were entirely reliant on federal agencies for their business. One example was Federal Systems Inc., originally a subsidiary of IBM, with around 13,000 employees and revenues of around $2.2 billion (Dow Jones, 13 December 1993). About 60 per cent of Federal Systems’ business was Department of Defense-related and the remaining 40 per cent consisted of systems integration applications for civilian agencies, including the Federal Aviation Administration, the US Postal Service, the Internal Revenue Service and the Social Security Administration. Another example, Computer Data Systems, with 3,800 employees and a revenue of $181 million, derived 95 per cent of its revenue from providing the federal government with software development and systems integration, including a $370 million contract over seven years with the Department of Education to manage the federal Direct Student Loan Program (Washington Post, 18 April 1994). The downsizing of the Defense Department during the early 1990s indirectly increased the fierce competition for government contracts, with ex-defence contractors flooding into the civilian market. Many defence contracts were heavily information technology-based, making it easy for contractors to switch markets.
The proportion of defence versus non-defence expenditure on information technology remained almost constant until 1989, but after that the ratio dropped dramatically (see Table 7.3). While information technology expenditure in civilian agencies showed steady and increasing growth in the 1990s, in the defence agencies it remained constant after 1989.

Table 7.3 Defence expenditure as a percentage of IT expenditure in the US federal government, 1983–93

Source: Calculated from data on disk provided by OMB and OMB (1993c:I–16). Note: 1 Rounded to nearest billion.


One procurement official interviewed observed the effect on the market as follows:

The computer procurement marketplace, especially telecommunications, is an intensely competitive business. Everything’s competitive but people are looking to gain market share and market penetration because they are losing it in defence—it has always been competitive but it is becoming more so now.

Institutional pressures: the Competition in Contracting Act

The importance of ‘competition’ runs like the name in a stick of rock through the federal bureaucracy. This prevailing attitude reached a peak during Reagan’s presidency, when privatisation received an ideological thrust and institutional mechanisms that encouraged or even forced managers to contract out were put into place. For example, ceilings were placed on the hiring of full-time equivalent staff, so that increased expenditure could be more easily justified to Congress if it took the form of contracting, when costs would fall into programme costs, upon which no ceiling was placed, rather than being conducted in-house. Agencies were given annual targets for contracting out, set as the percentage of their operations to be privatised. Such targets were implemented through Circular ‘A-76’, produced by the Office of Management and Budget (OMB), which ‘established Federal policy regarding the operation of commercial activities’ (OMB, 1983:I) and specified a mandatory procedure for reviewing activities currently carried out in-house and conducting a cost-comparison analysis to decide whether it would be more economical to contract them out to the private sector. The A-76 was always controversial, with federal employees objecting to the contracting out of government tasks and jobs and private companies continuing to push for the government to outsource more work.
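The cost-comparison step at the heart of an A-76 review can be illustrated schematically. The function below is a minimal sketch assuming a single minimum-savings threshold; the dollar figures and the 10 per cent rate are hypothetical illustrations, not the circular’s actual rules.

```python
def contract_out(in_house_cost, contractor_cost, min_savings_rate=0.10):
    """Schematic A-76-style decision: contract out only if the contractor
    bid undercuts the in-house cost by more than a minimum-savings margin."""
    return contractor_cost < in_house_cost * (1 - min_savings_rate)

# Hypothetical reviews of a $10m in-house activity:
print(contract_out(10_000_000, 8_500_000))  # True: the bid is 15 per cent cheaper
print(contract_out(10_000_000, 9_500_000))  # False: only 5 per cent cheaper
```

The point of such a margin is that switching to a contractor carries conversion costs of its own, so a bid must beat the in-house figure by a clear amount before contracting out is judged more economical.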
The A-76 continued in use up to the 1990s, although plagued by implementation problems, charges of questionable savings and, in some agencies (for example the Social Security Administration, see Chapter 4), resistance. One OMB official described its enforcement as ‘a source of unending frustration’ for OMB (Federal Computer Week, 3 August 1992:4). During this time the pressure towards contracting out government functions, combined with the already prevalent contracting out of information technology systems by the private sector, meant that information technology projects led the way to widespread outsourcing. Oversight agencies, especially OMB, applied further pressure in their dealings with agencies: as one official put it, ‘In any case in which you could contract out, OMB directed agencies to shout “Contract out”! It gave that kind of guidance.’ The rise in outsourcing of information technology was accompanied by concern for the competitiveness of information technology contracts, leading to special treatment for information technology in the drafting of the


Competition in Contracting Act (CICA) in 1984. The Act originated from a concern with procurement in general, but made specific provision for the contracting out of information technology goods and services. The Act endeavoured to reduce sole-source tendering by raising the status of procurement through competitive bidding. Since then government agencies have been subject to a barrage of rules when they consider making major information technology acquisitions. They must first publish a ‘Request for Proposals’ (RFP), and bidders respond with written proposals and sometimes live demonstrations. The agency would then select the vendor who appeared to satisfy requirements, the criteria for which were laid down in the 744-page Federal Acquisition Regulations (FAR) specified under the Competition in Contracting Act, thousands of pages of supplementary rules for specific agencies and an additional document, the Federal Information Resources Management Regulation (FIRMR), governing computer-related procurement. The Competition in Contracting Act also established a new mechanism to increase pressure on agencies to make tenders competitive: it set up a separate tribunal for protests by vendors against agencies over information technology contracts. While the GAO Board of Contract Appeals, already in existence for general government procurement, was and is an informal tribunal-like affair, the new GSA Board of Contract Appeals, set up for the regulation of the procurement of information technology goods and services, is a formal court with judicial procedures. Unsuccessful vendors were able to protest a contract in the court if they thought they could prove the tender was not awarded competitively.
This facility was viewed by one of the drafters of the legislation as particularly important for information technology contracts:

Competition can have an impact on the IT industry and on what the government gets much more than in many other fields because you don’t just have a static number of competitors or a number that changes very slowly; particularly with systems integration the capital investment required is so small that firms are being born and dying within very short spaces of time, so competition becomes more important than in other areas.

Ironically, however, the new regulations had the effect of further increasing the size of government information technology contracts, thereby restricting the number of companies that could compete for them. Procurements were not allowed to continue while a protest took place, so any suspected violation of the rules had the potential to delay crucial aspects of any project. As a result of the growing propensity of losing bidders to protest awards in costly court battles, the risk of delay in the procurement process increased. Agencies responded by consolidating more and more contracts into so-called mega-contracts, giant projects that pulled together work that previously might have

140 Information technology in government

been broken down into smaller contracts. Once such a contract was awarded, future delays could be avoided. The larger the contracts became, the less possible it was for companies under a certain size to bid for them. Thus the CICA fuelled the trend for huge systems integration contracts and may have indirectly caused the average size of vendor companies to increase.

This upward trend in the size of federal information technology contracts had a tendency to spiral. As the government made more mega-contracts, companies bidding for them had more at stake. The cost of preparing the bids soared into millions. The proposal for a medium-sized bid in 1993 consisted of a pile of paper around 10 inches high. Unsuccessful bidders, having spent literally years preparing proposals, had every motivation to protest the bid, and by the 1990s the GSA Board of Contract Appeals received several hundred protests per year, of which it handled about 150. During its 12-year tenure, it handled more than 2,500 cases (Government Computer News, 26 August 1996). The process of preparing the response to the RFP was so lengthy and detailed that there was every chance of some kind of violation. Furthermore, as one official experienced in government procurement observed, computer companies have long shown a greater willingness to bring legal action than other types of company supplying goods and services to government:

Even the telecommunications industry—if you look at how MCI was built, it was built on litigation, they took on AT&T in the courts, they are not afraid to litigate to go to the GSA BCA or the courts and it is just a more intense competition in terms of the cut-throatness of the industry and the willingness to litigate, to go into legal action, it makes it different.

The number of lawyers specialising in procurement protests increased about fivefold in the years between 1984 and 1988 (Washington Post, 16 July 1988). In 1988, a complex protest could cost $500,000.
In some cases federal agencies and winning bidders were paying protestors to go away, to avoid the delay caused by the protest process; for example C3 Inc. received $400,000 from the Census Bureau in return for dropping their protest over one $80 million computer contract. The three unsuccessful bidders were paid a total of $1.1 million (Washington Post, 16 July 1988). A successful FBI contractor paid Falcon Systems and Unisys Corporation undisclosed sums to drop their protests against a $220 million contract award; SMS Data Products Group received $500,000 from the Environmental Protection Agency to withdraw its protest over a $72.8 million purchase of computer equipment. The GSA BCA objected to settlements like this but a federal appeals court in 1987 overruled the board saying that the board ‘abused its discretion’ by rejecting such settlements (Washington Post, 16 July 1988). In response to this kind of action, some companies allegedly filed protests solely to get settlements, a practice known in the industry as ‘fedmail’.

Contracting of information technology

141

By 1993 the resultant upward spiral in the size of the contracts awarded had pulled federal agencies out of line with practices in the private sector. As noted earlier, private sector companies in the US were beginning to consider the dangers of radical outsourcing. A report in Computing (27 November 1997) suggested that ‘smaller deals are favoured now’. Certainly the size of federal computer contracts was by now distinctive. One interviewee from a systems integration company suggested that private sector companies

don’t go in for these mega-bids.…When they make a contract they negotiate for each system separately. They will come back and negotiate more often and it is a quicker turn-around time because they are not asking for the kind of response that the government asks for. The government wants everything guaranteed for however long the contract lasts for and that might be 10–12 years. It is very hard to guarantee prices for that kind of time.

Some commentators recognised that the consistently high level of such large-scale outsourcing was bringing the risk of ‘hollow’ information technology divisions, without the necessary expertise to oversee the huge number of contract relationships generated by such a strategy. One official with many years of experience in an oversight agency described the cause of the problem as follows:

first of all our inability to manage contractors. It wouldn’t be unusual to find a staff of five people overseeing 500 contractors and you’ve lost control. We’ve seen a lot of cases of GAO going in doing audits and finding sail boats bought with money…And people were moving so quickly… ‘Sure, I’ll sign it—you did the work’ but also somebody in the government needs to have some knowledge of how programmes are being managed. You can’t just give that all away, there has got to be a balance.
The specific problems that regulation of procurement and consequent trends in contracting engendered for information technology projects in the federal government reached a crisis point towards the end of the 1980s. The enormous detail in which the specification of contracts had to be conducted under the FAR, and the fear of protest, meant that the time taken to make procurement awards had increased to the point where the average buy took 18 months and a major buy four years. Agencies faced a dilemma: whether to parcel contracts into discrete units, each of which had to be awarded separately under competitive tender and run the risk of protest, or whether to award large contracts which if protested would be delayed for several months. The Air Force took the latter option by awarding a huge multi-billion dollar contract for microcomputers; the award was made
to the company Desk Top 4, was protested and was hung up in legal dispute for nine months.

In summary, the regulations placed on procurement of information technology became increasingly numerous and severe. By the 1990s the federal government employed 142,000 workers dedicated to procurement in general, recording about 20 million contract actions each year. The burgeoning set of rules that had to be rigorously applied meant that the procurement process was extremely lengthy. By 1993 the procedure for buying a personal computer included obtaining 23 signatures and took over a year (NPR, 1993). In 1992 the President of Fedinfo, a company marketing information systems to the federal government, suggested that, were Bill Clinton to be elected, one of his most important tasks would be to take a ‘long look at the entire procurement process’. Significantly, especially for the head of a private sector IS consultancy, he asked ‘Has outsourcing and employment of systems integrators gone too far, considering the reliance that agencies like Defense, Energy, the EPA and NASA have come to place on outside contractors?’ (‘An IRM plan for the Clinton Administration?’, Government Computer News, 8 June 1992:66). He also called for the adversarial relationship between agencies and their contractors to be reoriented into more of a partnership, claiming that procurement vehicles that emphasised lowest cost as the basis for award were impeding the development of trust and longstanding relationships between the two organisations. In effect it was a call for the type of relationship that was producing effective results in the private sector.
In this respect his pleas echoed those of Stephen Kelman (1990), who in Procurement and Public Management argued forcefully that the procurement system (and especially the rules guiding computer procurement) stripped officials of the ability to use contracting as an effective tool, and that government was losing potential benefits from buying rather than making.

The incoming Clinton administration in 1992 seemed to bring potential help. First, it finally released agencies from some of the institutional pressures to contract out information technology. An OMB official observed that the new administration brought a move away from the ideological pressures for contracting out and an attempt to implement a more pragmatic approach. In 1992 the government created inter-agency councils that would be responsible, in part, for reviewing agency decisions on activities that should be kept in-house (Federal Computer Week, 3 August 1992:4). The National Performance Review of 1993 produced a government-wide report on procurement with a special focus on information technology procurement as a key area. Their recommendations came to fruition in 1996 with the National Defense Authorization Act (the Cohen Act) which laid out new requirements for agencies to follow when acquiring information technology. GSA was to phase out its procedures for delegating authority for specific purchases. Bid protest jurisdiction of the GSA Board of Contract Appeals was to go (although GAO would remain as a contracting tribunal where information technology cases could be heard), as was the Federal IRM Regulation (FIRMR). Replacing
the former GSA framework was a new order, headed by OMB, which would be able to retaliate against a poorly performing agency by reducing its budget and, in extreme cases, by taking away an agency’s procurement authority by naming an ‘executive agent’ for contracting. Most importantly, perhaps, the Cohen Act introduced modular contracting, ‘an alternative to the current preference of some agencies for single huge contracts or “grand design” projects’ (Government Computer News, 1 April 1996). With modular contracting, it was intended that an agency’s IT needs would be satisfied ‘in successive acquisitions of interoperable increments’. The new law authorised Federal Acquisition Regulation changes to implement this incremental buying concept. The Cohen Act also aimed at a ‘performance and results-based management’ system, which placed great emphasis on quantified measures of productivity.

It is too early to tell whether the Cohen Act will revolutionise information technology contracting in the US federal government. But certainly, by the end of 1996 there was plenty of evidence to suggest that without the new approach, outsourcing contracts would have continued to increase in size. In 1995 the NASA administrator Daniel Goldin signed a sole-source justification that brought all space shuttle contracts, previously handled by at least 12 contractors and numerous subcontractors, within the control of a single contract worth $6.2 billion awarded non-competitively. The winner was United Space Alliance (USA), a new joint venture uniting Lockheed Martin Corporation and Rockwell International Corporation. In 1996, USA terminated the contracts of two major subcontractors and their 1,000 jobs were incorporated into the Alliance (Government Computer News, 4 November 1996).

In summary, therefore, the US experience illustrates the inherent dilemmas of information technology contracting and how difficult it is to overcome them with central oversight of procurement.
Especially with the large systems integration and partnership contracts, regulation can work against the benefits to be gained from such a relationship. And the very fact that expertise has been run down inside federal agencies, through prolonged contracting, makes oversight of contracts more difficult, with specialists in general procurement ill-equipped to deal with the particular nature of information technology contracts.

Information technology contracting in the British central government

In Britain, government contracting out in general has traditionally been far less widespread than in the US, particularly before the 1980s: ‘QGAs and local governments overwhelmingly took the strain of expansion, and the “core competencies” focus of the Civil Service meant that in-house provision predominated’ (Margetts and Dunleavy, 1994:20), with the exception of the defence agencies. Until the end of the Second World War most public
contracts awarded to the private sector involved specialised or unusual services, mostly within defence. Public sector agencies still lagged behind the private sector, whose ‘patronage of contractors during the 1950s and 1960s helped many fledgling industries prepare themselves for the explosion of work in the 1970s’ (Ascher, 1986:24), when contracting was used widely in local government to overcome labour shortages and to circumvent industrial action.

Only in the 1980s did a strong thrust towards contracting out of central government administrative functions appear, alongside various other trends which came to be summarised under the title of New Public Management. In 1980 the Prime Minister announced it was the government’s policy to transfer work out of departments whenever ‘this is commensurate with sound management and good value for the taxpayer’ (Ascher, 1986) and the Treasury continued to publicise the advantages of contracting out departmental services, with the Manpower Control Division within the Treasury overseeing a departmental tendering initiative from the mid-1980s. The move from encouragement to enforcement of contracting out central government functions was formalised in the government’s ‘Market Testing’ proposals announced first in a White Paper, Competing for Quality in 1991 (OPSS, 1991).

However, the widespread use of computer consultants has been a feature of government computing since the 1970s and predates the Market Testing initiative. In National Audit Office reports, and departmental responses to them, the use of consultants has often been presented as a response to the shortage of trained and skilled information technology staff. Such a shortage undoubtedly existed throughout the 1980s and was described as ‘the most important single issue inhibiting the UK IT industry.…The very fast evolution of information technology creates its own skills shortages’ (Trade and Industry Committee, 1989:xviii).
However, a shift towards ‘contracting as good’ rather than ‘contracting as a necessary evil’ was already evident in the early 1980s. By 1991 the public sector accounted for 43 per cent of the £400 million information technology facilities management market in the UK (Financial Times, 26 March 1992). A leaked Cabinet Office Efficiency Unit report in 1994 was reported as detailing government spending of £500 million a year on outside consultants (£565 million for 1992–3), £160 million of which was spent on information technology consultancy.

Possibly owing to the origin of their use as a necessary supplement to in-house operations within a project already under way, ad hoc use of computer consultants was the most common form of information technology contracting by government during the 1980s (as opposed to preferred supplier or strategic alliances). Problems arose from poor control and co-ordination of the use of consultants, always a risk with this type of contract. Frequently an increasing number of consultants was brought in during the life of a project, rather than planned from the start. Thus the division of control between the government agency and the private sector tended to differ from that in the US at this time, where the contracts were more usually awarded at the start of a project. The National
Audit Office found extensive and poorly controlled use of consultants (NAO, 1993a) to be one of the primary weaknesses in the Department of Employment’s abortive attempt to build a field system for Training and Enterprise Councils in 1993 (eventually abandoned at a cost of £71 million). By September 1992 £11 million had been spent by the Department on consultancy support (originally estimated at £2.4 million in the business case in 1989); the Department employed more than 200 individuals or firms during its life. The regulations governing the hiring of consultants are left to core departments to enforce. In this case the NAO found that ‘consultants were recruited directly instead of through a formal tendering process; consultants began work before contracts were signed; and contracts were routinely extended rather than renegotiated’ (NAO, 1993a:21). On hearing evidence on the report, the Public Accounts Committee concluded that ‘at least a significant part of the expenditure of £48 million by the department on this project has been wasted’.

More recently, more wholesale contracting out—that is, facilities management, outsourcing, systems integration and partnership contracts of the type entered into by the private sector and in the US—has increased dramatically. By 1992 central government departments and agencies accounted for £67 million (The Times, 20 November 1992) of a £500 million information technology facilities management market, with Hoskyns (since purchased by the French company Cap Gemini Sogeti) as the leading supplier for government. Table 7.4 shows that the percentage of information technology work contracted out had not yet reached the level of that in the US at the beginning of the 1990s.
But by 1997 the British government had caught up with the US, with external services across the government as a whole rising to 45 per cent of total expenditure on information technology and with outsourcing set to rise by 11 per cent annually between 1996 and 1999 (Kable Briefing, February 1998).

The National Heritage Department might be taken as an indicator for the future; the Department was formed from a number of functions taken from other departments and faced the problem of co-ordinating the disparate collection of computer systems that it had inherited. The new department’s response was to contract out about 40 per cent of its information technology operations, increasing to 56 per cent by 1995, placing it at the top of Table 7.4. The contracting figures for the British central government departments demonstrate that, as in the US, government agencies outsource a considerably higher proportion of their information technology operations than the average private sector company.

Thus, computer companies and consultants carry out work in almost all British central government departments and agencies, with a bewildering array of contracts spanning the spectrum of possible contract arrangements. Some UK companies such as Logica and ICL (the former preferred hardware supplier of government) retain a hold in this market. ICL led a consortium developing the MoD’s head office automation scheme (CHOTS)

Table 7.4 Commercial services as a percentage of IT expenditure by department for British central government 1993–5

Source: Kable (1995b:41). Kable collect these data through a combination of parliamentary questions and the Computer Weekly user expenditure survey. They must be used cautiously due to disparities in estimates between Kable, 1994 and 1995b. Kable informed the author that the latter are more accurate. They are, in any case, the only data available. Note: 1 Figure for commercial services includes consultancy, training, facilities management, contract staff/recruitment, software support, hardware maintenance and computer bureau/other services.

at a total contract value of £250m (Kable, 1994). ICL were also developing the RAF’s UKAir project (£41m originally, but over budget), the Scottish Prison Service prisoner record system (1994–5, £1m) and the Contributions Agency system for Civil Recovery sections (awarded by ITSA). Logica, the longest-established UK facilities management firm, derived 16 per cent of its annual turnover (£217.4m to June 1993; Independent, 16 May 1994) from the government; two major projects were running the UK’s air quality information service on behalf of the Department of the Environment and (with IBM) providing the MoD ‘Lits’ system, linking 36 of the RAF standalone systems into one world-wide logistics system at a cost of over £300m. However, by far the larger share of the government computer business was taken by overseas companies. Most central government departments have entered into one or more major ‘facilities management’ or ‘systems integration’ deals. Systems
integration companies with a high proportion of their business from government included the French-owned BULL, with a systems integration division in Livingston. BULL’s contracts included the £50 million ‘Iron Systems’ integration project for the Inland Revenue and a £45 million order for a fiscal information system for the Polish Ministry of Finance (Scotsman, 24 September 1992). The SEMA Group (which planned as early as 1989 to earn 60 per cent of its revenues from systems integration, a share that eventually rose to 70 per cent in the 1990s) was systems integrator for the Royal Navy and in 1994 was awarded a contract worth £52 million over five years for the administrative information technology services provided by the Home Office computing division. SEMA was also developing (with ITSA) the national unemployment benefit system (Nubs2) and its Income Support Computer System Interface, as well as carrying out the analysis of management systems for the Meat Hygiene Service in MAFF. A contract to run the MoD’s Bureau West computer centre was awarded in 1993 to Hoskyns (now owned by the French company Cap Gemini Sogeti), which had gained considerable experience with facilities management in the National Health Service. The Department of National Heritage awarded a £1 million contract to DEC in 1994 to run the computer systems for the British Library. Coopers & Lybrand provided consultancy services to the Department of Trade and Industry and undertook contracts in the Department of Health between 1989 and 1992 (to a total of £3m), the MoD (£7.8m over 26 contracts), 22 contracts in DoE, other contracts in the Treasury, and 10 contracts in the DSS (PITCOM, 1992:201). Siemens Nixdorf developed a warehouse and transport management system for the RAF and was a ‘Systems Integration’ partner for MAFF. The Department of Trade and Industry contracted out its information technology service provision to Hoskyns from 1994, for three years, in a contract worth around £20 million.
Institutional pressures: Next Steps and Market Testing

There were two major elements of public sector reform in Britain that affected the extent and nature of information technology contracting by government agencies. The first was the reorganisation of departments into core departments and ‘Next Steps’ agencies, which started in 1988. The Next Steps programme in some instances acted to fuel, or at least to reshape, the shift towards private sector provision of government information technology services. As part of the Next Steps programme, the information technology divisions of some departments were separated out as executive agencies and, like all executive agencies, were submitted for a ‘Prior Options’ review on creation, three years after creation (later extended to five years; see OPSS, 1994b) and systematically thereafter. Prior Options involved asking three questions: first, whether there was a continuing need for the activity at all. The second question asked whether the agency should not be privatised en bloc, usually by means of a negotiated transfer of the undertakings involved
to a private company in return for a contract to provide the department with the same services. Only after these questions had been answered were agencies to ask the third question: were there new tranches of work which could be market tested? For the agencies created specifically to carry out the information technology operations of the major departments, the second option was favoured. In two cases they were tendered for privatisation: the information technology arm of the Department of Transport (the DVOIT), the first agency to be privatised in 1993, and the Information Technology Office of the Inland Revenue.

The DVOIT had become a separate executive agency in 1992 with an annual turnover of about £26 million and about 480 employees; its primary customers were the Driver and Vehicle Licensing Agency (70 per cent of DVOIT’s work), the Vehicle Inspectorate, the Driving Standards Agency and the Department of Transport. The agency was offered for sale at the price of £5.5 million, at the same time offering service contracts for information technology services worth £70 million (NAO, 1995b:1). EDS was the successful bidder for both sale and service contracts from a shortlist of three; the others were Computer Sciences Corporation and a consortium combining DVOIT managers and IBM. The Department considered only companies that could handle the massive DVOIT workload, including the database and processing of details of all UK car drivers and owners. The DVOIT workload was transferred to EDS computers while the DVOIT computers (three mainframes which could be accessed from 4,000 terminals in 200 offices throughout the UK) were to be used elsewhere in the EDS group.
But a further push towards attracting only the largest of companies was that, as in all contracts of this size, the deal was subject to the Transfer of Undertakings (Protection of Employment) regulations (TUPE), under which the 320 DVOIT staff who transferred to EDS were entitled to retain their existing terms and conditions of employment, representing a ‘major liability’ to any purchaser (NAO, 1995b:16).

The privatisation of the Information Technology Office of the Inland Revenue in 1994 resulted in an even more important contract for EDS. The Inland Revenue spoke first to companies with more than 20,000 staff worldwide; only a large manufacturer could be expected to meet their conditions (Financial Times, 26 November 1992). Like the DVOIT sale, the contract was covered by TUPE and specified the transfer of 2,000 staff for whom terms and conditions would be preserved. The tendering process identified two clear frontrunners: the Computer Sciences Corporation in association with IBM, and EDS. The contract was awarded to EDS, which offered 17 per cent savings on projected in-house costs. The ten-year £1 billion contract (NAO, 1995c:5) received widespread publicity as Europe’s largest data-processing outsourcing deal, public or private sector. The contract was based on a constant volume of work for which there was a fixed price, to be reduced gradually to 50 per cent over ten years. If the Inland Revenue require other services in the future, they will be chargeable. EDS looked
carefully at the organisation’s plans and what these additional services were likely to be before they judged how to bid. The contract created a unique organisational arrangement within EDS, a kind of ‘quasi-private’ subdivision of the company. EDS is required to operate under an ‘Open Book’ approach, whereby the Inland Revenue work is carried out by a separate division of EDS (although not necessarily by the original Inland Revenue staff) and both the Inland Revenue and National Audit Office have unlimited access to the financial accounts. EDS eventually broke with tradition and agreed to recognise a trade union for the Inland Revenue staff; no union had hitherto been recognised for their 5,000-strong British workforce. However, the Inland Revenue refused to restrict EDS’s flexibility by complying with the trade union’s request that the tenderers should guarantee the transferred staff employment for the life of the contract; EDS gave only the guarantee that they would not make any compulsory redundancies within six months of their transferring to the company.

The relationship between the Inland Revenue and EDS was characterised by EDS as a partnership (in a private letter to the Shadow Minister for the Civil Service, September 1995). This partnership is managed by a contract management team, led by grade 5 personnel, originally consisting of about 40 staff in 1994. However, figures given by the National Audit Office (NAO, 1995c:11) suggest that only about eight staff will be directly managing the contract during the larger part of its term. The ‘partnership’ aspects of the relationship are dealt with at a different level, consisting of the head of the contract management team, the Information Technology Director and a board member (all personnel of grade 4 and above) and EDS’s management team in the US. Any problems with the contract will be referred to this level by the contract management team and discussed at the partnership level with EDS.
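The declining fixed-price arrangement described above can be sketched numerically. The book records only the endpoints (a fixed price for a constant volume of work, falling to 50 per cent over ten years); the linear shape of the decline and the £100 million year-one base figure below are purely hypothetical illustrations, not figures from the contract.

```python
# Hypothetical sketch of a ten-year declining fixed-price schedule.
# Assumptions (not from the source): linear decline, £100m base price.

def price_schedule(base_price: float, years: int = 10, final_share: float = 0.5):
    """Return per-year fixed prices declining linearly to final_share of base."""
    step = (1.0 - final_share) / (years - 1)
    return [round(base_price * (1.0 - step * y), 2) for y in range(years)]

schedule = price_schedule(100.0)  # year 1 = 100.0, year 10 = 50.0 (£m)
total = sum(schedule)             # 750.0 under the linear assumption
```

On these assumptions the supplier receives 75 per cent of a flat-price total over the decade; a steeper early decline would shift more of the Inland Revenue's projected savings towards the start of the contract.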
However, a senior member of the contract management team recognised that they face inherent organisational contradictions:

Clearly a lot of the success of this will come from having a very open, a very co-operative relationship. But we have still got a contract which is pretty tough…[and] which we need to manage on a very tight basis, because they could run rings around us.

The core competency of the Information Technology Office in the future will therefore be the management of a unique organisational partnership, with in-built contradictions. Thus, through bundling up information technology operations as a discrete organisation, Next Steps has centralised information technology provision in addition to the existing centralisation of information technology operations within the major departments. Where information technology operations were concentrated in one agency, the agency was presented with the ‘total outsourcing’ option through privatisation of the new agency, an option not
so readily available when information technology operations were dispersed throughout an organisation.

The second central government reform that affected information technology contracting was Market Testing, with more widespread effects across central government. The Government’s White Paper, Competing for Quality (OPSS, 1991), listed central government information technology as a ‘promising area’ for contracting out. Information technology was seen by the Cabinet Office as an area where ‘it is easier to establish an objective measurement of whether you are receiving the required level of service’ (The Times, 20 November 1992) and it was widely accepted from the beginning that the largest part of the Market Testing programme would be in information technology services. The information technology activities in Agriculture, Customs and Excise, Defence, Employment, the Foreign Office, the Home Office, the Inland Revenue, the Lord Chancellor’s Office, the Northern Ireland Office, the Office of Public Service and Science, the Department of Trade and Industry, the Department of Transport and the Welsh Office were all earmarked for Market Testing. William Waldegrave, who implemented the first stages of the Market Testing initiative as the Minister for OPSS, specified information technology as one of the areas ‘where the Government could not maintain the investment and expertise necessary to compete effectively with the private sector and from which it was best for the Government to withdraw’ (Treasury and Civil Service Select Committee, 1994:xvii). Some departments responded literally to this statement. In 1994 a significant chunk of the Information Technology Services Agency (ITSA) of the DSS was designated as a candidate for Market Testing in contracts worth £577m and a transfer of 1,600 staff under TUPE (NAO, 1996b:1).
The contract was divided into two packages: first, contracts for the provision and support of data network communications (the Distributed Systems contract), worth £140 million, and, second, a Data Centre Services package, worth £437 million. The Distributed Systems contract was divided into three geographical packages (North, South and Central) which were awarded to SEMA and ICL (NAO, 1996b:14). EDS won the Data Centre Services contract, thereby retaining the two computer centres they already ran, regaining the computer centre that they had lost to IBM in 1994 and taking over the fourth computer centre. The contracts were signed in June 1995, representing the majority of ITSA’s information technology processing activity. Thus regulation of contracts on behalf of the DSS became ITSA’s primary function, a subject for concern given the agency’s poor record on managing contracts noted in Chapter 3.

The authority-based agencies presented especially rich pickings for computer suppliers, given the high degree of innovation in this area noted in Chapter 1. At the central level, three management companies were competing for the contract to store hundreds of thousands of records on a multimillion pound computer system for the Home Office aimed at speeding up police access to criminal data. A Home Office spokesman said: ‘All three are based in the UK but we are talking about the work being done overseas. Transferring
records onto computer will be done either in the Philippines or Jamaica, depending on which company wins the contract’ (Daily Mail, 9 May 1994). By 1996, the Metropolitan Police were planning to outsource up to 1,000 information technology staff in another multimillion pound contract (Computing, 29 August 1996).

So both Next Steps and Market Testing tended to increase the size of government computer contracts. Both fuelled existing trends towards wholesale contracting out of computer development rather than the use of consultants to supplement existing projects, given that agencies were under pressure to market test a certain percentage of their operations, and information technology was often seen as the most likely candidate, with high potential monetary value compared with other administrative activities. Under Market Testing the benefits of government agencies carrying out not only development projects but also maintenance functions already in progress had to be re-evaluated at the so-called ‘strategic decision’ phase and contracted out where possible.

The response of the British government to information technology procurement was very different from that of the US, with no move towards creating regulatory mechanisms such as those adopted in the US. Indeed, such regulations as did exist in Britain were often honoured more in the breach than the observance. The Departments of Social Security, Trade and Industry and Defence were all accused of flouting European Union tender rules, aimed at ensuring open competition by forcing government departments to tender openly for computer contracts (Independent, 17 March 1994). Departments had sidestepped restrictions by forming secret agreements with existing suppliers, known as ‘framework’ agreements, which allowed suppliers to bid for contracts without external competition.
Such contracts were not advertised or open to public scrutiny, and only the largest companies, which had established ‘preferential’ status, were able to gain access. In 1994 it was revealed that more than half of all government contracts were not legally declared and details of their costs and conditions were withheld from scrutiny (Independent, 24 October 1994), thereby flouting European law, which requires countries to publish invitations to tender and details of who won the contract and its approximate cost. Countries are allowed to withhold information only when it would be against the national interest to do so. The worst government offender was the CCTA itself, which gave details of awards in under one-tenth of its contracts. The notion of a public sector contract does not exist in English law and ‘the process of adapting the private law of contract to the special requirements of public service provision in the 1990s and beyond still has a long way to go’ (Drewry, 1993:4). The legal implications of large-scale information technology contracts are especially underdeveloped and are often not anticipated during the contract process. In the negotiation of the EDS contract with the DVOIT, the cost of legal advice rose from an initial estimate of between £30,000 and £80,000 to £545,000 (NAO,
1995b:14–15). Even this expense did not prevent an unanticipated legal battle with the companies that had provided the DVOIT with software and were unwilling to see it transferred to EDS. The dispute only ended when the DVOIT agreed to pay £505,000 to the software companies for software licences that they already owned (thereby calling into question the quality of the expensive legal advice). Computer suppliers themselves exhibited concern over the speed and scale of information technology outsourcing in the British central government and suggested that the government had not fully considered the implications of Market Testing. One supplier commented publicly: ‘If ministers ask for information, civil servants jump. It is not going to be like that again. Once data-processing skills are lost they are hard to replace’ (Financial Times, 26 November 1992). Some computer companies questioned whether Market Testing was a fashion that would have to be reversed in a few years as the ‘close relationship between policy and data-processing becomes apparent’ (Financial Times, 26 November 1992). One director of a computer consultancy commented on the enormous difficulties of specification for the larger government computer systems: ‘Sometimes when I am in the bath I try and think of how a contract could be written for the National Insurance Records System that would satisfy both the client and the supplier and I can’t.’ In 1994 the Cabinet Office backed away from Market Testing by lifting imposed targets. The Treasury favoured instead the use of the Private Finance Initiative (PFI), launched in 1992, a relaxing of the rules on private sector investment on public sector projects with the aim of increasing such investment. Public sector computer projects were designated as suitable candidates; the Contributions Agency’s National Insurance Recording System was an early candidate. Total expenditure through PFI doubled in 1995, reaching £600 million. 
By 1997, £140 million of the £2,497 million total expenditure on government information technology was spent through PFI, predicted to rise to £560 million by 1999 (Computing, 18 September 1997). In effect, for information technology development there is little difference between the two initiatives; both involve the development of government information systems by private sector companies. Both have a mandatory element: departments and agencies were told at the start of the PFI that information technology funding would be subject to evidence that the organisation had tried to obtain private funding. And, as with Market Testing, PFI contracts tend to be let to larger companies. By the end of 1997, nine major government information technology contracts had been let under PFI and large companies (Siemens, EDS, Racal, Andersen Consulting and ICL) had won all of them (POST, 1998:23). By 1997 the new Labour government, after promising a moratorium on outsourcing projects during the early 1990s, looked set to continue with outsourcing and the PFI was left in place. It was no longer mandatory and equal weight was to be given to outsourcing and in-house development of public sector information technology projects. However, the new
government’s pledge to stick to the spending limits laid down by the previous administration meant that there was no public money to finance extra capital projects. The Passport Agency’s £90 million deal with Siemens was probably the first deal under the new government (Computing, 25 September 1997). During 1998 the PFI was under review, with criticisms from vendors that procurement deals were costly and time-consuming to prepare. In 1997 AT&T pulled out of NHS healthcare, following Cap Gemini, ICL and Oracle; the CSSA blamed the PFI, claiming that the initiative had forced the market to consolidate and was eroding competition (Computing, 18 September 1997). Another criticism of PFI was the way it reinforced existing divisions between departments, as PFI contracts tend to be both large scale and long term, and are negotiated and let by individual departments and agencies. Thus PFI in effect ‘freezes’ existing departmental demarcations into the system and ‘could seriously curtail the ability of Government to engage in holistic reengineering for many years’ (POST, 1998:62).

New relationships: prospects for information technology contracting in the US and Britain

Thus in both the US and Britain computerisation has drawn new players into government in a variety of contractual arrangements. Through widespread computerisation the type of problem that has hitherto primarily been the concern of the defence and health agencies has become a feature of all central government agencies. In both countries the history of government contracting suggests several warnings for the operation of huge-scale systems integration contracts (for examples see Turpin, 1972; Dunleavy, 1980, 1994; Kelman, 1990; Garvey, 1993:37–47; Fesler and Kettl, 1991:258–61).

Asymmetry of expertise and information

Government overseers tend quickly to fall out of step with the private sector company they are controlling.
EDS employees working on the Inland Revenue contract will be highly trained specialists, while the contract management team of the Inland Revenue will be generalists. Contract management is not normally regarded as a prized position for a Civil Service high flyer: throughout the transition period the highest grade member of the contract management team was grade 5. At the beginning of the contract, the staff remaining in the Information Technology Office had detailed knowledge of the computer systems that EDS would manage, having worked on them over a long period of development. But over time their expertise will date and diminish as they focus on the details of the contract. Furthermore the amount of resources to be allocated to contract management is considerably less than a private sector company would devote to such a major contract. Private sector companies calculate as a rough estimate that 5 per cent of a contract value should be devoted to the
subsequent management of the contract; in the case of the Inland Revenue/EDS partnership, as noted earlier, the percentage is around 0.4 per cent (based on figures in NAO, 1995c:11), which would seem to increase the risk inherent in the contract. In the US, as demonstrated by the experiences of the Social Security Administration and Internal Revenue Service in Chapters 4 and 5, contract management and procurement have come to be seen as crucial aspects of information technology work, with the elevation of the procurement function in the Internal Revenue Service and the professionalisation of procurement in many agencies. Some of the difficulties have been recognised as attributable to the regulatory strangulation of procurement that has grown up over the years: other difficulties are attributed to the inherent problems involved in government oversight of large-scale technology-based contracts. However, even in the US it remains to be seen whether leading computer specialists will choose government procurement as a career path. Rather, it seems likely that over time as the information technology ‘profession’ of systems and business analysts, engineers, designers and programmers develops, there will build up a disparate concentration of specialists at the leading edge of technological innovation in companies like Electronic Data Systems, while the less well-paid, less professionalised and less innovative will be managing contracts in Civil Service organisations.

Profit surrogates

Another feature of government contracts is that they are generally run to the tightest possible profit margins, with ministers and officials tending to regard too intimate a relationship with vendors as engendering risk of fraud or corruption, or of public disapproval. This attitude draws government contract relationships away from the more relational contracting preferred in the private sector.
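The contract-management resourcing gap noted above (a 5 per cent private sector rule of thumb against roughly 0.4 per cent for the Inland Revenue/EDS deal) can be made concrete with a toy calculation. The contract value below is a hypothetical round number, not a figure from the text; only the two percentages come from the discussion above.

```python
# Toy comparison of contract-management budgets: the private sector rule
# of thumb (5% of contract value) against the level reported for the
# Inland Revenue/EDS deal (circa 0.4%, based on NAO, 1995c:11).
# The contract value is hypothetical, chosen only for illustration.

contract_value = 1_000_000_000  # hypothetical: £1 billion over the contract term

private_sector_benchmark = 0.05 * contract_value   # 5 per cent rule of thumb
inland_revenue_level = 0.004 * contract_value      # circa 0.4 per cent

shortfall_factor = private_sector_benchmark / inland_revenue_level

print(f"Benchmark management budget: £{private_sector_benchmark:,.0f}")
print(f"Reported-level budget:       £{inland_revenue_level:,.0f}")
print(f"Under-resourcing factor:     {shortfall_factor:.1f}x")
```

On these assumptions the contract would be overseen with roughly one-twelfth of the resources a private sector customer would devote to it, whatever the absolute contract value.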
In the drive for competitiveness, Market Testing presented its architects with similar dilemmas to those faced by the implementors of the US Competition in Contracting Act. In 1994 John Oughton, then Head of the Efficiency Unit in the Cabinet Office, stated:

We are encouraged, by some contractors, away from the concept of a classic competition, with in-house bids and outside contenders, towards the concept of strategic partnerships.…Selecting a supplier to Government because we felt comfortable with the fit could lay the Government open to the charge that business was simply given to friends and associates. There would be those who would question whether this was an appropriate way to spend public money, and who would want to know the basis of our selection of a preferred supplier. We need to demonstrate objectivity and impartiality. (Oughton, 1994:9)
The Inland Revenue’s strategy for dealing with Electronic Data Systems was fairly typical, with one official defining the role of the contract management team as to ‘forget all about partnership’ and to make sure that ‘EDS do exactly what the contract says and we are not paying them a penny more than we ought to.’ Contract management of this type is problematic for two reasons. First, it can cause companies to search for alternatives to immediate profit; ‘profit surrogates’. Surrogates include maximising the contract length, regulation avoidance strategies, ‘gold-plating’ of specified activities and enlarged involvement. A rational company pursues all these strategies to the full once the contract is awarded, staying just below penalty levels or contract cancellation levels, easing up when enlargement is possible or when contract renewal is imminent. Information technology contracts are particularly suited to such strategies, due to the difficulties in customers retaining the technical expertise to establish what they really need. Companies like IBM and AT&T have traditionally used their systems software to lock their customers into their equipment (Quinn, 1992:183). A survey of 123 UK companies by KPMG in 1997 found that 41 per cent of contracts (which are often drawn up by the outsourcer’s lawyers) failed to make any provisions to transfer service at the end of the contract, making it difficult for users to terminate them: KPMG’s principal outsourcing consultant observed: ‘EDS uses a lot of its own software on contracts. In practice, it will let customers use that software when a contract finishes, but the guarantee is not there’ (Computing, 25 September 1997). A second problem with contracts of this kind is that many of the benefits of partnership type agreements come from a close collaboration that is difficult to sustain when the customer is working so hard to minimise profits. 
The success of the partnership between EDS and the Inland Revenue will depend to some extent on the closeness of the relationship between the two organisations: ‘It is well-known that a high percentage—about two-thirds in the industries studied—of all innovation occurs at the customer supplier interface’ (von Hippel quoted in Quinn, 1992:78). Yet tight control of the relationship may work against such benefits.

Penetration pricing

Another feature of government contracting comes from a preoccupation with price as the primary factor in choosing a supplier. This characteristic can lead to ‘penetration pricing’, where companies bid low in order to secure business and then either enlarge their involvement through the first contract, or increase the price at a point where the outsourcer is unable to back out. In its contract with the Inland Revenue, EDS was prepared initially to work to very low margins, partly because the Inland Revenue gave it maximum planning visibility. Thus it is likely that EDS see the current contract as a lead into the future. As one official put it:
They know that the department will under the present government be under pressure to market test various other components of this and I am sure they will be interested in trading on from that position and being in a good position to bid for other business if it comes along.

Penetration pricing is difficult to establish, but some companies seem to have used such a strategy. One official described Arthur Andersen’s successful bid for the NIRS II contract by the Contributions Agency as follows:

I am absolutely certain that the reason for Andersen’s bid which was startlingly cheap is that they know they would be in a prime position to get any of the other business coming in and around the Contributions Agency, especially if there is going to be some connection with the Inland Revenue. It has to be the biggest loss leader ever seen.

As the National Audit Office put it, ‘Andersen Consulting had reduced their bid so dramatically, when compared with their indicative price that the Contributions Agency approached them for an explanation. They explained that this was “primarily a commercial judgement”’ (NAO, 1997:22). If Andersens were penetration pricing, then it would indeed be a commercial judgement, but it is difficult to see why the Contributions Agency or the National Audit Office should be satisfied with this answer. Arthur Andersen later ran into serious problems with the project. Likewise, EDS were especially keen to win the Benefits Agency’s POCLE contract; officials suggested that the company saw it as a lead into subsequent identity card schemes, although this was never one of the stated aims of the project. In fact, by the late 1990s, EDS had begun to make a name for itself for coming in with the lowest bid for any operation.
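The bidding logic described above can be sketched as a toy expected-value model. All figures are hypothetical and purely illustrative (they are not drawn from any of the contracts discussed): a supplier accepts a loss on the initial contract if incumbency sufficiently raises its chance of winning follow-on business at a normal margin.

```python
# Toy model of penetration pricing (all figures hypothetical).
# A supplier bids below cost on an initial contract, betting that
# incumbency yields follow-on business at healthier margins.

def expected_profit(initial_bid, initial_cost, follow_on_value,
                    follow_on_margin, win_probability_as_incumbent):
    """Expected total profit across the initial and follow-on contracts."""
    initial_profit = initial_bid - initial_cost  # negative for a loss-leader bid
    follow_on_profit = (win_probability_as_incumbent
                        * follow_on_value * follow_on_margin)
    return initial_profit + follow_on_profit

# A 'startlingly cheap' bid: £20m below cost, but incumbency gives a
# strong chance of the follow-on business...
loss_leader = expected_profit(initial_bid=80e6, initial_cost=100e6,
                              follow_on_value=500e6, follow_on_margin=0.10,
                              win_probability_as_incumbent=0.8)

# ...versus a conventional bid with a modest margin but no incumbency edge.
conventional = expected_profit(initial_bid=110e6, initial_cost=100e6,
                               follow_on_value=500e6, follow_on_margin=0.10,
                               win_probability_as_incumbent=0.15)

print(f"Loss-leader expected profit:  £{loss_leader/1e6:.1f}m")
print(f"Conventional expected profit: £{conventional/1e6:.1f}m")
```

On these invented numbers the loss-leading bid is the rational commercial judgement, which is precisely why the strategy is so hard for a purchaser, or an auditor, to rule out.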
When EDS won the JOCS contract in 1998, CSC (a rival bidder) were aggrieved at losing the contract as they believed that military personnel close to the project favoured their approach: ‘However, bid price appeared to be the dominant factor—producing an outcome familiar to other companies who have come toe-to-toe with EDS’ (Kable Briefing, February 1998).

Loss of core competencies

In the 20 years since 1975 government agencies have moved from being innovators in the development of information systems to what is rapidly becoming primarily a contract management role. In the 1970s an Inland Revenue official described the Revenue’s role during the development of the COP project as follows:

A lot of our effort became involved in what was really research and development. We were fed up with working on the frontiers of ICL
capability; we were always the testbed for something new. That was inevitable because in order to meet the targets and objectives that ministers set for us, we couldn’t use the existing technology—we had to develop it beyond its capability. That demands R&D effort and arguably we shouldn’t be doing that—a government department and particularly an administrative government department like this shouldn’t be investing substantial sums in Research and Development. It is not our business. Since that time government agencies in general appear to have reached the same conclusion. In the 1990s the development of information systems is not regarded as something that can be the core competency of a government agency, even where an agency has been formed for the prime purpose of carrying out information technology activities: for example the DVOIT or the Defense Logistics Agency. In the Treasury privatisation came to be seen as almost synonymous with value for money, with one official describing the Inland Revenue Information Technology Office contract as innovative by virtue of its uniqueness in terms of size. Such a strategy leads us to ask what government’s core competency will be in the future. Quinn’s (1992) core competency argument specifies that outsourcing partnerships should be developed only with non-competing companies, to maintain a strategic focus: To maintain its position from a strategic viewpoint, the company’s selected focus must control some crucial aspects of the relationship between its suppliers and the marketplace.…And it must defend itself from big purchasers attempting to vertically integrate into its turf. (Quinn, 1992:235) Whatever the core competencies of the future Inland Revenue are likely to be, EDS’s status as non-competitive will require careful monitoring, given the company’s preference for contracts that contain some aspect of vertical integration. 
Indeed, one Inland Revenue official recognised that the deal was approaching a profit-sharing arrangement of the type that EDS prefers: If we were a private sector organisation this would be the first step for them and they would then be seeking to persuade us to outsource a lot of our administrative functions to them as well. Now I am not sure they have completely given up on that. I mean, I think they would quite like eventually to be running all these tax offices for example. Such a development, a direct result of radical outsourcing of information technology, would raise questions over the core competency of the entire Inland Revenue, rather than merely the Information Technology Office.
The new players in government

The US federal government has long been accustomed to dealing with private sector providers of many kinds. The information technology revolution has merely created a new breed, present in every department and agency of the federal bureaucracy and notable for their size and willingness to litigate. Such a development has resulted in a regulatory response, specifically directed at information technology contracts. While the first wave of the National Performance Review appeared to bring a temporary reprieve from regulatory strangulation, the lack of attention paid to large-scale systems integration projects, the renewed drive towards privatisation in the second wave of reforms, and the ever-present concern for competition in contracting throughout the federal government all seem likely to ensure that regulations will silt up again in the future. Civilian agencies in the British government are less accustomed to dealing with such companies than their US counterparts, and the UK information technology industry has nothing like the institutionalised political clout of the US industry, where the American Electronics Association is ‘openly proud of the influence it exerts over the policy makers in Washington’ (Computing, 24 April 1997), for example on issues such as encryption and international trade agreements. But it is no surprise that vendors have been quick to lobby ministers when unsuccessful in winning information technology contracts. When the Benefits Agency ‘POCLE’ contract was awarded under the Private Finance Initiative to a rival rather than EDS, the company went straight to the Secretary of State, Peter Lilley, to protest that the award had been uncompetitive. The award process was audited, but was not found to be uncompetitive. Companies like SEMA and EDS have large public relations departments and well-established lobbying procedures.
Such companies pay parliamentary lobbyists to clarify government (and opposition) policy on information technology contracts, to entertain senior civil servants, lobby ministers and generally do all they can to establish their reputation across the government. However, it should not be assumed that the awarding of government computer contracts to such companies has been primarily the result of private sector lobbying activities. Evidence presented in this chapter shows how the timing of widespread computerisation, at a particular point in administrative history, has ensured that such companies have been handed government business on a plate. By 1997 EDS had emerged as a particularly major governmental player, with over 50 per cent of government outsourced business (Computing, 2 October 1997). In addition to the contracts mentioned above, in April 1996 EDS won the three-year GCAT contract, worth £400 million per year, a contract designed to speed up IT procurement for departments, agencies, NHS trusts and councils by offering products that had already competed under EU tendering rules. EDS won the contract with Tplc, a subsidiary of ICL, but ‘dumped’ the smaller company after alleged customer complaints
(Computing, 11 September 1997). In 1997, EDS secured further government business in an indirect way when Digital embarked on the strange move of outsourcing its own multi-vendor customer services division to EDS in a global deal worth £300 million over eight years, with the transfer of 800 Digital staff. Digital’s customers include the Cabinet Office (Computing, 26 June 1997). EDS has been particularly successful in defence agencies, winning a £400 million Armed Forces Personnel Administration Agency outsourcing contract in September 1997, to run over 10–12 years, taking over the management of 900 staff and the operation and development of the agency’s information systems. In October the company also won contracts for the Royal Navy’s new Command Support System, which would eventually be fitted into every Navy ship, and for a client-server system on more than 6,000 ships and land stations (Computing, 30 October 1997). In November 1997 EDS won an MoD contract for the Armed Forces personnel and administration system, valued at £300 million over 12 years, with the transfer of around 900 MoD staff (Kable Briefing, November 1997). In 1998, Kable Briefing reported that ‘EDS’s star is now even higher in the ascendant in the UK Defence heaven’, when it won the £40 million JOCS project, designed to deliver a Joint Command, Control, Communication and Intelligence System to those headquarters supporting Joint (Navy, Army and Air Force) Operations. This contract placed EDS in an excellent position to dominate computer support to Joint Operations over future years, as the MoD is embarking upon a Joint Command Systems Initiative, planned to bring all the current single service systems, Communication Information Transfer Services and Wide Area Networks into one command system and valued at over £150 million.
As noted in previous research (Cawson et al., 1990:361), firms tend to be less fragmented than governments; relations between firms and governments are likely to consist of a relatively centralised organisation confronting and negotiating with a number of different departments with various and conflicting objectives. In Britain such tendencies seem set to increase; as the state has become more fragmented through Next Steps, Market Testing and the proliferation of quasi-governmental agencies, the providers of information technology contracts have become larger. The relationship is also one of dependence, with government agencies increasingly unable to operate without the participation of the new players. The Inland Revenue acknowledges that the expertise formerly contained in the Information Technology Office would take several years to rebuild, so returning the information technology activity in-house is not a viable option when the contract is re-tendered.

Comparisons and conclusions

This chapter has analysed an important result of the modernisation agenda: the drawing in of new players into government through the replacement of
administrative tasks by computer operations, creating a perceived need for private sector expertise, followed by the handing over of such operations to a computer services market. In both the US and Britain private sector outsourcing trends, the particular shape of the computer services market, administrative reform and legislative change have all contributed to a tendency for government agencies to contract out and for contract sizes to increase. Such a trend has continued throughout the 1980s and 1990s, fuelled by the Competition in Contracting Act in the US and Next Steps and Market Testing in Britain. In both countries the percentage of information technology operations contracted out is currently greater than in private sector companies. In Britain the still growing proportion suggests a counter trend to the private sector, where the number of major outsourcing deals signed by companies is showing signs of declining. In the US the trend has been for a growing number of government-oriented companies to spring up in the beltway around Washington to serve the federal bureaucracy, a significant part of what Garvey (1993) terms the ‘shadow bureaucracy’. Information technology’s leading role in the expansion of the contract state in the US seems likely to be replicated in Britain, but the pattern for British government computer contracting seems rather different, with few national players taking major parts. At present in Britain a few huge multinational companies have come to dominate the computer services market in general and are increasingly dominating the development, maintenance and operation of government computer systems. All departments and agencies now operate a wide variety of contract relationships with computer services suppliers and consultancies. In both countries concern over competition in contracting has brought its own dilemmas.
The history of contracting in the US illustrates one approach to the problem: to direct maximum resources at the regulation of contracts. Ironically this approach has increased the size of contracts tendered by government agencies and in some instances has reduced the ability of smaller companies to tender for them, rendering government contracts less competitive. In Britain the institutionalised pressure to contract out through Market Testing and the lack of regulation means that government contracts are in theory more competitive, but in practice only the largest players in an oligopolistic market may tender for those contracts shaped by the Next Steps programme. The Conservative government refused right up until 1997 to show concern over the rising publicity of EDS as a government contractor, as did high-level officials interviewed at the National Audit Office. Although backbenchers in the run-up to the election in May 1997 voiced concern that the privatisation process was resulting in ‘an overly great concentration in EDS’, the shadow Science Minister of the time, Geoff Hoon was (like his predecessors) more equivocal: ‘We want to see competition in this area. So long as the rules are followed properly, competition can produce a situation where one company can win all the contracts’ (Computing, 24 April 1997).
While private sector companies have paid great attention to developing a clear picture of their core competencies, ‘there seems to be a lack of serious consideration of analogous issues in contemporary public management of reform’ (Dunleavy and Hood, 1993:11). Evidence presented here suggests an implicit government assumption that information technology development cannot be a core competency of UK central government organisations. As multinational corporations like EDS, SEMA or Cap Gemini Sogeti, with government contracts in many countries, expand their share of the government computing market and staff from previous UK Civil Service information technology divisions are amalgamated into these organisations, the situation seems unlikely to reverse. The prevailing attitude appears to be that government cannot be ‘best in world’ at developing or maintaining information systems. With companies such as Electronic Data Systems and SEMA poised to ‘vertically integrate’ into other operations, the new players in government computing seem likely to raise new questions of control and accountability. Steering capacity will depend upon the now vital task of contract management, with computer companies presenting the same challenge to regulatory oversight as drugs companies have to the Department of Health. Organisations like the Information Technology Services Agency and the Information Technology Office of the Inland Revenue have identified their core competency, by default, as writing contracts. It remains to be seen whether they can be ‘best in world’ at writing contracts.

8 The ambiguous essence of the state of the future

Information systems have been shown to play an increasingly vital, policy-critical role in the process of government, bringing new challenges, risks and dangers as well as solutions. This chapter draws together the conclusions of previous chapters to present a picture of the ‘computerised state’ as it enters the twenty-first century. A core task of central government in the future will be to control and direct the new global players drawn into government through the widespread use of information technology. The introduction of this book showed how politicians see information technology as a potential catalyst to the transformation of government, a panacea for the problems of administration. This chapter traces these claims back to their source, namely contemporary thinking and writing on government in the ‘information age’. First the relationship between information technology and modernism is investigated. Writers who perceive information technology as a modernising tool may be grouped into four categories: hyper-modernist, anti-modernist, critical modernist and postmodernist. This chapter examines their hopes, fears and dreams in the light of evidence presented in the intervening chapters. There is little evidence to suggest that any process of overarching transformation is taking place. The last section of this chapter suggests an alternative perspective on information technology and its place in the state of the future: antepostmodernism, where the ‘ambiguous essence’ of technology characterises its governmental role.

The policy impact of information technology

There can be little doubt that information technology is now critical to policy-making. Such relevance is increasing, as technological developments of the 1990s offer new opportunities for policy innovation, especially in authority-wielding agencies. New flexibilities and increased responsiveness of administrative systems introduce new possibilities for policy-making.
Some of these innovations rely on the integration of existing computer systems, the matching of data across organisational boundaries and a move towards a client focus rather than an organisational focus for information systems development. Through the use of information technology, in the form of
electronic interactions between citizens and government, legislators have high hopes of the transformation of government’s nodal resources. However, as such interactions start to increase, the transformation can often be seen to be ‘skin deep’; interactions flounder as they come up against the information systems that currently exist within government. Co-ordination between computer systems is often assumed to result from computerisation; but this study has shown that there is nothing automatic about the process. Interagency projects remain as difficult to initiate and maintain as they were in the 1970s. Lack of social co-ordination, especially vertically between policy-makers and technical decision-makers, creates further problems for technical co-ordination.

Another reason why information technology is likely to be of importance to public administration in the future is the challenge it provides for political oversight and for internal regulation, which presents a new administrative dilemma for government. The successive failures of US regulatory efforts illustrate how difficult the regulatory challenges introduced by information technology are to overcome. The laissez-faire approach of the British government illustrates the disadvantages of the other course of action: departments and agencies develop computer projects in isolation, with problems unlikely to reach the public agenda and a handing over of control to a few major computer companies (in one-to-one relationships with government departments) with sparse residual governmental checks on their actions. One of the major challenges to oversight in both countries comes from the widespread outsourcing of information technology that both governments have undertaken. In the US both legislators and public officials take a more cynical view of computer companies, a view developed through years of contracting.
Such a view means that far greater regulatory resources are directed towards controlling the activities of both computer companies and government agencies. But this regulatory response brings problems of its own: the most notable being a tendency for regulatory strangulation of activities, stifling innovation and discouraging the more relational contracting preferred by private sector companies. Such problems demonstrate that some of the control problems introduced through information technology are real dilemmas with no simple cure. In Britain public administrators are more trusting, more naïve and seem likely to hand over a greater proportion of control to the new players in government; yet the same tendency of government agencies to run contracts to tight profit margins seems equally likely to discourage innovation. The traditional problems raised by politicians’ lack of concern for administration are heightened by computerisation for two reasons. First, the perceived difficulty of technology has introduced a new layer of organisational complexity for policy-makers seeking comprehension of administrative capacity. Politicians may not be interested in bureaucracy, but most would purport to understand how it operates; the same is not true of information
systems. Second, the evidence revealed in this study illustrates how the number of organisational clearances that legislators are required to understand increases as new organisations undertake these operations. Many information systems in government now represent a maze of contracts and computer companies, as the largest companies with their constantly shifting core competencies search for new roles. Evidence of the dangers of such a widening of the gap between legislators and administrators was well illustrated by the history of the SSA’s Systems Modernization Project in the US in the 1970s. Similar lessons might be learnt from the DSS’s Operational Strategy, as the inheritance of ten years of automation increasingly dictates the policy options of the future. As yet such lessons remain largely undigested by legislators in the US or Britain, as both governments focus their attention on the more attractive topic of the information superhighway and the sharp-end policy innovations that they faithfully believe the twenty-first century will bring.

Transforming the state? Information technology and modernism

Politicians’ claims for information technology in government have their source in a particular tradition of political thought: modernism. ‘Modern’ began to appear as ‘a term more or less synonymous with “now” in the late sixteenth century, used to mark the period off from medieval and ancient times’ (Williams, 1989:31). The word has been most usefully defined by Jane Austen (in Persuasion) as a ‘state of alteration, perhaps of improvement’ (quoted in Williams, 1989:32), although it and the words ‘modernism’, ‘modernise’ and ‘modernist’ have subsequently been in common usage, usually with less irony. Modernism as a cultural movement dominates the period between the end of the nineteenth century and 1940; but modernism as a sociological concept, dating from the 1970s, indicates either belief in or support of modern tendencies or characteristics.
The modernist writer with the most relevance for the study of government could be Max Weber, writing at the beginning of the twentieth century:

[F]or Weber, the transition to modernity takes place largely through increasing rationalization. Rationality denotes following a rule as opposed to acting on impulse or at random. Rationality means consistency in linking our thoughts or statements, creating the logical order of premise to conclusion. It also means consistency in linking our actions, creating the efficient order of means to end.
(Kolb, 1986:10)

For Weber, the road to modernity through rationalisation was facilitated by the development of bureaucracy. Bureaucracy would allow the control of the world through calculation, the systematisation of meaning and value into
an overall, consistent, ethical view, and the methodical living of daily life according to rules (Gerth and Wright Mills, 1975:55; Roth and Schlucter, 1979:14–15; Kolb, 1986:10–11). His was a ‘ruthlessly pessimistic’ view of modernity: it was likely to produce ‘a gloomy bureaucratic state where administered uniformity severely limited freedom’ (Kolb, 1986:12).

At first glance information technology would appear to facilitate the modernisation process that Weber envisaged to an even greater extent: to ‘out-Weber Weber’, as Christopher Hood has elegantly put it (Hood, 1994:140). Such technologies allow the formalisation of rules and procedures and enhance the scope for introducing rationality into decision-making. Ethical schemata are easier to implement using computers; for example the calculation of quality-adjusted life-years in health care can become considerably more sophisticated. Long-accepted problems of rational decision-making, such as ‘bounded rationality’ (Simon, 1955), can be tackled, as computers are used to simulate policy alternatives. Indeed, the way that some writers perceive the impact of information technology bears a strong similarity to Weber’s vision of administrative modernisation. Information technology is perceived as facilitating, almost by definition, a more modern public administration: ‘Informatization in public administration is a process of continued modernization’ (Frissen, 1995:8). Information technology is frequently identified as the tool for modernising the state, the key to modernist dreams. The work of writers who believe that information technology will transform the central state can be categorised according to how they view the consequences: positive (desired by the hyper-modernists) or negative (feared by the anti-modernists).

Hyper-modernists

The hyper-modernists are the source of many of the claims of modernist politicians: technological utopians who see information technology as the central enabling element of a utopian vision.
John Sculley (1991), for example, visualised information technology driving change that has the potential to create a new society, in the same way as the Renaissance was catalysed by an explosion of literacy:

Today we are in need of a second Renaissance which like the first can also be galvanized by new technology. We are on the verge of creating new tools which, like the press, will empower individuals, unlock worlds of knowledge, and forge a new community of ideas. These core technologies and the tools they support will help create a new environment of learning. We believe the tools that show the most promise for the new learning environment build on three core technologies: hypermedia, simulation and artificial intelligence. Each of these technologies alone can enrich the educational process. Each gains additional strength when learners can share resources over networks.
And when these technologies are fully integrated with each other, they will fuel a 21st century Renaissance outpouring of new learning and achievement.
(Sculley, 1991:58)

One of the most popularly influential of such writers to ascribe dramatically enabling and transformative effects to technology on government has been Alvin Toffler. In a trilogy extending over twenty years, Toffler (1970, 1980, 1990) revelled in the notion of transition, transformation and revolution. First, Future Shock announced a revolutionary change to a ‘super-industrial society’. By 1980 the Third Wave was bringing a new civilisation with greater flexibility and diversity, with the electronics revolution at its technological base. This civilisation would be peopled by ‘information workers’ in ‘intelligent buildings’ full of ‘electronic offices’, organised in networks rather than formal hierarchies. Toffler claims that ‘Second Wave countries being battered by this Third Wave of Change’ are unable to cope, with citizens sensing that ‘the political system, which should serve as a steering wheel or stabilizer in a change-tossed, runaway society, is itself broken, spinning and flapping out of control’ (Toffler, 1980:396). He makes the more specific claim that the decentralisation afforded by information technology will inevitably lead to political decentralisation as well: ‘It is not possible for a society to decentralize economic activity, communications, and many other crucial processes without also, sooner or later, being compelled to decentralize government decision making as well’ (Toffler, 1980:234). In 1990 in Power Shift, Toffler tells what seems to be the same story, with some updating to account for technological innovation; ‘exactly like businesses, governments are also beginning to bypass their hierarchies—further subverting bureaucratic power’ (Toffler, 1990:255).
Evidence presented throughout this book suggests no overarching transformation of government through information technology that lends support to Toffler’s claims. While nearly all major departments in both the US and Britain are to varying degrees heavily reliant on information technology for core administrative tasks, none of them has been transformed; nor has technology brought ‘the end of bureaucracy’. The process of ‘automation’ could not be described as ‘complete’, as it has been in the Netherlands by Snellen (1994). Many manual processes still exist. What has changed is that every government department is now expected to operate projects with a high technical content, creating new needs for organised expertise and new pressures for administrative innovation. Such needs and pressures have been met by a varying array of techniques and strategies over time. While both governments started the 1970s as innovators in the development of information technology, during the 1980s and 1990s the gap between what governments can do with information technology and what they are actually doing started to widen, especially with the processing of treasure. Furthermore, the new risks and dangers inherent in the task of embarking
upon large-scale, technology-based projects, unforeseen by the hyper-modernists, remain with both governments, rather than appearing as a temporary ‘glitch’ in progress to a fully modernised state. Toffler’s conception of technology-driven federalisation relies on the assumption that state governments will be able to overcome the inherent difficulties in implementing large-scale computer systems experienced by federal government, an assumption seriously challenged whenever processes have been devolved. For example, when attempts in the US to computerise child support programmes at state level failed (as noted in Chapter 1), Donald Kettl from the Brookings Institution observed that the block grant concept ‘presumes that state governments will succeed at doing what federal governments have never done in terms of managing large-scale information systems’ (Washington Post, 15 October 1995).

Anti-modernists

Anti-modernists also believe in transformation and revolution through technology but, unlike the hyper-modernists, they are deeply concerned about the implications. They concentrate on the negative effects of information technology-driven change, claiming that malign governments holding the technological reins will drive us to the ‘control state’. Beniger (1991:388), for example, claims that the ‘progressive convergence of information-processing and communications technologies in a single infrastructure of control’ is sustaining the ‘Control Revolution’: ‘a concentration of abrupt changes in the technological and economic arrangements by which information is collected, stored…processed and communicated and through which formal or programmed decisions might effect societal control’. Others see information technology bringing about the new Leviathan, ‘“integrating” the state through the backdoor of information management’ (Lenk, 1994:313).
Burnham (1983) sees a causal relationship between new technology, state tyranny and the undermining of democracy: ‘the spread of computers has enabled the National Security Agency to dominate our society should it ever decide such action is necessary’ (Burnham, 1983:144). Warner and Stone (1970) describe ‘the onrush of technology and the threat to freedom’ through the computers and the storage of information by state organisations in Britain and America. Other writers predict that computers are leading us to a ‘cyborg world’ (Levidow and Robins, 1989) in which ‘infotech systems promote new models of rationality, cognition and intelligence’, or a ‘military revolution’ (Goure, 1993). Such accounts are derived from military world-views and founded on the pursuit of a logic of total control, both internal and external: ‘The military information society involves internalizing a self-discipline, technologies of the self, in ways that come to be seen as normal, rational, reasonable’ (Levidow and Robins, 1989:8). Such writings belong to a wider anti-utopian literature that also has its roots in the Weberian tradition. Just as Weber feared unbridled bureaucratic
domination, Kafka (1925), Orwell (1954) and Huxley (1932) warned of a rule of impersonal officialdom. For Orwell and Huxley, such a rule was disastrously strengthened by technological advance. The human ‘machine’ of Weberian bureaucracy would be dehumanised, first by the systematisation of human procedures, followed by the replacement of humans with automated machines.

So if the hyper-modernists are wrong, we must ask if the anti-modernists are right; is information technology leading either the US or Britain towards ‘the Control State’? Chapter 1 demonstrated that innovation and transformation seem to be at their peak within the authority-wielding departments and agencies. But increasingly customised information systems, presenting greater and greater possibilities for sophisticated control systems, seem unlikely to result. There remains a possibility, in the same way that bureaucracy presents a possibility of a totalitarian state; totalitarian governments of the past have not allowed lack of information technology to restrain their activities. Wright (1998) presents a ‘worst case’ scenario, where a ‘range of unforeseen impacts are associated with the process of integrating these technologies into society’s social, political and cultural control systems’, one of which is the ‘militarisation of the police and the para-militarisation of the army as their roles, equipment and procedures begin to overlap’ (Wright, 1998:4). But to transform this possibility into a probability is a task that is not being addressed by either government. Both the US and UK governments have possessed de facto national databases since the 1960s and information technology has dramatically influenced potential capacity to wield authority (especially through policing strategies) in specific ways since that time. But the ‘Control State’ seems an unlikely consequence of computerisation, for two reasons.
First, although computer systems provide new opportunities for governments to take a ‘government-wide’ approach, in general such an approach would be out of line with administrative trends in either country, and neither government is using computer systems to embark on such a project. The creation of government-wide financial management systems and a new budgeting system in the US was the most notable development of this kind, hardly a radical move towards a fully integrated central state. Second, difficulties in integrating computer systems and linking them together seem likely to remain an obstacle to government-wide systems of control. Databases seem set to remain de facto, especially given that increasingly they are developed, operated and maintained by multinational companies with little incentive to share information. Furthermore, government holds no monopoly on technological innovation. As the example of tax evasion in the US showed, technologically sophisticated, control-wielding agencies encourage the tactics of ‘smart citizens’ (as predicted by Frissen, 1994a), in turn necessitating further (and more difficult to attain) technological efforts from government agencies.

Thus information technology emerges from this study as more of a control problem than a control solution. Administrative trends in both
countries—decentralisation, fragmentation, contracting out and privatisation—seem likely to ensure that it remains so.

Critical modernists

In contrast to the dramatic claims of the hyper-modernists and anti-modernists, the critical modernists take a more careful and realistic approach and focus on government, rather than society in general. An early example, and perhaps the initiator, of a tradition of ‘critical modernist’ writing was a report on the current and future impact of computers and new communications technology on French society, commissioned by President Giscard d’Estaing in 1976. A bestselling book was published from the report, written by Nora and Minc (1981), called The Computerization of Society (L’Informatisation de la société); it outlined the challenges presented by the spread of computers, considered the development of computer systems within government departments and asked the question, ‘Will a computerized society be a society of cultural conflicts?’ (Nora and Minc, 1981:126–33). Nora and Minc criticised previous post-industrialist approaches:

The information society does not fit these analyses and predictions. Going beyond the world of production, it fashions its new requirements according to its own plan, its own regulatory patterns, and its own cultural model. It is the locus of an infinite number of decentralized, unexpressed conflicts that do not respond to a unifying analysis.
(Nora and Minc, 1981:134)

They concluded that the future was uncertain; the creation of a computerised society would be a complex process, requiring a serious investment of time and reciprocal learning, operating through generations. The French title of Nora and Minc’s book was used by a school of researchers in the Netherlands who changed the ‘s’ to a ‘z’ in ‘L’informatisation’ and developed the concept of ‘informatization’, a cover term for ‘a multitude of related phenomena’ (Frissen et al., 1992:1).
In the Dutch approach, ‘informatization’ includes the following elements (from Frissen et al., 1992:1–2):

• the introduction of information technology in order to take care of or shape the process of information supply by means of automated information systems;
• the arrangement and re-arrangement of flows of information and information relations for the sake of administrative information supply;
• the adjustment or change of the organisational structure in which information technology is introduced;
• the development of information policy as a differentiated policy in the organisation; and
• the introduction of specific expertise in the field of information technology through officials with assignments in this field.

As can be seen from the above definition, ‘informatization’ is a term with bold aims. An extensive body of detailed and thoughtful work has emerged from their research projects, usefully summarised in Snellen (1994). In the US, the URBIS group at the University of California in Irvine followed suit: a group of information systems specialists who focused on computing in government, especially local government, both within the United States and other countries, publishing in Telecommunications Policy, Administrative Science Quarterly, Communications of the ACM and (less frequently) Public Administration Review. In a substantial longitudinal study of computerisation in 42 ‘leading-edge’ cities in the US in 1976 and 46 such cities in 1988, Northrop et al. concluded that expected gains from automation had been slow to emerge. While improvements were demonstrated by a few leading-edge cities, and applications with a large record-keeping component (such as financial, personnel and land records systems) were likely to deliver benefits,

[M]ore sophisticated applications, such as those information systems which set and measure goals whether internally (to departments) or externally (to larger community planning decisions), appear to be a long way away from delivery of major payoffs if, in fact, the political nature of these application tasks will ever allow them to be dramatically affected by computerization.
(Northrop et al., 1990:512)

Behavioural accounts such as these contrast with the Dutch literature, suggesting that ‘the introduction of computer communications, command, control, information, and intelligence-oriented technologies into the policy-making process is unlikely to alter its fundamentally political character’ (Dutton, 1987:188).
However, Dutton also argues that politics in the information age might be fundamentally different from politics of earlier eras:

It seems that those who know and understand the new information technologies will increasingly be involved in the political process.…One of the most critical problems that this trend poses for a pluralistic society is the increased importance that this technological change places on the organization and political resources of groups and interests in the society. Groups and interests that cannot marshal the resources and expertise to effectively participate in the process of defining reality, forecasting the consequences of alternative policies, and defining alternative policies, are unlikely to be well served by the new politics of the information age.
(Dutton, 1987:188–9)
The work of the Irvine group has been corroborated by a research group on information technology in public administration set up in Germany in 1972 as ‘part of an endeavour to establish a curriculum of applied informatics in the field of law and public administration, with the intention to counterbalance the already strong trend towards a computer science conceived of like a classical engineering discipline’ (Lenk, 1992:2). The work of the ‘Kassel group’, compiled over 20 years in over 50 research reports, includes descriptions of everyday operations in major agencies in taxation, social security and local government; it is ‘unprecedented’ and ‘finds no parallel elsewhere in German administrative science’ (Lenk, 1992:3). The results of this ‘impact’ research ‘confirm those obtained by the Irvine research group: IT has tended to reinforce existing power relations in organisations’ (Lenk, 1992:3). A study group of researchers in Britain also followed the critical modernist school. The work of this group (summarised by Pratchett, 1993) includes a collection of useful case studies of public sector computerisation in the UK (for example, Dyerson, 1989, 1992; Dyerson and Roper, 1990a, 1990b, 1992; Bellamy, 1993a, 1993b; Bellamy and Henderson, 1991; Margetts, 1991; Margetts and Willcocks, 1992, 1993, 1994; Buckingham and Wyatt, 1992; Collingridge and Margetts, 1994; Keen, 1994; Pratchett, 1994; Bellamy and Taylor, 1995). In contrast to the findings of the American and German researchers, some of the group’s key members have come to see information technology as the central force in transforming public administration products, organisations and processes, with enhanced information flows and information technologies facilitating the ‘information polity’ (Taylor and Williams, 1991; Bellamy and Taylor, 1994, 1998).
The ‘information polity’ is a ‘matching concept’ to that of the ‘information economy’ and other constructs which analyse post-industrial shifts in the economic base of societies towards services and away from manufacturing, and emphasise the contrast between production-line technology and new information and communications technologies. The compelling search for centrality of information technology has led some of these writers to ascribe a strong technological push to governmental change of all kinds. Bellamy and Taylor (1994) even argue that the ‘New Public Management’ (NPM: the name commonly given to a cohort of managerial changes sweeping many OECD countries, extensively discussed within public administration) is to be understood as a subset of changes stemming from ‘informatization’. They argue that:

NPM can be interpreted as a special and prominent case of an attempt to deliver the transformational properties of informatization. Full-blown NPM is an information-intensive reform of the structures and processes of governance, demanding new and complex horizontal and vertical flows of information in and around government organisation.
(Bellamy and Taylor, 1994:26)
Similarly, Taylor (1997) claims that the ‘informatization’ approach is useful to ‘describe and classify contemporary administration by reference to a new technological wave, that of information and communication technologies and, most importantly, a new wave of innovations, potential and actual, around the development and use of information and communication’. In part, the tendency of the above analyses to allocate information technology a central role in public management change is due to the fact that they all start from a ‘technological determinist’ assumption:

the widely-held commonsense belief…which holds that technical change is a prime cause of social change, and that technical innovations are themselves ‘uncaused’—in the sense that they arise only from the working out of an intrinsic, disembodied, impersonal ‘logic’, and not from any ‘social’ influence.
(Edge, 1995:14)

While the writing is littered with disavowals of technological determinism (see, for example, Bellamy and Taylor (1994:27); the exception is Frissen (1994a, 1994b, 1995, 1996a, 1996b), who enthusiastically embraces the phenomenon), the idea that information technology predetermines public management change is evidently determinist. The same deterministic push is evident in the claims of practitioners, especially in the United States. After the National Performance Review was instigated in 1993, the Tofflerian ‘Third Wave’ crashed into Washington politics, with Newt Gingrich, then Republican Leader of the House of Representatives, enthusiastically promoting Toffler’s vision. The Clinton administration were soon claiming that the National Performance Review was also based on such a vision and introduced a second wave of reforms reflecting Toffler’s faith in ‘federalization through information technology’. As the New Yorker put it: ‘with the Clinton administration dancing to the Toffler’s tune the pace is getting frantic’ (New Yorker, February 1994).
Toffler’s fundamental and determinist assumption that decentralisation made possible through information technology actually causes governmental decentralisation was thereby influencing political decision-makers. While evidence from this book seems to support some of the conclusions of such writers as Dutton (noted above), the idea that information technology has somehow ‘caused’ public management change during the time period covered is more problematic. The contradictions inherent in such an idea can be illustrated by looking at each of the NPM-related trends and identifying how information technology development has interacted with them. Hood (1994:129–30) summarises the doctrines of NPM as more emphasis on:

• disaggregating public organisation;
• formal competition;
• the adoption within the public sector of private sector management practices;
• hands-on management;
• setting explicit and measurable performance standards for public organisations; and
• controlling public organisations by preset output measures.

The first three represent an attempt to reduce public service distinctiveness; the second three indicate the reduction of the emphasis on the doctrine of public management according to system-wide rules of procedure. What this book shows is that each of these elements of NPM doctrine is at least as much contradicted as it is supported by information technology developments in government in practice.

Disaggregation of public organisations

This represented an attempt to ‘unbundle the public service into corporatised units, organised by product’ (Hood, 1994:130). In Britain information technology provision appears to have been identified as a ‘product’ by default through implementation of the Next Steps programme, with information technology operations centralised in agencies within the Department of Social Security, the Inland Revenue and the Vehicle Licensing Office. Such organisations represent a greater centralisation of the information technology processing function than has been observed in most private sector companies. When such organisations are privatised through sales to large multinational companies such as Electronic Data Systems, further centralisation occurs, with contract management divisions representing the crucial link through which decision-making within the remnants of the organisation must pass. In the US the retention of large-scale systems divisions, the adoption of huge systems integration contracts, and the eventual centralisation of the procurement function in response to years of contracting out have had a similar effect.

Competition

Through its identification as a prime candidate for Market Testing, information technology does open up tranches of public sector work to wider competition.
Certainly computer contracts continue to be awarded on a competitive basis in the US and Britain. But the tendency for the size of computer contracts to increase, the size of computer companies able to compete for government business to increase, the trend towards strategic partnerships and alliances, and the regulatory response that information technology appears to necessitate, all cast doubt on the competitiveness of computer contracts in the future. In the US, ironically, the introduction of legislation specifically aimed at increasing competition in computer contracts has increased a
tendency for them to be non-competitive. In the early days of computing there was a tendency for the purchase of information technology products to be non-competitive due to the immaturity of the market. But in the 1990s, the proportion of computer products (i.e. services as well as goods) that are purchased from private sector companies is very much greater.

Adoption of private sector management practices

On the one hand, by pulling into the bureaucracy an array of new actors in the form of computer companies and consultants, information technology can be seen literally to introduce private sector management techniques into public sector organisations. That is, private sector personnel are carrying out governmental tasks. But on the other hand, organisational structures for managing information technology have diverged from private sector practice, with the centralisation and demarcation of information technology as a discrete area of activity unrelated to the rest of the organisation. In both countries outsourcing techniques in government agencies are also diverging from those in the private sector, with a higher percentage of operations outsourced than in private sector organisations and different approaches to contract specification. There is little sign that the problems with government procurement witnessed in the military agencies for many years have been overcome, with the result that the distinctiveness of public sector contracts will ensure the distinctiveness of public sector organisations in the future.

Emphasis on ‘hands-on’ management

This implies more freedom to manage by discretionary power. In Britain the centralisation of computer systems within the Department of Social Security and the Inland Revenue works against the discretionary power of managers within these departments. Local office managers in both organisations are reliant on centrally developed, managed and operated systems.
When users at the local level identify problems or areas for modification, then such centralisation increases the number of organisational clearances they must go through to request change. In the case of the Inland Revenue, the number of organisational clearances is further increased after the signing of the contract with Electronic Data Systems, whereby every unit within the organisation must go through the contract management office to request changes or new work. Computerisation has introduced an obstinate thorn in the side of managerial autonomy, as all operational divisions are reliant on the work of technical divisions developing computer projects. Computerisation has also engendered organisational conflict between technical and business operations; this is a cumulative process if computer projects are unsuccessful, as the experience of the US Internal Revenue Service illustrated.


Explicit and measurable performance standards

Information technology is widely and correctly perceived as dramatically increasing the ability to produce and proliferate performance measures. Hood (1994:131) states that the emphasis towards setting explicit and measurable performance standards is ‘a shift away from the tradition of “high-trust” relations within public organisations towards a “low-trust” arms-length style’. Other authors too have categorised the version of NPM practised in Britain during the 1980s and 1990s and in the US during the 1980s as ‘low trust’, with a ‘higher-trust’ version developing through the National Performance Review in 1993 (Margetts and Dunleavy, 1994). However, the types of contract relationships currently developing in the US and UK with systems integrators involve a high degree of trust, introducing contradictions with the ‘low-trust’ contracting procedures still used in both countries. Such contradictions have had to be acknowledged by the Inland Revenue, whose relationship with EDS in the future involves both ‘high-trust’ and ‘low-trust’ arrangements at different organisational levels.

Output measures

This shift ‘reverses the PPA doctrine of the classified public service, with a common and uniform pay matrix based on general rank or educational qualifications’ (Hood, 1994:132). The management of information technology projects within the British government presents a picture similar to that of personnel management, with strong central control and standardisation giving way to lack of control and destandardisation through the 1970s to 1990s. But the lack of measurable output from information technology expenditure poses particular challenges to the notion of output measurement. Of the four major computerisation projects investigated in Chapters 3 to 6, only one could be cost-justified.
Commercial organisations have been facing the fact for many years that second-stage information technology investments are difficult to cost-justify, and companies in the later stages of information technology investment assess their information technology on the basis of comparative advantage, through the creation of technological infrastructures. Thus information technology investment poses real dilemmas for public sector managers seeking output measures, yet forced to judge each computer initiative on its potential savings.

In summary, the evidence in this study presents a powerful challenge to those critical modernists who use the term ‘information polity’. Administrative reform seems to have influenced government information technology more than information technology has influenced administrative reform. Rather than strengthening the prevalent modernist assumptions of public administration, information technology exhibits a tendency to work against contemporary administrative trends.


Postmodernists

Finally, information technology has caught the attention of another group of writers who view it as an essential element of a ‘postmodern’ society. Some of the more pessimistic of these have characterised the ‘military information society’ as postmodernist: ‘current US Defense policy is creating a postmodern army of war machines, war managers and robotized warriors’ (Gray, 1989:44). But any close inspection of these writings shows that no analytic distinction is made between information technology and any other modernising technology or even management technique: ‘The post-modern enlisted soldier either is an actual machine or will be made to act like one through the psychotechnologies of drugs, discipline and management’ (Gray, 1989:44).

A more optimistic analysis of the relationship between information technology and postmodernism is provided by the Dutch writer Paul Frissen (1994a, 1994b, 1995, 1996a, 1996b), who moved from an earlier critical-modernist stance to claim postmodernist credentials, arguing that in a ‘postmodernized’ public administration ‘fragmentation will lead to an emancipation of the bureaucratic organization—beyond central control’ (Frissen, 1995:9). According to his account, information and communication technologies have played a vital part in this change: time and space have become less significant as organisational factors and ‘this relativizes progress, so characteristic a concept of modernization’ (Frissen, 1995:9).

For organisations this facility means a strong increase in fluidity and flexibility. Essential to politics and public administration is that through ICT and some of the transformational aspects of new conceptions of steering the notion of a mono-centric world disappears. This holds for society as a whole, but also for the anthropo-centric assumptions of man in this world. The ‘homo politicus’, the ‘citoyen’ no longer is the dominant actor.
Systems are becoming more and more intelligent and at a growing pace are better at several things. ICT being an intellectual technology, this has far-reaching implications for our notions of sovereignty and self-determination. Reality is the unintended result of decisions, increasingly taken by machines.
(Frissen, 1995:10–11)

In this ‘virtualized reality’, ‘boundaries of organizations, as markings of physical and bureaucratic identity, become less important. Relational patterns with other actors do not just become increasingly important, but substitute organizations’ (Frissen, 1994b:277).

Frissen’s combination of postmodernism and ‘informatization’ follows an earlier alliance between public administration and postmodernism. Clegg (1990) is often cited as the most relevant postmodernist text for researchers in public administration (for example, Rhodes, 1995:4). Although Clegg does not explicitly mention either computers or information technology in his
description of postmodernist organisations, he does contrast modernism and postmodernism in terms of different usage of technological developments: ‘Where modernist organization was premised on technological determinism, post-modernist organization is premised on technological choices made possible through “de-dedicated” micro-electronics equipment’ (Clegg, 1990:181). Such an assertion suggests a straightforward linkage between epochs and technological development, which in itself might be open to accusations of technological determinism. But the critique of modernist assumptions (for other examples, see Lyotard (1984); Bernstein (1991); Lyon (1994:19–36)) in terms of their technological determinism suggests that postmodernism might have something to offer the study of information technology and governmental organisations.

So does the evidence presented here support a postmodernist analysis of the computerised state? To the extent that difference remains important and no overarching paradigm may be identified, then postmodernist analyses of the computerisation era are correct. The problem with refuting or supporting the claims of the postmodernists is that there is little consensus among such claims, which range from the strongly pessimistic to the vibrantly optimistic. But there is little evidence in the US or Britain to support Frissen’s (1995, 1996a) assertion that ‘the pyramidal nature of public administration’ has changed into ‘an archipelago of network configurations’. Where network configurations have developed, they do not appear to be technology-driven; indeed the organisational configurations resulting from agencification in Britain can be seen to work against the development of integrative computer systems, for example in the Benefits Agency. In general, those charged with developing computer systems carry out, as far as they can, what those at the top of the policy hierarchy direct.
If the situation that Frissen envisages should emerge in the future, it will be multinational corporations, rather than networks of government officials, that shape processes. The blurring of organisational boundaries which Frissen claims ‘will occur both in structural and in cultural terms’ is slow to start, despite the evidently beneficial effects of inter-agency co-ordination over computer projects. A ‘client-oriented’ focus is more difficult to achieve where responsibilities are decentralised. Organisational boundaries have proliferated, but have not blurred. The agencification of central departments in Britain appears to mean that the traditional vertical lines of control in public administration now run through departments as well as between them, reinforcing previous fissures such as teams of personnel organised by benefit in the DSS. Ironically the US Social Security Administration, operating an entirely centralised systems division, has come nearer to the ‘whole person’ concept than the UK Benefits Agency, due to an almost accidental relationship between information technology development and previous administrative reorganisation. Often decentralisation and fragmentation have been intended to create semiautonomous units in competition with other public or private sector organisations, making it even less likely they will embark on collaborative
computer projects. Furthermore, ‘virtualisation’ when it occurs, for example in the spatial fragmentation of local offices of the UK Department of Social Security, can have disappointing results, with organisational boundaries adding to complexity rather than becoming blurred, and predicted benefits proving almost impossible to obtain.

What public administration does to information technology

So information technology has not brought the future that the modernists expected. One reason for this mismatch between prediction and reality is the fact that as well as information technology changing public administration, public administration has changed information technology. Public sector information systems are and will remain distinctive.

Evidence has uncovered both differences and similarities in the relationship between information technology and government across countries, the tools of government and policy sectors. The effects of information technology have entered the heart of each of the ‘tools’ of government policy, but no uniform effect may be discerned. Information technology has been dribbled, like oil, into all the constituent elements of the tools of government policy, but no measurable increase in resources can be seen to result. Differences across agencies ensure that developments remain localised. While some authority-wielding agencies have been notably innovative, others have remained reliant on manual processes. While some innovations have been successful, just as many have been disappointing. There is little evidence to suggest that the Inland Revenue, the Benefits Agency, the Social Security Administration or the Internal Revenue Service have become more similar owing to the influence of information technology.
The experiences of the four organisations involved constitute four distinct stories, with social, economic and organisational factors continuing to influence the use of information technology, thereby ensuring that the effects of computerisation continue to vary. In all four parameters covered in Chapters 3 to 6 (viz. the organisation of information technology; contracting out; relationship with central agencies; and policy considerations), difference has been maintained. Differences in contract relationships seem likely to maintain disparity in the future, with information technology development relying crucially on a diverse array of structures for the management of contracts. The organisation of information technology has varied across departments. Differences have tended to be reinforced over time. As the Inland Revenue developed an increasingly good reputation in the eyes of oversight agencies, the Internal Revenue Service developed an increasingly bad one. In both the Social Security Administration and the Benefits Agency, technology acted as an important intermediary between organisational change in the past and
future. Organisational decisions taken before or during computerisation affected the resulting computer systems, which in turn have affected the possibilities for organisational structures in the future, thereby exhibiting evidence of ‘feedback’ loops between technology and organisations over time, as hypothesised by Edge (1995).

Computers are widely perceived as introducing certainty into organisations. The comparison of the two social security agencies illustrates how computerisation may also generate uncertainty. With the ‘whole person’ concept, the US Social Security Administration was ‘lucky’. Function-specific changes caused the creation of a computer system which was function- rather than benefit-specific; this characteristic subsequently ensured that the organisational structure was retained, making the ‘whole person’ concept a more feasible option in the future. The UK DSS was not so lucky; the benefit-by-benefit organisational structure was replicated in the computer systems developed in the 1980s. In the 1990s the computer systems constrain any change to functional organisation and the ‘whole person’ concept remains a remote goal.

Thus the experiences of the UK Department of Social Security and the US Social Security Administration demonstrate how the history of information systems dictates the future possibilities for policy innovation, creating a new form of ‘administrative inertia’ within government agencies. Innovation through information technology in the future will depend upon the ability to integrate, to co-ordinate and to develop a client focus rather than an organisational one; the computer systems in place can obstruct rather than aid this process. Thus administrative inertia can lead to policy inertia.

The globalisation of government services

So public administration continues distinctively to influence the development of the computerised state.
If difference becomes undermined, the most likely reason will be the mediation of the major computer companies. Given the trends identified in Chapter 7, the size of the companies managing either government’s computing resources seems set to continue increasing. Patrick Dunleavy has argued that radical outsourcing in the public sector, together with bureau-shaping incentives and government procurement rules, combines to ‘clear the ground for a transformation of the commodification dynamic in public services where a few large companies are able to put a proprietorial stamp on what is being supplied’ (Dunleavy, 1994:56). Dunleavy provides little empirical evidence to substantiate his argument, but EDS, now the largest information systems company in the world with a global profit of $700m, a turnover of $8500m and a market value of around $15 billion (Guardian, 17 May 1994), looks like the type of corporation set to fulfil such predictions. In Britain in 1997 EDS employed 4,000 staff and was running contracts for over 50 per cent of central government business. Its major customers within central government included the DVOIT, the Inland Revenue, the DSS, the Home Office and the MoD.


In any major policy innovation concerning the Inland Revenue and the Department of Social Security, it seems likely that EDS might be able to place a ‘proprietorial stamp’ on subsequent computer systems. The company has every incentive to introduce standardised solutions developed in other countries, perhaps other governments, and seems likely to retain the upper hand in terms of technological expertise, enabling it to make other options appear prohibitively expensive. Already in 1995 there were signs that the Inland Revenue/EDS partnership had run aground on the problems of Self Assessment, with a Minister admitting that the Inland Revenue had been forced to pay an additional £250 million for costs not covered by the contract (Hansard, vol. 263, 3 July, 1995: Column 20) and press comments that EDS were proposing to overcome problems with an ‘off-the-shelf’ package that would not be integrated with the Revenue’s existing computer systems (Computer Weekly, 27 April 1995; Independent, 7 July 1995). As one witness to the Social Security Committee (5 November 1991:28) observed with reference to the Operational Strategy:

In a system of private sector consultants designing software, the Department is keen to identify the cost of doing a particular thing in a particular way. Software writers can put an estimate to the Department of what the cost is of amending that version and the Department looks at whether it is worth making a change to the software. It can conduct a cost benefit analysis and it may well decide in the circumstances it is not worth the effort, or the expense does not justify it. I am not saying the Department would ignore policy questions. I am saying this changes and will inevitably change the whole way you look at the system. Computerisation puts us in a new era.
If government is to develop a role as a ‘proactive, strategic planner and service innovator’ in this new era, then more attention will need to be paid to the lessons available from the history of government computing. And legislators’ current interest in information technology will need to extend into bureaucracies themselves, recognising that information systems are not discrete entities that may be selected from competing solutions but rather inextricably intertwined with policy-related tasks.

The form of globalisation observed here is not technologically determined as some of the hyper-modernists predicted. As governmental organisations have fragmented, computer companies have increased in size, increasing the steering difficulties and offering companies more possibilities for retaining advantages in technological expertise and control over government computer systems. The changing shape of the computer services market supports existing evidence that technological development has been a ‘crucial independent variable in accounting for outcomes’ (Cawson et al., 1990:377), with technological change eroding sectoral boundaries. But differences across countries illustrate how the decision-making processes of firms and
governments continue to influence outcomes, with regulatory responses to developments differing strongly in the US and Britain. There is nothing inevitable about any globalisation process. For example, government organisations might have remained at the forefront in R&D in information technology as they were at the beginning of the 1970s. The Information Technology Office of the Inland Revenue, before the deal was struck with EDS (the ‘Rolls-Royce’ of government information technology), is an example of what government computing might look like if such a strategy had been adopted. Such an example illustrates that other scenarios were available.

Meanwhile, it is widely assumed that information technology research and development should not be the ‘core competency’ of any government agency. Such an assumption raises the question of what the core competency of the taxation or social security agencies will be in the future, given the likelihood that the number of functions carried out by private sector computer companies will increase rather than decrease. Certainly, governmental agencies are unlikely to find help from their new private sector partners in their search for core competencies. In October 1996 the magazine Wired asked John Bateman of EDS (the boss of the UK Managing Director) if he saw any role for government in the future. His reply was reported as follows: ‘“To be honest, I really struggle to come up with a clear definition of ultimately what role government has.” And then he laughs.’

If globalisation and standardised solutions start to impact upon policy change in the future, then the ‘computerised state’ will be open to the same pressures as the mass housing market in the 1960s. Dunleavy (1980:122) illustrated how in public housing during the 1950s and 1960s the activities of the top firms in the construction industry during a period of high demand produced ‘a massive shift towards mass housing solutions’.
It is too early to analyse the spread of computer contracts across the US and British central governments in these terms. But the scene, after 20 years of computerisation, is set for such a shift. With technological innovation at the sharp end of policy change in the 1990s, albeit in vastly different degrees across functional areas and organisations, the decisions of those who drive technological innovation will be crucial. Even if the giants of the 1990s computer services market fragment during the next 20 years, their presence at a particular time in the history of government computing will have made its mark. Technical obstructions to policy-making and complex government/industry partnerships (with government in a dependent role) will remain as a testimony to this period, just as high-rise housing blocks provide a visible reminder of the spread of mass-housing technologies in the 1970s. At the least, commercial pressure combined with the modernist tendencies of practitioners seems likely to work against the search for non-technical administrative solutions to policy problems. This book has shown how unlikely it is that the actors at the forefront of developing technological applications will be government organisations or people, and how limited will be the steering capacity of governmental organisations in controlling the new players in government.


The ambiguous essence of technology and the ante-postmodernist era

It seems that a modernist perception of technology has given us a misleading picture of the future of government in the ‘information age’. This section proposes an alternative view of technology borrowed from perhaps the father of postmodernism, Heidegger, who in his essay ‘The Question Concerning Technology’ (first published 1962; trans. 1977) asserts that technology, as well as a means to an end, is also something further that is extremely difficult to define. Technology is not simply a neutral tool, and the assumption that it is will cause us problems:

Everywhere we remain unfree and chained to technology whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology.
(Heidegger, 1977:4)

Just as Weber’s concern for the possible malign effects of rationalisation through bureaucratisation was often ignored by his successors, Heidegger is a thorn ready to prick the modernist balloon of optimism:

the instrumental conception of technology conditions every attempt to bring man into the right relation to technology. Everything depends on our manipulating technology in the proper manner as a means. We will, as we say, ‘get’ technology ‘spiritually in hand’. We will master it. The will to mastery becomes all the more urgent the more technology threatens to slip from human control.
(Heidegger, 1977:5)

Although technology does provide means to ends, this picture is less than complete: technology, Heidegger suggests, is more than merely a means to an end: ‘So long as we represent technology as an instrument we remain held fast in the will to master it’ (Heidegger, 1977:32).
The essence of modern technology always remains hidden, even where power machinery has been invented, where electrical technology is in full swing and where atomic technology is well under way. Heidegger thereby warns that the existence of technology in an operation is always likely to introduce some unknown element into activities, something not fully understood. Politicians are attracted by the neutral, instrumental view of technology, which is why they build an increasingly central role for it in their image of the future. In contrast, an ‘ante-postmodernist’ approach takes Heidegger’s view of technology and accepts from postmodernism the critique of
modernism, disregarding the modernist assumption that society (and government) are embarked on a continuing process of progress and modernisation, driven since the advent of computers by ever-increasing technological advance. While such advance undoubtedly and uncontroversially facilitates change, such changes will not necessarily happen in a uniform way and some will not happen at all. It is not ‘just a matter of time’.

In the ante-postmodernist era, information systems are a vital and changing part of any organisation, just as people are. What distinguishes the technological choices that organisations make is not the extent to which they are ‘modern’. A variety of decisions, organisational arrangements, contract relationships and historical contexts will determine the type of technological infrastructure that organisations operate. The use of the term ‘ante-postmodernism’ is intended to reject modernism without embracing the rather confusing philosophical associations of postmodernism (especially deconstructive postmodernism), in which:

If there were many modernisms, there have also been just as many postmodernisms, as we tried—and try—to shape, define, characterize and interpret the indeterminate, pluralistic, ever more globalized period in culture from 1945 on.…Indeed, the problem that now descends on our cultural historians is how to define, and what to call, the phase that follows, is already beginning to follow.
(Bradbury, 1995:766)

The inefficiency of history

Ante-postmodernism also reacts against modernist assumptions by borrowing from the ‘New Institutionalism’ the idea of the inefficiency of history.

Historically, political theory has been ambivalent about the efficiency of history. Like other social scientists, students of political development have been inclined to accept an idea of progress, the more or less inexorable historical movement toward some more ‘advanced’ level.
At the same time, political histories have often emphasized the unique significance of a particular sequence of events or choices, the impact of a particular campaign strategy or speech, or the particular tactics of international negotiation. In modern usage, the terminology of progress has been largely replaced by a terminology of survival, but for the most part, in contemporary theoretical political science, institutions and behaviour are thought to evolve through some form of efficient historical process.
(March and Olsen, 1984:737)

In contrast, they argue:


History cannot be guaranteed to be efficient. An equilibrium may not exist. Even if there is an equilibrium, historical processes can easily be slow enough relative to the rate of change in the environment that the equilibrium of the process is unlikely to be achieved before the environment, and thus the equilibrium, changes.
(March and Olsen, 1984:737)

By assuming quickness, theories of political behaviour avoid focusing on transient phenomena that might be less predictable and more subject to effects from the details of the processes involved. Modernist predictions have coloured our perception of the history of the computerisation of the state. This book has endeavoured to take account of the ‘inefficiency of history’ by looking at change over a long time period, 25 years. Investigating change over a long time period has shown that technical decisions made at one time influence policy options in the future, introducing an ‘inertia’ to administrative operations similar to the ‘policy inertia’ observed in taxation policy-making by Rose and Karran (1987). There is little evidence that the four agencies studied in detail here are embarked on some kind of progress towards a modernised, up-to-the-minute state. Rather they have become involved in a process of continual innovation, dependent upon technology, but not in complete control of it, a condition that looks more like art than science. No existing approach will suffice to encompass the relationship between computerisation and government. Thus the term ‘ante-postmodernism’ is adopted here to reject modernist approaches; to capture the virgin moment before postmodernism started.

A second assumption has been that the interaction between technology and government is much more complex than any analysis based on technological determinism can provide. Some sociologists are now turning their attention to the way society and the economy shape technology and developing models for the integration of ‘shaping’ and ‘effects’ studies.
As Edge (1995) points out, the social effects of technical change are of obvious importance but more fundamentally,

if research is restricted to questions of effects, it can contribute only to what may be called ‘reactive’ policy measures designed to cope with, or adapt to, the consequences of technical change, rather than anticipating (and so influencing) these consequences.
(Edge, 1995:26)

The technology has its own autonomous logic, but there is a complex interaction between the technology and the external context; social factors shape the technology, which in turn shapes the social environment and there are complicated feedback loops between the two. Most writers on ‘informatization’ have concentrated on the latter; this book has incorporated the former, investigating the production of information systems in, rather
than solely their consumption by, public administrators. In this sense the approach is in line with the postmodernist, moving away from technological determinist assumptions.

Thus the ante-postmodernist approach recognises the benefits that postmodernism has to offer: a rejection of technological determinism and a critique of modernist assumptions. However, it steers away from the 'wide abundance of meanings' with which postmodernism has been endowed until eventually, 'with much of its sense of immediately postwar ideological and existential anxiety gone, postmodernism became an open definition…a fate or a generalized condition' (Bradbury, 1995:769–70). By examining the past, thereby rejecting the ahistoricism of postmodernism, it has been possible to assess the character of an era in which governments find themselves after many governmental functions have been automated.

So what will the ante-postmodernist era be like? It may seem unlikely that information systems could ever invoke the same pride and affection as that accorded by many commentators to the traditional Civil Service (for example, see Chapman and O'Toole, 1995). But in the early days of computing, enthusiasm and enjoyment marked the development of computer systems for government. As one US federal computer consultant put it:

I've been on the scene long enough to have witnessed most of this evolution and rubbed shoulders with the great and near-great of the golden era of computer development of the 1950s and 1960s. Believe me, there was nothing grim or apocalyptic about what went on in those days. The work was frenetic, confusing, fast-moving and downright fun. A host of uninhibited and creative technological and entrepreneurial personalities gravitated to the field and, by ginger, they lived fast, they lived dangerously and more than we could imagine at the time, they built an industry.
(Robert Head in Government Computer News, 22 June 1992)

If government computer systems had reached the same level of automatic competence and fulfilment of functions as stereo systems and microwave ovens, then a continual state of development would be unnecessary. But the stories presented here show how the computerisation of government remains a challenge that cannot simply be met. We need to recapture the spirit of creativity and innovation of the early days of computers, devoting resources to the task and rescuing information technology from a false image of rationalising drudgery.

If we dispose of the technological determinism of the modernists, the utopianism of the hyper-modernists and the cultural pessimism of the antimodernists; if we pay heed to Heidegger's warnings of the dangers of viewing information technology purely as a means to an end, while resisting the antihistoricism of the postmodernists, we can restore the future of government to its proper place in human perception. A more hopeful vision of the state of
the future will result: less susceptible to irrational fears, impossible dreams and disappointed expectations. As the US Office of Technology Assessment (OTA, 1986:98) noted with respect to technological choices that the US Social Security Administration was making in the 1950s and 1960s:

What needs stressing is how much such decisions were a matter of art rather than science. In the 1950s and 1960s many Federal agencies mastered that art and were at the forefront of successful information technology applications.

There is little evidence that things have changed as dramatically since OTA made this observation as many people would assume. Forty years later, in the 1990s, such decisions still seem to be in the realm of art as well as science, relying on social, cultural and historical factors which are intertwined with technical decisions. Perhaps, rather than assuming that the problems associated with information technology are a temporary delay in progress to the fully automated state, it should be recognised that such decision-making will always be a matter of art rather than science:

Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. Such a realm is art.
(Heidegger, 1977:35)

References

Adler, M. and Sainsbury, R. (1988) Putting the Whole Person Concept into Practice—Final Report Parts I and II, Report to the DSS (Edinburgh: University of Edinburgh).
Angell, I. (1995) 'Winners and Losers in the Information Age', LSE Magazine, Centenary Issue.
Armstrong, W. (1971) 'The Civil Service Department and Its Tasks' in R.Chapman and A.Dunsire (eds) Style in Administration: Readings in British Public Administration (London: Allen & Unwin).
Ascher, K. (1986) 'Contracting Out in Local Authorities and the National Health Service: Developments under the Conservative Governments 1979–1985', PhD thesis, F6783, London School of Economics and Political Science.
Avgerou, C. (1989) 'Information Systems in Social Administration: Factors Affecting their Success', PhD thesis, F6336, London School of Economics and Political Science.
Barras, R. and Swann, L. (1985) The Adoption and Impact of Information Technology on UK Local Government (London: The Technical Change Centre).
Beaumont, P. (1992) Public Sector Industrial Relations (London: Routledge).
Bekkers, V. (1993) 'New Forms of Societal Steering and Informatization: The Case of the Sheltered Working Places', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, London, 30 April.
Bell, D. (1973) The Coming of Post-Industrial Society: A Venture in Social Forecasting (New York: Basic Books).
Bellamy, C. (1993a) 'Next Steps and Strategic Management: Information Systems Strategy in DSS', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, London, 30 April.
Bellamy, C. (1993b) 'The Informatization of Policy and Policy-making for Informatization: The History and Future of the Operational Strategy in the UK Department of Social Security', paper submitted to the Permanent Study Group on Informatization in Public Administration, Annual Conference of the European Group of Public Administration, Strasbourg, 7–10 September.
Bellamy, C. and Henderson, A. (1991) 'The UK Social Security Benefits Agency: A Case Study of the Information Polity?', paper submitted to the Annual Conference of the European Group of Public Administration, The Hague, 29–31 August.
Bellamy, C. and Taylor, J. (1992) 'Informatisation and New Public Management: an

Alternative Agenda for Public Administration', Public Policy and Administration, vol. 7, no. 3.
Bellamy, C. and Taylor, J. (1994) 'Theorising the Information Polity: Domains, Networks and Polities', paper to workshop on 'The New Public Management', annual joint sessions of the European Consortium of Political Research, Madrid, 25 April.
Bellamy, C. and Taylor, J. (1995) 'Transformation by Stealth? The Case of the Criminal Justice System in the UK', paper submitted to the Permanent Study Group on Informatization in Public Administration, Annual Conference of the European Group of Public Administration, Rotterdam, 6–9 September.
Bellamy, C. and Taylor, J. (1998) Governing in the Information Age (Milton Keynes: Open University Press).
Benefits Agency (1992) 'One Stop: Benefits Agency Service Delivery', Discussion paper (Leeds: Benefits Agency).
Beniger, J. (1991) 'Information Society and Global Science' in C.Dunlop and R.Kling (eds) Computerization and Controversy: Value Conflicts and Social Choices (London: Academic Press).
Berg, A.-J. (1995) 'A Gendered Socio-Technical Construction: The Smart House' in N.Heap, R.Thomas, G.Einon, R.Mason and H.Mackay (eds) Information Technology and Society: A Reader (London: Sage).
Bernstein, R.J. (1991) The New Constellation (Oxford: Blackwell).
Bradbury, M. (1995) 'What was Post-Modernism? The Arts in and after the Cold War', International Affairs, vol. 71, no. 4, October.
BSSRS Technology of Political Control Group (1985) TechnoCop: New Police Technologies (London: Free Association Books).
Buckingham, J. (1992) 'Information Policies in the United Kingdom' in P.Frissen, V.Bekkers, B.Brussaard, I.Snellen and M.Wolters (eds) European Public Administration and Informatization (Amsterdam: IOS Press).
Buckingham, J. and Wyatt, S. (1992) 'Central Government Networks in the UK and the USA: Impacts by Design', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, London, 7 July.
Bureau of the Census (1988) Factfinder for the Nation CCF No. 4 (revised) May.
Burgess, K. and Clinton, D. (1989) 'Operational Strategy Development: Department of Social Security', Public Money and Management, vol. 9, no. 4, Winter.
Burnham, D. (1983) The Rise of the Computer State (London: Weidenfeld & Nicolson).
Burnham, J. and Jones, G. (1993) 'Advising Margaret Thatcher: the Prime Minister's Office and the Cabinet Office Compared', Political Studies, vol. XLI.
Butler, R. (1995) 'Draft Opening Speech by Sir Robin Butler', 25th Annual Conference of the Public Administration Committee of The Joint Universities Council, September 1995.
Byrne, L. (1997) Information Age Government: Delivering the Blair Revolution (London: Fabian Society).
Campbell, C. (1983) Governments under Stress: Political Executives and Key Bureaucrats in Washington, London and Ottawa (Toronto: University of Toronto Press).
Caudle, S. and Marchand, D. (1990) 'Managing Information Resources: New Directions in State Government', Information Management Review, vol. 5, no. 3.
Cawson, A., Morgan, K., Webber, D., Holmes, P. and Stevens, A. (1990) Hostile

References

189

Brothers: Competition and Closure in the European Electronics Industry (Oxford: Clarendon Press).
Cawson, A., Haddon, L. and Miles, I. (1993) 'The Heart of Where the Home Is: the Innovation Process in Consumer IT Products' in P.Swann (ed.) New Technologies and the Firm: Innovation and Competition (London: Routledge).
CCTA (1991) Annual Report 1990–91 (London: HMSO).
CCTA (1994) Annual Report 1993–94 (Norwich: CCTA).
Central Information Technology Unit (CITU) (1996) government.direct: A Prospectus for the Electronic Delivery of Government Services, Green Paper (London: HMSO).
Chapman, R. and O'Toole, B. (1995) 'The Role of the Civil Service: A Traditional View in a Period of Change', Public Policy and Administration, vol. 10, no. 2.
Civil Service Department (CSD) (1971) Computers in Central Government: Ten Years Ahead (London: HMSO).
Civil Service Department (CSD) (1978) Longer Term Review of Administrative Computing in Central Government (London: HMSO).
Clegg, S.R. (1990) Modern Organisations: Organization Studies in the Postmodern World (London: Sage).
Clinton, D., Yates, M. and Kang, D. (1994) Integrating Taxes and Benefits (London: The Commission on Social Justice, Institute of Public Policy Research).
Collingridge, D. and Margetts, H. (1994) 'Can Government Information Systems Be Inflexible Technology? The Operational Strategy Revisited', Public Administration, vol. 72, no. 1.
Committee on Government Operations (CGO) (1986) Electronic Collection and Dissemination of Information by Federal Agencies: A Policy Overview, 28th report, 99th Congress, 2nd Session, House Report 99–560, 29 April.
Congressional Research Service (CRS) (1990) Information Technology Revolution, CRS Review Paper, 101st Congress, Second Session (Washington DC: Library of Congress).
Cornford, T. (1990) 'Social Information Systems: A Study of Computers, Government and Society', PhD thesis, F6709, London School of Economics and Political Science.
Department of Health and Human Services (DHHS) (1993) Social Security Client Satisfaction: Service Indicators 1993, report by the Office of Inspector General (New York: Office of Evaluation and Inspection, DHHS).
Department of Health and Social Security (DHSS) (1980) 'A Strategy for Social Security Operations', working paper (London: HMSO).
Department of Health and Social Security (DHSS) (1982) Social Security Operational Strategy: A Framework for the Future (London: HMSO).
Department of Health and Social Security (DHSS) (1983) Social Security Operational Strategy: Sunningdale Seminar December 1982 (London: HMSO).
Department of Social Security (DSS) (1988) The Business of Service: the Report of the Regional Organisation Scrutiny (The Moodie Report) (London: HMSO).
Department of Social Security (DSS) (1991) The Government's Expenditure Plans 1991–92 to 1993–94, Cm. 1514, February (London: HMSO).
Department of Social Security (DSS) (1992) The Government's Expenditure Plans 1992–93 to 1994–95, Cm. 1914, February (London: HMSO).
Department of Social Security (DSS) (1993) The Government's Expenditure Plans 1993–94 to 1995–96, Cm. 2213, February (London: HMSO).

Department of Social Security (DSS) (1994) Social Security Departmental Report: The Government's Expenditure Plans 1994–95 to 1996–97, Cm. 2513, March (London: HMSO).
Department of Social Security (DSS) (1995) Social Security Departmental Report: The Government's Expenditure Plans 1995–96 to 1997–98, Cm. 2813, March (London: HMSO).
Department of Treasury (1990) Applications of Computer Card Technology (Washington DC: Financial Management Service, Department of Treasury).
Derthick, M. (1990) Agency under Stress: the Social Security Administration in American Government (Washington DC: The Brookings Institution).
DiIulio, J. (1994) Deregulating the Public Service: Can Government Be Improved? (Washington DC: The Brookings Institution).
Dordick, H. and Wang, G. (1993) The Information Society: a Retrospective View (Newbury Park: Sage Publications).
Dowding, K. (1995) The Civil Service (London: Routledge).
Drewry, G. (1993) 'Towards a Managerial or Legalistic Style of Government', LSE Public Service Seminar Series, no. 5.
Drewry, G. and Butcher, T. (1990) The Civil Service Today (Oxford: Blackwell).
Dunleavy, P. (1980) Urban Political Analysis (London: Macmillan).
Dunleavy, P. (1994) 'The Globalization of Public Services Production: Can Government Be "Best in World?"', Public Policy and Administration, vol. 9, no. 2.
Dunleavy, P. and Biggs, S. (1995) 'Local Government Organization in a "Post-Bureaucratic" Age: A Bureau-Shaping Analysis', paper to the European Consortium of Political Research, Annual Workshops, workshop on 'The Changing Local Governance of Europe', University of Bordeaux, 27 April–2 May.
Dunleavy, P. and Francis, A. (1990) Evidence on the Next Steps Initiative, submission to the Treasury and Civil Service Sub-committee of the House of Commons (London: LSE Public Policy Group).
Dunleavy, P. and Hood, C. (1993) 'From Old Public Administration to New Public Management', LSE Public Policy Group Papers, no. 4.
Dunleavy, P., King, D. and Margetts, H. (1996) 'Leviathan Bound: A Budgetary Analysis of the US Federal State', unpublished book manuscript, Chapter 2.
Dutton, W. (1987) 'Decision-Making in the Information Age' in R.Finnegan, G.Salaman and K.Thompson (eds) Information Technology: Social Issues (London: Hodder & Stoughton in association with Open University Press).
Dutton, W., Blumler, J., Garnham, N., Mansell, R., Cornford, J. and Peltu, M. (1994) 'The Information Superhighway: Britain's Response', PICT Policy Research Papers, no. 29.
Dutton, W., MacKenzie, D., Shapiro, S. and Peltu, M. (1995) 'Computer Power and Human Limits: Learning from IT and Telecommunications Disasters', PICT Policy Research Papers, no. 33.
Dyerson, R. (1989) 'The DVLC and Technological Change: One-Time Failure, Long-Term Success', Technology Project Papers, London Business School.
Dyerson, R. (1992) 'Technology and Firm Organisation: Learning from the Private Sector', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, London, October.
Dyerson, R. and Roper, M. (1990a) 'Computerisation at the DSS 1977–81: The Operational Strategy', Technology Project Papers, no. 4, London Business School.
Dyerson, R. and Roper, M. (1990b) 'Building Competencies: the Computerization of PAYE', Technology Project Papers, no. 6, London Business School.
Dyerson, R. and Roper, M. (1990c) 'Implementing the Operational Strategy at the DSS: From Technical Push to User Pull', Technology Project Papers, no. 8, London Business School.
Dyerson, R. and Roper, M. (1992) 'Large Scale Information Systems in the UK' in P.Frissen, V.Bekkers, B.Brussaard, I.Snellen and M.Wolters (eds) European Public Administration and Informatization (Amsterdam: IOS Press).
Dyson, K. and Humphreys, P. (eds) (1986) The Politics of the Communications Revolution in Western Europe (London: Frank Cass).
Earl, M.J. (1989) Management Strategies for Information Technology (Hemel Hempstead: Prentice Hall International).
Edge, D. (1995) 'The Social Shaping of Technology' in N.Heap, R.Thomas, G.Einon, R.Mason and H.Mackay (eds) Information Technology and Society: A Reader (London: Sage).
Farnham, D. and Horton, S. (1993) Managing the New Public Services (Basingstoke: Macmillan).
Feeny, D., Edwards, B. and Earl, M. (1987) 'Complex Organisations and the Information Systems Function—A Research Study', unpublished working paper RDP 87/7, Oxford Institute of Information Management (Oxford: Templeton College).
Fesler, J. and Kettl, D. (1991) The Politics of the Administrative Process (New Jersey: Chatham House).
Foster, C. (1992) Privatization, Public Ownership and the Regulation of Natural Monopoly (Oxford: Blackwell).
Frissen, P. (1989) 'The Cultural Impact of Informatization in Public Administration', International Review of Administrative Sciences, vol. 55.
Frissen, P. (1992) 'Informatization in Public Administration: Research Directions', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, London, March.
Frissen, P. (1994a) 'The Virtual Reality of Informatization in Public Administration', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, Birkbeck College, London, 30 September.
Frissen, P. (1994b) 'The Virtual Reality of Informatization in Public Administration', Informatization and the Public Sector, vol. 3, nos 3–4.
Frissen, P. (1995) 'The Virtual State: Postmodernization, Informatization and Public Administration', paper to The Governance of Cyberspace Conference, University of Teesside, 12–13 April.
Frissen, P. (1996a) 'Beyond Established Boundaries? Electronic Highways and Administrative Space' in V.J.J.M.Bekkers, B.-J.Koops and S.Nouwt (eds) Emerging Electronic Highways (Amsterdam: Kluwer Law International), pp. 27–34.
Frissen, P. (1996b) De Virtuele Staat. Politiek, Bestuur, Technologie: een Postmodern Verhaal (Amsterdam: Academic Service).
Frissen, P., Bekkers, V., Brussaard, B., Snellen, I. and Wolters, M. (eds) (1992) European Public Administration and Informatization (Amsterdam: IOS Press).
Fry, G. (1995) Policy and Management in the British Civil Service (Hemel Hempstead: Harvester Wheatsheaf).

Garvey, G. (1993) Facing the Bureaucracy: Living and Dying in a Public Agency (San Francisco: Jossey-Bass).
General Accounting Office (GAO) (1987) Social Security Administration: Stable Leadership and Better Management Needed to Improve Effectiveness, GAO/HRD-87–39 (Washington DC: GAO).
General Accounting Office (GAO) (1988) ADP Modernization: IRS' Tax System Redesign Progress and Plans for the Future, GAO/IMTEC-88–23BR (Washington DC: GAO).
General Accounting Office (GAO) (1989) ADP Modernization: IRS Needs to Assess Design Alternatives for Its Electronic Filing System, GAO/IMTEC-89–33 (Washington DC: GAO).
General Accounting Office (GAO) (1990a) The Evolution of the General Accounting Office: From Voucher Audits to Program Evaluations (Washington DC: GAO).
General Accounting Office (GAO) (1990b) Tax System Modernization: IRS' Challenge for the 21st Century, GAO/IMTEC-90–13 (Washington DC: GAO).
General Accounting Office (GAO) (1990c) Tax System Modernization: Status of IRS' Input Processing Initiative, GAO/IMTEC-91–9 (Washington DC: GAO).
General Accounting Office (GAO) (1991a) SSA Computers: Long-Range Vision Needed to Guide Future Systems Modernization Efforts, GAO/IMTEC-91–44 (Washington DC: GAO).
General Accounting Office (GAO) (1991b) Tax System Modernization: Further Testing of IRS' Automated Taxpayer Service Systems Is Needed, GAO/IMTECH-91–42 (Washington DC: GAO).
General Accounting Office (GAO) (1991c) Tax System Modernization: an Assessment of IRS' Design Master Plan, GAO/IMTEC-91–53BR (Washington DC: GAO).
General Accounting Office (GAO) (1991d) Tax System Modernization: Issues Facing IRS, Testimony, GAO/IMTEC-91–18 (Washington DC: GAO).
General Accounting Office (GAO) (1991e) Uncertainties Surrounding IRS' Fiscal Year 1992 Budget Request for Tax Systems Modernization, GAO/T-IMTEC-91–4 (Washington DC: GAO).
General Accounting Office (GAO) (1992a) Update on Critical Issues Facing IRS (Statement of Howard G.Rhile, Director, General Government Information Systems, Information Management and Technology Division). Testimony before the Committee on Governmental Affairs, United States Senate, GAO/T-IMTEC-92–18 (Washington DC: GAO).
General Accounting Office (GAO) (1992b) Tax Systems Modernization: IRS Could Have Avoided Successful Protests of Major Computer Procurement, GAO/IMTEC-92–27 (Washington DC: GAO).
General Accounting Office (GAO) (1992c) Tax Systems Modernization: Concerns over Security and Privacy Elements of the Systems Architecture, GAO/IMTEC-92–63 (Washington DC: GAO).
General Accounting Office (GAO) (1992d) Comptroller General's 1992 Annual Report (Washington DC: GAO).
General Accounting Office (GAO) (1992e) Tax Administration: Opportunities to Reduce the Burden of Filing and Processing Tax Returns, Statement of Hazel E.Edwards, Associate Director, Tax Policy and Administration Issues, General Government Division. Testimony before the Subcommittee on Commerce, Consumer and Monetary Affairs, Committee on Government Operations, House of Representatives, GAO/T-GGD-92–41 (Washington DC: GAO).
General Accounting Office (GAO) (1992f) Tax Systems Modernization: Input Processing
Strategy Is Risky and Lacks a Sound Analytical Basis, Statement of Howard G.Rhile, Director, General Government Information Systems, Information Management and Technology Division. Testimony to the Subcommittee on Commerce, Consumer and Monetary Affairs, Committee on Government Operations, House of Representatives, GAO/T-IMTEC-92–15 (Washington DC: GAO).
General Accounting Office (GAO) (1993a) Tax Administration: Status of Tax Systems Modernization, Tax Delinquencies, and the Tax Gap, GAO/T-GGD-93–04 (Washington DC: GAO).
General Accounting Office (GAO) (1993b) Tax Administration: IRS' Budget Request for Fiscal Year 1994, GAO/T-GGD-93–23 (Washington DC: GAO).
General Accounting Office (GAO) (1993c) Social Security: SSA Needs to Improve Service for Program Participants, GAO/T-HRD-93–11 (Washington DC: GAO).
General Accounting Office (GAO) (1993d) Social Security: IRS Tax Identity Data Can Help Improve SSA Earnings Records, GAO/HRD-93–42 (Washington DC: GAO).
General Accounting Office (GAO) (1993e) IRM-Related Action Items in the NPR Report, internal document provided to author by GAO.
General Accounting Office (GAO) (1994) Examination of IRS' Fiscal Year 1993 Financial Statements, GAO/AIMD-94–120 (Washington DC: GAO).
General Accounting Office (GAO) (1996) Tax Systems Modernization: Actions Underway but IRS Has Not Yet Corrected Management and Technical Weaknesses, GAO/T-AIMD-96–106 (Washington DC: GPO).
General Services Administration (GSA) (1993) Alternatives to Grand Design in Systems Modernization (Washington DC: GSA).
Gerth, H. and Wright Mills, C. (1975) From Max Weber (New York: Oxford University Press).
Goold, M. and Campbell, A. (1987) Strategies and Styles: the Role of the Centre in Managing Diversified Corporations (Oxford: Blackwell).
Goure, D. (1993) 'The Military-Technical Revolution', Washington Quarterly, vol. 16, no. 4.
Grace Commission (1983) President's Private Sector Survey on Cost Control: Report on Automated Data Processing and Office Automation, approved by the subcommittee for the full executive committee, Spring–Fall.
Gray, A. and Jenkins, W. (1985) Administrative Politics in British Government (Brighton: Wheatsheaf).
Gray, C. (1989) 'The Cyborg Soldier: The US Military and the Post-modern Warrior' in L.Levidow and K.Robins (eds) Cyborg Worlds: The Military Information Society (London: Free Association Books).
Gray, M.M. (1982) Federal ADP Equipment: A Compilation of Statistics—1981, NBS Special Publication 500–97 (Washington DC: National Bureau of Standards, US Department of Commerce).
Griffith, J., Ryle, M. and Wheeler-Booth, M. (1989) Parliament: Functions, Practice and Procedures (London: Sweet & Maxwell).
Grindley, K. (1991) Managing Information Technology at Board Level (London: Pitman).
Handy, C. (1991) Gods of Management: The Changing Work of Organisations (London: Business Books).
Handy, C. (1994) The Empty Raincoat: Making Sense of the Future (London: Hutchinson).
Head, R. (1982) Federal Information Systems Management: Issues and New Directions, Staff Paper (Washington DC: Brookings Institution).

Heady, F. (1996) Public Administration: A Comparative Perspective (New York: Marcel Dekker).
Health and Human Services (HHS) (1993) Office of Inspector General: Social Security Client Satisfaction: Service Indicators 1993 (New York Regional Office: Office of Inspector General, HHS).
Heap, N., Thomas, R., Einon, G., Mason, R. and Mackay, H. (eds) (1995) Information Technology and Society: A Reader (London: Sage).
Heclo, H. and Wildavsky, A. (1981) The Private Government of Public Money (London: Macmillan).
Heidegger, M. (1977) 'The Question Concerning Technology' in M.Heidegger, The Question Concerning Technology and Other Essays, ed. W.Lovitt (New York: Harper & Row).
HM Treasury (1992) Departmental Report of the Chancellor of the Exchequer's Departments: The Government's Expenditure Plans 1992–93 to 1994–95, Cm. 1918 (London: HM Treasury).
HM Treasury (1993) Departmental Report of the Chancellor of the Exchequer's Departments: The Government's Expenditure Plans 1993–94 to 1995–96, Cm. 2217 (London: HM Treasury).
Hood, C. (1983) The Tools of Government (London: Macmillan).
Hood, C. (1994) Explaining Economic Policy Reversals (Milton Keynes: Open University Press).
Hood, C. and Margetts, H. (1993) 'Informatization and Public Administration Trends: Igniting, Fuelling or Dampening?', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, London, 10 December.
Houghton, J. (1991) 'Outsourcing Information Technology Services', CIRCIT Policy Research Paper, no. 17 (Melbourne: CIRCIT).
House Committee on Ways and Means (1976) Oversight of the Supplemental Security Income Program, Hearings before the Subcommittee on Oversight, 94th Congress, 2nd session (Washington DC: GPO).
Huxley, A. (1932) Brave New World: A Novel (London: Chatto & Windus).
Information Technology Committee (1992) The Parliamentary Office of Science and Technology, First Report together with the Proceedings of the Committee, Minutes of Evidence and Appendices, Session 1991–92 (London: HMSO).
Information Technology Services Agency (ITSA) (1992) Annual Report 1991 (Lytham St Annes: ITSA).
Information Technology Services Agency (ITSA) (1993) Business Plan 1992/1993 (Lytham St Annes: ITSA).
Information Technology Services Agency (ITSA) (1995) Annual Report and Accounts 1994–1995 (Lytham St Annes: ITSA).
Ingraham, P. (1995) 'Reinventing the American Federal Government: Reform Redux or Real Change', paper to the ESRC Public Service Seminar, LSE, February.
Inland Revenue (1992) Report for the Year Ending 31st March 1992: One Hundred and Thirty Fourth Report (London: HMSO).
Internal Revenue Service (IRS) (1992) Tax Systems Modernization: Design Master Plan (Washington DC: Department of the Treasury).
International Digital Communications (IDC) (1991) The Impact of Facilities Management on the IT Market (London: IDC).
International Digital Communications (IDC) (1993) Facilities Management: The Nature of the Opportunity (London: IDC).
Kable (1994) Market Profile: Civil Service IS 1994/95 (London: Kable).
Kable (1995a) Police Market Profile: Management Summary (London: Kable).
Kable (1995b) Market Profile: Civil Service IS 1995/96 (London: Kable).
Kafka, F. (1925) The Trial (London: Pan Books).
Keen, J. (1994) 'Should the National Health Service Have an Information Strategy?', Public Administration, vol. 72, no. 1.
Keliher, L. (1988) 'Policy-making in Information Technology: A Decisional Analysis of the Alvey Programme', PhD thesis, London School of Economics and Political Science.
Kelman, S. (1990) Procurement and Public Management (Washington DC: AEI Press).
Kennedy, N. (1989) The Industrialization of Intelligence: Mind and Machine in the Modern Age (London: Unwin Hyman).
Kickert, W. and van Vught, F. (1995) Public Policy and Administration Sciences in the Netherlands (Hemel Hempstead: Harvester Wheatsheaf).
Kolb, D. (1986) The Critique of Pure Modernity: Hegel, Heidegger and After (London and Chicago: University of Chicago Press).
Kraemer, K., Dutton, W. and Northrop, A. (1981) The Management of Information Systems (New York: Columbia University Press).
Lacity, M. and Hirschheim, R. (1993) Information Systems Outsourcing: Myths, Metaphors and Realities (Chichester: John Wiley & Sons).
Lamb, G. (1973) Computers in the Public Service (London: Allen & Unwin).
Lane, J.-E. (1993) The Public Sector: Concepts, Models and Approaches (London: Sage).
Leach, S., Stewart, J. and Walsh, K. (1994) The Changing Organisation and Management of Local Government (London: Macmillan).
Lenk, K. (1992) 'Informatics and Public Administration: Towards a Research Programme', paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, 12 March.
Lenk, K. (1994) 'Information Systems in Public Administration: From Research to Design', Informatization and the Public Sector, vol. 3, nos 3/4.
Levidow, L. and Robins, K. (1989) 'Towards a Military Information Society?' in L.Levidow and K.Robins (eds) Cyborg Worlds: The Military Information Society (London: Free Association Books).
Loh, L. and Venkatraman, N. (1994) 'Information Technology Outsourcing: a Cross-sectional Analysis' in R.Galliers and B.Baker (eds) Strategic Information Management: Challenges and Strategies in Managing Information Systems (Oxford: Butterworth Heinemann).
LSE Public Policy Group (1995) Final Report on 'Cold Reviews' of National Audit Office Reports, 1 October 1994 to 30 September 1995 (London: LSE).
Luhmann, N. (1966) Recht und Automation in der Offentlichen Verwaltung (Berlin: Duncker & Humblot).
Lynn, N. and Wildavsky, A. (eds) (1990) Public Administration: The State of the Discipline (New Jersey: Chatham House).
Lyon, D. (1994) Postmodernity (Milton Keynes: Open University Press).
Lyotard, J.-F. (1984) The Postmodern Condition: A Report on Knowledge (Manchester: Manchester University Press).

196 References Mackay, H. (1995) ‘Patterns of Ownership of IT Devices in the Home’ in N.Heap, R.Thomas, G.Einon, R.Mason and H.Mackay (eds) Information Technology and Society: A Reader (London: Sage). McGregor, E. (1990) ‘The Public Sector Human Resource Puzzle: Strategic Management of a Strategic Resource’ in F.S.Lane (ed.) Current Issues in Public Administration (New York: St Martin’s Press). McLouglin, I. and Clark, J. (1995) ‘Technological Change at Work’ in N.Heap, R.Thomas, G.Einon, R.Mason and H.Mackay (eds) Information Technology and Society: A Reader (London: Sage). March, J. and Olsen, J. (1984) ‘The New Institutionalism: Organizational Factors in Political Life’, American Political Science Review, vol. 78, no.3. March, J. and Olsen, J. (1989) Rediscovering Institutions (New York: Free Press). Margetts, H. (1991) ‘The Computerization of Social Security: The Way Forward or a Step Backwards?’, Public Administration, vol. 69, no. 3. Margetts, H. (1995) ‘The Automated State’, Public Policy and Administration, vol. 10,no. 2 . Margetts, H. (1996) ‘The National Performance Review and the Future Shape of American Government’, in A.Massey (ed.) The Globalization of Government (London: Routledge). Margetts, H. and Dunleavy, P. (1994) ‘Enhancing Executive Autonomy in Central Government Systems: Comparing the National Performance Review and Next Steps’, paper to the annual joint sessions of the European Consortium of Political Research, Madrid, 15 April. Margetts, H. and Willcocks, L. (1992) ‘Information Technology as Policy Instrument in the UK Social Security System: Delivering an Operational Strategy’, International Review of Administrative Sciences, vol. 58. Margetts, H. and Willcocks, L. (1993) ‘Information Technology in Public Services: Disaster Faster?’, Public Money and Management, vol. 13, no. 2. Margetts, H. and Willcocks, L. (1994) ‘Informatization of the Public Sector: Distinctive or Common Risks?’, Informatization of the Public Sector, vol. 3, no. 1. 
Massey, A. (1993) Managing the Public Sector (Aldershot: Edward Elgar). Matheson, S. (1984) ‘Computerisation of the Pay As You Earn System’, in D.Pitt and B.Smith (eds) The Computer Revolution in Public Administration (Brighton: Edward Elgar). Meyer, O. (1992) ‘Informatization Policies in the Netherlands’ in P.Frissen, V.Bekkers, B.Brussaard, I.Snellen and M.Wolters (eds) European Public Administration and Informatization (Amsterdam: IOS Press). Miles, I. and Gershuny, J. (1986) ‘The Social Economics of Information Technology’ in M.Ferguson (ed.) New Communication Technologies and the Public Interest (London: Sage). Moe, R. (1991) ‘The HUD Scandal and the Case for an Office of Federal Management’, Public Administration Review, vol. 51, no. 4. Moe, R. (1992) ‘Reorganizing the Executive Branch in the Twentieth Century: Landmark Commissions’, CRS report for Congress, the Congressional Research Service, 19 March. Morris, P. and Hough, G. (1987) The Anatomy of Major Projects: A Study of the Reality of Project Management (Chichester: John Wiley). Muid, C. (1992) ‘The Transformation Challenge’, paper to the ESRC/PICT study

group on Information, Communications and New Technologies in Public Administration, Regents College, London, 12 March. Muid, C. (1994) ‘Information Systems and New Public Management’, Public Administration, vol. 72, no. 1. Mulgan, G. (1991) Communication and Control: Networks and the New Economies of Communication (Cambridge: Polity Press). Mulgan, G. (1997) Connexity: Responsibility, Freedom, Business and Power in the New Century (London: Chatto and Windus). National Archives and Records Administration (NARA) (1989) The United States Government Manual 1988/89 (Washington DC: GPO). National Audit Office (NAO) (1984) Administrative Computing in Government Departments, HC 259 (London: HMSO). National Audit Office (NAO) (1987a) Inland Revenue: Control of Major Developments in Use of Information Technology, HC 132 (London: HMSO). National Audit Office (NAO) (1987b) Information Technology Security in Government Departments (London: HMSO). National Audit Office (NAO) (1989a) Department of Social Security: Operational Strategy, HC 111 (London: HMSO). National Audit Office (NAO) (1989b) Appropriation Accounts 1988–89, Volume 8: Classes XIV and XV Health and Personal Social Services and Social Security (London: HMSO). National Audit Office (NAO) (1990) Appropriation Accounts 1990–91, Volume 8: Classes XIII and XV Health and Personal Social Services and Social Security (London: HMSO). National Audit Office (NAO) (1991a) Appropriation Accounts 1990–91, Volume 9: Classes XIII and XIV Health and Office of Population, Censuses and Surveys and Social Security (London: HMSO). National Audit Office (NAO) (1991b) Ministry of Defence: Support Information Technology, HC 644 (London: HMSO). National Audit Office (NAO) (1991c) Office Automation in Government Departments, HC 314 (London: HMSO). National Audit Office (NAO) (1991d) The Management of Information Technology Security in Government Departments, HC 248 (London: HMSO). 
National Audit Office (NAO) (1992) Appropriation Accounts 1991–92, Volume 9: Classes XIII and XIV Health and Office of Population, Censuses and Surveys and Social Security (London: HMSO). National Audit Office (NAO) (1993a) Computer Systems for Training and Enterprise Councils: The Department of Employment’s Management of the Field System, HC 694 (London: HMSO). National Audit Office (NAO) (1993b) Appropriation Accounts 1992–93, Volume 9: Classes XIII and XIV Health and Office of Population, Censuses and Surveys and Social Security (London: HMSO). National Audit Office (NAO) (1994) Appropriation Accounts 1993–94, Volume 9: Classes XIII and XIV Health and Office of Population, Censuses and Surveys and Social Security (London: HMSO). National Audit Office (NAO) (1995a) Entry into the United Kingdom, HC 204 (London: HMSO). National Audit Office (NAO) (1995b) Department of Transport: Sale of DVOIT, HC 128 (London: HMSO).

National Audit Office (NAO) (1995c) Inland Revenue: Market Testing the Information Technology Office, HC 245 (London: HMSO). National Audit Office (NAO) (1995d) Administration of Retirement Pensions, HC 360 (London: HMSO). National Audit Office (NAO) (1995e) Department of Social Security: Purchase of Postal and Courier Services, HC 362 (London: HMSO). National Audit Office (NAO) (1995f) Information Technology Security in Government Departments, HC 231 (London: HMSO). National Audit Office (NAO) (1996a) Change Management in the Inland Revenue, HC 140 (London: HMSO). National Audit Office (NAO) (1996b) Information Technology Services Agency: Outsourcing the Service Delivery Operations, HC 255 (London: HMSO). National Audit Office (NAO) (1997) The Contract To Develop and Operate the Replacement National Insurance Recording System, HC 12 (London: HMSO). National Performance Review (NPR) (1993) From Red Tape to Results: Creating a Government that Works Better and Costs Less (Washington DC: GPO). Copy obtained over the Internet, bound and available for page number references. National Performance Review (NPR) (1994) Status Report of the National Performance Review 1994 (Washington DC: GPO). National Performance Review (NPR) (1995a) Objectives, Principles, Approach: Phase II, February (Washington DC: GPO). National Performance Review (NPR) (1995b) Status Report of the National Performance Review 1995 (Washington DC: GPO). National Union of Civil and Public Servants and The Civil and Public Services Association (NUCPS and CPSA) (1991), Benefits Agency Service Delivery (Onestop)—A Trade Union Response (London: NUCPS). Nora, S. and Minc, A. (1981) The Computerization of Society (Massachusetts: Massachusetts Institute of Technology). Originally published as L’Informatisation de la société (1978, Paris: La Documentation Française). Northrop, A., Kraemer, K., Dunkle, D. and King, J.
(1990), ‘Payoffs from Computerization: Lessons over Time’, Public Administration Review, 50th Year. Office of Management and Budget (OMB) (1983) OMB Circular No. A-76 (Revised) Performance of Commercial Activities (Washington DC: OMB). Office of Management and Budget (OMB) (1988a) Management of the United States Government, FY 1988 (Washington DC: GPO). Office of Management and Budget (OMB) (1988b) A Five-year Plan for Meeting the Automatic Data Processing and Telecommunications Needs of the Federal Government (Washington DC: GPO). Office of Management and Budget (OMB) (1990) Current Information Resource Requirements of the Federal Government: Fiscal Year 1991 (Washington DC: GPO). Office of Management and Budget (OMB) (1991a) Information Resources Management Plan of the Federal Government (Washington DC: GPO). Office of Management and Budget (OMB) (1991b) Circular No. A-11 (Washington DC: OMB). Office of Management and Budget (OMB) (1992) Information Resources Management Plan of the Federal Government (Washington DC: GPO). Office of Management and Budget (OMB) (1993a) Management of Federal Information Resources: Proposed Revision of OMB Circular No. A-130, Transmittal 2 (Washington DC: Executive Office of the President).

Office of Management and Budget (OMB) (1993b) Current Information Technology Resource Requirements of the Federal Government: Fiscal Year 1994 (Washington DC: GPO). Office of Management and Budget (OMB) (1993c) Information Resources Management Plan of the Federal Government (Washington DC: GPO). Office of Management and Budget (OMB) (1994) Budget of the United States Government Fiscal Year 1995 (Washington DC: GPO). Office of Management and Budget (OMB) (1995) Budget of the United States Government Fiscal Year 1996 (Washington DC: GPO). Office of Management and Budget, General Services Administration and Department of Commerce (1992a) Current Information Technology Resource Requirements of the Federal Government: Fiscal Year 1993 (Washington DC: GPO). Office of Management and Budget, General Services Administration and Department of Commerce (1992b) Information Resources Management Plan of the Federal Government (Washington DC: GPO). Office of Public Service and Science (OPSS) (1991) Competing for Quality, Cabinet Office, Cm. 1730 (London: HMSO). Office of Public Service and Science (OPSS) (1993) The Government’s Guide to Market Testing (London: HMSO). Office of Public Service and Science (OPSS) (1994a) The Civil Service: Continuity and Change, Cm. 2627 (London: HMSO). Office of Public Service and Science (OPSS) (1994b) Next Steps Review 1994 (London: HMSO). Office of Technology Assessment (OTA) (1977) A Preliminary Analysis of the IRS Tax Administration System (Washington DC: OTA). Office of Technology Assessment (OTA) (1981) Computer-Based National Information Systems: Technology and Public Policy Issues (Washington DC: OTA). Office of Technology Assessment (OTA) (1986) Social Security Administration and Information Technology, Special Report OTA-CIT-311 (Washington DC: GPO). Office of Technology Assessment (OTA) (1988a) Informing the Nation: Federal Information Dissemination in an Electronic Age (Washington DC: GPO). 
Office of Technology Assessment (OTA) (1988b) Federal Government Information Technology: Electronic Record Systems and Individual Privacy, OTA-CIT-296 (Washington DC: GPO). O’Higgins, M. (1984), ‘Computerising the Social Security System: An Operational Strategy in Lieu of a Policy Strategy’, in D.Pitt and B.Smith (eds) The Computer Revolution in Public Administration (Brighton: Edward Elgar). Orwell, G. (1954) Nineteen Eighty-Four (Middlesex: Penguin). Osborne, D. and Gaebler, T. (1992) Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector (Reading, Massachusetts: Addison-Wesley). Oughton, J. (1994) ‘Market Testing: The Future of the Civil Service’, Public Policy and Administration, vol. 9, no. 2. Parliamentary Information Technology Committee (PITCOM) (1988) Information Technology and Public Policy, vol. 7, no. 1. Parliamentary Information Technology Committee (PITCOM) (1992) Information Technology and Public Policy, vol. 10, no. 3. Parliamentary Office of Science and Technology (POST) (1998) Electronic Government: Information Technologies and the Citizen (London: POST).

Peacock, A. (1979) The Economic Analysis of Government (Oxford: Martin Robertson & Co.). Perrow, C. (1984) Normal Accidents: Living with High-Risk Technologies (New York: Basic Books). Perry, J. and Kraemer, K. (1990) ‘Research Methodology in Public Administration: Issues and Patterns’, in N.Lynn and A.Wildavsky (eds) Public Administration: The State of the Discipline (New Jersey: Chatham House). Pinchot, G. and Pinchot, E. (1994) The End of Bureaucracy and the Rise of the Intelligent Organization (San Francisco: Berrett-Koehler Publishers). Pitt, D. and Smith, B. (eds) (1984) The Computer Revolution in Public Administration (Brighton: Edward Elgar). Porat, M. (1977) The Information Economy: Definition and Measurement (Washington DC: US Department of Commerce, Office of Telecommunications). Pratchett, L. (1993) ‘Themes and Issues in Public Sector ICTs: A Review of the ESRC Study Group’, paper to the ESRC/PICT study group on Information, Communications and New Technologies in Public Administration, National Institute of Social Work, London, December. Pratchett, L. (1994) ‘Open Systems and Closed Networks: Policy Networks and the Emergence of Open Systems in Local Government’, Public Administration, vol. 72, no. 1. President’s Reorganization Project (PRP) (1978a) Federal Data Processing Reorganization Study: Central Agencies Team Report (Washington DC: Department of Commerce). President’s Reorganization Project (PRP) (1978b) Federal Data Processing Reorganization Study: General Government Team Report (Washington DC: Department of Commerce). President’s Reorganization Project (PRP) (1979) Federal Data Processing Reorganization Study: Summary Report (Washington DC: Department of Commerce). Public Accounts Committee (PAC) (1983) Management and Control of the Development of Administrative Computing in Government Departments, 27th Report, July (London: HMSO).
Public Accounts Committee (PAC) (1984) Report by the Comptroller and Auditor General on Administrative Computing in Government Departments: Review of the Central Computer and Telecommunications Agency, Minutes of Evidence, 28 March (London: HMSO). Public Accounts Committee (PAC) (1989) Department of Social Security: Operational Strategy, 24th Report, June, Session 1988–89 (London: HMSO). Public Accounts Committee (PAC) (1991) Foreign and Commonwealth Office: Qualification of Accounts, HC 275–i, Session 1990–91 (London: HMSO). Public Accounts Committee (PAC) (1994) The Proper Conduct of Public Business, 17 January, HC 154 (London: HMSO). Quinn, J. (1992) Intelligent Enterprise (New York: Macmillan). Ranson, S. and Stewart, J. (1994) Management for the Public Domain: Enabling the Learning Society (New York: St Martin’s Press). Rhodes, R. (1995) ‘Towards a Postmodern Public Administration: Epoch, Epistemology or Narrative’, paper to the 25th Anniversary Conference of the Public Administration Committee, Civil Service College, Sunningdale, 4–6 September. Roach, S. (1991) ‘Services under Siege: The Restructuring Imperative’, Harvard Business Review, September-October. Rose, R. and Davies, P. (1994) Inheritance in Public Policy: Change without Choice in Britain (New Haven: Yale University Press).

Rose, R. and Karran, T. (1987) Taxation by Political Inertia (London: Allen & Unwin). Roth, G. and Schluchter, W. (1979) Max Weber’s Vision of History: Ethics and Methods (Berkeley: University of California Press). Sabbagh, D. (1994) ‘Self-health and the Virtual Health Service, Liberation Technology’, Demos Quarterly, no. 4. Schroeder, R. (1995) ‘Virtual Reality in the Real World: History, Applications and Projections’ in N.Heap, R.Thomas, G.Einon, R.Mason and H.Mackay (eds) Information Technology and Society: A Reader (London: Sage). Schwartz, J. (1989) ‘The Census Means Business’, American Demographics, July. Sculley, J. (1991) ‘The Relationship between Business and Higher Education’ in C.Dunlop and R.Kling (eds) Computerization and Controversy: Value Conflicts and Social Choices (London: Academic Press). Secretary of State for Social Services (1985) Reform of Social Security: Programme for Change, vols 1 and 2, Cm 9518 (London: HMSO). Senker, P. (1995) ‘Technological Change and the Future of Work’ in N.Heap, R.Thomas, G.Einon, R.Mason and H.Mackay (eds) Information Technology and Society: A Reader (London: Sage). Simon, H. (1955) Models of Man (New York: Wiley). Smith, A. (1776) An Inquiry into the Nature and Causes of the Wealth of Nations (London: Routledge, 1893). Snellen, I. (1991) ‘Central Government Informatization Policies’, paper submitted to the Annual Conference of the European Group of Public Administration, The Hague, 29–31 August. Snellen, I. (1993) ‘Automation of Policy Implementation’, paper submitted to the Permanent Study Group on Informatization in Public Administration, Annual Conference of the European Group of Public Administration, Strasbourg, France, 7–10 September. Snellen, I. (1994) ‘ICT: A Revolutionizing Force in Public Administration?’, Informatization and the Public Sector, vol. 3, nos 3/4. Social Security Administration (SSA) (1982) Systems Modernization Program (Baltimore: Office of Systems, SSA).
Social Security Administration (SSA) (1984) Systems Modernization Program: 1985 Executive Summary (Washington DC: Health and Human Services, Office of Systems, SSA). Social Security Administration (SSA) (1991a) Information Resources Management: Long-range Plan FY1993–1997 (Baltimore: Office of Information Resources Management, SSA). Social Security Administration (SSA) (1991b) Information Systems Plan (Baltimore: Office of Systems Planning and Integration, SSA). Social Security Administration (SSA) (1992a) Begin to Turn SSA into a…Paperless Agency (Baltimore: Office of Information Resource Management, SSA). Social Security Administration (SSA) (1992b) Information Systems Plan (Baltimore: Office of Systems Planning and Integration, SSA). Social Security Administration (SSA) (1994) The Social Security Administration’s Information Systems Plan (Baltimore: Office of Systems Planning and Integration, SSA). Social Security Committee (SSC) (1991a) Social Security Public Expenditure: Minutes of Evidence, 19 March 1991, Session 1990–91, HC 322–i (London: HMSO).

Social Security Committee (SSC) (1991b), The Organisation and Administration of the Department of Social Security: Minutes of Evidence, 25 June, Session 1990–91, HC 550–i (London: HMSO). Social Security Committee (SSC) (1991c) The Organisation and Administration of the Department of Social Security: Minutes of Evidence, 2 July, Session 1990–91, HC 550–ii (London: HMSO). Social Security Committee (SSC) (1991d), The Organisation and Administration of the Department of Social Security: Minutes of Evidence, 5 November, Session 1991–92, HC 19–i (London: HMSO). Social Security Committee (SSC) (1991e), The Organisation and Administration of the Department of Social Security: Minutes of Evidence, 12 November, HC 19–iii (London: HMSO). Social Security Income (SSI) Study Group (1976), Report to the Commissioner of Social Security and the Secretary of Health, Education, and Welfare on the Supplemental Security Income Program, January. Stevens, R. (1993) The CCTA: Its Past, Present and Future Role, MSc dissertation, London School of Economics and Political Science, September. Strassman, P.A. (1990) The Business Value of Computers: An Executive’s Guide (Connecticut: Information Economics Press). Sturgess, G. (1994) ‘Virtual Government’, paper to the Public Service Commission Lunchtime Seminar, Lakeside Hotel, Canberra, ACT, 24 July. Swann, P. (1993) ‘Introduction’ in P.Swann (ed.) New Technologies and the Firm: Innovation and Competition (London: Routledge). Taylor, J. (1992) ‘Information Networking in Public Administration’, International Review of Administrative Sciences, vol. 58, no. 3. Taylor, J. (1997) ‘Informatization as X-Ray: What is Public Administration for the Information Age?’, paper to the EGPA Conference, Leuven, September. Taylor, J. and Williams, H. (1990) ‘Themes and Issues in an Information Polity’, Journal of Information Technology, vol. 5, no. 3. Taylor, J. and Williams, H.
(1991) ‘Public Administration and the Information Polity’, Public Administration, vol. 69, no. 2. Tien, J. and McClure, J. (1986) ‘Enhancing the Effectiveness of Computers in Public Organizations through Appropriate Use of Technology’, Public Administration Review, Special Issue. Toffler, A. (1970) Future Shock (London: Pan Books). Toffler, A. (1980) The Third Wave (New York: Bantam Books). Toffler, A. (1990) Power Shift (New York: Bantam Books). Trade and Industry Committee (1988) Information Technology: First Report, vol. 1, 23 November, HC 25–i (London: HMSO). Trade and Industry Committee (1989) Information Technology: Minutes of Evidence, 12 July, HC 338–ii (London: HMSO). Treasury and Civil Service Select Committee (1994) The Role of the Civil Service: Fifth Report, vol. 1 (London: HMSO). Treasury Board of Canada (1994) Renewing Government Services Using Information Technology (Canada: Secretariat of the Treasury Board of Canada). Turpin, C. (1972) Government Contracts (Middlesex: Penguin). Von Hippel, E. (1988) The Sources of Innovation (New York: Oxford University Press). Warner, M. and Stone, M. (1970) The Databank Society (London: George Allen & Unwin).

Weber, R. (1988) ‘Computer Technology and Jobs: An Impact Assessment and Model’, Communications of the ACM, vol. 31, January. Weidenbaum, M. (1969) The Modern Public Sector: New Ways of Doing the Government’s Business (New York: Basic Books). White, J. and Adams, G. (1994) Research in Public Administration: Reflections on Theory and Practice (Thousand Oaks, CA: Sage). Willcocks, L. (1987) ‘Information Technology in Public Sector Settings: Towards Effective Systems’, International Journal of Public Sector Management, vol. 2, no. 3. Willcocks, L. and Fitzgerald, G. (1994) A Business Guide to Outsourcing I.T. (London: Business Intelligence). Willcocks, L. and Harrow, J. (1991) Rediscovering Public Services Management (Maidenhead: McGraw-Hill). Willcocks, L. and Mason, D. (1986) ‘The Case of the DHSS Operational Strategy 1975–1986’, teaching notes. Williams, R. (1989) The Politics of Modernism: Against the New Conformists (London: Verso). Wilson, J. (1989) Bureaucracy: What Government Agencies Do and Why They Do It (New York: Basic Books). Wright, S. (1998) An Appraisal of Technologies of Political Control (Luxembourg: Directorate General for Research, European Parliament). Zuboff, S. (1988) In the Age of the Smart Machine: The Future of Work and Power (New York: Basic Books). Zuurmond, A. (1994) ‘From Bureaucracy to Infocracy: A Tale of Two Cities’, Informatization and the Public Sector, vol. 3, nos 3/4.

Index

Agriculture, Department of (US) 38, 39, 136 Agriculture, Fisheries and Food, Ministry of (UK) 34–5, 40, 147, 150 Alvey Programme (UK) 116 American Electronics Association 158 Andersen Consulting 61, 101, 130, 132–3, 152, 156 artificial intelligence 32 Assirati, Bob 28 AT&T 100, 133–4, 140, 153, 155 Bank of England 14 Barclays 14, 133 BBC 20 Bellamy, Christine 58, 171 Benefits Agency (UK) 16, 18, 41, 52–70, 72, 85–6, 107, 158, 164, 177–9; Chief Executive 56; DITA 57, 59–60, 63; family credit 64; income support 52, 58, 63–7; Information and Technology Services Agency (ITSA) 56–63, 65, 68, 118, 147, 150, 161; Jobseekers Allowance 63; Livingstone Computer Centre 61; Newcastle Computer Centre 52–3; pensions 52, 65–7; POCL 57–8, 66, 156, 158; social fund 57; whole-person concept 54–8, 60, 66, 70, 86, 179 Blair, Tony 4–5, 48 Bradbury, Malcolm 45, 185 British Telecom 117 Brooks, Jack 42, 79 Brooks Act 42, 43 Brown, Gordon 28 BSE 35 bureaucracy 17, 18, 27, 29–30, 164–5

Cabinet Office (UK) 40, 44, 150, 152, 154, 159 Caines, Eric 54, 58, 61–4 Caldwell, Kevin 59 CAP 61 Cap Gemini Sogeti 133, 134, 145, 153, 161 Carter, Jimmy 33, 43 CCTA 7, 44, 47, 62, 151 Census Bureau (US) 5, 22, 32, 140 Central Information Technology Unit (UK) 44, 47 Central Statistical Office (UK) 8 Central Strategy Unit (UK) 5 Chief Adjudication Officer (UK) 66 Chief Information Officers (US) 43 Citizens’ Advice Bureaux 41 Civil Service Department (UK) 18, 25, 32, 34 Clark, David 5 Clegg, Stuart 176–7 Clinton, Bill 5, 13, 15, 26, 142, 172 Cohen Act see National Defense Authorization Act Collingridge, David 69 Commerce, Department of (US) 38–9, 42, 48, 136 Competition in Contracting Act (US) 78, 138–43, 154, 160 Comptroller and Auditor General (UK) 17, 55 CompuServe 134 computer-aided transcription 9 Computer Sciences Corporation (CSC) 61, 116–17, 128, 132–3, 148, 156 Congress 4, 12, 15, 72–4, 79–80, 90, 93–6, 101–2, 104–6, 108; House Government Operations Committee 42, 78–9, 97,

101; House Ways and Means Committee 90 Congressional Budget Office 48 contracting 21, 61–2, 77–9, 98–101, 108, 115–19, 125–61, 163, 173–5, 178 Contributions Agency (UK) 56–7, 145, 156 ‘control state’ 13, 24–5, 29, 167–9 Coopers and Lybrand 147 core competencies 128–9 Criminal Justice Act (UK) 12, 27, 47 Crown Prosecution Service (UK) 47 Customs and Excise (UK) 8, 12, 40, 46, 150 Customs Service (US) 9, 11 Defence, Ministry of (UK) 19–20, 39, 40, 61, 145–7, 150–1, 159, 179; Army 39, 40; Navy 39, 40, 159; RAF 39, 40 Defense, Department of (US) 4, 19, 32, 35, 37–9, 85, 136–7, 142; Airforce 37–8, 136, 140; Army 37–8, 136; Navy 37–8, 136 Dibble, Roy 7, 63 Digital Equipment Corporation (DEC) 133–4, 147, 159 DiPentima, Rene 79 DNA 11, 13 Driver and Vehicle Licensing Agency (UK) 9, 10, 13, 22 Dunleavy, Patrick 21, 179–81 e-cash 14–15, 35, 95 e-mail 4–5 Education, Department of (UK) 8, 40, 137 Education, Department of (US) 38–9, 136 electronic data interchange (EDI) 2, 8, 12, 16, 19, 39 Electronic Data Systems (EDS) 61, 101, 114–15, 117–24, 128, 130, 132–5, 148–61, 173–5, 179–81 electronic tagging 12, 28, 47 Employment, Department of (UK) 27, 40, 53, 144 Employment Services Agency (UK) 32 encryption 36 Energy, Department of (US) 38–9, 85, 136, 142 Environment, Department for (UK) 40, 146–7

Environmental Protection Agency (US) 38–9, 140, 142 ESRC/PICT conference 7 European Monetary Union 28 European Single Market 12 facilities management see contracting Falcon Systems 140 Farmers Home Administration (US) 9 Federal Acquisitions Regulation (US) 139–42 Federal Aviation Administration (US) 37, 137 Federal Bureau of Intelligence (US) 14, 46 Federal Reserve (US) 14, 15 Federal Systems Inc. 137 Federal Systems Integration Taskforce (US) 34 Foreign and Commonwealth Office (UK) 1, 17, 27, 40, 50, 146 Freeman, Paul 44 Frissen, Paul 2, 168, 176–7 FTS2000 (US) 46 Fujitsu 116 GCHQ 20 GEC-Marconi 133 General Accounting Office (US) 13, 17, 26, 42, 43, 79–81, 83–6, 91–5, 97–8, 100–6, 108, 141 General Services Administration (US) 16, 38, 42–3, 48, 78, 83, 88, 101, 104, 140, 142; Board of Contract Appeals 78, 100, 139–40, 142; Federal Information Center 48; Office of Information Resource Management (OIRM) 42 Gingrich, Newt 7, 172 globalisation 179–81 Gore, Al 16 Government Data Network 46 government.direct (Green Paper) 47–8, 50 Grace Commission 25, 33, 43 Groupe Bull 133, 146 Healey, Denis 120 Health, Department of (UK) 40, 146–7, 161 Health and Human Services, Department of (US) 38, 39, 46, 71– 88, 136; Inspector General 86; Office of the Secretary 80

Heidegger, Martin 182–5 Heseltine, Michael 8, 44 Home Office (UK) 11, 12, 34, 39–40, 46–7, 57, 150, 179 Home Secretary (UK) 11–12 Hood, Christopher 1, 3, 9, 13, 17, 22, 165, 172–3 Hoon, Geoff 160 Horam, John 7 Hoskyns 62, 133 Housing and Urban Development, Department of (US) 38, 136 Howard, Michael 12 Huxley, Aldous 167–8 IBM 10–11, 61, 78, 100–1, 130, 132–5, 148, 150 ICL 61, 62, 66, 111, 116–17, 133, 134, 145, 150, 152–3, 158 Immigration and Naturalization Service (US) 9 Immigration Service (UK) 10 incrementalism 69–70 information age 45 Information Infrastructure Taskforce (US) 48 information polity 171–2, 175 information technology: contracting 21, 61–2, 77–9, 98–101, 108, 115–19, 125–61, 163, 173–4, 178; expenditure 21, 37–41, 50; infrastructure 69–70; policy implications 11–13, 25–8, 63, 67–70, 80–2, 88, 104–5, 120–1, 162–4, 180–1; regulation 41–5, 50, 62–3, 79–80, 101–4, 108, 119–20, 163, 178, 180–1 Information Technology Management Reform Act (US) 48 ‘informatization’ 169–70, 172, 176, 184 Inland Revenue (UK) 23, 40, 46, 93, 109–24, 147–9, 150, 153–7, 159, 161, 173–5, 178–81; Computerisation of PAYE 109–24, 156; Information Technology Office 112–15, 118–20, 148–9, 153, 157, 159, 161; MIRAS 113; Permanent Secretaries 114; Schedule D 111–12; self-assessment 121; Staff Federation 115 innovation 31–6, 46–7, 68, 94–5, 150, 162, 166, 168, 181 Insolvency Service (UK) 27 Interior, Department of (US) 38–9, 136 Internal Revenue Service (US) 1, 4, 14,

35, 46, 82–3, 89–108, 109, 112, 122, 137, 154, 174, 178; Assistant Commissioner 101, 104; Chief Financial Officer 96, 101; Chief Information Officer 96; Commissioner 94, 105; Martinsburg Computer Centre 89, 90; Tax Systems Modernization 89, 91–108, 123 Internet 3–5, 7, 14, 27, 32, 35, 134; Internet Architecture Board 4; Internet Engineering Task Force 4 Irvine study group 170–1 JANET 8 Justice, Department of (US) 38–9, 136 Kassell study group 171 Kelman, Stephen 142 Kennedy, John F. 42 KPMG 133, 155 Labor, Department of (US) 38, 136 Labour Party (UK) 5 Lawson, Nigel 111 Liberty 13 Lilley, Peter 158 lobbying 158–9 Lockheed Martin 100–1, 143 Logica 61, 145 Lord Chancellor’s Department (UK) 40, 47, 146, 150 Market Testing 118, 144, 147–8, 150–2, 154, 159–60, 173 Maryland (US) 13 Matheson, Steve 111, 114–17, 119–22 Midah Corporation 99 millennium bug 27–8, 96 MI5 20, 23 MI6 20 modernism, modernists 18, 34, 51, 162, 164–72, 175, 177, 182–4 Morpho 11 Mulgan, Geoff 45 NASA (US) 38, 39, 68, 85, 142–3 National Association of Probation Officers (UK) 12 National Audit Office (UK) 20, 45, 55, 62, 64, 66–7, 113, 117, 119–20, 122–3, 144–5, 149, 156, 160 National Criminal Intelligence Service (UK) 35

national data bank 24, 29, 168 National Defense Authorization Act 142–3 National Health Service 147 National Heritage, Department of (UK) 40, 145–6 national identity card 12–13 National Institute for Standards and Technology (US) 42, 48 National Park Service (US) 35 National Performance Review 4, 15, 26, 43, 48, 49, 95, 142, 158, 172, 175 new institutionalism 183–4 New Public Management 171–5 Next Steps Programme 52, 60, 112, 147–9, 151, 159–60, 173 Nike 129 Nixon, Richard 72 Nora, S. and Minc, A. 169 Northern Ireland Office (UK) 40, 146, 150 NUCPS 66

Office of Management and Budget (US) 33, 40, 42, 48, 50, 74, 76–7, 79–80, 97, 101, 103–6, 108, 138, 142; OIRA 42–3; OIRM 43 Office of Personnel Management (US) 42, 80 Office of Public Service (UK) 7, 40, 41, 48, 150 Office of Technology Assessment (US) 5, 33, 42, 73, 78–9, 80, 186 Omnibus Reconciliation Act (US) 73, 81 Operational Strategy (UK) 52–6, 68–70, 87, 115, 164, 180 Oracle 153 Ordnance Survey Agency (UK) 50 Orwell, George 167–8 Oughton, John 154 outsourcing see contracting

Pactel 116 Paperwork Reduction Act (US) 43 Paradyne Corporation 78 Parliamentary Office of Science and Technology 45 Partridge, Sir Michael 64 Passport Agency 153 Patent and Trademark Office (US) 48 Pension Guarantee Corporation (US) 17, 27

Pensions (UK) 52 Pile, William 110 police 25, 27, 29, 35; City of London Police 11; Metropolitan Police 151; Police National Computer 10–11, 13, 22, 34; Scottish Chief Inspector of Constabulary 10 Political Action Committees 135 postmodernism, postmodernists 162, 176–8, 182–5; ante-postmodernism 182–5 Post Office (UK) 32, 57–8 President (US) 4 President’s Reorganization Project 33, 43 Prime Minister’s Office (UK) 4 Private Finance Initiative 152–3, 158 Property Services Agency (UK) 27 Public Accounts Committee 20, 26, 45, 54, 62–3, 145 Public Building and Works, Ministry of (UK) 32

Quinn, James 69–70, 87, 123, 128–9, 157

Racal 152 Reagan, Ronald 33, 79, 138 Rockwell International Corporation 143 Rose, Richard 70, 184 Royal Navy 20 Scottish Office (UK) 40, 146 Sculley, John 165 Securities and Exchange Commission 5, 78 SEMA 134, 146, 150, 158, 161 Senate Governmental Affairs Committee 95, 103 Siemens Nixdorf 147, 152–3 smart cards 23–4, 41, 57, 83 Social Security, Department of (UK) 27, 32, 39, 40, 46, 52–70, 112, 122, 146–7, 150–1, 164, 173–4, 177–80; DISSC 59, 63; Permanent Secretary 64; Secretary of State 56, 59, 61 Social Security Administration (US) 4, 14, 52, 66, 71–88, 107, 122, 137–8, 154, 164, 177–9, 186; commissioners 76; Federal Old-Age Survivors and Disability Insurance 71, 82; National Computer Center 75; Supplemental

Security Income (SSI) 15, 71–2, 79, 82; Systems Modernization Plan 71–88, 164 Social Security Committee 64–6, 112, 180 Social Security Disability Amendments 73, 81 ‘star wars’ (Strategic Defense Initiative) 18 State, Department of (US) 9, 33, 38–9, 40, 48 Stockman, David 79 Strassman, Paul 39–40, 87 Supplemental Security Income 15 Taylor, John 171, 172 technological determinism 171–2, 177, 184–5 Texas Instruments 135 Toffler, Alvin 166–7, 172 tools of government 2–30, 178; authority 9–13, 28–30, 150; detecting tools 22–30; effecting tools 3–22, 28–30; nodality 3–9, 28–30, 163; organisational capacity 17–20, 28–30; organised expertise 21–2, 28–30; Treasure 13–17, 28–30, 166 Trade and Industry, Department of (UK) 14, 36, 39, 40, 116, 146–7, 150–1

Trade and Industry Committee (UK) 34 Transport, Department of (UK) 39, 40, 146, 148, 150, 173; DVOIT 148, 152, 156, 179 Transport, Department of (US) 37, 38–9, 136 Treasury (UK) 1, 14, 40, 44, 45, 50, 55, 62, 65, 69, 111, 119–20, 123, 144, 146–7, 152, 157 Treasury Committee (UK) 8 Treasury, Department of (US) 15, 38–9, 41, 48, 85, 89, 91, 95, 101, 105, 136 Trusted Third Parties (TTPs) 36 TUPE 148, 150 Unisys 133–4 United Space Alliance 143 Veterans Affairs, Department of (US) 15, 38, 39 virtual reality 2, 32 Waldegrave, William 150 Weber, Max 164–5, 182 Welsh Office (UK) 40, 146, 150 Wessex Regional Health Authority 27 Willcocks, L. and Fitzgerald, G. 126–7, 131–3