
UNDER NEW PUBLIC MANAGEMENT Institutional Ethnographies of Changing Front-Line Work Edited by Alison I. Griffith and Dorothy E. Smith

The institutional ethnographies collected in Under New Public Management explore how new managerial governance practices influence the activities of people doing front-line work in public sectors such as health, education, social services, and international development. In these fields, public organizations have increasingly adopted private-sector management techniques, such as standardized and quantitative measures of performance focused on cost reductions and efficiency. Using research drawn from Canada, the United States, Australia, and Denmark, the contributors expose how standardized managerial requirements are created and applied, and how they are changing the ways in which front-line workers engage with their clients, students, or patients.

Alison I. Griffith is a professor in the Faculty of Education at York University. Dorothy E. Smith is a professor emerita at the Ontario Institute for Studies in Education, University of Toronto, and an adjunct professor in the Department of Sociology at the University of Victoria.


Under New Public Management Institutional Ethnographies of Changing Front-Line Work

EDITED BY ALISON I. GRIFFITH AND DOROTHY E. SMITH

UNIVERSITY OF TORONTO PRESS Toronto Buffalo London

© University of Toronto Press 2014 Toronto Buffalo London www.utppublishing.com Printed in the U.S.A. ISBN 978-1-4426-4910-1 (cloth) ISBN 978-1-4426-2656-0 (paper)

Printed on acid-free, 100% post-consumer recycled paper with vegetable-based inks.

Library and Archives Canada Cataloguing in Publication

Under new public management : institutional ethnographies of changing front-line work / edited by Alison I. Griffith and Dorothy E. Smith.
Includes bibliographical references.
ISBN 978-1-4426-4910-1 (bound). – ISBN 978-1-4426-2656-0 (pbk.)
1. Human services – Evaluation. 2. Human services – Management. 3. Ethnosociology. I. Griffith, Alison I., 1942–, editor II. Smith, Dorothy E., 1926–, editor
HV40.U53 2014   361.3   C2014-903118-1

This book has been published with the help of a grant from the Canadian Federation for the Humanities and Social Sciences, through the Awards to Scholarly Publications Program, using funds provided by the Social Sciences and Humanities Research Council of Canada. University of Toronto Press acknowledges the financial assistance to its publishing program of the Canada Council for the Arts and the Ontario Arts Council, an agency of the Government of Ontario.


University of Toronto Press acknowledges the financial support of the Government of Canada through the Canada Book Fund for its publishing activities.

Contents

List of Tables and Figures viii
Acknowledgments ix

Introduction 3
Alison I. Griffith and Dorothy E. Smith

Section One 23

1 Literacy Work and the Adult Literacy Regime 25
Richard Darville

2 Learning Global Governance: OECD’s Aid Effectiveness and “Results” Management in a Kyrgyzstani Development Project 58
Marie Campbell

Section Two 81

3 E-governance and Data-Driven Accountability: OnSIS in Ontario Schools 85
Lindsay Kerr

4 Digital Era Governance: Connecting Nursing Education and the Industrial Complex of Health Care 122
Janet Rankin and Betty Tate

5 What Counts? Managing Professionals on the Front Line of Emergency Services 148
Michael K. Corman and Karen Melon

6 “Let’s be friends”: Working within an Accountability Circuit 177
Marjorie DeVault, Murali Venkatesh, and Frank Ridzi

Section Three 199

7 A Workshop Dialogue: Outcome Measures and Front-Line Social Service Work 201
Shauna Janz: For-Profit Contractors, Accreditation, and Accountability 204
Naomi Nichols: Research and Development Work at an Ontario Youth Shelter 212
Frank Ridzi: The Neighbourhood Computer Lab: Funding and Accountability 223
Liza McCoy: “If our statistics are bad we don’t get paid”: Outcome Measures in the Settlement Sector 234

Section Four 251

8 A Workshop Dialogue: Institutional Circuits and the Front-Line Work of Self-Governance 253
Lauri Grace: Accountability Circuits in Vocational Education and Training 255
Cheryl Zurawski: The Circuit of Accountability for Lifelong Learning 263
Christina Sinding: Institutional Circuits in Cancer Care 273

9 Knowledge That Counts: Points Systems and the Governance of Danish Universities 294
Susan Wright

Conclusion 339
Alison I. Griffith and Dorothy E. Smith

Contributors 351

Tables and Figures

Tables

9.1 Danish Publications Points System 304
9.2 Weighting of Indicators in the Formula for Competitive Allocation of Basic Grant 304
9.3 Universities’ Research Block Grant (Basisbevilling) 2006–2012 (in million kroner) 305

Figures

3.1 Data Cleansing Loop 94
3.2 OnSIS Data Sources 98
4.1 HSPnet™ Graphic 138
6.1 Eligibility Determination Process 180
7.1 Logic Model Trajectory 225
7.2 Logic Model “Job” Sequence 226
9.1 Institutional Circuitry: The Points System from Individual Performance to World Rankings 301

Acknowledgments

This book and the papers it includes originated in a workshop titled “Governance and the Front Line” held in the fall of 2009 with an Aid to Scholarly Conferences grant from the Social Sciences and Humanities Research Council (SSHRC) (646–2009–0060). We owe so much to those who participated in the workshop. Many are represented by chapters that appear in this volume, but there are those whose involvement in our workshop discussion played an important if now invisible role in the process of producing this book. So first, we thank all those who participated and whose work is collected here. Those whose work, for any number of good reasons, does not appear here but whose participation was invaluable include Lois Andre-Bechley, California State University at Los Angeles; Jennifer Clarke, Ryerson University; Ferzana Chaze, York University; Bonnie Slade, University of Stirling; and Mandy Frake-Mistak, York University.

Special thanks go to Stephanie Mazerolle, our graduate assistant, whose amazing organizational abilities meant that the workshop went off without a hitch and whose editorial work on our complex and evolving text was invaluable.

And, finally, we acknowledge the support and editorial skills of Stephan Dobson. Stephan was with the book, as editor, from its beginning in the 2009 workshop; his attention to detail as well as his extensive editorial experience have been foundational to the making of this work. Beyond the application of his skills to the detail of editing has been the importance of our dialogue with Stephan, whose knowledge as a social scientist in the regions being explored in this volume is itself exceptional.


Introduction

Alison I. Griffith and Dorothy E. Smith

Behind Our Backs

The institutional ethnographies collected in this book make visible changes in the front-line work of organizations delivering services to people – changes that are going on behind our backs. Our collected ethnographies explore the introduction of new forms of management, which, we have come to believe, has to be understood in relation to changes in the nation-state.

Go back a few years to a time when a national government had the capacity to manage its economy. Keynesian theory and policy derived therefrom assumed that the state could actually oversee and control its economy to a significant degree. Today globalization, however we interpret that term, means an economy in which commerce, financial organization, and corporate functions are organized and operate transnationally (Mahon & McBride 2008). A major corporation such as Nike (Dicken 2003), for example, has its head offices in Beaverton, Oregon (at the time of writing this Introduction). That is where research is done, where some specialized product lines are produced, and where its central management is located. That central management coordinates a production process, which is organized largely through contracts with other companies that are located in various parts of Southeast Asia, in China, and in other parts of the world. This is how Nike now exists as a corporate body (Dicken 2003: 235–6).

Felicity Lawrence, a writer/journalist specializing in the transnationality of foods, summarizes the “global shift” as follows: “While traditional multinationals identified with a national home, TNCs [transnational corporations] have no such loyalty. Territorial borders


are no longer important. This had been the whole thrust of World Trade Organisation [sic] treaties in the past decades. Transnationals can now take advantage of the free movement of capital and the ease of shifting production from country to country to choose the regulatory framework that suits them best” (2011: 20). Like it or not, our everyday living depends on the transnational organization of the economy in which TNCs play a major part. The accounting companies littering the global landscape have been systematizing accounting procedures across borders (Eaton & Porter 2008). The enforcement of legal contracts among transnational corporations becomes the responsibility of the relevant regional government authority (Cutler 2008). Free trade agreements undermine the traditional territorially defined boundaries of government control of commerce. Transnational forms of economic governance (Beder 2006) have been established: the World Bank, the International Monetary Fund (IMF), the World Customs Organization (WCO), the G8, the G20, the Organisation for Economic Co-operation and Development (OECD), and the various bodies of the United Nations. In this changing context of operation, national governments and their various subordinate powers such as states, provinces, and municipalities must now compete with other governments to secure capital investment or commercial advantage: “Instead of the interaction among states constituting the bottom line of world politics, that bottom line now consists of a range of multi-layered processes of conflict, competition and coalition building among a growing diversity of actors, large and small, old and new” (Cerny 2010: 26). Over time, governments move increasingly towards what might be described as a service relation to capital. Neoliberal discourse comes to govern both economic policy and how economic issues are represented in the media (Bashevkin 2002; Brunelle 2007). In the 1950s in Canada, the move towards what we might call the service state can be seen in national educational policies. Vis-à-vis the orientation towards promoting capital investment, the people of a country become human resources – in Canada what was formerly the Department of Manpower and Immigration (DMI) became the Department of Human Resources and Skills Development Canada (HRSDC) (Smith & Smith 1990). Originally “human resources” was a term used in the corporate context as a label for the special function of departments responsible for recruitment, for managing employee-corporate relations, and so on (Wardell 1992). The use of the term to define the functions of a national government department in Canada suggests a shift away


from service to citizens towards a labour force management strategy oriented to providing a service to employers. The transnational organization of competing labour forces has been facilitated by, among others, the OECD (Pal 2008), which has developed for member states such as Canada standardized measures of students’ learning achievements such as those examined in this book in Richard Darville’s and Lindsay Kerr’s chapters (Chapters One and Three, respectively); these standardized measures are oriented towards the labour market. The Program for International Student Assessment (PISA) enables comparisons of labour force qualifications transnationally (Rubenson 2008). The 1999 Bologna Accord, European-based but including many other countries (among them Canada) in addition to the 20-plus European signatory countries, represents a commitment to a transnational standardization of higher education outcomes (AccessMasters 2012).

In the context of a transnationally organized economy, citizens’ living standards come to depend on how successful governments are in developing in directions that attract and sustain investment and promote commercial opportunities for transnational capital enterprise. Taxation policies aim at increasing a country or region’s attractiveness to investment and at facilitating the expansion of commercial relations more generally. As such, what we have described here as the “service” state is also one in which “tax rates must be downwardly harmonized to ensure that the tax regimes do not present a disincentive to investment” (Shields & Evans 1998: 129). Public service costs must somehow be reduced. Alan Sears (2003) describes an emerging regime he calls the “lean state,” in which principles of lean production, developed in the capitalist restructuring of the 1970s, are transposed into the design and management of public service with the ostensible aim of reducing costs while improving efficiency.

New Public Management

The reorganization of the public sector on which most of our institutional ethnographies focus is known as the New Public Management (NPM). It involves the imposition of managerial regimes modelled on those already operative in the sphere of private enterprise. “Management” in relation to business developed as a discourse in the late nineteenth-early twentieth century (in institutional ethnography, discourse identifies texts connecting people through what they read, how they talk, what goes on in meetings and conferences, and so on and so on).


Distinctive in the emergence of systematic approaches to management was Frederick Taylor’s application of scientific research to the study of work process (see Montana and Charnov 2008: 15–16). As management’s major principles were laid out, most influentially by the French engineer Henri Fayol (see ibid.: 19–20), organization came to be conceived as a unit to which individual interests were to be subordinated. Fayol’s thinking marked a distinctive and influential shift in two ways:

1. An emphasis on managing as something to be learned (and hence implicitly calling for a management discourse); and
2. A differentiation between supervision as direct control over work processes and management as an overall governing function and authority.

Further developments in managerial discourse and practice took place largely in the United States, focusing first on behavioural approaches and later increasingly on administration and administrative practices (ibid.: 23–30). Based on these historical resources, management science has now developed as a generalized discourse (with many subspecializations) taught at all levels in colleges and universities.

Applying what has come to be called New Public Management has involved the adoption and adaptation of strategies and textual technologies that revolutionized corporate management during the 1980s and 1990s (Drucker 1964; Osborne & Gaebler 1993; Davidow & Malone 1993). NPM is a major institutional specification of neoliberalism aiming to produce in the public sector a simulacrum of private-sector organization and management (Aucoin 1995; Shields & Evans 1998; McCoy 1998, 1999; McBride 2005; Savoie 2003). Some aspects of NPM focus on reproducing in the public sector the marketized relations characteristic of business corporations (Osborne & Gaebler 1993; Hood 1995; Newman 2002; Wright 2008). More directly relevant to the institutional ethnographies collected in this book are those elements focused on how new management functions within units and, in particular, on managerial control of workers and their work. Here, from his original list of seven, is Christopher Hood’s description of “moves” introduced by the NPM that are particularly relevant in this context:

1. A move towards greater use within the public sector of management practices which are broadly drawn from the private sector …


2. A move towards greater stress on discipline and parsimony in resource use and on active search for finding alternative, less costly ways to deliver public services …
3. A move towards “hands-on-management” (i.e., more active control of public organizations by a visible top manager wielding discretionary power) as against the traditional style of “hands-off” management in the public sector, involving relatively anonymous bureaucrats at the top of public-sector organizations …
4. A move towards more explicit and measurable (or at least checkable) standards of performance for public sector organizations, in terms of the range, level, and content of services to be provided, as against trust in professional standards and expertise across the public sector. (1995: 97)

The institutional ethnographic investigations collected in this book explore how new managerial practices are imposed and operate in public sector services in which the major work of realizing objectives is done at the front line. Public sector front-line work presents some special (though institutionally various) problems in realizing NPM objectives, particularly those objectives that seek to establish standardized evaluations of performance or outcomes and enable comparison with similar services. In an early study of social work and the theory of organization, Gilbert Smith (1970) took up and developed the concept of front-line organizations as it had been originally formulated in Dorothy Smith’s (1963) study of a state mental hospital in California. Gilbert Smith described social work as functioning in organizations wherein a centralized hierarchy cannot effectively command units at the periphery. He described “the distinctive characteristics of ‘front-line organizations’” as follows:

1. The organizational initiative is located in front-line units;
2. Each unit performs its tasks independently of other units; and
3. There are obstacles to the direct supervision of the activities of such units. (1970: 37)

Gilbert Smith emphasized the importance of professional training in ensuring that those making decisions effective for clients at the front line “have internalize[d] standards of commitment and acquire[d] levels of competence which ensure that he [or she] acts in accordance with a given set of norms even in the absence of intensive supervision” (ibid.: 41).


As John Clarke and Janet Newman have argued, it is precisely these discretionary powers of professionals as well as the rigidities of bureaucratic organization that the “moves” (Hood 1995) towards “managerialism” displace. At the same time, the problems remain of how to manage organizations in which people working at the front line must align their activities with these new major objectives, whether in work they themselves are producing or in work dealing directly with those being served. New public management has had to and continues to struggle with developing forms of control that are responsive to the new practicalities. Managerial forms of control in business settings take for granted production processes standardized to fit the categories of managerial accounting systems, thus enabling decisions, planning, and command to be centralized and hierarchic (see Zurawski, Chapter Eight). However, as managerial systems developed for business corporations are translated for public-sector front-line organizations, the distinctive characteristics that Gilbert Smith lists are not done away with, but must somehow be entered into managerial practices of control. Hence, the focus of the ethnographies collected in this book is on how that which Clarke and Newman label “the managerial state” reorganizes work performed at the front line, including at the interchange between institutions and the people served.

Our Starting Place

This book developed out of concerns we, the editors, were beginning to have about what was going on in our society (as well as elsewhere) “behind our backs.” Studies such as Clarke and Newman’s book (1997) alerted us to the possibility that changes happening in Britain had their parallels in Canada: new accountability routines, cutbacks in the public sector, and the decline of professional autonomy. So, too, the institutional ethnographic research presentations we were hearing and the papers we were reading also spoke of change – a reorganization of people’s everyday/everynight lives and how their work was being transformed. Across professional and disciplinary boundaries, the similarities between the described changes were striking. For us, it was as if this research, starting in the everyday world of work, school, and home, was exploring a range of mountains we could see in the distance – each study opening up a new trail and discovering more of the mountain range. Yet there was much still to be learned, particularly about how the complex of social relations was being put together as people’s


everyday activities. Our discussions were further stimulated by reading institutional ethnographies of front-line managerial reorganization, most notably, perhaps, Janet Rankin and Marie Campbell’s (2006) study of the managerial reorganization of hospital nurses’ front-line work in British Columbia. Of course, there were other sources describing the phenomena we were becoming increasingly aware of (e.g., Ball 2012; Rizvi & Lingard 1999). There was little, however, that would allow us to explore, beyond people’s everyday activities, the complex of social relations that organizes their work. Opening up further routes of discovery into the range of mountains that our discussions and reading had brought into view became a focus for the working conference that we organized entitled “Governance and the Front Line” funded by the Social Sciences and Humanities Research Council (SSHRC) and held in the Fall of 2009 in Toronto. We invited institutional ethnographers and researchers using related approaches whose work was showing us directions into this mysterious mountain range but who had not yet fully explored them. Those faculty and graduate student researchers who were able to attend came mainly from Canada but also from the United States, Denmark, and Australia. Our research topics included nursing, education, families and schools, health work, international development, literacy work, and new methods for collecting data. We came together as a group to discuss our research and to push further to bring into view the institutional technologies of change that were starting to show up in our research. Exploring the routes and where they led became the focus of our workshop sessions. The chapters in this book were developed by participants out of their current research, their workshop presentations, and the group discussions at the conference; research, thinking, and writing continued beyond the workshop and in some cases were completed only two or three years after the workshop was concluded. We modelled our workshop on a previous one organized by Pamela Moss and Kathy Teghtsoonian in 2005,1 creating as best we could a dialogue among participants. Before the conference, we asked the participants to write a short description of their line of research as the starting point for our work together. After sharing research descriptions among participants, we established working groups that were asked to sketch out a paper during the conference. Our hope was that the participants would be able to bring their research to bear on the more general topic they had selected and produce a set of papers speaking to the new pathways we wanted to open up. Indeed, for two groups, this


collaboration worked well (McCoy, Janz, Nichols, and Ridzi; Grace, Zurawski, and Sinding). Other participants engaged with the conference working groups but elected to pursue individual work that was already underway (Campbell; Corman and Melon; Darville; DeVault, Venkatesh, and Ridzi; Kerr; Rankin and Tate; Wright). The result of this collaboration and ongoing discussion is this book – a wide-ranging set of topics all oriented to the changes being experienced as people’s work is subjected to managerial reorganizing. Most of the studies included here focus on how the work of those active at the front line of the public sector in Canada, the United States, and Australia (and elsewhere – e.g., Denmark – see Susan Wright’s Chapter Nine) is being reorganized by what is generally known as the “new public management.” Zurawski (Chapter Eight) highlights one of the NPM governing technologies operating in the private sector. All the chapters bring into view not only changes in how workers relate to the clients they are working with, but also how the new textual modes of management themselves are forms of work to be done.

As indicated above, the social scientific approach used by most of the studies included in this book is institutional ethnography (D. Smith 1987, 2005). Institutional ethnography can best be described as a method of inquiry designed to discover how our everyday lives and worlds are embedded in and organized by relations that transcend them, relations coordinating what we do with what others are doing elsewhere and elsewhen. It starts and remains always with actual individuals and what they are doing in the actual situations of their bodily being, but focuses on how what they do is coordinated beyond local settings. Along the way of its development, institutional ethnographers have discovered how to incorporate texts into ethnographies (D. Smith 2006b). This recognition of how texts enter into and organize sequences of actual people’s actions has made it possible to extend ethnographic approaches into the complex translocal and text-mediated relations that govern contemporary societies. These are relations that rule; they are objectified forms of consciousness and organization, subsuming phenomena conceptualized as large-scale organization, government, governance, discourse, and the like.

Institutional ethnographers have found a “generous” conception of work useful. We have expanded use of the term beyond its everyday meaning to direct attention to whatever people are doing that is intentional, takes time and effort, and is getting done at a particular time and in a particular place. The studies included in this book are mostly


concerned with work (in this sense) being done at the front line of the public sector where institutional workers engage directly with those they serve – clients or so-called customers – or are doing work of their own for which they are institutionally responsible. In those institutional settings where services are provided to clients, we should remember that, using the “generous” conception of work, those who are served are also working; they put in time and energy and are active in actual local settings as they engage with or are caught up in an institutional process (see, e.g., Sinding’s experiential ethnography in Chapter Eight of this book). There must be exceptions, of course, as, for example, when paramedics whose work is described in Michael Corman and Karen Melon’s study (Chapter Five) have to deal with an unconscious patient. But Timothy Diamond’s ethnography (1992), based on his experience as a nursing assistant in two nursing homes, stresses that even the severely physically hampered should be recognized as “at work” as they struggle for comfort, for help, and to be responsive to others.

In evaluating front-line performance or outcomes, NPM requires public sector organizations in which objectives are realized at the front line to develop evaluative techniques that identify front-line outcomes. Unlike business circuits of accountability, when the front-line work of the public sector involves people, it is not directly resolvable into monetary form. In the outcomes or performance of employees’ work in the private sector, business enterprises can compare costs of production with the price of the standardized products sold in a market, hence completing accounting circuits showing profit (or loss). Yet the outcomes or performance of employees’ work in public sector agencies where objectives are realized at the front line are not automatically rendered into a monetary form that makes them directly measurable. Nor are they readily standardizable from instance to instance. The institutional ethnographies collected in this book explore, in various ways, aspects of the relations between the front-line work with people – the actual work of bringing the managerial “boss” or governing texts and the standardized textual representations of front-line performance or results/outcomes together with the practicalities of working with people whose lives and experiences are various and unique. Bringing them together both at the workshop where they were presented and discussed and then collected in this book has created a dialogue among them and with its editors in which the significance of what we are calling “institutional circuits” for managing front-line work has emerged as a concept that is applicable in general to all types


of institutions and organizations in which objectives are realized at the front line and by workers as individuals.

Institutional Circuits

The notion of institutional circuits emerged in part out of past institutional ethnographic research and in part, as stressed above, in the dialogue among researchers during the 2009 workshop. Institutional circuits are recognizable and traceable sequences of institutional action in which work is done to produce texts that select from actualities to build textual representations fitting an authoritative or “boss” text (law, policy, managerial objectives, frames of discourse, etc.) in such a way that an institutional course of action can follow. Once a textual representation fitting the categories/concepts established by the authorized or boss text has been produced, the actuality (as textually represented) becomes institutionally actionable.

How George W. Smith specified the use of the concept of social relations in institutional ethnography can be directly adapted to the concept of institutional circuits as it is used here. Smith insists that the concept of social relations “provides a method of looking at how individuals organize themselves vis-à-vis one another. It is not a thing to be looked for in carrying out research, rather, it is what is used to do the research” (1995: 24). The same applies to the concept of institutional circuits; it is “not a thing”; it is a method of looking for how people coordinate what they do with one another.

As developed here, the concept of institutional circuits links back to the conception of “ideological circles” formulated in Dorothy Smith’s early critique of sociology (1974). Ideological circles identified distinctive procedures Smith had discovered in her investigations of the social organization of knowledge; they are “methods of creating accounts of the world that treat it [i.e., the world] selectively in terms of a predetermined conceptual framework” (1990: 93) (see Lindsay Kerr’s Chapter Three for a research deployment of the concept). George W. Smith’s article “Policing the Gay Community” (1988) took up the concept but made a significant innovation. His study goes beyond the ideological circle to show people’s textually coordinated work of building, from an actual situation, a textual representation that will fit an institutionally authoritative text, thereby unleashing an institutionally “mandated course of action.” He introduces a report written by undercover detectives visiting a bathhouse where gay men were enjoying themselves sexually. The report describes what the detectives saw and how they


observed; it was designed very specifically to fit the categories of the Ontario “bawdy house” law. For example, “gross indecency” is one of the criteria under which charges can be brought under that law. But what is “gross indecency”? Smith shows how the categories of the law operate as a shell to be filled with specifics designed to fit them. Sexual acts fit the shell only if they are performed publicly. So the detectives in their prowl around the bathhouse had to look for and select what could be counted as sexual acts performed in public as well as recording how they did that looking in their report. Masturbation, for example, is not as such grossly indecent; it must be done in public to fit the category of “gross indecency.” The detectives described in their report how some men were masturbating while others watched. A textual representation was thus produced, selected and built from the actualities of the bathhouse situation, that could be fitted to the clauses of Ontario’s “bawdy house” law. The detectives’ report then became the basis of charges being laid against the owner, the manager, and the “found-ins” of the bathhouse. Though Smith did not use the term, he engineered an important transition, which the concept of “institutional circuits” makes explicit. It is a circuit in which people are actively at work; the circle or circuit can be discovered in his ethnography, not as an abstraction but as a sequence in which people are at work producing a textual representation conforming to an authoritative or “boss text,” thus enabling a course of institutional action.

The institutional ethnographies collected in this volume address the institutional circuits introducing new managerial forms of organization into front-line organizations. New managerial practices call for setting clear objectives (usually with a definite time frame), setting up realistic resources, and monitoring performances or outcomes that have been prefigured in the objectives (Montana & Charnov 2008). In his historical account of the managerial reconstruction of the civil service in the United Kingdom, which started in the 1960s, Brendan McSweeney emphasizes the weight of professional accountants and accountancy. The technical specifics of accountancy were not always realized or realizable, but “optimizing input-output relationships” became an imperative (1994: 260). Central to the practice of the new forms of management has been the introduction of standardized representations of performance or outcomes “stated in concrete and measurable performance terms” (Montana & Charnov 2008: 277; emphasis in the original) to be applied to all relevant organizational units.


The institutional ethnographies in this book bring into view analogous imperatives as they are worked out as textual realities or representations methodically produced to articulate to managerial practice and principle. Most of the studies (Sinding’s section in Chapter Eight is an exception) explicate institutional circuits that follow the logic of accountability described in McSweeney’s account of “input-output relationships.” In these studies, we will learn something of the methods and technologies enabling textual representation of performance or outcome that have been devised to respond to the specifics of the varied front-line work situations under ethnographic scrutiny. For example, the legislation establishing the Chronic Care strand of the U.S. Medicaid program operates as the “boss text” governing how individuals become eligible for Medicaid (see DeVault, Venkatesh, and Ridzi, Chapter Six). Evaluation of an individual’s eligibility is presented through a monetary medium and relies on an assessment of his or her financial status using accounting technology. Here is a circuit: categories and concepts in the boss text organize selective attention to actualities; the representation of front-line work produced becomes thereby readable and interpretable within the frame established in the boss text. People’s work, whether they are staff or clients, somehow translates the everyday actualities of their doings (as either staff or clients) into texts that become stand-ins for whatever has actually been happening. Such representations have been designed to fit the authorized frames embedded in boss texts. The latter may be identifiable as policies, plans, discourses, laws, and so on; examples in this book are to be found in Richard Darville’s explication of the redesigning of adult literacy discourse (Chapter One) and Marie Campbell’s account of the implementation in Kyrgyzstan of the 2005 Paris Declaration on Aid Effectiveness and its recommended Management by Results (MBR) strategy (Chapter Two). Through the creation of such textual realities, actualities can be subjected to courses of action mandated (G.W. Smith 1988) by the boss or governing text. NPM establishes managerial texts that rely on textual technologies to generate the required standardized representations of front-line work. These are the modes of inscribing front-line actualities into managerially actionable forms that are clarified by the studies that follow.

A subcategory of institutional circuits aims to produce front-line accountability – we are calling them “accountability circuits.” Hood suggests that “accountability” is generally required when governance functions, operations, or actions are broken down into “separately managed ‘corporatized’ units for different public sector ‘products’” (1995: 97).


Some of the studies in this volume focus on implications of the commensurability achieved when a government has adopted the policy of disaggregating specific public service areas, allocating or enabling the allocation of different functions to different specialized for-profit or not-for-profit organizations. Shauna Janz’s study (Chapter Seven) shows how performance-commensurable accountability circuits have reorganized the work and client relations of one such specialized for-profit organization under contract to the British Columbia government. In the same chapter, Liza McCoy’s ethnography of the provision of training for immigrant women shows how the Alberta Ministry of Human Resources and Employment (MHRE) works with a number of agencies and for-profit organizations in delivering “an integrated system of employment services that implement provincial labour force development policies and control access to services.” Richard Darville’s account (Chapter One) describes a shift in the discourse of professional adult literacy educators. He shows how Canadian governments have displaced that professional discourse with another, one that is fully coordinated with international conceptions (originating in the Organisation for Economic Co-operation and Development – OECD) of methods and objectives. This shift resets the governing frame to make commensurable the performance and outcomes of dispersed adult literacy programs across Canada – organizations that are competing for government funding.

As mentioned above, the institutional circuits explored ethnographically in this book regulate what is going on at the front line, where an institutional process is engaged directly with people or, as in Zurawski’s or Wright’s studies (Chapters Eight and Nine, respectively), where individual front-line workers themselves are directly responsible for accomplishing the required managerial performance. Those public sector organizations providing services to people confront the insertion of accountability circuits with special difficulties. People are individuals. Not only – unless they are identical twins – are they differentiated genetically, but also they/we all have lived different lives and have had different experiences; we have different knowledges, competencies, desires, and brains that are rigged differently. And the situations that bring us within the scope of institutional action also are different. Nonetheless, to become actionable within institutional mandates, our actions must be translated into textual representations standardized across settings to fit the frame(s) of the relevant boss text. Marjorie DeVault, Murali Venkatesh, and Frank Ridzi (Chapter Six) explore a particular instance of a reorganized Medicaid accountability circuit in the United


States. This circuit has resolved the idiosyncrasies of the engagement of social worker and client by translating the client’s assets into standardized and generalizable forms. Liza McCoy (Chapter Seven) tells us how the Immigrant Employment Centre (IEC) is evaluated by the Alberta Ministry of Human Resources and Employment on the basis of the success of individual client Investment Plans. The particularities of the relationships between immigrant women and the community not-for-profit centre are resolved into standardized forms, which then are used to assess the community agency. Michael Corman and Karen Melon’s account (Chapter Five) of the circuit built to make the work of paramedics accountable brings into view a disjuncture between the particularities of their actual work and how it has to be represented textually. Frank Ridzi’s account (Chapter Seven) concerns the struggles of a caring program director to achieve appropriate outcomes within constraints on how he could adapt to the actualities of a local setting. These and other chapters explicate how the institutional circuits of NPM impose an order of standardized representation on the tough recalcitrance of people-work actualities that never quite fit the frames established in the institutional boss texts.

Think of grocery shopping. What people want and look for varies. But individual diversity is resolved into an exchange of money for the goods the store has for sale. Individual preferences can readily be measured statistically to compare preferences for different products and in relation to costs. But where front-line work with people is not mediated monetarily and yet some kind of standardized, across-the-board measure is required, textual technologies have been and are translating the actualities of that work into standardized textual representations fitting managerial frames. Not all the ethnographies collected here make the relevant technology a focus, but specialized technologies are apparent in all. Textual technologies have become integral to creating institutional circuits that make front-line work accountable.

Some of the studies presented here bring into view the significance of such technologies in producing the transnational comparability of a given country’s public institutions. Lindsay Kerr (Chapter Three), for example, brings out the significance of educational technologies in producing standardized representations or measures of the educational performance of teachers, schools, school districts, provinces, and countries and how the measures circulate in the various agencies of educational governance. Richard Darville (Chapter One) introduces us to the International


Adult Literacy Survey (IALS), which through the OECD compares tests of adult literacy across more than 20 countries. Other chapters bring into view how managerial technologies operate in the contractual or contract-analogous relations between government and subordinate governmental units or non-governmental organizations (NGOs). For example, Shauna Janz (Chapter Seven) describes the detailed reporting on a client’s improvement that was required to sustain the accreditation standards of the non-governmental organization she worked for and studied.

In contrast, some of the studies show that the inserted technologies simply do not work as a representation of what is getting done at the front line. Lauri Grace’s account (Chapter Eight) of vocational training in Australia shows the instructors in a college struggling to figure out how to translate what is actually going on and what they are actually doing into technically devised categories of commensurability that make no sense in terms of their local practices. In Susan Wright’s Chapter Nine we learn of the varying responses of Danish academics to the imposition of a standardizing points procedure for evaluating faculty publishing productivity. The points system developed to measure faculty productivity conforms to the publishing patterns of the natural sciences; it does not readily fit those of academics in the humanities and social sciences; and, though their published output fits the point system more easily, even natural sciences faculty find their sense of professionalism undermined.

Such standardized textual representations of performance may require reorganizing work being done at the front line. Meeting the requirements of an accountability circuit means keeping records, filling in forms, and so on. The technologies of accountability may significantly reorganize the work of those at the front line, as can be seen in Shauna Janz’s and Naomi Nichols’s descriptions (Chapter Seven) of how new forms of accountability actually change front-line workers’ relations with clients, creating difficulties that had not been foreseen and that they could do little to modify. Frank Ridzi (Chapter Seven) describes how Jim (not his real name), a program director working with and within an elegantly designed Neighborhood Networks program in New York state offering computer training to people in housing developments subsidized by the federal U.S. Department of Housing and Urban Development, wrestles with and within the constraints of the design of the accountability model to get residents from the nearby housing development involved. Producing technologies to standardize and make comparable performances, outcomes, or the costs of


inputs and so on presents particular difficulties when what has to be represented institutionally does not straightforwardly fit into what is involved in doing the work. Thus, Janet Rankin and Betty Tate (Chapter Four) describe how imposing formalized reporting requirements on the supervision of nurses in the practicum stage of their training interferes with the supervisor’s ability to be responsive to the particularities of how an individual trainee learns her practice. Any technology that attempts to transcribe the actualities of a performance of this kind can never be entirely adequate; it is always going to be skewed; it is always going to involve specialized work on the part of those delivering the institutional service to find ways of writing the actual into the technological standardization that is responsive to the governing frame(s) of the boss text(s).

New technologies of accountability may also add to the work of front-line workers and may indeed radically reorganize how their work can be done. Such was the case in Janz’s and Nichols’s experiences described in their ethnographies (Chapter Seven). Grace, Zurawski, and Sinding in their workshop dialogue (Chapter Eight) stress how workers adopt self-governing strategies in response to the imposition of institutional circuits. Sinding’s part of the study is a special case because it is the only one in our collection that gives an account from the client’s side of the front line; she describes her own experience as a client/patient and of working with a physician to deal with the contradictions between a patient’s need-to-know and the institutional constraints on the physician.

These studies open up some of the changes that, in a quiet way, are taking place in how our societies are being put together behind our backs. We, the editors of the book, have learned a great deal about the kinds of reorganization that have been happening. What we have come to see as a key issue, one that remains unexamined generally in the institutional circuits that have emerged as the instruments of restructuring, is that of the fundamental problems involved when services provided directly to people are required to be performed in ways that are representable textually in a standardized and measurable form. However ingenious the technologies, the disjuncture between textual realities produced to fit frames established in boss texts and the actualities of what is going on in people’s lives remains as an obstinate presence. The move towards the managerial state as specified by Clarke and Newman (1997) displaces the responsiveness of professional discretion and judgment without remedying the pitiful rigidities of traditional bureaucracy.


NOTE

1 The title of the conference co-organized by Pamela Moss and Katherine Teghtsoonian was “Illness and the Contours of Contestation”; it was held in Victoria, British Columbia, in November 2005 and was supported with funding from both the Social Sciences and Humanities Research Council (SSHRC) and the Canadian Institutes of Health Research (CIHR). See Moss and Teghtsoonian (2008).

REFERENCES

AccessMasters. 2012. Bologna accord: Overview of main elements. Retrieved from www.accessmasterstour.com/masters/bologna-accord/index.html.
Aucoin, P. 1995. The New Public Management: Canada in Comparative Perspective. Montreal: Institute for Research on Public Policy.
Ball, S.J. 2012. Global Education Inc: New Policy Networks and the Neoliberal Imaginary. London: Routledge Falmer.
Bashevkin, S. 2002. Welfare Hot Buttons: Women, Work and Social Policy Reform. Toronto: University of Toronto Press.
Beder, S. 2006. Suiting Themselves: How Corporations Drive the Corporate Agenda. London: Earthscan.
Brunelle, D. 2007. From World Order to Global Disorder: States, Markets, and Dissent. Vancouver: UBC Press.
Cerny, P.G. 2010. Rethinking World Politics: A Theory of Transnational Neopluralism. Oxford: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199733699.001.0001.
Clarke, J., & J. Newman. 1997. The Managerial State: Power, Politics and Ideology in the Remaking of Social Welfare. London: Sage.
Cutler, A.C. 2008. Transnational law and privatized governance. In L.W. Pauly & W.D. Coleman (eds), Global Ordering: Institutions and Autonomy in a Changing World, 144–65. Vancouver: UBC Press.
Davidow, W.H., & M.B. Malone. 1993. The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21st Century. New York: HarperCollins.
Diamond, T. 1992. Making Gray Gold: Narratives of Nursing Home Care. Chicago: University of Chicago Press. http://dx.doi.org/10.7208/chicago/9780226144795.001.0001.
Dicken, P. 2003. Global Shift: Transforming the World Economy. 4th ed. New York: Guilford.
Drucker, P.F. 1964. Management for Results: Economic Tasks and Risk-taking Decisions. New York: William Heinemann.
Eaton, S., & T. Porter. 2008. Globalization, autonomy and global institutions: Accounting for accounting. In L.W. Pauly & W.D. Coleman (eds), Global Ordering: Institutions and Autonomy in a Changing World, 125–43. Vancouver: UBC Press.
Hood, C. 1995. The “new public management” in the 1980s: Variations on a theme. Accounting, Organizations and Society 20 (2–3): 93–109. http://dx.doi.org/10.1016/0361-3682(93)E0001-W.
Lawrence, F. 2011. A mere state can’t restrain a corporation like Murdoch’s. Guardian Weekly, 28 July: 30.
Mahon, R., & S. McBride, eds. 2008. The OECD and Transnational Governance. Vancouver: UBC Press.
McBride, S. 2005. Paradigm Shift: Globalization and the Canadian State. Halifax: Fernwood.
McCoy, L. 1998. Producing “What the deans know”: Cost-accounting and the restructuring of post-secondary education. Human Studies 21 (4): 395–418. http://dx.doi.org/10.1023/A:1005433531551.
McCoy, L. 1999. Accounting discourse and textual practices of ruling: A study of institutional transformation and restructuring in higher education. PhD dissertation, Sociology and Equity Studies in Education, OISE / University of Toronto.
McSweeney, B. 1994. Management by accounting. In A.G. Hopwood & P. Miller (eds), Accounting as Social and Institutional Practice, 237–69. Cambridge: Cambridge University Press.
Montana, P., & B.H. Charnov. 2008. Management. 4th ed. New York: Barron’s Educational.
Moss, P., & K. Teghtsoonian, eds. 2008. Contesting Illness: Processes and Practices. Toronto: University of Toronto Press.
Newman, J. 2002. The new public management, modernization and institutional change: Disruptions, disjunctures and dilemmas. In K. McLaughlin, S.P. Osborne & E. Ferlie (eds), New Public Management: Current Trends and Future Prospects, 77–90. London: Routledge.
Osborne, D., & T. Gaebler. 1993. Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector. New York: Penguin. http://dx.doi.org/10.2307/3381012.
Pal, L.A. 2008. Inversion without end: The OECD and global public management reform. In Mahon & McBride, 69–76.
Rankin, J.M., & M. Campbell. 2006. Managing to Nurse: Inside Canada’s Health Care Reform. Toronto: University of Toronto Press.
Rizvi, F., & B. Lingard. 1999. The OECD and global shifts in education policy. In R. Cowen & A. Kazamias (eds), International Handbook of Comparative Education, 247–60. Dordrecht: Kluwer.
Rubenson, K. 2008. OECD education policies and world hegemony. In Mahon & McBride, 226–41.
Savoie, D.J. 2003. Whatever Happened to the Music Teacher? How Government Decides and Why. Montreal and Kingston: McGill-Queen’s University Press.
Sears, A. 2003. Retooling the Mind Factory: Education in a Lean State. Aurora, ON: Garamond.
Shields, J., & B.M. Evans. 1998. Shrinking the State: Globalization and Public Administration “Reform.” Halifax: Fernwood.
Smith, D.E. 1963. Power and the front-line: Social controls in a state mental hospital. PhD dissertation, University of California, Berkeley.
Smith, D.E. 1974. The ideological practice of sociology. Catalyst 8: 39–54.
Smith, D.E. 1987. The Everyday World as Problematic: A Feminist Sociology. Milton Keynes: Open University Press.
Smith, D.E. 1990. Texts, Facts, and Femininity: Exploring the Relations of Ruling. London: Routledge. http://dx.doi.org/10.4324/9780203425022.
Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: Rowman & Littlefield.
Smith, D.E., ed. 2006a. Institutional Ethnography as Practice. Lanham, MD: Rowman & Littlefield.
Smith, D.E. 2006b. Incorporating texts into institutional practice. In D.E. Smith (ed.), Institutional Ethnography as Practice, 65–88. Lanham, MD: Rowman & Littlefield.
Smith, D.E., & G.W. Smith. 1990. The job-skills training nexus: Changing context and managerial practice. In J. Muller (ed.), Education for Work, Education as Work: Canada’s Changing Community Colleges, 171–96. Toronto: Garamond.
Smith, G. 1970. Social Work and the Sociology of Organization. London: Routledge and Kegan Paul.
Smith, G.W. 1988. Policing the gay community: An inquiry into textually-mediated social relations. International Journal of the Sociology of Law 16: 163–83.
Smith, G.W. 1995. Accessing treatments: Managing the AIDS epidemic in Ontario. In M. Campbell & A. Manicom (eds), Knowledge, Experience and Ruling Relations: Studies in the Social Organization of Knowledge, 18–34. Toronto: University of Toronto Press.
Wardell, M. 1992. Changing organizational forms: From the bottom up. In M. Hughes (ed.), Rethinking Organization: New Directions in Organization Theory and Analysis, 145–64. Newbury Park, CA: Sage.
Wright, S. 2008. Governance as a regime of discipline. In N. Dyck (ed.), Exploring Regimes of Discipline: The Dynamics of Restraint, 75–93. Oxford: Berghahn.

SECTION ONE

The two chapters in the first section of this book bring into focus the larger governmental or transnational organizations that frame the new forms of managing the public sector. The institutional circuits that maintain the managerial focus of front-line work are essential to managerial success. First, the focus of people’s work has to be re-conceptualized through newly developed managerial discourses. Second, managerial routines must be established as the reporting processes for front-line work and must include textual technologies translating local work into standardized and measurable representations.

Richard Darville (Chapter One) addresses this transformational relation as he details the shifting discourses of literacy. Beginning in the experience of literacy practitioners, Darville creates an institutional map of the discourses and technologies that are reframing literacy work. He uses the term “literacy work” to describe teaching and advocacy that is human-centred and relational. He contrasts literacy work with the new conceptual framing embodied in a literacy assessment instrument – the International Adult Literacy Survey (IALS) – that promotes and regulates literacy in relation to economic competitiveness. He describes how this measurement technology refocused literacy work towards individual deficits, creating a disjuncture between literacy as it is experienced in everyday life and the description of literacy reported by the IALS. Darville identifies the links between the IALS and the OECD, bringing into view the transnational features of the shift from literacy work to literacy reporting. He shows (as is echoed in many of the other chapters in this book) that the reporting processes are coordinated across individual sites, building the commensurability of data for transnational purposes. Yet however successful for transnational purposes the architecture of commensurability may be, the story on the ground is one of difficulty and dissonance.

Marie Campbell’s Chapter Two takes up the theme of commensurability as she moves us from literacy work in Canada to development work in Kyrgyzstan. She identifies similar transnational textual processes at work that reduce the importance of distance between the research sites, or of differences in their everyday worlds. Campbell has extensively researched the changes in governance in the field of nursing (e.g., Rankin & Campbell 2006), and in this chapter she brings her research skills and knowledge of new public management in the public sector to bear on international development aid and the accountability circuits that are changing the everyday activities of aid workers. She notes that aid effectiveness has come to be judged through the frame of “management by results” (MBR). Her chapter, examining women working in internationally funded grassroots organizations, brings into view the shift from locally organized actions to a (re)formulated conceptual frame for managing international development aid. Beginning with concerns expressed about the Paris Declaration on Aid Effectiveness (2005) by women working in international non-governmental organizations (INGOs), Campbell focuses on the textually organized relations through which international funding and coordinating agencies (e.g., the OECD; the Dutch INGO Hivos) manage the international aid process. Concentrating particularly on the work of one INGO manager, this chapter addresses the ways that international aid workers must learn to see and report their work in relation to a set of standardized indicators.

1 Literacy Work and the Adult Literacy Regime
Richard Darville

Although “literacy” seems only to name the ordinary ability of masses of people to read and write, in discourses of literacy the term is not simply referential. Since it came into conventional use some two centuries ago, its meaning and implications have inevitably been contested, pointing to reading and writing certain kinds of texts for certain religious, political, cultural, economic, and educational projects. The term characteristically works to conceptualize and organize divisions between insiders and outsiders in such domains of knowledge and action. Adult literacy education has developed in Canada and other industrialized countries over the last four decades. Early in that development, many advocates and activists, including myself, had a sense that we were inventing this new field, and we often called it “literacy work.” Without claiming any overriding authority, I suggest that the imaginary of literacy work has two core tendencies. First, literacy work is responsive to learners, in tendencies that carry on from learner-centred and humanistic notions in North American and global educational discourses. It is respectful of people often not respected and confirms their knowledge and capacity. It recognizes learners’ diverse particularities of experience and ways of learning. It aims to discover, in the experiences of literacy learners and teachers, the teaching practices and materials that support people’s learning of reading and writing and of what can be done with them.

Second, literacy work understands literacy as relational, in tendencies that carry on from discourses of North American community development and from global-south discourses of popular or liberatory pedagogy. It not only accepts that definitions of literacy vary with times and purposes, but also recognizes that “low literacy” is often literacy

restricted by the circumstances of people’s lives. It recognizes that one crucial question about literacy is whose experiences and purposes are carried in dominant forms of print, that solutions to “literacy problems” involve those who produce texts as well as those who use them, and that developing literacy is potentially challenging to social relations within which literacy has been restricted. It treats literacy development as collaborative development and sometimes as the development of solidarity. Certain terms recurrently used by literacy workers point to both the responsive and the relational (I will use the term “literacy workers” here as a placeholder for those who aspire to a responsive and relational field). “Voice” is both self-articulation or self-assertion and a kind of cultural revolution that breaks constraints on who might speak and be heard. “Participatory” names both learner involvement in curriculum and program choices and a beginning of challenging subjection and oppression. The “literacy movement” encompasses not only teaching, but also advocacy for a cause. “Community” programs are understood not merely as located in and attuned to communities, but as resources for their development. These tendencies recur in program developments, in materials prepared for classrooms, in advocacy proposals, and in conversations over coffee. They have been carried and developed in practitioner publications and in what has come to be called “research in practice.” They make for the occasional attraction of literacy workers to scholarship that conceives of literacy as social practices (e.g., Barton, Hamilton & Ivanic 2000). This imaginary of literacy work was more apt 25 years ago than it is today. In the interim, the sense of space for the invention by advocates and activists of a field grounded in front-line work has shrunk. This shrinkage is one aspect of an adult literacy regime, an ensemble of intertwined governing processes that, within the overall governance of society, aim both to promote literacy and to regulate its development. This chapter begins in the experiences of literacy workers and of advocates for the project of literacy work, taking them as a standpoint from which to explicate or map some of the regime’s intertwined governing practices. Both the territory and the map are complex. So I offer a brief advance sketch of experiences that opened up the analysis and some of the main pathways that the map describes. Many of the experiences of literacy workers and advocates relevant here are reading experiences, that is, engagements with the regime’s

various media, policy, and administrative texts. Of course, in many readings we take up these texts in order to enact the regime – for example, to send required reports on teaching and learning up the institutional hierarchy or to latch onto discursive resources that seem useful for building a plausible case for the support of literacy programs. In such ways, we read the regime’s texts and use their renderings of literacy in our own actions. But in another mode of reading – sometimes flickering in alternation with the first – we also know that those texts’ renderings of literacy are ruptured from what we otherwise know about literacy and hope for from literacy work. Our visceral reactions, put in words, might say, “How do they manage to get it so wrong?” These reading experiences open the way to analyses both of how literacy has been textually constructed and of how these constructions are resources for making the regime. So, attentive to “texts as they are entered into and are integral to the organization of sequences of action” (Smith 1999: 218) in large-scale organizational processes, this chapter unpacks such texts, analysing how their renderings of literacy are entered into courses of action in the literacy regime. Early on, advocates had the curious experience of seeking media attention and finding first that it was not attainable, and then that it rather suddenly materialized. What appeared were media narratives of illiteracy and literacy learning that, read from the standpoint of literacy work, oddly exaggerated both the debilitating effects of limited literacy and the transformative powers of literacy. The shock of such readings opened the way to the analysis here of how the media framed literacy and how the developing media attention was coordinated with a developing governmental interest in a “literacy issue” – one elaboration of a “code” that construes literacy as a force that can further other policy concerns, chiefly economic competitiveness. Policy documents, which earlier had commonly declared literacy a right, began to assert above all the economic importance of literacy, with ubiquitous attention to competitiveness. Related to the new policy attention, new statistical surveys of population literacy levels assumed discursive prominence. In reports of the International Adult Literacy Survey (IALS), which claim that large numbers of people have inadequate literacy, literacy advocates found useful arguing points. But again there were ruptures between what we read and what we otherwise knew – between the individuated skill levels that IALS reported and the diversely situated and always relational literacy practices that we knew. A dreadful, if initially ill-defined, sense that the very meaning

of literacy was being changed from afar opened the way to the analysis here, an analysis concerned with the following: how the IALS “construct” of literacy is systematically alienated from the way in which literacy is experienced in everyday lives; how this construct is designed for a “literacy for competitiveness” project rather than for any other human purposes; and how literacy for competitiveness fits into the transnational neoliberal policy development and dissemination conducted by the Organisation for Economic Co-operation and Development (OECD), in which literacy becomes a “policy object” manipulated for its effects on other objects of policy concern. As the regime has developed, mandates have focused on economistic aims, and related accountability technologies have been elaborated to make program “outcomes” commensurable for governance purposes. These technologies are not simply deduced from the literacy for competitiveness discourse, and they do not in a simple way adopt the testing technologies used for population literacy assessment. But they do have an isomorphic attention to individual skill development; they are hooked into the competitions over literacy rates between jurisdictions that population literacy testing enables; and they are held in place by a general move to public management methods focused on measurable “outcomes.” These technologies organize the transposition of actual learning and teaching into the terms that coordinate their management. They are of course woven into funding arrangements and decisions and so are ineluctable. Accountability requirements have been acutely problematic for practitioners – often experienced as burdensome, as requiring descriptions at odds with actualities on the ground, and as squeezing out that continual local invention that is central in responsive and relational literacy work. As these accountability devices have become prominent in the experiences of literacy practitioners, a reformist discourse has developed in response, a discourse concerned with ameliorating their effects. The chapter lays out a map (Smith 2005) – inviting later correction and filling in – of the discourses and institutional technologies that coordinate front-line experience and determine the prospects of the project of literacy work. It aspires to be a resource for those who, with commitments to literacy work, want to see the regime’s grip loosened. It begins, at least, to show where we are, within the relations of governance of literacy, as we look for points and strategies of intervention. I conclude with some selective reflections on prospects for the regime’s development, both those related to the reformist discourse regarding

accountability and others related to the broader definition of the mandate that legitimizes public attention to adult literacy.

Media Narratives and the Design of “Literacy”

In the early to mid-1980s, I was among literacy advocates in British Columbia who worked to attract attention to our version of “the literacy issue.” We thought that the public should know that people came to study with courage and resourcefulness, that it was a right to be literate, and that there should be more programs. To gain media and thus public attention, we arranged to meet reporters. Some were open and sympathetic, but even they said, “There’s no story. I could never get it past my editor.” Only a few years later, however, literacy was a story. By 1987, reporters came looking for people to interview, sometimes with wearisome frequency. A door that we had been pushing on flew open. Some in the field were buoyant with optimism, thinking our time had come. Others were offended or dismayed at the character of the coverage, at news items that rendered literacy in ways that were not ours. The literacy issue was described in an exaggerated rhetoric of “crisis” or “tragedy.” Some stories focused on job errors made by illiterates and the costs of these errors; these stories clearly implied a business agenda, and they also seemed to carry blame. But more generally,1 stories distorted both limited literacy and learning as we on the ground knew them. Many stories exaggerated the distress of “illiteracy.” Maudlin portrayals selected out from people’s whole lives, moments of misery, isolation, and shame; they discarded what we would have emphasized – the courage about learning and other challenges and the wisdom that was often more important than literacy. Markedly different pictures were drawn of lives before and after literacy – a kind of romance of transformation to social inclusion and economic success; stories did not tell the small, sometimes reversible changes seen by literacy workers. In one magazine account of “a life transformed,” Constance from Alberta is quoted describing her life before reading as a life in a shadow, full of pain: “I was full of anger and frustration. Most nights I’d cry myself to sleep” (McKay 1987: 14). But after learning, “nowadays, instead of crying, I read myself to sleep” (ibid.: 16). Dorothy from New Brunswick is quoted similarly: “Not reading and writing means … being humiliated and ashamed” (ibid.: 13). She obtained a tutor and soon started a successful business. Illiteracy, the account says,

“translates into a society that is unable to fulfill its potential economically or socially … [and is] diminished in human spirit. The pity is that such need not be the case since illiteracy is a solvable problem. That’s something Dorothy knows first-hand” (ibid.: 16). In many such human-interest media stories, the literacy “issue” is a frame or shell that is filled in with accounts of individuals who suffer and are transformed. Illiterate humiliation and incompetence give way to literate competence and happiness; reading replaces crying. Of course, people do change their lives in ways that involve reading and writing – although dramatic, romantic transformations are far from ordinary. But what is important for the regime is the shaping, in many accounts, of literacy as the cure for other ills and of illiteracy as a “solvable problem” of individuals and society. The “issue” – the incipient governing discourse into which lives and programs could be transposed – opened the door to selectively framed media narratives at a time when advocates focused on learners’ and teachers’ experiences could not. The appearance of “a story” meant not that reporters began to see how literacy and learning figured in people’s lives, but that there was policy interest, and that people’s ability to read and write was being reshaped as an object of governance. The media narratives that inflate both the misery and the boon are framed by a more encompassing discourse, indeed an “ideological code” that generates procedures for selecting and interpreting forms of language in texts and talk (Smith 1999: 159). This code has two essential linked features. It individuates literacy, making it a matter only of individuals and their skills and skill gains and leaving behind all the ways that individuals’ learning fits into existing opportunities in their lives (or does not). This literacy-as-skills, abstracted from lives, is then inserted conceptually back into them as a force that makes changes. It is portrayed as bumping into lives (like a cue ball careening across a billiard table) to make people happy and prosperous. The code of literacy as a force in lives and society is profoundly steeped in western history and in many discourses – of religion, morality, economic theory, and, of course, education. Its design has been pointed to as “the literacy myth” (Graff 1979), the “autonomous” conception of literacy (Street 1984), or the “literacy thesis” (Collins & Blot 2003), of literacy as independent of contexts and yet exerting beneficent influences on them. The code is not merely an ideological structure. It is a resource for the organizing of governing social relations. People can, for example, read themselves into implied subject positions in the media narratives

and volunteer as tutors or make charitable donations to enable transformations into literate success (Darville 1998). At a more encompassing level of organization, the code’s media instantiations now seem a kind of beachhead of the regime, anticipating forms of economically oriented policy discourse that later appeared in elaborated forms. The individuated illiteracy that makes people miserable in media narratives also makes them a drag on competitiveness in economic discourse. The literacy that makes them happily successful in media narratives makes them productive workers in management discourse. In the mid-1980s, the literacy issue had begun to be defined in extragovernmental business-oriented think tanks and was taken up by policymakers. Several business organization reports, initially European, described literacy as an aspect of economic competitiveness and a determinant of productivity, enabling worker retraining, uptake of new technology, and so on. A Canadian Speech from the Throne in 1986, for example, in prototypical vocabulary and syntax, promised that the federal government would “work with provinces, the private sector, and voluntary organizations to develop resources to ensure that Canadians have access to the literacy skills that are the pre-requisite for participation in our advanced economy” (Partnerships in Learning 2007: 22). At the time it still seemed that, although literacy was rhetorically tied to economic concerns, advocates for literacy work were among the “stakeholders” who might hold sway in “coalitions” formed between government, education, business, and labour. But advocates’ efforts to “put literacy on the agenda” – as we innocently said at the time – were scooped up into a process organized elsewhere by powerful institutional forces.

The Currency of Discourse: IALS and the OECD

Narratives of illiteracy in the media discourse of the 1980s were intermeshed with the counting of the number of illiterates. Literacy statistics had previously been derived from census data on school attainment (the convention: five years literate, nine years functionally literate). As part of a major newspaper series promoting the literacy issue and printed in newspapers across the country (Calamai 1988), a relatively crude direct test of literacy ability was devised, consisting of items deemed necessary for a functionally literate adult. This enabled the announcement of “shocking statistics” and their use in political and governmental discussions.

Shortly after the burst of media attention, there appeared both clear signs of government interest and the preliminary forms of more sophisticated population literacy testing, which has been central in the regime’s development. Reading reports on these tests – from the standpoint of literacy work – opens the way not only to critiques, but also to unpacking their place in the workings of the regime. I first encountered this testing when reading the initial report of an early US-based test. It proposed in part measuring what it called “prose literacy” – an ability applied to any continuous text, including news stories, poems, and brochures. This grouping seemed, from my teaching experience, very strange. People in my classes often had lousy jobs, so we read employment standards – for example, that split shifts cannot, by law, extend over more than 12 consecutive hours. People wondered what a poem is, so we read Patrick Lane’s poem about a carpenter, adding floors like a hawk adds levels to his nest, “until he’s risen above the tree he builds on / and alone lifts off into the wind / beating his wings like nails into the sky” (Geddes 2001: 306–7). Although both texts have continuous words, the notion that a singular “prose literacy ability” might define the reading of both the regulatory and the metaphorical seemed preposterous. I still recall how reading a text whose language was so alien to experienced literacy work induced a kind of vertigo. Such tests would become the International Adult Literacy Survey (IALS) and its successors. Anyone attending to adult literacy policy and discourse encounters them. The first tests were closely parallel exercises in the United States (in 1985 and 1992) and Canada (in 1989), with initial conceptual work done by experts at the Educational Testing Service in the United States and at Statistics Canada. Further development and test administration have been conducted and coordinated by national statistical agencies and OECD committees. IALS was administered in 22 countries between 1994 and 1998; the Adult Literacy and Life Skills Survey (ALL) (also known in Canada as the International Adult Literacy and Skills Survey), adapting and extending the survey testing method, was administered in 12 countries between 2002 and 2006.2 A related but elaborated new testing technology, the Program for the International Assessment of Adult Competencies (PIAAC) (Schleicher 2008), is being worked up at the time of this paper’s writing. IALS (I will use that name for the range of tests to date) offers a direct test of literacy skills, displacing the use of the school-attainment proxy. It sets out to measure literacy in commensurate ways across languages and nations with procedures of test design that, for example, eliminate

test items that appear “culturally biased.” It conceives several dimensions of literacy (prose, document, quantitative or numeracy, and additional skills such as problem-solving). It claims to reject a unidimensional, dichotomous view of literate and illiterate, instead conceiving a continuum of literacy tasks and abilities, broken into four levels (nominally five, but the two highest are ordinarily collapsed in reporting), level three being designated as the standard necessary for participation in an “information society.” Based on this standard, IALS finds substantial segments of adult populations to be less than adequately literate (e.g., 48 per cent of the 16 years and older population in Canada, in the latest reported results).

The OECD and Education

The IALS findings, along with the discourse that they elaborate, work transnationally as central organizing devices in the literacy regime. IALS is overtly aimed at shaping policy, and it has come to provide the very terms of public and policy discussion and to contribute to structuring the relations of governments to the programs they support. To understand how those effects work, it is necessary to understand something about the OECD and its general modes of operation, its concern with adult literacy as an element of economic policy, and how IALS works to elaborate that concern. The OECD was formed in 1961 to “promote policies” (as its founding Convention states) leading to member nations’ economic growth and financial stability and to expand world trade. Its membership now includes more than 30 countries. Its policy advice is neither developed through debate in public forums nor legitimated electorally. It also has no enforcement powers. It cannot, as do other transnational organizations and agreements, impose fiscal or social “conditionality” on national governments receiving loans, or invalidate national legislation as contrary to trade agreements. Nevertheless, the OECD seems ubiquitous; its data or recommendations on a multitude of topics appear in the news almost daily. It has immense “powers of influence.”3 The OECD works as a transnational think tank, articulating and promoting policy ideas in any way related to its economic aims and also as a centre for international policy deliberation and exchange. It publishes “reviews” of selected policy areas in member nations, conducted by committees composed of national and international civil servants along with civil servants from other nations and extra-governmental

experts.4 Also – and more important for adult literacy in Canada – the OECD collects and generates extensive data on many matters. Data-based national comparisons not only inform OECD policy guidance to countries, but also enable national self-surveillance and mutual country surveillance that feeds into the shaping of policy. The themes organizing OECD attention to education have shifted several times. Its initial focus in the 1960s was on the scientific and technical labour force. It soon came to promote a general expansion of education, under the aegis of macroeconomic human capital theories, which at that time expected education not only to produce economic growth but also to reduce social inequality. In the 1970s there was a more narrowly focused concern for matching skills to specific labour force needs. Work from the 1980s to the present has been animated by a heightened vigilance about how OECD nations, and firms within them, can survive and thrive in conditions of intensifying global economic competition and constant technological innovation, which together produce demands for continual and rapid changes in work processes. Human capital theories are again central, but now with a microeconomic focus on the human resources that enhance firms’ competitiveness. The OECD discourse also currently promotes “lifelong learning,” whether from schooling or elsewhere, to develop individual knowledge and skills as human capital. The generalized discourse regarding education/learning and the economy has been elaborated in measurement or accounting technologies that produce “indicators” of nations’ standing in relevant regards. A part of OECD attention to education and learning since at least the early 1990s has been a project of human capital accounting, an encompassing effort to produce measurements of all forms of human competence that are economic resources, as well as cost-benefit calculations regarding those resources (Miller 1996). In this accounting project and discourse, firms and nations should be able to calculate their rates of return on human capital investments, and even individuals should orient to their own competences in this way. The succession of adult literacy tests conducted since the early 1990s is one centrepiece of the accounting project.5

IALS, the Skills Discourse, and the Discursive Technology

IALS carries and develops both a framing discourse and an institutional technology. In IALS reports, and in policy-oriented documents drawing on them, a generalized discourse of literacy for competitiveness

(governed by the still more general discourse about human capital and the economy) defines “the literacy problem” and shows what would be gained from addressing it (e.g., Benton & Noyelle 1991; OECD and Statistics Canada 1995). In a nutshell: literacy skills are one form of – or a proxy for all of – the human capital that individuals bring to the labour market. Literacy skills are important for firms and nations within an intensely competitive globalized economy, conceived as a “knowledge economy” or an “information society.” In this information society – although this is not the IALS way of putting it – social relations are ever more intensively and extensively text mediated. Institutional spheres of activity are increasingly saturated with technical expertise and systematic planning. The texts that disseminate knowledge or gather information for planning are moved ever more deeply down through the hierarchies of workplaces, other organizations, and public communications, so that everyone must engage directly with them, or soon will have to do so. There is an intertextual circuit connecting the framing discourse of literacy for competitiveness and the institutional technology of IALS measurement of population literacy levels. The phenomenon of “literacy” that IALS creates, the IALS levels and rates, are utterly textual realities. They exist not in people’s actual lives and conduct, but only through the institutional technologies that bring them into being. Experts deploying an integrated array of psychological constructs, psychometric and statistical methods, and social survey techniques produce the phenomena of individual literacy “proficiency” and, on that basis, societal “literacy levels.” The reading proficiency that IALS constructs – I will focus on reading rather than numeracy or some other construct – is “information-processing.” This construct is related to attempts to define workplace literacy, specifically to distinguish school-like “reading to learn,” in which ideas from a text are held in memory for possible later use (perhaps for a test), from “reading to do,” in which information from a text is immediately fitted into a work task. Psychologists developed a related and more tightly focused conception of “document reading” – using tables, graphs, charts, and the like – as “information-processing” – locating, combining, and drawing inferences from bits of information in a document in order to perform a pre-specified task using those information bits. The information-processing conception of document reading was then apparently conflated into a model of reading in general, including the reading of continuous (or “prose”) texts. (Thus was generated the category of “prose literacy” that was so disorienting to encounter from the perspective of literacy work.)

In the IALS test construction, tasks or items are created at different levels of complexity. Statistical procedures yield a numerical ranking of the difficulty of items, on a 500-point scale. These procedures simultaneously rank individuals, based on their test performance, on that same 500-point scale. This scale is divided into four or five IALS levels, numerically determined but given simplified explanations: at level one, people have limited ability to manage everyday demands; at level two, people are able to deal with simple material, clearly laid out, but have difficulty with novel demands; at level three, people are able to deal with the everyday demands of life and work in our complex society. Level three is sometimes described as what is expected of high school graduates, but more commonly as the level “experts agree” is necessary for participation in an information society.6 IALS and its levels are deeply embedded in competitiveness discourse. One technicality regarding the assignment of individuals to literacy levels is crucial for the achievement of this embeddedness. To be ranked at any level, one must correctly perform 80 per cent of the test tasks designated at that level. This requirement has been contentious, and some critics (notably Sticht 2001, 2005) say that the 80 per cent criterion is overly stringent, and that a lower criterion (67 per cent or 50 per cent) would provide more realistic estimates of literacy abilities and difficulties (since people ranked at lower IALS levels can ordinarily perform at least some tasks at higher levels). I would emphasize a somewhat different point. IALS is not like, say, a driving test, for which people are expected “to practice and anticipate the test items” (Hamilton 2001: 187), and it is not a test of whether people can perform the tasks that are actually parts of or within their daily lives. Rather, in placing people “at a level,” IALS assesses whether they can perform a range of tasks of that level’s complexity, including previously unfamiliar tasks. The facility insisted on is like what musicians call “sight reading” (Darville 1999). Thus, the claim that 48 per cent of adult Canadians have inadequate literacy does not say that those people cannot perform tasks of “level three complexity” in the course of their actual daily life and work. It says that those people fail to demonstrate a “predictable ability” to perform tasks of that complexity, even unfamiliar tasks. This more stringent criterion hooks the testing technology into the framing discourse of literacy for competitiveness.
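The practical force of this criterion can be made concrete with a small worked example. The sketch below, in Python, implements only one simple reading of the rule as stated in this chapter (a person is placed at the highest level at which they, and at all lower levels, answer at least the criterion share of the designated items); it is an illustration of that stated rule, not of the actual IALS psychometric scaling, which rests on item response modelling over the 500-point scale, and the item counts and response pattern are invented.

```python
# Minimal sketch of the stated level-assignment rule, for illustration only.
# Not the actual IALS procedure; numbers below are invented.

def assign_level(correct_by_level, items_by_level, criterion=0.80):
    """Return the highest level whose criterion is met, counting up from level 1."""
    placed = 0
    for level in sorted(items_by_level):
        share = correct_by_level.get(level, 0) / items_by_level[level]
        if share >= criterion:
            placed = level
        else:
            break
    return placed

items_by_level = {1: 10, 2: 10, 3: 10, 4: 10}   # hypothetical test design
correct_by_level = {1: 10, 2: 9, 3: 7, 4: 3}    # hypothetical respondent

for criterion in (0.80, 0.67, 0.50):
    print(criterion, assign_level(correct_by_level, items_by_level, criterion))
# Prints 0.8 2, then 0.67 3, then 0.5 3: the same answers yield level two under
# the 80 per cent rule but level three under the relaxed criteria critics propose.
```

On these invented numbers, the respondent sits at level two under the 80 per cent rule and at level three under the relaxed criteria, which is why the choice of criterion matters so much for the headline rates of “inadequate” literacy.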

The sight-reading literate individual in IALS reports is the counterpart of the flexible, retrainable worker in contemporaneous managerial discourses describing the competitive firm. In this way IALS levels are designed for the literacy for competitiveness discourse, constructing the flexible reader-worker. In turn, these textually produced phenomena of literacy levels and rates comprise a “policy object” that can be reinserted into the discourse from which they arise. The policy object called “literacy” is treated as a means of producing other policy objects, for example, pushing literacy up to push down rates of unemployment and social assistance receipt and push up GDP growth (the imagery is almost hydraulic). Further, levels of literacy can be compared over time and across nations, so readers of the discourse can assess how comparatively good or bad things are. Some of IALS’s many critics have simply found implausible the claim that a third or a half of adults have less literacy ability than society requires; society hasn’t disintegrated yet. Some have seen the claim for a transnational, cross-culturally commensurable literacy as only a dubious artefact of the institutional technology (Blum et al. 2001), or even as a colonialist imposition (Street 1997). I would emphasize critiques that hark back to “literacy work” – whose responsiveness to learners and whose relational questions about literacy are both off the IALS map. The IALS standardization of literacy is determinedly not responsive. As an institutional rendering of literacy, not people’s own (Hamilton & Barton 2000), IALS’s objectification displaces people’s subjectivity. IALS offers a reasonable definition of literacy as “the ability to understand and use printed information in daily activities at home, at work and in the community – to achieve one’s goals and to develop one’s knowledge and potential” (Statistics Canada 2005: 198). But this abstract definition is contradicted by the IALS test construct and thus the policy object, in which ability, goals, and potential are not open to definition by people themselves. Overwhelmingly, even those with low-tested literacy self-assess their abilities as adequate to their everyday life and work. IALS, however, dismisses this as misperception or, lately, as market information failure (Murray et al. 2009). More than one critic has seen it as not “ethically defensible to disregard” people’s own reckonings and to “convey the impression that adults just do not know how stupid they are” (Henningsen 2007; Sticht 2005). (The various reports likewise ignore or dismiss grounded practitioner knowledge about literacy and its teaching and learning [Darville 2009]). The IALS individuation of literacy likewise obscures literacy’s relational character. It obscures all the ways that, in practice, people may simply work out ways of reading what they need or secure assistance

from others. In merely assuming the givenness of “tasks,” it obscures the ways that they are constituents of social relations (Darville 1999) that are ordinarily hierarchical. So IALS obscures the ways that “literacy problems” may be less a matter of individuals’ skills than of conflictual or simply opaque text-mediated social relations (Belfiore et al. 2004; Farrell 2006). Of course, it obscures as well the ways that “while literacy may not be experienced as a problem in the routine conduct of [people’s] lives, it is being made to matter by powerful interests that have harnessed it to their purposes” (Jackson & Slade 2008: 38).

Policy Discourse and Public Management

Critiques notwithstanding, the transnational literacy discourse and accounting technology has been a juggernaut. The OECD’s international discursive rival, UNESCO, which for decades has promoted literacy as a human right, has been marginalized (Lo Bianco 1999, 2009; Rubenson 2006a, 2009). In adult literacy, as in other areas of education, the OECD has come to be a “magistrature of influence exerted through its reports, studies and publications” (Lawn & Lingard 2002: 300). Without enforcement powers, and even without overt prescription, OECD discourse comes into force as the currency of policy action, in Canada as in other nations. The discourse operates not merely as “statements about” literacy, but as organizer of the promotion and management of literacy development. In public and political discourse, even in advocacy discourse, the language of skills for competitiveness and literacy for economic well-being is continually recapitulated. The statistics on literacy, ordinarily presented as the numbers or percentages of those who do not reach level three, are rehearsed.7 These appear alongside comparative rankings, “league tables” of population literacy levels, so that Canada, or any province or city, can boast about or bemoan its relative standing; thus, indicators instigate competition. The BC premier, for example, announces a literacy initiative to make the province “the most literate jurisdiction” in North America (Walker 2008); an Edmonton economic development officer worries about the “region’s lagging literacy rate” (Centre for Family Literacy 2006); and so on. These discourse themes and data reappear in journalistic coverage, which is often triggered by releases of new IALS reports. They reappear in publications of business associations (such as the Conference Board of Canada and ABC Life Literacy Canada) that highlight both the economic imperative and the

actions that firms can take. Perhaps most strikingly, the IALS version of literacy is carried in advocacy reports by literacy organizations, rarely with caveats, although advocates may, in another discursive mode, object to the reduction of literacy to the economic and know that most “literacy programs” deal only with people at the low end of the IALS spectrum. These repetitions carry the ideological code of autonomous literacy, the skills for competitiveness discourse, and the ideal of the flexible reader/worker, into new texts, conversations, and practices, supplanting alternative policy-oriented discourses, even though “no one seems to be imposing anything on anybody” (Smith 1999: 175). The intention of the discourse’s reinstantiation is, of course, that governments (and others) take action. But OECD policy directions are not simply “implemented,” and national (or provincial) policy-makers are “more than mere transmission belts” (Mahon & McBride 2009a: 7, 14; cf. Lawn & Lingard 2002: 301, 303). Discourse must somehow be fitted to or taken up in the milieux of federal and provincial politicians and civil servants. And, indeed, OECD-promoted policy directions are evident there. The discourse of literacy for competitiveness is taken up in mandates that define the importance and objectives of literacy promotion and legitimate public expenditures. That discourse focuses on the policy object – a stock of human resources, or a societal supply of individuated, decontextualized, and hierarchically ranked skills. Mandates for literacy are articulated as investments in producing that object. Such a mandate requires an accounting of the policy object whose production is the aim of literacy policy, some institutional technology to coordinate activities justifiable under the mandate’s terms, or simply a display of government’s management of them. The IALS technology does not immediately determine what will count as literacy development for purposes of its funding. But it shapes the framing of reporting or accountability requirements. As the competitiveness discourse and the IALS league tables are taken up in governance, there is a trajectory towards a managerial oversight of programs held administratively accountable in ways that produce “some equivalency … between the IALS measures and program outcomes” (Hamilton 2001: 192). I will turn shortly to consider accountability measures more closely – beginning with how they are experienced in the field and responded to in a reformist discourse. But another determination of Canadian governments’ uptake of the OECD policy machinery should be noted first. That uptake is consolidated by the alignment of the literacy for competitiveness discourse

and the IALS measurement technology with concurrent discursive conceptions of the proper role of the state and of good government management practice (which not incidentally have also been promoted in OECD efforts). The state conception is often called “neoliberal,” meaning, at minimum, that the state is legitimated not as arising from or strengthening liberal democracy, but “according to its ability to sustain and foster the market” (Brown 2005: 41). So good policy is conceived not as counterbalancing the effects of the market, but only as assisting individuals to take responsibility for their own success in it. This neoliberal conception is also caught in the idea of a “social investment state” (Saint-Martin 2007), in which public expenditures, seen as “investments” in the future economic well-being of individuals and the nation, are justified by the “return” that they will later provide. The dominant conception of good government management practices, sometimes termed the “new public management” (McBride 2005), emphasizes clear “performance targets,” ideally in “objective,” quantitative form, to ensure “quality of service.” As such, funded organizations are held accountable for effectively and efficiently achieving defined objectives. Thus, the competitiveness discourse and the measurement technology slide into a state conception and model of good management as if a place had been prepared for them. And in turn they elaborate what the notions of investment and outcomes might amount to.

Accountability

There is a wide variety of programming arrangements across provincial jurisdictions – involving community colleges, school boards, and “community groups,” as well as employer- and union-sponsored programs – and greatly different extents of provincial support.8 However, many recurrent experiences of literacy practitioners and of advocates for literacy work point to an encompassing reorganization of the literacy regime. One consequence of the changed composite of governance – in which government activities are conceived as investments with economic payoffs and there is a dedication to managing programs by measuring their “outcomes” – is a lessening of collaborative work between civil servants and practitioners. This has been produced in part by the replacement of civil servants with histories in adult literacy by generic managers. Thus, one practitioner remarks that it takes educated government staff to understand “how complex it is” and how “bean-counting”

is not enough, but that governments are “replacing staff knowledgeable about literacy and the operation of community organizations by bureaucrats” (Crooks et al. 2008: 22). There have also been pervasive changes in the textual coordination of relations between government departments and the field. A telling insider’s account (Hayes 2009) of the federal government’s National Literacy Secretariat (NLS) elaborates these changes. From its 1987 formation, within the Department of the Secretary of State, with a mandate for literacy as “full participation,” the NLS was oriented to “support” for the field, in, for example, developing networks, learning materials, and public awareness. It consulted literacy organizations about funding priorities. Literacy development was viewed not as an individual responsibility, but as requiring changes in “aspects of society,” including workplaces, professional groups (e.g., in law and health), and government accessibility. NLS staff saw themselves as “social development officers.” Funding was usually in the form of relatively non-restrictive grants and contributions. But the increasing influence of the new public management in a climate of anxiety about public spending “boondoggles,” and the NLS’s absorption into a larger, clearly employment-related ministry – Human Resources and Social Development Canada (HRSDC) – and its eventual transformation into an Office of Literacy and Essential Skills (OLES) resulted in a shift from “community development and partnerships” to “accountability.” Working contacts between government staff and community groups were curtailed, and grants were required to be intensively monitored with reference to predetermined criteria. Under a “results-based framework,” literacy was no longer about empowerment and being learner centred, but “about moving people to level three.”9 One aspect of the new managerial mode is program reporting to government and other funders, that is, accountability. In such reporting, outcomes or performances are made commensurable for purposes of their governance, and these commensurabilities are of course part of the basis on which literacy programs are funded. This has been a major focus of recent discussion around the field and agitation from it. Various programming forms are being brought under accountability mechanisms that are experienced similarly across the board.10 Accountability mechanisms vary across the country (Crooks et al. 2008; Page 2009; St. Clair 2009). Schemes in some jurisdictions are relatively simple, compiling learner numbers and characteristics, duration of study, contact hours, and sometimes periodic reports on program improvement. Other schemes require more elaborate reporting of gains

in ability. Those in Ontario, which have been most researched (Darville 2002; Jackson 2005; Grieve 2007), do not simply require a test of “achievement,” but penetrate into the detail of practitioner-learner relationships. There are matrices or grids of skills, often stated in school-like language, broken into levels (presumed to develop sequentially) and divided into domains (reading, writing, numeracy, problem-solving, etc.). Learner accomplishments are fed into such matrices as practitioners find work by learners, or co-produce work with learners, that “verifiably demonstrates” the objectified skills. Such demonstrations are melded with “individual learning plans,” descriptions of learners’ current abilities, objectives (employment or employment-related objectives are favoured), and steps of development between the two. There is a constant churn in managerial technologies. New devices are created with new specifications of the terms with which programs and teaching must articulate. The Canadian federal government-produced lists of “Essential Skills,” with occupation-specific variations, also work with a grid of levels and dimensions of skills. There are proposals to align various assessment formats with one another – for example, Essential Skills with the Canadian Language Benchmarks used in Canada in ESL programs for immigrants (Gibb 2008), or IALS, Program for International Student Assessment (PISA), and the Benchmarks (Alexander 2009). An Ontario “curriculum framework” has been developed (Ontario Ministry of Training, Colleges and Universities 2011) that requires the reformulation of all teaching and learning activities in a framework of “context-free competencies” (based on Essential Skills lists) that are ranked at three levels of “complexity” (using an IALS-type conception of complexity); practitioner training and curriculum materials anchored in this framework are being rolled out. Although accountability forms vary, there is a trajectory – driven both by discourses of skills and by discourses of good government management – towards institutional technologies that display programs producing the individuated skills of the policy object. This direction of development is evident in experiences of accountability at the front lines and in responses to accountability in practice-oriented discourse – experiences and responses that are remarkably similar across the English-speaking world (Jackson 2005). A common experience is that accountability not only takes time and effort, but even curtails actual work with students. One practitioner observes, “As accountability measures have constantly increased, especially during the past 10 years, there has been no additional funding,

no recognition that these things cost money ... There is only one place resources come out of and that is out of the classroom” (Crooks et al. 2008: 26). Another observer wryly remarks, “It is not desirable or sensible for programs to use a high proportion of their resources proving that they are using resources well” (St. Clair 2009: 2). “Gaps” are felt between reporting requirements and programming actualities – between, for example, reporting formats’ assumptions that learners arrive with definable objectives and depart for institutionally discoverable reasons and the actual diverse, untidy paths of learners’ lives and learning (Jackson 2005; Grieve 2007). Gaps may also appear between skills assumed to develop in sequences that are pre-specified in curricular or assessment documents and the actual benefits that differ from or go beyond these skills. Practitioners sometimes say that only stories are truly informative about what programs actually achieve, while numbers alone do not “tell us anything about success, what kind of impact we have had” (Crooks et al. 2008: 18). Practitioners’ accounts display their being pulled by documentary requirements into an “official” language, whose use can remake and distort both teaching work and learning.11 Practitioners sometimes speak of finding it possible to teach well and then translate the experience into reporting discourse, but at the cost of feeling “schizy” (Darville 2002). UK research on individual learning plans (similar to those of Ontario) finds this as well (Tusting 2009: 19) and describes practitioners’ “feelings of unease and ethical discomfort” (Hamilton 2009: 239) when they write up learners in the vocabulary of competences and plans, and feel they are “putting words in their mouths.” Both the Essential Skills framework and the Canadian Language Benchmarks are observed (Gibb 2008) to instruct people in how to view their own abilities/deficiencies only in contrast with objectified versions of competence. Practitioners who work with a sense of literacy as developed from and contributing to “community,” notably in aboriginal programs, find that skills-defined outcomes terms deflect “culturally-relevant learning” (Johnny 2003). Not surprisingly, tensions and even fear are felt around accountability. The “Connecting the Dots” project found practitioners and civil servants alike often reluctant to attend or to present at conferences (Hurley & Shohet 2008: 32). Some refused to be interviewed; a few deleted much of the “validation draft” of their interview transcripts (Crooks et al. 2008: 4, 9). Some civil servants spoke of feeling “‘caught in the middle’ between their obligations to their employer and their understanding of

the needs in the field” (Page 2009: 6), perhaps feeling themselves turned into enforcers of a project whose terms were not theirs. In a practice-oriented reformist discourse, accountability is recognized as a need not of programs, teachers, or learners, but of management. Objection is raised to what it requires: the burdens it imposes, its displacement of effort from actual teaching, its demand for an alien language that ignores palpable benefits for learners. But at the same time there is a routine emphasis that practitioners do not object to “the principle” of accountability; instead, there are calls for its broadening – providing accountability to learners, community, board, even oneself (Crooks et al. 2008: 13–14); accountability reports that are more ample and subtle and that allow the voices of learners to be heard (Lefebvre 2006; cf. Eckert & Bell 2004); and relationships between governments and programs conducted with more “mutuality.” The discourse reiterates (often citing Merrifield 1998) the apophthegm that what is counted becomes what counts, and it makes various proposals for alternative “counting,” some of which I will touch on below.

Changing Accountability, Changing Mandate?

I have so far developed a rough map of the regime’s obdurate intertextual hierarchies. To review: the ideological code of autonomous literacy as individual abilities that change lives and society organizes texts “across discursive sites” (Smith 1999: 159) throughout the regime, producing a “layered simultaneity” (Blommaert 2005: 126ff.) of discourses. The literacy for competitiveness discourse, constructing literacy as human capital and as a force that affects other policy objects, provides a mandate for promoting literacy. Through the technology of population literacy testing, rates of literacy – as individuals’ ability to perform information-society tasks on demand – are established. The discourse and statistics are reinstantiated in media and public discourse. They organize surveillance of policy and policy outcomes. In alignment with outcomes-based management, they shape terms of program accountability. All this formal standardization predictably generates frustration and criticism from literacy practitioners whose work in actual teaching is hooked into it and from literacy advocates who see that a more generously conceived literacy work is possible. Yet these relations are not standing still. There is continual churn in textual technologies, some changes originating from above, as managerial modes are elaborated, and some originating from below,

as criticisms and proposals are brought discursively forward. In the remaining pages, I consider prospects for the regime, in both accountability practices and mandate shaping, in relation to this intertextual terrain. Regarding the changes proposed within reformist discourse about accountability, better “communication” (and in that sense “mutuality”) and some experimentation with locally invented reporting formats do seem possible. This is evident in meetings of the Connecting the Dots project and in at least some of its action research projects. But the reform of more encompassing frameworks of accountability, given the intertextual relations between what might be dubbed the “boss texts” of policy discourse and the “minion texts” of accountability, is more difficult. The Connecting the Dots project summary – written by an experienced civil servant – observes that discussions with front-line government staff are insufficient to make change. To do so would require “the engagement of senior government officials and policy makers” and “‘policy-oriented’ documents on mutual accountability designed for” them and for politicians (Page 2009: 6). That summary also maintains, hopefully, that “there are a number of measures of ‘performance accountability’ which, if collected, could enhance the quality of instruction and increase the return on investment for the funder” (ibid.: 12). Indeed, there are candidates for a tweaking of measures of performance accountability. Two kinds of outcomes descriptions – “confidence” and “text-use” – are coming to be conventional, both attending to how lives are changed rather than merely to how skills are gained. The conception of “confidence,” or sometimes “non-academic outcomes,” points to the observation that, as a result of program participation, learners – often in their own judgment – gain confidence, a sense of dignity, and an increased willingness to speak up and to try doing things with texts (a theme long ago discussed by Charnley and Jones 1979 and elaborated in Canadian practitioner-research by Battell 2001, Westell 2005, and Lefebvre 2006). This observation is both a critical response to accountability conventions and a proposal for reforming them. There have been efforts to create “instruments” to record such outcomes (Battell 2001; Lefebvre 2006; for the United Kingdom, cf. Eldred et al. 2006); a conception of “self-management” was elaborated in Ontario as a domain of skills gain (Grieve 2003); and a “confidence scale” is imaginable, although different learners’ confident demeanours might look quite different.

The conception of text-use as a program outcome points to the ways that, outside or after program participation, learners make more extensive use of texts and documents. Large-scale US research, attending to learners’ text use as well as simply to skills, has elaborated the groundwork for this as an alternative reporting term (envisaging a technology for reporting text use for accountability, rather than research, purposes). One study showed that when literacy programs used “authentic materials” in classes, the extent of learners’ everyday uses of literacy outside class expanded (Purcell-Gates, Jacobson, & Degener 2004). More recently, a related large-scale longitudinal study (Reder 2008, 2009) examined both literacy proficiency (on a test similar to IALS) and literacy and numeracy practices (how often people read fiction and non-fiction, write notes or email, and use math for personal financial management). The study observed no significant short-term relationship between basic skills program participation and literacy proficiencies. However, it also found that skill development takes circuitous paths: program participation was followed by increases in literacy practices, which, in turn, led to long-term gains in measured proficiencies. This result highlights a “misalignment … likely to produce substantial distortions in educational practice” between actual program effects on “literacy and numeracy development … and the short-term proficiency gains for which programs are accountable under the dominant policy and funding regimes” (Reder 2009: 47). As to the burdens of reporting, there might be streamlining or even specific funding for required documentation (cf. Campbell 2007). But at least one response to burden-complaints will simply insist that governance needs its data: “While one sympathizes with … concerns about the burden of data collection, the fact is that it is impossible to do a true evaluation of program and policy effectiveness without accurate and relatively complete information” (Alexander 2009: 16). (Not incidentally, adding measures of text use or confidence to existing reports would almost certainly increase paperwork demands.12) What of the possibility of a regime mandate more open to “literacy work”? It is difficult even to think such a possibility, when the literacy for competitiveness project and its “indicators” are so routinely recited and when resources for assembling and organizing knowledge of literacy are massively concentrated on behalf of that project. Consider first a foreseeable narrowing. Different conceptions of policy goals have been formulated in relation to IALS data. In one conception (effectively reverting to treating il/literacy as dichotomous), the goal is

simply to make more level-three human resources. If some resources are more cheaply made than others, then efficient policy might jettison the traditional focus of literacy programming on those “with greater needs.” Confining programs to those easiest to work with is an ongoing temptation in adult literacy policy and programming (Sticht 2009: 545). Conceiving literacy policy as “social investment” allows the triaging out of poor investments (Saint-Martin 2007: 292–3), as “the reality of fiscal constraints” pushes an orientation to “cost-benefit issues” (Alexander 2013: 15). Triaging learners, a strategy between the lines in IALS discourse from the beginning, has come closer to full articulation with recent estimates of the costs required to bring people at various levels from below level three up to the standard (Murray et al. 2009). Threads of OECD and IALS-related discourse do offer alternatives to such merciless directions. Some uptakes of IALS data, not fixated on a dividing line, have attended to the “socio-economic gradient,” the steepness of the curve between literacy scores of those with the lowest and the highest socio-economic status. This work notes that nations with the highest overall literacy “have achieved relatively high levels of literacy for their most disadvantaged groups” (Sloat & Willms 2000: 230), and that “quality does not have to be at the expense of inequality” (Willms 2003: 251). A gradient-flattening goal draws explicit attention to inequalities and could bring “responsive” attention to the variety of ways that traditionally conceived “literacy learners” develop literacy within their lives. This would not upend the literacy for competitiveness project, but give it a relatively more generous direction. Such a regime development does not now look likely, at least not as originating within the OECD. Although the OECD has studied the relationship of education to “inclusiveness,” “cohesion,” and even “inequality” (e.g., Schuller 2006), these relationships ordinarily are afterthoughts, of concern as they impinge on “returns to education” and economic growth (Rizvi & Lingard 2009). Some observers of the OECD’s internal politics have seen openings to an “inclusive liberalism” that might promote equity and participation in their own right and the shaping of programs by those involved in them. But these openings appear only in areas not central to economic policy – and thus not in education (Mahon & McBride 2009b: 279; 2009c). There is evidence that government policy can ameliorate the “law of inequality” by which adult education favours participation by those already most educated and can reduce literacy inequalities, through “a preoccupation with … public adult education for disadvantaged … groups”

(Tuijnman 2003: 289). The “preoccupation” with reducing inequalities takes the form of establishing a “demanding equity standard and … an institutional framework to support this ambition” (Rubenson 2006b: 341). But the OECD discards such possibilities, since they violate the neoliberal orientation to minimizing the role of government (Rubenson 2006a, 2009). Furthermore, the next round of human resource discourse and data, in PIAAC, is predicted to more closely connect education and labour ministries in national, and presumably provincial, government (Rubenson 2009: 247), further securing the economistic domination of education policy. On the terrain described in this dismal mapping, local resistance and the continual invention of literacy work will of course continue. So will utopian imagination, proposing what cannot immediately be won – by blowing on the embers of Canadians’ “right to develop the literacy and essential skills they need in order to participate fully in our social, cultural, economic, and political life.”13 A “right to develop … literacy” could support the responsive elements of literacy work – taking “the right to develop” to imply “in locally sensible ways” – supporting learning guided by local relevance rather than forced up a grid of skills defined from afar as “essential.” It might even allow some recognition that developing literacy is fundamentally a social process, woven into how people act together and against one another. Such recognition naturally implies dealing with literacy problems by changing texts as well as readers or working at literacy development to strengthen communities (e.g., aboriginal, trade union, even classroom communities) of which people are “members” rather than employees, clients, or students. Of course, the goal of those who conceive literacy work as both responsive and relational is not merely to make the economy more competitive, but to make society more democratic and lives more secure. Such a literacy project would be part of a “utopian project of adult education” that finds again an economics of education with a “link to economic democratization” (Rubenson 2005: 24). It would necessarily be part of a larger transformation, recognizing that there are necessities of life that market institutions do not provide for (Sen 2009). It would involve justifying government and the programs (educational and other) that it supports by the quality of life they provide, not by the quality of their service to capital (McBride 2005). The imaginary of a re-enlarged project of literacy work is a task not for a “literacy movement” alone, but for literacy work aligned with many forms of knowledge and politics created for people and changing the relations of governance.

NOTES

1 For further analyses, see Darville (1998).
2 Key reports include Statistics Canada (1991); OECD & Statistics Canada (1995); Statistics Canada (1996, 2005); Statistics Canada & OECD (2005); OECD & Statistics Canada (2000). Thorn (2009) gives an overview.
3 My sketch here relies on Eide (1990); Mahon & McBride (2009a, 2009b); Martens (2007); Moos (2009); Rinne, Kallo & Hokka (2004); Rizvi & Lingard (2009); Rubenson (2009); and Schuller et al. (2006).
4 Canada’s jobs strategy was reviewed in 1994, and Canada was part of a “thematic review” of adult learning in 2002.
5 Another is PISA, the Program for International Student Assessment, a curriculum-independent test of literacy, mathematics, and science administered to secondary school students across OECD member nations.
6 Of course, this acknowledgment of “experts” is a masked self-citing.
7 It is noteworthy that IALS results are commonly misread in their reporting – saying, for example, that around a quarter of adults “have problems with the simplest literacy tasks,” or that nearly half “lack the skills needed for daily life.” In such formulations of “everyday problems” and the like, the complexities of the IALS standard, such as the predictable (“sightreading”) ability to perform a range of tasks of a certain complexity, vanish in public and policy discourse.
8 Various promotions of a pan-Canadian literacy strategy (ABC Canada 2000; Longfield 2003; Advisory Committee on Literacy and Essential Skills 2005) have so far been inconsequential. This contrasts with the United Kingdom’s substantial expansion of programming, research, teacher training, and so on (Hamilton & Hillier 2007).
9 A related casualty of the change in governance was funding for the field’s burgeoning “research in practice” movement (Horsman & Woodrow 2006) and the practice-oriented journal Literacies – both of which held to the ideal of knowledge of literacy generated by and with those working in the field.
10 A recent national project, called “Connecting the Dots,” assembles and extends the discussion of experiences of accountability, explores alternative field-originated reporting formats, and promotes discussions between practitioners and front-line civil servants (Crooks et al. 2008).
11 Ng (1988) provides a seminal account of the painful diversion of program activities oriented to community and individual needs into activities oriented to government accountability and funding requirements.
12 One different hope has also been promoted. The framing of accountability requirements is governed both by the literacy skills discourse and by the
requirement of quantifiable performance outcomes in recognized good public management. Policy-makers and scholars alike have observed the negative effects of contracting and accountability on both funded organizations and public sector managers, and have promoted more “horizontal” and collaborative governance (Phillips & Levasseur 2004; Hajer & Wagenaar 2003; Clark & Swain 2005). Public management less fixated on performance outcomes – although it seems now a remote possibility – could allow more flexible accountability and even allow a “hard” mandate to be joined to “softer” accountability arrangements.
13 This is the 2005 “vision statement” of the Ministerial Advisory Committee on Literacy and Essential Skills.

REFERENCES ABC Canada. 2000. National Summit on Literacy and Productivity. Toronto: ABC Canada Literacy Foundation. Advisory Committee on Literacy and Essential Skills. 2005. Towards a Fully Literate Canada. Ottawa: Minister of State for Human Resources Development. Retrieved http://en.copian.ca/library/research/towards/towards.pdf. Alexander, C. 2009. Literacy Matters: Helping Newcomers Unlock Their Potential. Toronto: TD Bank Financial Group. Retrieved http://www.td.com/ document/PDF/economics/special/ca0909_literacy.pdf. Alexander, C. 2013. Literacy Matters: A Call for Action. Toronto: TD Bank Financial Group. Retrieved http://canlearnsociety.ca/wp-content/ uploads/2013/ 01/Literacy-Matters.pdf. Barton, D., M. Hamilton & R. Ivanic. 2000. Situated Literacies: Reading and Writing in Context. London: Routledge. Battell, E. 2001. Naming the Magic: Non-Academic Outcomes in Basic Literacy. Victoria, BC: Ministry of Advanced Education. Belfiore, M.E., T.A. Defoe, S. Folinsbee, J. Hunter & N.S. Jackson. 2004. Reading Work: Literacies in the New Workplace. Mahwah, NJ: Lawrence Erlbaum. Benton, L., & T. Noyelle. 1991. Adult Literacy and Economic Performance in Industrialized Countries. Paris: OECD. Blommaert, J. 2005. Discourse: A Critical Introduction. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511610295. Blum, A., H. Goldstein & F. Guérin-Pace. 2001. International Adult Literacy Survey (IALS): An analysis of international comparisons of adult literacy. Assessment in Education 8 (2): 225–46. Brown, W. 2005. Neoliberalism and the end of liberal democracy. In W. Brown, Edgework: Critical Essays on Knowledge and Politics, 37–59. Princeton, NJ: Princeton University Press.

Calamai, P. 1988. Broken Words: Why Five Million Canadians Are Illiterate. Ottawa: Southam Communications. Campbell, P. 2007. Student assessment in Canada’s adult basic education programs. In P. Campbell (ed.), Measures of Success: Assessment and Accountability in Adult Basic Education, 207–50. Edmonton: Grass Roots Press. Centre for Family Literacy (Edmonton). 2006. Annual Report. Retrieved www. famlit.ca/about/2006-CFL-Annual-Report.pdf. Charnley, A.H., & H.A. Jones. 1979. The Concept of Success in Adult Literacy. Cambridge: Huntington. Clark, I.D., & H. Swain. 2005. Distinguishing the real from the surreal in management reform: Suggestions for beleaguered administrators in the government of Canada. Canadian Public Administration 48 (4): 453–76. http://dx.doi.org/10.1111/j.1754-7121.2005.tb01198.x. Collins, J., & R. Blot. 2003. Literacy and Literacies: Texts, Power and Identity. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/ CBO9780511486661. Crooks, S., P. Davies, A. Gardner, K. Grieve, T. Mollins, M. Niks, J. Tannenbaum & B. Wright. 2008. Accountability in Adult Literacy: Voices from the Field. Montreal: Centre for Literacy of Quebec. Retrieved http://en.copian.ca/ library/research/aalvff/field.pdf. Darville, R. 1998. Nowadays I read myself to sleep. Paper presented in Sessions on Textual Analysis; Media Narratives in the Adult Literacy Régime, Pacific Sociological Association, San Francisco. Darville, R. 1999. Knowledges of adult literacy: Surveying for competitiveness. International Journal of Educational Development 19 (4-5): 273–85. http://dx.doi. org/10.1016/S0738-0593(99)00029-2. Darville, R. 2002. Policy, accountability and practice in adult literacy: Sketching an institutional ethnography. In S. Mojab & W. McQueen (eds), Adult Education and the Contested Terrain of Public Policy, 60–6. Ottawa: Canadian Association for the Study of Adult Education (CASAE). Retrieved www.casae-aceea.ca/sites/casae/archives/cnf2002/2002_ Papers/darville2002w.pdf. Darville, R. 2009. Knowing Literacy for Teaching, Testing Literacy for Policy. Ottawa. Canadian Association for the Study of Adult Education (CASAE). Retrieved www.casae-aceea.ca/sites/casae/archives/cnf2009/OnlineProceedings-2009/ Papers/Darville.pdf. Eckert, E., & A. Bell. 2004. Authentic accountability in literacy education. Adult Basic Education 14 (3): 174–88. Eide, K. 1990. 30 years of educational collaboration in the OECD. International Congress, Planning and Management of Educational Development, Mexico. UNESCO Papers ED. 90/CPA.401/DP.1/11. Retrieved http://unesdoc.unesco. org/images/0008/000857/085725eo.pdf.

Eldred, J., J. Ward, K. Snowdon & Y. Dutton. 2006. Catching Confidence: Summary Report. Leicester: National Institute of Adult Continuing Education. Retrieved http://www.niace.org.uk/sites/default/files/ documents/publications /catching-confidence-summary-report-en.pdf. Farrell, L. 2006. Making Knowledge Common: Literacy and Knowledge at Work. New York: Peter Lang. Geddes, G., ed. 2001. 15 Canadian Poets X 3. Don Mills, ON: Oxford University Press. Gibb, T.L. 2008. Bridging Canadian adult second language education and essential skills policies: Approach with caution. Adult Education Quarterly 58 (4): 318–34. http://dx.doi.org/10.1177/0741713608318893. Graff, H.A. 1979. The Literacy Myth: Literacy and Social Structure in the 19th Century City. New York: Academic. Grieve, K. 2003. Supporting Learning, Supporting Change – Research Report. Toronto: Ontario Literacy Coalition. Grieve, K. 2007. Assessment for whom and for what? In P. Campbell (ed.), Measures of Success: Assessment and Accountability in Adult Basic Education, 123–58. Edmonton: Grass Roots Press. Hajer, M.A., & H. Wagenaar, eds. 2003. Deliberative Policy Analysis: Understanding Governance in the Network Society. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511490934. Hamilton, M. 2001. Privileged literacies: Policy, institutional process and the life of the IALS. Language and Education 15 (2-3): 178–96. http://dx.doi. org/10.1080/09500780108666809. Hamilton, M. 2009. “Putting words in their mouths:” The alignment of identities with system goals through the use of Individual Learning Plans. British Educational Research Journal 35 (2): 221–42. http://dx.doi. org/10.1080/01411920802042739. Hamilton, M., & D. Barton. 2000. The International Adult Literacy Survey: What does it really measure? International Review of Education 46 (5): 377–89. http://dx.doi.org/10.1023/A:1004125413660. Hamilton, M., & Y. Hillier. 2007. “Deliberative policy analysis: Adult literacy assessment and the politics of change.” Journal of Education Policy 22 (5): 573–94. http://dx.doi.org/10.1080/02680930701541758. Hayes, B. 2009. From community development and partnerships to accountability: The case of the National Literacy Secretariat. Literacies 10:19–22. Henningsen, I. 2007. Adults just don’t know how stupid they are: Dubious statistics in studies of adult literacy and numeracy. 13th International Conference on Adults Learning Mathematics, Belfast. Retrieved http:// mmf.ruc.dk/~tiw/PapersWEB/IHenningsenI-ALM13.pdf.

Horsman, J., & H. Woodrow, eds. 2006. Focused on Practice: A Framework for Adult Literacy Research in Canada. Vancouver: Literacy BC. Retrieved http:// decoda.ca/wp-content/uploads/FocusedOnPractice.pdf Hurley, D., & L. Shohet. 2008. Is accountability a barrier or an opportunity? Findings from the Connecting the Dots project. Literacies 9:32–5. Jackson, N. 2005. Adult literacy policy: Mind the gap. In N. Bascia, A. Cumming, A. Datnow, K. Leithwood & D. Livingstone (eds), International Handbook of Educational Policy, 763–78. Hingham, MA: Kluwer. Jackson, N., & B. Slade. 2008. “Hell on my face”: The production of workplace il–literacy. In M.L. DeVault (ed.), People at Work: Life, Power, and Social Inclusion in the New Economy, 25–39. New York: New York University Press. Johnny, M. 2003. The emerging accountability tug-of-war: The Native literacy experience in Ontario. Portraits of Literacy Conference, Vancouver, University of British Columbia. Lawn, M., and B. Lingard. 2002. Constructing a European policy space in educational governance: The role of transnational policy actors. European Educational Research Journal 1 (2): 290–307. http://dx.doi.org/10.2304/ eerj.2002.1.2.6. Lefebvre, S. 2006. “I’ve opened up”: Exploring learners’ perspectives on progress. Toronto: Parkdale Project Read. Retrieved http://en.copian.ca/ library/research/openup/cover.htm. Lo Bianco, J. 1999. Globalization: Frame Word for Education and Training, Human Capital and Human Development/Rights. Melbourne: Language Australia. Retrieved www.eric.ed.gov:80/PDFS/ED438413.pdf. Lo Bianco, J. 2009. UNESCO, literacy and Leslie Limage. Literacy and Numeracy Studies 17 (2): 35–41. Longfield, J. (with the Parliamentary Standing Committee on Human Resources Development and the Status of Persons with Disabilities). 2003. Raising Adult Literacy Skills: The Need for a Pan-Canadian Response. Retrieved http://en.copian.ca/library/research/raisinge/raisinge.pdf. Mahon, R., & S. McBride. 2009a. Introduction. In R. Mahon & S. McBride (eds), The OECD and Transnational Governance, 3–22. Vancouver: U BC Press. Mahon, R., & S. McBride. 2009b. Conclusion. In R. Mahon & S. McBride (eds), The OECD and Transnational Governance, 276–81. Vancouver: UBC Press. Mahon, R., & S. McBride. 2009c. Standardizing and disseminating knowledge: The role of the OECD in global governance. European Political Science Review 1 (1): 83–101. http://dx.doi.org/10.1017/S1755773909000058. Martens, K. 2007. How to become an influential actor: The “comparative turn” in OECD education policy. In K. Martens, A. Rusconi & K. Leuze (eds), New Arenas of Educational Governance: The Impact of International

Organizations and Markets on Educational Policy Making, 40–56. New York: Palgrave Macmillan. McBride, S. 2005. Paradigm Shift: Globalization and the Canadian State. Halifax: Fernwood. McKay, S. 1987. A world without words. Imperial Oil Review 71:12–6. Merrifield, J. 1998. Contested Ground: Performance Accountability in Adult Basic Education. Cambridge, MA: National Center for the Study of Adult Learning and Literacy. Retrieved www.ncsall.net/fileadmin/resources/research/ report1.pdf. Miller, R. 1996. Measuring What People Know: Human Capital Accounting for the Knowledge Economy. Paris: OECD. Moos, L. 2009. A general context for new social technologies. Nordisk Pedagogik/Nordic. Educational Research 29 (1): 79–92. Murray, T.S., M. McCracken, D. Willms, S. Jones, R. Shillington & J. Stucker. 2009. Addressing Canada’s Literacy Challenge: A Cost/Benefit Analysis. Ottawa: Data Angel Policy Research. Ng, R. 1988. The Politics of Community Services: Immigrant Women, Class and State. Toronto: Garamond. OECD & Statistics Canada. 1995. Literacy, Economy and Society: Results from the First International Adult Literacy Survey. Paris, Ottawa: OECD and Statistics Canada. OECD & Statistics Canada. 2000. Literacy in the Information Age: Final Results of the International Adult Literacy Survey. Paris, Ottawa: OECD and Statistics Canada. Ontario Ministry of Training, Colleges and Universities. (2011). Ontario Adult Literacy Curriculum Framework. Toronto. Retrieved www.tcu. gov.on.ca/eng/eopg/publications/OALCF_Curriculum_Framework_ Mar_11.pdf. Page, J.E. 2009. Linkage Report. Connecting the Dots: Accountability and Adult Literacy. Montreal: Centre for Literacy of Quebec. Retrieved http://en.copian. ca/library/research/connectdots/linkage_report/linkage_report.pdf . Partnerships in Learning. 2007. Fostering Partnership Development: An Historical Look at the National Literacy Secretariat Business and Labour Partnership Program. Retrieved http://en.copian.ca/library/research/fpd/historical/ historical.pdf. Phillips, S., & K. Levasseur. 2004. The snakes and ladders of accountability: Contradictions between contracting and collaboration for Canada’s voluntary sector. Canadian Public Administration 47 (4): 451–74. http:// dx.doi.org/10.1111/j.1754-7121.2004.tb01188.x. Purcell-Gates, V., E. Jacobson & S. Degener. 2004. Print Literacy: Uniting Cognitive and Social Practice Theories. Cambridge, MA: Harvard University Press.

Reder, S. 2008. The development of literacy and numeracy in adult life. In S. Reder & J. Bynner (eds), Tracking Adult Literacy and Numeracy Skills: Findings from Longitudinal Research, 59–84. London: Routledge. Reder, S. 2009. Scaling up and moving in: Connecting social practices views to policies and programs in adult education. Literacy and Numeracy Studies 16 (2)–17 (1): 35–50. Rinne, R., J. Kallo & S. Hokka. 2004. Too eager to comply? OECD education policies and the Finnish response. European Educational Research Journal 3 (2): 454–85. http://dx.doi.org/10.2304/eerj.2004.3.2.3. Rizvi, F., and B. Lingard. 2009. The OECD and global shifts in education policy. In R. Cowen & A.M. Kazamias (eds), International Handbook of Comparative Education, 437–53. Dordrecht: Springer. Rubenson, K. 2005. Social class and adult education policy. New Directions for Adult and Continuing Education 106: 15–25. http://dx.doi.org/10.1002/ace.175. Rubenson, K. 2006a. Constructing the lifelong learning paradigm: Competing visions from the OECD and UNESCO. In S. Ehlers (ed.), Milestones Towards Lifelong Learning Systems, 151–70. Århus: Danish University Press. Rubenson, K. 2006b. The Nordic model of lifelong learning. Compare: A Journal of Comparative Education 36 (3): 327–41. http://dx.doi.org/10.1080/ 03057920600872472. Rubenson, K. 2009. OECD education policies and world hegemony. In R. Mahon & S. McBride (eds), The OECD and Transnational Governance, 242–59. Vancouver: UBC Press. Saint-Martin, D. 2007. From the welfare state to the social investment state: A new paradigm for Canadian social policy? In M. Orsini & M. Smith (eds), Critical Policy Studies, 279–98. Vancouver: UBC Press. Schleicher, A. 2008. PIAAC: A new strategy for assessing adult competencies. International Review of Education 54 (5–6): 627–50. http://dx.doi. org/10.1007/s11159-008-9105-0. Schuller, T. 2006. Education and equity: Perspectives from the OECD. In J. Chapman, P. Cartwright & E.J. McGilp (eds), Lifelong Learning, Participation and Equity, 1–24. Dordrecht: Springer. Schuller, T., W. Jochems, L. Moos & A. Van Zanten. 2006. Evidence and policy research. European Educational Research Journal 5 (1): 57–70. http://dx.doi. org/10.2304/eerj.2006.5.1.57. Sen, A. 2009. Capitalism beyond the crisis. New York Review of Books, 26 March. Retrieved www.nybooks.com/articles/archives/2009/mar/26/ capitalism-beyond-the-crisis. Sloat, E., & J.D. Willms. 2000. The International Adult Literacy Survey: Implications for Canadian social policy. Canadian Journal of Education 25 (3): 218–33. http://dx.doi.org/10.2307/1585955.

Smith, D.E. 1999. Writing the Social: Critique, Theory, and Investigations. Toronto: University of Toronto Press. Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: AltaMira. St. Clair, R. 2009. The Dilemmas of Accountability: Exploring the Issues of Accountability in Adult Literacy through Three Case Studies. Toronto: ABC Canada Literacy Foundation. Statistics Canada. 1991. Adult Literacy in Canada: Results of a National Study. Catalogue No. 89-525E. Ottawa: Minister of Industry, Science and Technology. Statistics Canada. 1996. Reading the Future: A Portrait of Literacy in Canada. Ottawa: Statistics Canada. Statistics Canada. 2005. Building on Our Competencies: Canadian Results of the International Adult Literacy and Skills Survey, 2003. Ottawa: Statistics Canada. Statistics Canada & OECD. 2005. Learning a Living: First Results of the Adult Literacy and Life Skills Survey. Ottawa, Paris: Statistics Canada and OECD. Sticht, T.G. 2001. The International Adult Literacy Survey: How well does it represent the literacy ability of adults? Canadian Journal for the Study of Adult Education 15 (2): 19–36. Sticht, T.G. 2005. The new International Adult Literacy Survey (IALS): Does it meet the challenges of validity to the old IALS? Retrieved http://en.copian. ca/fulltext/sticht/ials/ials.pdf. Sticht, T.G. 2009. Adult literacy education in industrialized nations. In D.R. Olson & N. Torrance (eds), The Cambridge Handbook of Literacy, 535–47. Cambridge: Cambridge University Press. Street, B. 1984. Literacy in Theory and Practice. Cambridge: Cambridge University Press. Street, B. 1997. Literacy, economy and society: A review. Working Papers on Literacy 1: 7–16. Montreal: Centre for Literacy. Thorn, W. 2009. International adult literacy and basic skills surveys in the OECD region. Education Working Papers Series 26. Paris: OECD. Tuijnman, A.C. 2003. A “Nordic model” of adult education: What might be its defining parameters? International Journal of Educational Research 39 (3): 283–91. http://dx.doi.org/10.1016/j.ijer.2004.04.008. Tusting, K. 2009. “I am not a ‘good’ teacher; I don’t do all their paperwork”: Teacher resistance to accountability demands in the English Skills for Life strategy. Literacy & Numeracy Studies 17 (3): 6–26. Walker, J. 2008. Going for gold in 2010: An analysis of British Columbia’s literacy goal. International Journal of Lifelong Education 27 (4): 463–82. http:// dx.doi.org/10.1080/02601370802051462.

Westell, T. 2005. Measuring non-academic outcomes in adult literacy programs: A literature review. Retrieved http://en.copian.ca/fulltext/ measuring/measuring.pdf. Willms, J.D. 2003. Literacy proficiency of youth: Evidence of converging socioeconomic gradients. International Journal of Educational Research 39 (3): 247–52. http://dx.doi.org/10.1016/j.ijer.2004.04.005.

2 Learning Global Governance: OECD’s Aid Effectiveness and “Results” Management in a Kyrgyzstani Development Project
marie campbell

This chapter offers insight into how development aid – especially its more effective management as promoted by multilateral organizations and donor governments – contributes to the post-Cold War organization of global capitalism. The Paris Declaration on Aid Effectiveness (OECD– DAC 2005) and its recommended management by results (MBR) strategy are the discourses at the centre of this analysis. Sponsored by the Development Assistance Committee of the Organisation for Economic Co-operation and Development (OECD–DAC) and supported by the multilateral development banks and development-related agencies of the United Nations, the Paris Declaration’s five principles establish a new (or, as is argued elsewhere,1 a reformulated) knowledge frame for managing aid. Like other practices of good governance such as fiscal transparency, aid effectiveness addresses problem areas that concern both donors and recipients of aid. But so-called good governance does not necessarily make these practices into a neutral public good. Speaking about fiscal transparency that they say is “inherently political,” Philipps and Stewart note that, besides disciplining a government’s expenditures, “the information that such transparency makes available establishes credibility for financial markets, international lenders and donors” (2009: 799), suggesting that the benefits accrue lopsidedly to these actors. Other observers analyse the “power relations [arising] from the asymmetries of resources among the various organizations, countries, professions, and interest groups” that are involved in governance efforts in “the interplay between democracies, markets, networks and hierarchies” (Held, Dunleavy & Nag 2009: 2–3). Learning how good governance operates to advance particular interests and agendas is especially relevant in the field of international development, where

prudent investment and management of official development assistance are confidently expected to strengthen local economies, thereby reducing poverty and empowering people. My analysis focuses on something that is consistently overlooked in critical analyses of globalizing governance and management; in what follows, I analyse how ideas that are promoted by organizations such as the World Bank and OECD and by countries donating development assistance make the crucial move from particular discourses into people’s local development action. Local participants learn to be governed, I will argue, through the training provided by doing management by results. The Paris Declaration on Aid Effectiveness had achieved a presence in people’s talk and action in Kyrgyzstan when I began researching “Women’s NGOs in Kyrgyzstan, International Funding and the Social Organization of Gender”2 in 2007. Women working at the grassroots level in development were expressing mixed feelings about the Paris Declaration, and a series of regional meetings of gender activists had been organized by UNIFEM (United Nations Development Fund for Women), including a meeting in Central Asia in May 2007 to develop a consensual position regarding the policy. Women activists worried that their work could suffer loss of funding when, as aid effectiveness recommends, development assistance enters their country through national budget support. UNIFEM was leading the effort to put gender successfully into the country’s new development policy documents. My research colleague Elena Kim and I subsequently collected data on the contribution of local gender advocates as the Kyrgyz Republic responded to the Paris Declaration’s principles (e.g., country ownership) by developing a national development policy called Country Development Strategy (see Campbell & Teghtsoonian 2010). During this period, Kim and I began to hear and see how the new focus on aid effectiveness and on the use of MBR was emerging in local development projects. While MBR is the OECD’s recommended approach to account for development assistance going into national government budget support, it was also appearing in projects funded by international non-governmental organizations (INGOs) involved in development. Our attention was drawn to one small environmental development project funded by the Dutch INGO, Hivos, in which a variant of MBR was being used for its management and accountability. Interviews we conducted with the international manager of this project in Kyrgyzstan suggested that its results orientation was an all-encompassing focus of the project work. For us, as institutional ethnographers, a puzzle emerged

about what was being accomplished through making “results” the central focus of work in a development project, in the manner we heard about from our informants. The promise is of good governance and the attendant efficiency and effectiveness of the application of donors’ funds. Problematizing this undertaking as an institutional ethnography offered us the possibility to explore this particular setting as an entry for analysis of how MBR worked in practice. Our social organization of knowledge framework meant that we treated what we saw and heard about the operation of the local development site as an expression of the social relations of the development institution about which we wanted to learn more. We understood that the project work being conducted would connect people and their local activities into the larger development institution, and we were intrigued to explore what was actually happening. What could we learn about the ruling relations of the development institution from tracing the talk about doing results management back through the project funding and accountability to its origins elsewhere? MBR is not new in the west. I had taught it to Canadian social work students in the 1980s, using training materials obtained from the Ontario government’s Management Secretariat. It is now used throughout Canada’s federal government, including in the Canadian International Development Agency (CIDA) and its international projects. The rationale that the Paris Declaration makes for its aid effectiveness recommendations is sufficiently plausible that donor and aid-recipient countries, and even otherwise sceptical civil society organizations (CSOs),3 are endorsing the use of MBR. Its use seems a tidy way to organize and control funded activities taking place outside the purview and ordinary control mechanisms of the donor bureaucracies. Yet analysts of development attempting to untangle relations of dominance in development aid have begun to query features of management similar to those our informants described. Cornwall and Brock, for instance, identify the importance of language in new forms of managing development, and they trace the trajectories of what they call buzzwords that have become part of the “interplay between ‘money changing hands’ and ‘ideas changing minds’ that is international development” (2006: 50). I want to add “effectiveness” and “managing by results” to concepts such as “good governance,” “transparency,” “ownership,” and “participation” that, as Cornwall and Brock say, transmit “consensual meanings from the centre of the discourse” (ibid.: 60). To do so, I focus on the actual practices in which such transmission can be recognized as it takes place in people’s development work.

Understanding work-in-texts as important to how ideas move and are taken up in widely dispersed sites, I analyse the texts and textual practices through which MBR is enacted by the participants of this development project in Kyrgyzstan. It is in such practices that I understand language and discourse to be relevant. As institutional ethnographers emphasize, these elements of any setting appear in and as its social relations. Therefore, the specific language heard about in ethnographic interviews and in texts mentioned in accounts of project-related work make the social relations available for analysis. Looked at within the social and ruling relations of local settings, aid effectiveness can be seen as more than a set of widely distributed policy recommendations. Rather, it is part of a discursive technology whereby OECD–DAC reaches across the boundaries of nations and institutions, engaging local actors in particular practices whose goals are both specific and general. Beyond any particular project results achieved on the ground (which this study does not assess), managing the work in a resultsoriented manner is itself an achievement in that it organizes project activity into definite “transparent” forms for extended uses. In constituting an authoritative textual version of what happens, any project’s activity can be monitored and its “results” calculated and then compared across all sectors that organize themselves within the same discourse. The project participants of my study, citizens of a country with an economy in transition from Soviet socialism, are developing the individual capacity to calculate and represent their activities-in-texts as part of a global knowledge regime constituting the information environment for managing aid funding “effectively.” In doing this development work, project participants learn to think and act in the calculative mode that Peter Miller noted was “turning into a routine component of programs and strategies of government in western nations, and one that is an important component of the ‘assistance’ currently being provided to Eastern European economies” (1994: 243). Miller’s account suggests that MBR gives project participants the individual capacity to engage in practices integral to the operation of global capitalism. My contribution in this chapter is to track the geographically dispersed sites and ruling texts of aid effectiveness to identify how MBR enters and instructs a particular manager’s work in the Kyrgyzstan project. Analysing this technology in one setting suggests how an empirically based understanding can be generated, as opposed to accepting as given from the content of the discourses, the possible benefits or the drawbacks of learning to participate in this global knowledge regime.

“Effectiveness” Accomplished in an Accountability Circuit

The story as I present it here begins in the Netherlands, the country whose official development assistance contributes much of the support for the project in question. As a signatory of the Paris Declaration, the Netherlands is committed to aid effectiveness principles, and, in the particular development funding relationship being examined, the Dutch government passes that commitment on to the INGO Hivos. Reports such as those from the Kyrgyzstan project will later become the basis for Hivos to demonstrate to its funders in the Dutch foreign ministry that its funds are being used effectively. Generating the appropriate results information for use in the “effectiveness” knowledge loop hooks Hivos into the policy commitments of the government of the Netherlands. This commitment is explained on the Hivos website in their “Hivos Policy Framework for Improved Result Orientation and Result Assessment” (Hivos 2004). Consisting of five separate documents and a link to further web-based and library resources, including the OECD–DAC Glossary, these web documents are aimed at the managers of Hivos-funded development projects and are identified as either “basic reading” or “recommended reading” for them.4 The documents explain that Hivos is reorganizing its own internal management into what it calls a results-oriented approach to improve its own organizational efficiency as well as to ensure that adequate attention is given to the effectiveness of the use of the funds that Hivos receives from its funders and passes on in project funding to partners in developing countries. These web documents are written for the purpose of communication with agencies or groups that are applying for or managing development project funding from Hivos. Once accepted for funding and subject to its own management and accountability regime, Hivos calls such organizations its partners. My reading of the web documents finds their tone to be informal, almost conversational, reflecting, I assume, the process of consultation that had been carried out with project partners in advance of the documents’ appearance on the website. Consulting the organizations that it funds about an upcoming change in its reporting requirements appears consistent with Hivos’s history of basing its development work on democratic and social justice values.5 That Hivos now will impose results-based management on projects that local NGOs conduct in their communities reflects the pressure being exerted on Hivos to change its management practices in order to account for effectiveness. To me, the changes appear significant. Project partners

are being expected to adopt new ways of conducting their activities, learning to use the new language of results orientation. The next section of this chapter illustrates the very specific language that a project manager has been taught to use in her own reporting to Hivos and for which the OECD Glossary, available on the internet,6 provides the authority. The new results framework on the Hivos website offers what seems to be a reasonable argument for the changes that, it explains, are to “improve the quality of result assessment processes within and between” Hivos and its partner organizations. Interviews I conducted with Hivos staff support the perception that the origin and urgency of the new reporting requirements lay elsewhere than in improvement of the efficiency of its own internal organization – although that, too, was a motivation. The changes Hivos was undergoing were being encouraged by multilateral organizations in a variety of ways. Already by 2004, management of development assistance had become the focus of transnational improvement efforts. For instance, the Netherlands was a signatory of the Rome Declaration on Harmonisation (2003), an OECD-organized agreement to standardize methods of accountability that donors would require from the recipients of official aid – and that would, among other things, lower “transaction costs” on both sides. Besides asserting that aid recipient countries needed to improve their practices of managing aid, the Rome Declaration targeted for improvement the alignment among donors and within donor organizations of specific accountability practices. Added to this was the World Bank’s influence both on the Netherlands and on Hivos itself. Previously Hivos had been fully funded by the Dutch government, but by 2004 Hivos had been forced to look beyond government funding when the Netherlands began dividing its development assistance among a greater number of international development agencies. Hivos went to the World Bank for additional funding. The World Bank has its own fiscal accountability agenda for development and throughout the previous decade had been organizing meetings, in which the Netherlands had participated, to encourage the building of results-focused “corporate cultures”7 in donor organizations. In these ways, donors such as the Netherlands were being introduced to the expectations of multilateral organizations that eventually were summarized in the five principles of the Paris Declaration. By 2006, a government official, speaking at a conference organized by Hivos to discuss the impact of aid effectiveness on autonomous INGOs, acknowledged that the Paris Declaration “heavily influences the way we at the Dutch government work.”8

Complying with external demands from its funders for better reporting of the effectiveness of its project funds created an internal conflict within Hivos. According to its website, Hivos aims to build relationships with its development partners that promote their autonomous development. The imposition of results-oriented management and accountability carries implications that might be seen as conflicting with this mission. For instance, the new approach would make it possible to determine a best course of development action quite outside an interpersonal consultative relationship with partners. But doing so would lead Hivos to violate its own principles. To forestall the possible contradiction, in Part 1 of its Results Framework Hivos rejects the notion that results reporting can be substituted for other forms of knowing. The website makes the point that the social and political changes towards which development programs aim are difficult both to assess and to attribute to particular causes. Hivos states that it does not expect that results reporting will supply causal connections between intervention and “outcomes” or “impacts” (2004: Part 1, 3). Nevertheless, resultsoriented reporting is seen to be important, and the website offers two different justifications for it. One justification arises in Hivos’s own operations; the other is expressed as an expected improvement in their partner organizations. As regards the first justification, it seems that the external demand to account for the effectiveness of funding afforded Hivos an opportunity to rethink its own knowledge framework and find a way of expediting its processing of partners’ reports. The Results Framework states that, in 2004, the work of processing “800 annual reports of partner organizations” stretched the limits of the agency’s administrative capacities; Hivos “didn’t have the instruments to process all this information in a way that facilitates analysis or learning at a level beyond the individual partner,” and this deficit strengthened interest in requiring “more focus and analysis in partners’ reports and gaining more technical possibilities for Hivos” (ibid.: Part 2, 2). The second justification for adopting results-oriented management follows from the first, namely, if the results orientation is useful for Hivos, it should also be good for partners’ organizations. The Results Framework states specifically that Hivos’s partners and their own organizations will benefit from learning how to do results reporting. Beyond compiling information for better “aggregation and analysis of the very diversified result information [Hivos] receives [which is] necessary for reasons of external accountability, policy evaluation and knowledge building,” the new management

system is expected to bring benefits for partners – when they apply their learning to how they manage their own undertakings (ibid., 5). Among the benefits to partners of doing results-oriented management is the opportunity being extended to them to participate in the new processes of accountability (ibid.: 7). Not just the techniques of management but the relations between the two partners, the Framework suggested, will be improved. Hivos tackles head-on the issue of power relations that emerge in the Results Framework, where the “aid chain” is explained as a “system of channelling financial resources from rich countries to [groups within] poor countries” (ibid., 4). Donor-partner relationships are acknowledged to be unequal because “the power to set conditions and standards, to formulate reporting requirements, to judge which results are good enough and to decide on the continuation of funding, lies with the actor providing the funds” (ibid.). The Results Framework takes a pragmatic approach to what, from Hivos’s perspective, is the reality of the unequal situation, arguing that the power differentials will be reduced as the new results orientation and assessment system of accountability “frees” its partner NGOs in important new ways. For instance, the Framework states that, through “negotiation, partners [can shape] their own result assessment practices and jointly set standards” (ibid., 5). In the new managerial relationship being established, according to the Results Framework, Hivos does not dominate nor determine exactly what its partners will do in funded projects. Once the results-oriented processes are adopted, the local development workers are free to decide for themselves the specific activities with which they will fill in the reporting categories. The accountability circuit that is being put in place establishes the Paris Declaration on Aid Effectiveness as the first-level regulatory text: the Netherlands enacts its accountability relation by demanding “effectiveness” reporting from Hivos. The web documents show how Hivos gears up to take the required action through which it will demonstrate its effective use of donors’ funds. The MBR technology recommended by aid effectiveness (in this instance, Hivos’s results orientation) is expected to generate knowledge in a form that works within this accountability structure. Hivos’s Results Framework thus becomes the second-level regulatory text: it guides the specific managerial practices through which “results” will be made explicit in project reports (as discussed in the next section). In doing so, Hivos accomplishes transparency of the “effective” use of allocated development funds, just as budget transparency is required of aid-recipient governments.9

Generating “Results” in Language in a Development Project

Accounting for results of the (funding of the) environmental project in Kyrgyzstan closes the accountability circuit through which effectiveness of the Dutch development assistance to Hivos is established. To arrive at reportable results, project participants must learn to express their local environmental knowledge, and related project plans and actions, in a particular manner. Project participants are expected to adopt a text-based method of taking action in which (written) objectives become the basis for choosing indicators of desired results, where an indicator is defined as “a quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor” (Hivos 2004: Part 4, 4).10 The web documents of the Hivos Results Framework provide a 14-page set of instructions for partner organizations to use to design programs and choose indicators within the Hivos model for results-oriented thinking. For instance, the concept of a “results chain” is explained as “activities lead[ing] to concrete results (output) that are used by and have effects on people (effect/outcome) in a way that is expected to contribute to more structural changes in the longer term (impact)” (ibid., 1). This textual expression of results is what will make the reports to Hivos efficiently processable; using specific terms will prepare project reports for entry into Hivos’s own effectiveness accountability reporting. The results-oriented management creates the funded project as a largely textual domain of action. In doing so, a problematic of transliteration appears for institutional ethnographic analysis. An institutional ethnographer recognizes that, in the movement of participants’ knowledge into the language of results orientation, something analytically and politically relevant happens. However, that the management of this transliteration comprises definite work activities is likely to be taken for granted and its importance for reorganizing the setting can and will be overlooked. We watch what happens at the Kyrgyzstan site. As ethnographic data reveal, the international manager, whom we call Katya, becomes responsible for ensuring that the transliteration happens and the project is properly textualized. Our interviews reveal that the work of steering the project, with its focus on work-in-texts, is neither “natural” nor easy. Katya has attended training sessions offered by Hivos and has learned to see her management work as mainly about information and reporting. She is absorbed by its specific language: “K: From any

body of information I can distinguish outputs, results, what is lacking and what needs to be evaluated. This is what I call management. This relates to skills of how to work with information.” Katya’s managerial attention is oriented by the project proposal (which becomes the third level regulatory text) and whose approval and acceptance by Hivos triggered funding of its stated project objectives.11 Katya (or her predecessor in the project management position) interpreted the objectives, decided on implementation strategies, and among the first project activities published a call for volunteers from regional environmental NGOs to become participants in thematic working groups (TWGs). Relations between Katya and the Kyrgyzstan-based members of TWGs are conducted primarily in text, supplemented by episodic, face-to-face meetings. The TWGs are encouraged to propose, describe in writing, and carry out pilot projects whose themes would advance the larger project objectives. Topics approved (by Katya and her Bishkek-based office manager, Zara) and taken up for action by five TWGs include, among other themes, ecological education, environmental monitoring and advocacy, and public participation. Interviews that were done with project managers and participants in the course of our ethnography suggest that they were faced by a variety of difficulties as they learned to apply the results orientation in doing the project work. Asked about such difficulties, Katya, a project manager, suggested that getting good internal reports was the most challenging, creating problems for her, for her office manager, and for the participants, too. She explained: “You know, [participants] don’t know how to distinguish activities from outputs, from results, from problems, etc. They just write a narrative story; for them if they have worked a lot [that] means ‘results.’ I gave them trainings, explained them many times, gave examples from my reports. I invited [the office manager] to come here [Amsterdam] and worked with her through the reports. It is becoming better now towards the end of project [of four years].” The training of the participants – amateur environmentalists from mainly small and previously isolated Kyrgyzstan NGOs – offered them specific and detailed instructions for enacting the project within the resultsoriented terms. They (now members of one or another of the TWGs) were expected and coached to engage in the project work using the language and methods of results-oriented management, developing their plans and project activities in its categories, and eventually reporting in the same way. One instance, examined below, illustrates the “everyday” or routine practices of what became the textual mediation of the


project. If working as it is conceptualized, the textual mediation of results-orientation management should coordinate participants’ thinking and action. Of course, its practical operation depends on participants’ learning, which in this case is less than perfect and, as the data analysed below illustrate, the project work is not fully coordinated with the ruling managerial discourse. Yet project decision-making does take place in reference to the project texts – a coordinating achievement of the textual practices – and participants learn something of how they are to conduct their activities in the new mode. Even so, some participants are surprised to find that they have produced texts that then order what they can do. Transposing local knowledge into technical language to define sites of possible action is what some analysts have called rendering what is to be developed “technical” (see Li 2007: 7). But as also shown below, the activity that textualizes the project for its results management does more than render the field into technical terms for managerial purposes. What is happening, it is being argued here, is the operation in this setting of a new knowledge regime; this kind of work achieves the knowledge basis for many forms of worldwide regulatory action. One incident from our ethnographic data offers insight into how the new knowledge regime was enacted in the site. It concerns decisions about the funding that the larger project made available for the TWGs to use in their pilot projects. Managers Katya and Zara told us that they reserved the right to approve these expenditures. In the situation being examined, Zara and Katya disagreed with the participants’ choice. Zara explained: “We provided funds to groups [and those funds] were supposed to cover their expenses when enacting action plans. So, the monitoring [and advocacy] group wanted to purchase technical equipment for measuring radiation or something … but that was not included in the goals of the project. As a result, we did not give our agreement for that [purchase].” This disagreement caused that particular TWG, the monitoring and advocacy group, to disband and leave the project. What becomes important for the present analysis is the textual relation between the objectives (Zara has substituted the word “goals” for “objectives” in the data excerpted above) and the group’s request for a particular purchase. When the disagreement arose, this TWG had already decided on and written objectives for their pilot project and its (textual) proposal had been approved by the managers. Now, the group’s desired purchase became unsupportable because, according to Zara, what they wanted to purchase “was not included in the goals


of the project.” As readers or even as analysts, we have no basis for judging who was in the right, in the sense of which purchase would be most useful for this project of environmental activism. Rather, the analytic point is that the decision about funding was made in relation to the group’s own planning text. The text, with its coordinating power (rather than a more local weighing of the pros and cons of the purchase or even, according to Zara, an appeal made to distant funders), ruled the decision. “The text is active here. A perceptual standardization is being organized, such that people differently positioned in relation to a named object (or in this case, a funding decision based on an ‘objective’) can see it as the same. Hence, diverging perspectives that are the necessary outcome of being in bodies and starting from each individual’s own centre of co-ordinates … can be concerted in words that organize perceptual generalization” (Smith 2005: 85–6). The results-oriented approach with this kind of language basis for action was being taught to the TWG volunteers and its proper use was coached, supervised, and monitored. A good deal of project time was spent in preparation for using the expected (results-oriented) approach that the volunteers were to adopt in conducting their pilot projects.12 After choosing pilot projects, the first activity undertaken in the TWGs was to identify objectives and associated indicators of their future achievement; once conceptualized in this manner, group participants were to make plans for using the indicators to assess how well the pilot project had worked. Katya needed reports of such achievements to generate an annual report that would begin to show “effectiveness” to Hivos, and she relied on Zara to get suitable reports. Serious breaches in the results orientation appeared in all the reporting and Katya had to manage them. This was partly done through her monitoring of an Excel time/activity spreadsheet into which Zara entered project activities and their specification. When this documentation appeared inadequate for her purposes, Katya would request more specification by noting lapses in detail on the form itself and talking to Zara via email or on Skype. Episodically and again nearing the end of the project, the working groups forwarded reports of their activities through Zara to the international agency office in the Netherlands, where Katya used them to construct her reports to Hivos. With one exception, in the reports I read in translation little indication was offered of the expected rolling out of pilot project activities in the language of objectives and matching indicators that made possible the text-oriented monitoring of results. The group working on ecological education was the exception. It produced


a final report that showed its participants were getting the idea of how to use the language properly. Their report listed both the objectives of its pilot project and the indicators (of projected results) that the group had set for itself. It also said that at the conclusion of the pilot project a questionnaire had been circulated to people attending a final meeting, from which the group leader had constructed “outcomes” and ideas about the long-term “impact” of the pilot project. Sections from this group’s properly constructed report offered Katya a basis of comparison with the poorer mastery of the results orientation by other TWGs. For instance, she reported that other groups “had difficulties developing indicators for monitoring projects’ effectiveness” (project file of annual report to Hivos, May 2008: 9),13 and that one group lacked “knowledge about monitoring system development techniques and as a result had difficulties developing indicators and reporting forms” (ibid.: 10). Besides this evidence of “lack of capacity” in results orientation, Katya’s annual report to Hivos hinted at other difficulties that apparently could not be described in the language of “results”; instead, she presented some details in narrative form, as “case studies.” Katya’s case studies remind us of the real world in which these amateur environmentalists live and work “in their bodies.” (In her 2005 book, Smith noted that bodily experience provides a basis for knowing that an authoritative textual basis can supersede). While apparently not an effective “achievement” that could be pre-planned, managed, and monitored and thus properly reported as such, Katya’s case studies show people taking action on aspects of the local environment that mattered to them. We hear that confrontations with mining companies14 stretched the volunteer environmentalists’ public participation capacities past their abilities to manage such interaction smoothly. The report carries a tone of criticism of the efforts of the TWG involved. And these events were not properly textualized, leaving one to wonder if Hivos’s more efficient processing of annual reports could recognize in this case study a “result.” Not only Katya but also an external evaluator felt that the project was less than fully successful in achieving its objectives. However, even if results were not fully achieved and results’ reporting was less than successful, this does not mean that the volunteers/participants were failing to learn the lessons about using the language of results management. Nor does it mean that the new knowledge regime would fail in its achievements. However removed from the actuality it represents, its new form as text would make counting possible and it would be counted as it moves upward through the development institution.


Textual Coordination of Actors and Activities: Advances in Global Governance Even though amateur environmentalists’ responses within the Hivos results-oriented project were sometimes “flawed” (as illustrated), these project participants nevertheless are being introduced to the resultsbased frame of the knowledge regime and how they should enact their parts in it. Learning, we can assume, takes place from all experiences of doing results-oriented work, both positive and negative. The monitoring and advocacy group learned that the proposal they had produced in the required format, language, and categories – including its approved “objectives” – would trigger approval of only “matching” project purchases. As they were to discover, their textual production had ruled out the particular action that they wanted to take. Even though they withdrew from the project, this experience constitutes an important lesson about how texts work and how people can work with them. Their negative experience would have taught these volunteers that their own reading and writing of the proposal objectives regulated their actions. A proper reading of their pilot proposal, as written, would have directed their response differently, their desire being directed towards a different purchase. Notice, too, that the managerial decision was entirely objective and available for all participants to recognize as such. It was the text “speaking.” A proper activation of the project texts can be expected to culminate in local action “freely” undertaken, yet aligned with the project design and its relevancies, fulfilling Hivos’s assurances of improved relations with project partners. At one level that promise of freedom for their partners to conduct project work as they see fit was entirely borne out in this Kyrgyzstan project. No evidence was discovered that Hivos dictated to or knowingly dominated the local project. Katya’s strategies assumed a similar commitment: each TWG was free to propose a pilot project, to decide among the participants themselves which of any suggested pilot project to choose and write up, and so on. Yet this apparent flexibility does not reduce the regulatory effect of participants’ coordination within the knowledge regime. The coordination implicit in the (correct) operation of the knowledge regime is in line with how institutional ethnography treats texts: “not prescribing action but establishing the concepts and categories in terms of which what is done can be recognized as an instance or expression of the textually authorized procedure” (Smith 2006: 83). This account of how textual coordination works describes the


particular text-action-text sequence in which the purchase of equipment was denied the monitoring and advocacy group. The language of their proposal would activate whatever response – in this case to approve a purchase – could be read as an instance of what the text authorized. Project participants would discover in either positive or negative exchanges that their pilot project proposals were part of an ongoing course of institutional action, not something to be completed and then forgotten. A project text, as illustrated here with regard to the proposal for a pilot project, performs in different readings and for different purposes over the course of the project. It regulates project activity. Besides the approval of purchases, the whole conceptualization of the pilot project’s management and results assessment follows and makes use of a properly written proposal. The education group, mentioned previously, apparently understood this.15 Their project objectives served to determine their choice of correct (or at least managerially useful) indicators for their eventual monitoring of results. The adequate selection of definitions and use of indicators made technically possible their monitoring of designated project activities. In this manner, the institutional action is advanced. Participating through specific processes of reading and writing is recognized by institutional ethnographers as constituting a specific kind of relation: “reading a text is [understood to be] a special kind of conversation in which the reader … “activates” the text … responding to it or taking it up in some way. Its activation by a reader inserts the text’s message into the local setting and [into] the sequence of action in which it is read” (Smith 2005: 105). Being acted on, in the results-oriented process, are not just the objects (as per “objectives”) of development projects, but the subjects of the action as well. Katya learned “management language” and enacted her work accordingly. Beginning with reading the call for volunteers and continuing in trainings and proposal and report writing, the participants in the Kyrgyzstan project were being trained and sanctioned to move from their practical knowledge of their everyday lives, experiential communities, and local environmental problems into the project’s textual domain. Their learning is crucial to this coordinated action. To the extent that a local project’s text-action-text sequences incorporate these individuals into textual commitments, they become the agents of the results-oriented text. Acting in this way, they put in place a particular institutional order. The text-mediation makes possible a terrain of calculability. As participants become capable of enacting the textual practices, doing so changes them as people. It should become second nature,


as Katya suggests it is for her, for all project partners to be able to construct a results account and, by engaging with the concepts designed into the project, to calculate their own project activities successfully. The analysis shows that not everybody in this project was successfully or fully integrated into the new knowledge regime, but everybody was expected to learn how to construct their project activities so that they could represent what they did in its terms. To that extent, they are also learning self-management (discussed by Li 2007 and Ilcan & Phillips 2008, who see in self-management a new form of domination in development). A contradiction ensues between self-management and the “freedom to do as they see fit,” as Hivos claimed for results-oriented management. In the Kyrgyzstan project, people’s activities were being coordinated in learned responses to special texts, and as they gained the capacity to do so “as second nature,” their subjectivities were being coordinated within the ruling relations of results management. “Ruling,” as understood by institutional ethnographers, influences what happens by substituting the interests of rulers for those of the people being ruled. Ruling takes place routinely, in contrast to being enforced violently, when people’s everyday work organizes them to act in terms of somebody else’s interests. Multilateral Organizations, Textual Coordination, and Ruling The principles, including the recommended management practices of the Paris Declaration (promulgated through the OECD), aim to build more effectiveness into the management of development assistance and, according to the official view, this will bring benefits to donors and recipient governments. If the definition of the United Kingdom’s Department for International Development (DFID) “to measure the quality of aid delivery” (DFID 2007) is used, effectiveness is accomplished textually. The texts, such as the ones Hivos was instituting in this environmental development project in Kyrgyzstan, make monitoring and measuring possible and “effectiveness” calculable. However, my analysis opens up to view the way in which an important but ordinarily submerged ruling relation is constituted between the discourses of the management approach and the local setting. We have already seen how in the transliteration, even using Hivos’s nonmathematical version of indicators and results, local knowledge, and action are expressed in objective concepts that can be aggregated. Managing in a results-oriented manner makes all project activities,


their outcomes, and the use of funds objectively knowable in new ways. Doing this transliteration work trains development workers to know themselves and their world differently. According to Smith, when “the sense a subordinate text makes is found in the interpretive frame that the regulatory text establishes” (2005: 87), one can recognize “power” operating discursively. This is the form of power that is routinely exercised in results-oriented management. The analysis of this one project exhibits how regulation of subordinate texts by superordinate ones can coordinate action throughout the whole intertextual domain of the development project. What does or might it mean that socially organized circuits of accountability draw individual participants into new forms and relations of ruling? Whose interests are being substituted for those of the participants – the people recognized as the intended beneficiaries of aid? Beyond building somewhat suspect accounts of success within development projects, the reporting work constitutes the currency through which the business of development and economic investment is done. In the project studied (and as argued by Campbell & Teghtsoonian 2010), it seems that the high-level knowledge goals of aid effectiveness (tied to ideas and strategies of economic growth as the engine of poverty reduction) are being materialized through the doings of local development actors. It matters that it is these textually constructed accounts of results that are being read as authoritative in ruling sites. Getting suppressed in the process is the authority of onthe-ground knowing from experience, as well as participants’ confidence to speak from that knowing as an alternative form of “truth.” This problem appears everywhere in the world; it is not just a problem of the participants of one development project. Becoming expert in the (ideological) work of text-based management – the transliteration of an actuality into a textual object that fits a ruling frame – constructs the textual environment in which people adapt themselves, their knowledge, and their beliefs to the ruling frames in which they are immersed. Many questions remain unanswered about how the development of a particular aid-recipient country’s own economy and its people’s well-being are being promoted through the processes of aid effectiveness.16 My analysis contributes to informed discussion and debates of such important topics by making visible the workings of the discourses that otherwise seem to simply “circulate,” somehow supporting good governance. Discovery of the ruling relations that tie authoritative discourses and their carefully crafted and well-funded circulation


into local sites of development action makes a difference in how their authority can be read. My analysis of the activation of aid effectiveness and MBR in this one instance sends a cautionary message. When the new knowledge regime is not taken for granted, and the work of transliteration is revealed, its effects – the framing-up of what actually happens into management and discourse-relevant language, categories, and accounts – must be recognized as ruling practice. Ruling practices have definite consequences. While not materially present to the project participants, the OECD–DAC discourse is regulating action in the local setting. The social relations operating through the (work of) accounting for the funds that Hivos receives from the Netherlands and then makes available in Kyrgyzstan hook that distant project into the ruling agenda of the development institution. At issue is not simply the usefulness of constructing accounts of “effectiveness,” but what goes missing in that version of what happens. In this case what is lost may be something important about Hivos’s specific humanist mission for international development (Hivos 2008). More generally, we are able to see some of the actual workings of a global governance project that needs our careful scrutiny.

NOTES 1 Many critical analysts point out that the “new” development aid frame maintains its longstanding focus on adjusting the policies of aid-recipient countries in the way that discredited structural adjustment programs (SAPs) of the 1980s did, but the addition of the principle of “countryownership” creates more legitimacy for the enterprise. 2 Under SSHRC Grant No. 861-2007-0019. 3 See International CSO Steering Group (2008). 4 The Results Framework, Part One (five pages) includes a Table of Contents and Introduction and guidelines for use; Part Two (9 pp.) is Background; Part 3 (7 pp.) is Procedures for Improved Result Orientation; Part 4 (14 pp.) is Programme Design and Indicators; Part Five (2 pp.) contains Annexes and References and offers links to websites as well as books and articles about results-based management. Of particular relevance is the 40-page OECD–DAC Glossary of Key Terms in Evaluation and Results Based Management (OECD–DAC 2010), which is the authoritative basis for the terminology of Hivos’s policies and communication on its own results orientation and results assessment.


5 Hivos describes itself in the following terms: “Hivos is a development organization, which stands for emancipation, democratization and poverty alleviation in developing countries. For this purpose financial and political support is given to more than 800 local private organizations in 30 countries in Africa, Asia, and Latin America; the seven policy spearheads of Hivos are financial services and enterprise development; socially and ecologically sustainable production; human rights and democratization; HIV/AIDS; arts and culture; gender, women and development; ICT and media” (Hivos 2008: 3). 6 The statement of purpose for the guide is as follows: “The DAC Working Party on Aid Evaluation (WP–EV) has developed this glossary of key terms in evaluation and results-based management because of the need to clarify concepts and to reduce the terminological confusion frequently encountered in these areas. Evaluation is a field where development partners – often with widely differing linguistic backgrounds – work together and need to use a common vocabulary. Over the years, however, definitions evolved in such a way that they bristled with faux amis, ambivalence, and ambiguity. It had become urgent to clarify and refine the language employed and to give it a harmonious, common basis. With this publication, the WP–EV hopes to facilitate and improve dialogue and understanding among all those who are involved in development activities and their evaluation, whether in partner countries, development agencies and banks, or non-governmental organisations. It should serve as a valuable reference guide in evaluation training and in practical development work. The selection of terms and their definitions in the attached glossary have been carefully discussed and analysed and have benefited from advice and inputs, notably from DAC Members and the academic evaluation community … A WP–EV Task Force, chaired by the World Bank, led the overall project, in collaboration with the Secretariat” (OECD–DAC 2010). 7 The first International Roundtable on Better Measuring, Monitoring, and Managing for Development Results took place on 5–6 June 2002 at the World Bank headquarters. Multilateral development banks jointly sponsored the Roundtable – the African Development Bank, Asian Development Bank, European Bank for Reconstruction and Development, Inter-American Development Bank, and World Bank – in collaboration with the Development Assistance Committee of the Organisation for Economic Co-operation and Development. It included representatives from borrowing and donor countries, the IMF, UN agencies, the EC, other international agencies, and civil society. The Roundtable took stock of

ongoing efforts in countries and agencies to manage for results, with a focus on the actions needed to build demand for and increase capacity to adopt results-based approaches in developing countries. It stressed the need for development agencies to offer co-ordinated support for capacity-building and to harmonize approaches to results-measurement, monitoring and reporting. Further, it discussed ways for development agencies, including the MDBs, to develop results-focused corporate cultures and incentives (Managing for Development Results 2002). At the Second International Roundtable on Managing for Development Results in February 2004 in Marrakech, Morocco, participants “reflected on how donors can better co-ordinate support to strengthen the planning, statistical systems, and monitoring and evaluation capacity that countries need to manage their development process. As a final outcome of the Roundtable, the heads of the multilateral development banks and the chairman of the OECD’s Development Assistance Committee endorsed common principles on managing for development results, including a commitment to specific actions for 2004” (Managing for Development Results 2004). 8 This account is taken from internal Hivos documents made accessible during ethnographic research. 9 When development assistance goes into a country through budget support, Aid Effectiveness requires accountability to be “mutual” for donors and recipient governments. Budget transparency is expected to make recipient government use of funds auditable. 10 This is the OECD–DAC definition, and several others are also offered on the web page. 11 Hivos objectives as stated in the final report of the (internally available) external evaluation of October 2008 were to strengthen the communication among Kyrgyz environmental NGOs and to develop long-term cooperation projects between Kyrgyz environmental NGOs and their networks. 12 A team of project trainers in Kyrgyzstan produced manuals and conducted training sessions for volunteers. 13 This report, written by Katya for submission to Hivos, is not normally available to the public. Katya supplied it in the form of an electronic file during an ethnographic interview. 14 Citizens in one region in Kyrgyzstan had had the experience of having their river contaminated by gold-mining wastes, and other new mining sites were being developed, bringing increased tensions around the country. The politics of mining and of environmentalism were made more complex by the government’s interests in resource extraction.


15 Contemporary institutional action and training in administration and management have become routinely text based in the west. Some of the volunteers for this project in Kyrgyzstan were university educated and would have already been introduced to this form of organization. Zara, for instance, had a degree in administration. Yet this development project provided a setting where these ideas had to be put into practice by the volunteer participants themselves, quite possibly for the first time. 16 My data are not adequate to make the local experience of project participants sufficiently visible to support a strong argument about benefits for them through use of the results-oriented process.

REFERENCES
Campbell, M., & K. Teghtsoonian. 2010. Aid effectiveness and women’s empowerment: Practices of governance in the funding of international development. In S. Rai & K. Bedford (eds.), Feminists Theorize International Political Economy special issue. Signs: Journal of Women in Culture and Society 36(1): 177–201.
Cornwall, A., & K. Brock. 2006. The new buzzwords. In P. Utting (ed.), Reclaiming Development Agendas: Knowledge, Power and International Policy Making, 43–72. Basingstoke: Palgrave Macmillan.
Department for International Development (DFID). 2007. DFID annual report 2007: Development on the record. Retrieved http://webarchive.nationalarchives.gov.uk/20110622155615/http://www.dfid.gov.uk/About-DFID/Finance-and-performance/Annual-report/Annual-Report-2007/.
Held, D., P. Dunleavy, & E.M. Nag. 2009. Editorial statement. Global Policy 1(1): 2–3.
Hivos. 2004. Hivos policy framework for improved result orientation and result assessment. Retrieved http://www.hivos.org/results-measurement.
Hivos. 2008. HIVOS International. Hivos 14(1).
Ilcan, S., & L. Phillips. 2008. Governing through global networks: Knowledge mobilities and participatory development. Current Sociology 56(5): 711–34. http://dx.doi.org/10.1177/0011392108093832.
International CSO Steering Group. 2008. From Paris 2005 to Accra 2008: Will aid become more accountable and effective? A critical approach to the aid effectiveness agenda. N.p. Retrieved www.ccic.ca/_files/en/working_groups/003_acf_accra_summary_recs.pdf.
Li, T.M. 2007. The Will to Improve: Governmentality, Development, and the Practice of Politics. Durham, NC: Duke University Press. http://dx.doi.org/10.1215/9780822389781.


Managing for Development Results (MfDR). 2002. First International Roundtable on Better Measuring, Monitoring, and Development Results. Washington, DC. Retrieved www.mfdr.org/1stRoundtable.html.
Managing for Development Results (MfDR). 2004. Second International Roundtable on Development Results. Marrakech. Retrieved www.mfdr.org/2ndRoundtable.html.
Miller, P. 1994. Accounting and objectivity: The invention of calculating selves and calculable spaces. In A. Megill (ed.), Rethinking Objectivity, 239–63. Durham, NC: Duke University Press.
Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD–DAC). 2005. The Paris Declaration on Aid Effectiveness and the Accra Agenda for Action. Retrieved http://www.oecd.org/dac/effectiveness/34428351.pdf.
Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD–DAC). 2010. Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD. Retrieved http://www.oecd.org/development/peer-reviews/2754804.pdf.
Philipps, L., & M. Stewart. 2009. Fiscal transparency: Global norms, domestic laws, and the politics of budgets. Brooklyn Journal of International Law 34: 797–859.
Rome Declaration on Harmonisation. 2003. Retrieved http://www.oecd.org/development/effectiveness/20896122.pdf.
Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: Rowman & Littlefield.
Smith, D.E. 2006. Institutional Ethnography as Practice. Lanham, MD: Rowman & Littlefield.


SECTION TWO

The extent of the managerial turn in governance is made possible by digital technologies. Computers are excellent sorting machines, once the material to be sorted has been reconstituted as machine-recognizable data. This reworking of the everyday world into data for entry into virtual technologies is, of necessity, an objective rendering of the social world. As we saw in the previous section, this objective rendering reshapes the work organization at the front line while reframing the raison d’être of that work. However, people work is not an easy fit with the categories that intend the institutional circuits. The papers in this section explore the disjunctures that open as front-line workers are required to report their work in the objectified data categories for digital technologies. Lindsay Kerr explores the Ontario School Information System (OnSIS). This chapter closes the circle between reporting, data, and computer systems. Drawn from her dissertation research (2010), Kerr focuses particularly on the OnSIS institutional circuits put in place to coordinate student, school, and teacher information. Kerr’s critique of OnSIS attends to the objectification of teachers’ work that is required to develop the commensurability of measurements across the diversity of teaching in rural, urban, suburban, and exurban settings. As Kerr notes, the OnSIS accountability relations narrow the possibilities for reporting that have been developed over the 100 years of public school teaching. In order to link reporting in Ontario to transnational ranking structures such as the PISA or TIMMS standardized tests, Ontario’s Ministry of Education has employed a version of outcomes-based teaching and learning coordinated with standardized curricula and testing. Kerr documents the struggles of teachers as they try to manage the


disjunctures between their work with children in the classroom and the need to select from that diversity those experiences that will map onto the standardized reports required for OnSIS. Janet Rankin and Betty Tate, both nurse educators in post-secondary education, explore the appearance of e-governance in nursing education in Canada. They focus on the development of technologies for managing nursing labour resources. The separation of action from the local to the translocal and therefore the ability to view local actions in one place as commensurable with local actions in another are features of new governance processes. Digital technologies are organized to coordinate nursing education practice from local nursing school standards to regional and national nursing standards. Of particular interest in Rankin and Tate’s chapter is the involvement of nurse educators in the development of the institutional technologies of governance. They show how nurse administrators are drawn into consultative processes in which the conceptual frame of commensurability has already been established. Their work, then, is to help set up the institutional circuits through which non-local assessment and evaluation can be established. Michael Corman and Karen Mellon refocus our attention on the difficulties that face those who do people work. Like the other chapters in this section, and indeed throughout this book, their chapter traces the everyday work of making one set of actions commensurable with actions across a variety of sites. However, commensurability work by paramedics and emergency department nurses raises some interesting problems for patient care. Paramedics are among the first responders to any emergency event. As the authors note, the range of “problems” that paramedics encounter is limited only by the diversity of people they care for. Regardless of the multiplicity of health issues they face, paramedics and nurses must insert the individual into the managerial organization of the emergency department. This data-based interface is the focus of their analysis. The Canadian Triage and Acuity Scale (CTAS) is one organizer. Nurses apply a numerical acuity scale to incoming patients. In the handover from paramedic to nurse, the patient is slotted into a standardized acuity ranking. The CTAS is coordinated with the new electronic version of the Patient Care Record (PCR), highlighting particular conditions, subordinating others, and co-ordering the work of paramedics and nurses within the electronic Patient Care Record (ePCR) and CTAS framework. The section concludes with the chapter by Marjorie DeVault, Murali Venkatesh, and Frank Ridzi. Set in the ongoing reorganizations of


Medicare in the United States, they document an institutional circuit that seems to work well for everyone involved. They analyse the development of a workable relation between those who receive Medicare, the nursing homes to which Medicare is paid, and the civil servants who vet the applications. The Medicare forms are long, detailed, and notorious for the amount of time and effort required to complete them. DeVault et al. describe a series of work routines that smooth the collection of data needed to complete the forms. The process they describe is in striking contrast to the other chapters in this book. Here the institutional circuits draw from actions in the everyday world that are already worked up as data. For example, regardless of how difficult it may be for seniors to leave their family home, the housing market is fully commodified. Thus, the worth of the family home, for purposes of the Medicare form, is relatively easy to establish and insert into the appropriate line. De Vault et al. bring into view another feature of the institutional circuits being described in this volume, but that is not always visible. Institutional circuits are not static. Where the disjunctures, such as those explored by Corman and Mellon, raise issues that cannot be resolved, the work routines are often revised until they are stable, at least from the perspective of the institution. Thus, workplaces are often unstable in relation to the requirements of the institutional routines that make up the circuit. New work routines, shifts in responsibilities, reorganizing categories, and so on are often part of the worksite until a reasonable coordination between local and translocal is achieved. This is not to say that the conceptual framing of any particular work organization will change.


3 E-governance and Data-Driven Accountability: OnSIS in Ontario Schools
Lindsay Kerr

Everyone knows that the challenges facing education haven’t changed substantially in the past 100 years; educators are still in the business of turning blank slates into valuable resources.
– Steve Thompson, President and CEO of SRB Education Solutions Inc. (2007)1

While this epigraph may seem puzzling to the reader, it is pertinent because it pinpoints the “ideological code” (Smith 1999: 157–71) of the database management system that is unpacked in this chapter. The Ontario School Information System (OnSIS) is an initiative of the Ministry of Education, designed in partnership with a private IT company, SRB Education Solutions Inc.2 Under the Managing Information for Student Achievement (MISA) Project, the ministry began phasing in OnSIS in 2004–2005 (OnSIS–SISOn & I&IT 2007). However, as a practising teacher myself, I was not aware of OnSIS, and neither were the teacher-participants whom I interviewed as part of my research into the Student Success Strategy.3 While not explicitly defined, the Student Success Strategy lends a positive spin to a disparate array of policies that target students “at risk” of dropping out of high school (Kerr 2009a). As stated by the ministry, the three “core priorities” for the strategy are “high levels of student achievement; reduced gaps in student achievement; and increased public confidence in publicly funded education” (Ontario Ministry of Education 2008a). Whereas these priorities may have surface appeal, it is the way they are picked up in the re-regulation of educational governance that is problematic for the teacher-participants in my study. Most of them had not heard of OnSIS,


while others had heard of it, but did not know what the acronym stood for or what it accomplishes. The first reference to OnSIS that caught my attention was a footnote in an evaluation report conducted by the Canadian Council on Learning (CCL 2007) for the Ontario Ministry of Education. The CCL describes OnSIS as providing the “infrastructure” for the Student Success/Learning to 18 Strategy in Ontario and as “a web-based application, which integrates and collects board, school, student, educator as well as course and class data” (ibid.: 37n6). This report finds the excitement about OnSIS among ministry “informants” not surprising, given the investment of work, time, and money in the initiative. The CCL predicted that the “Ministry will retire the [former] Legacy system at the end of the 2005–2006 school year. OnSIS will be the sole authoritative data source for 2006–07” (ibid.: 92). Across this and other reports are repeated calls for “evidence-based” research and “evidenceinformed” practice. Thus, the OnSIS/MISA project intends to collect and merge data from all school boards across the province into a single database management system, although internal ministry memos indicate various technical difficulties, delays in meeting the deadlines, and “enhancements” of OnSIS, such that the phasing-in process continues. In 2009, I could find no information about OnSIS from the ministry, through website searches and enquiries by telephone or email. This lack of transparency on the part of the ministry and teachers’ lack of knowledge about it prompted my intertextual analysis. The situation has since changed and the ministry website now includes that information (http://www.edu.gov.on.ca/eng/policyfunding/misa/ – last modified: 9/5/11 3:34:00 p.m.). Based on analyses of interlocking texts (below) and putting the pieces of the puzzle together, I argue that OnSIS is a governance technology that increases surveillance over students and teachers. Not only is OnSIS the primary source of “facts in education” in Ontario, but also it institutionalizes an accountability regime that objectifies teachers’ work and reorganizes schooling to make it technologically commensurable with other nations. This displaces embodied relations between students and teachers in classrooms and subordinates teachers’ everyday work to e-governance accountability processes that rely on objectified statistics. In doing an intertextual analysis of OnSIS, I draw on Dorothy E. Smith’s (1990) explication of the social organization of facticity, and of ideological circles. Rather than taking facts as actuality, Smith states: “The history of the making of the factual account is … its facticity; the social


organization of its production structures ‘what actually happened / what is’ so that it will intend the schemata, concepts, and categories that both describe and interpret it … Facticity is essentially a property of an institutional order mediated by texts” (ibid.: 79). Within an institutional order, facticity is organized in ideological circles that operate both as forms of circular reasoning and as forms of social coordination in which textual practices that carry the ideological schema of the institution are theorized by policy-makers who operate quite apart from front-line workers. As Smith explains: “Characteristically such ideological circles are laid down in and inhabit organizational forms separating those who theorize, formulate, conceptualize, and make policy from the front-line workers who experience the actual ways in which the organization interrelates with its objects. Those in actual contact [i.e., teachers] with those who are the objects of action [i.e., students] are not those who frame the policies, categories and concepts that govern their work” (ibid.: 95). Starting from the experiences of practising teachers, my research takes its problematic from disjunctures teacher-participants encounter in their everyday work, that is, between policy and practice. Participants’ Accounts Teachers and Guidance Counsellors Classroom teachers are engaged in face-to-face contact and decisionmaking in the context of moment-to-moment interactions with students. In their everyday work, teachers are responsible and accountable for “delivering” curricula as well as reporting on students’ marks, attendance, and behaviour. Teachers interface with and contribute to the collection of electronic data through the daily practice of recording student attendance, as well as filling in student report cards at regular intervals during the school year. This responsibility at times entails grappling with ethical issues; a teacher-participant describes her ethical dilemma in reporting on a new immigrant student as follows: Ironically, yesterday, I sat looking at a memo to staff from our school indicating that early interim reports for students in grades 10 to 12 need only be issued to at-risk students. I sat there thinking, “Does that mean those with weak attendance [which could lead to lack of “success,” i.e., failure] or those who, due to low levels of literacy, may not be at grade level? Do I demoralize the student who is making the effort but is misplaced level-wise in terms


of her capacity to read and write? Do I mark her ‘unsatisfactory,’ she [being] a student who is trying hard? Or, do I mark her work ‘satisfactory,’ but find a comment that would indicate her area of weakness?” I did the latter, and probably this suggests that I am likely to pass her despite the fact that she will not come close to grade level. That’s me, my decision based on my belief system. Is it ethical?

The dilemma lies in reconciling care for the student with the administrative reporting requirements. Early interim reports flag students predicted to fail a course as “unsatisfactory.” The decision the teacher makes in reporting is oriented towards not “demoralizing” the student early in the course and giving her the benefit of the doubt. However, not flagging the student has consequences; if the student does not pass the course, the teacher will be held accountable for not having identified the “problem” early on in the official record. The implication of her decision is that she will likely have to pass the student at the end of the course, though the student may be ill equipped to cope with the next grade level. This epitomizes the tension for teachers between an ethic of care and accounting logic (Kerr 2006). Teachers working with students “at risk” adapt, negotiate, and deliberate around standardized official requirements to suit the particular context and particular student. Computerized student report cards record each course taken (by course code), the mark assigned, attendance to date, and computerized comments. Additional forms that constitute part of the official records target “exceptional” students on both academic and non-academic grounds. Academic forms called Individual Education Plans (IEPs) are documents that must be filled out by special education teachers for students identified with learning, behavioural, or physical disabilities.4 A special education teacher-participant says that in doing IEPs for his students, he draws on a “cheat sheet” provided to him by a school consultant (higher in the educational hierarchy). For him, filling in the IEP FileMaker template boxes on the computer for every student is time consuming and has negligible value in his everyday work with students, and the “cheat sheet” aids in “getting it down to a little system.” While the cheat sheet of standard phrases in official language enables him to efficiently satisfy the administrative reporting requirement, the teacher’s utterances are subsumed by officially sanctioned phrases from the list. There is also a plethora of non-academic forms for documenting behavioural misconduct, as legislatively mandated by the Safe


Schools Act of 2000 and amendments. At the Toronto District School Board (TDSB), the Operational Procedure 699 series of forms include Notification of Risk/Injury, Violent Incident form, Behaviour Log, Safety Plan, and so on.5 The series indicates a progression in seriousness of misconduct. For example, the Special Education Student Safety Plans (Form 699J) documents the behaviour of students deemed to constitute a risk to school safety. These forms flag students “at risk” for closer scrutiny, trigger further processes and so forms, and arguably stigmatize, pathologize and criminalize students. A teacher-participant describes the Safety Plan and Behaviour Log as follows: “It [the Safety Plan] deals with crisis and it deals with the escalation. So there’s two parts: when things are bad, bad, bad, what to do; and when things [are less serious], warning signs, I guess ... all clear, mapped out. It has to have a Behaviour Log, you know, [recording] duration and intensity and frequency [of misbehaviour], and you have to keep a log for three months. And you have to have clear triggers, clear antecedents, clear behaviour, clear outcomes.” Once a student is flagged as a risk to school safety, a documentary process of surveillance ensues in which teachers have to record every infraction in the Behaviour Log, in numeric and linear terms (duration, intensity, frequency; antecedents, triggers, outcomes). An infraction that might otherwise be handled informally enters the official record. These official forms constitute evidence in school team meetings, school board appeal processes, and court cases involving a judicial review of suspensions and expulsions under the so-called Safe Schools Act of 2000; thus, these forms that teachers fill in are hooked up and oriented to appeals and legal processes. Whereas teacher-participants do not condone violence and assert the need for teachers to know about students in their classes who may be a threat to themselves or others, their concern is that some students are being unfairly labelled as behaviour problems, such as those with Asperger’s or Tourette’s Syndrome. Participants remark that teachers customarily talk to and consult with each other informally about students in their everyday work, but that intra-school communication (with special education, guidance, and administration) is also becoming increasingly standardized in written forms that reduce face-toface communication. Repetitive use of these fixed formats arguably has a tendency to reframe the language used by teachers with each other, coordinating their consciousness and influencing their work with students.


Reporting in fixed formats prescribes how teachers document their work and how students are profiled in the official records. Through activating and filling in forms according to predetermined checkboxes, categories, and/or data sets, teachers constitute “informants” of what is happening in the so-called black box of the classroom, in compliance with administrative-managerial priorities. Moreover, the standardizing effect of texts for academic and non-academic reporting obliterates the uncertainties and indeterminacies of practice and reproduces the relevancies of the governing framework in the official record. Currently, individual student records are held in the student’s OSR (Ontario Student Record), a manila folder containing hard-copy documentary forms and records for every student from kindergarten to grade 12. The OSR follows the student from school to school and is retained at the last school attended for three to five years; it has a physical presence and location. Teachers have access to OSRs and may consult them as needed. Upon written request, parents can access the OSR to view its contents, and can request that certain documents, such as behavioural or psychological reports, be removed from the file. However, the move underway from paper to electronic digital records through the OnSIS/ MISA project raises concerns about privacy, access, and control over people’s lives. Web-based technologies like OnSIS depend on security systems to control access and protect data, but are susceptible to data mining. As distinct from paper records, electronic memory systems enable records to be stored in digital archives indefinitely and to be shared translocally. How teachers activate computerized record-keeping technology varies across schools and boards. Classroom teachers with the TDSB do not have access to electronic records, except uni-directionally, at report card time, when they enter data about each student in their classes; at one school in Toronto, teachers book computer time on a limited number of computers to input course data into the computer program called TrilliumTM Student Information System (produced by SRB Education Solutions Inc. 2009b).6 Although he has “never heard of OnSIS,” a teacher-participant at another school board that uses a different computerized student information system (eSIS) expresses his frustration with having to record daily attendance at the start of every class, as well as course marks and comments on report cards: I’ve never heard of OnSIS. We’re on eSIS. I can mortgage a house faster than I can take attendance! If it were timed in comparison to just doing it


on paper, you could open up room for [more teaching time]. The box for “ethnicity” – we’re not allowed to use it, yet – it’s just something no one talks about and we were simply told to not worry about it. The new eSIS mark entry system takes double [the time for report cards] to enter marks, and handcuffs us so much that we can only write 255 characters [for comments] and most teachers are down to four blanket comments with only one word changed in each. Everyone uses blanket learning skills comments because you have to wait so long for what you type to appear … but parents will only see it as “lazy” commenting.

Aside from feeling “handcuffed” by the technology in providing relevant feedback to parents, he is troubled by the silence surrounding an “ethnicity” box that he sees as indicating the potential to gather sensitive personal information about students. Other sensitive personal information about “exceptional” students is collected from special education teachers in the form of standardized computerized Individual Education Plans and Safety Plans. Even though the predetermined categories do not reflect the actualities of practice, in satisfying their administrative/reporting obligations on attendance, report cards, IEPs, Safety Plans, and so on, front-line teachers contribute to the OnSIS data collection regime. Guidance counsellors and school administrators activate Trillium (or its equivalent) in their everyday work of enrolling students, formulating timetables, providing counselling, and/or managing discipline. For guidance counsellors, Trillium facilitates certain routine aspects of their work. However, they convey ambivalence about how their work has changed; several of them remarked that they spend more time on the computer and less time with students during the school day. As one guidance counsellor put it, Trillium is useful for timetabling, maintaining student transcripts, tracking attendance, and obtaining contact information about students. The “credit summary” aspect of Trillium provides more information than the student transcript does; for example, it records courses failed in grades 9 and 10, which do not appear on the student’s transcript, since “full disclosure” of failed courses applies only to transcripts for grades 11 and 12. The “Notepad” function documents meetings, and includes dates, persons present, and decisions made. Even though it is a useful, timesaving timetabling device, Trillium imposes limitations in extraordinary situations. A guidance counsellor explains: “I have to go to the principal to override the program to increase the size of a class to accommodate


a student when a class is closed, and there is no space and no other timetable option available; or to permit a student to take courses in a different order, if that is advantageous to a student who, say, may not have the prerequisites or co-requisites.” Thus, class sizes and course prerequisites or co-requisites written into the program language of Trillium impose electronically mediated constraints that restrict guidance counsellors’ discretion in timetable decision-making and enforce the ritual of obtaining permission from a principal-administrator. Each transaction is automatically logged by username and time. Moreover, graduated access to Trillium enables higher levels of authority to override default program code, to activate a broader range of data, and to monitor lower levels in the educational hierarchy. Whereas guidance counsellor participants spoke at length about Trillium, they did not mention OnSIS. One of the guidance counsellor-participants had heard of OnSIS, but was not aware of what it accomplishes or how it is hooked up to Trillium. During the interview, she noticed a “new OnSIS icon” as a menu item at the top of the Trillium screen. Surprised that she could activate it, she expressed dismay when she discovered that it contained information about her, including her own timetable. Her puzzlement in the immediacy of the moment is taken as a disjuncture that marks the operation of ruling relations. Teachers’ and guidance counsellors’ accounts reveal how their work is reorganized with the advent of computerized record-keeping technologies oriented to managing professionals and displacing their discretion. However, none was aware that OnSIS also contains personal information about teachers. School Board Informants Whereas OnSIS is not yet visible to classroom teachers, and school reporting work is coordinated by the school administration, data management work is organized by people working in various IT departments at the ministry and at school boards. A school board representative describes the work intensification at reporting time: “We have been short-staffed and there is a lot going on, especially as we come up to reporting time.” He explains OnSIS and MISA and related data reporting activities as follows: It [OnSIS] is a web-enabled system that is being used for the collection and management of education-related information at the elemental and aggregate levels. It will provide information to facilitate policy development

and board funding as well as tracking, monitoring and accountability at the ministry, board, and school levels. OnSIS is part of the MISA project. Day schools will send their data to OnSIS three times a year for the submission dates of October 31, March 31, and June 30. These files will be validated and cross-referenced against other ministry information, such as validating OEN [Ontario Education Number] information, and verifying data across schools and boards … Data areas are: educator, class and student.7

OnSIS data are activated by policy-making and management levels of the educational hierarchy and is hooked up to funding and accountability mechanisms. “Aggregate”-level data pertain to depersonalized statistics, while the “elemental” level pertains to identifiable personal information that enables the “tracking, monitoring and accountability” of individuals by various levels of educational governance. OnSIS records student data by Ontario Education Number (OEN), but teacher data are also recorded by Ministry Educator Number (MEN). Whereas former ministry-level information systems relied on aggregate data, OnSIS retains personal data about individual students and individual teachers, through the use of unique personal identification numbers (OEN and MEN) across the province that make it possible to extract elemental data from the aggregate. Data collection begins at the school level, using Trillium (or its equivalent), and is transmitted electronically to the ministry’s OnSIS website at regular intervals, via school boards. The data clearance work performed by people in IT departments at the school board level is summarized in Figure 3.1, Data Cleansing Loop, as described by a school board representative in an interview. The process works as follows: Trillium OnSIS is the application tool “to capture, format, and validate the student data that is required.” At reporting time, each school office runs a Trillium OnSIS snapshot for the school in order to validate ministry “business rules” against Trillium data, including educator, class, and student data areas. The snapshot detects “errors” or warnings to be corrected. Errors are referred to the appropriate person for correction. The process continues until there are zero errors. At the board level, the School Information System Department (SIS) uses the Trillium OnSIS application to create a “transmission” file in XML format, logs onto the ministry OnSIS website, and uploads the XML file.8 The ministry batch process checks and validates the data against its business rules.9 If there are errors, SIS works with the school to correct them

[Figure 3.1. Data Cleansing Loop. Flowchart: student data at the School feed a Trillium (Snapshot) check; errors loop back for correction, no errors pass to the Board / SIS, which transmits an XML file to the ministry OnSIS website; errors loop back, no errors lead to School Signoff; a further Board / Planning check loops errors back, and no errors lead to Board Signoff.]
in Trillium, and the process starts again from the beginning with a new snapshot and is repeated until there are no errors. When there are no errors, the School Signoff is complete, and SIS confirms the School Signoff. The last stage in the process is the Board Signoff. If there are errors when the data are checked against more business rules, then SIS works with the school to correct them. The process begins again with a new snapshot, and continues until there are zero errors. The process is complete when the Board Signoff has no errors and is confirmed by the Planning Department.
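The iterative snapshot-and-correct cycle just described can be sketched in outline. The following Python fragment is a hypothetical illustration only: the record fields, the two sample rules, and the function names are invented, and the actual Trillium OnSIS application validates against ministry-defined "business rules" and produces an XML upload rather than anything this simple.

```python
# A hypothetical sketch of the school-level snapshot-and-correct cycle.
# Field names and the two sample rules are invented for illustration.

def business_rules(record):
    """Return the error messages raised by one student record."""
    errors = []
    if not record.get("oen"):
        errors.append("OEN is mandatory")          # a rule of the kind cited below
    if len(record.get("main_schools", [])) > 1:
        errors.append("Main School Flag error")    # more than one school counting the student
    return errors

def run_snapshot(records):
    """One snapshot: validate every record and collect the errors found."""
    found = {}
    for record in records:
        errors = business_rules(record)
        if errors:
            found[record.get("name", "unknown")] = errors
    return found

def school_signoff(records, correct):
    """Repeat snapshot -> correction until a snapshot returns zero errors."""
    while True:
        errors = run_snapshot(records)
        if not errors:
            return "School Signoff complete"       # board transmission and Board Signoff follow
        correct(records, errors)                   # errors referred back to be fixed at the school

# Example run: the first snapshot flags the missing OEN, the correction supplies it,
# and the second snapshot returns zero errors.
students = [{"name": "Student A", "oen": "", "main_schools": ["School X"]}]
school_signoff(students, lambda recs, errs: recs[0].update(oen="000000001"))
```

Because the underlying data keep changing, a run that produces zero errors today can produce errors tomorrow, which is one reason the whole loop is rerun for each of the three submission dates.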

The OEN is a crucial field in data-cleansing work. An example of a school error is “OEN is mandatory,” that is, a student without an OEN is not accepted in the system. At the ministry level, an OEN discrepancy may show up if the OEN does not match one at the OnSIS website. As explained by the school board representative, “It’s hard to give business rules, there are so many of them.”10 An example at the board level given by this representative is a “Main School Flag” error; this error occurs when “more than one school is trying to count the student.” Whereas these examples of errors might seem relatively benign and rectifiable, they gloss over the larger implications. First, while people are involved in entering data and rectifying errors in Trillium, the warnings and errors are electronically generated; they depend on decisions about “business rules” that are made translocally and written into the computer programs. Technological problems have apparently delayed the OnSIS/MISA project, owing to system glitches, bottlenecks, and hardware and software failures; for example, Ministry Memo 32 cites “extraordinary issues associated with OnSIS implementation” (Ontario Ministry of Education 2007b: 2). Second, although it seems reasonable that policy and funding decisions ought to be based on “correct” information, the main IT work appears to be ensuring that data files are “structurally” correct according to computer protocols (such as the XML standard) and numbers (OEN and MEN), rather than substantively sound. Such a technical preoccupation displaces questions about the relevance and inclusiveness of actual data content. What kinds of information are collected? How else might the data be used (or abused), and for what purpose, through derivative activities? Third, the process is much more centralized at the ministry than pre-OnSIS datareporting practices and is highly prescriptive with respect to data sets and compliance with reporting and deadlines. The ministry’s “business rules” embedded in OnSIS ultimately control and regulate Trillium student records, at the local school level. Once fully implemented, data exchange directly from schools to the ministry OnSIS website could suffice in the management effort to find efficiencies, bypassing school board clearance procedures altogether. Electronic reporting may portend a major shift in educational governance, displacing school boards and democratically elected trustees; this is not inconceivable in light of new legislation (Bill 177, discussed below). Fourth, the data categories and fields included (or excluded) in the database management system and the kinds of queries and reports that can be generated from the data are not neutral; they too involve decisions. Once entrenched,

the predetermined data sets determine discursive practices and policy frameworks that coordinate how the system operates, which in turn has an impact on classroom teachers’ work. Finally, scarce educational resources are redirected to inquisitive and meditative activities, to develop the software and the infrastructure to input/correct data and to maintain and service the technologies, as well as for consults, training, meetings, and so on. According to reports to school boards, the original OnSIS/MISA budget range of $12–15 million over three years escalated to $90–100 million over that time;11 this amounts to a seven- to eightfold increase in costs, yet with no visibility to teachers in classrooms. The school board informant points out that OnSIS replaces the former ministry reports on school enrolment and the entire data-cleansing process (as depicted in Figure 3.1 above) is repeated three times a year for each school (31 October, 31 March, 30 June, as mentioned above). These student enrolment data are tied to educator and class data, such that the school board representative cautions: “As data is constantly changing in Trillium, a snapshot that had no errors today may have errors tomorrow.” Since computer-detected errors must be corrected in Trillium at the school level, the data-cleansing process places a considerable burden of work on schools (and boards) to support the ministry’s data collection regime. Whereas local data collection and reporting practices differ from board to board, the OnSIS/MISA project intends to standardize them across the province into a single centralized system, the full extent and implications of which are as yet unclear. Based on participants’ accounts, it seems that only certain limited aspects of OnSIS are accessible and visible to school principals and to some board staff when activated for specific assigned tasks; that is, different levels in the educational hierarchy activate different aspects of OnSIS. But who, if anyone, sees the whole picture? Putting the puzzle pieces together, formal reporting procedures not only regulate how students are written up in the official records of the institution, but also can be read as measures of school and system performance. As such, OnSIS is tantamount to a centralized data resource that can conceivably be activated to enforce accountability to ministry mandates; schools and boards that do not measure up in producing results face punitive sanctions (as discussed below). This raises critical questions: are the so-called Student Success Strategy and the infrastructure of OnSIS, which purport to raise student achievement and close the achievement gap, actually about student success, or school success, or system success?
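The contrast drawn above between files that are "structurally" correct and data that are substantively sound can be made concrete with a small sketch. The XML element names and the checking routine below are hypothetical, not the actual OnSIS transmission schema; the point is only that a submission can pass every structural test while the system never asks what the categories capture or leave out.

```python
import xml.etree.ElementTree as ET

# Hypothetical transmission fragment; the real schema and element names are
# not reproduced here and are assumed purely for illustration.
submission = """
<submission school="hypothetical-bsid" period="October 31">
  <educator men="MEN-0000001"><assignment course="ENG1D-01"/></educator>
  <class code="ENG1D-01" type="day"/>
  <student oen="000000001" grade="9"><enrolment course="ENG1D-01"/></student>
</submission>
"""

def structurally_valid(xml_text):
    """Structural check only: well-formed XML containing the expected data areas."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    return all(root.find(area) is not None for area in ("educator", "class", "student"))

print(structurally_valid(submission))  # True; substance never enters into it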

Intertextual Analysis of OnSIS Given that OnSIS was invisible to teacher and guidance counsellor participants in their everyday work, and negligible public communication or published material was available at the time, I undertook an intertextual analysis to unpack what is going on. This involved connecting the dots between ministry memos and reports, minutes of school board meetings and committees, legislation, and websites and other internet sources to figure out what OnSIS is and what it actually accomplishes as well as how it fits into educational governance and how it affects teaching practice. Memos from the ministry to directors of school boards reveal that work on OnSIS dates back prior to 2005 at the level of the ministry and school boards. The MISA project strategically coordinates the phasing in of OnSIS through directives from the highest level of educational governance, based on the meditative activities at seven MISA Professional Network Centres (PNCs) established across the province by the ministry. School boards use different databases, data fields, and reporting practices; for example, the TDSB uses the SRB computer program, Trillium, to keep track of students. Under OnSIS, all previous “legacy” data at school boards will be merged into a single database management system. The OnSIS/MISA project not only marks a technological move to large-scale computerized record keeping and data collection practices, but it brings school boards into compliance with centralized ministry mandates and renders the data comparable. The “objects” in these databases are not only students but also teachers, thus extending accountability and surveillance over teachers’ work. OnSIS extends computerized accounts beyond student academic records of attendance and marks to importing and merging databases across various translocal sites. Figure 3.2, OnSIS Data Sources, indicates some of the sources that have been identified from the intertextual analysis as linked to OnSIS. In order to make visible how OnSIS is articulated with educational governance, the analysis below focuses on Ontario Statistical Neighbours, the School Information Finder, Notice of Indirect Collection of Personal Information, the Student Achievement and School Board Governance Act (Bill 177), and SRB Education Solutions Inc. / StarDyne Technologies Inc. It proceeds to examine the national and international context, including documents from the Organisation for Economic Co-operation and Development (OECD).

[Figure 3.2. OnSIS Data Sources. Diagram showing OnSIS linked to data sources including StatsCan, the Ministry, EQAO, boards, schools, teachers, OSN, the ESDW, OUAC, OCAS, and the OCT.]
Ontario Statistical Neighbours Locating OnSIS within the wider context of education makes it apparent that electronic data-reporting technologies are hooked up across sectors, from elementary and secondary to post-secondary education. Being phased in across Canada since 2001, the Postsecondary Student Information System (PSIS) is a data management system that operates across the provinces and is overseen by Statistics Canada and the Council of Ministers of Education Canada (CMEC). PSIS longitudinally tracks students through post-secondary institutions and on to employment. In Ontario, school data from OnSIS are transmitted to PSIS via the Ontario Universities’ Application Centre (OUAC) and the Ontario College Application Services (OCAS). At present, there is a separate database of elementary schools called the Ontario Statistical Neighbours (OSN), a recent initiative of the Literacy and Numeracy Secretariat branch of the ministry. More in the open

than OnSIS, the OSN database is outlined in the document entitled Ontario Statistical Neighbours: Informing Our Strategy to Improve Student Achievement (Ontario Ministry of Education 2007c). In the absence of an equivalent publication about OnSIS, this document provides a window into what electronic educational databases accomplish. It seems likely that OSN has served as a pilot project and may be integrated into OnSIS in future, to circumvent data duplication. Rather than being oriented to individual students’ achievement, the OSN database is oriented to administrative-managerial functions: capacity building, program development, data to inform school- and board-based decision-making, research and evaluation, and strategic planning. The ministry maintains that OSN enables the answering of a myriad of questions, and it provides an example of one “school-focused question”: “Are there schools with demographic challenges like my school that are improving on the Grade 3 EQAO [Education Quality and Accountability Office] reading assessments and have a high proportion of students whose first language at home is different than the language of instruction?” (ibid.: 4). The question posed by the ministry indicates the kinds of queries and reports this database management system supports; it brings into view the comparative and competitive nature of the evidence being produced and its orientation to standardized EQAO tests.12 Taking the school as the unit of analysis in this text, the ministry declares that the “OSN was used to help identify the schools selected to participate in the Ontario Focused Intervention Partnership (OFIP) and the Schools on the Move: Lighthouse Program” (Ontario Ministry of Education 2007c: 3). Both OFIP and the Lighthouse program target “low-performing” schools. Based on the statistics of EQAO assessments and school demographics, schools tagged as “low-performing” or “static schools” are labelled as OFIP 1, OFIP 2, or OFIP 3 schools, depending on how poor their performance is.13 In 2006–2007, almost 1,100 schools came under review. Once tagged, the Literacy and Numeracy Secretariat dispatches Student Achievement Officers (SAOs) as “critical friends” to “help” OFIP schools to “improve” and avoid being slated for closure.14 In Ontario, “low- performing” schools are targeted for ministry intervention; using the depersonalized language of schools deflects attention from “underperforming” students who may be affected by school closures or targeted for enhanced scrutiny and surveillance in return for temporary top-up funding, with strings attached. Amid the constant repetition of slogans about student success, student achievement, and student-centred learning, the ministry’s MISA project of capacity building through OnSIS

statistics, such as OSN, seems tied to inquisitive or punitive interventions that target schools in poor neighbourhoods. A second implication of OnSIS arising from the precedent of OSN is pressure on teachers towards so-called evidence-informed practice, or data-driven instruction. Recognizing teachers’ lack of “comfort” with data, it seems the main thrust of the ministry is to orient teachers to “data literacy” and to retrain teachers to become results oriented. The “data wall” tool of the Literacy and Numeracy Secretariat constitutes an example of how the ministry envisages using data “evidence” to guide instruction and epitomizes outcomes-based education. According to a podcast of the Literacy and Numeracy Secretariat, a data wall tracks students’ progress (especially targeting students at risk) in visual form, mounted in a prominent place on a school wall with colour-coded stickers indicating where each student is at in terms of ministry standards and objectives and in relation to peers. Across Ontario, EQAO scores constitute the official barometer of student, school, and board “success,” and education becomes geared to closing the achievement gap on EQAO scores. The implications of OnSIS and e-governance for the front line are pressure to reorient teachers’ practice to teaching to the test; that is, “data-driven instruction” orients teaching practice to producing test results rather than focusing on actual students in the classroom. Teachers do not activate EQAO results in their everyday work, except in so far as they comply with ministry pressure to use a data wall. The primary activators of EQAO scores are school and board administrators and ministry officials tracking the public education system. Parents (and the public) have access to schools’ EQAO scores through school board websites as well as the ministry’s EQAO and School Information Finder websites; these public websites promote EQAO results as providing the information parents need to exercise school “choice.” The School Information Finder Launched in 2009, the ministry’s School Information Finder (SIF) website is arguably hooked up to OnSIS, since both are coordinated by the ministry and articulated to data collected by the ministry. As a window into the data fields of OnSIS, the SIF website indicates which categories of data are included (or excluded), at least from the partial view of what is made accessible to the public, when activated via computer and the internet. As such, the SIF site provides what Mathiesen (1997) calls a “synoptical” view of OnSIS.
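What such a "synoptical" comparison looks like in practice can be suggested with a deliberately simplified sketch. The indicator names, figures, and provincial averages below are invented; the sketch only mirrors the general form of the school-versus-province comparison described in what follows, in which census-based estimates stand in for the actual student population.

```python
# Invented figures; the real profiles draw on EQAO results and postal-code-based
# census estimates rather than these made-up values.
provincial_average = {"grade_3_reading_at_standard": 0.62, "lower_income_households": 0.18}
school_profile     = {"grade_3_reading_at_standard": 0.48, "lower_income_households": 0.35}

def compare_to_province(school, province):
    """Report each indicator as a signed gap relative to the provincial average."""
    return {indicator: round(school[indicator] - province[indicator], 2)
            for indicator in province}

print(compare_to_province(school_profile, provincial_average))
# {'grade_3_reading_at_standard': -0.14, 'lower_income_households': 0.17}
```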

This controversial interactive site posts “profiles” of elementary and secondary public schools across the province. Purporting to increase “transparency and accountability” and to “provide information” about the ministry’s “school improvement process,” the website states: “The School Information Finder increases the transparency and accountability of Ontario’s publicly funded school system. It provides information to encourage a more informed dialogue about schools and school communities. This information enables all members of the school community to be informed participants in the school improvement process” (Ontario Ministry of Education 2009a). A visit to the website shows that the SIF compares each school with the provincial average, based on student achievement and on student population. An interactive button (the “school bag” button), which previously enabled parents to select and compare up to three schools on the screen at the same time, was removed by the ministry shortly after the SIF launch, because of objections from teacher and community groups. The third table of Student Population tabulates demographic data based on Statistics Canada’s 2006 Census and school-specific enrolment data. Demographic data about the student population at the school is also compared with the provincial average. The Student Population data sets are the percentage of students who live in lower-income households, whose parents have some university education, who receive special education services, who are identified as gifted, whose first language is not English, who are new to Canada from a non-English-speaking country, whose first language is not French, and who are new to Canada from a non-French-speaking country. Rating schools against the “provincial average” shows disregard for the effect on school and community morale, that is, for the people who constitute those communities. How schools are presented to the public carries consequences for school enrolment and hence school funding, since Ontario’s school-funding formula is calculated according to school enrolment numbers. Most likely to be negatively affected are schools in poor communities. The school-specific Student Population data sets indicate a dominant focus on exposing percentages of students at the school who are most likely to be labelled “at risk.” The text-reader conversation (Smith 2005) involved in interpreting the data requires some knowledge about statistics.15 For example, the SIF mixes and/or conflates school-specific data and generalized demographic data from the Statistics Canada Census. The demographic indicators of schools are based on estimates by postal code for household

income and parental education that may mislead the public into believing them to be accurate representations of actual students currently enrolled at the school. Moreover, there is a time lag in these estimates: launched in 2009, the SIF demographic profiles of schools are based on the 2006 Census (the latest Census data available at that time).16 Using percentages of lower-income households and parents with some university education as proxy measures for the socio-economic status of the neighbourhood foregrounds class as a factor in comparing schools. Further, school-specific data that single out immigrants from nonEnglish (or non-French) countries and students whose first language is not English (or not French) suggest an ethnocentric bias; the special education and gifted sets highlight a cognitive-ablest bias that separates out percentages of gifted students from those with disabilities. Taken together, these data arguably accentuate social divisions and the prejudices and stigmas that ensue. Exposing marginalized “others” in this way obfuscates the equity question: how is it that schools with higher percentages of poor, disadvantaged, or ESL students may not “measure up” to the provincial standards? Similar biases or blind spots are undoubtedly inherent in OnSIS data; relayed to the public through the SIF website, they shape discursive practices about schools and the meaning and purpose of education. However, resistance to the SIF site has been apparent since its inception. Unanimous opposition by the Education Partnership Table (2009) is indicated in an open letter addressed to the premier of Ontario (Dalton McGuinty) and the minister of education (Kathleen Wynne) and copied to all Ontario members of provincial Parliament. Dated 1 June 2009, the letter is signed by a broad base of 21 educational groups across the public and Catholic, English, and French elementary, secondary, and post-secondary education sectors, including teacher unions, principals, trustees, deans, directors of education, school boards, and parent groups. Referring to the ministry’s new Equity and Inclusive Education Strategy, the letter points to contradictions between words and actions and between the ministry’s equity policy and the posting of social demographic data. Notice of Indirect Collection of Personal Information Nested within the ministry website under frequently asked questions (FAQ) is a crucial document: Notice of Indirect Collection of Personal Information (Ontario Ministry of Education 2009b). Under privacy

legislation (municipal, provincial, and federal; discussed below), it is a legal requirement in Canada to notify individuals about the indirect collection of personal data (as distinct from that directly provided by individuals). The electronic Notice at the ministry website is presumably intended to satisfy this legal requirement. However, unless a student, parent, or teacher happens to activate this particular FAQ page at the ministry website, the viewer would not know about the Notice. It has no date of issue or statement about when it was originally posted, which would indicate when the ministry began indirectly collecting personal data for OnSIS and whether it had the legal authority to do so at the time (discussed further below). Only the date when the Notice was “last modified” is stated at the bottom of the webpage. Checking this date periodically indicates that the Notice has been changed or updated more than once; hence, this is a fluid, unstable text. According to the version of the Notice dated “1/4/08,” the Information Management Branch of the ministry indirectly collects personal information about identifiable individuals from multiple sources: schools, boards, school authorities, EQAO, OCAS, OUAC, Statistics Canada, and unspecified “other organizations.” OnSIS is thus hooked into an extensive web of translocal ruling relations. The categories of personal information about students listed on the Notice include OEN, name, sex, date of birth, and educational history; similarly, the categories of personal information about teachers include MEN, name, sex, date of birth, and employment history. What is not declared in the Notice is that educator data collected by OnSIS includes both the status of individual teachers with regard to performance appraisals conducted under the auspices of the New Teacher Induction Program (NTIP) and the Teacher Performance Appraisal (TIP) process that is conducted every five years.17 The legislative authority for collection of private data is broadly stated as “the Education Act, the regulations, and the policies and guidelines made under the Education Act,” “Ontario Regulation 440/01,” and the “Freedom of Information and Protection of Privacy Act, R.S.O. 1990, c. F.31.” The only particular legislative clause cited from the Freedom of Information and Protection of Privacy Act (FIPPA) is section 2(1); a follow-up shows that this clause provides no more than the legal definition of “personal information” as “recorded information about an identifiable individual” and provides examples. In the second part of the Notice, the ministry justifies collecting indirect personal data as “evidence” to improve student achievement outcomes; to inform policies, programs, and practices; and to provide educational services,

administration, and planning at the local level. However, the blurring of administrative priorities, general system outcomes, and individual student achievement detracts attention from personal privacy rights for students and teachers. The Canadian Legal Information Institute website (CanLII, at www.canlii.org) indicates that FIPPA has been amended many times since 1990. The two aspects of the FIPPA legislation attempt to balance competing interests – between the public’s right to know (Freedom of Information) and individuals’ privacy rights (Protection of Privacy). The tension between these two competing interests is heightened when public sector workers (teachers) and those served (students) are a matter of public interest, yet may not be aware of their privacy rights. Moreover, the legislation passed in 1990 lags behind technological developments and digitalized record-keeping practices that have escalated exponentially over the intervening decades. The other legislation cited is Ontario Regulation 440/01 of 2001, which pertains to the Education Act; a follow-up shows that this regulation mandates the use of an Ontario Education Number (OEN) for all students in elementary or secondary schools and authorizes the indirect collection of personal information by the Ministry of Education, thus paving the way for a province-wide database management system. However, there is no mention of a MEN for educators in this regulation. A comparison of the 2001 version and the amended 2007 version of the regulation (at CanLII) shows that the subsequent amendment expands the data collection authority of the ministry beyond schools, to include post-secondary institutions, OUAC, and OCAS. Thus, the centralized data collection regime spans elementary, secondary, and post-secondary educational institutions. Moreover, it is possible to track individuals throughout their lives, from their education data to their employment data, by linking unique identifiers: the OnSIS OEN, the PSIS National Student Number (PSIS-NSN), and the Social Insurance Number (SIN). Whereas provincial privacy legislation (FIPPA) is alluded to in the Notice, there is no mention of other interlocking laws governing privacy: the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA; R.S.O. 1990, c.M.56) or the federal Privacy Act (Government of Canada 1983) and Personal Information Protection and Electronic Documents Act (PIPEDA; see Parliament of Canada 2000). The municipal, provincial, and federal privacy acts all predate largescale, web-based database management systems, and amendments lag behind or react to technological developments. Whether or not PIPEDA pertains to schools, it directly addresses the particular threats to privacy

posed by the emergence of electronic records and establishes legal obligations to comply with Fair Information Practices (FIPs; discussed further below) (Gellman 2011). Moreover, there is no mention of OnSIS by name in the Notice, thus downplaying the extent to which the data are centrally organized and coordinated. These omissions frame the ministry’s indirect collection of personal data in purely instrumental administrative terms, as “improving” policy decisions (for which depersonalized data may suffice at the ministry level) and individual achievement (which requires identifying persons at the local level), as if there were a clear boundary between these domains. The ministry states: “In most cases the ministry can meet its information needs after depersonalizing the data” (Ontario Ministry of Education 2009b; emphasis added). However, the extent to which the electronic record-keeping system of OnSIS is actually oriented to reorganizing and re-regulating the social relations of educational governance becomes visible through examining the processes behind the Student Achievement and School Board Governance Act of 2009. The Student Achievement and School Board Governance Act (Bill 177) Passed through the Ontario provincial legislature as Bill 177, the Student Achievement and School Board Governance Act of 2009 received royal assent on 15 December 2009 (Ontario Legislative Assembly 2009). This act amends the Education Act with respect to school board governance. It specifies new duties of school board officials and trustees and a new process for alleged breaches of the code of conduct; it also changes the powers and duties of the minister and adds a new “purpose provision.” The purpose provision centres on student achievement as the means to measure and evaluate education and the public education system, although “student achievement” is not defined in the act. Effectively subordinating local school board autonomy to ministry priorities, the purpose provision makes school boards accountable board-wide for student achievement and expands ministry authority to intervene and to place boards under supervision if they fail to meet the “purpose provision”; under previous legislation, the ministry had authority to take over school boards only if they failed in their fiscal responsibility to balance the budget. The coercive regulation written into law marks enhanced centralization of control at the ministry level that legally orders a chain of command actionable through law in which

directors (or supervisory officers) of school boards are mandated to report to the minister with respect to the “purpose provision.” Whereas this act purports to hold school boards accountable for student achievement, it is actually front-line teachers who engage with students and influence achievement. Accordingly, Section 42.(1) of the act pertains to the governance of teachers; it states the following: “Pending the board’s decision whether to terminate the teacher’s employment, the director of education for the board, or the supervisory officer acting as the board’s director of education, shall, (a) suspend the teacher with pay; or (b) reassign the teacher to duties that are appropriate in the circumstances in the view of the director of education or supervisory officer” (ibid.). The rewording of this subsection, compared with the previous legislation, adds “supervisory officers” to the chain of command and assigns them powers over teachers equivalent to those of directors of school boards. Within these relations of ruling, accountability for student achievement (as understood by the ministry) is effectively downloaded onto teachers, and directors as well as “supervisory officers” are granted the legal authority to “suspend” or otherwise “reassign” teachers who are not in compliance with the purpose provision. Not only does this legislation force directors (or “supervisory officers”) of school boards to report directly to the ministry, but it also controls and regulates teachers’ work by threatening to place schools under ministry supervision for failure to comply with the “purpose provision.” The meditative activities of the Governance Review Committee leading up to Bill 177 reveal an ideological circle in policy-making, in which the conclusions are actually contained in the original premise. In its report, the committee defines governance and what constitutes good governance as follows: “Governance is about the allocation of responsibilities within an organization. Good governance provides a framework and a process for the allocation of decision-making powers” (Governance Review Committee 2009: 12). In this ideological circle, the committee’s recommendations have been established in advance, according to ministry priorities. How educational outcomes, or student achievement, are to be gauged is left in vague terms, and there is no direct mention of OnSIS in either document. However, under recommendation 25 that deals with provincial interest regulations, the report specifies the “indicators” for ministry intervention in a school board as triggered by “results of provincial assessments and other indicators that reflect the increased sophistication in gathering and analysing data relevant to understanding progress in improving student achievement”

(2009, 49), that is, by EQAO scores and OnSIS data, though these are not mentioned by name. SRB Education Solutions Inc. / StarDyne Technologies Inc. OnSIS is hooked up to and produced by SRB Education Solutons Inc., a private-sector IT company that does the work of writing software codes, issuing updates, and providing ongoing technical support under contract to the ministry. The epigraph to this chapter epitomizes the ideological gap between data management systems developers on the one hand and front-line teachers or educators on the other with regard to the history and purpose of education; it indicates that SRB’s top executive advocates human capital theory as the core of the company’s “philosophy” of education. The corporate-managerial mindset built into the data systems becomes visible through tracking SRB’s achievements from its early work oriented to administrative tools for payroll, human resources, financial accounting, and planning. One example of potential abuses of OnSIS data appears under features of the student data archive system; the OnSIS suite can, the SRB website states, “maintain accurate past student records for future requirements such as reunions or fundraising” (SRB Education Solutions 2009a).18 Fundraising puts OnSIS data to purposes outside the legal authority for its collection; it extends beyond student achievement and education policy and is susceptible to data mining for commercial or other ancillary purposes. This misuse of personal data is in violation of clauses in the Freedom of Information and Privacy Act that restrict unauthorized use of personal information for fundraising without proper notice. From its beginnings in Markham, Ontario, with Trillium sales contracts to individual school boards, SRB has expanded its market niche to the OnSIS contract with the ministry. In its promotional literature, SRB refers to OnSIS as the Ontario Student Information System (rather than the depersonalized term of the Ontario School Information System, as used in ministry texts). The “SIS suite of products” centres “core Trillium” as the hub to which other core or web-based products are hooked up, one of which is OnSIS. Common words appearing in the promotional literature are assessment and achievement – student assessment, elementary achievement, secondary achievement, student success, and so on. This reiteration suggests that assessment of achievement is the core of the new public management accountability regime enabled by OnSIS.
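The "reunions or fundraising" feature quoted above gives a sense of how readily archived records lend themselves to derivative uses. The sketch below is hypothetical; the field names and selection criteria are invented. It shows only how small a step it is from an archived student record system to the kind of ancillary-purpose query the chapter identifies as falling outside the legal authority for the data's collection.

```python
# Invented archive records; none of these fields is taken from the actual systems.
archive = [
    {"oen": "000000001", "name": "Student A", "grad_year": 1999, "postal_code": "M5V 1A1"},
    {"oen": "000000002", "name": "Student B", "grad_year": 2001, "postal_code": "M3N 2B2"},
    {"oen": "000000003", "name": "Student C", "grad_year": 1999, "postal_code": "M4W 3C3"},
]

def fundraising_mailing_list(records, grad_year):
    """A derivative query: contact details for one graduating cohort."""
    return [(r["name"], r["postal_code"]) for r in records if r["grad_year"] == grad_year]

print(fundraising_mailing_list(archive, 1999))
# [('Student A', 'M5V 1A1'), ('Student C', 'M4W 3C3')]
```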

SRB’s web-based educational data systems continue to proliferate, the SIS suite of products being flanked by bands for other product categories coordinated to it: “Library,” “Student Success,” “Data Management & Decision Support,” and “Partner Solutions.” For example, the web-based automated library management system – Library 4 Universal™ (L4U) – shown as connected to Trillium via web service, can track students’ reading habits. Under Partner Solutions, Transportation (such as contractual arrangements for student bussing) and Stakeholder Communication are listed, which suggests that information in Trillium is shared with external parties. Thus, OnSIS culls data from, and shares data with, various interlocking administrative systems within the education system as well as with outside agencies and private subcontractors. At a location far removed from the everyday world of teachers and students in classrooms, SRB’s database management product called OnSIS draws together data from multiple sources into a ministry tool to surveil and manage system performance as well as being a source of “facts” for the construction of “evidence-based” policy and “evidenceinformed” practice (CCL 2007). This raises questions about the extent to which computer companies drive policy (Anderson et al. 2009). The National and International Context In Canada, education falls under provincial jurisdiction and there is no federal department of education. However, the Council of Ministers of Education Canada (CMEC) coordinates educational policy across the country, such that similar digitalized educational database management systems exist in other provinces, for example, BCeSIS in British Columbia, iNSchool in Nova Scotia, and PASI-SIS in Alberta. They also exist across the United States in compliance with the federal legislation of 2001 called No Child Left Behind that instituted datadriven accountability, especially in Title I schools for “disadvantaged” students, though there are differences from state to state. In England, the green paper entitled Every Child Matters (2003) paved the way for enhanced national electronic records through advocating “joined up services,” or data sharing across government departments, to protect “vulnerable children.” According to the UK Department for Education website, the National Pupil Database (NPD) “holds a wide range of information about pupils who attend schools and colleges in England” and “forms a significant part of the evidence base for the education

sector.”19 However, an independent report by Anderson et al. rates the NPD as “amber,” which means that it “has significant problems and may be unlawful,” such that an “independent assessment” is called for; they state: “the National Pupil Database … holds data on every pupil in a state-maintained school and on younger children in nurseries or childcare if their places are funded by the local authority, including: name; age; address; ethnicity; special educational needs information; ‘gifted and talented’ indicators; free school meal entitlement; whether the child is in care; mode of travel to school; behaviour and attendance data. It is planned to share this data with social workers, police and others” (2009: 6). Through schools, it is possible to collect an array of personal information about students and their families as well as about teachers. As in Ontario, there is a marked turn to accountability regimes tied to standardized testing across all these provincial and national jurisdictions, raising the question of how discursive practices are organized transnationally. At the meta-level, an intertextual analysis makes visible a complex web of translocal ruling relations that govern public sector work across national boundaries, the Organisation for Economic Co-operation and Development (OECD) producing “boss texts” at the top of the “intertextual hierarchy” (Smith 2006: 85–6).20 Although it lacks legislative authority, the OECD operates as an international data clearing house and policy think tank that “harmonizes” public sector policies across member states through its inquisitive and meditative activities; policy frameworks are devised and “experts” are legitimated at the OECD (Jacobsson 2006; Djelic & Sahlin-Andersson 2006; Orsini & Smith 2007; Mahon & McBride 2008). The preamble across all OECD documents reiterates Article 1 of its Convention, which makes explicit its primary aim: to promote policies designed to foster “economic growth,” “economic development,” and “world trade.” In line with Article 1, OECD policies across its prolific publications exhibit an overriding economic imperative that largely eclipses social processes.21 With regard to education, the emergence of the OECD as the pre-eminent authority in transnational ruling relations coincides with the implementation of its Program for International Student Assessment (PISA) in 2000 (Morgan 2006), displacing and superseding other transnational education organizations. Large-scale standardized testing, modelled on PISA and predicated on the OECD’s education “indicators,” has risen to prominence as the crux of educational assessment across member states and as the primary source of “evidence” for policy-makers (as is

shown to be the case at the School Information Finder website, above). As I have demonstrated elsewhere (Kerr 2009b), the work of the OECD Directorate for Education orients education policy to workplace literacy, numeracy, and lifelong learning to produce human resources for the global economy (see also Darville, Chapter One, above). For example, the OECD’s annual publication series entitled Education at a Glance makes statistical comparisons that rate and rank countries or regions according to its predetermined performance indicators. With regard to public administration in general, the work of the OECD’s Directorate for Public Governance and Territorial Development advocates the new public management (NPM) model of governance to restructure public sector work, improve efficiencies and impose accountability. The seminal text on NPM entitled Governance in Transition: Public Management Reforms in OECD Countries (OECD 1995) calls for “reinventing” the public sector to conform to market-driven priorities. In 2001, the Directorate for Public Governance and Territorial Development launched its e-Government Project to formulate “how governments can best exploit information and communication technologies (ICTs) to embed good governance principles and achieve public policy goals” (OECD 2009a). At the outset, the OECD’s policy brief from its Public Management branch recognized the problems of ballooning budgets, deadline overruns, IT shortcomings, abandoned projects and the “political risks” taken by governments as a result, but nevertheless continues to promote e-governance as the “e-dream of enhanced effectiveness and efficiency” (OECD 2001). The flagship report on e-government entitled The E-Government Imperative (OECD 2003) specifically links e-governance to accountability as a central organizing principle enhancing management powers and opening the way for public-private partnerships in public administration. It states: “E-government can open up government and policy processes and enhance accountability. Accountability arrangements should ensure that it is clear who is responsible for shared projects and initiatives. Similarly, the use of private sector partnerships must not reduce accountability” (ibid.: 19). Running through this and a number of other OECD texts on the topic, the cynical intent of the e-Government Project emerges as less about technology per se than about using technology to leverage change in public sector administration; that is, using technology to manage knowledge and re-culture the public sector (e.g., OECD 2005). Almost a decade after instigating the e-Government Project, the text entitled Rethinking e-Government Services: User-centred Approaches

(OECD 2009b) acknowledges low adoption rates and low use of e-government services by users. By its own admission, the evidence from OECD country comparisons indicates that e-governance has not delivered the reforms to public services that the OECD predicted it would. Instead of questioning the premise of e-governance, however, the OECD (2009b) advocates a “paradigm shift” towards “citizen centricity” that is purported to bring benefits in terms of services as well as savings. However, citizen reticence towards using e-government services across the member states undoubtedly reflects that electronic systems were not designed for citizens at the outset, but rather as a management strategy to implement and oversee administrative and financial efficiencies. Does the ministry’s Notice of Indirect Collection of Personal Information satisfy the consent principle: ensuring that data subjects are informed and knowledgeable about OnSIS? If data subjects are not informed, the principle of challenging compliance is a moot point, even if complaint mechanisms exist (e.g., through the Information and Privacy Commissioner of Ontario). My interviews and follow-up textual analyses show scarce evidence of compliance with the principles of openness and individual access. On the principles of Fair Information Practices, OnSIS falls short on a number of counts. In line with the discursive practices promulgated by the OECD that promote e-governance to reform the public sector, the goal of the OnSIS/MISA project becomes clear: to standardize school board practices across the province, to compare statistics based on key performance indicators and standardized tests, to find efficiencies, and to re-regulate education according to performance-based assessment and accountability. Conclusion: OnSIS & Accountability The re-regulation of education is organized and coordinated at multiple levels within a wide field of relations. Starting from the disjuncture that classroom teachers and guidance counsellors experience with the reporting requirements that shape their work, and given the lack of transparency about the OnSIS/MISA project, the foregoing intertextual analysis makes visible the activities going on translocally, at school board and ministry levels of governance in Ontario. Students and teachers are “data subjects” contained in and tracked by OnSIS, using unique identifiers (OEN and MEN, respectively). Not only does the indirect collection of personal information about students and teachers raise

concerns about violations of privacy, but it also portends new forms of data-driven accountability, using OnSIS as the primary source of data evidence. In order to unpack the changed relations of contemporary forms of governance in education, it is necessary to take into account developments in IT and centralized web-based database management systems that are supplanting distributed local circulation systems. This move to e-governance technologies in public sector work displaces everyday social processes in favour of remote instrumental goals (e.g., higher test scores, greater public confidence in the system). OnSIS streamlines data from school boards across the province to render it comparable and actionable. The ministry’s shift to “evidencebased” policy and practice suggests that the intention of the OnSIS/ MISA project is to govern teachers’ day-to-day work, standardizing teaching practice according to centralized administrative mandates and priorities, de-professionalizing teachers and displacing teacher discretion. At a venue in the United States, Michael Fullan (special advisor to the premier and minister of education in Ontario) reveals the strategy of using “capacity building” as a proxy for accountability when he says: “Another big feature of our work is to play down accountability in favour of capacity building, and then re-enter accountability later. If you lead with accountability, which most states do, then people are immediately on the defensive and it doesn’t work so well” (2009).22 Coordinated by the ministry under the auspices of “capacity building,” the inquisitive and meditative activities of the OnSIS/MISA project are setting up the infrastructure for performance-based accountability in education, enabling new forms of workplace surveillance that are tied to results. The ministry’s MISA project explicitly aims to “build capacity” at the provincial, board, and school levels of governance, through orchestrating a “sound leadership” structure that promotes a “healthy data culture as a necessary prerequisite for effective data use” (Ontario Ministry of Education 2007a: 1). From internal memos and school board meetings, it is clear that the MISA/OnSIS project is not without contention. However, contention is based largely on the associated costs and workload issues, rather than questioning how value-laden terms (such as “sound” or “healthy”) are carried in the official texts of the institution in ways that promote technologically driven education policy. The ministry accomplishes institutional capture through promotion practices for leadership positions in schools, boards, and new or reconstituted ministry departments; in the context of cash-strapped schools and school boards, the ministry offers rewards for compliance

with MISA directives in the form of “supplementary” funding, albeit targeted, temporary, and conditional upon performing the requisite digital paperwork (Ontario Ministry of Education 2007a). Reorganizing the social relations of schooling, the Student Achievement and School Board Governance Act of 2009 holds school boards accountable for student achievement, under threat of being placed under ministry supervision. This legislated accountability, however, ultimately downloads responsibility for student “achievement” onto front-line teachers to do the work of producing the sought-after “results.” Whereas few would argue against the usefulness of computers as tools, the OnSIS/MISA project exemplifies changed relations of ruling that use centrally controlled web-based technologies to leverage change in public sector administration. As shown above, OnSIS adherence to the principles of Fair Information Practices falls short on a number of counts, such as the vagueness of the purpose provision, the lack of openness, and the absence of informed consent. Perhaps the gaze of accountability should be turned onto the ministry by challenging its compliance with Fair Information Practices applying to electronic data. Through text-reader conversations, OnSIS shapes what gets taken up and talked about as the “facts” in education. Controlling knowledge/evidence about education in this way arguably operates to bring “the ambiguities of the work of human services under financial control” (Smith 2006: 86). Since OnSIS contains educator, student, and class data, it is possible to tie a teacher’s MEN to students’ OENs so as to read off an individual teacher’s performance in terms of the ministry’s notion of student achievement. Whether defined in terms of course marks or EQAO scores, this could portend managing professionals through reward and punishment – that is, merit pay or promotion for producing “results” and suspension or reassignment for not doing so.23 An implication for teaching practice is teaching to the test. At the same time, at a distance, aggregate disembodied data are taken up in electronic forms of governance to monitor and manage system performance in a nameless, faceless world. When interpersonal relations are mediated by computerized e-governance technologies, actual people – teachers and students – are displaced and rendered invisible. Devoid of context, certain quantifiable aspects of teachers’ work are foregrounded over non-quantifiable aspects; quantitative data eclipse qualitative data. Reconstituted in prescriptive data sets as objectified, reified statistics, the complexities, uncertainties, and particularities of front-line work disappear from view.
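The closing point, that educator, class, and student data can be tied together to read off an individual teacher's "results," can be illustrated with one last hypothetical sketch. The identifiers, scores, and the metric below are invented; no ministry report of this form is documented in the chapter. The sketch shows only how short the path is from linked MENs and OENs to a performance ranking.

```python
# Invented linked data: which teacher (MEN) is tied to which students (OENs),
# and each student's level on a standardized assessment (1-4 scale).
class_lists = {"MEN-001": ["OEN-1", "OEN-2"], "MEN-002": ["OEN-3", "OEN-4"]}
levels = {"OEN-1": 3, "OEN-2": 4, "OEN-3": 2, "OEN-4": 3}

def teacher_results(class_lists, levels):
    """A crude per-teacher 'result': the mean level of that teacher's students."""
    return {men: sum(levels[oen] for oen in oens) / len(oens)
            for men, oens in class_lists.items()}

ranking = sorted(teacher_results(class_lists, levels).items(), key=lambda kv: kv[1])
print(ranking)  # [('MEN-002', 2.5), ('MEN-001', 3.5)] -- lowest 'result' first
```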

LIST OF ACRONYMS
CanLII – Canadian Legal Information Institute
CCL – Canadian Council on Learning
ESDW – Elementary Secondary Data Warehouse
ESL – English as a second language
EQAO – Education Quality and Accountability Office
ICT – Information and communication technology
IEP – Individual Education Plan
FIPs – Fair Information Practices
FIPPA – Freedom of Information and Protection of Privacy Act
IT – Information technology
L4U – Library 4 Universal
MEN – Ministry Educator Number
MFIPPA – Municipal Freedom of Information and Protection of Privacy Act
MISA – Managing Information for Student Achievement
NPD – National Pupil Database
NPM – New Public Management
OCAS – Ontario College Application Services
OCT – Ontario College of Teachers
OECD – Organisation for Economic Co-operation and Development
OEN – Ontario Education Number
OFIP – Ontario Focused Intervention Partnership
Ofsted – Office for Standards in Education, Children’s Services and Skills
OnSIS – Ontario School Information System
OSN – Ontario Statistical Neighbours
OSR – Ontario Student Record
OUAC – Ontario Universities’ Application Centre
PIA – Privacy Impact Assessment
PIPEDA – Personal Information Protection and Electronic Documents Act
PISA – Program for International Student Assessment
PSIS – Postsecondary Student Information System
PNC – Professional Network Centre
SAO – Student Achievement Officer
SIS – School Information System Department
StatsCan – Statistics Canada
TDSB – Toronto District School Board

NOTES
1 This quote by Steve Thompson comes from an interview, retrieved on 17 October 2009 from an archived cached page, the link to which has since been removed; it stated: “Interview reprinted with permission as published in the EdTech Show Daily on June 27, 2007.” A personal communiqué with SRB confirms its existence; thus, SRB acknowledges the quote printed here, which reveals the company’s particular ideological stance on education.
2 The French translation of OnSIS is Système d’information Scolaire de l’Ontario (SISOn).
3 These interviews took place between 2007 and 2009. In casual conversations with teachers, I continue to ask whether they know about OnSIS. To date and without exception, no one is aware of it.
4 The existence of the “OnSIS/IEP Data Collection Form” indicates that exceptionality data about students is being entered into OnSIS by the Thames Valley District School Board, phased in by that board around January 2009. The small print at the bottom of this one-page form states “Notice of Collection” and cites the Municipal Freedom of Information and Protection of Privacy Act, 1989, and the Education Act, 1990, as the legal authority to do so.
5 The TDSB is the largest school board in Ontario. Other boards have similar forms and processes in compliance with the legislation.
6 Introduced by the TDSB in 2002, Trillium is a school-based electronic student information management system. Not all school boards use Trillium, although, according to the SRB website, the majority do. Other student information management systems include Maplewood, eSIS, and Trevlac.
7 Communiqué from a school board representative received 20 November 2009.
8 XML stands for Extensible Markup Language, which is a set of textual data format rules for encoding documents electronically.
9 Error levels and the GOTO command can be used to set up batch files to do different things depending on whether the commands in them succeed or fail. Thus, batch files are only as reliable as the programming script.
10 According to a conference presentation on OnSIS, there are more than 5,000 business rules (OnSIS–SISOn 2007). Note that, unlike open academic conferences, the mediatory activities of OnSIS/MISA seem to be organized out of view at corporate-style events, in this case, by Verney Conference Management (see www.verney.ca).
11 The amounts spent are derived from two sources: reports on the OnSIS/MISA project to the Ottawa Carleton District School Board (2008) and

116

12

13

14

15

16 17

L. Kerr

the Niagara Catholic District School Board (2008). Similarly, escalations of spending arose over computer contracts at the City of Toronto and e-health Ontario; both of these cases involved allegations of questionable practices, mismanagement, and lack of accountability among highranking officials. Established by the ministry in 1995, the Education Quality and Accountability Office (EQAO) oversees and reports on provincial standardized tests. EQAO tests are currently administered annually to every student across Ontario in grades 3, 6, 9, and 10. The ministry explains the designation in terms of EQAO test results as follows: OFIP 1 – schools where less than 34% of students are achieving at Level 3 (the provincial standard) or Level 4 in reading, in two of the past three years; OFIP 2 – schools where 34–50% of students are achieving at Level 3 or 4 in reading and results have been static or declining based on trends over the past three years; OFIP 3 –schools where 51–74% of students are achieving at Level 3 or 4 in reading but results have been static or declining based on three-year trends (Ontario Ministry of Education 2008b). Similar surveillance of school performance applies to Title 1 schools under No Child Left Behind legislation in the United States, and to school inspections by the Office for Standards in Education, Children’s Services and Skills (Ofsted) in Britain. Teacher suicides in the United Kingdom have been linked to Ofsted inspections in coroners’ reports (see Mullins 1999; Russell 2000; Levy 2007; BBC 2007). There have also been concerns about suicides of pupils subjected to the pressure of testing. The notion of the “text-reader conversation” explicated in Smith (2005: 101–22) brings people to the foreground in considering texts and institutional discourses. Texts are considered inert until the reader activates them, becoming the text’s agent and responding to the text in her/his work. Through being taken up by readers, texts formulate institutional discourses that organize the subjectivities and consciousness of institutional participants, as well as coordinating their actions and making them accountable. In Canada, the Census is conducted every five years, and there is a time lag between data collection and its publication. The response to my Freedom of Information Access request filed in 2010 indicates that OnSIS has data fields for NTIP and TPA under the category of School Educators/Board Educators; however, it does not indicate whether the fields are active yet. The response confirms that OnSIS data about teachers and students is considerably more extensive than the ministry website declares.

E-governance and Data-Driven Accountability

117

18 The specific link to this statement is no longer active; retrieved on 30 October 2009 from www.srbeducationsolutions.com/Default.aspx?PageContentMode= 1&tabid=448. However, this statement, which was posted at the SRB website, shows the commercial mindset built into the technology and provides an example of how OnSIS data could be mined for ancillary purposes. 19 See www.education.gov.uk/researchandstatistics/national-pupil-database. 20 Institutional ethnography makes visible the intertextual hierarchy of regulatory texts that organize social relations, from higher-order or “boss texts” to subordinate texts. Boss texts set the interpretive frame for producing and reading subordinate texts, where subordinate texts “must be capable of being interpreted/understood as a proper instance or expression of its regulatory categories and concepts” (Smith 2006: 85). Thus, texts operate in “intertextual circles,” where circularity is apparent in the organization of texts regulating other texts and are tied to funding and institutional accountability. 21 The OECD uses the Internet to disseminate its publications and proceedings to subscribers via its online iLibrary (or SourceOECD) and its Online Bookshop. Texts are released to the public only if they have been vetted, approved, and “declassified.” For the extent of its output, see www. oecd.org/about/publishing/. 22 The video clip from which this quote is taken was originally located at Houston A+ Challenge in Texas, from a workshop given there by Michael Fullan for aspiring leaders in education. . 23 Legislation in Florida suggests that this is not improbable. According to a special report posted on the Education Tech News Weekly Newsletter, “Republican state lawmakers are pushing legislation that would end professional service contracts – AKA ‘tenure’ – for public school teachers. And Florida could become the first state in the country that evaluates teachers’ performance based on students’ test scores ... Half of teachers’ pay would depend on students’ test scores improving from the previous year” (Simms 2010). The law passed in March 2011.
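Note 9 above describes, in general terms, how error levels and the GOTO command let a batch file branch on whether its commands succeed or fail. Purely as an illustration of that mechanism, the following is a minimal, hypothetical sketch; the command name, file name, and labels are our own illustrative assumptions and are not taken from any board’s actual OnSIS scripts.

    @echo off
    REM Hypothetical sketch of the error-level branching described in note 9.
    REM "onsis_upload" and "submission.xml" are illustrative names only.
    onsis_upload submission.xml
    IF ERRORLEVEL 1 GOTO upload_failed
    ECHO Upload succeeded; continuing with the next step of the batch run.
    GOTO done

    :upload_failed
    ECHO Upload failed; stopping so the error can be reviewed.
    EXIT /B 1

    :done

If a command sets an error level that the script never tests, the failure passes silently; this is the sense in which, as note 9 puts it, batch files are only as reliable as the programming script.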

REFERENCES

Anderson, R., I. Brown, T. Dowty, P. Inglesant, W. Heath & A. Sasse. 2009. Database state. York: Joseph Rowntree Reform Trust. Retrieved www.jrrt.org.uk/sites/jrrt.org.uk/files/documents/database-state.pdf.
British Broadcasting Corporation (BBC). 2007. Ofsted – Fear head killed himself. BBC News. 10 December. Retrieved news.bbc.co.uk/2/hi/uk_news/england/cambridgeshire/7135924.stm.
Canadian Council on Learning (CCL). 2007. Evaluation of the Ontario Ministry of Education’s Student Success/Learning to 18 Strategy: Stage 1 Report. Retrieved www.ccl-cca.ca/pdfs/OtherReports/StudentSuccessStage1ReportJuly-27-2007.pdf.
Department of Justice, Canada. 1983. Privacy Act. Retrieved http://www.priv.gc.ca/leg_c/r_o_a_e.asp.
Djelic, M.-L., & K. Sahlin-Andersson. 2006. Introduction. In M.-L. Djelic & K. Sahlin-Andersson (eds), Transnational Governance: Institutional Dynamics of Regulation, 1–30. New York: Cambridge University Press.
Education Partnership Table. 2009. Letter to Premier McGuinty and Minister Wynne, 1 June. Retrieved www.cpco.on.ca/Newsletters/2008_09/documents/June3/SIF.pdf.
Fullan, M. 2009. Interview. In focus with Michael Fullan (Houston A+ Challenge). Retrieved www.youtube.com/watch?v=D4DloKhHkSo.
Gellman, R. 2011. Fair Information Practices: A brief history. Retrieved http://bobgellman.com/rg-docs/rg-FIPShistory.pdf.
Governance Review Committee. 2009. School Board Governance: A Focus on Achievement: Report of the Governance Review Committee to the Minister of Education of Ontario. 20 April. Toronto: Ontario Ministry of Education. Retrieved www.edu.gov.on.ca/eng/policyfunding/grc/grcReview.pdf.
Jacobsson, B. 2006. Regulated regulators: Global trends of state transformation. In M.-L. Djelic & K. Sahlin-Andersson (eds), Transnational Governance: Institutional Dynamics of Regulation, 205–25. New York: Cambridge University Press.
Kerr, L. 2006. Between Caring and Counting: Teachers Take on Education Reform. Toronto: University of Toronto Press.
Kerr, L. 2009a. Ontario’s success story? Teachers’ perspectives on the “Student Success Strategy.” Paper presented at the Annual Meetings of the Canadian Sociology Association (CSA), Carleton University, Ottawa, 27 May.
Kerr, L. 2009b. The technologies of risk and safety: Education as training for work. Paper presented at the Annual Meetings of the Society for the Study of Social Problems, San Francisco, CA, 7 August.
Levy, A. 2007. Popular teacher committed suicide “after being bullied over Ofsted report.” Daily Mail. 22 November. Retrieved www.dailymail.co.uk/news/article-495645/Popular-teacher-committed-suicide-bullied-Ofsted-report.html.
Mahon, R., & S. McBride. 2008. Introduction. In R. Mahon and S. McBride (eds), The OECD and Transnational Governance, 4–22. Vancouver: UBC Press.
Mathiesen, T. 1997. The viewer society: Michel Foucault’s “panopticon” revisited. Theoretical Criminology 1 (2): 215–34. http://dx.doi.org/10.1177/1362480697001002003.
Morgan, C. 2006. Educational harmonization: The impact of the OECD’s PISA on educational policy-making in Canada and Mexico. Paper presented at the annual meeting of the International Studies Association (ISA), San Diego, 22 March.
Mullins, A. 1999. Ofsted inspection stress led to teacher’s suicide. The Independent. 30 September. Retrieved www.independent.co.uk/news/ofsted-inspection-stress-led-to-teachers-suicide-1123209.html.
Niagara Catholic District School Board. 2008. Board meeting. 14 October. Retrieved www.niagararc.com/niagaraRC//board/meetings/committee_whole/2008/Oct14/C4.pdf.
Ontario Legislative Assembly. 2009. Student Achievement and School Board Governance Act, 2009. Retrieved www.ontla.on.ca/bills/bills-files/39_Parliament/Session1/b177ra.pdf.
Ontario Ministry of Education. 2007a. Managing Information for Student Achievement (MISA): Memo #24. Retrieved www.edu.gov.on.ca/eng/policyfunding/memos/MISAMemo24.pdf.
Ontario Ministry of Education. 2007b. Managing Information for Student Achievement (MISA): Memo #32. Retrieved www.edu.gov.on.ca/eng/policyfunding/memos/oct2007/MISAMemo32.pdf.
Ontario Ministry of Education. 2007c. Ontario Statistical Neighbours: Informing Our Strategy to Improve Student Achievement. Retrieved www.edu.gov.on.ca/eng/literacynumeracy/osneng.pdf.
Ontario Ministry of Education. 2008a. Reach Every Student: Energizing Ontario Education. Retrieved www.edu.gov.on.ca/eng/document/energize/energize.pdf.
Ontario Ministry of Education. 2008b. Ontario Focused Intervention Partnership (OFIP). 1 October. Retrieved www.edu.gov.on.ca/eng/literacynumeracy/ofip.html.
Ontario Ministry of Education. 2009a. School Information Finder. Retrieved www.edu.gov.on.ca/eng/sift.
Ontario Ministry of Education. 2009b. Notice of Indirect Collection of Personal Information. Retrieved www.edu.gov.on.ca/eng/about/faqs.html.
Ontario Ministry of Government Services. 1990 (rev. 2007). Municipal Freedom of Information and Protection of Privacy Act. Retrieved https://www.e-laws.gov.on.ca/html/statutes/english/elaws_statutes_90m56_e.htm.
Ontario School Information System – Système d’information Scolaire de l’Ontario (OnSIS–SISOn) & I&IT. 2007. Reflections of the Ontario School Information System (OnSIS) journey: Experience architecture, navigating change. 1 March. PowerPoint presentation to Experience Architecture, 2007. Retrieved www.verney.ca/ea2007/presentations/359.pdf.
Organisation for Economic Co-operation and Development (OECD). 1995. Governance in Transition: Public Management Reforms in OECD Countries. Paris: OECD.
Organisation for Economic Co-operation and Development (OECD). 2001. The Hidden Threat to e-Government: Avoiding Large Government IT Failures. Paris: OECD. Retrieved http://www.oecd.org/redirect/dataoecd/11/56/1902965.pdf.
Organisation for Economic Co-operation and Development (OECD). 2003. The e-Government Imperative. Paris: OECD.
Organisation for Economic Co-operation and Development (OECD). 2005. E-Government for Better Government. 24 November. Paris: OECD. Retrieved www.oecd.org/document/45/0,3746,en_2649_34129_35815981_1_1_1_1,00.html.
Organisation for Economic Co-operation and Development (OECD). 2009a. Home page: Public sector innovation and e-government. Retrieved www.oecd.org/department/0,3355,en_2649_34129_1_1_1_1_1,00.html.
Organisation for Economic Co-operation and Development (OECD). 2009b. Rethinking e-Government Services: User-centred Approaches. Paris: OECD. Retrieved www.oecd.org/document/7/0,3343,en_2649_34129_43864647_1_1_1_1,00.html.
Orsini, M., & M. Smith. 2007. Introduction. In M. Orsini and M. Smith (eds), Critical Policy Studies, 1–18. Vancouver: UBC Press.
Ottawa Carleton District School Board. 2008. Board meeting. 26 March. Retrieved www.ocdsb.edu.on.ca/Documents/Board_Meetings/Meetings/2008/March_2008/Chairs_Mar26_2008/6_DRIVE_to_Success.pdf.
Parliament of Canada. 2000. Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5. Retrieved http://laws-lois.justice.gc.ca/eng/acts/P-8.6/index.html.
Russell, B. 2000. Woodhead faces union anger after suicide. The Independent. 17 April. Retrieved www.independent.co.uk/news/education/education-news/woodhead-faces-union-anger-after-suicide-698148.html.
Simms, J. 2010. Florida giving tenure the heave-ho? Education Tech News. Retrieved http://educationtechnews.com/florida-giving-tenure-the-heave-ho.
Smith, D.E. 1990. The Conceptual Practices of Power: A Feminist Sociology of Knowledge. Toronto: University of Toronto Press.
Smith, D.E. 1999. Writing the Social: Critique, Theory, and Investigations. Toronto: University of Toronto Press.
Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: AltaMira.
Smith, D.E., ed. 2006. Institutional Ethnography as Practice. Lanham, MD: Rowman & Littlefield.
SRB Education Solutions Inc. 2009a. Home page. Retrieved www.srbeducationsolutions.com/Default.aspx?PageContentMode=1&tabid=448.
SRB Education Solutions Inc. 2009b. SRB integrated SIS product suite. Retrieved www.srbeducationsolutions.com/LinkClick.aspx?fileticket=hJ8zK6duRno%3D&tabid=453&mid=367.

4 Digital Era Governance: Connecting Nursing Education and the Industrial Complex of Health Care

Janet Rankin and Betty Tate

This chapter examines the appearance of e-governance and integrated management strategies that are beginning to organize nursing education in Canada. Efforts to integrate health and education sectors are being driven by mechanized approaches to organizing students’ need for practical experience in health care settings. The integration strategies we describe here extend the reach of the production model into the work of nurse educators. We argue that they entrench relevancies that are at odds with some of the goals of nursing and nursing education. Professional management, with its increasingly powerful command over the knowledge of the enterprise being managed, has been shown to bring new forms of ruling into health care. Rankin’s previous research (2001, 2003, 2009; Rankin et al. 2010) has been motivated by the stresses arising for nurses who become engaged as key actors in the promotion, implementation, and use of professional management technologies. Rankin and Campbell (2006) argued that what recommends the application of new managerial technologies to health care organizations – the capacity to calculate and apply more precisely the available resources to the achievement of desired outcomes in treatments – has less than positive impacts on nurses and health care. They explained how objective, technologically based management systems being adopted within health care organizations can be understood as forms of coordination and control that operate on the basis of information constructed systematically, most often for the purpose of numerical calculation. However, Rankin (2001, 2003, 2009; Rankin et al. 2010) and others have argued that the uses of these strategies create new and disturbingly mystifying problems for both nurses and patients; an improvement in organization can become a source of trouble for practitioners.


The analysis we develop traces how the calculative influence of technological approaches to managing labour resources is expanding more directly into the work of nurse educators. It draws from an institutional ethnographic study conducted by a team of nurse educators in British Columbia, Canada. We describe our glimpse into the material form of a process that is underway to join nursing education more securely to the industrial health complex.

“Joined-up government” is a term coined by Christopher Pollitt to describe public sector reforms that have been going forward since the late 1990s. Pollitt noted a policy trend in Organisation for Economic Co-operation and Development (OECD) countries that represented efforts to “achieve horizontally and vertically co-ordinated thinking and action”; he adds: “Through this coordination it is hoped that a number of benefits can be achieved. First, situations in which different policies undermine each other can be eliminated. Second, better use can be made of scarce resources. Third, synergies may be created through the bringing together of different key stakeholders in a particular policy field or network. Fourth, it becomes possible to offer citizens seamless rather than fragmented access to a set of related services” (2003: 35).

We scrutinize a specific occasion of this sort of logical rationality that we discovered being carried out in the work processes of nurse educators, and we learned from them what was happening within a technological strategy developed to coordinate their thinking and action. The educators we interviewed described their work on a project directed towards eliminating specific nursing school policies in favour of a broad set of regional guidelines. The strategy was carried inside software designed to streamline student practice placements. We work to discover the knowledge that gets left behind when thinking and action become coordinated in this way.

A Brief History of Nursing Education in Canada

In Canada prior to the 1970s, most Registered Nurse (RN) education was accomplished within hospital-based programs. Although there have been university schools of nursing since the 1920s, until the late 1970s the majority of student nurses lived in nurses’ residences and completed three-year training programs administered by hospitals. This was a model that supplied a cheap and flexible source of nursing labour. However, as the discipline of nursing progressed within scientific advances related to biomedicine, pharmacology, technology, and the humanities, nursing knowledge was organized into its modern form and it became apparent that nurses needed in-depth theoretical knowledge


to support the experiential learning acquired during hands-on practice. In response, hospital-based programs redesigned nursing curricula to offer more classroom instruction and less time on the wards.1 Thus, during the late 1970s and into the 1980s hospital-based programs were redesigned, student nurses received more theoretical tutelage, and the student nurse workforce was not as readily available. The new model of education for nurses required more teachers in the classroom. On the wards, students were provided with more direct supervision from nurse educators and were supernumerary to the complement of paid nursing staff. For hospital administrators, the cost-benefit ratio of educating nurses was greatly reduced. Concurrently, the feminist advances of the 1960s and 1970s produced a more militant nursing workforce, no longer willing to accept low wages and educational mediocrity. The discipline lobbied for access into academia (Baumgart & Kirkwood 1990). Two-year nursing diploma programs proliferated in colleges. The four-year university degree programs, which for decades had been educating a nursing elite, became more popular. Gradually, during the 1970s and 1980s the hospital-based training programs closed and the responsibility for educating nurses moved from health-care institutions to post-secondary colleges and universities. With the transition of nursing education into the academy, there was a great deal of emphasis placed on the nature of a baccalaureate education for nurses and the theoretical and conceptual habits of thinking that are thought to distinguish a “professional” nurse. Within the significant reforms in nursing education over the past 20 years, health industry administrators have consistently critiqued the preparation of the new nurses who graduate from the academy. Employers require new graduates to “hit the ground running” (Romyn et al. 2009) and to be familiar with the institutional aspects of coordinating their nursing work within the time constraints and pressures of reformed work settings. Nurse educators have attempted to resist this pressure, arguing that “practice-readiness” (intellectual preparation as well as general skill development) as opposed to “job-readiness” (knowing the practicalities of the particular setting) is more effective in the long term for helping a nurse to mature into an expert practitioner (Benner et al. 2010). Thus, debates between health service employers and nurse educators express long-held tensions. Nursing curricula construct a framework for the education of a professional nurse who has “a good grasp of everyday ethical comportment, demonstrating appropriate use of knowledge, skills of care and relations and communication with


patients and colleagues” (ibid.: 28). This approach to nursing work is somewhat incompatible with the demands of the health service sector, with its imperative to reduce costs, where, increasingly, health care is viewed as a commodity and patients are referred to as customers or even product lines.

Of course, nurses are not the only group of health professionals expected to contribute to the commodification of health care. The Health Council of Canada recognizes that “health reform efforts are entwined with the availability of appropriately trained health human resources” (Baranek 2005: 1). To this end, a great deal of attention has been paid to the scope and practice of other regulated health professions. Strategies to link the regulation and education of health professionals are being carried out through a variety of initiatives that incorporate regulatory and educational reform. In Ontario, Alberta, and British Columbia, the integration of health professionals has been augmented by the enactment of provincial laws, such as the Health Professions Acts (HPAs) (Baranek 2005). The HPAs of each of these three provinces are intended to “enhance collaborative practice and optimal use of teams ... [the] recognized elements of health reform and solutions to workforce strategies” (ibid.: 3).

Health Professions Acts are provincial laws. In Canada, health services are coordinated nationally within a federal Canada Health Act (CHA), but are administered provincially. Each province’s ministry of health is responsible for designing and delivering health services to the citizens of that province within the principles of the federal legislation, which are enforced through the economic driver of federal transfer payments to each province. Even though, in Canada, both health and education are under provincial jurisdiction, they play out in remarkably similar ways across regions. Various forms of e-governance have been developed to organize health care nationally, and strategies such as the federal Canadian Institute of Health Information (CIHI) are in place, which integrate and coordinate practices across time and geography. As changes and initiatives are introduced in one region, with technological systems of information at their centre, they spread widely and in many directions simultaneously. They produce forms of conformity within local practices and standardization across sites of health work.

BC Academic Health Council and HSPnet™

The e-governance research being described here was conducted in British Columbia and focuses on an organization known as the BC Academic Health Council (BCAHC) and one of the computerized


strategies it has developed: the Health Services Placement Network (HSPnet™). HSPnet™ is software designed to streamline how health professional student practica are organized. The initiative began in British Columbia, but is now being administered within a national alliance that “supports a federation of provincial networks, each with local management and accountability, that access a shared infrastructure for operations, enhancement, evaluation, and policy development” (HSPnet™ 2010). The BC Academic Health Council, and specifically HSPnet™, is the vehicle that is “joining-up” nursing education and health services management and aligning the education of professional nurses with health care commodification.

Prior to the formation of the BC Academic Health Council and the implementation of HSPnet™, during the 25 years when nursing education was uncoupled from the increasingly commercializing industrial health complex, nursing schools had an arm’s-length relationship with the health care agencies where student nurses practised. Advisory councils were in place and colleges and universities developed formal memoranda of agreement with local health authorities. The agreements established legal obligations and insurance arrangements. The memoranda also outlined that, although they were not employees of the health agencies where they practised, students and faculty from the schools of nursing were bound by the health agency policy. The health agency policy was augmented by school policies. Schools of nursing had committee structures to generate policies for student nurses. For example, school policy specified those occasions when students required a teacher or qualified nurse “at elbow” to perform skills. The policy framework addressed nursing practices considered complex or high risk, such as managing central venous catheters or administering certain intravenous medications. School policies also included attendance policies and dress codes for student nurses. Along with directives for clinical practice, school policies also addressed student evaluation, progress, grades, academic misconduct, and so forth.

Not only was nursing school policy at arm’s length from the health care industry, but it was also independent of the discipline of medicine. Historically, medicine has been deemed intellectually superior to nursing and throughout its history nursing education and nursing practice have been subordinate to medicine (Ashley 1976). With the move into the academy, nursing established its professional independence more securely. The agreements (that schools of nursing established with hospitals through their advisory councils and practice agreements) were separate


from their counterparts in medicine. In British Columbia, until 2004, the University of British Columbia (UBC) had the only medical school in the province. Unlike nursing schools, whose larger numbers and geographic dispersion across community colleges and hospitals throughout the province generated multiple agreements, the medical school was closely affiliated with tertiary and specialty hospitals clustered in Vancouver within a more cohesive structural relationship. The medical school affiliation was organized within an informal group known as the Council of University Teaching Hospitals (COUTH). In 1988, COUTH established itself more formally and between 1988 and 2001 it functioned as “a collaborative society” serving “those Teaching Hospitals having a comprehensive affiliation agreement with the University of British Columbia” (2011). The society focused on medical students’ education and practice.

This history is important because it is the background for the genesis of the BC Academic Health Council (BCAHC), which was formed out of COUTH in 2002. This was accomplished through an amendment to the Society Act. The reorganization coincided with an expansion of medical education across the province. The council is now focused more broadly on health professionals’ education within a “federation of independent and publicly funded health care and post-secondary organizations in BC, as well as related Government ministries” (Eisler 2003: 1). Since 2003, the BCAHC has been supported by member (health care and post-secondary organizations) fees that are prorated relative to the amount of activity each member generates in each of three key areas: (1) research, (2) health education, and (3) practice. In 2003 the minimum member fee was CDN $25,000 and the proposed fee structure was expected to generate $670,000. During the first three years of the council’s operation, it also received an additional 1.2 million development dollars from the BC Ministry of Health to “administer and manage student placement provincially” (ibid.: 5).

The BCAHC now describes itself as “a major strategic forum for effective collaboration, partnership and leadership by senior leaders in healthcare, research, and education” (BCAHC 2011). It continues to be funded by membership fees paid by health authorities, colleges, and universities with health education programs. The goals of BCAHC are summarized below:

• Developing action plans and deliverables for key health professional education issues;
• Fostering and supporting collaboration;


• Serving as an important forum for the development and implementation of solutions and innovations;
• Undertaking strategic, longer-term planning on key aspects of health professional education;
• Facilitating and sharing of best practices and resources;
• Serving as a networking and communication link for academic health in BC;
• Informing policy-makers at all levels; and
• Coordinating selected projects during their development phase.

The council’s vision statement asserts that it “will assure an adequate supply of appropriately educated health professionals which is critical to meeting current and future population and patient health needs in BC” (ibid.).

The genesis of the BCAHC heralds the coupling of two sectors (education and health care), both of which have been heavily targeted for reform. As indicated in the list above, BCAHC goals are unabashedly focused on resources, strategic planning, best practices, communication, deliverables, solutions, and innovations. These are features of institutional knowledge practices and decision-making that have been particularly malleable to managerial technologies, and it is this sort of language that constructs the apparent improvements in health care. The systems work by translating individual, local subjectivities into calculative metrics – the “deliverables” (such as length of stay and wait times) upon which understandings of good organization are generated. We note that, for nursing, the formal integration of leaders from post-secondary education, health, and government establishes an administrative-bureaucratic regime (Smith 1995) through which to introduce new objectives, targets, and measures of performance that support human resource planning for the health sector. They entrench features of the industrial production model into practices and activities of nursing school administrators and educators, whose previous focus, at arm’s length from government, medicine, and industry, had been directed by professional regulation and accreditation as it arose within the regulatory mandate of nursing. This mandate was guided by ideologies of “public trust” and professionally constructed standards and competencies that carved out a scope of practice for registered nurses. However, within the BCAHC two seemingly disparate sets of goals and objectives are joined up. This occurs through language and activities that produce a compelling pragmatism and rationality and result in the


desired “coordinated thinking and action” that is the promise of the “joined-up government” described by Pollitt (2003). However, looking down at what is actually happening reveals a different picture of the consequences of this joining-up for nursing education.

Developing Policy for Student Practice Experiences: Data from the Ground of Nurse Educators’ Work

The members of the research team that conducted the Institutional Ethnography (IE) that led us to this critical analysis of the BCAHC’s HSPnet™ initiative teach in undergraduate nursing programs. The focus of the research was the work nursing teachers do when they evaluate students during their practicum (Rankin et al. 2010). The data revealed how instructional work in clinical practice settings pulls teachers in different, somewhat contradictory, directions. Excerpts of field notes and interviews with a nursing instructor are used to explore the problem of dealing with the work of teaching when it is organized by managerial technologies that supersede professional judgment. Our analysis begins in this nurse educator’s interaction with a first-year nursing student as the latter is about to begin a nursing practicum. On the basis of this ethnographic data, we trace the ruling relations activated in the teaching and learning experiences. The analysis moves some distance from the practicum and local teaching work. We discover and begin to map what may be missing from the interests the new technologies carry with them as they begin to organize nurse educators. We show how the surveillance of students’ learning proceeds differently within a technological initiative being used to organize collaborations between education and health care employers. We show how problems related to students that arise for health care agencies are not necessarily synonymous with the problems that instructors know about as they prepare new nurses.

Our purpose for including this data excerpt in the chapter is twofold. First, we want readers to see what we recognize as troubling (which might otherwise go unnoticed) when we focus critical attention on what actually happens between this teacher and student. Also, we want to guide readers through our analysis as we show how teachers’ competent practices are organized as ruling relations; how a nursing teacher’s knowing is organized through a taken-for-granted construction of how a student can be judged competent and how that construction organized what could happen. It becomes apparent as our analysis develops


that what happens between the teacher (Anne) and the student (Grace) is ruled by social happenings outside their local experience. We will see how Anne’s teaching knowledge and skills start to become “ruled out” through the domination of strategically applied technologies.

Anne and Grace

Anne, our informant, is a skilled and knowledgeable nurse educator. In her account of her work, she describes being given a new teaching assignment in the second semester of the first year of the nursing program. Part of this assignment required her to supervise students in a facility for dependent older adults who are called “residents.” Anne describes the advice she was being given as she prepared for this teaching assignment: “Listening to my colleagues who had taught this course and taught practice in second year – I heard a lot about students being prepared for practice. Being prepared seemed to be about doing research on diseases and conditions and writing up the research leading to a plan with foci and actions. So [for students] being prepared was ‘I know everything there is to know about this person’s condition and treatments and I know what I am going to do.’”

In this interview excerpt, Anne discusses the work of consulting with her teaching colleagues to ensure that the second-semester (first-year) students she is supervising learn the skills necessary for developing their practice, not only in order to provide nursing care for the dependent adults they would be encountering, but also to develop skills that would serve them in the future when they encounter the fast pace of acute care in the second year of the program. In this case, student nurse Grace is expected to review the assigned resident’s case notes, study the resident’s conditions and treatments, and prepare a written plan for nursing care that includes “foci” and “actions.”

As Anne describes how her new teaching assignment unfolded, she recounts concerns she had about Grace’s competence as it related to her safety-to-practise. As Anne discusses Grace, she refers back to her conversations with her colleagues, saying: “I knew I had to assess whether Grace was safe to practice and again I had heard this over and over again in discussions with my colleagues. Somehow Grace’s care-plan would help me assess whether she knew enough and had enough of a plan that the resident would be safe under her care.” Here Anne’s attention to assessing Grace’s safety-to-practise is supported by consultation with colleagues, all of whom are organized within the ruling


relations of a nursing regulatory body that enforces professional and practice standards. Within these interconnected discursive practices, it is Anne’s responsibility to instil competent safety practices into Grace’s performance. It begins prior to the very first contact a student has with a patient or resident – in this case “writing a good care plan” is the indicator that will tell Anne that Grace is safe.

Anne explains more about this particular instance of her teaching work when she describes the trouble Grace was experiencing in writing the care plan that she was developing on the evening prior to the scheduled practicum. Anne describes an email interaction she had with Grace: “There were clear differences between where this struggling student was and the other students. Her initial care plan was very minimal and she wasn’t making the connections to make a plan, so I couldn’t see how she was going to care for the patient. She didn’t seem to understand the language of the form. I also knew the student needed to be in practice to learn but that the care plan was not good enough for her to be seen as safe.”

We draw attention to the tensions that arose for Anne and Grace as they accomplish work that is apparently about patient safety, organized by particular rules and standards. Anne states that Grace did not seem to understand the form. Perhaps it was the language of “foci” that was confusing. Nonetheless, establishing competence with the form was the requisite skill that Grace needed to develop before she could proceed with her hands-on learning. This was the textual requirement that established that Grace could be “seen as safe.”

Anne’s dilemma with Grace was exacerbated when she and Grace arrived in the facility to begin work with residents. Anne judged that Grace’s care plan was still inadequate and that Grace continued to pose some sort of risk. At first, Anne did not allow Grace to work with her assigned resident. She said:

Then the student was in a room on her own trying to improve her care plan and I was running back and forth between the other students and trying to help her. It was hard work – I was trying very hard to get her to do enough on the care plan so she could go out and work with the resident before the whole morning was done. I showed her where to look in the text and tried to simplify the form and help her make the connections to make a plan. I kept thinking – “would she ever get it sitting in a back room with a book?” The care plan was like a hurdle I had to get her over so she could practise.


Part of Anne’s teaching work is to document Grace’s “safety” progress in the formal evaluation reports she generates. The care plan, the indicator for competent student practice, is organizing Anne’s work and Grace’s learning. Writing the care plan (not giving the care) is the “learning” that has become central to Anne’s teaching focus. The care plan is not only an indicator of Grace’s safety-to-practise but also of Anne’s accountability to the professional standards as laid out by her professional regulatory body. In the following excerpt Anne describes what happened when she finally allowed Grace to work with the resident:

This all occurred before she was in practice at all. At the time I was helping her that much with her care plan I hadn’t seen her in practice so I didn’t have any of those experiences to go on ... I kept thinking that by keeping her out of practice, I was telling her she wasn’t good enough. So did that undermine her confidence? ... We did get the care plan done so that I thought it was okay ... afterwards I asked her resident how she found the student and the resident was very happy. I noticed that the student was very relational with the resident. After I saw her in practice, even though she was hesitant, I was much more confident that she was okay in practice and safe. I had seen and observed what she was doing to be safe. It’s like reviewing the care plan and being a gatekeeper was a hurdle both she and I had to get through so she could get into practice and learn.

In this excerpt, we can see how even before the introduction of increasingly standardized accountability practices (the initiatives embedded in HSPnet™ that we explore in this chapter) effective teaching is bound by textual practices and understandings that do not always make sense in the actual practices of teaching. The line of inquiry we follow from Grace and Anne’s story tracks this construction of student competence and patient safety as it gets replicated across sites of teachers’ talk and curriculum texts. This textual formulation of safety and competence – which our data indicate gets activated across all the schools we studied – does not work in the interests of students and teachers. It is also not indicative of, nor does it ensure, a reliable standard of the actual safe care given. This textual formulation is often a “stand-in” for the actualities: the things that really go on in the practice settings, which teachers and students undertake to ensure patients are provided with competent care while simultaneously supporting students to learn. Our analysis offers insight not only into the


problems that were faced by Anne and Grace, but also into the organization of practices that create “a disjuncture between the world as it is known within the relations of ruling and the lived and experienced actualities its textual realities represent” (D. Smith 1990: 96). As noted by Anne, Grace’s inability to write a care plan posed no real risk to the resident she ultimately provided care for; Anne knew this, just as she knew that she could establish a safe learning environment for Grace and her assigned resident. The disjuncture is visible within an analysis of how Anne’s assessment and the teaching options that were open to her were institutionally tied to Grace’s written plan of care.

The Texts Organizing the Work of Anne and Grace

Anne’s work with Grace is subject to a variety of organizational texts. The texts, such as the practicum course outline, mid-term and final evaluation forms, student conduct policies, and policies for progress and promotion, are developed within nursing faculty committees. Some of these organizing texts are developed “in house” and are particular to each individual school of nursing. They are linked to student policies administered by each post-secondary institution, and in Grace and Anne’s school they include an Academic Progress Policy that outlines the process to be followed should a student “be at risk for not meeting the learning outcomes.” There is also an Attendance and Performance in Courses and Programs Policy, which outlines how “certain courses and programs are intended to enable students to develop behaviors that meet accepted workplace practice ... Students in these courses or programs are expected to attend classes regularly, be punctual and to demonstrate a satisfactory level of performance and rate of progress” (The College2 2010).

Other texts coordinating Anne’s work with Grace, such as the course outline and evaluation tools, are linked to the work of faculty engaged in overseeing a nursing curriculum. These texts are reviewed not only by the Education Council at this post-secondary institution but also by the College of Registered Nurses of British Columbia (CRNBC) Education Program Review Committee, which has the mandate to approve baccalaureate schools of nursing across the province. The process of program review is linked to CRNBC professional and practice standards. In turn, these standards have been used to develop the Competencies in the Context of Entry-level Registered Nurse Practice in British Columbia (CRNBC pub. 375, 2009). Similarly there are 15 practice standards, as distinct from


professional standards, designed to “link with other standards, policies and bylaws of the College of Registered Nurses of British Columbia and all legislation relevant to nursing practice” (CRNBC pubs 337, 343, 359, 398, 408, 414, 415, 429, 432, 433, 436, 439, 442, 486, and 672, 2009–10: 1). These standards re-emphasize that “nurses have a professional, ethical and legal duty to provide their clients with safe care” (CRNBC pub. 442, 2010: 1). Thus, Anne’s attention to Grace’s being competent is highly regulated, as might be expected within a self-regulated profession where nurses’ work involves them in people’s potentially life-altering vulnerabilities; the regulatory texts reflect the serious responsibility nurses have to ensure their practice is not negligent and does not pose a hazard to patients.

Competence as a “ruling relation” is fully embedded into the texts organizing sequences of actions that align both students and teachers to discursive forms of nursing competencies. These are carried into the “local” texts that guide Anne’s and Grace’s activities. The course outline for the first-year practicum that Anne and Grace were working in states: “In this practice experience, learners engage with faculty, practitioners, and clients to facilitate learning of safe, professional nursing practice” (The College3 2005: 35). These are the standardized, now mostly taken-for-granted tools and practices that organize a nursing teacher’s response to students, such as Anne’s requirement that Grace work on her care plan (rather than with the patient) in order to be “good enough for her to be seen as safe.”

Joined-up Government: The Introduction of BCAHC and HSPnet™

In what follows, we show how these constructions of safety and competence, originally organized through the professional regulation that coordinates how nursing teachers practise, are now also being included in new forms of health agency governance and management. As we tracked Anne’s teaching and assessment work through the curriculum texts and nurses’ regulatory practices, we discovered a new organization of nurse educators in health agencies. It was during interviews with nursing school administrators in British Columbia that we began to hear about the BCAHC, HSPnet™, and the work that the latter is generating. In interviews, administrators described their work on newly formed committees, convened by the regional health authorities. This new committee work involved the development of standard policies in both


education and health sectors that would apply to all health professions students in each region’s practice settings. Our informants described how they were being asked to collaborate with other professions and practice agency managers on a new level of policy development targeting health professions students. This policy development was related to practice placement processes and pre-practice requirements (such as photo identification, immunization, cardiopulmonary resuscitation certification, and criminal record checks). It included practice orientation processes for students and faculty and encompassed policies about student performance. The new level of policy development outlined specific responsibilities for the employed health authority staff that involved them in new ways related to reporting student conduct and in securing consent from the patients who were assigned to a student’s care.

One informant from a relatively remote region of the province told us about how she had been tasked to work with 33 Practice Education Guidelines (PEGs), developed by a subcommittee of the BCAHC, which were to be implemented locally across all the health care sites in her community. The informant described how she had been asked to serve on a regional committee comprising managers, educators, and other health professional representatives. The regional committee on which she sat was tasked with translating the BCAHC PEGs into shared health care and educational agency policy. “Collaboration” was being sought. The BCAHC subcommittee had generated the guidelines but could not impose them as policy across the geographically varied regions. To ensure the guidelines were implemented, health authority administrators (all the health authorities are members of BCAHC) needed to engage local practitioners in a work process that translated the “guidelines” into actual policies that could be authoritatively activated. The guise of consultation and collaboration was constructed within the apparent autonomy given to each geographic region to establish a new policy framework.

Our informant talked about the niggling and haggling that the discussion about the guidelines generated among the colleagues on her local committee. She also described how the policies took shape within a variety of compromises and a sense of unease. Eventually, the policy work accomplished by the local committee was compiled into a large document that she and her nursing faculty colleagues were required to abide by in order for their students to have practice placements in the health care agencies. Our informant confided that she knew that many of the new policies would


create serious administrative issues for her school of nursing and that there would also be costs involved that would have to be passed on to students (such as the policies related to criminal record checks, CPR certifications, immunization requirements).4 She said: “We could only work with what they gave us. The framework was the PEGs and even though we hashed out the policies, we actually had very little say.”

Another informant working with a different regional health authority told a similar story. Despite the fact that the processes he described were organized locally, and apparently independently, the orienting mandate of BCAHC ultimately harnessed the consultation. Like our first informant, this nursing school representative was recruited into a local “collaborative” committee tasked with policy development. The policies were to be implemented in a standard way across several hospitals, colleges, and universities clustered in a more highly populated area of the province. Here, too, the framework to be used was the Practice Education Guidelines developed through BCAHC. His experience mirrored that of our first informant, when he described many serious administrative issues that nursing programs would be required to grapple with within the new policy structure.

For our second informant, within his local committee, a decision was made not only to establish the policies but to concurrently implement them. As a result, school of nursing administrators actually encountered many of the same issues that our first informant had predicted. The second informant detailed the new “pre-practice” policies, standards that students had to meet before they could enter a practice agency: “One of these pre-practice requirements is a criminal record check [CRC]. For years nursing students have required CRCs before entering a nursing program and we have developed admission requirements related to CRC and processes for reviewing CRCs when there is an offence. Our dean, in consultation with our regulatory body, determined if the offence posed a risk to safe practice or would be a barrier to being registered. In the new process, the health authority has to be involved in the decision to assess the offence.”

Our informant went on to describe the development and implementation of the new process. One of the concerns he and his faculty colleagues expressed was related to student confidentiality. “We were really concerned that we were being asked to share an applicant’s CRC with the health authority. The rest of the committee understood this, but instead of accepting our work in relation to screening CRCs, we developed involved procedures to get consent from the student and to ensure they remained anonymous


when the health authority was being consulted about the offence in question. In the end it became the health authority that determined whether the student would be accepted into practice and thus they had control over who we could accept into the nursing program.”

The committee our second informant belonged to had four members from the geographically clustered schools of nursing. Unlike our first informant, who was the only nurse educator on the committee, in this case the nurses constituted a critical mass of opposition. Their resistance became a stumbling block in the policy development work. However, the BCAHC policy framework is slowly being implemented. Our informant described how the health authority representatives on the committee consistently referenced the framework of the Practice Education Guidelines as being well aligned to “evidence” and “best practice.” The members of the committee who represented the health authority emphasized the advantages of treating all health professions students equally. The health authority administrators and many other members of the committee were convinced that implementing the Practice Education Guidelines would result in a fairer, safer system, which would address hierarchy, elitism, and inequality. Unlike the work of the first nurse educator informant we interviewed, whose committee work was time limited and who told us that the chair of her committee “bulldozed” the process, the second informant described a very slow and frustrating process.

HSPnet™: A Textual Technology at Work

The interview data from the nursing school administrators who were describing their committee work with the BCAHC PEGs drew our attention to the practice education guidelines. We wanted to learn more about them and discovered that they are embedded in the textual technology of HSPnet™ as a set of “standardized, interprofessional guidelines for health authorities, addressing policy issues relevant to clinical practice education” (HSPnet™ 2011; italics added). HSPnet™ is the central technological instrument through which health professional education and the health care industry are coupled. As noted previously, when the BCAHC was first being established, it received $1.2 million of targeted provincial government funding to provincially administer and manage health professions student placement. HSPnet™ is the result of this targeted funding and is described as a “fully integrated solution for the challenges of Practice Education


Management” (ibid.). Figure 4.1 identifies all the components of HSPnet™. The stated goals of the HSPnet™ technology are to

• Increase the availability and quality of practice education opportunities for students;
• Streamline processes and improve coordination and communication among agencies that place and receive students;
• Identify untapped opportunities and provide access to a greater range of placement settings, including rural and community;
• Support evaluation and improvement of learner outcomes; and
• Enhance the profile and priority of practice education. (ibid.)

Nested within this data management tool for practice placements are a number of strategic initiatives for aligning practice education more closely to the needs and requirements of the industrialized model of health care. The jigsaw puzzle schematic of Figure 4.1 shows how each interlocking piece is part of a broad set of standardizing practices that interconnect with the HSPnet™ technology and its central linchpin of “placement, coordination, and communications.” The tool was developed to organize practice

Figure 4.1. HSPnet™ Graphic. www.hspcanada.net


placements for students within conditions of complexity and scarcity. It establishes standardized capacities, protocols, and procedures to be used by educational institutions and health agencies to make practice experiences more effective and streamlined. HSPnet™ initiatives target teachers, students, nursing staff, and others. For nursing teachers, HSPnet™ includes in-house certification processes for faculty known as e-Orientation for practice education (eOPE) (bottom right). For students it organizes standard student prerequisites, known as SPREs (top right). Staff nurses are implicated through a preceptor management and recognition process (centre right). For all members of the allied health team, the HSPnet™ introduces strategies for interprofessional education (lower left). Among the many strategies that have been bundled with the HSPnet™ is a process for introducing broad practice education guidelines (upper left). We learned it was the PEGs that were causing consternation among the teaching colleagues we interviewed.

Our explication of this piece of the jigsaw puzzle shows that, when examined for the actual practices they coordinate, what may seem to have a compelling rationality becomes visible as part of what Dorothy Smith has described as “a revolution going on behind our backs” (2009). HSPnet™ is a fascinating example of a textual technology that is joining up local sites of work and converting them into something new. HSPnet™ is coordinating the work of nursing teachers into circuits of accountability. It is a new form of nursing governance that links nursing education to the industrial demands of 21st-century health care.

Coordination, Collaboration, Stakeholders, and Synergies: The Rhetoric of Joining-up

Despite the language of being collective and collaborative, the system redesign being accomplished through HSPnet™ is neither a neutral nor an equal partnership between the education and service sectors. It is through the collaborations described by our informants that the PEGs introduce a pre-engineered “problem and solution” that, until carefully scrutinized, appears to make sense. The BCAHC, HSPnet™, and the Practice Education Guideline implementation are being enthusiastically embraced to solve a newly identified problem – what Dr Sally Thorne, the director of the UBC Faculty of Nursing, describes as a “bifurcation of education and service”:

The global nursing shortage presents us with a timely opportunity to rethink this division. Increasingly, the two disparate sectors [i.e., education

Coordination, Collaboration, Stakeholders, and Synergies: The Rhetoric of Joining-up

Despite the language of collectivity and collaboration, the system redesign being accomplished through HSPnet™ is neither a neutral nor an equal partnership between the education and service sectors. It is through collaborations described by our informants that the PEGs introduce a pre-engineered “problem and solution” that, until carefully scrutinized, appears to make sense. The BCAHC, HSPnet™, and the Practice Education Guideline implementation are being enthusiastically embraced to solve a newly identified problem – what Dr Sally Thorne, the director of the UBC Faculty of Nursing, describes as a “bifurcation of education and service”:

The global nursing shortage presents us with a timely opportunity to rethink this division. Increasingly, the two disparate sectors [i.e., education and service] are collaborating on twin goals: enhancing nursing work life and creating high-quality learning opportunities ... In my part of the country, we’ve seen impressive movement toward system redesign. Educators and their service partners are enjoying the complexities that emerge when working across skill sets and organizational cultures and the discovery that our two sectors think about, plan for and enact nursing in different ways. And we are learning that by giving up a bit of individual control we actually gain real collective power. (2009: 52)

The joining-up of education and practice is introduced as a way to “collaborate on twin goals.” Within this collaboration, with its message about “collective power,” particular effects of “problem and solution” engineered into HSPnet™ software have been implemented across Canada. We notice that the innovations of the BCAHC are seen as “an impressive movement toward system redesign.” The broad set of initiatives being dispersed through HSPnet™ is being enthusiastically embraced within understandings of a “global nursing shortage.” We argue that the BCAHC, its genesis, its funding, and its HSPnet™ innovation frame problems and solutions within the formulation of health care as an industry. It is developed within a market logic that focuses attention on efficiencies, transactional thinking, and individuated responsibilities. It is an initiative designed to promote a flexible workforce. It is being discussed within strategies expected to address global shortages of health professionals and thus is hooked into forces at work in the global labour market.

Overall, in the administration of health care, these sorts of integrated technologies have become part of the internationally recognized systems for accrediting hospitals (CCHSA 2010) that are equated with “high performance” organizations. Their genesis is in the Japanese manufacturing sector.5 According to Jackson and Slade, they represent a “sea change in the philosophy of management for workplaces of all kinds” (2008: 29). In the case we are exploring, we identify traces of “high performance” and “quality” implemented via BCAHC and HSPnet™ that are broadly organizing the Practice Education Guidelines.6 The technological innovations organize a paradoxical shift in professional authority (to act) that has built-in circuits of accountability that consistently emphasize a particular brand of improved integration, efficiency, and effectiveness (Shields & Evans 1998; McBride, Yeager & Farley 2005). A centralized form of governance, they coordinate processes of production (in this case, health professionals) across geographically dispersed areas.

They are forms of coordination that are designed to supersede local regulatory frameworks. The strategies introduce broad standardization across previously sovereign jurisdictional boundaries. In this case, the practices of local educators are technologically aligned with a form of institutional corporatization (HSPnet™) that insidiously aligns nursing education with workforce flexibility – a central objective of the contemporary health industry. Despite being couched in a persuasive language of sustainability, collaboration, enhancing nursing work life, and creating high-quality learning opportunities, this is not what is happening. When we research how the practices are tracked into the local constituencies of nurse educators, something else is going on. Thus, when nursing leaders such as Dr Sally Thorne (2009), quoted earlier in this chapter, identify that the “bifurcation” between education and service is narrowing, it is our impression that nursing education is getting “joined-up” within a system coordinated by the needs of an expanding health care industry – one organized by a globalized, corporate, financial system oriented to the economy rather than focused on people and their needs for nursing care.

What This Means for Nursing Education

In the first section of this chapter, we cited Pollitt’s recommendations related to the promise of “joined-up government” and his assertion that “synergies may be created through the bringing together of different key stakeholders in a particular policy field or network” (2003: 35). In our research, we discovered that, while the language of synergies and collaboration may evoke a sense of local freedom and the capacity for people to work creatively together, this is not what happens; in fact, local people’s knowledge and activity are constrained, not liberated. When examined against nurses’ efforts, over the past 25 years, to become a progressive discipline concerned with health and social justice (Johnstone 2011), it is apparent that this vision for nurses’ contributions is being insidiously eroded within practices that bind nursing education (and the discipline of nursing) to the authority of the health care industry’s emphasis on economic rationality. The industrialized modes of thinking that have infiltrated the provision of public services have serious consequences for the discretion of front-line workers. These sorts of practices threaten to undermine the contributions of nurses that the discipline has been advancing since Nightingale’s 1880s vision for professional independence in nursing and hospital management.

They reverse the small gains nurses had achieved within the recent history of autonomous professional self-regulation.

Returning to Anne and Grace: they were already working within a tightly organized complex of professional and educational accountability practices. Nonetheless, despite the textual constraints and standards, Anne exerted her own judgment in order to respond to Grace’s learning needs. Anne told us that she knew Grace needed different things from her than many of the other students did. Despite the school and regulatory framework to which they were held, Anne was able to respond to Grace to provide the unique support that Grace needed. This autonomy is constrained within the new PEGs. Our second informant described the pressures exerted by the health authority representatives, who consistently called on “best practice” guidelines to emphasize the advantages of treating all health professions students equally. Equal treatment is not always what students need. In this case “equality” is standing in for standardization. The needs of the health care industry for a contingent, flexible, rationed workforce are being organized as a ruling relation inside nursing education. The need for a reorganized workforce is currently being coordinated to supersede the practical concerns of nurse educators and the ideals, theories, and philosophy of the nursing discipline.

We are not alone in our observations regarding a significant change in nursing education as it becomes organized by the new managerial technologies that inform the reform efforts. Jackson (2006) critically analyses how the “managed care environment” changes the way nurses are educated, and Glen suggests a trend that sees British nursing education moving “back to the future” when she predicts that “as things begin to come full circle in nursing, there will be a demise of pre-registration nursing and a return to the development of practice-based assistant practitioner programmes and practice-based postgraduate nursing programmes owned and managed by primary care trusts and NHS foundation trust” (2009: 502). According to Frank (2008), the U.S. National League for Nursing’s “Position Statement on Innovation in Nursing Education: A Call to Reform” (NLN 2003) is focused on addressing the “rapidly changing health care delivery system characterized by personnel shortages” (Frank 2008: 25).

In Canada, Boychuk Duchscher and Myrick note that within reformed hospital environments student nurses are being given “increased levels of responsibility without equivalent increases in practice autonomy or institutional influence” (2008: 202); their research concludes that these workplace conditions “serve to frustrate and demoralize SNs [student nurses], and further contribute to a dissatisfying and disillusioning professional role transition” (ibid.). Also in Canada, Rheaume et al. (2007) describe specific negative impacts for nursing education within the hospital reforms moving forward in New Brunswick. Although many of these authors understand the changes in nursing education to be a return to a pre-theorized, “task focused” form of nursing that is responding to the current demands of increasing acuity and shortages of professionally prepared staff, we advance this analysis by mapping its socially organized features, which we propose reveal even more insidious implications for nurses, their patients, and the discipline of nursing. While the nursing shortage, scarcity of clinical placements, and renewed emphasis on new graduates’ transition, recruitment, and retention are the issues that support the rationality of the techno-managerial solutions – such as HSPnet™ – that nurse educators are embracing, we caution that these technical innovations may further complicate (not resolve) many of the tensions that arise in nurse educators’ teaching practices. Our research opens up HSPnet™ as a standardizing tool that is being used to extend the interests of the health industry into work that was previously the jurisdiction of educators and professional regulation. Provincial practices coordinated through the BCAHC – distant from the local expert knowledge of nurse educators – are being developed to organize how teachers are to respond. They insert new forms of coordination and accountability that align nursing teachers with the management practices of the health care industry. Carried within technological programs intended to standardize and streamline students’ practice placements, they change the work of nurse educators in ways that are not immediately visible.

Conclusion

This chapter, just one finding from a broader institutional ethnographic study looking at how student nurses are evaluated in practice, extends Rankin and Campbell’s (2006) work on the managerial technologies proliferating in the health sector. Our current research begins to examine the appearance of HSPnet™ – an e-governance strategy that is organizing nursing education in Canada. We introduce a cautionary note into the rapid uptake of this technological tool, which is purported to improve collaboration between education and practice.

In the prior study, Rankin and Campbell noted that technological solutions become “irrevocably entrenched – a tide impossible to turn back” (ibid.: 176). The relatively recent incursion of these tools into the organization of nursing education provides an opportunity to scrutinize their taken-for-granted utility before their full entrenchment. HSPnet™, a seemingly benign technology developed to organize and streamline student nurses’ practice placements, is being used to link the work of educators to the tightening demands of industry. Students’ clinical placements and the work of nurse educators are being organized by mechanized approaches to establish a rationed and flexible health care workforce that, in turn (as previously argued in ibid.), concerts the new industrialized, task-work approach to patients’ needs for nursing care.

We have shown that the speed with which technological governance is being implemented and the dominant role it occupies for decision-makers are not neutral. It introduces particular interests while subordinating others. Our research reveals that the innovations these systems promise direct educators’ work with students towards the organizational needs of the changing health care enterprise. Our data provide evidence that this is at the expense of nurse educators’ capacities to respond to the individual needs of students. What is streamlined for some people in the organization produces a troubling new level of bureaucracy for others. We witnessed how the new level of policy development coordinated fractious relationships among local people whose previous mode of working was guided by a policy framework directed by local issues and resources.

At the outset of this chapter we suggested that some knowledge gets left behind when government is joined up. Pollitt enthusiastically suggests that “situations in which different policies undermine each other can be eliminated” (2003: 35); in the analysis we developed for this chapter we get a glimpse of exactly what is being eliminated. The local knowledge and the particulars that nurse educators use to navigate their work in each local jurisdiction cannot find a footing in the strategies and processes that are at work in “joined-up” government via the technological advances of e-governance.

NOTES

1 One of this chapter’s authors, Janet Rankin, entered the Vancouver General Hospital (VGH) School of Nursing in 1975 during the implementation of a new curriculum. Prior to the new curriculum, nursing students at VGH had extended periods of working various shifts in patient care areas. Rankin and her colleagues spent only two days each week in patient care areas, where an assigned teacher supervised them in groups of eight. As well, unlike her student predecessors, in her second and third year of schooling the year was broken into three distinct terms. Each term consisted of three months of classroom instruction supported by two days a week of supervised practice. During the fourth month of each of the three terms, the students were assigned to a ward where they integrated with the nursing staff and practised under the guidance of the head nurse of that area. During the month-long practicum they were always supernumerary to the paid staff. Furthermore, Rankin and her student colleagues received full room and board along with a monthly stipend of $120. They paid no tuition.
2 The full name of the source has been omitted in order to preserve the confidentiality of the respondents and location of this study.
3 See endnote 2.
4 One of the aspects of this policy work targeting students that our informant found troubling was that the policies were more rigid than those the health authority staff were subjected to (e.g., standards for mandatory immunization). She suggested that this was due to constraints that the labour unions placed on the health authority employers, a protection that students could not invoke.
5 That is, lean production; cf. Sears (2003), referred to in the Introduction to this volume.
6 The language of the mission and goals of the BCAHC – “action plans,” “solutions and innovations,” “deliverables,” and “an adequate supply of appropriately educated health professionals” (see quote from the BCAHC website earlier in this chapter) – reveals how industrial meanings (and activities) begin to permeate how to think about health care problems (and their solutions). Language circulates a particular “genre” (here it is the business genre) that links up with more overt terms such as “lean manufacturing” and “agile production,” which arose in the Japanese manufacturing sector.

REFERENCES

Ashley, J.A. 1976. Hospitals, Paternalism, and the Role of the Nurse. New York: Teachers College Press.
Baranek, P. 2005. A Review of Scopes of Practice of Health Professions in Canada: A Balancing Act. Toronto: Health Council of Canada.
Baumgart, A., & R. Kirkwood. 1990. Social reform versus education reform: University nursing education in Canada, 1919–1960. Journal of Advanced Nursing 15 (5): 510–6. http://dx.doi.org/10.1111/j.1365-2648.1990.tb01849.x.

Benner, P., M. Sutphen, V. Leonard & L. Day. 2010. Educating Nurses: A Call for Radical Transformation. San Francisco: Jossey-Bass.
Boychuk Duchscher, J., & F. Myrick. 2008. The prevailing winds of oppression: Understanding the new graduate’s experience in acute care. Nursing Forum 43 (4): 191–206. http://dx.doi.org/10.1111/j.1744-6198.2008.00113.x.
British Columbia Academic Health Council (BCAHC). 2011. HSPnet Overview and Benefits. Retrieved www.hspcanada.net/docs/hspnet_overview.pdf.
Canadian Council on Health Services Accreditation (CCHSA). 2010. Home page. Retrieved www.accreditation.ca/en/default.aspx.
Council of University Teaching Hospitals (COUTH). 2011. University of British Columbia Archives Faculty of Medicine fonds. Retrieved www.library.ubc.ca/archives/u_arch/medicine.html#couth.
Eisler, G. 2003. An Evolving Concept Paper. 3 April. Vancouver: BC Academic Health Council.
Frank, B. 2008. Enhancing nursing education through effective academic-service partnerships. Annual Review of Nursing Education 6: 25–43.
Glen, S. 2009. Nursing education: Is it time to go back to the future? British Journal of Nursing 18 (8): 498–502.
HSPnet™ (Health Services Placement Network). 2011. Strategic plan. National alliance. Retrieved www.hspcanada.net/about/nationalalliance.asp.
Jackson, N., & B. Slade. 2008. “Hell on my face”: The production of workplace il-literacy. In M.L. DeVault (ed.), People at Work: Life, Power, and Social Inclusion in the New Economy, 25–39. New York: New York University Press.
Jackson, S.E. 2006. The influence of managed care on U.S. baccalaureate nursing education programs. Journal of Nursing Education 45 (2): 67–74.
Johnstone, M. 2011. Nursing and justice as a basic human need. Nursing Philosophy 12 (1): 34–44. http://dx.doi.org/10.1111/j.1466-769X.2010.00459.x.
McBride, A.B., L. Yeager & S. Farley. 2005. Evolving as a university wide school of nursing. Journal of Professional Nursing 21 (1): 16–22. http://dx.doi.org/10.1016/j.profnurs.2004.11.006.
National League for Nursing (NLN). 2003. Position Statement on Innovation in Nursing Education: A Call to Reform. New York: National League for Nursing.
Pollitt, C. 2003. Joined-up government: A survey. Political Studies Review 6 (1): 34–9.
Rankin, J.M. 2001. Texts in action: How nurses are doing the fiscal work of health care reform. Studies in Cultures, Organizations and Societies 7 (2): 231–51. http://dx.doi.org/10.1080/10245280108523560.
Rankin, J.M. 2003. Patient satisfaction: Knowledge for ruling hospital reform – An institutional ethnography. Nursing Inquiry 10 (1): 57–65. http://dx.doi.org/10.1046/j.1440-1800.2003.00156.x.

Rankin, J.M. 2009. The nurse project: An analysis for nurses to take back our work. Nursing Inquiry 16 (4): 275–86. http://dx.doi.org/10.1111/j.1440-1800.2009.00458.x.
Rankin, J.M., & M. Campbell. 2006. Managing to Nurse: Inside Canada’s Health Care Reform. Toronto: University of Toronto Press.
Rankin, J.M., L. Malinsky, B. Tate & L. Elena. 2010. Contesting our taken-for-granted understanding of student evaluation: Insights from a team of institutional ethnographers. Journal of Nursing Education 49 (6): 333–9. http://dx.doi.org/10.3928/01484834-20100331-01.
Rheaume, A., M. Dykeman, P. Davidson & P. Ericson. 2007. The impact of health care restructuring and baccalaureate entry to practice on nurses in New Brunswick. Policy, Politics & Nursing Practice 8 (2): 130–9. http://dx.doi.org/10.1177/1527154407300797.
Romyn, D., N. Linton, C. Giblin, B. Hendrickson, L. Houger Limacher, D. Murray, P. Nordstrom, G. Thauberger, D. Vosburgh, L. Vye-Rogers et al. 2009. Successful transition of the new graduate nurse. International Journal of Nursing Education Scholarship 6 (1): 1–19. http://dx.doi.org/10.2202/1548-923X.1802.
Sears, A. 2003. Retooling the Mind Factory: Education in a Lean State. Aurora, ON: Garamond.
Shields, J., & B.M. Evans. 1998. Shrinking the State: Globalization and Public Administration Reform. Halifax: Fernwood.
Smith, D.E. 1990. The Conceptual Practices of Power: A Feminist Sociology of Knowledge. Boston: Northeastern University Press.
Smith, D.E. 2009. Briefing notes for “Governance and the Front Line” workshop.
Smith, G. 1995. Managing the AIDS epidemic in Ontario. In M. Campbell & A. Manicom (eds), Knowledge, Experience and Ruling Relations: Studies in the Social Organization of Knowledge, 18–34. Toronto: University of Toronto Press.
The College. 2005. School of Nursing Curriculum Guide: Consolidated Practice Experience I Course Blueprint. 4–35.
The College. 2010. The College Credit Calendar, 2010–2011: Academic upgrading; Business; Fine arts; Health care; Tourism; Trades; University transfer.
Thorne, S. 2009. Can we swing the pendulum back to centre? Canadian Nurse 105 (7): 52.

5 What Counts? Managing Professionals on the Front Line of Emergency Services

Michael K. Corman and Karen Melon

Health care systems throughout industrialized countries have experienced ongoing reform and restructuring practices that have shaped and reshaped how health care is delivered, experienced, financed, and made accountable. The goal of reforming health care and health work has been to rectify a multitude of perceived crises in the arena of health care service and delivery, including spiralling costs, escalating wait times, varied practices of health-care workers, and lack of accountability (Bird, Conrad & Fremont 2000; CAEP 2002). In both the professional literature and the news media, one site of health care where these aspects of service delivery are intensely scrutinized, widely reported, and of significant concern is the emergency sector (Bond et al. 2007; Braid 2006; CAEP 2002; CBC 2007; HQCA 2007, 2010; Lang 2006; Logan 2006; QMI Agency 2010; Schull et al. 2004).

Today, timely emergency medical care is an essential component of most health systems, providing rapid diagnosis and treatment for a variety of medical situations from critical injuries and events requiring resuscitation and life support to minor ailments. A recent survey by Statistics Canada reported that 3.3 million adult Canadians received care for their most recent injury or had their last contact with a health professional in an emergency department (ED) (CIHI 2005). The integration of emergency medicine with rapid transportation to hospitals now incorporates treatment at the scene and en route. Making up one of the largest groups of health care professionals in Canada, with numbers exceeding 20,000 (Pike & Gibbons 2008), paramedics are mobile health care providers essential to emergency medical services.

Trained to use specialized biomedical knowledge to perform a variety of procedures and interventions, paramedics assist in diagnoses and “save patients and prevent further damage” in emergency situations. They do so both as members of “health care teams” (Swanson 2005: 96) and on the streets – unstandardized contexts “rife with chaotic, dangerous, and often uncontrollable elements” (Campeau 2008: 3). While services vary, they exist in every major city and many rural areas throughout North America (Paramedics Association of Canada 2008) and other countries around the world (Roudsari et al. 2007). Paramedic services in North America, for instance, treat and/or transport 2 million Canadians and between 25 and 30 million Americans annually (Emergency Medical Services Chiefs of Canada 2006). As such, the work of paramedics necessarily connects with the care provided by doctors, nurses, and other health practitioners in hospital emergency settings (Pike & Gibbons 2008).

There is a pivotal interface between the field setting in which paramedics do their work and the hospital ED where patients are often transported for additional care mediated by the work of triage nurses. Officially, triage work incorporates managing patient intake and assessment, mobilizing and allocating resources, coordinating space and patient flow, and monitoring the status of all waiting patients, including those waiting with paramedic teams. The Canadian Triage and Acuity Scale (CTAS), and similar scales internationally, operate as powerful textual organizers of this work. This text, for instance, enables nurses to reconstitute the patient’s emergency into standardized terms and apply a numerical acuity score to their condition. A recommended wait time is attached to the score. The goal is to ensure that the sickest patients are treated first by allocating beds and prioritizing the queue to see a physician according to the acuity score given. The result is that some patients wait for treatment longer than others. The CTAS text activated by the triage nurse inserts the institutional interests of creating standardized categories of patients and the relevancy of time, acuity, and risk, all of which are also entered into a large-scale administrative database. These data are computed, averaged, and counted in various ways to formulate an authorized version of emergency care delivery. The combination of CTAS data and increasing wait times provides evidence of a crisis and the need for redesign of work processes to improve efficiency.

This chapter examines how paramedics and nurses on the front line of emergency medical services in Alberta are currently being targeted by technologies of knowledge and governance to devise solutions to problems in the arena of emergency medical care.

These technologies are part and parcel of health reforms that attempt to reshape how health care is delivered, experienced, and made accountable (Anantharaman 2004; Alberta Health and Wellness 2008a). In what follows, we ethnographically explore the work of paramedics and nurses and investigate how their work practices intersect with and are organized by new forms of managerial control. These reforms shape and restrict how emergency health care providers can do their work, implement their expertise and discretion, and gain knowledge of their patients. Furthermore, such technologies are central to “what counts” and what is “counted” on the front line of emergency medical services. As we will show, these technologies serve multiple purposes in the larger project of health system management and governance. By exploring the work practices of paramedics and nurses as they activate, and are organized by, these textual technologies and the discourses embedded within them, we begin to see how reform and restructuring practices are “accomplished” on the ground and the consequences of this restructuring (DeVault 2008).

This chapter draws on the early stages of two complementary research projects. The paramedic data are taken from an interview in 2009 with Jake (all names are pseudonyms), a paramedic for over 10 years, conducted while emergency medical services in Alberta were in the process of major restructuring. The interview focused primarily on Jake’s work practices and how they interface with what is known as the patient care record (PCR); this is a paper-based document that records specific patient information from the time of dispatch to the transfer of the patient to hospital (discussed in more detail below). Once the interview began, the first author learned that the PCR had recently been replaced by an electronic version, known as the electronic patient care record (ePCR) (hereafter, we refer to both as the PCR unless specified). The interview is complemented by around 120 hours of field observations of paramedics between December 2010 and April 2011. The nursing data are taken from the second author’s experience working on the front line of emergency and critical care since 1982 and from three interviews conducted with registered nurses during her graduate course work in qualitative research methods. All nursing participants have advanced training in triage and routinely perform the triage role in their work in Calgary hospital EDs. Pat has worked in emergency for almost 25 years; both Tammy and Jane have worked for approximately 10 years as ED nurses.

We specifically focus on how the PCR and CTAS interface with practitioner work and their central role in authoritative knowledge claims and accountability in the emergency medical complex.

In other words, we explore how the work of paramedics and nurses, and what becomes institutionally recognized as work, is organized, coordinated, and made visible, in part by the PCR and CTAS. We make explicit how these text-mediated sequences of events organize and make visible limited aspects of these practitioners’ work. These two accounts begin to trace how emergency medical settings are being managed and put together and how the work of those on the front line connects with new governance practices; both technologies, for example, hook paramedics and nurses into new mandates intended to decrease wait times and increase accountability. In doing so, we exemplify how institutional ethnography can “marshal material evidence to support an alternative analytic account” (Rankin & Campbell 2006: 167); much more is going on than is represented in the virtual reality constituted by and through these technologies of knowledge and governance from which policy and practice flow.

This chapter is separated into three sections. The first section explores how two intersecting technologies enter into and organize the front-line work of paramedics and nurses. Section two discusses how these institutional technologies create problems of actualities for both front-line workers and their patients. In section three, we provide critical insights into how “what counts” on the front line is being used to manage the deployment of resources in emergency medical services.

Institutional Technologies: Ruling in Real Time

Institutional technologies are essential to ruling in real time. Embedded within the textual devices central to the reforms discussed above are private-sector and management principles believed to alleviate and/or prevent problems in health care settings by managing and fine-tuning the activities of workers, making what they do visible and amenable from afar to managers, administrators, politicians, and the people they serve. Based on both an implicit and an explicit trust in numbers (Porter 1995), these reform and restructuring practices objectify and generalize the work of those on the front line, making what they do the object of continuous quality improvement in the domains of efficiency, effectiveness, and quality of care, based on objectified forms of information – evidence – that define how the aforementioned domains are conceptualized (Rankin & Campbell 2006: 128).

Deployed through new and emerging Information and Communication Technologies (ICTs), these reform and restructuring practices constitute what we call new forms and technologies of knowledge and governance – “forms of language, technologies of representation and communication, and text-based, objectified modes of knowledge through which local particularities are interpreted or rendered actionable in abstract, translocal terms” (McCoy 2008: 701; see also Blumenthal & Glaser 2007; Griffith & Andre-Bechely 2008; Heath, Luff & Svensson 2003). Similarly, Pence defines institutional technologies as “both the specific tools that workers use to accomplish their tasks and the institutionally organized procedures for accomplishing these tasks” (2001: 204). She goes on to explain that they shape “the way we live and work together and what we are able to produce” (ibid.). In this context, they organize and coordinate the work of health professionals and patients on the front line (Frank et al. 2010) and are thus central to how “important” features of health systems and the activities of those individuals are recorded, averaged, compared, and made visible. The intent is to ensure the rational management of the system, making workers on the front line more accountable, predictable, and maximally efficient (Mykhalovskiy & Weir 2004; Rankin & Campbell 2006). Two institutional technologies central to the work of paramedics and nurses are depicted in the following vignette (based on field observations), which represents a typical handover of an Emergency Medical Service (EMS) patient to the triage nurse.

It is a busy day and Dave, a paramedic, has had to wait for the line-up of “walk-in” patients to be dealt with before the nurse can talk to him. Nurse Annie clicks a box on the computer screen to show the list of ambulances en route. “Medic number?” she says to the paramedics standing in front of her behind the triage desk. She scrolls down to medic 22 on the list of EMS crews that appear and double clicks. This action enters Dave’s arrival time into the hospital system. Dave starts with the patient’s age and gender. Annie interrupts with, “OK, I need a name.” Other questions she asks the medic include: “Can you spell that? OK, what’s his problem? When did it start? How long? How much morphine did you give?” They are both frustrated as Dave struggles to answer her queries while flipping through the various screens on his electronic tablet [the ePCR]. Annie deletes her triage note three times while listening to Dave’s story before entering her version of the patient’s reason for calling the ambulance and coming to the ED: “Abdo pain RUQ 10/10 with vomiting × 3 days. No blood. Pain 8/10 after max analgesia by EMS.”

[This is exactly how the triage entry would appear, as free text using abbreviated institutional language. It can be translated as abdominal pain, right upper quadrant. 10/10 refers to a scale used to rate pain, and the maximum amount of analgesia or pain medication allowed by medical protocols has been given by the paramedic]. Annie hesitates over the box labelled CTAS. She walks over to the patient and asks him a few questions, in an attempt to observe his general condition. Then to Dave, “What were his last vitals?” She enters these in the triage record. She clicks back on the CTAS box and enters “3.” Dave asks, “Well, if he is a 3 can I put him in RAZ?”1 Annie responds, “Not with all that pain. Sorry, he has to wait for a bed, and we’re full.” She clicks “enter.” The patient is now “visible” via the IT system to the rest of the ED, and many sites beyond.
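As a rough illustration of what Annie’s few clicks produce institutionally, the following Python sketch re-renders the handover in the vignette as the kind of structured record an ED information system might hold. The field names, the timestamp, and the idea that only these fields travel onward are illustrative assumptions, not the actual Calgary system.

```python
from datetime import datetime

# Hypothetical triage record assembled at the desk in the vignette above.
# Only what fits the form's fields becomes institutionally "visible."
triage_record = {
    "medic_unit": 22,                             # double-clicking the unit logs EMS arrival
    "ems_arrival": datetime(2011, 3, 14, 14, 7),  # illustrative timestamp
    "presenting_complaint": ("Abdo pain RUQ 10/10 with vomiting x 3 days. "
                             "No blood. Pain 8/10 after max analgesia by EMS."),
    "ctas_score": 3,                              # the single number that travels furthest
    "location": "hallway (waiting for bed)",
}

# Dave's frustration, Annie's three deleted drafts, and the negotiation over RAZ
# have no field here, so they do not register in the administrative record.
print(triage_record["ctas_score"])
```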

The electronic patient care record (ePCR) is a technology literally in the hands of paramedics. It is pre-programmed to record institutionally relevant information in standardized ways based on the encounter between the paramedic and the patient. This device has recently been introduced in Calgary and Toronto2 and is used in some jurisdictions in the United States and potentially elsewhere.3 The ePCR device accompanies paramedics throughout their calls (e.g., on scene, in the ambulance, at triage, when reporting to unit nurses, and while waiting in hospitals). It is literally “on their person” if not all the time, then most of the time. It has different point-and-touch dropdown screens where the paramedic records specific information. According to Jake, who at the time of the interview had just started using an ePCR in Calgary, “It’s not just one screen that you scroll up and down, it’s like six different tabs and within those six different tabs, you know one tab might have an additional four tabs, another tab might have eight tabs.” The ePCR is touted as “a solution that could help solve [Calgary EMS] business problems”; by embedding in it different “data management solutions” relevant to EMS, critical information can be managed “for maximum performance.”4 Terry Abrams, the ePCR Project Manager at Calgary EMS when it was introduced, is cited in a press release on the Zoll website (www.zolldata.com) as saying: “we expect this new ePCR solution to improve patient care, operational efficiencies and capabilities, as well as improve documentation compliance.” The press release also mentioned the “data mining” ability of the ePCR as being “vital” to improving patient care.

The moment a person calls for help in an emergency activates a “complex system of agencies and legal proceedings … The number 911 is the first in a series of texts that will co-ordinate, guide, and instruct a number of practitioners” (Pence 2001: 201). Set in motion by people at work (e.g., patients or proxy patients and bystanders), the PCR is activated once the dispatch centre relays a message that a particular EMS unit is needed at the scene of a possible medical emergency. To relay the message, the dispatch centre sends an audible tone either to the ambulance unit or to the station where paramedics are often located between emergency calls. It is from this moment that the coordinating work of the PCR can be made visible. Upon arrival at the scene, for example, Jake describes how he works closely with the PCR for “continuity of care” purposes, so that the emergency department practitioners, once the patient is transferred to the ED, can see “where that patient was when they decided that they were sick enough to call 911” and can prevent duplication of treatment, so the ED does not “do something that is counter indicative of the treatment that we’ve already given.” This is accomplished by recording very specific mandatory information into the categories provided by the PCR document or the dropdown menu of this section of the ePCR. For example, there is an “on arrival section” where “how we found the patient” is recorded. Jake gives an example of what he might enter into this section: “On arrival, 79-year-old, male, slumped forward in his chair. Accessory muscle use. Acute respiratory distress.” Other information recorded in additional sections of the PCR includes the “chief complaint” of the patient, what type of treatments or interventions were given to the patient and how the patient reacted, change of status, history of complaint, past medical history, current medications, and so on.

With the introduction of the ePCR, automatic “time-stamps” of when different interventions were given can now be digitally recorded. Jake explains: “So you document every intervention, which includes vital signs, so you take a set of vital signs and now with the computerized PCRs, everything is time stamped. You can, you point and click to say vital signs and you hit ‘now’ that you take them, so it brings up a line that time stamps with that time and then you fill in, you know, their blood pressure, their respiratory rates, and all of that. And then that also logs it in the intervention section.”

Other, institutionally relevant information overlaps the continuity of care information collected in the patient care section. Jake explains: “But what overlaps is both for the governmental agencies that oversee both EMS and then just track health trends … So they have all this, basically, these codes that all patients fall into basically. You know you have to squeeze a patient into, you have to, there are certain aspects of the PCR that have to be filled out every single time or you get it back and you have to fill it out, and that’s because information is required by the government or required by billing [emphasis] or required by EMS Quality Control.”
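A minimal sketch may help to show how a device of this kind layers clinical narrative, time-stamped interventions, and administrative codes in a single record. The section names, the mandatory fields, and the “returned” behaviour below are assumptions modelled on Jake’s description; they do not reproduce the actual ePCR software.

```python
from datetime import datetime

class EPCR:
    """Toy electronic patient care record: clinical sections plus the
    administrative codes that must be present before the record is accepted."""

    MANDATORY = ("dispatch_code", "billing_code", "outcome_code")  # assumed names

    def __init__(self, on_arrival: str, chief_complaint: str):
        self.on_arrival = on_arrival
        self.chief_complaint = chief_complaint
        self.interventions = []   # each entry is time-stamped when "now" is pressed
        self.admin = {}

    def record_intervention(self, description: str) -> None:
        # Pressing "now" attaches the clock time to the entry.
        self.interventions.append((datetime.now(), description))

    def submit(self) -> str:
        missing = [f for f in self.MANDATORY if f not in self.admin]
        # A clinically complete record still comes back if the codes are absent.
        return "accepted" if not missing else f"returned: missing {missing}"

pcr = EPCR("79-year-old male, slumped forward in chair, accessory muscle use, "
           "acute respiratory distress", "shortness of breath")
pcr.record_intervention("vital signs: BP 152/90, RR 28")
pcr.record_intervention("oxygen via non-rebreather")
print(pcr.submit())   # prints "returned: missing [...]"
```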

A significant aspect of the PCR is quality control, a principle “central to the managerialist agenda” (Clarke & Newman 1997: 119). Connected to quality control is quality assurance – assuring that the care provided by paramedics is appropriate and within their scope of practice. While what paramedics can and cannot do varies in each province and sometimes within a province, physicians have “medical control” over the work of paramedics. This, in turn, allows them to determine what constitutes “appropriateness.” Jake explained this in his own words: “In EMS we operate under the licence of a doctor, so in a sense we’re not independent practitioners, we’re tied to or we’re certified through someone else’s licence; we’re not licensed to act independently. So we have to follow medical control guidelines or protocols and there are very real repercussions if we act outside of them.”

The “medical control guidelines or protocols” Jake mentioned are linked to current reforms and restructuring practices in health care based on evidence-based medicine (EBM) (Mykhalovskiy & Weir 2004). At the heart of EBM are procedural standards, which attempt to prescribe the actions of practitioners by outlining the steps in a medical encounter that are to be taken, depending on what specific symptoms, conditions, or criteria are met (Timmermans & Berg 2003; Lemieux-Charles & Champagne 2004). The claim of these standards, guidelines, and protocols is that they offer clinicians better tools that are based on evidence when making clinical decisions (Daly 2005). Furthermore, many, including some policy-makers, administrators, and practitioners, view this shift in policy and practice “as an important lever to ensure clinical practice is more effective and represents value for money” (Dopson & Fitzgerald 2005: 1). Such evidence-based technologies are not unique to pre-hospital emergency workers (Frank et al. 2010; Mykhalovskiy & Weir 2004); they are present in a variety of sites, including a multiplicity of health care settings in Canada and internationally, with between 1,200 and 2,500 clinical practice guidelines in Canada alone (CAEP 2002: 434; see also Griffith & Andre-Bechely 2008; Hall 2005). Key to the implementation of these protocols in practice, as Daly (2005) notes, are ICTs. For instance, Jake mentioned, “Now that we have these laptops [the ePCRs], they can have like, you know, huge amounts of information stored on them,” including the protocols.

Standards, guidelines, and protocols are central to the front-line work of paramedics. Jake explains:

Yeah, I should have brought my protocol book, but you have a book that’s, you basically have 30 scenarios, 30 types of patients: pregnancy, overdose, cardiac … OK, we’ll go start with something very basic, we’ll say nausea and vomiting … there’s literally a protocol for it. There’s like the universal protocol which is, you know, assessment, vital signs, cardiac monitoring when appropriate, oxygen when appropriate [interruption] and then for nausea and vomiting, it’s like a certain type of nausea and vomiting, say, you know, gastrointestinal [inaudible], eating or something like that, then you give Gravol 25 milligrams, four, and repeat dosage times one after 10 minutes … So that would be that protocol. So if I gave 50 of Gravol, is what can give as a max instead of 25, then I’d be deviating from protocol and I would be reprimanded for it. If I did that enough, and they found enough PCRs for that, I would lose my job. I’m supposed to, that’s how I’m supposed to treat nausea and vomiting, period.
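The protocol Jake recites is exactly the kind of rule that can be applied mechanically to a pulled PCR. The sketch below encodes the nausea-and-vomiting example in Python, using the figures from his account (a 25 mg dose of Gravol, one repeat after 10 minutes); the data structure and the checking routine are our simplification, not the actual Alberta protocol or any real audit software.

```python
# Illustrative encoding of the nausea-and-vomiting protocol Jake describes.
# Doses and intervals are taken from his account; everything else is assumed.
PROTOCOL = {
    "drug": "Gravol",
    "single_dose_mg": 25,
    "max_repeats": 1,
    "min_interval_min": 10,
}

def audit(doses):
    """Check a list of (minutes_elapsed, mg) entries from a PCR against protocol.
    Returns the deviations a reviewer 'pulling' the record would flag."""
    deviations = []
    for i, (minute, mg) in enumerate(doses):
        if mg > PROTOCOL["single_dose_mg"]:
            deviations.append(f"dose {i + 1}: {mg} mg exceeds single dose")
        if i > 0 and minute - doses[i - 1][0] < PROTOCOL["min_interval_min"]:
            deviations.append(f"dose {i + 1}: repeated too soon")
    if len(doses) > 1 + PROTOCOL["max_repeats"]:
        deviations.append("too many repeats")
    return deviations

# A second dose given at seven minutes, whatever the street-level reasons,
# still reads as a deviation once only the time stamps remain.
print(audit([(0, 25), (7, 25)]))   # prints ['dose 2: repeated too soon']
```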

Since December 2010, the protocols for paramedics in Alberta have changed as part of province-wide restructuring. All of these standards and guidelines determine what is considered appropriate care given to a patient when a certain condition(s) is present. According to Jake, it is the work of the “quality control people,” facilitated by and through the PCR, to monitor the work practices of paramedics on the front line to ensure their actions are appropriate. The PCR allows the quality control people to “oversee its practitioners without actually being on the call” by “pulling” paramedics’ PCRs to review what they did on the streets.

As depicted above, the PCR is central to the work of paramedics and how their work is made visible and accountable to other individuals in the health care arena. The vignette above provides one example of how the PCR connects to the work of other health practitioners and other institutional technologies, specifically the CTAS. In observations of this interchange, the ePCR is often open on the triage desk, situated so that both paramedic and nurse can see relevant information. The PCR and CTAS organize and coordinate the work of the paramedic, who “gives report” to the triage nurse, who in turn is tasked with assessing patient acuity and managing hospital ED resources. As such, triage as a work site is the “processing interchange” (Pence 2001) that connects the patient and the work of EMS with the hospital, both in real life and electronically.

Specific information from the paramedic’s “on-the-street” observations and data entered into the PCR is verbally reported to a triage nurse, who translates what she is told about the patient into a brief entry that complies with the CTAS terminology guidelines for describing presenting complaints, and then she assigns a triage score based on her interpretation and knowledge of the scale.

The CTAS was developed in response to federal and provincial government demands to account for and justify health care expenditures and initiate reforms in the 1990s (Beveridge 1998). The use of a numerical triage category applied to each patient was identified as a data element useful for health care reorganization and management of increasing numbers of patients accessing care through emergency departments. While ensuring the collection of specific data, a secondary purpose of CTAS was as a “decision support tool” based on descriptors of a variety of clinical symptoms whereby patients could be sorted and prioritized according to their presenting complaint and the urgency of their condition relative to that of others in the queue (Asaro & Lewis 2008; Murray, Bullard & Grafstein 2004). A score from 1 (immediate resuscitation) to 5 (non-urgent) indicates degree of urgency as well as the recommended priority for physician and nurse assessment based on time frames (Beveridge et al. 1998; CAEP & CTAS National Working Group 2006; Murray, Bullard & Grafstein 2004). Like other methods of health services research, this data collection seeks to make the characteristics of patients and the actions of health care providers administratively knowable (Mykhalovskiy 2001). Concurrently, a standardized electronic data information system was designed and mandated nationally to capture patients’ presenting complaints, CTAS scores, and a number of data elements identified as performance indicators such as wait times, daily patient volumes, length of stay, patients who leave prior to physician assessment or against medical advice, and so on (Beveridge 1998; CAEP 2002; Murray 2003). These data are also aggregated by the Canadian Institute for Health Information, which broadly connects to a number of different health information data networks (the Commonwealth Fund, the National Ambulatory Care Reporting System [NACRS], the Organisation for Economic Co-operation and Development [OECD], etc.), from which reports are generated. The data aggregated within computerized systems of monitoring and counting form an official account of emergency department operations and enable managers to identify waste, inefficiencies, and risk and to target them for improvement (Rankin & Campbell 2006).
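To show how a single triage judgment becomes a countable, comparable data element, the sketch below lays out the five CTAS levels with illustrative target times and computes one hypothetical performance indicator of the sort described above. The target times and the indicator’s definition are simplifications for the example, not the official CTAS guideline or the national reporting specification.

```python
# Illustrative CTAS table: level, label, and an assumed target time (minutes)
# for physician assessment. Targets here are placeholders for the example only.
CTAS = {
    1: ("resuscitation", 0),
    2: ("emergent", 15),
    3: ("urgent", 30),
    4: ("less urgent", 60),
    5: ("non-urgent", 120),
}

# A morning's worth of (ctas_score, actual_wait_minutes) pairs -- invented data.
visits = [(2, 10), (3, 55), (3, 25), (4, 180), (5, 90), (2, 40)]

def within_target(score: int, waited: int) -> bool:
    return waited <= CTAS[score][1]

# One hypothetical "performance indicator": share of patients seen within target.
met = sum(within_target(s, w) for s, w in visits)
print(f"{met}/{len(visits)} seen within target "
      f"({100 * met / len(visits):.0f}%)")
```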

Once entered into the computer (as depicted in the vignette above), which also enters the data into other IT systems, the patient virtually can be moved to any location in the ED using the map function, a “real time” computer picture of the entire ED layout. The map display uses colour to indicate vacant locations, which patients are admitted to the hospital, and whether or not they have an inpatient bed assigned. Each map location is virtually capable of accepting more than one patient, allowing for extra beds or chairs to be used in a single physical space if necessary (see ibid.). These treatment areas are also specialized to provide varying levels of intervention and monitoring. Triage nurses utilize this map function to manage the work of placing patients into the most appropriate spaces for their condition first virtually, then actually, by sending the paramedic to the location assigned once it is available. This information can be monitored for quality purposes to ensure that treatment spaces are maximally used and that patients are allocated to the space with the most appropriate resources for their condition. In this way, the IT system organizes the work of managing treatment space and makes it amenable to evaluative practices. For example, map locations that remain vacant for too long or instances when ambulance crews are not assigned to a space immediately are tracked.

As we will see in the next section, both the PCR and the CTAS create “problems of actualities” whereby the virtual realities produced by such institutional technologies are unable to capture important features of the work of these front-line workers. They also fall short of representing the real needs and requirements of patients, with unintended and potentially serious consequences.

“Problems of Actualities”: Disjunctures, Tensions, and Non-work

The interactions between the nurse and paramedic in the vignette highlight the work of articulating and aligning the patient’s problems and “en route care” with scoring categories and presenting complaints via the triage entry and CTAS text. We see tensions as they try to co-create an accurate story, each one’s thinking organized by different texts and expectations. What is not made visible in the technologies discussed above are what we call “problems of actualities” – disjunctures between the actual work of these practitioners (what they do) and how what they do is textually mediated and recorded; there are disconnects between patients, texts, and settings, leading to tensions.

A consequence of these problems of actualities is the proliferation of what Diamond (1992) calls “non-work”: work that is central to the everyday worlds of these practitioners and integral to the functioning of the system as a whole, but is unaccounted for and thus remains invisible. The system appears to work smoothly as patients are moved through more “efficiently,” but only because more invisible work is done. We suggest that this is part and parcel of reform and restructuring practices specifically and, more broadly, text-mediated social organization.

For example, Jake and the triage nurses operate within a text-based framework that organizes their “hands-on work” with patients. In talking about the interface between what he does and the technology central to making snippets of what he does visible, Jake spoke of a line of fault – “a huge grey area” – whereby paramedics “have those situations all the time where we’re technically acting outside of our scope and we need permission to do that.” Jake gave an example of a situation where he acted outside protocol by giving a patient an extra dose of medication prior to the passage of a specified amount of time. He explained how the first dose calmed the patient down only “a little bit, just enough for me to climb on top of him and get an IV.” Jake explained how he acted outside protocol because he was worried about the patient’s safety and well-being – “the patient was going to die … especially when they’re being pummelled by the police, facedown, handcuffed with their hands behind them.” According to protocol, he was expected to call the ED and obtain permission from a physician prior to administering the medication. The context in which his work took place did not allow time for this phone call.

Jake went on to explain the tension that arises from situations where actual work practices do not reflect the text-mediated modes of organizing those practices. He specifically discussed this in the context of being monitored through the PCR:

So, everything that we do is monitored, especially the more serious things like … They [management] can randomly pull your PCRs. Or they can pull all of your PCRs and they can look at every treatment that you’ve done and see whether or not that falls within protocol. And you can be off by just like, if you give a drug at seven-minute intervals instead of five-minute intervals if your protocol states you should, regardless of the real world reasons why you couldn’t do it exactly at five minutes, you know they’ll say you’re in deviation of protocol … Like you always document accurately but … you know “we’re moving the patient at this time and that’s why I didn’t do it in exactly five minutes.” So you, sometimes you’ll just document that it was five minutes, and the end result is exactly the same, the mechanism of action I would argue, a doctor might say, “no … [inaudible].” I’m not advocating doing that but, but, that’s the sort of thing that people start to realize when they document through PCRs.

In order to deliver “appropriate” care and avoid trouble, paramedics on the front line are expected to follow the protocols and guidelines. According to Zoll, one of the many benefits of the ePCR is how it can be used for quality assessment and improvement measures5 because the technology allows for monitoring as never before. As Jake explained, “The quality control aspect of the PCR is a big thing, and you learn to document according to that.” He went on to explain how “there’s a fine line between just solely protecting your ass and then sacrificing the needed information that the hospital needs to know about the patient … That aspect of the patient care report, people are really cognizant of, the fact that more than anything else it’s used to hang them out to dry, to get them into trouble, right” (emphasis added). While Jake informed the physician of acting outside protocol once he arrived at the hospital, this example illustrates how disjunctures emerge when protocols are void of everyday experiences and thus do not necessarily fit the situation or the patient. Other paramedics observed on the streets spoke about the differences between treating the patient versus treating the protocol. Jake discussed this when he spoke about the “cookbook medics” approach to interacting with patients. He noted that protocols instil a way of knowing in text-mediated ways whereby you can start to “only think in terms of protocols,” because “after a while if you’re only judged at how closely you follow protocol, not only will you document according to that … you’ll start to, you know, look at the patient that way.” He went on to explain that “there’s enough times it will fail.”

We see similar disjunctures experienced by nurses. Although the CTAS score is numerical, the work of categorization is more difficult than might be expected. Patients’ experiences and descriptions of their symptoms are unique and may be expressed differently. Some problems, as Pat observes, are easy to categorize, while other problems are more difficult to fit into the confines of the text:

Some are obvious, like cardiac arrest doing CPR. Part of it is gut. I mean it’s subjective and objective and there’s going to be bias. No matter how you look at it, CTAS or not, there’s going to be bias. There are high 2s and low 2s and the whole range of 3s. Sometimes the 4s come in and you get labs [blood test results] back and they’ve got leukemia or something. You can’t tell by lookin’ … or on a day when it is busy, you almost get a tolerance going because everyone is so sick and you compare people. You get this idea – they can’t all be 2s! [emphasis added]. But they are 2s but on that day, they’ll get a 3. I think some people also think about where do I put people? It’s not just about acuity and a score anymore. It’s about where do I put people – like I think they should be a 2, but I have nowhere to put him – so I’ll make him a 3. Not intentionally do they do it. I think it is subconscious sometimes.

The diversity and complexity of patients and the treatment spaces available at that moment make assigning CTAS scores difficult. Pat is aware that if she makes a patient a “2,” she is expected to get the person in faster, even when no beds are vacant. Tammy alludes to the difficulty of making decisions about acuity based on a CTAS text that appears objective but is based on limited information. A patient who is placed in a low-urgency category may actually be in more trouble than they appear in a triage examination. For example: “We don’t have a crystal ball to see that someone’s platelet count is 15!6 They might look perfectly fine.” Also, patients may “forget to tell you stuff at triage, like other symptoms – chest pain, for instance – or that they are diabetic or sometimes you don’t get the whole story. Sometimes you just can’t tell.”

In addition to the assumption that the CTAS text produces valid and standard information about individual patient acuity, problems emerge because the initial score cannot be changed in the data system (Murray, Bullard & Grafstein 2004). Even if nurses’ professional judgment tells them that the patient is worsening, what is recorded in the CTAS and the on-the-ground experience of the situation are disconnected. The first score entered is retained in the IT system, even if a patient gets worse while they are waiting. The nurses address this by juggling the patient’s order in the queue of priority for allocating treatment space and physician attention. The actualities of what is happening with patients that are not represented by the standardizing categories are further complicated for those doing the front-line work of care by variations in the situations in which care has to be given. The volume and severity of patients arriving at a particular time in emergency and the sometimes very varied “on-the-street” situations confronted by paramedics mediate the work practices producing the categorization.
paramedics in the workplace are immersed in the actual world of bodies while at the same time are tasked with translating their work into formalized organizational texts that articulate bodily concerns and tasks to the conceptual order of the institution (Campbell & Jackson 1992; Rankin & Campbell 2006). In addition to the tensions emerging between practitioners and patients, tensions also emerge between individual workers as each is tending to “his or her own work through the relevancies of that position, its activities, and the standpoint that employment responsibilities generate” (Campbell 2008: 271). Pat, a veteran at triage, talks about new tensions that she has noticed when triaging EMS patients: “When it comes to an EMS patient, my triaging has changed through the years. You like, trust some [para]medics more than others. This also helps me decide my CTAS and how much depth I go into talking to the patient. Now I interview all my EMS patients … I don’t always trust what I’m being told anymore … I don’t know if it’s their training or observations, or if when it’s busy, they downplay things or up-play it so they can clear [leave the hospital]. They just want to get out. They are not so intent on the patient. They’re intent on getting out.” Pat believes that the report she is given by the paramedic is governed by interests that may alter her picture of the patient. Assigning an accurate triage number is an expectation embedded in the CTAS guidelines that she knows from her triage training and experience. This number influences how others, including the physicians, the nurse assigned to his care, the paramedic, the charge nurse, and others examining the “data,” will interpret the patient’s urgency. The pre-hospital system involving the paramedic service and the ED interface in such a way that troubles for the ED affect service to the community: the numbers of ambulances and the length of time they are parked affects how many ambulances are on the street. The situation is monitored in the IT system and communicated back to the ED and the EMS system as “alerts” – yellow, orange, and red – which indicate the number of ambulance units available for calls in the community. The alerts take the form of an audible broadcast from the dispatch centre through a radio in the triage area, which must be acknowledged by the triage nurse. They are also displayed on the computer screen and have protocols that require action: the immediate offloading of patients into the care of the ED. This pressure on paramedics to return to the pre-hospital site sometimes creates conflict, tension and perhaps placement of patients in inappropriate spaces. Tammy describes
a situation where an EMS patient was sent to the RAZ area – a zone designed for stable, “treat-and-release” type patients: I got this really complex patient – CTAS 3 – but could have been a 2. He was with EMS for four hours. They wanted triage to put him in and when she said there were no beds, they wanted to download [transfer patient to hospital and return to the street] to the waiting room. So they [triage] said, “OK, put him in RAZ.” Then EMS was angry because we voiced our concern that he may not be appropriate for RAZ. So [the paramedic is] all mad and says: “I’ve been waiting in the hallway for four hours and Triage won’t put him in the waiting room!” He’s 95 and he can’t walk! … He’s totally not appropriate for the waiting room either, but that’s what happens … when there are no beds.
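The alert mechanism at work in these accounts operates as a simple threshold protocol: preset limits on the number of ambulance units available for community calls determine a colour-coded alert level, and each level carries actions that the front line is required to take. The sketch below is purely illustrative – the threshold numbers, the function name, and the exact mapping of levels to actions are hypothetical rather than drawn from the dispatch system studied here – but it shows, in minimal form, how this kind of preset limit converts a single count into mandated front-line action.

    # Illustrative sketch only: the thresholds, level names, and required
    # actions below are hypothetical, not those of the dispatch system
    # described in this chapter.

    def ambulance_alert(units_available):
        """Map a count of ambulance units free for community calls to a
        colour-coded alert level and the actions the protocol requires."""
        if units_available <= 2:        # hypothetical preset limit
            return ("red", ["broadcast alert to the triage radio (must be acknowledged)",
                            "immediately offload 'parked' EMS patients into ED care"])
        if units_available <= 5:        # hypothetical preset limit
            return ("orange", ["broadcast alert to the triage radio (must be acknowledged)",
                               "press triage to assign treatment space to EMS patients"])
        if units_available <= 8:        # hypothetical preset limit
            return ("yellow", ["display alert on the ED status screen"])
        return ("none", [])

    level, required_actions = ambulance_alert(4)   # -> "orange" and its required actions

Framed this way, it is easier to see the point made throughout this section: a single count, rather than a professional judgment about the particular patients actually waiting, sets off the actions that triage nurses and paramedics are then obliged to accomplish.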

Jane also felt “harassed,” saying that paramedics “are always in orange alert and trying to download patients.” In this context we can see how the two organizational texts that are being applied create tension: the one requiring EMS workers to be seen to fulfil their mandate to the community, the other governing how nurses manage a system of prioritization designed to ensure that the “sickest patients get in first,” regardless of the order in which they arrive. The CTAS text plays a role in who waits, including the EMS. On the other hand, time tracking of the paramedic’s wait at the hospital begins when the triage nurse clicks on the medic number that appears on her computer screen. This information is then visible to the ambulance dispatch centre. The managerial interest of getting the ambulances back on the street may be taken up personally by the paramedic who tries to negotiate an immediate space in the RAZ area. Furthermore, the time spent waiting for a treatment space can be scrutinized and questioned by the managers and administrators and analysed from afar to design improvement strategies.

DeVault notes that text-mediated social relations produce “systematic practices of ‘not knowing’” (2008: 290), and can be thought of as integral to the accountability and legitimacy of ruling regimes (Campbell 2008; see also Porter 1995). In the context of hospital downsizing, the problems of the disconnect between the actualities confronting front-line workers, both Emergency Department nurses and paramedics, and how they are represented in the categories of the technologies of health care governance become more severe. What is “not-known” in these technologies is, as Diamond (1992) puts it, the “non-work” of front-line workers as they respond to patients’ needs and situations under conditions that
intensify their work. For example, the work of nurses somehow making room for more patients in EDs is non-work in the sense of being managerially invisible. Nurses interviewed described different yet complementary ways of finding/creating space. Pat described it as “constant shuffling and running around to see which patients can move out of monitors [beds with capability to continuously monitor bodily functions typically for the sickest patients] and into RAZ.” Jane described it as ad hoc accommodating; she gave the example of “when both code room (resuscitation areas) beds are full and someone collapses, you just roll in a third stretcher and portable monitor in the middle.” Tammy described it as chronic pressure to find space: “It’s always go, go, go. In out, in out, in out. It’s about getting people in and getting people out. That’s all it’s about. It’s like a stop watch going all the time,” and at triage, “your guts are in a knot the whole time because you want those sick people in and there isn’t the right space to put them, but you try and juggle, right?” The map function, used for placing patients into the most appropriate spaces for their condition (discussed earlier), does not capture how many times patients are moved from one space to another or the extra work of finding and adding extra equipment to care for them, giving handover reports to other nurses, and so on. The paramedics are part of this invisible work, both as they enter with patients and during their time “parked” (waiting in an ED hallway to be assigned a treatment space). For example, the lag time where paramedics are “parked” in the halls of the ED waiting for space connects them with the work of nurses, as they monitor patients, administer ongoing treatment, and communicate changes in status as per CTAS reassessment guidelines, negotiating for and helping to make room. Paramedics are thus caught up into the work of nurses and the broader institutional complex of emergency medical services whereby time and tracking is integral to the accountability on the front line of EMS. The nurse participants described in vivid detail what actually happens for them when they are compelled by discourses of efficiency and crisis: they work at breakneck pace to manage limited resources, bring sick people into whatever space is available, and just as rapidly accomplish getting patients discharged. It is important to emphasize here that it is impossible to convey the depth of emotion apparent in the audio taped interviews where the cadence of speech actually increased as nurses talked about their “speeded up work,” the pressure of “making room,” the tensions with EMS, “no privacy for people,” “no time to get to know your patients,” and their observations of what they felt was
“poor care, like sending people to the operating room fully dressed,” “falling standards,” “lowering the bar,” and the “teaching and relational work they were unable to do.” In short, the care they were able to give fell far short of their professional obligations to patients, as they were compelled to respond to the institutional order of constantly making room for the next patient. Similarly for paramedics, Jake expressed that his job was not to be parked with patients in the hospital but to be ready on the street for emergency situations.

In this section, we see examples of the disjunctures and tensions that arise in the effort of fitting patients into the text-mediated forms that make the work of those on the front line accountable. As Campbell explains, “they know their patients in two distinct and often contradictory ways – as real people with bodily needs and as text based objects of professional attention” (2006: 95). These examples allude to hidden dangers, also observed by Rankin and Campbell (2006), when care is impacted by the restructuring of consciousness as practitioners become agents of reform practices and respond to more compelling, text-mediated ways of knowing and doing. Knowing only in text-mediated ways can potentially impact patient care (ibid.) by bringing professional discretion that these practitioners are trained to use under new forms of managerial controls that shape and restrict how they do their work and how their work is recognized. We argue that the protocols of the new technologies of governance and management are concerned more with control and increasing measurable productivity than with supporting professional autonomy (Armstrong, Armstrong & Coburn 2001). Furthermore, central to the system’s functioning is the “non-work” of health practitioners, which somehow coordinates the disconnect between the actualities of patients’ needs and situations and what is accountable within the system. Resources are deployed and decisions made based on this very specific, text-mediated, social organization of what DeVault calls “not-knowing” (2008: 290).

Deploying Resources (Virtual Needs/Actual Patients)

Neoliberalism and new public management (NPM) are the overarching discourses central to past and present changes in health care and other socially organized settings in Canada and elsewhere (Armstrong, Armstrong & Scott-Dixon 2008; DeVault 2008; Rankin & Campbell 2006); they have become a hegemonic hybrid (Cribb 2008) central to what some have called the managerial state (Clarke & Newman 1997;
Daniel 2008). The standardizing tools and texts, including electronic data information systems, in health-care work are essential components of systemic reform and resource management. Thus, resources, including financial, equipment, workers, support services, and so on, can be allocated based on interpreting these numbers, such that an ED reporting fewer CTAS 1 and 2 (high-acuity) patients might receive fewer resources. The data aggregated within computerized systems of monitoring and counting form an official account of Emergency Department operations and enables managers to identify and compare resources, waste, inefficiencies, and operational metrics and target them for improvement (Rankin & Campbell 2006). In the context of extended wait times constituting “crisis,” patient “acuity or urgency” has also become the deciding factor for the care and attention made available to each patient. Triage scales, for example, have had a distinct development that has more to do with accountability, resources, and economics than concerns with treating patients (Campbell 2008). Triage, as a managerial site of interest, provides a method for distributing health care resources when patient needs exceed what is available. It was inserted into professional practices and values under the premise that the objectivity of the scale would support more effective management of emergency beds and patients. This managerial interest was carried into relations with pre-hospital services through ambulance diversions and the creation of policies requiring ambulance workers to stay with their patients in the ED when beds were limited and their patients were judged less urgent than others waiting (Bond et al. 2007; Schull et al. 2001), and with the increased focus on the textual accountability of their work practices. The text-mediated versions of the paramedic’s and nurse’s work and the virtual patients created allows for the patient to be placed in a ranked order of access to emergency services and at the same time justifies the rationalization of services and supports. These virtual representations create an understanding that members of these patient-groups and the work practices of those who interface with them are homogeneous and standard, falling, for instance, within one of the 30 protocols described by Jake or the CTAS category of “4 – less urgent.” The accounts provided earlier, however, describe the variation of conditions, complexities, and unknown factors that are at odds with the assumption of the homogeneous patient in each category and the standard set of work practices and resources required for their care. Nevertheless, the official accounts create virtual needs (resources and time), which can become
evidential targets for change (Mykhalovskiy 2001; Rankin 2001) and build a new foundation for policy and practice. The aggregation and calculative practices made possible through this “objective” stance “give shape to health care activities as a patterned universe of increases, decreases, and dispersions … known through statistically derived objects of discourse” (Mykhalovskiy 2001: 149). For example, in the context of triage nursing and the larger scheme of hospital administration and resource allocation, the groups understood as less urgent, and therefore in need of fewer resources, are targeted by efficiency strategies, such as RAZ and “Waiting Room Care,” that offer fewer services and less time and attention. However, actual experience reveals that some patients were found to be much more ill than the initial text-mediated category indicated. These individual discrepancies are obliterated by the process of data aggregation and patterning that subsumes the exceptions of who may, in fact, need more resources. Similar reasoning has been evident elsewhere in health services, where patients known through the textually mediated category of “Alternate Level of Care” (ALC) (Rankin & Campbell 2006; Rankin 2001) can be targeted as “consuming a disproportionate amount of acute care resources” (CAEP & NENA 2003). In other words, in Alberta at least, they can be judged as not being “in the right place and at the right time” (Alberta Health and Wellness 2008b). NPM also identifies time and “wasting time” in health-care systems as a significant source of reducible health care expense – thus the predominance of “process improvement” strategies intended to accelerate patients’ “flow” through the ED (Folaron 2003; Kim et al. 2006; Kollberg, Dahlgaard & Brehmer 2007; Ng et al. 2010; Woodard 2005). In many urban hospitals, this has taken the form of rationed care pathways whereby select patients, deemed less sick, are given fewer services and less nursing time and attention, and reduced wait times and cost savings can be claimed. In addition, because wait times are averaged, moving a select group through faster appears to decrease wait times overall. The IT systems funnel very specific information about patients into an organized production of records and statistics (Rankin 2001); therefore, instances where nurses or paramedics may intervene to “rescue” patients are lost, and all patients in a certain group can be viewed in a standardized way. The connections between the organizational features and their effects on people disappear, leaving the official story to show “improvement” (Campbell 2008: 268). The practitioners’ accounts of their speeded-up work are very different from the official
account of their work; their descriptions of that work are filled with instances of lowered standards and compromised professional expectations and obligations. As such, not only health care workers but also patients absorb the consequences. The work of resource allocation is creeping into the pre-hospital site, where paramedics are learning acuity scoring and expanded scopes of practice, thus delegating them to make decisions about the “right” location for subsequent care, for the purpose of “reducing the use of emergency rooms by non-urgent patients” (Alberta Health and Wellness 2008b: 8; Murray 2003). Ron Liepert, former minister of Alberta Health and Wellness, explains the intent of targeting paramedics with such reforms: “Fully integrating EMS into the health-care system will not only improve efficiency, it will improve patient care. Highly trained paramedics, in consultation with a physician, will be able to diagnose and in some cases even treat on site. The only option won’t be an unnecessary trip to an already crowded emergency room, with patients waiting for emergency care being blocked by folks who don’t need to be there” (2009; emphasis added). This reform is touted as a “new way of integrating and expanding the education, skill and experience of paramedics” (Alberta Health and Wellness 2008b), but viewed under a different lens, as a strategy of NPM, it inserts the interests of an administrative regime and accounting logic in controlling patient access and choice directly into the front line work of paramedics. As such, departing from the past where “need” was the product of both bureaucratic and professional judgment, needs are now “increasingly articulated with and disciplined by a managerial calculation of resources and priorities” (Clarke & Newman 1997: 76). The expansion and development of ICTs support even tighter manipulation and control of resource deployment and the work of the front line. For example, the expansion of data systems to link with prehospital services is designed to further enhance efficiency and continuity of care, facilitate the collection of additional emergency service performance data, and organize specific actions based on numerical calculations. These systems provide up-to-the-minute information on ambulance activity, patient volume, CTAS scores, time waiting, and more. This information is intended to enter into the work and working decisions of emergency care providers and their immediate managers. The system calculates and displays a summary of current conditions organized as colour-coded “alerts” and “triggers” attached to icons (e.g., red angry faces, green happy faces) based on preset limits in each
data category. When limits are reached, protocols for action are initiated: calling in an extra physician, downloading patients “parked” with EMS, sending admitted patients to ward hallways to wait for an available inpatient bed, for example. Decisions about requesting additional resources previously made through professional collaboration are now organized by numerical calculations controlled by individuals far removed in time and space from the conditions at hand. The details of stretching resources to accommodate extra patients offloaded from EMS or sent to ward hallways are the work/non-work of the front line. Pat explains how triage nurses are becoming adept at using the data to think ahead and plan: “It is used by nurses primarily to gain information quickly to gauge how busy the department is,” to “know if inpatient beds will be available,” so they “can anticipate if they need to move people around to make room or if new patients are being sent in.” Pat also reports using the data to “see how busy the other hospitals are because that affects how many ambulances come rolling in to our site.” To use resources most effectively, these new technologies also allow the ambulance dispatch centre to virtually monitor the activity levels of the various hospital EDs in the city and direct ambulances to the site that appears least busy. A patient picked up at home in the far northeast of the city could be driven past the hospital closest to home, and family support, to the hospital in the deep southwest, if that ED has fewer patients, more available space, lower acuity, and so on. Many unforeseen problems result that front-line health care providers and patients must solve: “You need family to translate and they can’t get here,” or “family can’t come pick them up, so you have to find them a way to get home,” or according to Pat: “The ambulances take longer to get here and suddenly you have four ambulances at once because your site is ‘green,’ and at the same time you get a line-up of sick people walking in, and within five minutes you are overwhelmed and the other hospitals have had a bunch discharged and have empty beds! It doesn’t work!” An essential characteristic of text/technological-mediated relations is that they mediate what becomes visible and known (Rankin & Campbell 2006; Weinberg 2003: 3); what actually happens on the front line is discursively and ideologically organized and regulated by what are institutionally recognized as important features of work (Mykhalovskiy 2001; see also Diamond 1992). Furthermore, the data systems that garner such information are also used for tracking and monitoring purposes, often without reconciling the details that transpired while time passed. This is an essential characteristic of the PCR, the CTAS, and other interfacing
technologies; the categories required for completion are metaphorically painted by the discursive colours embedded within the institutional technologies. What becomes known is limited by the conceptual frames that are deemed institutionally important while other ways of knowing are ideologically captured/closed, considered “irrelevant,” and are not counted when it comes to resource allocation. As a result, “workers in the human services seem increasingly to be working across a line of fault … they act in face-to-face relations with their clients and simultaneously in a virtual space of textualized accountability” (DeVault 2008: 294).

Conclusion

The observations and accounts of front-line work offered in this chapter represent very preliminary stages of two separate projects intent on discovering how things work inside the socially organized site of the emergency care sector. This arena of health care is undergoing unprecedentedly rapid change. Under such reform and restructuring practices, workers are increasingly seen as units of production that can be mediated, moderated, accounted for, and governed. Governance on the front line was central to this analysis; texts, such as the PCR and the CTAS, and the technologies that operate in tandem with them are the “connective tissue” needed to accomplish such discursive forms of governance in practice on the front line (DeVault 2008), allowing for the classification, standardization, and generalization of people’s doings across time and space. These technologies organize front-line work, displacing and subsuming what actually happens to a “virtual” happening in the image of governing technologies and the discourses they carry. The ethnographic snapshots provided show how texts and different institutionally mandated technologies, and the regulatory discourses embedded within them, are active constituents of people’s everyday lives (Smith 2005), organizing and coordinating the work of these health practitioners on the front line of emergency services and socially organizing specific knowledge of “what counts” as work. In Alberta, this is giving rise to a new vision of health service delivery: the “right service in the right place and at the right time” (Alberta Health and Wellness 2008b). While these reforms claim to increase quality of care and to be patient focused, the questions remain: what is right and who decides? It is imperative to consider the consequences of institutional technologies and the new ways they are being used. As DeVault explains, this is important for both “the workers who subordinate their grounded knowledge
to objectified knowing and for the institutional ‘subjects’ who are being authoritatively known only through such abstractions” (2008: 296). Even with this entry-level exploration, a number of tensions and disjunctures were evident in the descriptions and observations we gathered and the texts we have begun to analyse that carry new forms of measurement and accountability that restructure the work and professional relations of those on the front line. We argue that there are unintended consequences from contemporary forms of governance based on the discursive rationalities of NPM and neoliberalism. We argue that new forms and technologies of knowledge and governance leave many hidden dangers in their wake because of how people’s work and what becomes known are socially organized. It is important to remember that the known and the unknown are dialogical in that certain and very specific information is captured and valorized depending on what is discursively recognized as relevant, while necessarily silencing other possible ways of knowing (Rankin & Campbell 2006). As such, not knowing or the unknown can be thought of as being integral to the working and accountability and legitimacy of ruling regimes, who most likely benefit from leaving hidden the actualities of people’s everyday doings (Campbell 2008; see also Porter 1995). In addition to the many potential hidden dangers, this social organization of knowledge potentially creates many “distinctive new problems” (Griffith & Andre-Bechely 2008: 44) that must be explored and explicated.

We do not dismiss all forms of objective approaches as inherently bad; sorting, classifying, and quantifying are political and inevitably valorize some points of view at the expense of others (Bowker & Star 1999; Epstein 2007; Martin & Lynch 2009) – and there is certainly evidence to suggest that CTAS, protocols, guidelines, and the data systems are of great benefit to both patients and care providers. Our findings suggest that they cannot stand alone; the knowledge they provide is an incomplete picture of what is going on. Unintended consequences may result when work is reorganized on the basis of this incomplete understanding, which limits possibilities of addressing actual problems. The analysis offered here contributes further evidence of the various ways that NPM and neoliberal ideologies have infiltrated health-care services in ways that are disruptive and insert values and practices that are aligned less with professional standards and expectations of quality and more with the “managerial state.” Through contributing to the growing body of knowledge about the ruling capacity of texts, we can begin to discover ways to insert texts and discourses that activate the interests of care providers and patients. Marshalling material evidence through an
empirical process of discovery provides the opportunity to see an alternative account that contributes knowledge to the void left by the objectified renderings that claim to understand front-line emergency work (Rankin & Campbell 2006). As such, we call for additional research into what Rankin and Campbell term “the social organization of information in health care and attention to the (often unintended) ways ‘such textual products may accomplish … ruling purposes but otherwise fail people and, moreover, obscure that failure’” (quoted by McCoy 2008: 709).

NOTES

1 RAZ (Rapid Assessment Zone) is one example of a new development in EDs across Canada. Also known as Fast Track, Sub Waiting Room, Intake, or Waiting Room Care, it is an area designed for stable, treat-and-release-type patients requiring fewer staff, who receive an expedited version of traditional care and who often sit in chairs or return to a common waiting area to await results of investigations or tests. Only when very specific criteria are met can patients go to RAZ.
2 See www.zoll.com/2009/12-14-09-ceremony-honors-toronto-emsrescuenet-epcr/.
3 See www.zolldata.com/web/rescuenettv.html.
4 See www.zoll.com/medical-markets/EMS/ and www.zolldata.com/web/prViewRelease.aspx?id=37.
5 See www.zolldata.com/pdf/rescuenet/RescueNet_ePCR_Suite_Brochure.pdf.
6 Platelets are cells involved in blood clotting and this level (15) is dangerously low.

REFERENCES

Alberta Health and Wellness. 2008a. A Renewed Model for Patient-Centred and Coordinated EMS Services: Transition Handbook. Edmonton: Government of Alberta.
Alberta Health and Wellness. 2008b. Vision 2020: The Future of Health Care in Alberta: Phase One. Edmonton: Government of Alberta. Retrieved www.health.alberta.ca
Anantharaman, G. 2004. Standards and standardization in paramedic protocols. Australasian Journal of Paramedicine 2 (1). [digital only: http://ro.ecu.edu.au/jephc/vol2/iss1/3]
Armstrong, P., H. Armstrong & D. Coburn, eds. 2001. Unhealthy Times: The Political Economy of Health and Health Care in Canada. Don Mills, ON: Oxford University Press.
Armstrong, P., H. Armstrong & K. Scott-Dixon. 2008. Critical to Care: The Invisible Women in Health Services. Toronto: University of Toronto Press.
Asaro, P., & L. Lewis. 2008. Effects of triage process conversion on the triage of high risk presentations. Academic Emergency Medicine 15 (10): 916–22. http://dx.doi.org/10.1111/j.1553-2712.2008.00236.x.
Beveridge, R. 1998. The Canadian triage and acuity scale: a new and critical element in health care reform. Journal of Emergency Medicine 16 (3): 507–11.
Beveridge, R., B. Clarke, L. Janes, N. Savage, J. Thompson, G. Dodd, M. Murray, C. Nijssen Jordan, D. Warren & A. Vadeboncoeur. 1998. Implementation guidelines for the Canadian emergency department triage and acuity scale (CTAS). CTAS16.DOC. Retrieved http://www.calgaryhealthregion.ca/policy/docs/1451/Admission_over-capacity_AppendixA.pdf.
Bird, C., P. Conrad, & A. Fremont. 2000. Medical sociology at the millennium. In C.E. Bird, P. Conrad, A.M. Fremont & S. Timmermans (eds), Handbook of Medical Sociology, 1–10. 5th ed. Upper Saddle River, NJ: Prentice Hall.
Blumenthal, D., & J.P. Glaser. 2007. Information technology comes to medicine. New England Journal of Medicine 356 (24): 2527–34. http://dx.doi.org/10.1056/NEJMhpr066212.
Bond, K., M. Ospina, S. Blitz, M. Afilalo, S. Campbell, M. Bullard, G. Innes, B. Holroyd, G. Curry, M. Schull et al. 2007. Frequency, determinants and impact of overcrowding in emergency departments in Canada: A national survey. Healthcare Quarterly 10 (4): 32–40. http://dx.doi.org/10.12927/hcq.2007.19312.
Bowker, G., & S. Star. 1999. Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
Braid, D. 2006. Hospital “war zone” deaths feared. Calgary Herald, 28 July, B1.
Campbell, M. 2008. (Dis)continuity of care: Explicating the ruling relations of home support. In DeVault, People at Work, 266–88.
Campbell, M., & N. Jackson. 1992. Learning to nurse: Plans, accounts and action. Qualitative Health Research 2 (4): 475–96. http://dx.doi.org/10.1177/104973239200200407.
Campeau, A. 2008. Professionalism: Why paramedics require “theories-of-practice.” Journal of Emergency Primary Health Care 6 (2): 1–7.
Canadian Association of Emergency Physicians (CAEP) Working Group. 2002. The future of emergency medicine in Canada: Submission from CAEP to the Romanow Commission. Canadian Journal of Emergency Medicine 4 (11): 359–438.
Canadian Association of Emergency Physicians (CAEP) & CTAS National Working Group. 2006. Canadian Triage & Acuity Scale Combined Adult/Pediatric Education Program.
Canadian Association of Emergency Physicians (CAEP) & National Emergency Nurses Affiliation (NENA). 2003. Joint position statement on access to acute care in the setting of emergency department overcrowding. NENA Outlook (Spring): 15–19.
Canadian Broadcasting Corporation (CBC). 2007. Report calls Calgary ER services a “perfect storm.” 26 September. Retrieved www.cbc.ca/canada/calgary/story/2007/09/26/chr-quality.html.
Canadian Institute for Health Information (CIHI). 2005. Understanding Emergency Department Wait Times. Ottawa: CIHI.
Clarke, J., & J. Newman. 1997. The Managerial State: Power, Politics and Ideology in the Remaking of Social Welfare. London: Sage.
Cribb, A. 2008. Organizational reform and health-care goods: Concerns about marketization in the UK NHS. Journal of Medicine and Philosophy 33 (3): 221–40. http://dx.doi.org/10.1093/jmp/jhn008.
Daly, J. 2005. Evidence-based Medicine and the Search for a Science of Clinical Care. Berkeley, Los Angeles: University of California Press.
Daniel, Y. 2008. The “textualized” student: An institutional ethnography of a funding policy for students with special needs in Ontario. In DeVault, People at Work, 248–65.
DeVault, M., ed. 2008. People at Work: Life, Power, and Social Inclusion in the New Economy. New York: New York University Press.
Diamond, T. 1992. Making Gray Gold: Narratives of Nursing Home Care. Chicago: University of Chicago Press. http://dx.doi.org/10.7208/chicago/9780226144795.001.0001.
Dopson, S., & L. Fitzgerald, eds. 2005. Knowledge to Action? Evidence-based Health Care in Context. Oxford: Oxford University Press.
Emergency Medical Services Chiefs of Canada. 2006. The future of EMS in Canada: Defining the new road ahead. Retrieved www.semsa.org/Downloadables/EMSCC-Primary%20Health%20Care.pdf.
Epstein, Steven. 2007. Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press. http://dx.doi.org/10.7208/chicago/9780226213118.001.0001.
Folaron, J. 2003. The evolution of Six Sigma. Six Sigma Forum 8: 38–44.
Frank, A., M.K. Corman, J. Gish, & P. Lawton. 2010. Healer/patient interaction: New mediations in clinical relationships. In I.L. Bourgeault, R. DeVries & R. Dingwall (eds) Handbook on Qualitative Health Research, 34–52. New York: Sage Publications.
Griffith, A., & L. Andre-Bechely. 2008. Institutional technologies: Coordinating families and schools, bodies and texts. In DeVault, People at Work, 40–56.
Hall, Kathleen D. 2005. Science, globalization, and educational governance: The political rationalities of the new managerialism. Indiana Journal of Global Legal Studies 12 (1): 153–82. http://dx.doi.org/10.2979/GLS.2005.12.1.153.
Heath, C., P. Luff & M.S. Svensson. 2003. Technology and medical practice. Sociology of Health & Illness 25 (3): 75–96. http://dx.doi.org/10.1111/1467-9566.00341.
Kim, C., D. Spahlinger, J. Kin & J. Billi. 2006. Lean health care: What can hospitals learn from a world class automaker? Journal of Hospital Medicine 1 (3): 191–9. http://dx.doi.org/10.1002/jhm.68.
Kollberg, B., J. Dahlgaard & P. Brehmer. 2007. Measuring lean initiatives in health care services: issues and findings. International Journal of Productivity and Performance Management 56 (1): 7–24. http://dx.doi.org/10.1108/17410400710717064.
Lang, M. 2006. Emergency room probe leads to improvements. Calgary Herald, 5 August, B1.
Lemieux-Charles, L., & F. Champagne, eds. 2004. Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives. Toronto: University of Toronto Press.
Liepert, R. 2009. Patient is sick but getting better. Calgary Herald, 29 June, A11.
Logan, S. 2006. Province lashed for miscarriage. Calgary Sun, 5 August, N5.
Martin, A., & M. Lynch. 2009. Counting things and people: The practices and politics of counting. Social Problems 56 (2): 243–66. http://dx.doi.org/10.1525/sp.2009.56.2.243.
McCoy, L. 2008. Institutional ethnography and constructionism. In J.A. Holstein & J.F. Gubrium (eds), Handbook of Constructionist Research, 701–14. New York: Guilford.
Murray, J.M. 2003. The Canadian triage and acuity scale: A Canadian perspective on emergency department triage. Emergency Medicine 15 (1): 6–10. http://dx.doi.org/10.1046/j.1442-2026.2003.00400.x.
Murray, M., M. Bullard & E. Grafstein. 2004. Revisions to the Canadian emergency department triage and acuity scale implementation guidelines. Canadian Journal of Emergency Medicine 6 (6): 421–7.
Mykhalovskiy, E. 2001. Towards a sociology of knowledge in health care: Exploring health services research as active discourse. In Armstrong, Armstrong & Coburn, Unhealthy Times, 146–65.
Mykhalovskiy, E., & L. Weir. 2004. The problem of evidence-based medicine: Directions for social science. Social Science & Medicine 59 (5): 1059–69. http://dx.doi.org/10.1016/j.socscimed.2003.12.002.
Ng, D., G. Vail, S. Thomas & N. Schmidt. 2010. Applying the lean principles of the Toyota production system to reduce wait times in the emergency department. Canadian Journal of Emergency Medicine 12 (1): 50–7.
Paramedics Association of Canada (PAC). 2008. Home Page. Retrieved http://paramedic.ca.
Pence, E. 2001. Safety for battered women in a textually mediated legal system. Studies in Cultures, Organizations and Societies 7 (2): 199–229. http://dx.doi.org/10.1080/10245280108523558.
Pike, M., & C. Gibbons. 2008. Paramedic shortage: A call for action. National Human Research Review. Retrieved http://www.novascotia.ca/health/ehs/documents/Paramedic HR paper.pdf.
Porter, T. 1995. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.
QMI Agency. 2010. Man dies after seven hour wait in Montreal ER. 20 October. Calgary Sun, N27.
Rankin, J. 2001. Texts in action: How nurses are doing the fiscal work of health care reform. Studies in Cultures, Organizations and Societies 7 (2): 251–67. http://dx.doi.org/10.1080/10245280108523560.
Rankin, J., & M. Campbell. 2006. Managing to Nurse: Inside Canada’s Health Care Reform. Toronto: University of Toronto Press.
Roudsari, B., A. Nathens, C. Arreola-Risa, P. Cameron, I. Civil, G. Grigoriou, R. Gruen, T. Koepsell, F. Lecky, R. Lefering, M. Liberman, C. Mock, H. Oestern, E. Petridou, T. Schildhauer, C. Waydhas, M. Zargar & F. Rivara. 2007. Emergency medical service (EMS) systems in developed and developing countries. Injury: International Journal of the Care of the Injured 38: 1001–13.
Schull, M., J.P. Szalai, B. Schwartz & D. Redelmeier. 2001. Emergency department overcrowding following systemic hospital restructuring: trends at twenty hospitals over ten years. Academic Emergency Medicine 8 (11): 1037–43. http://dx.doi.org/10.1111/j.1553-2712.2001.tb01112.x.
Schull, M.J., M. Vermeulen, G. Slaughter, L. Morrison, & P. Daly. 2004. Emergency department crowding and thrombolysis delays in acute myocardial infarction. Annals of Emergency Medicine 44 (6): 577–85. http://dx.doi.org/10.1016/j.annemergmed.2004.05.004.
Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Toronto: AltaMira.
Swanson, B. 2005. Careers in Health Care. 5th ed. Blacklick, OH: McGraw-Hill.
Timmermans, S., & M. Berg. 2003. The Gold Standard: The Challenges of Evidence-based Medicine and Standardization in Health Care. Philadelphia: Temple University Press.
Weinberg, D. 2003. Code Green: Money-driven Hospitals and the Dismantling of Nursing. Ithaca, NY: Cornell University Press.
Woodard, T. 2005. Addressing variation in hospital quality: Is Six Sigma the answer? Journal of Healthcare Management 50 (4): 226–36.

6 “Let’s be friends”: Working within an Accountability Circuit

Marjorie DeVault, Murali Venkatesh, and Frank Ridzi

New forms of governance are introduced unevenly, sometimes strategically and sometimes opportunistically, in ways that may reveal local managerial creativity and an attunement with discourses of the “new public management” (NPM). As a discourse and mindset of efficiency, devolution, cost containment, and accountability gain traction, local administrators of public sector programs may act artfully in response to changing conditions and demands. They act within the accountability circuits of program legislation, but our analysis suggests that accountabilities may also operate through more diffuse and complex circuits, as local institutions respond to the distinctive demands of their local environments, that is, to lateral as well as vertical demands for accountability. Our discussion focuses on county-level determination of eligibility for benefits in the Chronic Care strand of the US Medicaid program, which provides financial support for health care for people with a very low income. Determining eligibility is a sometimes surprisingly complex matter of examining an individual – more specifically, the “needy” applicant as “textually constituted” (Smith 1990) on the Medicaid application – to check whether she or he fits the circumstances envisioned in the enabling legislation. That legislation, along with the rules and regulations that flow from it, serve as the “boss texts” that provide the criteria for determining eligibility. Those who make the determinations are accountable to higher-ups at the county and to state and federal Medicaid administrations; they must apply the criteria properly, fairly, and in a legally defensible manner. In the county we examined, front-line eligibility staff called Income Maintenance (IM) Specialists also feel the pressure of a lateral demand for accountability
from the long-term care facilities that receive Medicaid payments for eligible recipients. Facilities have pushed front-line Medicaid eligibility staff to process applications and render eligibility decisions in a timely manner because delayed reimbursement can create or exacerbate cash flow challenges for facilities.

Our analysis examines the efforts of Medicaid staff in one New York county office to respond to these multiple pressures. We track changes that have been ongoing since the late 1990s, drawing upon fieldwork begun in 2000–2001 and continuing since 2005. We suggest that the strategies adopted by local administrators in the county we studied have had the effect of inventing a new kind of public-private partnership. Nursing homes have become partners in the work of front-line case processing, and the partnership has speeded up payments to those facilities that participate. The interpenetration of public and private interests we see in this case, and the business tropes of efficiency and customer service that have accompanied the changes, may be characteristic of the kinds of arrangements that follow from the new governance strategies.

The Textual Constitution of Eligibility

US public assistance programs have long been characterized as complex “rule-bound, paper-driven” bureaucracies (Lens & Pollack 1999: 63). Stemming from federal legislation in the 1970s on “separation of services,”1 when IM staff replaced social workers in determining eligibility in such programs in the United States, (paper) documents moved to centre stage. Where a social worker might have determined eligibility based on an individualized and holistic assessment of the applicant, taking into account her embodied, local, and personal particulars (to use Smithian terminology), front-line IM staff make decisions on the basis of standardized financial metrics. This shift from social casework-based evaluation to standardized criteria and metrics in determining eligibility has meant that since the 1970s verification of the applicant’s need has become primarily financial and textual: “Perhaps more than any other change, this increased reliance on documentation and verification symbolized the [US public assistance] system’s metamorphosis from the personalized social casework-approach of the past to the modern welfare bureaucracy” (ibid.).

As a consequence, Chronic Care applications are heavily text-centred and text-mediated. Before deciding eligibility, front-line eligibility staff are required to perform an exhaustive verification of the applicant’s
income and financial resources going back three years. An application deemed to be both complete and valid by Medicaid for eligibility purposes would consist of a duly completed application form and all required evidentiary documentation. As one front-line eligibility worker put it, she is trained to take literally every entry on the application to be a claim made by the applicant, a claim that must be verified (see also Zimmerman 1970). She explained: “We start off looking at it as everything is a claim until we have the paperwork to document it. And then it becomes a fact. Once we get satisfactory documentation then it flips and becomes a fact.” It may take an applicant many weeks to assemble this material. One of our respondents used the term “information traffic jam” to describe the confusion over rules and regulations applicants experienced in assembling the Medicaid application for Chronic Care benefits. A New York State report observed that applicants not infrequently “end up lost in the Medicaid maze” (NYAHSA 2003: 12). The eligibility determination process for Chronic Care intake (Figure 6.1) is in use generally by Medicaid administrations in the United States. This complex process begins when the applicant submits her application to Medicaid and schedules the eligibility interview, which is a required step in the process and usually occurs face-to-face between a front-line eligibility worker and an applicant (or authorized representative) at the county Medicaid offices. The worker assigned to the case starts the interview by reviewing the case file (which includes the application). She may ask the applicant a few or many questions depending on how complete and valid the application is. At the end of the interview, which is usually scheduled for one hour, the worker writes up a pending letter, which lists all missing documents and hands the applicant the letter. The applicant is allowed up to four weeks to procure the missing documents but may request an extension to the pending period. If at the end of this period the application is still incomplete in any way, the worker mails out an extension letter listing missing documents and gives the applicant up to an additional three weeks to complete the application. An applicant may receive more than one extension letter over the course of the process. When she deems the application to be both complete and valid, the IM worker reviews the evidence and “writes up” the case, stating her decision; she then completes a narrative, giving her reasons for it. Approved cases are said to be opened, and applicant and facility receive a budget letter authorizing Medicaid billing for reimbursement. Applicants can appeal negative (denied) decisions to the county Fair Hearing (legal)
Figure 6.1. Eligibility Determination Process
Department and await its verdict, which is final. Two other decisions are possible. If the applicant fails to provide long-pending documentation even after multiple extension letters, the IM worker may decide to close a case for “failure to complete,” or the applicant may withdraw her application (or it may be withdrawn, owing to her demise), in which case it is removed from consideration for eligibility. The italicized terms refer to actions on (and involving) specific paper texts or ensembles of such texts. For applicants, this process can be quite daunting; in complex longterm care cases, the application with all its supporting documents may amount to a file six to nine inches thick. Preparing such a file, completely and accurately, requires a great deal of skilled “application work” (D.E. Smith, personal communication). When this work is performed by applicants themselves, it is rarely acknowledged as work,2 perhaps because it is uncompensated and perhaps also because applicants are typically “one-shotters” (Heimer 2006), who undertake the work only once. Thus, applicants and front-line staff are positioned quite differently in this as in other bureaucratic contexts. It is important to note that the reforms we discuss below were directed towards the preparation and handling of applications and did not target the fundamental course of the eligibility determination process, as shown in Figure 6.1. All Medicaid applications must be verified and evaluated before eligibility can be determined, and the applicant must be interviewed. This highly institutionalized process is in wide use by Medicaid administrations everywhere and is under the control of Medicaid’s “boss texts” – which is to say it is not easily changed. What the reformers we studied targeted instead was the application, which is the key input into the process and is front and centre at the eligibility interview (Step 1). It is also the sole basis on which eligibility is determined. In selling the reforms and providing their set of recommendations to facilities, reforms and recommendations concerning the reinvention and repositioning of the role of the Medicaid caseworker, and the latter’s focus on assembling complete and valid applications, the Unit’s managers were saying to the facilities: if the application is valid and complete (or as close to being so as possible) at Step 1, Steps 3 through 5 of the process could be eliminated or accelerated to cut out up to seven weeks from the process. Once the application is approved, Medicaid will reimburse the facility, and such reimbursement may be retroactive – but only for 90 days. When the approval process takes much longer than that, the facility will have provided care for which
there will be no reimbursement. Thus, a time saving of up to seven weeks can have significant positive implications for their facilities’ financial bottom line, in particular their cash flow.

Medicaid and Long-Term Care in the United States

We situate our analysis in the broader context of long-term care in the United States, which is based on a private-pay system; people who have resources must seek long-term care in a private marketplace and pay for it themselves. Because long-term health care can be so expensive in the United States, many Americans who enter nursing homes on a private-pay basis quickly exhaust their private resources and become impoverished and then look to public assistance programs like Medicaid to pay for the remainder of their care. They are eligible for public assistance only when they have exhausted those resources. The Medicaid program (or “Medical Assistance” as it is called in the group of programs that make up US public assistance programs in New York State) is intended to provide financial assistance to persons whose income and assets fall below what they require for necessary medical care. In short, Medicaid is designed for the very poor. To qualify for Medicaid, applicants must first be found financially eligible. In New York State, county Medicaid, located in the Department of Social Services (DSS), is the local government agency that determines eligibility. County Medicaid administers the program under state and federal mandates.

Medicaid legislation looks ahead, as do those who may need to rely on the program, towards this trajectory of financial impoverishment. The law includes provisions for sheltering assets such as a family home where a spouse is still living, and some individuals who foresee “spending down” their assets make gifts to children or others in advance of that outcome.3 The program legislation has been crafted so as to allow, but also to limit sharply, such attempts to shelter assets; financial planners who work with relatively affluent populations have invented a field of work – labelled Medicaid Estate Planning – oriented towards providing advice about how those with assets can protect them either through such gifting or the establishment of various kinds of trusts. The literature is mixed on the consequences of these wealth transfer strategies. While some analysts charge that they constitute “legal welfare fraud for the middle class” (Wegner & Yuan 2004), others estimate that the numbers and sums involved, on average, are relatively modest (Lee, Kim & Tanenbaum 2006). In any case, these provisions and the strategies that
have grown up around them provide part of the explanation for the complex application process outlined above, which prescribes such an extensive, forensic audit of the applicant’s circumstances.4 The events we analyse here involved significant changes in the work routines of program staff, made within the constraints of the rigidly controlling “boss texts.” The changes were introduced over time by a new Chronic Care administrator, who in 1997 was charged with improving the difficult relations between the county Medicaid agency and local facilities. Through a local executive council of long-term health care organizations, established to share information, the nursing homes had brought pressure to bear on county Medicaid to reduce a large and growing backlog of pending cases. The processing bottleneck was a matter of considerable urgency for these facilities; they complained that they were admitting residents who appeared to be eligible for Medicaid support but whose eligibility remained undetermined for many months. Once a resident was deemed eligible, the facility could apply for reimbursement for only three months of care already provided. Thus, delays in processing could have very significant budgetary effects, in particular as they related to the facility’s operating cash flow. Our analysis is based on fieldwork conducted by Venkatesh since 2005. The data include more than 30 open-ended interviews of 60–90 minutes with county Medicaid and facility directors and staff; six short interviews with applicants and family members; observations of several training sessions for front-line staff in the facilities; and hundreds of pages of program documents, including the minutes of the task force established by the incoming county Medicaid director in 1997 to look into the application processing backlogs (the task force’s recommendations produced the reforms and the changes instituted by county Medicaid starting in 1998). There are also data from a mail survey designed collaboratively with county Medicaid to gather facilities’ perspectives on eligibility determination postreforms. Finally, we draw on data from an earlier study, conducted in 2000–2001, when county Medicaid piloted a video interviewing procedure for program intake; those data include formal and informal interviews and program documents related to video interviewing. We have written elsewhere about the specifics of these reform efforts in the County (Venkatesh, DeVault & Ridzi 2010a, 2010b), focusing on the institutional activities that produced the reforms. Here, we step back from those detailed analyses to consider how these front-line activities are reorganizing a local institutional complex of long-term
care. The institutional ethnography approach provides a fruitful way of illuminating how the activities undertaken at the front line both are shaped by the text-mediation of “ruling relations” (Smith 2005) and also activate those relations in ways that are consequential elsewhere.

Reorganizing the Work of Eligibility Determination

In 1997, the county’s new Medicaid director set up an internal task force to look into the complaints from facilities and to propose a solution to the case-processing delays and backlogs, and the task force’s recommendations provided the impetus for the reforms discussed here. First, Medicaid Chronic Care applications would be processed by a specialized entity to be called the Medicaid Chronic Care Unit (hereafter the Unit), established in 1998 as a specialized agency within county Medicaid that would be administratively separate from Community Medicaid. Henceforth, the Unit would handle long-term care applications; the Community Medicaid group at the county would handle short-term care requests. Medicaid requires of Chronic Care applicants (relative to the Community Medicaid) a correspondingly higher volume of evidentiary documentation attesting to their “need.” The burden on the front-line staff determining eligibility is also correspondingly greater for Chronic Care cases, as they must thoroughly verify the documentary evidence provided before approving or denying eligibility in a legally defensible manner.

During the period we consider, the Unit’s managers adopted an ensemble of strategies to speed up eligibility determination, focused not only on the Unit’s eligibility staff but also the front-line staff at long-term care nursing homes. These strategies included the promotion of a new cooperative approach to eligibility determination that designated facilities as partners; the establishment of a regular “pre-screening” training program (offered monthly and so called because it purported to train caseworkers on how to assemble complete and valid applications); a program of biannual “facility” meetings (organized by the Unit as a way of keeping lines of communication open with facilities) designed to recruit caseworkers to effectively fulfil their new role as collaborators with county eligibility staff; and finally, the production of a set of support texts that guided the preparation of a properly filed application. As these changes took hold, the Unit introduced Internet video interviewing to enhance the efficiency of the eligibility interview (at which the Unit staff reviewed an application) and to support the

reforms already in place. In the remainder of this section, we provide an account of each of these changes and assess their consequences. Intake in Medicaid and other US means-tested programs has in the past often been a matter of harsh, personalized scrutiny. In the present case, the Medicaid director’s new mandate to the task force in 1997 and later to the Unit’s managers and staff was that a hyper-sceptical, even forensic, approach should be replaced by a more sympathetic and cooperative spirit.5 She adopted the slogan “Let’s be friends” in order to convey to her own staff and those in the nursing homes what she intended. Rather than leaving candidates themselves to file applications – which were inevitably incomplete, at best – she wished to provide concrete, useful guidance as to how such application work should proceed. Most applicants are not only “one-shotters,” but also require assistance with the application process, owing to the health or cognitive problems that have brought them to long-term care; thus, the reforms were designed to recruit (and allow) nursing home casework staff to provide more assistance with application work than in the past. While such a procedure would introduce new tasks for facility caseworkers, taking up those tasks would be in their own interest, since complete and valid applications were meant to lead to quicker eligibility determinations. The slogan “Let’s be friends” was a headline for the business tropes that imbued the new idea of partnership between the Unit and facilities. The Unit administrator talked of the reforms as improving “customer service,” where “customer” referenced both the applicant and the facility (and merged their interests). Unit staff spoke to facilities in unprecedented ways, displaying a keen awareness of the facilities’ concerns for their financial viability. Efficiency and accountability were meant to lead to more timely eligibility determination, a result seen as benefiting not only applicants, but also the bottom-line financial interests of the facilities. In the Unit, eligibility work is done by specialized paraprofessional staff – officially referred to in the county’s civil service job codes as “Income Maintenance Specialists” – in recognition of their specialized skills (interestingly, the official job title of eligibility staff at Community Medicaid continues to be “IM Workers”). The Unit’s IM Specialists are specifically trained in the accountability circuits of application processing, and they develop expertise in the kind of asset evaluation required in Chronic Care processing. While both groups of IM workers are concerned solely with financial eligibility, in keeping with the separation of

services philosophy that has separated financial and social work functions, the establishment of the IM Specialist position acknowledges the additional level of expertise required for Chronic Care case processing and eligibility determination. Unit managers began to encourage a parallel specialization (and a parallel separation of services) in the nursing homes and urged that a specialized and dedicated caseworker appropriately trained in the intricacies of Chronic Care “pre-screening” and application assembly be appointed to assist patients applying for Medicaid. They further recommended that the caseworker report not to the facility’s social services office but to the business office, based on the argument that Medicaid casework was income maintenance work, not social work. As the reforms went forward, all but one of the four facilities we studied did indeed reinvent and reposition the casework function as recommended by the Unit, placing the Medicaid caseworker within the business office. At pre-screening training sessions, the trainees6 worked through materials collectively called the “application packet,” learning how the Unit’s eligibility staff would read the application so that they understood how to file it properly with the appropriate supporting documents. Included in the application packet was an array of “support texts” designed by Unit managers and staff, which could be used as worksheets guiding the preparation of various parts of the application. These locally designed support texts not only indicated the information that was needed for the creation of complete and valid applications, but also provided instructions about how the required information could be obtained, with the appropriate forms of evidentiary documentation. For example, one piece of documentation required of applicants who have not filed tax returns during the several years preceding their application (typically because they do not receive enough income to require filing) is a US Internal Revenue Service form documenting that the applicant was not required to file a tax return. A one-shotter applicant is unlikely to know how to obtain the form, or even that it exists, whereas for Unit staff and trained caseworkers at facilities requesting it is a simple matter. A Unit-designed support text – called the “Income Tax Returns” form – provides step-by-step instructions, such as “If you filed an Income Tax Return, but do not have a copy of the return, you can call [phone number here]” or “If you did not file a return, you must call [phone number here].” Once the application has been filed with the Unit, the applicant is called to an eligibility interview, in which an IM Specialist reviews the

application, attempting to clarify ambiguities and fill in missing information. Frequently – especially before the reforms were effected in 1998 – the outcome of the eligibility interview would be to “pend” the application, requiring the applicant to gather additional documentation to complete it. Again, the applicant’s lack of prior experience with the process could lead to avoidable delays in responding to such queries, and a long cycle of pending and re-review might ensue before a final determination could be made. If a trained facility caseworker were available, the Unit’s argument went, she could assist with application assembly, pushing the applicant to begin to gather the required evidentiary documentation or gathering it herself on behalf of the applicant. A few years after the initial reforms, the Unit began to offer a videointerviewing option in facilities with trained caseworkers. Applicants who chose video eligibility interviewing did not need to leave the facility for the interview, and the caseworker who was helping to assemble the application could easily sit in on the interview and be a participant in it. Thus, the three parties involved in the application assembly and processing work – the Unit’s eligibility staff, the applicant, and the caseworker – could have a real-time conversation centred on the application in order to resolve questions more quickly and determine with greater accuracy exactly the documentation that was still needed for a complete and valid application. In effect, these reforms reorganized the relations of application work. What had previously been a two-way interaction involving the applicant and Medicaid staff became a triangular interaction that included the applicant and front-line staff from both organizations. The “Let’s be friends” idea suggested that applicants would be seen as presumably worthy individuals in need of assistance, rather than as potential scam artists bent on getting what they could. But “Let’s be friends” also signalled a new kind of cooperation between the Medicaid Unit and the facilities, with a lateral accountability circuit, and this new organizational “friendship,” with its “bottom-line” logic, would appear to be the more durable aspect of the changes discussed here.7 The new consensual division of labour post-reforms was mutually beneficial: facilities devoted the resources of their front-line staff – the caseworker – to assembling valid and complete applications, enabling the Unit’s frontline eligibility staff – IM Specialists – to process applications and render a determination in a timely manner. The new cooperative relation between the Unit and facilities rested on this localized lateral accountability circuit: if facilities kept up their end of the bargain as partners,

the Unit could honour its commitment to timely case processing and eligibility decisions. The Unit and most local facilities see the reforms as successful.8 It remains to be seen, however, whether these reforms can be sustained. The Deficit Reduction Act (DRA) of 2005 has introduced a new layer of “boss text” regulations for Medicaid application assembly and processing, and the DRA’s stringent new rules and restrictions on the qualification of assets for long-term care eligibility have alarmed facilities in that their ability to recover costs of care may be reduced as a result. This has prompted facilities to look very carefully at their costs in general and pursue containment strategies with ever more vigour, and they have begun to grumble at taking on so much of the up front burden of pre-screening the applicant and application assembly, a burden they feel should appropriately be borne by the Unit. In addition, the “Let’s be friends” conceptual frame and the reforms that it has helped bring about are associated strongly with individual leaders – the county Medicaid director in 1997 and the Unit’s inaugural supervisor, who was appointed to head up the Unit when it was established in 1998. The supervisor retired in 2008 (the Medicaid director had retired earlier). In that year, at a special meeting convened by the long-term care executive council to discuss implications of the DRA for Medicaid eligibility, the new supervisor reiterated the importance of the facilities’ doing their part in their own interest, given the new rules and the additional delays these might cause if applications were incomplete or invalid. Her appeal to the facilities foregrounded the partnership relation: “although our ultimate goals differ (ours being a final case determination and yours being paid bills), we have a joint investment in our mutual success in getting that case opened [approved] promptly ... I use the terms “joint” and “mutual” because ours is a partnership ... And that partnership has been the premise for the Chronic Care Unit from the start ... Most importantly, we need you to stay on board with us ... to revitalize our efforts and to renew that trust in our relationship once again.” What we see in this reform effort is a delicately balanced set of accountabilities and their fragility in a changing political and economic context. The reforms have adjusted accountabilities in several ways. First, the Unit’s “Let’s be friends” frame produced strategies devised to assist facilities to textually constitute the applicant in the most effective way possible within the constraints of the Medicaid “boss” texts. The Unit was saying to facilities: we’ll help you be effective partners by helping you pre-screen applicants and assemble valid and complete applications.

The new division of labour on which this strategy rests and the mutual accountability implied in this division mitigate to some degree the dominance exercised by Medicaid's boss texts. If these boss texts insist that the only legal basis on which eligibility can be decided is the application, then the secondary, local circuit provides the basis for effectively constituting the applicant on the application to improve her chances of being found eligible. Thus, the secondary circuit helps to moderate the experience of ruling relations by the applicant and the facility.

Looking at the secondary circuit and the mutual accountability on which it rests, we see that Unit managers were coming to understand the facilities as "customers" to whom the Unit was accountable. In setting up the Unit and instituting the changes that stemmed from it, the county was responding to pressure from facilities. The Medicaid director and Unit managers used the term "customer service" to refer to a value they wished to pursue in the Unit's relations with facilities in regard to eligibility determination. At the same time, the Unit was pushing facilities to reorganize their Medicaid casework function and tie it to their financial bottom line. Their detailed recommendations to facilities on reinventing and repositioning the casework function further emphasized this tie. The facilities were certainly aware of the tie (and as a result, they brought pressure to bear on county Medicaid). However, the Unit offered to help them concretely with their cash flow concerns, and this was a first. In its legislative outline, Medicaid is intended to help the poor defray their health care costs, not to help facilities with their financial bottom line. Thus, the Unit was breaking new ground in this regard. In the next section, we analyse more closely how these changes came about and how they were implemented. Then we discuss their consequences and what they can teach us about governance today.

Making Change on the Front Line

How were these reforms accomplished? Cost-cutting is a "fact of life" in contemporary organizations, and this "fact" has prompted long-term care facilities to take some unprecedented steps in order to better manage their costs and risks. They have begun to cooperate much more extensively than in the past with both hospitals, on the one hand, and the Unit, on the other. The genesis and form of this public-private nexus is unusual. In this county, cash-strapped facilities united under the aegis of the long-term care executive council to bring pressure on

the Medicaid Unit to process applications in a timely manner in order to speed up Medicaid reimbursement. That development gave the county an opening to propose a new division of labour in case processing and to urge facilities to cooperate in the new arrangement in order to help themselves.9 Our discussion above suggests that the Unit's reform discourse ("Let's be friends") and the alignment of facility interests with the new division of labour were part of the story of change. But the new procedure was held in place textually: an ensemble of support texts was invented and introduced into application and processing work to support new routines and practices. The Unit's pre-screening training program for the caseworkers at facilities was organized explicitly around the application form as a controlling text; trainers would "walk through" the application with caseworkers participating in the session, explaining exactly what the front-line eligibility staff would be looking for and how claims must be supported in the evidentiary documentation submitted along with the application. As the trainers walked through the form, they also pointed to the support texts produced by the Unit to guide and improve application assembly work. Given the thoroughly text-mediated character of Medicaid's complex provisions, it is not surprising that change required yet more texts (though one certainly might argue that there are real ironies in the adoption of a new layer of textual work to address the complexities of the existing documentary process). The fact that innovation occurs in such textual forms provides evidence of the ways that texts coordinate front-line action.

The Unit's ensemble of support texts intended for caseworkers can be sorted into four categories: lists, task aids, information sheets, and templates. The checklist, for example, specifies the kinds of evidentiary documentation required for a complete and valid application. Task aids offer instructions for completing a task or a step in a task, such as the "Income Tax Returns" form, a task aid mentioned earlier. A third category of support text provides supplemental information on completing the Medicaid application, such as the sheet on "Information Notice to Couples with Institutionalized Spouses." Templates are the most coercive in intent and are designed to restrict user input to checking boxes, circling options, and filling in blanks, often requiring a stark "yes" or "no" response when there may well be a complex story to tell. Templates may provide a small number of delimited spaces for free-form written input. Examples of templates include consequential texts such as the "Information Release Form," which enables the facility

to be formally included in the Unit’s interaction with the applicant over the course of eligibility determination. Most important, the support texts – like other bureaucratic documents – are meant to be used by any occupant of an organizational position. The text guides action and thereby ensures that any trained facility caseworker will be able to assist a resident in preparing the application. Front-line staff at the Unit and their managers certainly know this function of texts, as their own actions are constrained by the state application for public assistance (including Medicaid). Front-line staff in any county are expected to apply the same procedures and rules in consistent ways so that the state’s public-assistance resources are distributed equitably. And if front-line personnel do not perform such (reasonably) consistent intake work,10 they face sanctions from administrators at the state and federal levels. One key support text provides the foundation for the reorganization of application work: the Information Release Form mentioned above. Prior to the reforms of 1998, facilities could not be directly involved in the process of eligibility determination. They had always been powerfully impacted by the outcome, of course, but could do little more than file the application on the applicant’s behalf and (anxiously) await the verdict. For privacy and confidentiality reasons the Unit dealt directly with the applicant over the course of the process; the caseworker was not in the picture and the applicant was not obliged to update the caseworker; nor did the applicant have to provide copies of the Pending and Extension Letters to the caseworker. Post-1998, as a logical follow-up to their new designation as partners in determining eligibility, the Unit brought the facilities directly into the process by means of the Information Release Form. Eligibility determination now involved triadic interaction of the IM Specialist, the applicant, and the caseworker as direct participants in the process, and this new paper form (in the template format) has been the legal enabling instrument. Such a form did not exist in 1997, and the task force quickly recognized the urgent need to develop one. As the minutes of the task force noted: “By securing the appropriate [information] releases, the Unit could provide the Facilities with copies of explicit pending letters, who, in turn, could aid client/rep. [the client’s representative].” At the pre-screening training one of us observed for this research, the trainer stressed the importance of arranging for this form to be signed as early as possible upon the patient’s admission to the facility, regardless of whether she intended to apply for Medicaid benefits: “Upon admission

most nursing homes have someone sign almost immediately – even if they say that they are going to be privately paying – a release of information form ... In most places they've found that upon admission a patient or the family signs this and you have it on file. The minute you guys need to call and talk to us about this person applying you have written permission from the family to do so." The Information Release Form allowed the caseworker to access copies of the Pending and Extension Letters. Now she had a basis for knowing where the application was located in the process, and she could track down, or assist the applicant to track down, required documents, thus avoiding delays due to Pending and Extension Letters. This text and its uses highlight the ambiguity of the reforms from the patient's point of view. While in most cases patients and their family members are likely grateful for assistance with a dauntingly complex and unfamiliar bureaucratic process, the price of that assistance is to cede control over information about themselves and their circumstances.

Together, the ensemble of support texts produced by Unit trainers provides a neat illustration of the thoroughly text-mediated character of eligibility determination. These artful products crafted by Unit staff point to the textual terrain on which they must operate, and they draw facility staff into an interlocking virtual space. While the patient is the official applicant for benefits, the process of eligibility determination has increasingly become an exchange of texts between organizations.

Realignments of Public and Private Sector Work

To the extent that the rearrangement of work we examine here has been successful, we suggest that it realigns the state's provision for long-term care so as to address more adequately than in the past the concerns of private sector facilities. Thus, it represents – like other analyses in this book – a change that seems a straightforward matter of efficiency and better service, but that in fact has implications with a broader reach; in effect, it produces a new form of public-private partnership.

The reforms never took hold in this county's major public nursing home, and without doubt there are multiple explanations for the facility's resistance. The administrator did assign a staff member from the facility's business office to take on the new, specialized work of application support; that worker grieved the assignment and the arrangement fell through. The facility did not adopt the layered staffing approach we saw in the private facilities, which relied on specially

trained caseworkers to handle the Medicaid pre-screening training and application assembly work, work that is primarily financial in nature (recall that front-line staff at the Unit doing eligibility determinations are called Income Maintenance Specialists). Not surprisingly, perhaps, the professional social work staff in the public facility have resisted the addition of Medicaid casework to their other duties, characterizing Income Maintenance work as demeaning, low-level clerical and technical work. (Apropos these complaints, it is interesting to note that the federal legislation on services separation appears to concur with this characterization, labelling IM workers “eligibility technicians.”) One might analyse these developments as the result of managerial failures or social workers’ attempts to resist erosions of their professional authority and status; the contribution of varying fiscal realities at the public facility may also be significant. As the facility of last resort for the impoverished, this facility does not have the option of discharging patients who cannot pay. It is surely in the interest of the public facility to increase the flow of reimbursements from Medicaid, but these funds are supplemented by additional county and state budget streams. Unit managers expressed great frustration with the public facility’s resistance to change (and with the poorly prepared applications they continued to submit); that frustration arose not only in regard to the (negative) service implications for applicants, but also from the financial implications for the facility. The applicant has a rather “ghostly” presence in the eligibility determination process as the target of benefits but not the direct recipient of them. Overall, reforms instituted by the Unit seem to have had positive effects for applicants and their families, who often assist them with the process and who receive assistance with care of their relative (the applicant) if their applications are successful; certainly, the reform allows for a quicker resolution of the financial uncertainties of nursing home care. However, applicants, besides being one-shotters, are unorganized and do not have the collective power of the facilities to bring pressure on the Unit to make or retain such changes. They benefit primarily when their interests coincide with those of the organizations involved. The Unit has made some efforts towards, for example, outreach and application assistance for those seeking Chronic Care benefits delivered through home services. But in that arena, the Unit lacks organizational partners with sufficient resources to provide the kinds of cooperation that longterm care facilities have been willing to undertake.

Conclusion

Medicaid is one of the major entitlement programs that have contributed to an increasingly unsustainable budget deficit in the United States, and the program undergoes continual scrutiny and revision with an eye to cost containment. Because the program is open-ended, and every eligible applicant must be enrolled, it is increasingly costly for states, despite the federal cost-sharing. Thus, in the recent period of fiscal constraint, the states have experimented with strategies meant to reduce expenditures, often involving more stringent regulations for eligibility determination – strategies of diversion and delay that typically shift care work onto family members (Harrington Meyer & Storbakken 2000). Recent federal legislation such as the Deficit Reduction Act includes regulations reducing the possibilities of wealth transfer that are meant to save $6.3 billion over 10 years (Iglehard 2007); these new provisions will involve additional work in eligibility processing, as we noted above. In this overall context, the reforms we studied stand out, because they seem designed to facilitate, rather than impede, Medicaid eligibility.

Our analysis suggests that these front-line reformers are working within two accountability circuits. On the one hand, they are accountable to the "boss text" of program legislation: they must work within the parameters of enabling regulations. At the same time, they have responded to a different set of local accountabilities, working with those in other organizations (the hospitals and long-term care facilities in the county) also charged with the provision of care for the elderly. By responding not only to their legal mandate, but also to the concerns of the facilities that depend on Medicaid reimbursement, these reformers have forged and strengthened a local, horizontal accountability circuit that links organizational entities in the county in unprecedented new ways. They have enlisted trained facility caseworkers to assist overburdened eligibility staff at the Unit in a new division of labour; they have also reorganized Medicaid intake work to align it with the business interests of the facilities. In return for this responsiveness to facilities, the Unit asked those organizations to be accountable to it by cooperating in the new division of labour. Unit managers told the facilities, in effect: "We can do our job better only if you help us out in the ways we recommend." That both entities were willing to sign on to the new arrangement suggests that both were willing to adopt similar values of efficiency in and accountable performance of their interlocking work processes.

The analyses collected in this volume explore the contours of "the new public management" – understood as "changes in governance in the public sector that emphasize improved accountability, efficiency and effectiveness" that constitute "almost a revolution" (Smith 2009). Many of the analyses of these changes point to the ways in which they drive reductions in public expenditure or cuts in services. The local managerial innovations we discuss here seem to arise from and draw upon the spirit of efficiency, devolution, cost containment, and accountability that is part of this new way of thinking about the public sector. However, these reforms have arisen locally, and they may well stay local in scope. It remains to be seen whether such reforms can be sustained in a time of ever-rising health care costs, ever-tightening budgets confronting facilities, and the ever-increasing need for services demanded by individuals seeking or needing institutionalized long-term care, and what long-term consequences the reforms may have for the landscape of long-term care in the county.

NOTES

1 In 1972, the federal Department of Health, Education and Welfare mandated what has come to be known as "separation of services," "requiring states to have two separate and autonomous organizational units in their public welfare agencies ... one unit would be responsible for income maintenance, the other for social service" (Courtney & Dworsky 2003). Although the mandate was repealed in 1976 as part of the "new federalism" of the Nixon administration, separation of services remains common throughout the US social welfare system.

2 In some other arenas, however, advocates have recognized these efforts and organized to support the work. See, for example, the help sheets produced by the BC Coalition of People with Disabilities, at www.bccpd.bc.ca/publications/bcdisabilitybenefits.htm.

3 Some of these allowances have recently been made more stringent under the provisions of the US Deficit Reduction Act of 2005, whose provisions took full effect in 2009.

4 Apropos these specialized forensic procedures, in 2005 Chronic Care Medicaid managers at our research site retained the services of a "forensic accountant" to "assist the front-line eligibility staff in determining and interpreting the flow and transfer of assets in a wide variety of financial scenarios" requiring skills and specialized knowledge not available at the agency. The county's need for such forensic services may be expected to increase under the Deficit Reduction Act's stringent new rules restricting the types of financial assets that may qualify for exemption under Medicaid (in particular those assets that the applicant had transferred to a beneficiary in order to impoverish herself and qualify for Medicaid benefits).

5 Historically, US public assistance eligibility staff were trained to use "thoroughgoing skepticism" and a "hard-headed commitment to establishing the 'facts of the matter' (as opposed to unsupported claims)" before determining eligibility (Zimmerman 1969: 331). What the county Medicaid director pushed for was not to dilute this "investigative" stance, but to involve the facilities – in their new capacity as partners in eligibility determination – in appropriately documenting claims so that applications from their patients were both valid and complete.

6 Attendees at these sessions are typically facility caseworkers; other facility managers and staff from the business and social services departments and estate planning attorneys may also participate.

7 In a sense, one might argue that this administrator's key reform was to systematize cooperation among workers across organizations. Hamilton (2009) found that lower-tier women clerical workers in a large bureaucratic organization developed informal cooperative strategies that she labels "horizontal coordination" (as opposed to the vertical coordination that follows formal lines of authority). Hamilton attributes the patterns she found to the gender composition of the organizational workforce, and it may be significant that the creative administrator we studied adopted what some might label "womanly" strategies, to be implemented by largely female front-line staff. On the other hand, we have been especially interested in the text-mediated character of the process of eligibility determination and the text-mediated reform strategies adopted in order to change that process, and it may also be significant that those working with texts operate in a distinctive organizational niche where communication is especially useful, whether it is about what a brief notation really means or how to resolve a complex situation into one or another check-box response.

8 As one-shotters, applicants have no basis for comparing pre- and post-reforms experiences of going through the eligibility determination process, because typically they apply only once.

9 Indeed, at least one facility had suggested that they should be given the authority to make eligibility decisions – a shift in authority relations that would have been radical and would have required significant revisions in the enabling legislation. The Unit's reform strategy left the authority relations in place: none but the Unit could determine eligibility. But the reforms gave facilities a greater sense of control over the applicant's "textual constitution" on the Medicaid application, on the sole basis of which the Unit's front-line staff would decide eligibility.

10 What may be taken as "consistent" in any circumstance is, of course, a collective and consensual construction of practitioners, as Cicourel (1964: chap. 3) demonstrates by opening up the "black box" of survey interviewing.

REFERENCES

Cicourel, A.V. 1964. Method and Measurement in Sociology. New York: Free Press.
Courtney, M.E., & A. Dworsky. 2003. Comparing welfare and child welfare populations: An argument for reintegration. Paper presented at the Joint Center for Poverty Research Conference, Child Welfare Services Research and its Policy Implications, Washington, DC, 20–21 March.
Hamilton, J.L. 2009. "Caring/Sharing": Gender and horizontal co-ordination in the workplace. Gender, Work and Organization 18 (s1): e23–48.
Harrington Meyer, M., & M.K. Storbakken. 2000. Shifting the burden back to families? How Medicaid cost-containment reshapes access to long term care in the United States. In M. Harrington Meyer (ed.), Care Work: Gender, Labor, and the Welfare State, 217–28. New York: Routledge.
Heimer, C.A. 2006. Conceiving children: How documents support case versus biographical analysis. In A. Riles (ed.), Documents: Artifacts of Modern Knowledge, 95–126. Ann Arbor: University of Michigan Press.
Iglehard, J.K. 2007. Medicaid revisited: Skirmishes over a vast public enterprise. New England Journal of Medicine 356 (7): 734–40. http://dx.doi.org/10.1056/NEJMhpr066650.
Lee, K., H. Kim & S. Tanenbaum. 2006. Medicaid and family wealth transfer. Gerontologist 46 (1): 6–13. http://dx.doi.org/10.1093/geront/46.1.6.
Lens, V., & D. Pollack. 1999. Welfare reform: Back to the future! Administration in Social Work 23 (2): 61–77. http://dx.doi.org/10.1300/J147v23n02_05.
New York Association of Homes and Services for the Aging (NYAHSA). 2003. Preserving Long Term Care for the Long Term Future. Albany: NYAHSA.
Smith, D.E. 1990. The Conceptual Practices of Power: A Feminist Sociology of Knowledge. Boston: Northeastern University Press.
Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: AltaMira.
Smith, D.E. 2009. Briefing notes for "Governance and the Front Line" workshop.
Venkatesh, M., M.L. DeVault & F. Ridzi. 2010a. Paper work (Part 1): Changing Medicaid in West County. Manuscript.

Venkatesh, M., M.L. DeVault & F. Ridzi. 2010b. Paper work (Part 2): Changing healthcare facilities in West County. Manuscript.
Wegner, E.L., & S.C.W. Yuan. 2004. Legal welfare fraud among middle-class families: Manipulating the Medicaid program for long-term care. American Behavioral Scientist 47 (11): 1406–18. http://dx.doi.org/10.1177/0002764204265341.
Zimmerman, D.H. 1969. Record keeping and the intake process in a public welfare agency. In S. Wheeler (ed.), On Record: Files and Dossiers in American Life, 319–54. Albany, NY: Russell Sage.

SECTION THREE

This section contains the first of the workshop dialogues. Janz, Nichols, Ridzi, and McCoy sketched out the framework for the dialogue during the workshop itself. After the workshop, they wrote the individual papers and collaborated on the overall structure and introduction. Their dialogue brings together studies of the institution of accountability circuits in various government-funded community programs designed to provide narrowly specialized public services. The work processes involved are embedded in accountability circuits; that is, from inception to action, externally funded community projects must be organized to fit the available reporting categories. Each of the four studies in the group explicates the work processes distinctive to the program and how the accountability circuits enter into the work organization. Of course, reframing what people are doing and what it means is no simple matter. Nor do front-line workers blindly follow the new routines. The chapters that follow address themes we have seen in other sections of this book, for example, the reorganization of work routines to coordinate people-work with the managerial technologies making it accountable, and the incursion of managerial relevancies into the organization of professional work. The struggle of front-line workers to find a way to work within a transforming environment is an underlying theme in the chapters of previous sections. In the chapters of the third and fourth sections, however, this struggle is made explicit.


7 A Workshop Dialogue: Outcome Measures and Front-Line Social Service Work

Shauna Janz, Naomi Nichols, Frank Ridzi, and Liza McCoy

We begin with a wide and busy terrain of front-line work: social services to help people live with difficult life problems, get jobs, escape homelessness, settle in a new country, or learn new skills. In North America, many social services are provided by independent organizations that receive funding from third parties, such as government, charitable foundations, and private donors. It is customary to talk about the voluntary or non-profit sector in this context and indeed many, perhaps most, of the independent organizations that offer human services are incorporated as non-profit organizations (NPOs). Funders such as the United Way and private foundations fund only programs delivered by non-profit organizations, usually with charitable status. But if we begin on the ground, in the sites where thirdparty-funded social services are offered, we see that private, for-profit organizations are also active in those areas of social service delivery funded by government. A hallmark of the neoliberal state is the contracting out of service delivery. This involves the creation of markets in which independent agencies compete for government contracts to deliver programs and services and the opening of these markets to private businesses, giving them access to public funds as a source of profit. Also participating in these service-contract/project-funding markets are arms-length state agencies that apply for program funds from other governmental agencies. Therefore, if we begin not from categories (such as the voluntary or non-profit sector), but from the work itself and the relations that organize it, we find that we must map the terrain somewhat differently. Central to this field are the circuits of application, contract, and accountability (Smith 2005) through which funding is distributed to

independent organizations and recipient organizations report on their use of the funds. Most third-party funding is for specific programs or services, rather than general grants for organizations to use for programs or administration as they see fit. It is common for social-service-providing agencies to develop a portfolio of programs and services funded by different supporters, all requiring different forms of application, record keeping, and reporting. In recent years, service organizations in both the United States and Canada have noticed a dramatic increase in the staff time and record keeping required for applications and reports (e.g., Eakin 2007; Lara-Cinisomo & Steinberg 2006), along with a demand for greater detail on organizational activities, numbers served, and, in particular, measurable outcomes. Funding bodies increasingly require that applicant agencies organize their work to conform with a results- or outcomes-based approach to management. Applicants for funding must draw up accounts of proposed programs using standardized logic models (showing inputs, outputs, and measurable outcomes) or generate service targets and develop forms of quantitative measurement to show how progress towards these targets is being made. Once funds are secured, front-line workers need to do their work in ways that produce the kinds of outcomes, reports, and documentation required by funders as a condition for continued or future funding.

In this chapter, we explore the lived actuality of these generalized relations of governance from sites where we were active as researchers and participants within the field of externally funded human service work. We worked in different North American countries (the United States and Canada) and different provinces within Canada (British Columbia, Alberta, Ontario); we worked with different types of agencies involved in different types of human service work. But as we sat around a table in a chilly room in October 2009, sharing stories of our experience and our research, we discovered a common strand across our sites and stories: the power that text-based procedures for creating the visibility of measurable outcomes have in shaping the work of front-line staff, in subtle and not-so-subtle ways. Sharing our work and research stories, we described for each other what we had come to understand about the way these technologies were present in the settings where we worked or researched. We teased out the generalizing discourses and managerial practices that produced similar processes and experiences in these different sites; we also explored the differences. The organization of this chapter carries that

conversation forward. In the following sections each author describes a particular site of third-party funded social service delivery, highlighting the funding and accountability relations that prevailed, the specific outcome measures and other accountability texts in use, and the way those technologies organized the work of front-line staff – who, in some cases, were the researchers themselves – as well as the experience of clients. Shauna Janz describes her work in a small, for-profit social service agency providing support services to people living with disabilities in British Columbia. The agency needed formal accreditation to secure ongoing access to government contracts; a government-approved consultation introduced a program of Continuous Quality Improvement that called for measurable evidence of improvement in how their clients were managing. Naomi Nichols writes about her work and research within an Ontario non-profit emergency youth shelter, where she developed and then participated in implementing a grant-funded pilot life skills program. In order to demonstrate the success of the program and to secure future contracts to continue providing it, she had to ensure the production of measurable outcomes. Frank Ridzi describes a program to bring computer and Internet resources to the residents of low-income housing projects. The program was funded by a US federal government agency; obtaining the funds had involved the grant writer in completing an electronic program logic model offering a standardized template of approved outcome options. The newly hired program coordinator then faced the challenge of developing the program in ways that would produce those outcomes. Liza McCoy writes about research exploring the organization of employment services to immigrants in Alberta. Most of these programs were funded by a single government ministry that used standardized contracts, eligibility requirements, and outcome measures to coordinate independent service-providing agencies into an integrated system implementing provincial labour force development policy. In all of the sites where we worked and researched, front-line staff came to be accountable for outcomes that called for quantifiable or textually evidenced changes in the lives or behaviour of clients, or that required clients – who themselves were not, in some cases, accountable to the agency – to participate in activities or undertake actions that could then be counted as the program’s outcomes. How these forms of governance shaped the work of front-line staff is the focus of the stories that follow.

For-Profit Contractors, Accreditation, and Accountability

Shauna Janz

Pathways,1 a small for-profit social service agency, provides several different programs for individuals with disabilities living in British Columbia. Each program receives funding through different governmental and arms-length governmental bodies, whether as program-specific contracts or as contracts tied to specific clients in need of supported services. For example, many of the agency’s programs receive funding contracts from Community Living BC (CLBC), a provincial crown corporation connected to the BC Ministry of Children and Family Development (MCFD). Other programs receive funding from the Regional Health Authority, a quasi-autonomous governmental body connected to the BC Ministry of Health Services. Each contract has its own specifications for how the time, hence the money, attached to each program or individual client can and cannot be spent. Every year a meeting is held centred on a client’s Individual Program Plan (IPP) – a document representing the client, her program, and the progress she has attained towards her program goals. The IPP meeting brings together the client, any of the client’s family/friends who are involved in her support, the client’s support worker(s), the director of the agency, any social workers the client may have, and representatives from the Regional Health Authority or CLBC funding body (depending on the client’s program). The IPP becomes a contract between the funder, service agency, and client, delineating what is expected to take place over the next year in the provision of services to the client. As I have written elsewhere (Janz 2009), the IPP is a direct, textually mediating, ruling relation between front-line workers’ work and interactions with their clients and the funder making decisions about contract awards and renewals. The IPP is activated at the yearly meeting and becomes the touchstone for decision-making, both at the front-line level and at the contract management level. In order to maintain his contracts and services, the director of Pathways is guided to align the agency’s reporting to its funders with the goals outlined in the IPP reviewed annually with the funders. The need for front-line workers to write reports that provide measures of how

their work meets the expectations laid out within the IPP is of paramount importance to the continuing viability of the agency, which depends on retaining its contracts. In other words, front-line reporting practices act as a way for funders to evaluate the work being done by front-line support staff and hence judge how well the agency is delivering its services.

Accreditation as a Discursive Tool for Government Contract Management

Pathways is currently seeking accreditation from the Commission on Accreditation for Rehabilitation Facilities, commonly known as CARF. Accreditation is becoming more prevalent in the human services field as it is valued by Canadian provincial and federal government bodies (Baines 2004). In 1999 the BC Ministry of Children and Family Development (MCFD) mandated accreditation for all of its contracted service providers receiving total annual contracts of at least $500,000, stipulating that "service provider organizations who fail to earn or maintain accreditation may not be eligible for funding for any additional services and may be subject to contract termination" (MCFD 2006). Although Pathways receives below $500,000 in contracted funding, the director of the agency has decided to become accredited in order to stay competitive and move "up a notch" to successfully receive funding through continued contracts (Janz 2009). The provincial accreditation mandate for social service contracts acts to regulate the choices made by smaller agencies such as this one to become accredited in order to remain credible in the social services sector vis-à-vis an increasingly competitive market for government funding.2 Accreditation, once a voluntary process directed through an external and independent board, is now operating as a mechanism to help ensure efficient and accountable government contract management and service delivery.3

Accreditation imports an ideological discourse of Continuous Quality Improvement (CQI) into local settings of human service delivery. As a discursive technology, CQI provides a particular frame that shifts front-line priorities towards gathering reportable and quantifiable outcomes for the evaluation of their, and hence the agency's, performance. CQI is a ruling priority, which makes visible only certain types of local work at an agency's front line to inform "quality assurance" and accountability. Only front-line work with clients that

is relevant to the provision of reportable evidence of CQI becomes visible and accountable to the ruling relations of contract management. Front-line work with clients gets taken up within reports and IPPs in ways that displace the actualities of clients' lives and support needs with priorities of meeting CQI objectives in a measurable and quantifiable way.

The Work of Continuous Quality Improvement

The discourse of CQI imported into Pathways through extra-local accreditation standards and best practices, vis-à-vis relations of government funding, informed and organized how I worked and interacted with my clients. CQI imposed a focus on measurement and outcome performance on the work processes at the agency's front line. In the next subsection, I use my own experience and data from interviews with other front-line workers within the agency to show how our work was being regulated by priorities other than meeting our clients' needs.

Front-Line Support Work

As front-line support workers at Pathways, our time was divided among many different clients, all of whom had a diverse set of needs and support concerns. Our time was spent either in the households of clients helping with physical rehabilitation, managing dietary needs and other activities of daily living, or out in the community engaged in various activities, such as banking; grocery shopping; exercising at the local fitness centre; attending t'ai chi classes; providing work support at various job sites in the city; providing support throughout negotiations with the Ministry of Housing and Social Development; providing informal counselling; connecting with appropriate health information sessions; providing transportation to various medical appointments, speech classes, dentists, specialists, and food banks; and providing access to any other opportunities that our clients expressed interest in. We also spent time in the office filling out daily progress notes; filling in time, mileage, and expense sheets; creating various behavioural and fitness charts to measure client progress and goals; writing reports to document clients' program progress; writing out individual client program plans; and, when required, writing legal requests for conditional sentences so as not to disrupt a particular client's support program.

In efforts to demonstrate that Pathways was actively striving for “continuous quality improvement” of its services, tracking charts were created by front-line workers to transform our clients and our work into measurable data that could be used to indicate improvement of our services to funders and accrediting surveyors. Tracking charts became a primary means for front-line workers to better track and measure clients’ “goals” and “behaviours” associated with their “goal attainment” outlined in the IPP. In order to show that the agency was doing “quality work” and “continuously improving,” we had to show that we, as support workers, were doing “good work” and continuously striving for “improvement,” as shown through our work that was made measurable. One way to demonstrate that we were doing “good work” and “continuously improving” was to measure our client’s “progress” – if they were improving in the particulars of their program (as measured by various chosen/created variables) it reflected our own good work in supporting them towards their various goals. Measurable reporting became more and more synonymous with exhibiting “good” front-line work; as one worker stated, “You need to document, measure, ‘are they [the clients] progressing?’ because what happens – it is improvement of service to clients.” However, I struggled to make measurable our clients’ multifaceted lives. Showing client progress through measurable tracking became one and the same: showing improvement of client services.4 With the introduction of the tracking charts, created by front-line workers to better measure client “goals, behaviours, and progress” for reporting requirements, we became increasingly focused on the problematic or “inappropriate” behaviours of our clients that were in need of improving and could be measured as improving. Efforts to make my clients’ lives commensurable with CQI and its requirements of quantifiable, measurable data for outcomes reporting not only diminished my working rapport and relations with those individuals I supported, but also made invisible the struggle to make clients’ lives measurable and the repercussions of objectifying their personhoods. In the next section I will use a composite account taken from my work experiences with many different clients and situations (de Montigny 1995), to illuminate my efforts to work within the ruling discourse of CQI. My account of work with a client, Ted, provides the details and intricacies within the support work that I did and the types of observations I made while working with clients, while not being specifically tied to any one individual.

A Visit to the Bank with Ted

At 9:30 a.m., I call Ted to remind him that I will be arriving at his place in the next half hour and to ensure that he has showered, put on deodorant, had some breakfast, received assistance from his caregiver in taking his insulin, and relieved himself if necessary. I drive to Ted's residence at a caregiver home across town, his second placement in a different home in the past three months – the previous caregiver had failed to give him adequate dietary care to help him control his diabetes. Ted is waiting outside, sitting on his walker and smoking a cigarette. We head to the bank for him to deposit his bimonthly assistance cheque from the Ministry of Housing and Social Development.

On the way, after some friendly chatter, I reiterate the types of behaviours that are deemed appropriate in the bank and in his interactions with the bank teller and the behaviours that would be inappropriate. Two months ago he had been doing really well in his interactions with the bank tellers. He had achieved mostly "ones" and "twos" on the behavioural scale that I created to track his behaviours ("ones" being very appropriate behaviour, and "fives" being very inappropriate behaviour), almost allowing me to leave him totally independent in his banking errands. I created this scale not only to better track Ted's behaviours but also to have a mechanism to measure his program progress in order to report it consistently, regardless of which support workers spend time with Ted. In the last month, however, Ted has been declining in his progress and reverting to old patterns of using inappropriate language in his interactions (he returned to mostly "threes," "fours," and the occasional "five" on his behavioural tracking chart). I had decided on a set of particular behavioural variables a few months ago after watching Ted in his interactions and judging which ones seemed more observable to track (such as his use of titles when referring to someone, type of eye contact/staring, yelling out, and comments on people's physical appearance). I gave him examples, taken from the behavioural variables I had chosen to track in his charts: "Ted, if the teller is a woman, staring at her chest, asking her out, commenting on her looks and calling her 'my lady' are not suitable." I ask him to repeat which behaviours are suitable and remind him of the big colourful chart we had made months ago that listed all the types of behaviours appropriate for him to engage in when going to the bank.

We arrive at the bank and I assist Ted in getting his walker out of the trunk of my car. I am hoping that the teller is not an attractive woman

who is wearing anything that hints at cleavage or curvaciousness. I am wondering if I should be alongside him in the line-up or if I can stand back and just observe (sitting in a waiting chair, where I can still see his interactions with the teller and the teller’s facial expressions, so that I can be aware if the contact looks as though it is going downhill). I decide to stand alongside him and become vigilant as he starts a conversation with a woman in front of him, but I determine that he is doing well, even though he asks questions that are not what I would deem wholly “socially acceptable.” Initially, I decide that his comments are not rude and do not warrant any written comment or “rating.” But later I wonder if this incident should count as a “one” on his chart? I find it difficult to decide if I am tracking the degree of a particular behaviour in a moment or tracking the number of times a particular behaviour happens. In the end, I decide not to concern myself with the plethora of extraneous variables that could be confounding the measurement of Ted’s progress. It happens that the teller is a man, and I let out an inaudible sigh of relief. Ted chats with the teller, as he completes his transaction, holding up the line for slightly longer than most would in the same situation. I wait patiently, smiling at the few impatient faces in line. I ponder the social norms and cues that we become socialized to and how slight variances from these “unspoken agreements” become more pronounced when we work with individuals who may lack “accepted” social skills. How are these skills taught and, further, how is social awareness tracked to show that improvement is being made? All in all, the bank interaction goes well, and Ted and I review why this is so once we are back in the car. I mentally make a note that, when I return to the office, I can add “ones” to his behavioural chart, while acknowledging that a large part of his success was because the teller was male. Again, who wants to concern themselves with the extraneous factors impacting Ted’s behaviours? It is hard enough to tease out and choose a handful of variables displayed in his regular activities that are easy to observe and track in a consistent way. Breakdown of Rapport with Ted A few weeks later, Ted’s behaviours are still sliding, as could be seen in his tracking chart. All of the different behavioural variables that I tracked with him to record his progress were consistently rated at

“four” or “five” on the sliding scale. Every interaction with him in the past two weeks has ended in some outburst that warrants my ending our day together and the support that I provide for him. I had been spending up to three days a week with Ted in the previous months until his behaviours started deteriorating. I cut back on his hours. The director of the agency and I thought that maybe this would provide Ted with a “reality check,” since he always expressed his desire to receive support, and yet he was not treating me, or others in public, respectfully. I dread today, as I have during the past few weeks of working with him. Our rapport has been sliding precipitously, and I am feeling more confused and impatient, wondering how our relationship has changed and why he is acting out so much after showing such improvement in the weeks before. Ted’s inappropriate behaviours have increased in frequency, specifically in relation to his comments to others in public, yelling out, and actions and words towards me as his support worker. My co-workers, the director, and I discuss the possibility of trying a new support worker with Ted for the time being to see if that will improve his behaviours and hence his program progress. I approach Ted as he is sitting on the bus stop bench a few houses down from his home. He enjoys sitting on the bench and watching pedestrians, occasionally yelling out a startling “hello” from across the street. I start to explain to him why our time together has dwindled and why I will be replaced by another support worker for a while, when abruptly he starts furiously yelling at me, “Stop treating me like a kid!” followed by a string of profanities. I have never seen Ted so angry or emotion-filled in all my months of working with him. I am startled and my stomach turns, as yet again I apologize and walk away from supporting him, owing to his continued “inappropriate” yelling. I take a deep breath as anger, sadness, and confusion wash through my body. It was agreed upon with my director that I should cease my support for the day if Ted’s behaviours did not change. The director insists that my own safety and well-being are the first priority and that I do not deserve to be disrespected or mistreated by any client. But I feel so horrible walking away from Ted as he sits fuming at the bus stop. How is this supporting him? What has happened in the past few weeks and months? Why have I failed so miserably at relating to Ted and providing him with a space of support, patience, and understanding? I feel internal dissonance between what I want my support work

to be about with Ted and how my support work is actually happening with Ted. A Socially Located Unease Upon analytic reflection of the social organization of my interactions with Ted, I saw that my work experiences were not as isolated or unusual as I had believed while I was working with clients. As de Montigny so eloquently writes of his social work experiences, “Our pain and confusion and the questions that emerge from our daily lives are not merely idiosyncratic, but are socially located and socially organized ... Through our unending contact with this institutional apparatus, both our own and clients’ realities become reportable, accountable, and visible in [institutional] terms” (1995: 15). My work with Ted was taking shape differently from how I envisioned my support work, and yet, at the time, I was unaware of how this disjuncture was socially organized. In my work with Ted, the need to measure his behaviours infiltrated my thoughts and interactions with him as I worked to provide evidence of CQI. I was torn between my empathetic responses to his support needs and the institutionally induced surveillance of his “continuous improvement,” which would reflect the agency’s “quality” of service delivery. The discourse of CQI operates as a technology that subordinates clients’ lives to outcomes evaluation for government-contracted service delivery management. It was not long after these disheartening events that I started resisting in my own support work by ceasing to “track” my clients’ behaviours and “program progress.” My resistance, however, was rooted in the strength of the internal dissonance I felt while interacting with clients in a quantifiable manner, rather than a conscious decision based on my understanding of how my support work was being organized by priorities of contract management. Our agency was still in the stages of implementing the organizational and reporting changes required by CARF accreditation, and therefore I could take advantage of this organizational turbulence and resist these quantifiable reporting measures. This may not be the case for the other front-line workers who feel similarly once accreditation has been awarded and all of the textual and reporting systems are fully integrated. Also, it should not go unnoticed that some front-line workers I spoke to strongly embraced these changes in the reporting and “tracking” of clients’ program progress, because it offered a method of making visible their good work

and productivity, especially within a setting ripe with ambiguity and organizational change. Regardless of how different individuals view and take up new reporting requirements and “tracking,” the discourse of CQI organizes front-line workers’ work and thinking about their work in ways that surface clients’ lives as measurable data to fulfil reporting requirements to funders. Accreditation, as mandated by the BC Ministry of Children and Family Development, operates to manage and regulate the local work of front-line service delivery for managerial purposes. Only front-line work with clients that is quantitative and measurable provides the evidence of outcomes and continual improvement and becomes visible and accountable to the ruling relations of accreditation and government contract management. Accreditation is tied to contract management as it is taken up by government personnel as a tool of accountability to manage, regulate, and evaluate the “quality” of government contracted human services. Human service work within both non-profit and for-profit agencies that is conducive to measurable outcomes reporting is gathered by accreditation processes and can be used by government officials to coordinate service delivery through evaluation within and between service agencies across the province. From the story I have shared, we catch a glimpse of how broader relations of governance through funding and accountability organize a local setting and the people inhabiting it. We see that relations of contract governance, and in this case relations of accountability through accreditation procedures, are much more than just organizational change and an increase in record-keeping; they are actually changing the way front-line workers think about and do their work. They are changing the way we, as front-line workers, interact with and develop our relationships with those individuals who are in need of our support and services.

Research and Development Work at an Ontario Youth Shelter NAOMI NICHOLS

In this chapter section, I describe my efforts to do community-based institutional ethnographic research from the standpoint of young people who use a youth shelter. My intention is to show how a

desire to orient research to community development hindered my ability to work in the context of actual people. I describe how my efforts to establish grounds for collaboration with (and transfer or mobilize knowledge to) human service sector agencies meant that I engaged the very technologies that I observed being employed to manage and account for work across the human service interface (e.g., program marketing, outcomes management, data collection, fee-for-service structures). Reflecting on my own experiences, I show how an imperative to generate knowledge that is useful to the communities we work with shapes our approach to research and data analysis, particularly when community partners are looking for particular kinds of research evidence (an evidence that can be inserted into logic models, funding proposals, and so forth). Throughout the section, my aim is to show how community-based research draws people – researchers, community practitioners, community members – into relations that have a coordinating or ruling effect. Although I did not see it at the time, I now recognize that the project’s development work reflects neoliberal rationalities (about the self-managing individual, for instance) and extends relations through which practitioners’ work with youth is managed. Further, and perhaps most unsettling, the life-skills program that my research inspired is a means for drawing young people into relations through which their conduct can be institutionally transformed and effectively corralled. In order to make sense of this unexpected outcome, I turned my institutional ethnographer’s lens on the research process itself, guided by the observation that researchers are always situated both within and outside the relations their research seeks to understand (Griffith 1998). I begin by briefly outlining my research problematic. As the section progresses, I describe the political-institutional backdrop to my research and development activities. My goal is to show how the funding and accountability practices, professional knowledge, and institutional hierarchies that coordinate work across the human service interface also organize research that occurs in this institutional setting. Directing analytic attention to the research process itself, I attempt to discover how my initial desire to engage young people in an investigation and critique of the human service sector was transformed into a project for improving the “life skills” of homeless youth. In the final subsection, I explore how my work to create, fund, and coordinate the “Transitioning Life-skills Program” undermined my hard-won relationships with

the young people who use the youth shelter and my ability to conduct research from their standpoint. The Project My research evolved in relation to ongoing activities at a youth shelter in Ontario, Canada. The research involved people whose working lives are shaped at the human service interface: those who (voluntarily and otherwise) use these services and those who provide and manage them. I set out to learn from the young people who use the Youth Emergency Shelter (YES) how their work to sustain housing is organized. My goal was to map5 the “complex of ruling relations – the multiple activities of individuals, organizations, professional associations, agencies, and the discourses they produce and circulate – that are organized around a particular function [i.e., the mediation of youth homelessness]” (Smith quoted in Mykhalovskiy & McCoy 2002: 19). I used Institutional Ethnography’s (IE) investigative approach – its mandate to figure out how things work – to cultivate a collaborative relationship with YES and to carry out the primary research activities. Early research findings were used to build capacity at the youth shelter where the project was situated. Working with shelter staff, I developed and secured funding for an intervention to reduce shelter recidivism and increase shelter revenue. We called this intervention the Transitioning Life-skills Program (TLP). It was developed to support young people’s positive, sustained transitions out of the shelter and into their own rooms or apartments. At the same time, we intended that the economic structure of the program would allow the shelter to generate the income it required to remain open. Using Research to Inform Community Development Because I wanted to do research that would be useful to my community partner, I conducted a number of interviews with staff at YES. They were concerned that the shelter no longer provided programming (job readiness training, employment supports, addictions counselling, empowerment workshops, etc.) for young people who used the shelter. Beyond offering a bed and a meal and getting young people hooked up with social assistance through Ontario Works (OW), the province’s social assistance program, there was very little else in

place, structurally, to ensure that young people did not end up homeless again. I set out to discover how economic relations contributed to a loss in programming at the shelter. Ultimately, I learned that funding relations constitute a single strand in a web of ruling ideas and practices that shape work that happens at the youth shelter and across the human service interface more generally. The development aspects of my research aimed to use this knowledge to create, fund, and implement an intervention that answered shelter workers’ calls for more programming while supporting young people’s efforts to find stable housing in the community. But as this chapter indicates, my strategic use of research findings did not adequately acknowledge the coordinative effects of the results-based funding and reporting mechanisms we used in our work. Funding Relations YES’s most recent contract with the city was shaped by a consultation process, which was led by someone who specializes in the provision of long-term residential care for the elderly. The consultant applied the same funding formula that is used in long-term care facilities to the hostelling sector in this Ontario town. As such, YES is funded on a per diem basis. The shelter receives provincial funding through the municipal government and OW. The funding is accounted for as it moves through these two independent channels. By the time it reaches the shelter, there is already a chain of accountability in place. The practice of filtering government funding into community-based organizations (non-profits, charities, and even ministry-mandated organizations) draws participating agencies into subtle and multilayered accountability relations. Contracts, fee-for-service practices, program evaluation tools, and logic models coordinate what happens at each level of dispersal and allow provincial funds to be tracked and accounted for at the local level (e.g., via program outcomes or results). In a per diem model, strong performance is indicated by the number of clients served. The per diem formula means that the shelter is paid only when someone uses its services (and even then, the funding covers only two-thirds of the cost of an occupied bed). The adoption of a per diem funding structure inserts the shelter into a competitive field where it must compete for clients and for other resources to ensure it is able to

cover the other costs of service provision. For the shelter’s per diem arrangement to be economically sustainable, the shelter must maintain high and stable rates of occupancy (which conflicts with frontline staff’s desire to help youth find stable housing). Most important for shelter workers’ concerns about a lack of programming, the per diem model places the shelter in a deficit position whereby simply staying open requires significant economic creativity on the part of Wendell, the executive director (ED) and the board. Small shelters with varying rates of occupancy, such as YES, are required to seek out other funding sources (e.g., grants, contracts, or fee-for-service arrangements). Working with Contradictory Knowledge When I agreed to create and seek funding for the TLP, my development work was oriented to solidifying economic relations between the shelter and the Children’s Aid Society (CAS) and to establishing further grounds for collaboration with practitioners who work with youth. I attempted to situate my work in the context of institutional relations within which YES operates while also incorporating the actual (and often contradictory) experiences of the young people I was working with. During conversations and interviews, young people would tell me that they would be using the shelter only for a day or two to get back on their feet. I would then watch as they stayed for the entire 42 days OW would fund. A year later, many of the same people had returned to the shelter. They certainly never explained that their use of the shelter was a result of poor life skills. They would describe being kicked off OW or out of CAS, being evicted from apartments, or being involved in break-ups and fights with family and friends. These young people have life skills in a literal sense – they have honed the skills they need to survive from one day to the next. Problems arise for young people when administrative timelines and processes do not correspond with the actualities of their days and nights. The life skills they possess are not acknowledged and formal life-skills learning opportunities seem to have no bearing on their everyday experiences. The TLP was meant to offer a programmatic structure whereby young people would receive one-on-one mentoring to learn life skills over the course of their ordinary days and nights. The program is meant

to acknowledge the material conditions of a young person’s actual life and the reality that he or she already possesses all kinds of life skills that are unique to these lived experiences. The goal of the program is to work individually with young people to help them figure out how to live outside the care of a parent or guardian. The process begins with a conversation between a young person and a mentor. The conversation is inspired by interview techniques used by institutional ethnographers (DeVault & McCoy 2006). It aims to draw young people into conversations about their work to maintain housing, to take care of their physical and mental health and nutritional and economic needs, to participate in schooling, and so on. The interviewer does not presuppose what this work should look like, in favour of learning from participants how their lives are organized and the supports (if any) they need to live independently in the community. While the structure of the TLP was meant to reflect the concerns of young people, it is clear that a focus on life-skills development comes from outside the standpoint of the young people who stay at YES. When working collaboratively with a human service sector agency, one must take this kind of contradictory knowledge into account. Had I proceeded solely from the standpoint of young people, I would have failed to establish grounds for collaboration with practitioners at YES and would not have successfully secured an Ontario Trillium Foundation grant for the shelter. Creating the Program Based on my efforts to figure out how economic relations were implicated in a lack of programming at YES, we developed the fee-for-service TLP, which we intended to sell to other local service providers. The TLP was meant to offer a structure through which the shelter could generate the revenue it required to cover the costs of emergency shelter provision for youth. The specific organization of the program also provided a framework through which YES could be held accountable to its paying customers. By tracking young people’s progress towards their individual “transitioning goals,” staff could demonstrate that their clients were making progress. Providing life-skills training to homeless (and/ or CAS involved) youth could represent income generation for the shelter and become a means for observing and comparing the efficacy of particular shelter workers and the social-emotional development of individual young people.
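The economics that made a fee-for-service program attractive to the shelter can be illustrated with the per diem arithmetic described under Funding Relations above: a rate paid only for occupied beds, covering roughly two-thirds of what an occupied bed costs. The sketch below uses invented dollar figures for illustration; none of them are YES's actual rates or costs.

# Illustrative arithmetic only: hypothetical figures for the per diem model
# described above, in which the shelter is paid per occupied bed per night and
# that payment covers roughly two-thirds of what an occupied bed costs.
# None of these numbers are YES's actual rates or costs.

COST_PER_OCCUPIED_BED = 90.0                    # hypothetical nightly cost of one occupied bed
PER_DIEM_RATE = COST_PER_OCCUPIED_BED * 2 / 3   # funding pegged at roughly two-thirds of that cost

def month_summary(occupied_bed_nights: int) -> dict:
    """Revenue, cost, and shortfall for a month of occupied bed-nights."""
    revenue = occupied_bed_nights * PER_DIEM_RATE
    cost = occupied_bed_nights * COST_PER_OCCUPIED_BED
    return {"revenue": revenue, "cost": cost, "shortfall": cost - revenue}

# With, say, 10 beds occupied every night of a 30-day month:
print(month_summary(10 * 30))
# {'revenue': 18000.0, 'cost': 27000.0, 'shortfall': 9000.0}

Whatever the real figures, a rate set below the cost of an occupied bed leaves a structural gap, and an empty bed brings in nothing at all; this is the sense in which the per diem model both demands high occupancy and still leaves the shelter searching for grants, contracts, and fee-for-service income of the kind the TLP was meant to provide.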

While I have attempted to preserve IE’s approach to learning from and with people, a TLP conversation is oriented to the production of “baseline measures” and “targets” with regard to a young person’s life-skills development. The Ontario Trillium Foundation funding proposal for the TLP was inspired by Wendell’s (YES’s executive director) observation that YES’s lack of bureaucracy was its primary organizational advantage. This apparent flexibility was seen to represent YES’s “competitive advantage” in the human service market (Wendell, field note 2007). In the proposal I argued that YES’s streamlined organizational structure allowed the shelter to nimbly respond to the needs of its clients at a lower cost than other service providers could accomplish. YES’s ability to underbid its competitors also significantly shaped its relations with the local CAS. YES provides housing for CAS “youth in care” at lower rates than other service providers (e.g., group homes or supported living environments) can guarantee. The entrepreneurial flavour of the relationship between YES and CAS is maintained through frequent contract negotiations and fee-for-service measures (e.g., YES staff deliver the CAS summer and March break programming). The funding structure for the TLP is influenced by YES and CAS’s history of economic partnerships, and the practical and discursive organization of the TLP was deliberately oriented to CAS policies for working with adolescent Crown wards. CAS quickly became the TLP’s primary consumer, and this economic relationship significantly shapes how work within the TLP is organized. A TLP assessment conversation is distinctly shaped by relations of accountability: funders are interested in seeing their clients economically and socially “transitioned” out of institutional care. I wanted these conversations to contrast the multi-paged, standardized life-skills inventories that I repeatedly encountered over the course of my research, while also providing the program coordinator and youth workers with the textual evidence they needed to talk about their work with youth in terms of the language of an Ontario Trillium Foundation grant application’s “expected results,” “activities,” and “performance indicators.” At the time of its invention, however, I did not foresee the degree to which an orientation to “results” would shape how front-line staff would work with young people who stay at YES. Once the funds were secured, I knew that Wendell and the program coordinator would have to conduct their work in relation to the terms of the Trillium proposal.6 One-on-one youth workers and volunteers

would need to track their work with youth in ways that allowed the program coordinator (a) to report on the program’s progress towards its target results and (b) to indicate outcomes to potential fee-for-service funders. Most significantly for the program’s sustainability (and our ability to secure start-up funding), the program coordinator and the ED would need to generate fee-for-service revenue by marketing the program to other agencies in the community. The success of our grant application was dependent on our demonstrated commitment to participation in a competitive service market; and our competitive success required a flexible approach to service provision that would allow us to target the needs of specific consumers of our program. Having to step into the coordinator’s position for a while until the results of the granting competition were announced, I experienced how the front-line technologies I point to in this chapter section (documentary and assessment practices, contract negotiations, institutional hierarchies, life-skills interventions, etc.) drew me into relations that govern; the minutiae of young people’s lives and practitioners’ work were brought under institutional scrutiny and evaluated in economistic terms, my expertise was employed to sell the program to funders and institutional clients, and, as I will demonstrate in the next subsection, my ability to conduct research from the standpoint of actual young people was diminished. Working for Results While the shelter waited to hear the results from the Trillium-granting competition, I acted as the program coordinator on a voluntary basis. In this role, I engaged in the work processes I sought to describe, becoming fluent in the work knowledge of agencies that serve youth. In order to make sense of and productively use this knowledge, I documented my day-to-day work. I also became increasingly aware that I could not ensure that the program I had developed was simultaneously accountable to the logic of the human service interface and the actualities of young people seeking shelter. By tracing young people’s experiential accounts into relations of ruling, I could ensure that my primary research objectives were oriented to learning how housing and homelessness are organized. But it was challenging to maintain accountability to actual people when I was working within these same relations, creating and coordinating a program that reduced work with people to work for results.
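Before turning to Jordan's story, a schematic illustration may help show what "work for results" looks like on paper. The sketch below imagines how one-on-one support might be recorded against the baseline measures, targets, and performance indicators named above; the indicators, numbers, and alias are invented for illustration and do not reproduce the TLP's actual forms or the Trillium proposal.

# Illustrative sketch only: a hypothetical results-oriented record of the kind
# described above, in which a youth worker's one-on-one support is tracked
# against "baseline measures" and "targets" so that progress can be reported to
# funders in the idiom of "expected results" and "performance indicators."
# Every indicator, number, and name below is invented.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str        # a "performance indicator" in funder language
    baseline: int    # measure taken at intake
    target: int      # "expected result" promised in the proposal
    achieved: int = 0

@dataclass
class YouthFile:
    alias: str
    indicators: list = field(default_factory=list)

    def progress_report(self):
        """Return the lines a coordinator could paste into a funder report."""
        lines = []
        for i in self.indicators:
            span = (i.target - i.baseline) or 1   # avoid division by zero if target equals baseline
            pct = round(100 * (i.achieved - i.baseline) / span)
            lines.append(f"{i.name}: baseline {i.baseline}, target {i.target}, "
                         f"achieved {i.achieved} ({pct}% of expected result)")
        return lines

youth = YouthFile("J.", [
    Indicator("school days attended per month", baseline=4, target=16, achieved=9),
    Indicator("independent budgeting sessions completed", baseline=0, target=6, achieved=2),
])

for line in youth.progress_report():
    print(line)

Whatever form the real paperwork took, the point is that once one-on-one support must be reported in this idiom, attention is pulled towards whatever the indicators can register.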

The following story about Jordan exemplifies a time when (despite my best intentions) I was caught up in, and perpetuating, institutional relations of accountability and surveillance. Jordan’s Intensive Support and Supervision Program (ISSP)7 team were threatening to cut the “hard-to-serve,” fee-for-service funding the shelter received to pay Jordan’s mentor. They wanted to see more evidence that Jordan was making progress: going to school regularly and achieving credits; attending regular visits with his therapist and anger-management coach; and proving that he could independently uphold his responsibilities around hygiene, chores at the shelter, medical care, and so on. The goals that the ISSP team identified had arisen in the context of a standardized life-skills assessment that indicated Jordan had weak or no life skills. Conversely, the goals he created as part of the TLP reflected his positive experiences participating in a cooking program, his love of computers and video games, an improved relationship with his mother and sister, and a decrease in violent interactions with other young people at the shelter. The TLP reflected Jordan’s experiences in the world and his goals for himself. Worried that the shelter would lose the funding for Jordan’s mentor and concerned about the success of this new (and, at the time, largely unfunded) program, I returned to the shelter after Jordan’s ISSP meeting and developed a tracking system so that Jordan could show the ISSP team that he was indeed keeping his room clean, waking up to go to school, and taking care of his personal hygiene. I gave him a number of charts and asked him to put them up in his room at the shelter so that he and his mentor, Rick, would remember to use them. The idea was to generate textual evidence that Jordan was making progress. Jordan wanted to keep track in his journal or keep the sheets in his drawer, but I insisted that they go on the wall to encourage accountability to these goals. At a follow-up meeting with Jordan and Rick a few days later, it became apparent that the tracking system was not working for Jordan: I must have asked Jordan how the showering and hygiene stuff was going because at this point he started to get angry and told me that it had always been fine. I suggested that while this may be the case this is not the official story (as captured in the original life-skills assessment) and that if it is indeed fine, we need to show this so as to refute that other assessment. At this point Jordan started getting really upset, asking angrily who told me that he doesn’t shower. This is directed accusingly at [his mentor] Rick. I stumble a bit because I don’t want to compromise their relationship.

I remind him about the first life-skills inventory that was done and suggest that while it may not be apt, it did document very specific things about Jordan. One of these things was that he doesn’t regularly take care of his teeth, shaving, showering, etc. By now, Jordan is red in the face and banging the computer in frustration. (Nichols, field note 2008)

When I typed up the handwritten field notes a month after the meeting, I could see that I had lost sight of my responsibility to young people like Jordan whose experiences comprised the starting place for my research. Instead, my actions were shaped against a concern to demonstrate accountability to our funders: “I was obviously working from the standpoint of the ISSP and our other potential clients (like CAS and OW), rather than listening to and honouring what Jordan was telling me. He didn’t want to have tracking sheets posted in his room because it is embarrassing. I was worried that otherwise he’d lose them. We needed to show ‘progress’ to keep our program alive and to allow Jordan to keep Rick as his one-on-one worker, but in my concern to demonstrate accountability to our funders, I lost sight of the needs of this kid” (Nichols, field note 2008). When institutional ethnographers talk about institutional capture, it is typically in reference to moments when one adopts (or is simply unaware of) a particular institutional frame in his or her work (Smith 2005). In this instance, I embodied the institutional frame of the human service sector where the research was situated. I did not even register Jordan’s experiences’ disappearing from the story. Had I not taken the extra time to reflect on field notes, I would have been less likely to observe my increasing drift towards the institutional demands of the program’s paying customers and away from the actual experiences of young people. In the end, reflecting on my experiences with Jordan prompted me to begin the process of pulling back from the day-to-day functioning of the TLP and life at the shelter. I limited my development work to educational/reflective activities with practitioners and refocused my energies on spending time with young people on their terms. Conclusion This chapter section represents an attempt to understand how my research and development work were influenced by the same relations that I observed having a coordinating effect on the lives of young people and social service practitioners. While I offer a somewhat linear description of

the relations that shaped my work, the human service interface actually comprises an extraordinarily complex field of action. In my description, I have attempted to capture the recursivity of certain ideas and practices across the front lines of the human service sector. My goal is to convey the complexity of this politico-institutional terrain by showing how particular ideas and practices feed into and back on one another. My work to create the Transitioning Life-skills Program was influenced by YES’s precarious economic status. Funding relations at YES reflect the dispersal, and marketization, of services (Clarke & Newman 1997) across the non-profit sector in Ontario. In turn, these economic relations shape, and are shaped by, practitioners’ efforts to be accountable (to other service providers, funders, their clients, etc.). Demonstrating accountability requires techniques for rendering one’s work visible, comprehensible, and comparable; the calculative technologies we employ to demonstrate accountability also allow people’s work to be managed. Neoliberal ideas about individual responsibility influence how practitioners take up various managerial technologies as part of their ordinary work routines and how they understand their work with youth. The focus on individualized service provision and the emphasis on youth empowerment or self-mastery (the underlying focus of any life skills program) are contoured by the same complex of ideas that have shaped the managerial turn in practitioners’ work. Throughout, I have directed attention to the tensions that have shaped the research process. I have deliberately attempted to convey the messiness of the experience. As a new researcher, I was not prepared for this. I naively came to this work with the objective of creating opportunities for public pedagogy: public space for young people to learn from one another (how the human service sector is organized) and to teach practitioners how things work from their standpoint (Nichols 2009). Theoretically, this idea is compelling; however, it fails to take into account the coordinating effects of community-based research. A collaborative community-academic partnership may require that a researcher navigate conflicting expectations about a research project and its outcomes. As an IE researcher, I realize it is important to see how particular ideas and practices organize people’s work. Even with this insight, however, the researcher must work diligently to conduct her research from the standpoint of people whose lives are legitimately lived outside dominant institutional frames. My experiences with Jordan indicate the challenges one faces in attempting to conduct activist or development work within, and against, ruling discourses and practices. This chapter

section brings into view the social relation we call community-based research.

The Neighbourhood Computer Lab: Funding and Accountability FRANK RIDZI

The Neighborhood Networks (NN) program as implemented in Greenville, NY, was a prototypical “scaling up” of a model that is believed to be successful. Across the United States and elsewhere, there is a rising interest in finding innovative programs and “bringing them to scale” by enlarging their capacity in the communities they presently serve and expanding their impact by replicating them in other communities across the nation and the world. The Neighborhood Networks program began in 1995 as a federal initiative aimed at bridging the digital divide by “promoting self-sufficiency and providing technology access to residents” living in housing subsidized by the US Department of Housing and Urban Development (HUD) Federal Housing Administration (FHA). By creating computer and technology labs in sites of concentrated public housing, the NN program has been able to offer job training classes to adults, after-school and mentoring services for youth and assistance for seniors seeking to keep in touch with loved ones and access important age-appropriate information (such as that pertaining to health care). Growing to include over 1,400 centres in all 50 states and amassing a database of best practices, the NN program developed a reputation for having an impact. In 2001, Congress decided to scale up or expand the NN program by offering competitive grant funding to Public Housing Authorities (PHAs) to create and expand computer learning centres on or near PHA developments. This is the funding source and primary source of accountability for the NN program that is operated by Greenville’s public housing authority. In the following pages I explore the “logic model” as one of the central technologies that manage both performance and subjectivities in the funding and accountability processes of this “scaling up.” The W.K. Kellogg Foundation, a pioneer in the field, defines the logic model as “a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan, and the changes or results

you hope to achieve ... a picture of how you believe your program will work. It uses words and/or pictures to describe the sequence of activities thought to bring about change and how these activities are linked to the results the program is expected to achieve” (2004: 1). The logic model is today a pervasive artefact of the new public management approach to program design, implementation, and evaluation. It serves as both a tool and a technical discourse that infuses the process of developing programs for non-profit organizations. Subsequent to the Government Performance and Results Act (GPRA) (1993), public servants have come to use the logic model structure as a means of demonstrating their conformity with the spirit of the legislation (McLaughlin & Jordan 1999; Renger & Titcomb 2002). Though existing in a variety of formats, logic models typically include the following components: (a) a definition of the problem, situation, or need that a program will meet; (b) a listing of the materials and resources required by the program to address this need; (c) a description of the services or activities that the program will perform to meet the need; (d) a list of the outputs that will be tracked pertaining to the services or activities (such as the enumerated attendance at classes held); and (e) the outcomes or improvements in the problem, situation, or need that will be tracked and attributed to the program. Though the widespread use of logic models is common knowledge, little is documented about how this technology is actually taken up and used in the daily lives of program implementers. I argue that this technology takes on a heightened importance in the scope of “scaling up” “successful programs” because it carries the metaphorical DNA of the original program; ensures that replicators do not stray too far from the original goals; and offers a feedback loop such that local outcomes at each of the replicated sites can be systematically reported, aggregated, and tabulated for the purposes of extra-local oversight. Logic models are utilitarian technologies of modern governance and a way of coordinating the multiple “disintegrated” local sites of the state. Nevertheless, they are not automatic. Instead, logic model technology provides parameters and discursive hints that are deconstructed and made sense of at each of the local sites through which it passes. As a result, they serve to “herd” or corral local thinking, but only so far. At each point the meaning of the logic model’s components must be negotiated and reconciled with the realities of everyday life on the front lines. In Figure 7.1 (below) I examine the trajectory of the logic model as it appeared within Greenville, paying close attention to the work that individual workers did both to activate the text of

Figure 7.1. Logic Model Trajectory [diagram showing the logic model passing between actors at each stage: Extralocal Funder (HUD); Local Executive Staff/Housing Agency (Grant Writer); Local Programming Staff (Program Director, Front-Line Staff); Local External Contractor (Database Designer, Program Evaluator); stages: Application for Funding, Hiring Staff, Acquisition of Materials, Delivery of Services, Evaluation & Reporting]

the logic model and to ensure that their experiences were represented within it. The Trajectory of the NN Electronic Logic Model The life of the NN logic model began with the extra-local funder. HUD adopted this technology and embedded the NN program goals within it. As part of the request for proposals that was sent out to all seeking to compete for funds to replicate the program, a logic model was required. HUD contracted with a technology vendor to incorporate advanced macro computer programming in desktop technology. The result was an Electronic Logic Model or “eLogic Model” that looks very much like a typical logic model created in Microsoft Excel. However, the main difference between this technology and a logic model one would create by oneself in Excel is that the components filling the model were limited to preprogrammed choices. Rather than define local needs or activities or proposed outcomes for oneself, the person completing the model had to select from the choices in a drop-down box. The only thing that

Figure 7.2. Logic Model “Job” Sequence

could be typed in were the numerical goals (such as number of people enrolling and completing courses, obtaining their GEDs, improving their GPAs, or maintaining employment for a specified period of time). These choices were limited and provided the grant writer with a “foolproof” template from which to design the local program’s specifications. Figure 7.2 provides an overview of the categories included in the logic model. By completing this logic model and submitting it with the rest of the application to HUD, the local staff could be seen to be organized in their thinking by this extra-local technology. This included not only the types of services they might offer, but also how they planned to go about monitoring progress and outcomes. By selecting the winners of the competitive granting process, HUD in turn rewarded those that most closely fit HUD’s aspirations. The negotiations between funder and applicant, however, are only the first phase of the temporal trajectory that the logic model embarked on through the local site before returning to its extra-local source. For the local recipient hired staff, such as a program director, the logic model was a central point for orientation in how their job was to be done and what standards would be used to assess their proficiency. In Greenville, Jim, the program director who was hired, was new to the NN concept and design. The logic model served as a template for figuring out exactly what he was to do and how much of it. It also was a point of contention and negotiation with the grant writer that led them to jointly send revised goals to HUD. In order to manage all of the various outputs and outcomes, the housing authority contracted with a database designer to build a desktop

application where the director and his staff could keep track of their many outputs and outcomes. Armed with this desktop application (which amounted to a database translation of the logic model) the director then set about negotiating with staff about how and when to enter data – keeping in mind that these would be the data that would be tabulated by the external evaluator the agency contracted with. Ultimately, this information would be the source of the reports that would be returned to HUD by typing in the achieved numerical outcomes on the original logic model next to the initial numerical goals set forth by the grant writer. Working through Program Start-up with Jim, the Newly Hired Director Jim had completed a degree in adult education and was excited to apply his knowledge to a real-world situation that would help others. The Neighborhood Networks program seemed like the right place to do so. It had demonstrated success as a pilot program in other neighbourhoods. Its goal was to increase self-sufficiency through employment and to decrease dependence on public assistance. Previous assessment revealed that it had done just that. Jim’s job was to establish the same computer program in a new housing project. His first task was to research, price, and procure the computers. Within a few months, Jim had rapidly transformed an extra housing unit, in an inner city project, into a remarkable computer centre. However, there was one problem: no one was coming in to use the program for its designed purposes of life-skills training, job readiness classes, GED, and ESL. There was good reason for this. Jim’s program was set in an area of high need and low income. As he had learned in his graduate studies, people in such situations are often stuck in the “tyranny of the moment”; they are so focused on satisfying their immediate needs – food, transportation, clothing for their children, and so on – that they are unable to find time to work on the type of personal advancement that the NN program was meant to support. It was also, as he would soon learn, an unsafe community. The first time he heard gunshots, he was surprised, but even more surprised to hear from residents about how frequent shootings in the neighbourhood were. Once, he and his staff heard shouts of “Get down! Get down!” from outside the building. Fearful of a shooting, they dove under desks to wait out what turned out to be a false alarm. Women told him stories about leaving their homes only to be told by strange men on the street that they should go

back inside, because “something was going to happen soon.” People in the neighbourhood lived in fear of the danger, yet Jim noticed that they also demonstrated remarkable courage. For instance, one woman asked a potential gunman if they could postpone whatever they were about to do until after the kids had walked home from school and were safely inside. Though Jim knew he did not need to put himself in such danger, he decided to stick it out. Despite his fear, he focused on persuading people to use the centre, so that they would have the resources to climb out of poverty and could eventually leave behind the violence of the neighbourhood. Trying to Tweak Community Needs to Accountability Standards After setting up, Jim was able to welcome residents to the computer lab with enthusiasm. Though people would come, he noticed that few were using the lab for the things specified in the logic model – the things that he would get credit for. He soon realized that this was a major problem. If he did not get the right people with the right interests to come to the lab to complete the programming indicated in the logic model, it would appear to his grant-makers that he was doing nothing with the grant money. As a result, he turned to me as the program evaluator to try to figure it out. Through multiple discussions we came up with a series of ideas, some of which he had already tried on his own. He set to work with these strategies. First, he requested that the housing agency query its database for phone lists of residents he could call to introduce his centre and discuss the benefits. The data query revealed that over 70 per cent of the housing residents had no phones or their phones had been disconnected. The remainder proved hard to reach. Next he tried sending out fliers along with official housing agency mailings, but it proved to be a problem for the agency and they refused to permit Jim to send out any more fliers after the first one. Next he tried going door-to-door to invite people to the computer centre. Although fearful because of shootings, Jim attempted this time-consuming but relatively positive way to connect with a few residents. Even when the interaction had been positive, however, such door-to-door contact did not result in residents coming to the centre for educational purposes as much as he would have liked. Also, among those who did answer the door many said they had not even seen the first flier sent out with the official mailing.

Thinking of alternative incentives, Jim tried offering food to draw people to computer learning events, but he soon found out that the grant prohibited spending money on food. He began brainstorming and had begun to network with a healthy childhood program and a food bank to combine services so that he could get people in the door and then interest them in programs. However, this was a “chicken and egg” situation. Other organizations would not send their staff to conduct programs unless Jim could promise that people would be there, and he could not ensure people would be there unless there were food giveaways. Recruiting members of the residents’ association to support and advertise the program was also a challenge. Jim found that only one person was active, as the “committee,” and she did not welcome collaboration. Also, staff warned Jim that this woman could make his life miserable if he was not careful. This resident had been angered that the agency had not involved residents in the writing of the grant in the first place. When Jim proudly announced to her that he had opened the centre (following months of work to procure the computers and set them up), the resident replied, “You did? Well that was a mistake.” Nevertheless, this woman thought there should be a grand opening. Intending to follow her lead and thus begin a relationship of collaboration, Jim offered to have one but the woman was offended and said, “Well, if you’re just going to have a party to make me happy, then don’t do it.” When Jim asked what type of food she thought they should have, the resident took offence again and said, “Well, when you have a meeting in your neighbourhood, what do you eat?” Seeking outside help, Jim tried recruiting volunteers to offer counselling sessions. This seemed to work well at first, as many people were willing to volunteer, but, as Jim soon learned, volunteers are harder to manage than people on the payroll. They would often not show up. How could he get people to come to programs if he was not sure volunteers would be there to present them? Learning from this experience, Jim hired residents to work at the centre. This seemed to work well, since the two residents he hired are responsible, motivated, caring individuals, who took initiative and were willing to go the extra mile. They worked extra hours without pay and even spent their own extremely limited funds to buy supplies such as headphones and snacks. However, the grant allowed for only 20 hours a week for resident salaries. This meant each could work only 10 hours, which was not enough to provide accessible, consistent service.

Perhaps the most successful outreach strategy was a last-minute idea. Jim heard that an annual Christmas giveaway was always well attended, since families are always in need of gifts for their children around the holidays. He quickly purchased books to reinforce his program’s emphasis on reading and literacy. He then put up fliers at the earlier giveaway and asked families to be referred to him. It worked! It was the best attended event of the program to date. Ideally, those residents who picked up books would come back to the centre for additional services, but this did not happen to nearly the extent Jim would have liked. Throughout this hail of efforts, Jim was motivated to ensure that residents would use his computer lab as the grant intended. He wanted to increase the numbers even if what the grant portrayed as the community’s needs differed from what he came to see them as. In fact he began to wonder about lack of consistency between the three standpoints of the grant’s originators, himself, and the residents. These matters, however, are for elaboration elsewhere. Here the focus is on the accountability circuit of this form of governance and how it shaped the work of the coordinator and staff here in ways that did not necessarily jibe with their own experiences or the interests of the residents they set out to serve. Negotiating Disjunctures between Lived Experience and the Logic Model The trajectory of the logic model in this particular usage appeared to present a closed circuit whereby extra-local ideas entered the local sphere and oriented hiring strategy, database infrastructure construction, staff programming and self-evaluation practices before returning to the extralocal source with local surveillance data. This trajectory epitomizes the multi-phase utility of logic models that funders espouse, in which they assist with program planning, program implementation, and program evaluation (W.K. Kellogg Foundation 2004). In reality, however, each of these stages involved the work of local individuals to rectify the model with everyday pressures. The grant writer had to juggle both a desire to offer high enough aspirations to obtain the grant and realistic expectations of the staff they will be able to hire. The program director and staff had to determine how they would define a course completion. Would it be those who attend all, half, or some classes? Or would they define each day a session is offered as a separate class to avoid not counting the

work they do with those having spotty attendance records? Finally, the director and staff had to negotiate discrepancies between the numerical goals the grant writer promised and what they were able to deliver. Throughout such negotiations staff exercised their own mastery of dominant discourses to activate the logic model technology in ways favourable to them. Logic models are by design and even by name meant to be “logical.” They present a sequential, rational chain of causality whereby program actions and resources are hypothesized to make an impact on the social world by changing a problem situation or solving a social need. In the present case, however, I use this example of a logic model and its trajectory as it occurs in the field to pry open a window through which to view contrasts or “disjunctures” between the logic of the model and the reality of everyday life as experienced by program staff on the front lines of governance. The logic of the logic model originated in the abstract extra-local perspective. It was made by federal government public administrators who had an interest in instructing local adapters as to how the program’s steps and components related to one another. They were also quite concerned with demonstrating the effectiveness of their program. Indeed, it was through showing outputs and outcomes that funding was able to be politically advocated for and allocated to the NN program in the first place. It was a master document that oriented local activities to extra-local interests and concerns. At first blush, the extra-local concerns represented by the logic model appear completely reasonable and, well, “logical.” However, upon closer scrutiny and analysis of what occurs on the ground level the slippages in this logic and the ways in which the logic is contested in subtle but consequential ways become visible. This was not a matter of carelessness or insensitivity on the part of the creators of the logic model; indeed, these disjunctures were not visible even to program staff before they attempted to implement the program. Rather, they are symptoms of disconnects that arise often as extra-local forms of governance attempt to corral and organize local needs and circumstances to their ways of thinking and organizing. This is an inherent tension as governance increasingly calls for “scaling up” programs that work in one location and expanding them geographically and across other institutions. Central to this approach is the generalizability of situations, and crucial to its survival is the measurability of success by aggregating data in multiple sites.
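To see how much hangs on the definitional choices just described, consider a minimal sketch in which the same hypothetical attendance log is counted under the two definitions of a "completion" mentioned above. The courses, residents, attendance figures, and numerical target below are invented for illustration, not Greenville's actual records.

# Illustrative sketch only: one hypothetical attendance log counted under two
# different operational definitions of a "completion," of the kind the program
# director and staff had to choose between. All sessions, names, and the
# numeric target are invented.

# Each course ran as a series of sessions; the log records who attended how many.
attendance = {
    "job readiness": {
        "sessions": 4,
        "attended": {"resident A": 4, "resident B": 2, "resident C": 1},
    },
    "computer basics": {
        "sessions": 3,
        "attended": {"resident A": 1, "resident D": 3},
    },
}

def completions_full_series(log):
    """Definition 1: a completion means attending every session of a course."""
    return sum(
        1
        for course in log.values()
        for n in course["attended"].values()
        if n == course["sessions"]
    )

def completions_per_session(log):
    """Definition 2: each session offered counts as its own one-time 'class',
    so every attendance is a completion."""
    return sum(n for course in log.values() for n in course["attended"].values())

TARGET_COMPLETIONS = 10   # hypothetical numerical goal typed into the logic model

print("full-series definition:", completions_full_series(attendance))   # 2
print("per-session definition:", completions_per_session(attendance))   # 11
print("target:", TARGET_COMPLETIONS)

Under the first definition the hypothetical program falls well short of its promised number; under the second it exceeds it, which is one way front-line staff can activate the logic model in ways favourable to them.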

These tensions between the extra-local logic of the logic model and everyday experience are perhaps most evident in the definition and redefinition of service activities and outcomes. The grant writer is the first local person to take up this discursive chain. Her most pressing concern was winning grants, not necessarily implementing them, since other staff members would do that. As such, she was placed between the extra-local federal government and the local implementers. From this meso-local positioning she attempted to mediate local and extra-local interests in abstract terms. She scoured the RFP (request for proposals/ application) materials seeking to infer and appeal to the thinking of federal grant makers. This encouraged her to promise intensive activities and impressive outcomes. At the same time she held in her mind an abstract program director who would be hired to do all of the things she promised. Like many things in the abstract, this hypothetical person was imagined to hold what became an exceedingly impressive résumé and toolbox of personal skills. In this case the grant writer saw it as appropriate or at least expedient to offer computer training courses, life/personal-skills training, literacy training, GED instruction, job readiness classes, ESL (English as a second language) classes, and mentoring/homework help. Each of these components has been linked to the benefit of the low-income residents typical in public housing and so made sense to the grant writer. Perhaps more important, each was a selection on the logic model drop-down box. However, with each increase in offering, the new to-be-hired program director assumed new challenges with complex sets of competences if he/she were to do it well. In much the same way, the program director inherited the grant writer’s abstract notions and numbers of success and outcome goals. The implications of this abstract translation of extra-local goals to local goals did not become apparent until the program director was hired. At this point the abstract goals began to take on meaning in the local site where the program was to be implemented (i.e., persuading actual people to attend and complete job readiness, computer training, and other courses). However, this process was gradual and began with optimism. At the outset the goals seemed doable, as did the activities. The director had to reconcile the activities promised in the grant with his own skill set. This involved figuring out what he himself could do and what he needed to partner with others to accomplish. A mid-implementation review of grant requirements revealed that much had changed in the thinking of the program director. Initial

assumptions about the ways in which residents would participate in programming had to be refined in the face of what actually happened. In particular, the realities of life challenges and barriers to participation for potential program participants turned out to be greater than anticipated. Stories about clients being scared to leave their homes and being wounded by random gunfire on the way to the computer centre emerged as counter-narratives to the ease of recruiting people to improve their personal skills that the director had imagined upon taking the job. After much reflection and discussion the director came to a new understanding of how he could be of greatest positive impact on the community he serves. His new understanding was based on the “Bridges Out of Poverty” body of literature that emphasizes differing perspectives and needs that result from growing up and living in different economic class situations. In particular, this literature helped the director to conceptualize why program participants and potential participants did not respond to programming opportunities in the ways he (or the grant writer or the logic model) initially assumed they would. For instance, he refined how he defined classes and completions to accommodate what he called the “tyranny of the moment.” He used this phrase to signify that life was so hectic, unpredictable, and unstable for residents that his initial understanding of a “class” as a series of hour-long sessions culminating with a skill test no longer made sense. Instead, he modularized classes to be one-time events that did not require multiple attendances. Through such practices we can see a chain of interpretations as the logic model filtered through levels of actors and became picked up, acted upon, and interpreted before being passed along to the next link (along with new interpretations that did not make it into the document but were passed on interpersonally). As the grant writer picked up the RFP and cast it in local terms via the application, so did the program director and staff redefine the specificity of what abstract ideas would and did mean on the front lines. So too, we may assume, were the relevancies of the program activities in the logic model redefined by program participants according to their interests and perceptions. The interactional outcomes of these client-staff interactions were then recorded in the database from which the program evaluator extracted and tallied them – offering further interpretations and analysis – before the cycle was completed with the forwarding of this information to the federal HUD office with the completed logic model.

In this way, the logic model emerged as a core technology for both shaping and coordinating the subjectivities of multiple actors across local and extra-local locations. It was a central ordering text that linked funding and accountability in an information circuit. However, it was also a contested text, which accumulated different meanings as it passed through different locations and interacted with the local interests and capacities of the individuals who encountered it. At each point it was local persons who activated this text, and it was here that disjunctures between the abstract extra-local logic of the logic model and local experience had to be reconciled.

Different front-line workers accomplished this in different ways. The grant writer sought to encourage the program director to collaborate with outside programs that had different competencies when he could not find a person to hire who met all of the qualifications promised in the logic model. The program director in turn sought to rethink what the activities and outcomes actually meant as conceptual categories. The evaluator encouraged staff to document redefinitions and to maintain program data in consistent and easy-to-tally formats.

It is through following such tensions and their resolutions that the trajectory of the logic model became a circuit ripe for exploration and a tool for explicating the emergence of disjunctures between extra-local ideas and the experience of their local constituents as programs were scaled up across geographical locations.

“If our statistics are bad, we don’t get paid”: Outcome Measures in the Settlement Sector
LIZA McCOY

Helping new immigrants get settled and find employment is an active area of social service provision in Canada. In large cities, where multiple agencies are involved in this work, it is common to speak of a “settlement sector.” During 2004–2007 I explored the organization of this sector in a city in Alberta, focusing on employment services for new immigrants. My study8 began from the experience of immigrant women with backgrounds in non-regulated business professions (administration, marketing, information technology, human resources), who had recently come to Canada and were hoping to find jobs that would use their education and expertise. Together with my research assistant,9 I interviewed women about their experience of attending informational

workshops, meeting with employment counsellors, and taking part in bridging programs that provided unpaid work experience in local companies. As we learned from the women about the services they were using, we began interviewing people in the agencies that offered the services. I wanted to learn how these particular services came to be the ones on offer and why they were organized the way they were. We spoke with program coordinators and staff about the view of client needs and labour market organization that informed their program offerings, but we also talked with them at length about how their programs were funded and how they were accountable; for we knew from prior experience (and other institutional ethnographic research, e.g., Grahame 1998; Ng 1988; see also Nichols 2008) that in the world of non-profit service delivery, funding and accountability relations are significant organizers of the work that can be done and how it proceeds.

These employment services were offered by non-profit immigrant-serving agencies, for-profit educational businesses, non-profit colleges, and multi-purpose agencies like the YWCA, but despite the variety of agencies, the great majority of the services were funded by Alberta’s Ministry of Human Resources and Employment, as it was called at the time. It was not just that the programs shared a common funder; there was evidently a common structure of program elements and reporting requirements, because we kept hearing the same terms and similar descriptions (e.g., “level-one service”) from people in different agencies. So our next interviews were with the people in the ministry who fund and monitor local employment services.

What we learned is that there is a set of policies and guidelines that identify the types of employment service the province offers and the elements that must be present (or cannot be present) to be eligible for funding. The Alberta government then contracts with non-profit agencies and for-profit businesses to deliver these employment services in ways that align with provincial goals for labour force development. Proposals to deliver services are solicited through requests for proposals (RFPs) issued by the ministry, although agencies can and do approach the ministry with program ideas as long as these fit within the established menu of program types. A contract specifies what services the agency will deliver and how the agency will keep track of and report ongoing service delivery. The funding is usually given in instalments, triggered by a report of activity completed to date. Targets and outcome measures are an important part of this circuit of funding and accountability.

A ministry official explained: “We do fee-for-service contracts, and we contract for outcomes. We do deliverables contracts.” Another important coordinative feature is a database shared by all the agencies delivering services under contract to the ministry. All client information and records of services delivered to clients must be entered in this database on an ongoing basis and as a condition of the contract. Some contracted employment services are offered on an unrestricted basis, but others are strictly apportioned according to the ministry’s eligibility policies. By consulting the database, staff at any one agency can determine what other employment services a client has received, at that agency or at other agencies, which may have implications for the services that the client can now acceptably be offered.

The database is also monitored by the program officer at the ministry, to check fee-for-service invoices against records of service provision and to flag instances of apparent duplication of service, such as when a client has been enrolled in multiple workshops on the same topic: “We start asking questions to the service provider … so why do you think [this client] needed another course on resume writing?” In addition to monitoring the database, the program officer makes regular site visits, and the ministry contracts with a private business to telephone program clients to verify that they obtained the services recorded by the agency.

In these ways, through standardized contracts, close monitoring, and the shared database that constructs a client history of cross-agency service use, the ministry uses a plurality of individual agencies and for-profit businesses, each with its own aims, to deliver an integrated system of employment services that implement provincial labour force development policies and control access to services. There is much that could be said about Alberta’s labour force policy, the types of services funded, and the distribution of funds across service types, but my focus here is on exploring the intersection between contract-stipulated eligibility requirements and outcome measures as they enter and shape the work of front-line staff and the services available to clients. Below, I discuss some dimensions of the ways these forms of governance operated in two different programs that were delivered under contract to the ministry.

The Employment Resource Centre

The Immigrant Employment Centre (a pseudonym), run by a non-profit educational institute, offered employment resources, workshops, and counselling for immigrants with backgrounds in the licensed trades and regulated professions (e.g., electricians, nurses, lawyers, engineers, pharmacists).

Established under a request for proposals issued by the ministry, its mandate was to help foreign-trained professionals and tradespeople learn about the licensing process for their line of work and strategize their entry into the local labour market. Its services were organized according to the common schema of service types for a ministry-funded employment centre. These included “level-one services” (operating a self-serve resource centre open to anyone on a drop-in basis), “level-two services” (offering workshops open to people in the specific target group), and “level-three services” (one-on-one “career coaching”). Access to level-three services, which were more labour intensive and therefore more expensive to fund, was the most closely regulated. Clients were screened to establish their eligibility; this involved both a personal interview and checking the database to see if they had already had coaching at another agency.

The centre’s contract with the ministry called for the staff to meet with those clients eligible for individual employment coaching up to three times over a one-year period. At the first session, the coach and the client were supposed to draw up something called an Investment Plan. The staff explained that in the past this was called an Action Plan, but lately the government had introduced the new name, which they were now using in their work. This Investment Plan listed what the client wanted to accomplish during the year. The outcome most clients at this centre wanted was to get re-credentialled in their profession or trade and then to find employment in their field. But that outcome could not be written into the Investment Plan as an action item, which was to be limited to what could reasonably be accomplished within a year. As the staff explained in a group interview, the re-credentialling process often took longer than one year, and for some clients, it could take up to a year just to assemble and have translated the required transcripts, diplomas, and certificates from the client’s country of origin.

What was shaping this use of the Investment Plan were the accountability relations of the contract. One of the outcome targets the program had to meet was for at least 60 per cent of clients to successfully complete all the items in their Investment Plans within one year of their first meeting with a career coach. The centre was in only its first year of operation when I met with the staff, so they were orienting to this accountability but had not yet gone through a complete year of service. Nonetheless, they explained how they worked with clients to ensure that the client’s recorded goals stood a chance of being met through
actions largely under the volition of the client. This work took particular skill on the part of the coaches: listening to clients talk about what they wanted (a job in their profession) and then breaking down this goal into steps they could accomplish in one year. A research officer on staff at the centre provided the coaches with information about how long the re-credentialling process took for different professions and the steps that were involved; this was important, they said, to help them avoid setting investment goals with clients that could not be met, thereby affecting the outcome measures. Another issue, of course, is that much of what immigrants must do to get re-credentialled and find professional employment depends on decisions taken by other people, such as employers or the professional organizations that regulate the examination and licensing process. Thus, a good Investment Plan abstracted, from these coordinated work processes, discrete actions that the client could initiate and that could be reported as completed whether or not the client had actually achieved re-credentialling or found a job. A good plan could include statements such as “will research labour market for oncology nurses,” “will get diploma translated,” “will get foreign credentials assessed,” “will take upgrading course,” and “will write licensing exam.” The agency was establishing a procedure for contacting clients by telephone to check on the status of their Investment Plans at three months, six months, and one year after the establishment of the Plan, as required by their contract. The information gathered would be recorded in the provincial database, as well as serve to calculate the program’s reported outcomes. The Investment Plan was formulated as the client’s plan and the client’s responsibility. It was intended not only to clarify what needed to be done and in what order, but also to teach clients to approach their job search as a self-directed and self-realized project. They were “investing” in their “human capital.” This approach was aligned with broader neoliberal discourses of personal responsibility, which have strong currency in Alberta. Although the client was not accountable to the centre in any compelling way, the centre was accountable to the ministry funder for over half of their clients’ completing all of the items in their plans. The clients’ work counted as the outcomes of the agency. Staff oriented to this in several ways. Apart from their efforts to help clients establish the right sort of Investment Plan, coaches placed confidence in the motivations and aptitudes of clients. As one coach pointed out, rationalized lists of action items made sense to their clients, who were used to such approaches from their professional work. They also described

how, when checking with clients by phone on the status of their Investment Plans, they would emphasize that the coaches were there to help them and would encourage them to come in for an appointment if they were getting stalled on completing their action items. Thus, after having drawn up the right sort of plan with clients, part of the counsellors’ work was to try to keep them motivated and accountable for carrying out actions that could count as achieving the action items on their plans.10

The Bridging Program

In some cases, the outcome measures established by the ministry contract do require that clients actually get jobs. Among the ministry’s portfolio of service types are longer programs that combine some kind of classroom portion with an unpaid work placement. One particular program, run by a non-profit service agency, was mandated to help immigrants get jobs in fields for which they already had training and experience in their countries of origin. Each intake comprised around 22 participants from a variety of occupational backgrounds. Eligibility requirements written into the contract stipulated that prospective participants had to have worked in their field for at least two years before coming to Canada and must not have held a job in this field in Canada. It was permissible, however, for them to have done “survival” jobs in Canada, as long as these were outside the field of their education and expertise.

The classroom portion of the program involved the study of business English and “Canadian workplace culture” as well as job search skills and interviewing practice. Work placements were supposed to be in the same area as the participant’s prior experience, albeit at an “entry level.” The women in my study who went through this program, or another one like it, who had been administrators or managers or business professionals, received work placements doing administrative assistant or data entry work. The explanation was that this served as a foot in the door; certainly the participants had not been able to get such jobs on their own, even though most of them hoped they would eventually get jobs that used more of their expertise (and some did; see McCoy & Masuch 2007).

Programs like this are more labour intensive and costly, per client, than the employment centre. The government officials therefore wanted the kind of results that cleared the books. Thus, the main accountable
outcome measure was numbers of participants employed in full-time work in their field, which was interpreted to include entry-level work in the broad way described above. Currently, the program coordinator explained, the program’s success rate was 78 per cent for employment after three months and over 80 per cent for employment within six months of the end of the program. In my interview with Alice, the program coordinator, we talked about the work that went into screening and selecting participants. Unlike the employment centre, which accepted for coaching any prospective client who met the contract-specified eligibility criteria, the bridging program consistently drew many more applicants than there were funded places to accommodate them; Alice explained that there were usually 20 eligible applicants for every one place. A significant part of her work involved screening interviews with each applicant. In selecting applicants Alice was orienting to various concerns, such as balancing the occupational mix and choosing those whose occupational areas she could find work placements for, but a central concern was choosing participants who were likely to be successful in getting jobs within the six-month period, so that the program could meet its outcome targets. Employability, in terms of education and employment record, was not usually her main concern, as her program was flooded with applications from immigrants with post-secondary degrees and steady work histories in professional occupations. Rather, she attended to how she thought a particular applicant would fare in the labour market. For example, she said she would be hesitant to accept an older man who had been a bank manager, because most employers would simply not feel comfortable hiring him into an entry-level position. But Alice’s main focus was on the applicants themselves: how they would take up the resources of the program and what they were likely to do four or six months in the future. Like the career coaching, meeting the outcome target relied, in the end, on the actions and choices of the participants. Alice needed to select participants who would conscientiously work on adapting themselves to “Canadian” workplace “culture,” cheerfully accept the work placements they were offered, and conduct themselves in ways that would be pleasing to the employers. It was not uncommon for work placement companies to offer contract or permanent jobs to the program participants; this reputation for yielding actual jobs was one reason the program had so many applicants. Alice therefore needed participants who would accept an (entry-level) job if it were offered by the placement company; failing that, participants needed to put the

effort into looking for a job on their own and be prepared to accept what they could find within a six-month period. What Alice needed to screen out, if possible, were people who might decide, after going through the program, to attend the local technical institute to retrain for a new career, a choice that many skilled immigrants make when faced with barriers to professional-level employment. She also needed to screen out people who might prefer to wait for a job offer closer to their previous level of professional scope and status, rather than accept an “entry-level” position.

Asking people directly about their intentions and goals was part of the interview, of course, but Alice had also developed characterological indices she considered reliable. For example, she looked for a “positive attitude.” But of particular importance, she emphasized, was whether the applicant had held a “survival job.” Recall that, under the eligibility criteria, it was permissible for applicants to have held such jobs in Canada or currently to be employed part time in such work (e.g., food service, cleaning, construction, retail sales) if it was unrelated to their previous occupational field. For Alice, however, the “survival job” had shifted from being eligibility neutral to, if not a requirement, at least a strong advantage. She explained how she interpreted the fact of a participant’s having held a survival job: “They have a strong work ethic, that they want to get out there and find work and that they don’t mind doing it. A survival job is entry-level work, and we’re going to find them entry-level work [in the work placement]. If they can do a survival job, they’re able to do an entry-level job in their field, because their status isn’t huge then. Status is a big issue with these people.”

This use of the survival job as a screening category is noteworthy because such jobs are a matter of much concern for many immigrants with professional backgrounds. Some people have to do them because they need the money, but they worry that working as a janitor, for example, is professionally contaminating and will make it harder for them to be seen as a professional by prospective employers. Other immigrants draw on their savings to avoid taking survival jobs, which often involve unfamiliar and exhausting physical labour, sometimes in unsafe conditions and with humiliating labour relations, in order to focus their time and energy on taking upgrading courses and finding work in their field. If it is financially possible, couples with two potential earners will designate one person to earn an income through survival work while the other one focuses on looking for professional work; once that person has a good job, the other spouse can look for work in her or his field. On the other hand, some of the women I interviewed considered it a good thing to do retail sales work because it gave them an opportunity to practise English with an ever-changing stream of people, in a relatively clean environment; they saw this as a useful preparation for professional work in their field.

These are only some of the numerous frameworks in operation for assessing the advantages and disadvantages of survival jobs. Reading survival jobs as an indicator of work ethic and an appropriately low investment in status, however, is an interpretation that arises in the relevancies of front-line work and applicant screening, in the context of particular outcome targets.

Once participants selected for the program had completed the course component and the unpaid work placement, the contract called for the agency to offer three weeks of job search support. Nevertheless, Alice said, they met with some participants for a longer time. This work was not remunerated by the ministry, although under the terms of the contract, it became a necessary investment. She explained, “We want them to be working, because that’s how we get paid. If our statistics are bad, we don’t get paid.” It was routine ministry policy, written into many of their contracts, to hold back part of a program’s payment for services provided and make it contingent upon successful outcomes after 90 or 180 days. A director at the ministry commented, “So if people aren’t working, and that was the outcome that was negotiated, then the contractor doesn’t get paid. So it makes the contractor a lot more interested, I think, in doing a good job – although the counter-argument to that is, well, are they just taking the best candidates. I mean, I know that.” For Alice, as we have seen, the best candidates were not just those with the most job experience and an appearance and manner that would be appealing to employers, but those who could be counted on to follow the job search advice offered in the program and engage in the labour market in specific ways.

Conclusion: “It Is the Clients’ Choice What They Do”

The ministry’s procedures were aimed at implementing provincial policy for meeting labour force needs and helping un- and underemployed individuals gain full-time, paid employment, thereby ending their claim on employment-related or income-support services funded by the province. This involved a standardized menu of program options and required elements, common types of outcome targets, and close monitoring. (A ministry official described the province-wide policy for contracting employment services as “prescriptive” but “fair.”)

At the same time, the ministry relied on its third-party contractors, especially community-based agencies, to develop projects of service delivery that fitted the ministry guidelines while addressing, to some extent, the particular needs and circumstances of their constituencies, thereby drawing diverse groups and communities into the scope of the ministry’s policy. However, in the case of bridging programs the ministry still controlled the eligibility requirements as well as the number of places to be funded. These were the terms that agency staff most often reported efforts to renegotiate in subsequent contracts, as they were frequently confronted with requests from potential, and in their eyes appropriate, clients whom they could not serve under the terms of their current contracts.11 But when it came to bridging programs, the ministry was usually reluctant to fund as many places as the program staff believed they could fill. In this, the ministry was orienting to the broader policy goal of aligning the supply of labour with the ability of the local labour market to absorb it. The visibility of this alignment was framed by the outcome measures used, which required clients to get jobs – to be absorbed into the labour market – within a short, specified period of time. In other words, employers’ demand was privileged over immigrants’ demand for the type of training and experience provided by bridging programs. At the local level of program delivery, however, ministry officials tended to talk more about the preferences and choices of immigrant clients than the preferences and hiring choices of employers. Programs were designed to give immigrants useful information about the labour market and to facilitate (indeed, encourage) their self-adjustment to the expectations and cultural practices of the workplace. Yet clients could not always be counted on to follow the advice they were given; “You know, the counsellors work very hard trying to guide them in the right direction, but it is the clients’ choice what they do,” said a ministry program officer. Still, the clients’ choices and actions were essential in meeting the outcome measures, giving rise to the kinds of front-line work described above. And although the experience of the immigrant men and women using the services has not been the focus of this discussion, we can also recognize that they are drawn into a kind of work in which they strategically utilize the resources available to them, negotiating the ministry or program’s view of what they should be doing, as they develop and pursue their own learning and employment projects (for a fuller discussion of this work in the context of a bridging program, see Soveran 2011).

Conclusion

We have described aspects of front-line work in settings that reflect the diversity of the types of organization that seek third-party service contracts. Pathways is a for-profit, privately owned social service agency that delivers services under contract with various governmental and arm’s-length governmental bodies. The Youth Emergency Shelter is a non-profit agency partly funded through different contracts and fee-for-service arrangements with a variety of government agencies. The neighbourhood computer lab is a program offered by a municipal governmental agency that had successfully applied for a program grant under a competitive program run by a federal agency. The two employment-related programs for immigrants were run by large, multi-service non-profit organizations, and both were funded by the same provincial ministry.

Our stories also offer a gradual increase in organizational scope, from the experience of a front-line worker (Janz) to a front-line program director (Nichols) to a contractor/evaluator working with a program director (Ridzi) to a researcher exploring different programs in a sector-level analysis (McCoy). Yet all begin from the work and experience of clients and front-line workers in these settings, and all explore the actualities of front-line work and the procedures for reporting this work in commensurable terms as outcome measures.

Outcome targets and measures of the sort implemented in the settings we describe are based in – and produce – a conceptual distinction between activities (working with clients, running programs) and outcomes (some measurable change made in the lives of clients or service users). This is a deliberate technology; formulations of its intent can be found in the literature on logic models (e.g., Ernst & Morrison 2002). It is warranted, in part, by client-centred frameworks: it is supposed to make staff more conscious of the need to make “a difference” in clients’ lives, rather than just proliferating activities. It can also be warranted by the Continuous Quality Improvement discourse, as Janz shows. And, quite powerfully, it aligns with value-for-money arguments and functions like the private sector’s ROI (return on investment) technology, producing the comparability of programs receiving different grants and carrying out different activities. Nichols, Ridzi, and McCoy show how it carries that managerial organization into the funding-recipient agency, right to the front line.

All of the programs and services we describe involved outcome targets that called for changed behaviour or personal achievement on the
part of clients or program participants. These results then counted as a measure of staff success in running the program or as an indicator of improvement in service quality. In the stories of Janz, Nichols, and McCoy, we see programs where the outcome measures refer to plans drawn up for individual clients. The success of the staff’s work is to be shown through procedures for commensurably counting client achievement of, or progress towards, the goals written in the individual plans. Janz and Nichols describe the tensions this approach creates in the context of social service work with marginalized people. Both settings involved services in which the behaviour of clients is an object of rehabilitative attention; front-line workers support clients, but also guide and assist them to amend their behaviour in ways identified by experts or front-line workers themselves. This is always a delicate balance. In Janz’s and Nichols’s examples, we see how that balance was tilted in uncomfortable ways when client progress came to count as program or staff success. The client and his or her progress – which was supposed to be the end goal in this system – became a means to the end of showing “progress” as a textual achievement in outcome measures. There is a time element here, introduced by the schedule of reporting requirements. Front-line staff who work with people in difficult life circumstances know that change takes time and that there is backsliding as well as progress. Both Janz and Nichols describe how their orientation to producing documented improvement resulted in their rushing or pushing clients in unproductive ways, which they would not have done under other circumstances. (See Brown 2006 for a complementary account of the work that social service clients do in order to produce the personal changes that count as the agency’s outcomes.) In Ridzi’s and McCoy’s examples we see situations where clients or program participants have little accountability to program staff, yet their actions in taking up the resources of the program are measured as program outcomes. Program staff may have only occasional contact with clients and minimal influence over their actions (except for the participants in the bridging program during the months the program is in operation; this changes when the program finishes and they are on their own looking for jobs). Staff face the challenge of getting clients – of their own volition – to undertake activities within a contract-specified length of time that, when reported, will count as outcomes of program success. As both Ridzi and McCoy also show, very often the possibility of engaging in such activities, or being successful in them, depends on broad environmental conditions and institutional processes beyond the

control of individual clients or program staff. In Ridzi’s example, the coordinator struggled just to get residents to attend programs, given the dangers they faced on the streets and other more pressing priorities in their lives. In McCoy’s example, we see staff drawing up individual plans with clients that orient to what is doable by the client within the time period of the reporting requirements or selecting participants who can be counted on to do what the program needs them to do in order to meet its targets. As Janz points out, front-line staff are not necessarily opposed to outcome measures, which in many areas have become an accepted and expected part of this kind of work. Although we have focused primarily on disjunctures, we emphasize here the importance of a nuanced understanding of how these procedures work in the lives of front-line staff. Outcome targets clarify what staff should work towards and provide situationally respected justification for day-to-day decisions; the visibilities they create are available as a record of staff accomplishment that can also be usefully activated within the employment relationship. Outcome targets and measurement procedures often make sense to staff within a neoliberal discourse of accountability that has increasing purchase in the non-profit and social service sector and has become linked to standards of professionalism. A textual procedure that creates difficulties in one aspect of front-line work can also solve problems in another; for example, the successful meeting of outcome targets, however achieved at the front line, can be used to advocate for increased funding. All of this occurs within interconnected institutional discourses, rationalities, and coordinative procedures. In this chapter, we have explored some of the corners in which they come together, in actual settings of front-line social service work.

NOTES

1 All front-line agencies whose work is described in this chapter are referred to by pseudonyms.

2 The MCFD subsidizes the accrediting process only for those agencies receiving over $500,000 in annual contracts. Smaller agencies may not have the capacity or financial means to undertake accreditation, therefore losing the competitive edge they need to remain funded and viable (see Janz 2009 for further analysis on accreditation as a ruling priority in the BC social services sector).

3 The ministry is also researching ways to combine government accountability strategies with that of accreditation reporting and standards. For example, CARF and MCFD are partnering to form a Joint Outcomes Project to research how to best align accreditation and MCFD standards and reporting requirements.

4 As I have explicated in my MA thesis research, measurable tracking and reporting in this particular agency are becoming synonymous with quality service provision and a way for the front-line worker to substantiate her “good work”; hence the agency’s “quality service delivery.”

5 I used community mapping (Amsden & VanWynsberghe 2005), which is a common community-based research strategy, but I did so as a means to convey findings to practitioners in the field, not as a method of engaging participation in data generation or analysis.

6 In particular, their work would need to reference the standardized Results and Activities Workplan and Request Budget Form.

7 The ISSP is an Ontario initiative to keep vulnerable young people out of custody. It is a partnership between the youth criminal justice system and community agencies and/or practitioners to help young people who have significant mental health issues and youth criminal justice records to integrate into community life.

8 During this time I also participated in the settlement sector as a member of the board of directors of an immigrant-serving agency, a busy volunteer position I held for six years. In this section, however, my discussion focuses on what I learned through my formal research project, which was funded by the Social Sciences and Humanities Research Council of Canada (SSHRC Standard Research Grant No. 410-2003-1846).

9 Cristi Masuch was my assistant for this research, and her contributions are gratefully acknowledged.

10 The coordinative role of both the Investment Plan and the outcome targets was made more visible when I interviewed staff and observed counselling sessions at an agency that did not have a contract with the ministry. This agency delivered employment counselling on a modest scale with funding from a federal settlement program that worked on a capitation basis, in which eligible new clients were counted once for funding purposes, regardless of the number of visits they made to the agency or the different services they received. The employment counsellors would meet individually with clients, eliciting their employment goals and providing information and advice, with the expectation that clients would take from this what they wanted. No formal, itemized list of actions was drawn up. Many clients, the staff explained, came to only one employment counselling session. Others, however, chose to meet with an employment counsellor a second or third time. When a client returned for another session, the focus would be on the client’s current questions or employment interests (counsellors often began the session by asking “How can I help you today?”); clients were not asked to account for whether they had or had not done whatever had been discussed at the previous session. There was no routine follow-up with clients at specified times, although staff thought it would be good to provide one. However, the federal funder did not require the agency to meet outcome targets as a condition of continued funding, so it did not make sense to use limited staff time in contacting clients by telephone, when that time could be used more effectively, in revenue terms, by seeing new clients.

11 This was an especial concern for independent, non-profit, immigrant-serving agencies, which usually defined their constituencies in broad, inclusive terms (e.g., all immigrants, refugees, and refugee claimants in the local area, regardless of citizenship status or time in Canada). They thus had to manage the disjuncture between their declared mission and the narrower – and varying – eligibility requirements imposed by the funders of particular programs (e.g., only landed immigrants who had been in Canada for less than three years).

REFERENCES

Aucoin, P. 1995. The New Public Management: Canada in Comparative Perspective. Ottawa: Institute for Research on Public Policy (IRPP).

Amsden, J., & R. VanWynsberghe. 2005. Community mapping as a research tool with youth. Action Research 3 (4): 357–81. http://dx.doi.org/10.1177/1476750305058487.

Baines, D. 2004. Caring for nothing: Work organization and unwaged labour in social services. Work, Employment and Society 18 (2): 267–95. http://dx.doi.org/10.1177/09500172004042770.

Brown, D.J. 2006. Working the system: Re-thinking the institutionally organized role of mothers and the reduction of “risk” in child protection work. Social Problems 53 (3): 352–70. http://dx.doi.org/10.1525/sp.2006.53.3.352.

Clarke, J., & J. Newman. 1997. The Managerial State: Power, Politics and Ideology in the Remaking of Social Welfare. London: Sage.

de Montigny, G.A.J. 1995. Social Working: An Ethnography of Front-line Practice. Toronto: University of Toronto Press.

DeVault, M.L., & L. McCoy. 2006. Institutional ethnography: Using interviews to investigate ruling relations. In D.E. Smith (ed.), Institutional Ethnography as Practice, 15–44. Toronto: Rowman & Littlefield.

Eakin, L. 2007. We can’t afford to do business this way: A study of the administrative burden resulting from funder accountability and compliance practices. September. Toronto: Wellesley Institute. Retrieved www.wellesleyinstitute.com/files/cant_do_business_this_way_report_web.pdf.

Ernst, K., & D. Morrison. 2002. Moving to outcome-based strategic planning: A presentation for board members. Canadian Outcomes Research Institute. Calgary. Retrieved www.hmrp.net/CanadianOutcomesInstitute/projects/presentation_common/outcome_based_strategic_planning_p/Moving%20to%20Outcome-Based%20Strategic%20Planning_files/frame.htm.

Grahame, K.M. 1998. Asian women, job training, and the social organization of immigrant labor markets. Qualitative Sociology 21 (1): 75–90. http://dx.doi.org/10.1023/A:1022123409995.

Griffith, A.I. 1998. “Insider/outsider: Epistemological privilege and mothering work.” Human Studies 21 (4): 361–76. http://dx.doi.org/10.1023/A:1005421211078.

Janz, S.L. 2009. Accreditation and government contracted social service delivery in British Columbia: A reorganization of front-line social service work. Unpublished MA thesis. Victoria, BC: University of Victoria.

Lara-Cinisomo, S., & P. Steinberg. 2006. Meeting funder compliance: A case study of challenges, time spent, and dollars invested. Pittsburgh: RAND Labor and Population. Retrieved www.rand.org/pubs/monographs/2006/RAND_MG505.pdf.

McCoy, L., & C. Masuch. 2007. Beyond “entry-level” jobs: Immigrant women and non-regulated professional occupations. Journal of International Migration and Integration 8 (2): 185–206. http://dx.doi.org/10.1007/s12134-007-0013-0.

McLaughlin, J.A., & G.B. Jordan. 1999. Logic models: A tool for telling your program’s performance story. Evaluation and Program Planning 22 (1): 65–72. http://dx.doi.org/10.1016/S0149-7189(98)00042-1.

Ministry of Children and Family Development. British Columbia (MCFD). 2006. Accreditation of contractors. Retrieved www.mcf.gov.bc.ca/accreditation/contractors.htm.

Mykhalovskiy, E., & L. McCoy. 2002. Troubling ruling discourses of health: Using institutional ethnography in community-based research. Critical Public Health 12 (1): 17–37. http://dx.doi.org/10.1080/09581590110113286.

Ng, R. 1988. The Politics of Community Services: Immigrant Women, Class and State. Toronto: Garamond.

Nichols, N. 2008. Understanding the funding game: The textual coordination of civil sector work. Canadian Journal of Sociology 33 (1): 61–88.

Nichols, N. 2009. Strange bedfellows: A transformative community-based research project inspired by Hannah Arendt and Dorothy E. Smith. Theory into Practice 2 (2): 61–73.

Rankin, J., & M. Campbell. 2006. Managing to Nurse: Inside Canada’s Health Care Reform. Toronto: University of Toronto Press.

Renger, R., & A. Titcomb. 2002. A three-step approach to teaching logic models. American Journal of Evaluation 23 (4): 493–503.

Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: AltaMira.

Smith, D.E. 2007. Making change from below. Social Studies 3 (2): 7–30.

Soveran, L. 2011. Empowerment and conformity: An ethnography of a bridge-to-work program for immigrant women. Unpublished MA thesis, University of Calgary.

W.K. Kellogg Foundation. 2004. W.K. Kellogg Foundation logic model development guide. Retrieved www.wkkf.org/knowledge-center/resources/2006/02/WK-Kellogg-Foundation-Logic-Model-Development-Guide.aspx.

SECTION FOUR

The final section of this book takes up the transformation of work and of consciousness associated with the new managerial regimes. As front-line workers coordinate their work with the managerial routines, a new conceptual framing of their work is established. This second workshop dialogue, which includes studies by Grace, Zurawski, and Sinding, shifts perspective towards self-governance as a feature of how redesigned managerial practices shape front-line work. Each study describes how individuals caught up in new managerial practices and the institutional circuits that organize them engage actively in controlling their new work situations. Zurawski’s study introduces us to the workings of an institutional circuit in the private sector and describes how employees work with the responsibilities imposed on them. Grace details the everyday work of accountability as it is reshaping the higher education vocational sector in Australia. Sinding’s piece is also distinctive: it brings into view a client’s/patient’s experience of front-line practice from the recipient side.

The final paper in the book is Susan Wright’s ethnography of changes in university governance in Denmark. Concentrating on recent university reforms, she describes the development of performance indicators that use a point system to evaluate the performance of faculty members. The mechanism the government imposed fails to take into account the diversities of product and performance characteristic of universities. Wright’s fieldwork shows how faculty managed the points system both pragmatically and through resistance to the work organization coordinated by the points system.

Wright’s chapter brings into view the difficulties of applying a standardized template to work processes that differ widely. Thus, the positivist world of the life sciences lends itself much more easily to the new reporting mechanisms; faculty in the social sciences and humanities, on the other hand, find it much more difficult to articulate their work with the criteria established within the new reporting frames.

8 A Workshop Dialogue: Institutional Circuits and the Front-Line Work of Self-Governance
LAURI GRACE, CHERYL ZURAWSKI, AND CHRISTINA SINDING

The studies incorporated into this chapter are concerned with how people at the front lines of three very different institutional complexes work with the institutional processes they encounter, enact, resist, and interrogate. The chapter focuses in particular on how people participating in institutional circuits are drawn into self-governance: how their desires for self-determination and capacities for problem solving are hooked into institutional objectives (Sorensen & Triantafillou 2009). In each of the presented instances, people at the front line “do” self-governance as they report on their everyday activities. Their self-reports are coordinated with the textual frames active in their organizational context. As well, their daily participation in institutional circuits is organized to be aligned with imposed reporting requirements (Smith 1990: 93–100).

Three instances of the operation of institutional circuits in three different institutional contexts and forms are featured in this chapter. In institutional ethnography, an accountability circuit (McCoy 1999; Smith 2005) is a form of coordination that brings people’s front-line work into alignment with institutional imperatives through the activation of texts. This concept has been generalized in Griffith and Smith’s introduction to this book as “institutional circuits,” which include “accountability circuits” as well as others. Two of the studies included in this chapter (i.e., Grace and Zurawski) explore accountability circuits; the third (Sinding) examines an institutional circuit that does not conform to the accountability model.

In Grace’s analysis of the Australian Vocational Education and Training (VET) sector, a hierarchy of government regulatory texts establishes an accountability circuit that is explicitly textual and external and has the force of regulation.

Individual educators participate in self-governance through the development of documentary representations of their teaching work. In Zurawski’s exploration, the textual nature of the accountability circuit is explicit and internal to a business organization that has embraced human resource development (HRD) as a business strategy. Employees of the business organization learn how to navigate an accountability circuit in order to comply with organizational requirements that link a portion of salary increases and bonus payments to their engagement in lifelong learning. Sinding focuses on a practice increasingly common in cancer care, that of offering – or assigning – responsibility for treatment decisions to patients. The textual nature of the institutional circuit is more diffuse than in the two accountability circuits, as it comprises clinical practice guidelines, decision aids provided to patients by their oncologists, and consent documents. Both patients and physicians participate in self-governance; they conduct themselves in interactions such that patients (appear to) declare a treatment choice, and physicians’ guidance is (or appears) minimal.

The chapter will present a brief account of each of these instances and will then move on to discuss the resonances and dissonances between the operation of institutional circuits in these different contexts and forms. It will reveal how individuals at the front line are drawn into self-governance through institutional circuits that convey explicit and implicit messages about how individuals within the context must understand and conduct themselves. It will argue that these institutional circuits bring together people who have – or at least appear to have – shared interests in certain outcomes. In some settings and some instances, benefits do accrue to people at the front line. Yet other outcomes are sacrificed by the apparent alignment of interests, foreclosed by the operation of the institutional circuit.

Analysis of these three instances also reveals a paradox in which self-governance through institutional circuits shifts responsibility from the organizational level to the individual while simultaneously eroding professional judgment at the front line. Finally, the discussion will argue that individuals at the front line adopt a range of strategies to navigate institutional circuits in such a way that they complete the circuit while still progressing towards addressing their own wants and needs. In other words, regardless of how individuals feel about or chafe against the institutional circuits in which they participate, in practice they also have begun to accept their participation as inevitable.

Accountability Circuits in Vocational Education and Training
LAURI GRACE

This section draws on a study of vocational educators in Australia. The Australian Vocational Education and Training sector is that sector of the Australian education system that offers post-compulsory education and training relating to occupational or work-related knowledge and skills (Knight & Nestor 2000: 42). As of December 2009, vocational qualifications were available for work roles in more than 80 industries, incorporating such diverse activities as civil construction, public safety, beauty, community service, animal care, horticulture, mining, and funeral services (NTIS: n.d.). Educators in the vocational sector work within a range of institutional environments, including public Technical and Further Education (TAFE) institutes, private training institutions, universities, secondary schools, and workplaces that choose to align staff training programs with formal vocational qualifications. The regulatory framework of the vocational education sector comprises a complex hierarchy of government agreements, funding arrangements, and compliance texts. This “intertextual hierarchy” (Smith 2006: 79–87) organizes accountability circuits, that is, those institutional circuits requiring specified and measurable performances or outcomes to meet the mandate of the regulatory frames. This section describes two levels of texts that make up the “intertextual hierarchy”: national competency standards that govern the content of vocational education programs and a national quality framework that defines standards for the delivery and management of those programs. Together these texts establish an accountability circuit that governs almost every aspect of the professional work of educators in the VET sector. National competency standards define the content of vocational education programs. All learning and assessment programs that lead to recognized vocational qualifications are aligned with text-based competency standards, developed in consultation with industry and endorsed by government authorities as a “specification of performance which sets out the skills, knowledge and attitudes required to operate effectively in employment” (DEEWR: n.d.). Competency standards define performance requirements for employees and for graduates of

vocational education programs. In textual form, competency standards are typically structured as sets of individual “units of competency,” each of which purports to describe a particular job function or work role, and the skills and level of skill required to perform that function or role. With few exceptions, units of competency are integrated into national texts called “Training Packages.” While the name “Training Packages” might suggest resources to support training and learning, these texts are instead complex assessment and qualification frameworks that are used to assess and recognize the skills of employees and vocational learners. Because Training Packages are assessment frameworks, units of competency influence the content of educational programs through the expectation that educators will design their training in such a way as to ensure that learners are adequately prepared for assessment against the performance requirements outlined in the unit of competency.

The second level of text, the national quality framework, defines standards for the delivery and management of vocational education programs. Before an organization is authorized to deliver educational programs that lead to vocational qualifications, the organization must first achieve government registration as a Registered Training Organisation (RTO) by demonstrating compliance with the provisions of the Australian Quality Training Framework (AQTF; DEST 2007b: 1). Introduced in 2001, revised in 2005 and 2007, with further changes in 2010, the AQTF is a set of text-based compliance standards intended to assure “nationally consistent, high-quality training and assessment services for the clients of Australia’s vocational education and training system” (Training.com.au 2007). To achieve compliance with the quality framework and achieve RTO status, organizations must document their procedures and systems. Within the text of the AQTF, it is the responsibility of the RTO to develop and maintain this documentation. In practice, it is vocational educators who are linked in with these governance arrangements in a range of ways, central among which is the production of training and assessment plans and other documentation. Individual educators engage in self-governance by documenting their training and assessment plans for each unit of competency they teach; they are held accountable for both the content and the delivery of their teaching practice, as the plans they document are subject to formal audit against the relevant unit(s) of competency and the AQTF standards.

Following is an account of the work that vocational educators do to write up their training and assessment plans. “Louise”1 is a vocational

educator in the community services industry. With extensive experience working in community services, her role as an educator is to prepare students to work in this field, in positions such as youth worker or drug and alcohol support worker. As Louise goes about her work of preparing a training and assessment plan, she confronts the units of competency that her students will be assessed against. In the first vignette, Louise works through a unit of competency called “Respond holistically to client issues”: For a start, what does that mean? And you should see the bloody language in that! We were sitting there absolutely baffled thinking, “Now what do we do with this?” This is a catch-all unit. Basically you can use it to teach whatever you like … I just know the other day I was sitting there with my colleague at work … [The unit] had these great big long sentences that didn’t mean anything to me. And we read them over and over and looked at each other, and between fits of giggles and what not, we sort of decided that it was just really – you know … So what have we got? [Reading aloud from the unit:] “Use observations, assessment tools and questioning to identify possible presenting issues.” I mean … [pauses] Well, I guess it means you sit and observe someone and make notes about how they’re behaving as part of an assessment process. But that’s not how we work. And “assessment tools” meaning, when someone comes into an agency to get help, we have questionnaires that we go through, and impact sheets – I guess they mean that. And “questioning.” I mean, we teach students, “Don’t fire questions at people.” “Seek information from a range of appropriate sources to determine the range of issues that may be affecting the client within organisation’s policies and procedures regarding autonomy, privacy and confidentiality.” That is a huge sentence. Like it’s three lines long, and I’ve got no idea what they mean.

Despite her experience working in community services roles and despite being a knowledgeable reader of vocational education texts, Louise encounters structural and linguistic complexity as she works to make sense of this unit. Training Packages are not texts that simply can be picked up and read; Louise needs to “unpack” (an official term; see DEEWR 2008) the Training Package she is working with by reading it, interpreting it, and applying it to the contexts her students hope to be working in. The language of Training Package units of competency has been described as “abstract, dense and distant” (Jennings 2004: 16), characterized

by the use of passive voice, abstract language, and complex and unfamiliar terms. These grammatical forms are not commonly used in everyday speech but are widely adopted in workplace documents constructed through “organizational literacy” (Darville 1995: 254–7). Organizational literacy involves semantic techniques such as nominalization and the agentless passive voice, which highlight organizational processes and omit the agents who enact those processes. Reading and responding to such texts requires an understanding of more than just the words on the page – the reader must also draw on background knowledge of what has been omitted and how the text itself is used in relevant organizational processes. Reading Training Packages requires a particular organizational literacy, and vocational educators like Louise and her colleagues must “unpack,” or translate, that language into the literacy of teaching and learning.

The training and assessment plan that Louise is developing will form part of her organization’s quality compliance documentation and as such will be subject to audit. AQTF audits are conducted according to key regulatory texts, including the Essential Standards for Registration, the National Guideline for Risk Management, and the Audit Handbook (DEST 2007b, 2007c, 2007a). These national government texts are supported by a suite of 14 further publicly available national publications relating to AQTF implementation (Training.com.au 2007). While the standards and guidelines that govern the audit process are determined by national government, audits themselves are scheduled and conducted by state government authorities responsible for the registration and monitoring of Registered Training Organisations (RTOs). An AQTF audit is officially described as follows: “An audit is a planned, systematic and documented process used to assess an RTO’s compliance with the AQTF 2007 Essential Standards for Registration. It also provides an RTO with information about the quality of its training, assessment, client services and the management systems it uses to support the continuous improvement of its operations and outcomes” (DEST 2007a: 4).

The Audit Handbook outlines five types of AQTF audit that are conducted by state authorities (ibid.: 4–5). Each RTO undergoes an audit to assess its initial application for registration; at the discretion of the state authority, an RTO may also be audited within its first year of operation (a “post-initial audit”); after its first year (a “monitoring audit”); when state and national authorities determine a need to audit RTOs servicing a particular industry sector (a “national strategic industry audit”); and finally, in response to complaints made about the RTO. In addition, the National Guideline for Risk Management outlines provision for audits to be conducted when an RTO applies to renew its registration or to increase the range of qualifications it is registered to deliver (DEST 2007c: 11).

In addition to the range of external audits conducted by state registering bodies, RTOs are required to conduct their own internal audits to ensure their ongoing compliance with the AQTF standards. Copies of internal audit reports are specified as part of the documentation that may be examined when an RTO is audited for compliance against the AQTF standard that “The RTO collects, analyses and acts on relevant data for continuous improvement” (DEST 2007b: 4; 2007d: 9). The frequency and timing of internal audits are determined by each RTO.

The local training and assessment plans developed by Louise and other vocational educators form part of the suite of documents that are examined as part of these external and internal audit processes. Local documentation of training and assessment strategies and programs is specified in the AQTF 2007 Users’ Guide to the Essential Standards for Registration as appropriate evidence of organizational compliance with AQTF Standard 1, which specifies that “The Registered Training Organisation provides quality training and assessment across all of its operations” (DEST 2007a: 4; 2007d: 10–12). The plan that Louise is working on will be available for examination against the detailed requirements of AQTF Standard 1.

An AQTF auditor’s formal power is derived from their position in the audit process. In some cases, this formal power is reinforced through local consultancy arrangements under which training organizations engage current or former auditors to guide them through the process of documenting their practice to achieve compliance. Or, in the terms in which Louise experiences it:

Basically, everybody went totally nutty last year trying to prepare for this audit. And they had people, ex-auditors, coming in and doing professional development sessions with us, evaluating our materials, telling us what was what … There was all these no-noes … So we were told by our management, ‘Do what they say.’ We were running around like chooks without heads,2 changing all our unit outlines and assessment tools … So when you talk about my autonomy being interfered with? In a huge way. Absolutely huge.

Like Training Packages, the AQTF standards are also written in complex and abstract language that must be interpreted. In the following vignette, Fiona explains that this process of interpretation can lead to inconsistencies in audit requirements: I was audited four times in my last job, and every single audit brought totally different things up – there was no consistency in the application of the standards. One piece of documentation I had, one auditor thought,

‘That was fantastic! That’s great, that’s best practice’; another auditor’d come in and go ‘That’s not right, and I don’t like this.’ I go, ‘Alright, fair enough,’ you know. And I think that’s that whole ambiguity with that is ‘What the hell do you want from me? Just tell me and I’ll do it!’

In these accounts Louise and Fiona express their awareness and frustration that, in the work of documenting their training and assessment plans, they are writing up an account of their professional practice in a form that makes their practice accountable to authorities both within and external to their organization. They are drawn into self-governance by entering an accountability circuit in which they report on their everyday activities in terms that are meaningful within the national regulatory framework, and this process connects their local practice to national government agendas (McCoy 1998: 407). Fiona’s comment “Just tell me and I’ll do it!” clearly signifies her acceptance (although reluctant acceptance) that her authority to make decisions on her own professional practice is constrained by reporting requirements against which she will be held accountable. The shifting boundaries that arise from inconsistent interpretations make it difficult for even experienced vocational educators to figure out what is required and develop strategies that would enable them to navigate the audit process and alter their position of power in relation to audits. Many educators remain in a position of “not knowing” what an individual auditor will require in any particular audit. When training and assessment plans are presented for audit, they are closely examined against the specific requirements of both the unit of competency and the AQTF. Vanessa experienced this as a very detailed examination in which an external auditor began a registration audit by opening one of several volumes of the Training Package and randomly selecting a specific criterion to examine. Vanessa proudly reports that her own documentation was so detailed that, presented with such a specific demand, she was able to demonstrate compliance: “I had an auditor come in here, first thing he did was open it [the Training Package] up to the evidence guide [a sub-section within a unit of competency] and say, ‘I want to see your assessments meet that [point] there.’ Now fortunately, I knew that, so I was able to go bingo! There it is, this is how I make sure that it meets it.” The work of documenting educational practice to demonstrate compliance might continue throughout the audit process itself. Jessica is a highly experienced vocational educator in a training management role. Her account of her own experience highlights how the audit process takes power away from professional

educators. In order to participate in this process, Jessica must learn and use the organizational literacy of AQTF and audit, rather than the literacy of education: You’re never quite sure if you’re right. You go into an audit and you think, you’ve got your evidence piled up to the ceiling. And they didn’t need 90 per cent of it, so you’ve spent hours compiling evidence that they didn’t need. But they needed all this other stuff that you didn’t prepare. So that while they’re there you’re rushing around like a mad thing trying to get all the evidence that isn’t there because it wasn’t clear that that was actually what was needed.

Despite the lack of clarity about what an auditor will be looking for, the implications of failing to meet audit requirements are significant. Louise expresses this as follows: You’re too scared to put something on a unit outline, in case you’re doing the wrong thing – you know – and that your college is going to be found non-compliant because of you! ... We’ve felt very oppressed by it. Because we were told over and over and over by our management we could lose our RTO status. One area of the college ... nearly lost a course or a couple of courses, and it was a very real threat. Even now we’re under a threat because they’ve said, “Basically you’re compliant, but there’s a few things you need to do, and we’re going to come back and check that you’ve done them.” You just feel a lot of pressure, and you’re backed into a corner as a lecturer.

Louise’s references here to the threat of “losing” RTO status, or losing one or more courses, refer to the ultimate sanction if her institution is found non-compliant at audit: its authority to deliver some or all of the vocational education programs it offers might be withdrawn. This would clearly have immediate implications for the educators employed to deliver those programs, as well as for the students in the programs. While withdrawal of registration is the most severe penalty possible within the audit process, as an educator Louise demonstrates a strong sense that she is individually responsible for protecting her RTO from this penalty by ensuring that the training and assessment plans she documents will comply with audit requirements. While one of the principal purposes of preparing training and assessment plans is to demonstrate compliance with the AQTF, there has been some indication of an emerging expectation that these documents should also be used to provide course information to students. In this

widely but not universally adopted practice, documentation that may have been prepared with an AQTF auditor as the intended audience is issued to students without alteration. So, for example, a student on her first day of training to become a youth worker might expect the assessment requirements of her program to be explained in language that she can understand; instead, she might be handed a substantial document that presents this information in the language of the complex and abstract units of competency. This practice had been adopted by Louise’s organization, and educators were required to reflect this practice in their training and assessment plans. But educators express concern about the impact on students of having assessment requirements presented in language that even teachers struggle to make sense of, and it was a practice that Louise actively resisted:

I wouldn’t expose them to this sort of stuff ... I would not expose them to this ... I’m basically making a bit of a stand about it. But to cover myself what I’m doing, I’m doing nice little unit outlines that cover everything, and then I’m saying, ‘If you’d like to know more about the criteria on which you are being assessed, please see the back of this booklet.’ And then I have this chucked in the back, so they can read it if they want, but otherwise they don’t have to. Because it’s very upsetting.

This vignette highlights how learning and assessment plans aligned with external accountability requirements can displace appropriate educational practices in favour of inappropriate ones. As an experienced educator, Louise recognizes that giving her students complex documents that outline their assessment requirements in abstract language is “very upsetting” and does a disservice to those students. Yet the accountability circuit constrains her in such a way that she finds herself having to adopt and document this approach. Louise then undertakes additional work that she hopes will reduce the negative impact on her students; she provides information about the program and the assessment in language that will be familiar to her students, and, in addition, she includes the unit of competency as required by her organization but places this at the back of her “unit outline” (course booklet), where students will encounter it only if they look for it.

Training Packages and the Australian Quality Training Framework are just two texts within a “maze-like array” (DET Qld 2003) of legislation, funding agreements, procedural guidelines, and implementation frameworks that characterize the Australian Vocational Education and Training (VET) sector. One of the goals of this hierarchy of texts is

the establishment of a national system of vocational qualifications that allows individuals in any part of Australia to achieve nationally recognized educational qualifications. While arrangements have long been in place for the national recognition of school- and university-level qualifications, prior to the establishment of the VET sector no such arrangements existed for vocational qualifications; with the exception of trade qualifications achieved through apprenticeships, vocational qualifications were often not recognized beyond the local region in which they were issued. With 1.7 million students enrolled in publicly funded VET programs in 2008 (NCVER 2009: 1), many people clearly benefit from the establishment of a system of nationally recognized vocational qualifications. Yet alongside these benefits, particular approaches to enacting the accountability circuits of VET limit the freedom that vocational educators have to draw on their professional training and expertise in shaping their practice.3 Instead, vocational educators who participate in self-governance within such approaches to accountability find themselves constrained to shape their practice around frames provided by their institutions and national and state governments. Louise and her colleagues, as knowledgeable readers of VET texts, attempt to negotiate the accountability circuits while retaining some level of professional authority over their own educational practice.

The Circuit of Accountability for Lifelong Learning
CHERYL ZURAWSKI

This discussion takes place in the context of business organizations wherein employers intervene in the everyday work life of their employees in order to manage their “performance”4 as lifelong learners. These employers have embraced “human resource development” as a business strategy for “unleashing human expertise” (Swanson & Holton 2001: 4). As a business strategy, HRD can be distinguished from training, the traditional method of learning employers have used to improve employees’ on-the-job performance (Blanchard & Thacker 1999; Merriam, Caffarella & Baumgartner 2007). While training (e.g., in computer skills) endures as an important form of lifelong learning in business organizations, the boundaries of HRD extend beyond it and include a wide range of other formal

and informal lifelong learning activities (Slotte, Tynjala & Hytonen 2004) that can and often do take place outside classrooms or training rooms and without a teacher or trainer in sight. Within the extended boundaries of HRD, employers focus more on managing programs that enlist and monitor employees’ participation in lifelong learning than they do on delivering training.

Prominent among employer programs to enlist employees in lifelong learning is “performance management.” Performance management, defined broadly as “any activity designed to improve the performance of employees” (Storey & Sisson 1993: 131), became fashionable (Fletcher & Perry 2001) as an HRD strategy of business organizations in the 1990s.5 To employers, performance management represents a more hands-on and continuous approach to managing employees’ performance than the episodic (usually annual) performance appraisal (Bach 2000; Gold 1999a; Storey & Sisson 1993). Performance appraisal, long a mainstay of management practice, involves evaluating an employee’s job performance as the basis for mainly administrative decisions affecting them, for example, pay raises, promotions, layoffs, and terminations. While performance appraised below management expectations might be attributed to a “gap” in employees’ knowledge and skill subsequently to be “filled” by training (Bach 2000), performance management is more anticipatory than remedial in that it is geared to keeping employees’ knowledge and skills up-to-date and relevant to the evolving needs of employers (Armstrong 2003; Gold 1999b; Cardy 2004). As part of performance management systems designed to manage both the evaluative and developmental aspects of employees’ job performance, many employers have introduced employee development planning (Armstrong 2003; Floodgate & Nixon 1994; Tamkin 1996) as a work process by which employees’ lifelong learning in relation to their work is officially planned, carried out, reviewed, and rewarded. In business organizations with performance management systems, employees become active in preparing and implementing their development plans, and managers and human resource developers (employees whose jobs include HRD tasks and duties) become active in ensuring that employees’ activities meet employer requirements (Smith 2005).

This section draws on a study of the two-way coordination of employees’ lifelong learning in a business organization located in the Canadian prairies. Here the discussion focuses on the operation of an accountability circuit comprising organizational texts used to regulate the lifelong

learning that employees engage in and how employees’ activities as lifelong learners are represented, reported, and recognized within the business organization. Stories and descriptions shared by eight of the fifteen employees6 interviewed for the study provide the “data” for this section. To highlight what an employee said about his or her own work and experience, direct quotes are used. When sequences of action that “transcend the local experience” (McCoy 1999: 47) of any individual employee are described, “a composite built up from multiple sources” is presented (DeVault & McCoy 2006: 40). At ABC Company (a pseudonym), the text-mediated circuit of accountability was put in place in 2005 when the board of directors – a group of people elected under the authority of provincial legislation to direct the business and affairs of the company – added lifelong learning to its “balanced scorecard”7 system of performance management. Balanced scorecards are a tool that employers use in an effort to manage individual employees’ job performance in line with organizational objectives that have been articulated from four “perspectives” (or ways of knowing about performance) that Kaplan and Norton (1992) argue are relevant to every organization. Three of the four perspectives are in addition to the financial perspective from which business organizations like ABC Company have traditionally managed their performance (think of “return on investment” [ROI] as a measure of profitability, or “gross revenues” as a measure of the total value of products a business produces). The additional perspective of interest here is the learning and growth perspective, since it is the one that makes possible the hands-on and continuous management of employees’ performance as lifelong learners. The addition of non-financial perspectives marks a change in the “pattern of visibility” (McCoy 1998: 397) constructed through the balanced scorecard system of performance management compared with systems of performance management relying upon the financial perspective alone. In this way, employees are required to prepare and implement an employee development plan and become accountable for meeting a target involving a specified number of hours of lifelong learning. By importing lifelong learning as a relevant perspective into the balanced scorecard, employees become responsible (and accountable) for realizing the board of directors of ABC Company’s intent to draw more heavily on their collective “corporate intelligence” (Hodkinson & Bloomer 2002: 31) as a means of achieving organizational objectives. To further support this intent, ABC Company has followed the lead of many other business organizations that no longer pay a wage

equivalent of the monetary value attached to a job constructed in the abstract as a list of tasks and duties for someone to perform. Under the balanced scorecard system of performance management, employees of ABC Company are paid according to how well they are determined to have performed on the job (Aguinis 2007). These determinations, and the adjustments to pay they warrant, are made (as will be seen later) during biannual performance reviews.

The visibility of lifelong learning as something employees must do became evident during interviews when employees were asked to explain why learning and development is one of the perspectives on performance that their company’s board of directors includes on the balanced scorecard. Naomi, for instance, puts it this way: “I think why they put that in there is because they feel that they want to provide us with any avenue to pursue education … We become a better asset to them if we’re knowledgeable in what we do.” For Deanna, the balanced scorecard reflects the importance ABC Company attaches to lifelong learning.

interviewer: [The balanced scorecard] says something organizationally about the importance attached to learning?
deanna: Yup ... right. It says that ABC Company as a whole wants to see that learning isn’t forgotten ...
interviewer: Why is it significant that it not be forgotten?
deanna: Because things don’t stop moving. Things aren’t the same today as they were 20 years ago and they won’t be the same as they are 20 years from now. So if you stop learning, it’s harder to catch up in the future. Like, if you just stick with what you know today and don’t learn anything, ever, then it becomes difficult to keep up with things or grow with things.

Deanna’s view was echoed by Shelly:

shelly: The industry is evolving all the time. To stay current, you have to keep learning. And as you learn, you grow ... So whether it’s a formal learning experience or an informal learning experience, I think we have to be learning all the time and be open to learning all the time.
interviewer: What would happen to ABC Company if its employees did not learn all the time and its employees weren’t open to learning all the time?
shelly: I don’t think we’d evolve. ABC Company wouldn’t be able to keep pace with the rest of the industry.

These excerpts from transcripts of interviews with Naomi, Deanna, and Shelly show their awareness of a connection between their employer’s efforts to manage their performance as lifelong learners and the consequences of non-performance for both employees and the company. Failing to see the part that lifelong learning plays in keeping themselves and the company they work for from being left behind is seen as risky. These employees’ views, commonly echoed in the literature (see Fenwick 1998), reflect a dominant discourse, which justifies employer-sanctioned HRD interventions into the everyday work lives of their employees in the interest of industrial progress, competitiveness, and even survival in what are regarded as rapidly and ever-changing economic times (see, e.g., Bratton et al. 2004; Clarke 2004; Rainbird, Fuller & Munro 2004).

The balanced scorecard reaches into the local settings where employees work via the employee development plan form. The form, provided for employees’ use by the HR department, establishes lifelong learning as a “key responsibility area” under which all employees are responsible and accountable for preparing and implementing an employee development plan. Space under the form heading “action plan” is left blank. The blank is equivalent to a question that employees must answer (Smith 2005) in order to prepare an employee development plan that meets organizational requirements. The organization’s requirements are that employees fill in the action plan blank with an objective (i.e., a statement of what the employee will undertake to do under the learning and development category of performance) and an action plan (i.e., a statement of how the employee intends to accomplish their objective). Institutional ethnographers have a special interest in the work employees do in taking up those interrogatory devices (Smith 2005: 226) that transpose experienced actualities into text. Here, however, employees’ work in preparing their employee development plan by filling in the action plan blank is prospective; they are called upon under the terms of their employee development plan to enter in the appropriate place actualities of lifelong learning they propose to experience – actualities waiting to happen, if you will. Employees documenting the lifelong learning they propose to do in this way do not just enter an accountability circuit; they add a component to it. The component is the employee development plan, a text that employees are required to author and then to use to govern themselves in carrying out and reporting on their lifelong learning.

At ABC Company, employee development plans are to be SMART. SMART is a mnemonic for remembering the employer-specified criteria

that a proper instance of a learning and development objective is to meet. A learning and development objective should be specific, measurable, attainable, relevant, and time based. The SMART mnemonic appears at the top of the employee development plan form and features prominently in organizational texts such as various employee communication materials about the balanced scorecard performance management system at ABC Company. Reference to the mnemonic was also frequently made by employees in their descriptions of how they governed themselves in putting together the action plan part of their development plan. For these employees, the mnemonic serves as the standard against which to self-assess the learning and development objective they formulate to enter into the appropriate place on the employee development plan form. To meet the standard, to be SMART, means to mind the mnemonic’s assembly instructions (Smith 1990). Kelly, for instance, comments on how she makes sure her learning and development objective is SMART: “I just basically ... write out exactly what I want to do for each of those letters. So like I put in how much time I’m going to spend on it, what the exact results are going to be from it.” Deanna explains, using a hypothetical example of a course she wants to take: “So you can pick a course ... then you would put that you’re wanting to complete that course by June 30 of this year and the exam will be written on such and such a date and then a reason behind why you’re going to take that, where’s it’s going to take you, what it’s going to help you accomplish.” When employees like Kelly and Deanna respond to the assembly instructions provided by the SMART mnemonic, they become self-governing in assessing the propriety of their learning and development objectives against criteria that are not of their own making. The mnemonic does not permit them to judge for themselves and, with reference only to their own aspirations, the lifelong learning that would be worthwhile to set out to accomplish; it imposes a standard judged by their employer as worthwhile for them to attend to in preparing an employee development plan that will align their individual performance with the dictates of the balanced scorecard. The employee development plan, so constituted by an action plan based on a SMART objective, guides employees in the self-governing work they subsequently do. This work includes implementing their plan (e.g., taking courses at a college or university, attending a workshop offered by a professional association to which they belong, logging on to an online program of instruction) and keeping track of their lifelong learning (writing down or otherwise remembering the details

of that lifelong learning). In this passage, Doris talks about what she understands to be the responsibilities of employees once they have prepared their employee development plan: Well, to summarize, our responsibility is to keep track of the things that we are doing so that you can fill out your [form] with some actual information that is beneficial for your manager because in the end, your manager needs to send this to HR and they look at it and need to see something that they are expecting to see. So I guess I look at it like I’m not going to make this difficult for my manager. I’m going to put in the things, the proper things, the trackable things. And I know that it’s to my benefit to meet all my goals, so I’m going to put goals in there that are meetable. I’m not going to go crazy with that either because I know what I’m capable of and I know how much time I’m prepared to give … That’s just the way it is. This is what we do now so there’s no point in fighting it or making a big fuss about it. That is just not what I’ll do.

Here we get a sense of what is involved for employees in navigating the accountability circuit they had a hand in laying down. Doris shares her understanding of employees’ responsibilities in terms that reveal to us a concern with how the form she fills in will be read and acted upon by others. Managers and human resource developers have their own work to do in relation to Doris and the accountability circuit she is navigating. Doris lets us in on navigational strategies she uses to make easier her own and others’ work. One of them is to set out to do what she knows she can accomplish and has time for (a strategy in keeping with the SMART criteria discussed earlier). Two additional strategies depend on her knowing how to provide information that is beneficial to her manager (because it accounts for the lifelong learning she has done) and that human resource developers expect to see (so that they can administer pay for performance and provide an account to the board of directors of the lifelong learning that employees of ABC Company have collectively accomplished). Rather than fight or make a fuss, Doris explains how she facilitates the operation of the accountability circuit (and her navigation through it) by providing a textual trail that others can follow in doing the work the accountability circuit requires of them. Doris is far from the only employee who is resigned to doing what is required to support the operation of the accountability circuit. Employee development planning is typically regarded by employees as a routine and required, albeit time-consuming and not particularly enjoyable, part

of their jobs. Nolan, for instance, who says he hates filling in the employee development plan form and while he is doing it wonders about better ways he could be spending his time, put it this way: “It’s just part of your job and I just try and think of it as, well, it’s something I’ve got to do. It is what it is.” What it is to Doris is a “monkey on her back,” something she is “forced into doing” if she wants to be able to earn a raise and a bonus. Like Doris, Deanna clearly understands the link between her performance as a lifelong learner and the pay her employer has made contingent on it through the regulatory text of the balanced scorecard. “It is part of what you’re paid for,” she says in a matter-of-fact way. Donna, who says she misses the “old days” when preparing and implementing an employee development plan was not a job requirement, also acknowledges that “unless you’re living under a rock,” lifelong learning through the experience of paid work is “kind of hard to ignore” at ABC Company. As Monique puts it: “You just know that it’s an expectation.” Twice a year, managers check up on whether employees are meeting expectations as part of performance reviews. In the run-up to these reviews, employees make their “results achieved” as work-related learners textually visible (McCoy, 1998) to their managers. They do so by filling in the blank of their employee development plan reserved for accounts of their performance. The blank is also an interrogatory device calling upon employees to recount the lifelong learning they have carried out and then to transpose and offer up what they have recounted as experienced actualities for managerial review. To aid the recounting of “results achieved,” several employees, including Deanna, make it a routine practice to keep track of their lifelong learning. They do so, for example, by opening up their form and entering in bullet points, keeping a paper file, or making a notation on their calendars or other records. In Deanna’s words: “I just go in and type bullet points right on my form. Then when it’s getting close to review time, it’s just a matter of comparing my bullet points to my action plan to see if they fit. Have I reached the initial [objective] that I set for myself? How did I reach the initial [objective] I set for myself?” Managers begin performance reviews once employees have submitted their assessment of their “results achieved.” Narrative comments such as “I agree with you” or “this is an area I’d like to discuss further” are often entered by managers in reply. However, it is not narrative but a numerical rating that is needed to complete the accountability circuit that connects employees’ front-line work as lifelong learners to ABC Company’s balanced scorecard performance management system.

Two texts are taken up to produce the numerical rating. The first of these is a set of HR department guidelines that serve as a textual filter (Smith 2006; Wilson & Pence 2006) for sorting employees’ “results achieved” into categories of lifelong learning, for example, a university course that ABC Company will recognize as worthy of pay for performance. Once results are sorted into categories, a predetermined and corresponding allocation is made of a number of lifelong learning hours for which a manager is justified (according to the guidelines) in giving an employee credit (e.g., a university course equals “X” number of credit hours) towards the annual board-specified balanced scorecard target. Once the total number of hours is tallied, a second textual filter – this one in the form of a grid that equates the total of hours credited to the employee to a rating on a scale of 1 to 4 (i.e., the higher the tally, the higher the rating) – is used. The rating, in turn, is equated to a number of points that represents the “score” the employee has earned for performance as a lifelong learner. Following a discussion with the employee about the rating, the manager submits it to the HR department for calculation of any adjustments to pay warranted by the employee’s individual performance (made at mid-year and year-end) and to bonuses (paid at year-end) warranted by an accounting of the collective performance of all employees.

The discussion in this section concludes with consideration of the consequences of the operation of the accountability circuit in terms of the positions constructed for employees who respond to the accountabilities it imposes on them. When ABC Company incorporated lifelong learning into its balanced scorecard system of performance management, it was, according to an email announcement sent out by an HR department manager, “to assist with the goal of becoming an employee of choice and employer of choice.” It was after reflecting on passages of talk in which employees described their understanding of the meaning of “employee of choice” and “employer of choice” that the terms became recognizable as speech genres, or “configurations of meaning” (Smith 1999b: 120) that have developed in the sphere of activity associated with employee development planning. The speech genre “employee of choice” organizes and standardizes the idea that employees who do the work that is required to navigate the accountability circuit are more valuable to and valued by their employer than employees who do not. Deanna shared her understanding of the speech genre this way: ABC Company “is looking for employees of choice … people that are knowledgeable and that are willing

to continue to learn and continue to know new things.” To Nolan, an employee of choice is someone “who kind of embraces a similar value” to that which ABC Company places on lifelong learning: “I think they would appreciate far more an employee that comes in and is energetic about it [lifelong learning] than someone who comes in and finds it blatantly appalling.” To Kelly, employees of choice “work hard, want to learn, want to develop.” Employees who take on the work of navigating the accountability circuit typically referred, either implicitly or explicitly, to their desire to do what they needed to do in order to earn pay for performance as a lifelong learner. They talked about finding “ways to get it [lifelong learning] done” (Naomi’s words) and making participation in employee development planning “work out for you” (according to Kelly). As Doris notes: “The accountability is in your hands. Basically it’s put in your hands to get the skills you need to do your job properly. It’s really up to you. You’ve just got to find ways to do it. It’s a bonus if you can do it and if you can’t, you’re penalized” (a reference to pay forfeited for non-performance).

In finding ways to get the work done in order to pave the way for an exchange of pay for performance, employees seem to be aware of a trade-off they are making. Naomi reveals this awareness: “I think it’s just a hand-in-hand thing. Here [are] the tools to do your job. You can do it or not, it’s up to you, but then don’t cry if you didn’t get a raise because you’ve been given the opportunity.” Nigel echoes Naomi’s awareness of a trade-off when he comments: “If they [ABC Company] are going to scratch your back, you have to return the favour.” He goes on to express what, in addition to pay for performance, he hopes to get out of returning the favour: “I don’t want to say it’s all about money because I mean you have to like what you do too. But I’d be lying if I didn’t say that was a big part of it for me. I want to be successful as I can in the shortest amount of time so that, you know, down the road when I am 55 ... can I throw in the towel and move out to my farm I want to live on and get out of the city. So that is my bigger goal.”

A companion to “employee of choice” is “employer of choice.” The latter speech genre organizes and standardizes the idea that a company that provides its employees with avenues to pursue lifelong learning is one that people looking for a job want to work for and existing employees want to stay with. To Deanna, “employer of choice” means that ABC Company “is a company that people want to work for.” For Monique, “employer of choice” represents an image her employer is trying to

portray: “They want to have educated employees. That’s just important, it’s just good business to have the best, smartest people working for you.” Nolan says he “knows ABC Company always wants to be the employer of choice” and that supporting employees’ lifelong learning is “a big part” of what it takes for an employer like his to be recognized as one.

Institutional Circuits in Cancer Care
CHRISTINA SINDING

This section draws from a study of cancer care in Ontario, Canada. It focuses on the increasingly common professional practice of describing treatment options and asking (or requiring) patients to choose among them. That cancer treatment involves choices and that patients have a central role in making these choices are features of practice embedded in the texts organizing cancer care. In recommendations regarding provider-patient communication in the Program on Evidence-based Care published by Cancer Care Ontario (CCO), cancer care professionals are called to “make it explicit that there are choices to be made and that the patient should be involved in these choices.” Two bullet points later, the report goes further, encouraging professionals to arrange a return visit for patients “when they have made a decision” (Rodin et al. 2008: 5; emphasis added).

At the cancer centre where this study was based, health professionals locate the impetus for patient involvement in treatment decision-making in key clinical trial findings. In the 1980s, two chemotherapies for early-stage breast cancer were found to have the same survival outcomes but different side-effects and treatment schedules; later trials showed that mastectomy and lumpectomy with radiation conferred the same benefits in terms of recurrence and survival. As the story goes, the dilemmas clinicians now faced – how to recommend one course of treatment over another when clinical outcomes were equivocal – intersected with the increasing visibility and voice of women with breast cancer in the public sphere. The practice of laying out treatment options and encouraging women to choose among them took hold, and decision aids (computer programs, videos, visual displays, and brochures that make clinical information about treatment risks and benefits available in an accessible form) proliferated.

Evaluation studies proceeded apace, describing positive associations between involvement in decision-making and various quality-of-life measures (Hack et al. 2006). One might argue that with decision aids the resources required for making treatment choices were democratized: information once available only to those with access to (and the capacity to decode) scientific knowledge was now available to all. Yet, while such aids make medical information more available, they also establish medical information as “what matters” in how people receive one treatment over another (or receive treatment at all). Critical qualitative studies with patients and professionals highlight how treatment pathways are shaped by actualities well beyond medical information: social locations, relationships, and responsibilities; life experiences; professional practices, especially referral practices; and health care policy (Hudak et al. 2002; Mykhalovskiy & McCoy 2002; Sinding & Wiernikowski 2009).8

Further to these critiques is the tension – remarkably unacknowledged – between a practice that purports to be in patients’ interests and many patients’ discomfort with the practice. The literature on treatment decision-making recognizes that the desire to make treatment decisions is variable among patients and along treatment trajectories (Charles, Gafni & Whelan 1997; Say, Murtagh & Thomson 2006). Yet reluctance to make treatment decisions is rarely explored.9 Quite commonly, patients’ disinclination to declare treatment choices is met with a call for interventions to support them to do just that. Ambivalence about treatment decision-making is persistently translated into deficit, into a condition that requires remedying.

This section takes seriously patients’ ambivalence about their role in treatment decision-making. It assumes that any institutional practice carries messages about how service providers and service users are to conduct themselves; in other words, institutional practices are embedded in specific courses of action that must be negotiated and taken up or refused at each link in the social relation. In this section, women’s descriptions of their encounters with physicians are explored for the patient-physician courses of action they carry and for the types of statements and actions rendered possible and foreclosed by current decision practices. Women’s ambivalence about their responsibility for decisions and their attempts to reconfigure patient-physician relations can be understood as responses to accountabilities they did not choose. At the same time, we see women’s active participation in accountability relations and their

formulation of their own statements and actions in accordance with institutional imperatives. The words of two research participants, Sheila and Robyn, are highlighted. Sheila and Robyn are younger than the average cancer patient, are very well educated (both hold PhDs in science disciplines), and have high-status occupations and well above average incomes. According to findings in the literature on treatment decision-making, women with this demographic profile are especially likely to welcome active involvement in medical decisions. In terms more relevant to this chapter, Sheila and Robyn are especially well positioned to govern themselves effectively as the sort of patient-subjects made available by cancer care decision practices. Their discomfort with and attempts to renegotiate their positions are thus particularly revealing.

In their accounts, we can see traces of contemporary discourses about service users operating across public service institutions made manifest in texts at the cancer centre (e.g., the “Your Health Care – Be Involved” campaign, described in the discussion). These discourses typically “combine an apparent increase in power (as partner, as customer) with increasing responsibilities (to participate in policy making or service delivery, to make informed choices)” (Barnes & Prior 2009: 5). We can also see traces of evidence-based medicine (EBM) in the forms of patient-physician relations women describe, relations mediated by scientific evidence (Mykhalovskiy & Weir 2004). As we watch how women negotiate their responsibilities and their relations with physicians, we can see especially the disjuncture between EBM’s probabilistic rationality – the neutral account of statistical “likely-to-happens” – and women’s efforts to have their doctors say what they really want and reveal what they really know.

In the passage below, Robyn reflects on a meeting with her oncologist after her chemotherapy dose had been reduced to alleviate difficult side-effects:

robyn: He said, “Well what do you want to do? Do you want to go back up to the full dose and risk having another episode or do you want to stick on the 15 per cent, the reduced by 15 per cent?” And I remember thinking, don’t ask me that, you decide. But then I got crafty and I figured – and I figured this out pretty quickly – just keep him talking for about two-and-a-half minutes and they’ll figure out what it is they really want to do. So that’s what I did.
interviewer: OK. So how did you keep that going?

robyn: Oh I don’t know, just using your little interviewing tricks and maybe expressing a little uncertainty. But I just found if I could keep him talking long enough, they’d show their true colours.
interviewer: And do you remember how they showed their true colours?
robyn: Well, with that particular incident it was very clear that he wanted me to go back up to the full dose. Which I did.

Reflecting on similar dynamics, Sheila described an interaction with her surgeon as he presented the options of mastectomy or lumpectomy: “He said ... ‘It is your choice. I will do what you want me to do.’ And then I pushed him and I said, ‘What would you do?’ And he didn’t want to answer. And I said, ‘All right, medically what is the better choice – never mind, medically what’s the better choice?’ And he said, ‘My opinion is the more tissue I can remove the lower the risk.’” On the face of it, the work that women are doing here is quite puzzling. They appear to be trying to secure something from physicians that physicians are unable or reluctant to offer. The resource that is elusive in these interactions is, it seems, more substantial direction or guidance. In both instances, the women describe interactions in which particular kinds of statements on the parts of physicians appear to be out of bounds or prohibited or at least require withholding for some time or until certain conditions have been met. Ziebland and colleagues noted a similar finding in their study of women with ovarian cancer; in some cases “the medical team seemed (to the woman) strangely reluctant to express an opinion” (2006: 365). These accounts echo the comments of one of the administrative staff at the cancer centre who, diagnosed with cancer, asked several of her oncologist colleagues what they would do in her situation. None would respond to her question; she finally received what she called a “back door” answer – an answer delivered indirectly, and in a way that suggested its unauthorized nature. As Sheila reflects on her relationship and a specific meeting with her medical oncologist, we learn more about the particular sort of patient assumed or required in contemporary cancer care: You’re asking me to make a choice here and I’ll go away and make an informed choice. But if I make the wrong choice from a medical perspective you’d better tell me I’ve made the wrong choice and tell me why ... Before I answered him I actually did say, “Look, you know I know you’re



telling me that these [treatments] are equal and I accept your – the logic is there but if I choose one that you wouldn’t have chosen I want you to tell me that you would have chosen the other.” He said, “OK, I’ll tell you.” And so then I said, “I want [this] one.” And I said I understand about the risks, I understand about this, but you know. And he said, “Well, actually that’s the one I typically go with.” OK, we’re in agreement then. But I actually did tell him that. And he says, “Good, we can start next week.” I said, “Sounds good to me.” So that’s what we did.

In this passage, Sheila describes the self-governing she understands to be required of her as a patient. She is required to make a choice, and more specifically a choice that is “informed.” To align her own conduct with policies and practices governing decision-making in cancer care she must acquire medical information and decide on a course of treatment relatively autonomously – she will “go away” and do this. She is also required to speak first, to name a chemotherapy; she cannot ask him to declare his opinion before she has declared her choice. In one of the curious paradoxes of self-governance, it also appears here that taking up the “involved, informed patient” position – engaging with “the logic” of the treatment options, becoming accountable to this particular formulation of patienthood – is also a way to resist (sole or primary) responsibility for the treatment decision. An involved patient, here, achieves a (more) involved physician; a patient engaged with the medical information achieves a physician willing to commit, if not to saying what he or she would do, at least to taking a stand on the choice she names. Above, Robyn names her actions as “crafty”; she perceives herself, at some level, to be tricking her oncologist into saying what “he wanted,” what he thought was best for her. Sheila recounts formulating a question in a way that secured the sort of response she sought. She makes visible the skilful semantic work required to make a desire for more direction actionable in relation to current practices (Smith 2005): she transforms her question from one that seeks direction for her as an individual and that addresses the oncologist personally to one that solicits a probabilistic assessment of the general clinical situation. Sheila’s work allows us to see the sorts of self-governance required in this clinical setting. For even when a patient secures a response meaningful to her, both the woman and the physician remain accountable to a discourse in which patients are responsible for treatment decisions. The physician’s words (and silences) and the women’s words (and

silences) continue to make the interaction “reportable” as one in which the patient, not the physician, determines the treatment. Of course, it is entirely commonplace in health care interactions for physicians to seek patients’ consent to treatment. Conversational analytic work describes the subtle and persistent efforts of physicians to achieve a patient’s endorsement of a treatment recommendation in the course of a consultation (Stivers 2006). The situation in cancer care seems quite different at this historical moment. Here, the patient is called to name the treatment, and the physician endorses the patient’s choice – or does not. Certainly there are instances in which oncologists and nurses work to persuade patients to take a treatment they think is likely to make a significant difference in whether or not a cancer recurs. But the default practice, expected of both patients and physicians, calls for the physician to provide information and options and for the patient to name the treatment. In reflecting on the nature of cancer treatment Sheila makes more visible what she wants from her oncologists: I think what it is, is not precise. Like medicine and cancer treatment and all that stuff is as much black magic ... I think for me its acknowledgement that it’s imprecise and so it is a bit of a crap-shoot. But at the same time good gamblers know a little, have some insight into it ... it’s not black and white, it’s very grey and so – but within that greyness these guys are, they’ve seen hundreds and they’ve read about thousands of women and so they do have their own perceptions of what might work and it’s that intuition and gut feel based on practical experience. I want that too. I want to tap into that. I don’t just want clinical blah blah blah. OK, I believe that, I can get to those conclusions if I read enough. Why I think you’re an expert is because you’ve applied that. And so I’m wanting your best intuition and gut even though we don’t call it that because that’s very not medical [laughs], that’s not professional.

Sheila believes that the treatments are equivalent in scientific terms. She believes she has received the research evidence; she believes her oncologist has conveyed all of the statistically significant information about risks and benefits. But this is not all there is, and this is not all she wants from him. She also wants him to bring his judgment, his opinion and his expertise to bear on her situation. In a realm that is, she suggests, as much magic and gamble as it is science, she calls for his “intuition and gut

feel.” These are sources of knowledge that she values and that she wants to draw forward from him to secure the best possible treatment (and future) for herself. Yet these sources of knowledge are, as she says, “very not medical ... not professional” – which is, it seems, one of the reasons she has to work so diligently to access them. While the formal currency in the realm of cancer care is information about treatment options, benefits, and risks, current institutional accountabilities appear to have created an underground economy in advice and direction, and perhaps also in clinical wisdom. Sheila’s reflection on how a physician’s practical experience might not be considered “professional” aligns with the concern that EBM is eroding (explicit use of) experience and professional judgment (Gray & McDonald 2006). In women’s accounts we see how a discourse of evidence-based medicine can organize and limit how women and physicians can speak and act, how they can draw on their experiences, and how they can be in relationship with one another. As Mykhalovskiy and Weir (2004) point out, however, EBM does not merely diminish one sort of physician-patient relation, but activates another, one mediated by scientific evidence. The women in our study evoked – and worked to arrange for themselves – relations with physicians mediated by something other than evidence. However compassionately the EBM-sanctioned doctor offers the statistical probabilities, some women with cancer clearly desire a physician with different sorts of commitments, a physician willing to say what he or she really wants for patients, willing to say what he or she really knows. They invoked physicians who make their experience, intuitions, and stake in patients’ lives more apparent and available, who are engaged with patients in ways the current organization of cancer care seems to constrain.

Contemporary identities for citizen-users commonly “combine an apparent increase in power (as partner, as customer) with increasing responsibilities (to participate in policy making or service delivery, to make informed choices)” (Barnes & Prior 2009: 5). Treatment decision practices at the cancer centre have this quality: they appear to offer women more power at the same time as they change and ramp up the responsibilities women bear. Like their sisters diagnosed with cancer 10 years ago, women are responsible for myriad profound and practical tasks (one woman in our study, noting that she did not actively pursue information about various treatments, said she was concentrating on other things: “What was going to happen when I was out of commission, who was going to ... get Christmas going. You know, stuff like
that”). In keeping with decision policies and practices in cancer care, women diagnosed with cancer now are also responsible for engaging medical information; locating their own bodies and lives in relation to statistical probabilities of recurrence and death; and naming their own treatments.10 Women’s ambivalence about their responsibility for treatment decision-making can be understood as discomfort with or resistance to the particular patient-physician relations carried by current practices and specifically with how these relations foreclose certain aspects of care. Yet contemporary self-governance is complex. While the enjoinder to make treatment decisions often required crafty evasion, being positioned as the decision-maker was also welcomed as entitlement. As Robyn said of her treatment, “Obviously it’s my choice and I want it to be my choice and even if they didn’t say it was my choice, it was my choice.” Here we can see how patient empowerment might well have a dual character, carrying meanings and operating institutionally in ways that are at least partly distinct from how patients experience and understand it. In Managing to Nurse, Janet Rankin and Marie Campbell (2006) show how “quality” as it operates as a feature of health-care management is quite distinct from adequate nursing care and, further, its deployment institutionally commonly overlooks or erases the actualities of good and bad care encountered and experienced by patients and their families.

A woman who took part in this study made reference to the “Your Health Care – Be Involved” campaign, ubiquitous at the cancer centre. While the title echoes the discourse of patient participation in decision-making, the campaign, organized by the Ontario Hospital Association (OHA), is focused on patient safety. The embeddedness of (what appear to be) patient empowerment messages in texts focused on reducing system-generated harms suggests that health professionals and women with cancer may be conducting themselves in relation to agendas and interests quite distinct from their own. Here the exhortation to patients to “be involved” resonates with broader patterns of governance: individuals who are ill are made responsible not only for managing their own health condition and its concomitant risks, but also for managing the institution and the risks it poses. Links between patient safety as a “global imperative” (Donaldson & Philip 2004) and patient involvement in treatment decision-making are suggested by Godolphin (2009) and Vincent and Coulter (2002). Godolphin notes that, in responding to major UK reports about medical errors, the Department of Health (DH) and the Institute of Medicine
have “insisted on patient involvement in decision-making and training of health professionals for ‘new rules’ for twenty-first century health care that make the patient the ‘source of control’” (2009: 186). Prescribing errors and lack of adherence to medications raise financial and health concerns and, Godolphin writes, some errors and adverse events are avoidable through patient involvement. There are additional threads to be pursued here. Wennberg and colleagues (among them Annette O’Connor, a Canada Research Chair) (2007) draw together studies that show decreasing rates of surgery in study groups whose members were “fully informed” or took part in “shared decision making.” On the basis of such studies, these authors claim, the informed patient – not the professionally defined guidelines – should be established as the arbiter of medical necessity. Writing of the situation in the United States, they argue that the Centers for Medicare and Medicaid Services (CMS), through its pay-for-performance agenda, should compensate hospitals for establishing a certified shared decision-making process. Eventually (they suggest five years after the demonstration project begins), providers who fail to adopt the revised definition of medical necessity should no longer be reimbursed for performing discretionary surgeries. A link between patients’ responsibility for treatment decisions and efforts to contain health-care costs becomes apparent here; the mechanism that enables this link – pay-for-performance governance in health care – is also made visible.

Chapter Discussion and Analysis: Resonances and Dissonances

The three instances presented above are very different; yet looking at them through the frame of institutional ethnography reveals strong resonances between the operation of the institutional circuit in each instance. In the Australian VET sector, there is an accountability circuit that is clearly textual and external and has the force of regulation. In the HRD instance, the accountability circuit is explicitly textual and internal to the business organization. In the breast cancer clinic, the textual nature of the institutional circuit is more diffuse and exists in documents such as practice guidelines and decision aids. Despite the differences, educators, employees, and cancer patients all are drawn into self-governance through textually mediated institutional circuits in which they translate their own activities, decisions, or intentions into categories that have been set up to fit an institutional text. The information
they create can then be read by themselves and others within its terms. In creating these representations, the reality of what is actually happening in each instance disappears and the text-based representation takes its place. Each of these instances reveals an institutional message about how people at the front line are to understand and conduct themselves within the particular context, and these messages are reflected in the terms of some form of reporting template. In VET, the AQTF and Training Packages convey explicit institutional messages about how educators must understand and conduct themselves in developing and delivering vocational training programs; the reporting template is in the form of textual training and assessment plans that educators must develop for each program they deliver. In the instance of HRD within a business organization, texts convey messages about how employees are to understand and conduct themselves as lifelong learners; the reporting template takes the form of the employee development plan that employees produce and then use to demonstrate they are performing as a lifelong learner in keeping with organizational requirements. In the cancer clinic, practice guidelines and patient education materials convey messages about how physicians and women patients are to understand and conduct themselves; the reporting template is in the various forms women sign signifying consent for treatment. In each context, the messages conveyed are both implicit and explicit. In each instance, institutional circuits bring about an apparent shift in responsibility from the organizational level to the individual at the front line who must now participate in self-governance. Training and assessment plans documented by individual vocational educators are integral to maintaining each institution’s registration and its authority to deliver vocational qualifications. Individual employees are responsible for ensuring their business organization’s ongoing competitiveness and survival through the lifelong learning they undertake as a result of preparing and implementing their SMART employee development plans. Individual cancer patients are responsible for considering the weight of clinical evidence and choosing the most appropriate treatment option for their individual diagnosis; in undertaking this work, individual patients help providers demonstrate organizational adherence to best practices in patient-centred care. In each of these instances, institutional circuits bring together different parties who, in very broad terms, share an interest in certain outcomes. In vocational education both the management of Registered

Training Organisations and the educators employed within those organizations have a shared interest in achieving compliance and thereby maintaining their registration to continue delivering educational programs (and continue employing the educators who work in those programs). Similarly, both management and employees come to share the interest of ABC Company’s board of directors in maintaining the business organization’s competitiveness and reputation as a learning organization capable of economic survival. In the cancer clinic, physicians, other medical staff, and patients and their families all share an interest in selecting the best treatment options for each individual patient. In each instance, there are individual benefits to successful self-governance through engagement with the institutional circuit and negative implications of failure to self-govern. For vocational educators, the tangible benefit for achieving compliance is the continuation of their employment, while the implication of being found “non-compliant” is the risk that their RTO will lose authority to deliver the programs in which those educators are employed. ABC Company employees earn salary increases and bonuses for their participation in lifelong learning that supports the organization’s business strategy; there is either less or no pay for performance when employees do not plan, document, and undertake appropriate lifelong learning. The benefits to cancer patients are felt as particularly tangible and significant, relating to the selection of the “right” or “wrong” cancer treatment. In general ways, then, organizational and individual interests coincide in the institutional circuit managing front-line work; various benefits accrue to the self-governance of individuals. Yet there are other things going on here. Each of these institutional circuits appears to effect a close and positive alignment between organizational interests and the personal interests of individuals. This is perhaps most obvious in the instance of ABC Company, where the interest of the board of directors in positioning the company competitively seems to intersect with each employee’s interest in being rewarded for developing her or his own knowledge and skills. Yet definitions of learning narrow to those that are eligible for reward, and others, which may fulfil employees’ non-monetary aspirations, fade away. Similarly, in cancer care, treatment decision practices allow cancer patients a chance to assert some power in a life situation often characterized by a sense of helplessness; these practices allow them to claim some control over a future thrown into question. Yet the myriad resources beyond medical information that women might draw on to face cancer are obscured, and, as shown

above, aspects of professional care that women value are foreclosed. For educators, deeply held convictions about the value of vocational education, ostensibly entrenched as RTO registration is sustained, are undermined in the very process of achieving compliance. Among the most significant negative consequences of the forms of self-governance activated in institutional circuits designed to manage front-line work is the erosion of professional judgment. It is ironic that while self-governance through institutional circuits shifts responsibility to individuals at the front line, it simultaneously erodes the freedom of front-line workers to use their professional judgment. In each instance presented in this chapter, it is apparent that self-governance is achieved in such a way that professionals shape their practice using the frames of the institutional circuit and its associated institutional texts rather than shaping their practice based on their pedagogical, professional, and personal frameworks. Vocational educators lament reporting and compliance frameworks that in some cases require them to document what they deem poor educational practice. In devising learning and development objectives that conform to the SMART mnemonic, employees lose the freedom to judge for themselves the lifelong learning in relation to their work that would be worthwhile. Cancer patients express frustration that taking on the responsibility for making their own treatment decisions constrains their access to the physician’s judgment, opinion, and expertise. In the accounts offered by informants in each of these studies, we can also see the work that people at the front line must do to navigate institutional circuits in such a way that they complete the circuit but to some extent still get what they need and want. Both Robyn and Sheila were able to explicitly describe the work they did in the attempt to renegotiate their position. Robyn described herself as getting “crafty” and using “little interviewing tricks” to elicit the information she really wanted from her physician. Sheila described the skilful semantic work she undertook in declaring her treatment choice in such a way that she elicited feedback from her physician about the appropriateness of her choice against other options. This adroit navigation work is clear in the cancer clinic instance. Yet traces of this work are also evident in the other two instances. Louise devised strategies to comply with a reporting format that required her to show that she included units of competency in her student information, but she included these texts in a way that foregrounded her own explanation of course content and assessment criteria and provided the formal documentation only as

additional information for students who looked for it. Employees of ABC Company complete the circuit of accountability for lifelong learning aware that, while they may not be fulfilling their own aspirations, they can at least be paid for fulfilling those of their employer. Regardless of how they felt about their accountabilities and responsibilities and despite the success of some strategies to renegotiate them, there is a level of acceptance at the front line of the inevitability of being positioned in this way. Across the participants whose accounts are presented here, acceptance ranges from those who express a level of discomfort but nonetheless participate in self-governance to those for whom the institutional circuit managing their work has become a way of thinking and has ceased to be something that is imposed. In the vocational education context, educators accepted that they were responsible for the ongoing registration of their institutions by ensuring that they documented educational practice to demonstrate compliance with audit requirements. Vanessa, in particular, expressed pride in her ability to anticipate what the auditor would ask and to include the required information in her training and assessment plan, ready to present at audit. Fiona and Jessica expressed frustration with aspects of the audit process, but still indicated their willingness to do whatever was necessary and provide whatever evidence was asked for, in order to meet the auditor’s expectations. Even Louise, who described herself as feeling “oppressed” by the accountability circuit, reshaped her documented practice in order to achieve compliance. Doris, Nolan, Deanna, and Monique all expressed a matter-of-fact acceptance of employee development planning, despite its being a time-consuming process that they did not enjoy. Going along was to them preferable to forfeiting the opportunity to be paid for performance as a lifelong learner. Despite the significant implications of making a poor choice, Sheila and Robyn conformed to the requirements that they be seen to make their own “informed” decisions. In this chapter, we have shown how the managerial development of institutional circuits increasingly trades on our hopes and desires: to teach, to learn, to make decisions for ourselves. The self-governing capacities of individuals and groups have long been foundational to ruling (Sorensen & Triantafillou 2009). Yet the mechanisms by which self-governance is activated, and the nature of self-governance itself, have shifted. Whereas discipline prompted docile bodies, capable individual selves are now brought about “through initiatives that augment their self-esteem, life-long learning abilities, entrepreneurial capacities

and flexibility and sense of responsibility” (ibid.: 3). In this ostensibly kinder and gentler self-governance, people at the front line are still subject to negative consequences for failing to conduct themselves properly and are drawn into institutional courses of action by the apparent alignment of institutional interests with their own personal wishes. While there are opportunities in these newly configured institutional circuits, there are also, as we have attempted to show, important and often invisible losses. These institutional circuits also trade on values attributed to emancipatory social movements. Lifelong learning, first advocated internationally as the basis for educational system reforms aimed at promoting democracy, equal opportunity, and self-fulfilment for all (Rahnema & Ward 1972), has been reduced, in the instance of ABC Company, to a business strategy aimed at promoting economic competitiveness. Employees are key to this business strategy, but the lifelong learning they do by navigating the accountability circuit of their business organization has more economic than social currency. In cancer care, patient participation in treatment decision-making appears to bring medical practice into alignment with feminist efforts to reclaim women’s control over their own bodies and of knowledge related to health and health care. It is not entirely clear, however, that the sort of involvement made available by contemporary decision-making practices is the sort that was envisioned by women’s health activists (Lupton 1997).11 In more basic terms, requiring patients to be active in treatment decision-making, even if they prefer not to be, undermines the autonomy that proponents of the practice often claim to support. Even if there were compelling evidence that declaring treatment choices confers benefits, calling on patients to participate in treatment decisions because it is “good for them” hardly challenges paternalism. Finally, few educators in Australia dispute the individual and social benefits of offering a system of nationally recognized vocational qualifications to parallel established systems of recognized school and university qualifications. Yet on closer examination it is apparent that the discourses of quality that are foundational to this national system do not always support good educational practice. By exploring self-governance and institutional circuits in three very different contexts, this chapter has sought to contribute to the institutional ethnography project of piecing together representations and analyses of institutional processes from different positions (Smith 1999a: 79). It has shown that, despite significant differences in the nature of the institutions explored and the textual form of their institutional circuits,

there are strong resonances between the ways in which individuals at the front line are caught up in institutional objectives and drawn into the work of self-governance.

NOTES

1 This and all other participant names throughout this chapter are pseudonyms.
2 A “chook” is a chicken; in this colourful Australian colloquialism, Louise conjures a visual image of desperate but aimless activity, akin to an unrestrained chicken running for a while after its head is cut off.
3 The limitations these institutional circuits impose on students are explored elsewhere (Grace 2008: 77–102).
4 “Performance” is an abstract concept that suppresses and suspends the presence of employees as actual subjects (Smith 1990) and, in doing so, renders their work an object of management.
5 While the accountability circuit discussed in this section is a feature of a performance management system in use in a business organization, a strong interest in performance management in non-business organizations also grew as “value for money and clear accountability” (Bach 2000: 242) emerged as managerial relevancies in recent decades.
6 Among the other informants were managers, human resource developers, and a consultant to the business organization.
7 Robert Kaplan and David Norton (1992; Kaplan 2009) are credited with introducing the balanced scorecard system of performance management in the 1990s. Today, theirs is among the most dominant and preferred systems of performance management adopted (and adapted) by business and non-business organizations in Canada, elsewhere in North America, and around the world.
8 A study by Mykhalovskiy and McCoy (2002) of people living with HIV and AIDS deliberately contrasted accounts of treatment decisions among people in a range of social locations and “quickly learned that the treatment decision making discourse is a way of speaking about coming to be on treatments that is available primarily to middle-class men.” For people with HIV/AIDS (PHAs) living on social assistance or in jail, being on treatments “could involve relations of compulsion rather than decision making” (2002: 31).
9 For exceptions, see Waterworth & Luker (1990) and Noerreslet, Jemec & Traulsen (2009).
10 While not addressed in this chapter, responsibility for treatment outcomes also appears to have landed on patients (Sinding et al. 2012).
11 As Horin (1995) notes, “I might have yelled ‘Power to the People’ in some demo 20 years ago ... but I didn’t actually mean power to me over every technical decision that would crop up in my life. I didn’t seek to be ‘empowered’ in matters that bored me, like tax, or that totally baffled me, like expensive [medical] tests” (cited in Lupton 1997: 373). Lupton (1997) suggests that efforts to promote patient participation in treatment decision-making have failed to fully engage the particular emotional and embodied vulnerability of illness and patients’ concomitant dependence on physicians. These kinds of analyses have led some commentators to suggest that patient autonomy might be best supported not through efforts to better inform patients about their options, but through attention to governance structures that ensure physicians act in the best interests of patients (O’Neill 2002).

REFERENCES

Aguinis, H. 2007. Performance Management. Upper Saddle River, NJ: Pearson Education.
Armstrong, M. 2003. A Handbook of Human Resource Management Practice. 9th ed. London: Kogan Page.
Bach, S. 2000. From performance appraisal to performance management. In S. Bach & K. Sisson (eds), Personnel Management: A Comprehensive Guide to Theory and Practice, 241–63. Malden, MA: Blackwell.
Barnes, M., & D. Prior. 2009. Examining the idea of “subversion” in public services. In M. Barnes & D. Prior (eds), Subversive Citizens: Power, Agency and Resistance in Public Services, 3–16. Bristol: Policy.
Blanchard, P.N., & J.W. Thacker. 1999. Effective Training: Systems, Strategies and Practices. Toronto: Prentice Hall.
Bratton, J., J. Helms Mills, T. Pyrch & P. Sawchuk. 2004. Workplace Learning: A Critical Introduction. Aurora, ON: Garamond.
Cardy, R.L. 2004. Performance Management: Concepts, Skills and Exercises. Armonk, NY: M.E. Sharpe.
Charles, C., A. Gafni & T. Whelan. 1997. Shared decision-making in the medical encounter: What does it mean? (Or it takes at least two to tango). Social Science & Medicine 44 (5): 681–92. http://dx.doi.org/10.1016/S0277-9536(96)00221-3.
Clarke, N. 2004. HRD and the challenges of assessing learning in the workplace. International Journal of Training and Development 8 (2): 140–56. http://dx.doi.org/10.1111/j.1468-2419.2004.00203.x.
Darville, R. 1995. Literacy, experience, power. In M. Campbell & A. Manicom (eds), Knowledge, Experience, and Ruling Relations: Studies in the Social Organization of Knowledge, 249–61. Toronto: University of Toronto Press.
Department of Education, Employment and Workplace Relations (DEEWR). N.d. Department of Education, Employment and Workplace Relations: National Training System Glossary. Retrieved www.dest.gov.au/sectors/training_skills/policy_issues_reviews/key_issues/nts/.
Department of Education, Employment and Workplace Relations (DEEWR). 2008. Implementation and use of training packages. In Training Packages @ Work: Back to Basics Edition 3. Retrieved www.tpatwork.com/back2basics/db1_implementationa.htm.
Department of Education, Science and Training (DEST). 2007a. AQTF 2007: Building Training Excellence: Audit Handbook. Canberra City: DEST.
Department of Education, Science and Training (DEST). 2007b. AQTF 2007: Building Training Excellence: Essential Standards for Registration. Canberra City: DEST.
Department of Education, Science and Training (DEST). 2007c. AQTF 2007: Building Training Excellence: National Guideline for Risk Management. Canberra City: DEST.
Department of Education, Science and Training (DEST). 2007d. AQTF 2007: Building Training Excellence: User’s Guide to the Essential Standards for Registration. Canberra City: DEST.
Department of Employment and Training Queensland (DET Qld). 2003. Training Packages @ Work: Back 2 Basics: An Introduction to Australia’s National Training System for Teachers and Trainers. Brisbane: Training Products Support, DET Qld.
DeVault, M., & L. McCoy. 2006. Institutional ethnography: Using interviews to investigate ruling relations. In D.E. Smith (ed.), Institutional Ethnography as Practice, 15–44. Lanham, MD: Rowman & Littlefield.
Donaldson, L., & P. Philip. 2004. Patient safety: A global priority. Bulletin of the World Health Organization 82 (12): 891–970.
Fenwick, T. 1998. Questioning the concept of the learning organization. In S.M. Scott, B. Spencer & A.M. Thomas (eds), Learning for Life: Canadian Readings in Adult Education, 140–52. Toronto: Thompson Educational.
Fletcher, C., & E. Perry. 2001. Performance appraisal and feedback: A consideration of national culture and a review of contemporary research and future trends. In N. Anderson (ed.), Handbook of Industrial, Work and Organizational Psychology, 127–44. Thousand Oaks, CA: Sage.
Floodgate, J.R., & A.E. Nixon. 1994. Personal development plans: The challenge of implementation. Journal of European Industrial Training 18 (11): 43–7. http://dx.doi.org/10.1108/03090599410073550.
Godolphin, W. 2009. Shared decision-making. Healthcare Quarterly 12 (sp): 186–90. http://dx.doi.org/10.12927/hcq.2009.20947.
Gold, J. 1999a. Performance appraisal. In J. Bratton & J. Gold (eds), Human Resource Management: Theory and Practice, 213–36. 2nd ed. Mahwah, NJ: Lawrence Erlbaum.
Gold, J. 1999b. Human resource development. In J. Bratton & J. Gold (eds), Human Resource Management: Theory and Practice, 306–57. 4th ed. Houndsmill: Pergamon Macmillan.
Grace, L.J. 2008. Vocational Education in Australia: The Power of Institutional Language. Saarbrücken: VDM Verlag Dr Müller.
Gray, M., & C. McDonald. 2006. Pursuing good practice? The limits of evidence-based practice. Journal of Social Work 6 (1): 7–20. http://dx.doi.org/10.1177/1468017306062209.
Hack, T., L. Degner, P. Watson & L. Sinha. 2006. Do patients benefit from participating in medical decision making? Longitudinal follow-up of women with breast cancer. Psycho-Oncology 15 (1): 9–19. http://dx.doi.org/10.1002/pon.907.
Hodkinson, P., & M. Bloomer. 2002. Learning careers: Conceptualizing lifelong work-based learning. In K. Evans, P. Hodkinson & L. Unwin (eds), Working to Learn: Transforming Learning in the Workplace, 29–43. London: Routledge.
Horin, A. 1995. It’s the price we pay for empowerment. Sydney Morning Herald, 7 October, 21.
Hudak, P.L., J.P. Clark, G.A. Hawker, P.C. Coyte, N.N. Mahomed, H.J. Kreder & J.G. Wright. 2002. “You’re perfect for the procedure! Why don’t you want it?” Elderly arthritis patients’ unwillingness to consider total joint arthroplasty surgery: A qualitative study. Medical Decision Making 22 (3): 272–8.
Jennings, V. 2004. Text Analysis Report on a Unit of Competency from the National Beauty Training Package. Paper Developed for Course Working Text. Queensland: Griffith University.
Kaplan, R.S. 2009. Conceptual foundations of the balanced scorecard. In C.S. Chapman, A.G. Hopwood & M.D. Shields (eds), Handbook of Management Accounting Research, 1253–69. Amsterdam: Elsevier.
Kaplan, R.S., & D.P. Norton. 1992. The balanced scorecard: Measures that drive performance. Harvard Business Review 70 (1): 71–9.
Knight, A., & M. Nestor. 2000. A Glossary of Vocational Education and Training Terms. Leabrook, South Australia: National Centre for Vocational Education Research.
Lupton, D. 1997. Consumerism, reflexivity and the medical encounter. Social Science & Medicine 45 (3): 373–81. http://dx.doi.org/10.1016/S0277-9536(96)00353-X.
McCoy, L. 1998. Producing “what the deans know:” Cost accounting and the restructuring of post-secondary education. Human Studies 21 (4): 395–418. http://dx.doi.org/10.1023/A:1005433531551.
McCoy, L. 1999. Accounting Discourse and the Textual Practices of Ruling: A Study of Institutional Transformation and Restructuring in Higher Education. PhD dissertation, University of Toronto.
Merriam, S.B., R.S. Caffarella & L.M. Baumgartner. 2007. Learning in Adulthood: A Comprehensive Guide. 3rd ed. San Francisco: Jossey-Bass.
Mykhalovskiy, E., & L. McCoy. 2002. Troubling ruling discourses of health: Using institutional ethnography in community-based research. Critical Public Health 12 (1): 17–37. http://dx.doi.org/10.1080/09581590110113286.
Mykhalovskiy, E., & L. Weir. 2004. The problem of evidence-based medicine: Directions for social science. Social Science & Medicine 59 (5): 1059–69. http://dx.doi.org/10.1016/j.socscimed.2003.12.002.
National Centre for Vocational Education Research (NCVER). 2009. Australian Vocational Education and Training Statistics: Students and Courses 2008. Adelaide: NCVER.
National Training Information Service (NTIS). N.d. Comprehensive information for the training sector specialist: Browse training packages. Retrieved 22 December 2009. www.ntis.gov.au.
Noerreslet, M., G. Jemec & J. Traulsen. 2009. Involuntary autonomy: Patients’ perceptions of physicians, conventional medicines and risks in the management of atopic dermatitis. Social Science & Medicine 69 (9): 1409–15. http://dx.doi.org/10.1016/j.socscimed.2009.08.036.
O’Neill, O. 2002. Autonomy and Trust in Bioethics. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511606250.
Rahnema, M., & E.C. Ward. 1972. Learning to Be: The World of Education Today and Tomorrow. Paris: UNESCO.
Rainbird, H., A. Fuller & A. Munro. 2004. Workplace Learning in Context. London: Routledge.
Rankin, J., & M. Campbell. 2006. Patient satisfaction and the management of quality. In Managing to Nurse: Inside Canada’s Health Care Reform, 112–38. Toronto: University of Toronto Press.
Rodin, G., J.A. Mackay, C. Zimmerman, C. Mayer, D. Howell & M. Katz. 2008. Provider-Patient Communication: A Report of Evidence-Based Recommendations to Guide Practice in Cancer. Program in Evidence-Based Care. Toronto: Cancer Care Ontario.
Say, R., M. Murtagh & R. Thomson. 2006. Patients’ preference for involvement in medical decision making: A narrative review. Patient Education and Counseling 60 (2): 102–14. http://dx.doi.org/10.1016/j.pec.2005.02.003.
Sinding, C., & J. Wiernikowski. 2009. Treatment decision making and its discontents. Social Work in Health Care 48 (6): 614–34. http://dx.doi.org/10.1080/00981380902831303.
Sinding, C., P. Miller, P. Hudak, S. Keller-Ollaman & J. Sussman. 2012. Of time and troubles: Patient involvement and the production of health care disparities. Health (London) 16 (4): 400–17.
Slotte, V., P. Tynjala & T. Hytonen. 2004. How do HRD practitioners describe learning at work? Human Resource Development International 7 (4): 481–99. http://dx.doi.org/10.1080/1367886042000245978.
Smith, D.E. 1990. The Conceptual Practices of Power: A Feminist Sociology of Knowledge. Toronto: University of Toronto Press.
Smith, D.E. 1999a. From women’s standpoint to a sociology for people. In J.L. Abu-Lughod (ed.), Sociology for the Twenty-first Century, 65–82. Chicago: University of Chicago Press.
Smith, D.E. 1999b. Writing the Social: Critique, Theory, and Investigations. Toronto: University of Toronto Press.
Smith, D.E. 2005. Institutional Ethnography: A Sociology for People. Lanham, MD: AltaMira.
Smith, D.E. 2006. Incorporating texts into ethnographic practice. In D.E. Smith (ed.), Institutional Ethnography as Practice, 65–88. Lanham, MD: Rowman & Littlefield.
Sorensen, E., & P. Triantafillou. 2009. The politics of self-governance: An introduction. In P. Triantafillou & E. Sorensen (eds), The Politics of Self-Governance, 1–22. Surrey: Ashgate.
Stivers, T. 2006. Treatment decisions: Negotiations between doctors and patients in acute care encounters. In J. Heritage & D.W. Maynard (eds), Communication in Medical Care: Interaction between Primary Care Physicians and Patients, 279–312. New York: Cambridge University Press.
Storey, J., & K. Sisson. 1993. Managing Human Resources and Industrial Relations. Milton Keynes: Open University Press.
Swanson, R.A., & E.F. Holton. 2001. Foundations of Human Resource Development. San Francisco: Berrett-Koehler.
Tamkin, P. 1996. Practical applications for personal development plans. Management Development Review 9 (7): 32–6. http://dx.doi.org/10.1108/EUM0000000004289.
Training.com.au. 2007. AQTF 2007 – A better system for everyone. Retrieved www.training.com.au/aqtf2007/.
Vincent, C.A., & A. Coulter. 2002. Patient safety: What about the patient? Quality & Safety in Health Care 11 (1): 76–80. http://dx.doi.org/10.1136/qhc.11.1.76.
Waterworth, S., & K.A. Luker. 1990. Reluctant collaborators: Do patients want to be involved in decisions concerning care? Journal of Advanced Nursing 15 (8): 971–6. http://dx.doi.org/10.1111/j.1365-2648.1990.tb01953.x.
Wennberg, J., A. O’Connor, E. Collins & J. Weinstein. 2007. Extending the P4P Agenda, Part 1: How Medicare can improve patient decision making and reduce unnecessary care. Health Affairs 26 (6): 1564–74. http://dx.doi.org/10.1377/hlthaff.26.6.1564.
Wilson, A., & E. Pence. 2006. U.S. legal system interventions in the lives of battered women. In D.E. Smith (ed.), Institutional Ethnography as Practice, 199–225. Lanham, MD: Rowman & Littlefield.
Ziebland, S., J. Evans & A. McPherson. 2006. The choice is yours? How women with ovarian cancer make sense of treatment choices. Patient Education and Counseling 62 (3): 361–7. http://dx.doi.org/10.1016/j.pec.2006.06.014.

9 Knowledge That Counts: Points Systems and the Governance of Danish Universities

Susan Wright

The term “governance” as applied to universities has more than one meaning. It was once widely used from the fourteenth to sixteenth centuries in England to mean the way an institution like a university was run, how a landed estate or even a whole country was kept in good order, and how an individual conducted business by maintaining “wise self-command” (Oxford English Dictionary 1989: vol. 7, 710). In almost all contexts – except universities – these meanings had fallen into desuetude by the eighteenth century, only suddenly to burst back into use in the 1990s. Their decline coincided with governing becoming the specialized role of a “government,” which, through the machinery of a centralized bureaucracy, managed the population and economy of a nation-state. The resurgence of “governance” in the 1990s heralded a change in the political order, when “‘government’ … becomes less identified with ‘the’ government – national government – and more wide ranging. ‘Governance’ becomes a more relevant concept to refer to some forms of administrative or regulatory capacities” (Giddens 1998: 32–3). There were three main characteristics of this shift from government to governance in the 1990s. First, instead of the bureaucratic management of a society, governments increasingly accomplished the maintenance of order and the delivery of services through networks of agencies and actors operating on global, national, and local scales and including transnational agencies, international corporations, state and public institutions, arms-length agencies, and civil society organizations (CSOs) (Rhodes 1997). Governments were to encourage enterprise and competition by contracting out service delivery to such networks of partners (known in the United States as alternative service delivery [ASD]) (e.g., Osborne & Graeber 1992). Second, what had to be governed were no longer clear

organizational structures but this network of often obscure linkages. Contracting organizations were free to manage their own production processes or enter into subcontracts with others. Government tried to maintain control through technocratic measures such as setting performance targets and key performance indicators, conducting audits, checking contract compliance, and basing payment on the number and quality of outputs (Dean 1999). Often these technocratic measures acted, in Foucault’s terms, as “political technologies” (Dreyfus & Rabinow 1982: 196) in that the political and ideological aims of government were not made explicit but were embedded in the detailed operations of these apparently politically neutral and purely administrative systems. Third, this system of governing relied on individuals’ freely exercising their own agency; but they were to learn from pedagogies embedded in political technologies and were to exercise their freedom in ways that achieved the government’s vision of order and contributed to the international success of the competition state (Rose 1989; Pedersen 2011).

This new meaning of governance echoed the old in that it spanned the three scales of the self-management of individuals, the running of institutions, and the ordering of a country, now part of a reconceptualized space of global competition. But between the old and the new meanings of governance, there was an important shift in who had the power to define “good governance.” It was no longer up to people or institutions to maintain their own “wise self-command” in a bottom-up fashion. Now “good governance” was defined “top-down” and was achieved when the government’s ideas of the proper order of the country were enacted in the management of organizations and the conduct of individuals. The apotheosis of this art of government was to find a single technical measure that would operate on all three scales at once and that would simultaneously order the competitive state, the enterprising organization, and the “responsibilized” individual according to the government’s ideological and political vision.

This chapter will focus on the university, one of the only institutions that has kept alive the original idea of governance when it otherwise fell into disuse.1 In that original sense, governance refers to the array of ways that a university orders its own affairs by managing its relations with the state, maintaining its own internal organization, and instilling certain values and expectations of individual conduct. Now this meaning of governance is overlain by the resurgent meaning, in
which it is government that defines the contribution of universities to the competitive state, the ways that the institution should be organized and managed, and the appropriate behaviour for “responsible” academics and students to adopt. As will be discussed in this chapter, the Danish government’s reforms of universities are a good example of the introduction of this top-down form of governance. In particular, the Danish government’s system for allocating a scale of points for different kinds of research publications was a political technology that aimed to bring the ordering of the sector as a whole, individual institutions, and academic staff into alignment. The government used the points system to establish competition for funding between universities, which was considered a necessary prerequisite for them to perform well on the world stage; it made clear to newly appointed strategic leaders the priorities to set for their organization; and every individual quickly learned what was expected of them to maximize “what counts.” In short, the points system was an attempt, through a single mechanism, to set up an institutional circuit that took governance from the world stage to the self-management of the individual on the front line and back again. Systems of governance do not always work as designed. The chapter will start by setting out the two strands of thinking that informed the university reforms in Denmark. One strand was the reform of the public sector to create a competition state, and the other strand refocused the work of universities on what the government deemed necessary for Denmark to succeed in a global knowledge economy and maintain its position as one of the richest countries in the world. In both strands of the reforms, performance indicators, such as the points system, became an important mechanism of university governance. The second section summarizes the long process of designing the points system for the government to use in funding algorithms for the sector and for university leaders to use as a tool of management. The third section is based on fieldwork in a faculty that had long used such points systems. Academics had internalized the system’s priorities, but had also internalized conflicts between their own motivation and the system’s incentives, with resultant high levels of stress. The fourth section, based on fieldwork in another faculty where the points system was a new phenomenon, explores the ways that academics used different combinations of pragmatic accommodation and principled resistance to the system’s imperatives, until finally it was withdrawn.2
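To make the mechanics of such an institutional circuit concrete, the sketch below illustrates, in Python, how a publication points system of this general kind can tie the three scales together: individual outputs are scored against an authorized list of publication channels, summed into institutional totals, and used to divide a fixed funding pool between universities. It is a minimal illustrative sketch only; the point values, publication categories, and university names are hypothetical placeholders and are not the figures used in the Danish or Norwegian models discussed in this chapter.

# Hypothetical sketch of a publication points system. All values are
# illustrative; they are NOT the actual Danish (BFI) or Norwegian figures.
from dataclasses import dataclass

# Hypothetical points per (publication type, level); level 2 = top-ranked channel.
POINTS = {
    ("journal_article", 1): 1.0,
    ("journal_article", 2): 3.0,
    ("book_chapter", 1): 0.5,
    ("book_chapter", 2): 2.0,
    ("monograph", 1): 5.0,
    ("monograph", 2): 8.0,
}

@dataclass
class Publication:
    author: str        # individual academic on the front line
    university: str    # institution the output is credited to
    pub_type: str      # "journal_article", "book_chapter", or "monograph"
    level: int         # 1 or 2, read off the authorized channel list

    def points(self) -> float:
        return POINTS[(self.pub_type, self.level)]

def institutional_totals(pubs):
    """Aggregate individual outputs into institutional point totals."""
    totals = {}
    for p in pubs:
        totals[p.university] = totals.get(p.university, 0.0) + p.points()
    return totals

def allocate_funding(pool, totals):
    """Share a fixed funding pool between universities in proportion to points."""
    grand_total = sum(totals.values())
    return {u: pool * t / grand_total for u, t in totals.items()}

if __name__ == "__main__":
    pubs = [
        Publication("researcher_a", "University X", "journal_article", 2),
        Publication("researcher_a", "University X", "book_chapter", 1),
        Publication("researcher_b", "University Y", "journal_article", 1),
        Publication("researcher_b", "University Y", "monograph", 1),
    ]
    totals = institutional_totals(pubs)
    print(totals)                               # {'University X': 3.5, 'University Y': 6.0}
    print(allocate_funding(1_000_000, totals))  # pool divided in proportion to points

Run on the sample data, the same tally simultaneously represents an individual researcher’s “what counts,” the institution’s competitive position, and its share of the pool – the alignment of scales that the following sections examine in the Danish case.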

Governance and the Global Knowledge Economy

A major reform of university governance in Denmark started with a University Law in 2003. This law was in keeping with the wider reform of the public sector that the finance ministry had been developing since the 1980s (Wright & Ørberg 2008). Under the approach called “aim and frame steering” (mål- og rammestyring), ministers were no longer to run the bureaucratic delivery of services. Instead, they were to focus on formulating the political goals for their sector and the legal and budget framework through which they were to be realized. The delivery of these services and the achievement of the political goals were then contracted out to agencies. In a process Pollitt et al. (2001) call “agentification,” parts of the bureaucracy and other state-run organizations – for example, universities – were turned into such agencies, with the legal status of a person and the power to engage in contracts with the ministry. The ministry steered these agencies by writing clear performance goals into the contracts along with numerical and quality measures for their achievement. For example, the ministry’s contracts with universities contain long lists of the numbers and percentage rise in outputs of graduates and PhDs, publications, externally funded projects, and so on to be achieved within a defined period. The state auditor annually checks the universities’ reports about the fulfilment of these contracted targets.

Output and performance measures have also become more important in the allocation of state funding, on which the universities are predominantly reliant. Payments for teaching had already, since 1994, been based entirely on the numbers of students who passed their exams each year. Following the 2003 law, the ministry worked on defining and weighting the criteria for increasingly basing other elements of their funding on outputs and for allocating this funding competitively between the universities. As will be shown below, a points system based on the number of publications and proxies for their “quality” became a key mechanism for shifting towards output and performance payments in the government’s new way of steering the university as one of its public sector “service providers.”

While these changes to the steering of universities were clearly part of a reform of the whole public sector, the minister for research also tied them closely into a strategy for Denmark’s future economic success. Denmark had been an avid participant in the work of the Organisation for Economic Co-operation and Development (OECD), which through the 1990s promoted the idea that the future lay in a global economy
operating on a new resource – “knowledge.” This idea was taken up by other transnational organizations like the European Union (EU), the World Economic Forum (WEF), and the World Bank (WB). They argued that a future global knowledge economy was both inevitable and fast approaching. Each country’s economic survival, they maintained, lay in its ability to generate a highly skilled workforce capable of developing new knowledge and transferring it quickly into innovative products and new ways of organizing production. The OECD, in particular, developed policy guidance for its members (the 30 richest countries in the world) to make the reforms deemed necessary to survive this global competition. It measured and ranked their performance and galvanized national ministers into an emotionally charged competition for success and avoidance of the ignominy of failure. Universities were thrust onto centre stage in this vision of the future. They were to “drive” their country’s efforts to succeed in the global knowledge economy. As well as aiming to attract the “brightest brains” through the fast-growing and lucrative international trade in students, many governments set a target for 50 per cent of school leavers to gain higher education, and they sought to reform education so that students acquired not only high-level cognitive skills, but also the “transferable” skills thought necessary for employment in a global knowledge economy. Policy-makers widely adopted the idea that university research should shift from Mode 1 (motivated by disciplinary agendas) to Mode 2 (motivated by social need) (Gibbons et al. 1994). In a bowdlerized version of this argument, the Danish government’s catchphrase for their university reform was “From idea to invoice,” arguing that academics should develop closer relations with industry and focus on results that would lead to innovations. The OECD developed checklists, tool kits, guidance notes, and best-practice documents to help governments go about reforming universities. These included changing the management of universities to make them capable both of entering into partnerships with industry and the state and of delivering the performance these partners expected. The Danish University Law of 2003 brought the agendas for both the competition state and the global knowledge economy to bear on university management. Previously, students and academic, administrative, and technical staff had elected the leaders and decision-making bodies at every level of the organization; now all were abolished, apart from elected study boards, which continued to be responsible for the design, running, and quality of education programs. They were replaced by a

governing board, the majority of whose members were appointed from outside the university. The board appointed the rector, like a CEO of a company. He or she appointed deans, who appointed heads of department. In what was called “unified management” (enstrenget ledelse), each leader was accountable to and had an obligation of loyalty towards the superior who had appointed him or her and was no longer, as in the previous structure, primarily accountable to the people he or she led. Although a later amendment required the “unified management” to involve employees in decisions, the faculty and departmental boards and their rights and powers, which had involved members of the university in decision-making, had been abolished. For the first time, the rector now spoke “on behalf of” or even “as” the university, as a coherent and centrally managed organization (Ørberg 2007). This was a clear break from the idea of the university as a community of academics, administrators, and students. By changing the legal status, state steering, financing, and management of universities, the minister claimed he was “setting universities free”; he was making them into agencies with the power to enter contracts with the state, industry, and other organizations, and he was also giving the new leaders “freedom to manage” – it was up to them how they ran “their” organization as long as they delivered on contracts. With the rector as the head of a strongly line-managed and coherent organization, empowered to decide on the strategic use of the university’s funding and acting as an interlocutor with the ministry, politicians, and industry, the minister claimed that government could restore its trust in universities. When, shortly afterwards, the minister initiated mergers between universities and with government research institutes, he felt that at least three Danish universities were now capable of appearing within the top 10 in Europe measured by one of the world-ranking tables (Kofoed & Larsen 2010). In his view, universities now had the kind of organization needed to drive Denmark’s efforts to succeed in the global knowledge economy and to that end could be trusted with increased government funding. A Globalization Council was established by the prime minister and produced a strategy that argued that Denmark’s continuing status as one of the world’s wealthiest countries depended largely on the performance of its universities (Government of Denmark 2006). To achieve this, a “globalization pool” during the years 2010–2012 substantially increased university budgets. In the government’s view, to incentivize Danish universities to become “Global Top Level Universities,” this funding had to be allocated

competitively and on the basis of “quality indicators” (ibid.: 22). Right from the start, academics were worried that the indicators would be used not just to establish competition within the sector, but as tools for internal management, to allocate funding between faculties and departments and to incentivize the behaviour and even hire and fire individual staff (Emmeche 2009b). The ministry’s steering group stated explicitly that the “quality indicators” were expected to have an effect on the behaviour of individual researchers, motivating them to publish their research in the most prestigious “publication channels” that could be used to compare research quality internationally (FI 2007, 2009b). In the ministry’s task of devising the output indicators and the formula for the competitive funding system, the agendas of the public sector reforms and the preparation for the global knowledge economy came together. By choosing indicators that counted in the world rankings, restructured the sector competitively, and made clear to each individual what counts, it seemed they had found a mechanism that brought these three elements of governance into alignment. Devising a System for Competitive Allocation of Funding The process of devising indicators that would mobilize the whole university sector, the internal organization of each institution, and each individual academic and would improve Denmark’s standing in the global university rankings is presented diagrammatically in Figure 9.1. In autumn 2006, the ministry started to look for “quality” indicators for teaching, knowledge transfer (videnspredning), and research on which to allocate funding competitively between universities. In negotiation with Universities Denmark (an organization representing the rectors and board chairs of Denmark’s eight major universities), it was decided that, for teaching, the existing calculation of outputs – the number of students who passed their year’s exams – could also be used as a measure of “quality.” This was doubted by some academics who had argued repeatedly that a system that rewarded faster throughput of students with fewer dropouts and fewer failures might improve “value for money” but might also, perversely, incentivize the lowering of standards. The government rejected this argument, claiming it could rely on academics’ professionalism to maintain standards.3 Paradoxically, the government designed indicators to change academics’ behaviour, but also depended on academics’ resisting these incentives. The ministry set up working groups to devise new quality indicators for outputs

Figure 9.1. Institutional Circuitry: The Points System from Individual Performance to World Rankings

in knowledge transfer (videnspredning) and research. The knowledge transfer working party produced a report that was criticized for poorly defining activities, which ranged from industrial innovation to enhancing public debate and democracy. Eventually, knowledge transfer was dropped as an indicator. The working party charged with devising an indicator for research quality began reviewing available European models. They rejected the United Kingdom’s Research Assessment Exercise, based on peer review panels, as too costly in staff time. The Leuven model combined a number of indicators: PhD completions, external funding, and citation rates for publications. Research commissioned by the humanities faculties of Danish universities showed that measures based on commercially produced citation indexes were inappropriate for the humanities, as humanities faculty published very little in the international journals covered by those firms (Faurbæk 2007).4 It was agreed that there should be one measure for all disciplines. Therefore, the working party adapted the Norwegian model (Schneider 2009), which allocated differential points to journal articles, chapters in edited volumes, and monographs, depending on whether they were “top level” or not and peer reviewed or not. In this model, “quality” is not assessed directly but relies on the journal or publisher’s peer-reviewing and “international” status (defined as in an international language and with under twothirds of contributors from the same country). The Australian system of auditing and ranking universities called Excellence Research for Australia (ERA) entailed similar ranked lists of journals until the minister cancelled them at the last minute. He said this was because university managers were using the lists in an “ill-informed and undesirable way” to set publications in top-ranked journals as targets for academics (Carr 2011). In contrast, the Danish government’s aim was for managers and academics to treat measures as targets. The Danish model required all academics to enter their publications into a national database each year and points would be allocated to each publication according to an authorized list of which journals and publishers were “level 1” or “level 2.” Level 2 journals were defined as the leading international journals that published the top 20 per cent of the “world production” of articles in a field. To create this authorized list, in late 2007 the ministry, with the agreement of Universities Denmark, set up 68 disciplinary groups involving 360 academics. They delivered their lists to the ministry in March 2009. The ministry found that the same journal could appear on two lists at different levels – presumably

because it was central to one discipline but more peripheral to another. When the ministry published its consolidated list on its website, immediately 58 of the 68 chairs signed a petition saying it was not an appropriate tool for distributing funding and asking the ministry to remove the list from its website (FORSKERforum 2009a; Richter & Villesen 2009). One disciplinary group found 89 of the journals they had put in the “lower level” had been upgraded to “top level,” while 30 of their most important journals had been downgraded (Richter & Villesen 2009). In another disciplinary group, seven coffee table magazines suddenly appeared in the “top level.” No Danish journals or Danish publishers appeared as “top level,” disadvantaging subjects such as Danish language, literature, history, and law (Larsen et al. 2009). Overall, 1 per cent of all the journals academics had selected as important had disappeared (ibid. 2009). The press confronted the minister, who admitted, “It’s not as easy as one may think to make a ranking list of 20,000 journals,” and the list disappeared from the ministry’s website (Richter 2009; FORSKERForum 2009b). The discipline groups were asked to rework their lists, but this time each journal was allocated to a specific discipline to avoid overlaps. They delivered their lists again in September 2009, but 32 of the disciplinary group chairs signed a statement to the effect that they could not vouch for this indicator and their advice was not to use it for funding allocation (Emmeche 2009b: 2). The disciplinary groups had worked for two years and still listed only journals; there were no lists of all the publishing houses for monographs and edited volumes relevant to each discipline, let alone decisions about which of them were “level 1” and “level 2.” The ministry therefore published the ideal version of the points system alongside a “temporary” one. By default, the temporary list seems to have become permanent. Notably, it downgraded the points for monographs and edited volumes, which are the publication outlets used predominantly by the humanities (see Table 9.1). Now that the ministry had its lists and could calculate the research points for each university in each year, it had to decide what weight to give these points in the funding allocation model (see Table 9.2). An allocation model had already been developed in the late 1990s, based on 50 per cent for teaching, 40 per cent for external funding, and 10 per cent for PhD completions, but this was used only to distribute marginal amounts in an ad hoc fashion (Schneider & Aagaard 2012: 195). Now the ministry proposed that research points should be given a 50 per cent weighting, teaching 30 per cent, and knowledge transfer 20 per cent, but Universities Denmark rejected this. In 2009 Universities Denmark

Table 9.1. Danish Publications Points System

Form of publication                         Low level    Top level    “Temporary”
Scientific monograph                        5 points     8 points     All: 6 points
Article in scientific journal               1 point      3 points     Unchanged
Article in edited volume with ISSN number   1 point      3 points     All: 0.75 points
Article in edited volume                    0.5 point    2 points     All: 0.75 points

Source: FI 2009b.
Notes: In addition, a PhD thesis initially earned 2 points, a “habilitation” or professorial thesis 5 points, and a patent 1 point. Later, PhD theses were removed from the points system to avoid their counting twice, as ‘completed PhDs’ was already a category used for the distribution of the block grant; see Table 9.2.
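
As a rough illustration of the mechanics, the following sketch tallies a department’s annual publication points under the “temporary” values in Table 9.1. It is a minimal sketch only: the record format, the helper function, and the publications themselves are invented for illustration and do not reproduce the ministry’s registration system.

```python
# Point values taken from the "temporary" column of Table 9.1 (FI 2009b):
# monographs all earn 6, journal articles keep the 1 / 3 split by level,
# and edited-volume articles all earn 0.75. Everything else here is
# hypothetical, made up for illustration.
TEMPORARY_POINTS = {
    "monograph": 6.0,
    "journal_article_level_1": 1.0,
    "journal_article_level_2": 3.0,
    "edited_volume_article": 0.75,
}

def tally_points(publications):
    """Sum the points for a list of (title, form) records."""
    return sum(TEMPORARY_POINTS[form] for _title, form in publications)

# A hypothetical department's registered output for one year.
department_output = [
    ("A monograph on steering",     "monograph"),
    ("Level 2 journal article",     "journal_article_level_2"),
    ("Level 1 journal article",     "journal_article_level_1"),
    ("Chapter in an edited volume", "edited_volume_article"),
]

print(tally_points(department_output))  # 6 + 3 + 1 + 0.75 = 10.75 points
```

Summed over all publications registered in the national database for a year, a tally of this kind is what fed, through the weights in Table 9.2, into the competitive allocation of funding.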

Table 9.2. Weighting of Indicators in the Formula for Competitive Allocation of Basic Grant (per cent)

Year   Teaching   Externally funded research   Research publication points   Completed PhDs
2010   45         35                           10                            10
2011   45         30                           15                            10
2012   45         20                           25                            10

Source: FI 2009a.
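
To make the arithmetic of the allocation concrete, the sketch below applies the 2012 weights from Table 9.2 to two invented universities. The normalization step, treating each university’s score on an indicator as its share of the sector total, is an assumption made purely for illustration; the universities, figures, and pool size are likewise hypothetical and do not reproduce the ministry’s actual calculation.

```python
# Illustrative sketch only: combine four indicator scores using the 2012
# weights from Table 9.2 (teaching 45, external funding 20, publication
# points 25, completed PhDs 10). The share-of-sector-total normalization
# and all names and numbers are assumptions for illustration.
WEIGHTS_2012 = {"teaching": 0.45, "external_funding": 0.20,
                "publication_points": 0.25, "phds": 0.10}

# Hypothetical indicator scores per university.
scores = {
    "University A": {"teaching": 900, "external_funding": 400,
                     "publication_points": 1200, "phds": 150},
    "University B": {"teaching": 600, "external_funding": 300,
                     "publication_points": 800, "phds": 100},
}

# Sector totals for each indicator.
totals = {k: sum(u[k] for u in scores.values()) for k in WEIGHTS_2012}

def pool_share(university):
    """Weighted sum of the university's shares of each sector total."""
    return sum(WEIGHTS_2012[k] * scores[university][k] / totals[k]
               for k in WEIGHTS_2012)

pool = 100.0  # hypothetical competitive pool, in million kroner
for name in scores:
    print(f"{name}: {pool * pool_share(name):.1f} million kroner")
```

Because the pool is fixed, a university gains only by outperforming the others; sector-wide increases in publication points simply dilute the kroner value of each point, the treadmill dynamic discussed below.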

finally suggested (echoing the Leuven model) that the indicators should be teaching 45 per cent, PhD completions 10 per cent, and research 45 per cent. But they argued that research should be subdivided into 35 per cent for funding from external sources (e.g., contracts with industry or grants from the research council) and the research publication points should be given only a 10 per cent weighting, although this figure would increase gradually to 25 per cent. The government agreed to this proposal (FI 2009a). The final stage in setting up this system depended on gaining the agreement of enough political parties to give the proposal a majority in Parliament. Universities Denmark finally backed the minister’s “authorized list” and competitive funding formula, even though there was still disquiet among members of the disciplinary groups. The spokesperson for the Radical Liberals, who had been holding out against this process, took Universities Denmark’s approval to mean “the universities” approved, seriously misunderstanding that Universities Denmark was the voice of

the rectors and that, under the 2003 University Law, the university no longer had mechanisms for speaking collegially (Emmeche 2009a). She finally acceded on 5 November 2009, just in time for the system to be implemented in the Finance Law from January 2010 (FI 2009a). The new competitive funding formula would be applied not to the universities’ existing block grants, but only to additional funding, called “the globalisation pool.”5 The political parties agreed on a text (Ministry of Science, Technology and Development 2009), which explained that they would increase funding for research and development by 10,000 million kroner over three years, so that public funding of research would meet the Bologna Process target of 1 per cent of GNP. Of this extra funding, 67 per cent was allocated to special initiatives such as upgrading laboratories (1,000 million kroner each year), Danish participation in international innovation partnerships (30–90 million kroner each year) or collaboration with the private sector (130–90 million kroner each year), and around 200 million kroner per year was used to increase the teaching output payment per student passing exams in the humanities and social sciences. Of the extra funding, 32 per cent was allocated to research. But of this, about a third was allocated to “strategic research” and earmarked for the government’s priority research areas (e.g., bio-products and food research received 50–70 million kroner each year). A further third was allocated to special programs in “free research” (Research Council competitive grants that are responsive to researchers’ initiatives but for which demand far outstrips supply). This meant that, as shown in Table 9.3, the globalization pool increased the universities’ annual basic grant by very little – an increase of 7.8 per cent from 2009 to 2010 and by much smaller amounts in the following

Table 9.3. Universities’ Research Block Grant (Basisbevilling), 2006–2012 (in billion kroner)

Year   Total block grant    Increase on       Of which, competitive allocation
       for research         previous year     based on bibliometric points
2006   6.2
2007   6.5
2008   6.9
2009   7.5
2010   7.7                  7.8%              0.300 (3.9%)
2011   8.0                  3.7%              0.570 (7.1%)
2012   8.1                  1.25%             0.720 (8.9%)

Sources: For 2006–2009, 2012 Budget Law. For 2010–2012, Sivertsen & Schneider (2012: Table 2.5).

years. Initially only 3.9 per cent of the basic grant was allocated between the universities on the basis of the points system to which so much administrative and academic effort had been devoted over the previous three years, although that had risen to 8.9 per cent by 2012. Even more important, an evaluation of the points system in 2012 revealed that the redistribution effect of the points system, compared with the previous method of allocating the basic grant, was only 1.6 per cent. That is, it was responsible for about 11.5 million kroner out of 720 million kroner in 2012, the year when it was most significant (Sivertsen & Schneider 2012: 23). It clearly takes a very small financial incentive to establish a competitive ethos between universities. For some universities, which historically received comparatively little basic funding from the government, this new source of funding from research publications could be an important additional income. But as other universities followed suit, and all increased their research output, they would find themselves competing over a finite pool in a zero-sum game. As each university increased their research points, the value of each point would decline, yet they would have to keep up the pace of the treadmill, ever increasing their research output and their points score so as to maintain their position relative to the other universities and their share of the competitive funding. True to the new system of “Aim and Frame” steering, the minister used his contracts with university leaders to commit them to use output and “quality” indicators to create a competitive ethos throughout their organization. In its contract with the minister for the period 2006–2008, the university on which this chapter is focused committed itself to developing internal systems for allocating research funding according to “international quality criteria” in 2007 and to distribute up to 10 per cent of its budget between faculties on the basis of these criteria in 2008. The rector’s contracts with faculty deans further outsourced this commitment to allocate funding competitively between departments. For example, the humanities faculty contract obliged the dean to allocate 10 per cent of funding between departments on “quality” criteria in 2008 and the faculty’s research committee learned that if they did not develop a method for allocating funding based on research quality by spring 2007, the rector would withhold 6.3 million kroner from the faculty’s budget (Humanities Faculty Secretariat 2006). The research points system exemplifies the art of devising a mechanism that not only renders down a whole complex of activities to one

measure, but also aims to work across several scales at once. First, the research points system intends to organize the whole sector into a competition for “world class” status by emphasizing publication in the “top” journals. But the Danish “level 2” journals are not coincident with those counted in world rankings. The Times Higher Education (THE) “World University Rankings” uses only the 12,000 academic journals indexed by Thomson Reuters’s Web of Science database and in any case gives a total weighting of only 36 per cent to publication performance.6 The Shanghai Jiao Tong “Academic Ranking of World Universities” gives a 90 per cent weighting to publications but counts only those in science subjects, as the citation indexes for the humanities and social sciences are so inaccurate (see Figure 9.1). Second, the points system induces managers to change their university’s internal organization and funding allocations to incentivize faculties and departments to prioritize this kind of research. The points system is emerging, above all, as an instrument for management control. Third, it makes everyone aware of “what counts” and how to adjust their behaviour if they are not only to promote their own careers, but also to do their best for their department’s economy (and hence their own conditions of work). In the process, the points system imposes an external definition of value, undermining (or trying to undermine) academics’ ability to exercise their own judgment about how and on what range of activities to devote their time, energy, and commitment – which arguably is the basis of their training and the source of their professionalism. While such systems are often coercive (Shore & Wright 2000), they are not determinant. What was the response within universities? I will explore in turn how leaders and academics reacted in the faculties of life sciences and humanities. These faculties were located very differently in the knowledge economy and in the quest for “world class” status. The research points system, as part of the new Aim and Frame form of governance, outsourced responsibility for achieving the government’s political aims for the sector to managers and academics and tried to direct their agency within tight boundaries. The next sections will explore how academics contested and resisted, supported and endorsed, or “misrecognized” the meaning of mechanisms like the research points system and the ways this system worked together with other elements of Aim and Frame steering (Wright 2005; Wright & Ørberg 2009). In other words, through their own strategies and search for “room for manoeuvre,” each of our interviewees is seen as being actively involved – not all with equal access to institutional power – in

the continual process of influencing their situation and the system of which they were a part. Life Sciences: Performance and Expansion The Life Sciences Faculty covered subjects that the Danish government deemed central to the global knowledge economy and to its efforts to reposition university research in a globally competitive, commercialized field. Part of what Shorett, Rabinow & Billings (2003: 123) call “a new ecology” of science, public interest, and market, the life sciences have been restructured to form international complexes of venture capital, biotechnology firms, commercial research labs, leading university departments, and, to a lesser extent, social action and user groups (Latour 1998).7 The life sciences had long been one of the government’s priority areas and had benefited from special funding for at least a decade. In 1999 and 2009, the Strategic Research Council reserved substantial funds for research in this field and, as mentioned above, the Globalization Pool contained a special allocation for food sciences (Ministry of Science, Technology and Development 2009). The government’s points system accorded well with the life sciences’ existing pattern of publication in high-prestige journals, and indeed the faculty had operated a similar research points system for many years. In the late 1990s, the faculty had introduced a system of performance indicators to steer its biennial budget. These indicators included the number of peer-reviewed publications, how often they were cited, and the impact factor of the journal they were published in, measured over a six-year period. They also included the amount of external funding; for the faculty as a whole, this total had risen from 200 million kroner in 1999 to 386 million kroner in 2007. These established systems for assembling “performance” data were very close to the government’s points system and its formula for the competitive allocation of funding between universities. In the Life Sciences Faculty, the government’s points system slid into place in the steering arrangement with barely a murmur. The Life Sciences Faculty had welcomed the 2003 University Law, especially the appointment of strategic leaders. The life sciences department we studied had a strategy of continual expansion and, as we shall see, did not have the same wariness about political hostility or foreboding about cuts that marked the Humanities Faculty. It was at the forefront of developments that mirrored the government’s image of the

future university. But this department was also marked by very high levels of stress. Among many accounts of stress that we heard in interviews, members of the department were especially shocked by three cases (concerning both academic staff [VIPs] and technical and administrative personnel [TAPs]) in recent years. One person had collapsed, lifeless, in the corridor at work, resembling the descriptions of karoshi in Japan. Two others had experienced a similar collapse at home and described how they suddenly could not function at all – they could not read, mark exam papers, or write reports. They had been on sick leave for several months and their experiences prompted colleagues to reflect on the consequences of high levels of pressure at work. The department had participated in a trial analysis of their work environment (APV) to identify the causes of stress, had developed an action plan, and had made stress a subject of open discussion. But cases still were occurring and causing alarm. In this department, with its mature experience of the kind of “performance culture” that the government was trying to instil in universities, there were three main themes that ran through interviewees’ accounts of their attempts to fathom why the steering arrangement was causing so much stress and what to do about it. The first theme was continual expansion. Even though the department had always scored best on the performance indicators and gained a good share of the faculty’s basic grant (basisbevilling), this source of funding amounted to only about 30 per cent of the departmental income. Nor (in sharp contrast to the Humanities Faculty) was income from teaching substantial or important. Most of the department’s income came from external funding. Their leader described how he was continually contacted and asked to join research collaborations with Danish and international industry and in large EU projects. The department had recently successfully applied for a multi-million kroner grant from a Danish foundation. In this faculty, departmental leaders received the overheads on these externally funded projects and received their basic funding as a block grant. Departments were empowered to decide how to use their combined income to cover salaries and running costs as well as their own, locally determined developments (e.g., new posts or new courses). In addition to employing the TAPs and PhD students necessary to carry out the research, this department used its external project funding to establish further professorships and lectureships, on the grounds that the people appointed would soon bring in more than enough external funding to cover their own salaries. There was

a “gung-ho” attitude to continual expansion, putting the space, the research equipment, and the 50 TAPs crucial to the conduct of experiments and trials under considerable strain. Laboratory staff had computer charts with elaborate plans to fit every test for every project into 10-minute slots in the laboratories over the next six months. Two TAPs had broken down from stress when VIPs were cross that their projects did not have enough time or space. The TAPs and their facilities were running at full capacity all the time, and they described these signs of lack of appreciation or respect as “the final straw” that caused a breakdown. Both VIPs and TAPs felt insecure in their employment.8 The department’s basic funding (basisbevilling) was used to cover the whole salaries of the department leaders and main administrators and the leaders of the five research groups into which all the 90 or so VIPs were organized. A small number of assistant and associate professors and some TAPs were also funded from the basic funding. But the salaries of most VIPs and TAPs were either entirely covered by externally funded projects or made up of a mix of basic and external funding. Even those who were on permanent contracts explained that they felt very insecure because they did not know if they were supported by what they called “finance ministry” money (basisbevilling) or external project funding, or “if they [the leaders] will be successful with the next application” and, even so, “if there will be work in it for me.” The department’s strategy of continuous expansion made VIPs and TAPs dependent on the leaders, with little power to control their own futures. Equally, it put leading VIPs under pressure to keep up a continual flow of externally funded projects if their research group and TAPs were to stay in employment. Each time a major new grant made a step rise in their income, resulting in the appointment of additional staff, pressure to keep future income at this expanded level was created. This unending expansion was not a tenable strategy, especially as the government’s reforms reduced the leaders’ room for managing the department’s economy. In the previous system, the faculty’s wellestablished budget planning gave departments a two-year horizon so that they could determine their own priorities and developments. The government’s new steering system was based on annual budgets, which were announced just before the start of the financial year in question. Not only had the government instituted a system of shortterm funding decisions, but much of each university’s “block grant” was ring fenced for particular government-determined expenditures.

Through these mechanisms, government micro-managed the universities and left them reacting to sudden government-induced threats to their liquidity and solvency (Wright & Ørberg 2009). When the government used the steering arrangement in this way in 2009,9 it exposed the vulnerability of strategies of continual expansion. There were firings of academic staff in the Life Sciences Faculty, among others, and intensified pressures and insecurities. The second theme was “performance.” Departmental meetings were focused on, among other things, the number of publications and the amount of money “brought home” since the last meeting. At these meetings, several people made amusing asides that they could not remember what the article was about but they could remember the name of the prestigious journal where it was published. We did not hear discussions of what the publications contributed to the research field. Not only was performance what counted, but in this department it was what mattered. Everyone we spoke to knew that creating outputs within a specified time was crucial if they hoped to build a career in the department. There had been such pressure to perform that one of the actions taken to reduce stress by the local employer-union consultative committee (LSU) was the production of a paper, which set out the average annual performance outputs expected of each permanently employed VIP. Under “research,” it lists the following points: make funding applications to maintain your own research, be the first author on one publication, contribute to other publications as co-author, give one national and one international research paper, and give one event disseminating science to the public. This had apparently done much to reduce the pressure to perform to more realistic levels. But still there was a frenetic pace of “performance.” We observed a PhD student’s public presentation of her work in progress, for example, where she was advised to write an article on her next transatlantic plane journey so as to up her rate of output and meet her completion deadline. Another PhD student expressed frustration that all that counted were these narrow measures of performance, whereas students developed themselves across many dimensions in the course of PhD work. We heard several accounts of PhD students unable or unwilling to perform like this, who had either left because of stress or, as one told us, had taken a job in the private sector because it was a more caring environment. Performance is also unremitting, and both VIPs and TAPs revealed in their accounts of their working life that they had been constantly

working at full tilt over many years. Sometimes cases of stress were attributed to a family crisis, but it seemed that these people were already working flat out and they had no spare capacity when a family illness or problem demanded their attention. When university mergers took place in 2007, and the formerly well-functioning administrative systems were replaced by ones that did not work, the university had not set aside funds to help with the transition. The departmental administrator, who was already overworking, found herself working late at night and, unable to sleep, coming in again in the early hours of the morning to try to make the finance, personnel, and other systems work, so as to keep abreast of the department’s administration. This departmental administrator, under the stress action plan (mentioned above), was the contact point for anyone exhibiting stress symptoms, or for anyone who saw a colleague with such symptoms. She was empowered to intervene immediately and, following discussion with the person concerned, change his or her work commitments to relieve the pressure and give some chance of recovery. Yet in trying to carry the additional administrative burdens arising from the merger, she herself collapsed from stress and was on sick leave for six months. Several VIPs tried to take collective initiatives to solve problems in the work environment. One research group leader along with two senior colleagues had made a concerted effort to create an anxiety-free environment in their research group. That research group was located at a distance from the rest of the department. They met in their “homely” kitchen at lunchtime and the relations between lecturers and PhD students was fairly “equal,” judging by the way they all initiated topics of conversation – although TAPs were more silent. They had a group meeting every week, and every third week the academics met in the kitchen for a work-in-progress seminar, clearly held in an established atmosphere of constructive comment. They had created opportunities to try out preliminary ideas, discuss drafts, and get supportive, critical responses, with, as they said, no finger-pointing. The research group leader acted as a buffer between the group and the wider institutional environment, but the group was integrated into the department financially and in terms of decision-making and could not insulate itself in the same way as, it will be seen, the research centre in the humanities could. The third theme, which occurred frequently in interviews, was time. There were two conceptions of time. “Performance time” was that spent on funded projects and creating outputs that counted. Time spent work-

ing with colleagues or students to discuss an idea or develop a skill that did not count directly towards an output was “invisible time” or, according to the dominant logic, “wasted time.” For example, we saw a senior PhD student work with a new PhD student to refine the design of the latter’s questionnaire. Similarly, a lecturer explained how hard she had worked to develop a student’s writing skills and a little later in the interview said that a major source of stress was not being able to work out where her time had gone. Some people put a considerable amount of “invisible time” and effort into developing colleagues’ skills and abilities that they needed to meet the performance demands, or into facilitating groups and making sure projects worked well. Others did not. One interviewee told us that when one of his PhD students was about to go down with stress, a new supervisor was found who worked with the student to solve the problems with the thesis and submit it on time. His focus was so strongly on “performance time” that this second kind of activity (that some people take on voluntarily or that is offloaded onto them by others) is not even associated with actual time – it is “invisible time.” Several interviewees made the point very strongly that individuals were responsible for their own time. The department leader explained: “If you are a lecturer, you have total research freedom … You have to do some teaching and you are supposed to do some research. So you will apply for some grants … for different projects. And of course if you are successful, and have grants for three different projects, you will be quite busy. And you maybe need to spend all your time to execute these different projects. But, you know, that’s your own problem. You have been too successful. And you didn’t include in the budget funding for a PhD student and a post doc that you could hire to do most of the work for you.” Other VIPs held strongly to the idea that they were responsible for their own time, but they spoke from the position of the person in the above account, who had brought the problem on himself or herself by taking on too much. This idea, that academics had the right to control their own time used to be a central component of the concept of academic freedom in a previous steering system. VIPs seemed to attribute this “old” meaning to a concept that, in this department, had shifted to responsibility for their own time. In the context of the way the steering arrangement worked in this department, being responsible for managing the pressure for everincreasing performance had become a form of self-exploitation and a source of stress. This department lacked a system for allocating hours

(such as was found in the Humanities Faculty), which VIPs could use to protect themselves and negotiate with leaders, that is, a system that would record not just time allocated to funded projects but also a tariff for teaching, for departmental services, and for the currently invisible work of facilitating research processes and staff development – and maybe even for free research. As Bovbjerg (2011) argued cogently, modern workers are meant to exhibit the ability and will to expand their capacities endlessly and to take on ever more challenges at the same time as they are made responsible for deciding when and how to say “no” to the pressure from their leaders for ever-increasing performance. Interviewees recorded that the department leader and department administrator responded immediately and positively if they ever said that they had too many tasks; but they also conveyed the enormous strength and courage needed to say “no” in this “can-do” environment of continuously expanding performance. This was especially the case where VIPs not only had too many projects demanding performance time, but also engaged in staff development and group facilitation and other tasks consuming invisible time. They met all their performance commitments and wrote up current projects, published in international journals, and earned their research points, but what they especially missed out on was time to write pieces reflecting on how a number of their projects over the years had contributed to the field or papers that identified gaps and shaped the future of the field, or textbooks to influence a future generation. I asked one interviewee whether she had “free research” time, not tied to work on projects, in which to do such writing, but she did not understand what I meant. One of the department’s initiatives to handle stress was to encourage employees to exercise tight control over their working hours and keep evenings and weekends work free. Two VIPs described how they used such strict control of time to manage the pressure to perform and were very efficient and productive within working hours as a result. But both also said they had “learned to limit their ambition.” The members of this department knew that what counted were externally funded projects and articles in “level 2” international journals. As modern kinds of self-managing workers, they were highly effective in producing these outputs. Yet, when the work for which people “burned” was continually displaced by more pressing project work and outputs, sometimes for months or years, the displacement of passion by performance was clearly causing stress.

Humanities: Uproar over Points Systems Whereas the creative industries are seen as key to the knowledge economy in many countries, in Denmark, where the focus is on life sciences, pharmaceuticals, information technology, and engineering, the government deemed the humanities largely irrelevant. Humanities faculties felt threatened by a history of government hostility, even though successive pieces of research demonstrated their students’ employability and success in using their education in the knowledge economy (FI 2004; Ministry of Science, Technology and Development 2005a; Hesseldahl et al. 2005; Ministry of Science, Technology and Development 2005b; Copenhagen University et al. 2008). The points system was important for the Humanities Faculty because, although they had many links with industry and civil society, these did not yield substantial external funding and their income relied heavily on the government’s output payments for teaching and “basic” funding for research.10 Aim and Frame steering through output indicators and competitive funding was completely new for the humanities faculty. Although the newly appointed faculty leadership caught the government’s impetus towards the future knowledge economy and decided that it and the points system were inevitable, many academics saw it as another threat to their academic work and values, and some sought to resist it on principle. When the government first announced that it would develop a research points system, the faculty leadership took a pragmatic approach. They joined with all the other university humanities faculties in Denmark to document current publication patterns (Faurbæk 2007) and to lobby for monographs and edited volumes to be given points comparable to journal articles (see above). The prodean in a public presentation explained how she spent over a year lobbying the ministry to include monographs in the point system, as otherwise the results would have been “very worrisome” for the humanities. The faculty leaders’ contract with the rector committed them to develop a competitive system for allocating funding internally, and they took the opportunity to create one suited to the humanities and used it to influence the university leadership’s and the ministry’s criteria and funding formula (Humanities Faculty Secretariat 2007a). The Humanities Faculty leaders set up two committees to develop points systems for the quality and output of research and knowledge spreading, mirroring those initially established by the ministry. Even when the ministry’s committee

for knowledge spreading collapsed in disarray, the faculty continued with theirs on the grounds that knowledge dissemination was an especially important task for the humanities. These faculty committees produced two points systems that interlocked to form a continuous scale. At the top end, 90 points were given for a habilitation thesis, and the scale descended through peer-reviewed monographs, journal articles, non-peer-reviewed anthologies, school textbooks, dictionaries, translations, computer games, theatre productions, the holding of conferences, museum exhibitions, newspaper feature articles, consultancies, courses for firms, public lectures, and debates to interviews with journalists, which earned one-third of a point at the other end. Listing 59 items in 17 categories, this combined points scale carefully tried to capture and calibrate the range of academic activity in the humanities. The faculty leaders worked hard for two years to try to shape the government’s, the university’s, and the faculty’s steering arrangements in ways that protected, or at least did not further damage, the political standing and funding of the humanities. When the faculty leaders unveiled their points system for research quality and circulated it for consultation within the faculty, the uproar that ensued revealed the appointed leaders’ lack of communication channels with academics. They were accountable to the leaders above them and, as one said, were motivated by the desire to “perform.” According to committee minutes, the prodean relied on the appointed department leaders’ communicating with their “hinterlands” about the points system (Humanities Faculty Secretariat 2006). But department leaders were appointed to manage academics, not to represent them, and, after the abolition of department boards, had few means to do so. Many academics had heard nothing about their leaders’ lobbying work in the university and at the ministry. Many were surprised, according to the minutes of an open faculty meeting, when the prodean explained that the points system was a response to a political demand. The government’s Globalization Council had determined to allocate extra funding to universities on a competitive basis, decided by research quality. This had been translated into the ministry’s contract with the university, committing the latter to allocate 10 per cent of its basic grant internally on the basis of a quality measurement system. The rector’s contract with the dean now stated that, if the faculty did not develop a system for measuring research quality, the rector would withhold 6.3 million kroner from the faculty’s budget (Humanities Faculty Secretariat 2006, 2007c).

This explanation assuaged some, but the uproar still did not die down. There was a widespread feeling among academics that the faculty leadership should have resisted this form of steering. They rejected the leadership’s view that the points system was inevitable and argued that they should have stood out against the government’s demand on principle; as one of them said, “The ministry is not God. And even God was negotiated with by Abraham” (author’s translation). In January 2007, 128 people from the Humanities Faculty signed a petition against the introduction of the points system on the grounds that it reduced a complex and diverse research area to arbitrary measures of the number of outputs, not their quality, and it rewarded speculation in publication strategies rather than more, let alone better, research: “It doesn’t measure anything, but just generates numbers that look like measurements” (Petition to the University’s Governing Board 2007; author’s translation). The question of where to send the petition brought into sharp relief the fact that the new, appointed leaders owed commitment, loyalty, and accountability to those above them, not to those below them (Forskningsfrihed? 2007). In the end, the petition was addressed to the university’s governing board, which under the 2003 University Law was the university’s highest authority and therefore presumably was the body legally responsible for “safeguarding the university’s research freedom and ethics” (Folketinget 2003: Clause 2, part 2; author’s translation). The introduction of the research points system was one of many changes happening at that time. A month after the uproar over the research points system, the faculty’s trade union representatives collectively sent an open letter to the dean, and there followed other articles and open letters in the press in what the press called “Humanities’ cry for help.” Too many reforms were happening at once. In addition to the new leadership system and the research quality points system, the leadership was trying to make the faculty into what they called a modern knowledge organization with a new committee culture that could respond quickly to changes in the external environment. For example, study boards (studienævn), the elected bodies that involved staff and students in the design, running, and quality of each education program, were consolidated into one board per department, reducing academics’ direct involvement in running their own courses in the name of saving their time on “administration.” In addition, education programs had to be quickly reorganized from 44 to 18, there were a new admissions system and a new national marking scale, and the use of campus

space was being reorganized. There was a freeze in appointments until economic management was devolved to departments. Many of these reforms were criticized for being too hasty, with guidelines that were imprecise and deadlines for comments that were too short. Sometimes the demands were suddenly withdrawn as “miscommunication” after people had put in considerable effort (Ammitzbøll 2007; Baggersgaard 2007; Richter 2007). The trade union representatives’ letter said that many of the changes felt “like a diktat from above.” There was an “unending stream” of meetings, plans, schemes, reforms, and requests for comments, which meant less time for teaching and research; a freeze on filling posts had increased teaching workloads and academics were looking for positions elsewhere. The pressure was breaking academics’ loyalty to the university, and the union representatives’ letter to the dean concluded: “All in all, we consider the situation grave. The mood is depressed, insecure and stressed. It is our responsibility to inform the leaders and request local and central leaders to make a serious effort, by involving the employer-union committee, to rectify things and recreate a good and constructive work environment” (author’s translation). At about the same time, the faculty leadership launched a program called “Humanities in the 21st Century” aimed at creating a dialogue and collective consciousness within the faculty that could be projected publicly to make the humanities more visible to relevant interest groups and to demonstrate that the humanities were good value for taxpayers’ money. A trade union representative reacted to the dean’s presentation by saying that it was the university’s top-down steering that created work conditions that militated against a collective identity in the humanities. He said that workers’ influence had gone, everything was detail-steered, they were bossed around, and many had lost their pleasure in being at work – something that previously had characterized the place (quoted in Villesen 2008b; author’s translation). In what was described as a “shouting match” (Villesen 2008a), the dean rejected this characterization, reportedly saying that the employees were whining, whereas they should see themselves as privileged: their slack work hours meant it was unknown whether they were drinking beer at Nyhavn (a popular harbour-front sun spot crowded with bars) or having good ideas. The dean later apologized in the press for her unfortunate comment and restored an orderly environment, but the initiative that aimed to create dialogue had instead revealed the gulf between the leaders and those they now managed.

Academics increasingly used the press to communicate with their leaders. One remarked on the “Nyhavn incident”: “It was as if [the dean] wasn’t the employees’ representative. She appeared like a politically appointed person that had to tighten humanities up” (quoted in Villesen 2008b; author’s translation). Another asked for more involvement and trust: “Humanities education is in the middle of an important change process ... But at the moment, administrative changes are rushed through from above and so quickly that our own ideas about how the disciplines should look don’t really play a role. We would like to be co-actors in the change process” (Baggersgaard 2007; author’s translation). Others reflected on their alienation: “The ownership that academics felt for the department has now slipped out of our hands” (Villesen 2008a; author’s translation); and “When we had self-steering there was a feeling that it’s our department for good or ill – it’s something we should try to make work well. Now that the ministry and parliament have made these changes, it’s harder to take such a big responsibility. Now we’re more like wage workers (quoted in ibid.; author’s translation). When the dean sent the faculty’s “knowledge spreading” points system out for consultation, it generated uproar, but this time it was conducted in the national media and on blog sites. One of the dean’s critics pointed out in a newspaper article that this created the hilarious situation of his earning a point every time he criticized his dean in the press. The points system, he argued, made it much easier to write a great number of blogs, of transitory significance, at 1 point each, than to write a school textbook on literary history for use for years to come, at 36 points. This, he said, was an invitation to inappropriate behaviour (Villesen 2008b). A newspaper picked up his article, contacted four leading scholars from one department in the Humanities Faculty, and in an editorial quoted nonsense from their responses, such as “Tjalala bum” or “Beware of the bogeyman.” Each thereby earned a point towards their department’s income next year. The editorial concluded that journalists had a responsibility to spread their points around and should not ring just one department in future (Villesen 2008a). All concerned in these public debates saw the points system as a mechanism aimed at changing academics’ behaviour. There were three main points of view. The first was principled opposition to a system designed to make people respond to an instrumental rationality and change their professional values and conduct. A debate book explored the logic of the system: chase points, not knowledge; be a good citizen and earn income for your department by producing a large quantity

of lower-quality publications from your existing research; and do not start a new research area, as it takes too long to begin publishing – a path-breaking researcher is a loss-maker (Auken 2010: 55–6; my translation). The second viewpoint, espoused by faculty leaders, was principled support for the intended effects of the points system on academics’ behaviour. The dean referred to the model’s “effect on upbringing (opdragelse),” a word usually applied to children, and members of the faculty’s research committee talked of the system “regulating behaviour” (adfærdsregulerende) (Humanities Faculty Secretariat 2007a: 3; 2007b: 5; 2007c: 4). To the prodean, the incentives were in keeping with academic professionalism and enhanced quality: competition to get published in “top” journals would improve quality and “promote a publication behaviour that will strengthen humanities in the long run” (author’s translation). The third viewpoint recognized a distance between academic professional values and political pragmatism, a distance that verged on the cynical. For example, the rector, in his response to the petition, referred to (and implicitly endorsed) the university newspaper’s report of an open faculty meeting, in which a professor, a leading figure in protecting academic freedom, argued: “It [the points system] has nothing to do with measuring research quality. It’s because the humanities has need for a system to convince the world, the rector, and the minister that they get something for their money. So stop calling it a quality measure; it’s not, it’s a cover [i.e., a screen under which humanities can hide and survive] … It’s just a way to hold the ferocious wolves at bay” (author’s translation). The dean responded to the public debate over the “knowledge spreading” points system in a tone similar to that of the rector, almost chiding the protesters for being naive: she did not believe that highly trained and intelligent academics would respond to the system’s incentives by writing a lot of newspaper articles instead of writing a book (Information 2008; Villesen 2008c). There is a contradiction in the third view. On the one hand, leaders expected academics’ professional standards, forged in a previous era of governance, to persist. On the other hand, the points system, as part of a new system of steering and governance, was intended to reshape the sector, the institution, and the individual, and make them behave according to its incentives.11 Overall, the third view says: this new system of governance is intended to reshape academics’ conduct, and academics should merely adopt it cynically as a protective cover, and it should be resisted by sustaining older academic values.

Humanities Academics’ Responses: Pragmatism, New Opportunities, and Principled Opposition At faculty meetings, in the national press, in blogs, and in this study’s research interviews, academics discussed how the points system, in the context of the new leadership and steering systems, acted as a mechanism aiming to change their mode of thinking about themselves and their work. People’s responses were influenced, first, by the extent to which they saw the points system as something acting on them, as against something they could use to their advantage; and, second, by the extent they were either prepared or able to adopt the leaders’ distance and cynicism in the third viewpoint above or took the first viewpoint of principled opposition and felt there should be professional integrity in their academic persona. A woman adjunct, who was seeking a lectureship, took a thoroughly pragmatic approach by avowedly adopting the points system and the government-funding model as guides to her behaviour and use of time. She welcomed them as the long-overdue establishment of a transparent way of evaluating people on a level playing field on which women would be able to compete, at last, on equal terms with men for appointments and promotions. The system now told her exactly on what she should devote her time and energy: publishing articles in “top” journals to earn maximum points and getting a large EU project to raise the department’s external funding. Nothing else counted, and she would not put time or energy into anything other than these projects. She regretted the changes to the study boards and the loss of involvement and ownership over the course that she taught. She thought the ending of meetings in which the teaching group got together meant she lacked knowledge over how her input fitted into the whole course, but in order to do what the leaders wanted, she would reduce her “departmental citizenship” and do exactly what was required. She showed us the department’s system of allocating hours to different tasks, and this would be her guide. She would let a pragmatic approach pervade her academic activities so that the department leader had no excuse but to advertise a lectureship for which she could apply and to ensure that her CV stood every chance of succeeding. It would now be clear if less qualified men were promoted over her. A senior professor took a more nuanced but still pragmatic approach: he would adapt his activities to “what counts” in order to be able to continue doing “what matters.” He was involved in several faculty

committees and was aware of the political background to the faculty’s points system. He objected to this kind of system because it was “a screw without end” – it “wants more and more, spreads its logic and runs by itself. Such a system ... becomes progressively more finemeshed and more and more complicated, and we have to use more and more time to do it correctly” (author’s translation). But he accepted the points system as an inevitability in the prevailing political climate. He praised the dean for designing a points system for “knowledge spreading” that was suited to the humanities: “Before someone pushes a system down over our heads, let’s make our own, and see if we can sell it [to the ministry]” (author’s translation). He felt the faculty leadership had negotiated to get the best result possible for the humanities. Now he was assessing whether and how he should adjust his own activities. He pointed out that even though the 2003 University Law required universities to develop closer relations with the surrounding society, the points system privileged publications in English for an international and academic-only audience. He could publish a book in English with a “top” press, but only five people would ever read it, and, anyway, humanities had an obligation to write for a Danish audience. Nor was there a clear division between “research” and “knowledge dissemination” in humanities: his latest book was based on new knowledge and research funded by the Research Council, but the publisher also marketed it to education institutions and the general public. How should this be “counted?” He would continue publishing for a public readership in order to generate social debate about cultural institutions, but he had become an advisor to the publisher so as to persuade them to set up peer-reviewing and ensure that his next book would count for more. Maybe he would do just enough “top” publishing to earn points for his department and protect the rest of his time for working on local cultural and communication activities and doing what he felt a senior academic should do to contribute to society. He pointed out that perhaps such a strategy was available only to an established professor, who, as he said, could feel fairly safe. What concerned him most was the “management power” that went into fulfilling the demands from above to register and deliver the right numbers. He felt that before the 2003 University Law, deans and department leaders had not had enough “management power” to go in and sort out the minority of malfunctioning departments and academics who did not work or behave properly. Currently, deans and department leaders had considerable management power, but it was

used primarily to fulfil demands from the ministry and the university leadership. The department leader comes round to find out how many international conferences his colleagues have held this year, as he has to deliver a certain number to the dean. The department is too big and the top-down demands for performance by numbers so great that the department leader cannot use his management power to solve the existing interpersonal problems and create an enthusiastic and exciting work environment. It falls to the leaders of the research groups in the department to do the kind of management that shapes the work environment, but only 20 per cent of their time is allocated for both leading their research group and being on the department’s research committee. Meanwhile, with leaders who are “not our own” and no forums through which to exercise more than a corrective influence on actions emanating from above, as he expressed it, “we have been put outside the door of our own house” (author’s translation). He no longer felt a sense of ownership of his workplace and was less collective minded. Whereas the government and the leadership clearly wanted a university of academic entrepreneurs, he found himself acting more and more like a wage labourer. Increasingly cantankerous and scrupulous, he found himself demanding that if a task had to be done, he needed an allocation of hours for it, and he would not sit through activities like the university leadership’s events to foster entrepreneurship if he could not see the “point” in them. Another woman, a fairly recently appointed professor, saw the competitive system as opening up opportunities for her to pursue her professional dream of the kinds of scholarship that had been closed to her before. She had entered into the various initiatives for competitive funding of research. She had won one of the minister’s prizes as a “star researcher” and had been awarded competitive funding to establish a research centre and other substantial research grants. This centre’s annual performance was judged against the publication output of other centres, mainly from the sciences. Its members had therefore adopted a publication strategy similar to that of their competitors. They focused on getting research quickly into the public domain and produced a high number of often multi-authored articles aimed at “top” journals. This strategy mirrored the ministry’s points system and would score well on the faculty’s points system. They found this level of output extremely demanding for a humanities subject, but they explained that they paid the price of this competitive strategy in order to create a space for pursuing research topics that had never gained approval in departments

under the old elected leaders and their male cronyism. As one of her colleagues said: “When there was an elected leader, it was always a man, and he would protect his own, both in terms of gender and in terms of research agendas … Only a small elite ever had research freedom: those with permanent positions. A clique set the agenda for everyone else and if you didn’t like it, then you had to move somewhere else. This was called solidarity. People with permanent positions talked about freedom, and half the teaching was done by external [casual] lecturers, who had no right to do research, and in the name of solidarity they [the casual staff] did the teaching.” Within this new system of appointed leaders, upward accountability, and the abolition of organs for dialogue and influence at departmental level, they had established their research centre as a self-contained oasis with the kind of collaborative, mutually supportive and “flat” organization that was not possible under the previous form of governance and before such pockets of competitive funding became available. The members of the centre, gathered for a group interview over lunch, explained how they were able to develop international networks, explore new research approaches, co-manage projects and create a supportive environment for each other and the PhD students, and share in discussion and decision-making with an openness and enthusiasm never possible before. Their strategy was similar to the research group in life sciences that also tried to create an open and flat structure, but the difference was that this centre had external funding, which made it largely independent of the department. The external funders did not expect the centre to have this form of management. They expected to interact with the “star” professor as an all-powerful leader atop a pyramid of top-down management, and projects now had to be large, headed by a single person, with colleagues and PhD students represented as subordinate and managed. The “star” professor, without whom the centre could not exist, acted as a buffer between these external demands for power concentration and the flat, open, and cooperative way that the senior staff and PhD students ran the centre. She also acted as an ambassador to the rest of the university. She had worked hard to get centre staff appointed to long-term positions to give them a secure future when the centre’s funding ended. There were still instances of “mysterious” decisions elsewhere in the faculty, announced by male colleagues, as evidence of continuing buddying and cronyism, but they had a good relationship with their head of department, and the way that international networking, “top” publications, and external funding now “counted” meant

that they had a new-found recognition and respect that gave them negotiating power. She used this not only to secure their own research area, but also to play a role in shaping the new steering arrangement in her own university and also, through talks at training events for research leaders, nationwide. However, the members of the centre had to be alert to shifts in the direction the wind was blowing and “move faster than the wind,” which was a continual pressure. This was a highoctane strategy to use the new steering arrangement as an opportunity for positive change – a strategy that was extremely demanding and not open to everyone. In sharp contrast, a male lecturer who was establishing a very successful career, not through a secluded centre but through research, teaching, and teaching administration in a mainstream department, took an approach of principled opposition. For him, the research points system made a travesty of the university. Like the established male professor described above, he argued that it rewarded only high-prestige academic publishing and contradicted the government’s own requirement, written into the 2003 University Law, that universities should relate more strongly with “the surrounding society.” For him it was central for the humanities to write both for an academic and a popular audience; to give talks to local groups all over the country on topics of Danish literature, history, and culture; to engage in public debate in the media; and to use their own judgment about which topics to pursue, regardless of their immediate value to Danish industry. But he was not prepared to shroud himself in a cloak of performativity under which he could continue to work on “what matters.” He wanted an integral approach to the pursuit of academic knowledge running coherently through all his activities. If academics’ work was to be articulated through the points system, it meant that, to be a good citizen in their department, they must put all their energy into academic publications that scored points and withdraw from the other activities. Even worse, the points system attacked the scholarship of exploring ideas deeply and over a period of time. As the point system rewarded quantity not quality, it invited cynicism and academic gamesmanship, like “salami-slicing” research into as many articles as possible to earn maximum points. He said that the energy and enthusiasm had gone out of a previously very vibrant faculty; many colleagues expressed feelings of thorough tiredness and a loss of thirst for scholarship; and the talk was of searching for jobs elsewhere. Colleagues were not inactive: they raised local and national petitions and argued their principled position strongly in

faculty meetings and the media. But their arguments, based on their professional knowledge and experience, were being made irrelevant in the face of the government and the leadership’s espousal of a need to act urgently to meet a fast-approaching and inevitable future. This lecturer expressed a feeling of dispossession even more strongly than the male professor who was put outside the door of his own house. He wanted to inhabit a figure of academic integrity – one of passionately pursuing knowledge and sharing critical understanding with students and the general public – which stood in sharp contrast to instrumental and pragmatic responses to the points system. A union representative who was on the university’s governing board pointed to the split between academics’ motivation and the incentives built into the points system: “Incentives only work if academics understand them as a positive support for their inner motivation. This system does not chime with their inner motivation at all – a need for recognition and a love of knowledge – to put it a bit pompously. That’s why people choose to work at a university. It does not make sense to score points if the points don’t measure what one thinks one should be doing” (quoted in Villesen 2008c; author’s translation). One of the petition organizers put it more dramatically: “The points system impinges on the individual researcher’s actual work in a completely destructive way” (author’s translation). A faculty member who had a senior position in the Royal Danish Academy of Sciences and Letters gave a label to this feeling of an attack on the professional persona: “The research quality model has a potentially damaging effect: that is, the individual researcher has felt her or himself hit existentially. As the process has gone on, it has at the same time become clear that it isn’t about the individual doing their best in relation to the model, and that means the question of self-value has been separated from the question of points” (Humanities Faculty Secretariat 2007a; author’s translation, emphasis added). This “existential stress” had two dimensions: a threat to their sense of professional identity and self-worth through a changed relationship to their work; and the change to top-down leadership, through which they lost responsibility for decision-making and for making their department and faculty successful. For these academics to adopt a pragmatic cloak in order to protect their “real” academic values was an existential step too far. There were no half measures and no conception of the possibility of cynical game-playing in their reaction to the point systems. If they were to follow the incentives in the point system wholeheartedly, be good departmental citizens, and do

what it took to earn maximum points, this would undermine the quality of their academic work.

Conclusion

The introduction of a points system to count, value, and rank research output and use the results in a formula for the competitive allocation of funding was core to a new form of university governance in Denmark. It has been argued above that such new forms of governance rely on one mechanism to try to reorder three scales of activity at once: the organization of a whole sector, the management of constituent organizations, and the “wise self-conduct” of individuals.

In terms of reorganizing the university sector, the government argued that Denmark should aim to have at least one university in the world’s “top 100” and that the best way to achieve this was to make all eight Danish universities compete with each other on the basis of the same criteria. There was little public discussion of the implications of this strategy. First, the focus on publishing in “top” journals in all disciplines did not accord with the methodologies used in world rankings. Second, why make all universities compete for funding over the number of international journal publications when this matched the profile of only some universities? Why was it in Denmark’s best interests to reorganize the whole sector around this narrow measure of a “world top-class” university when the sector was characterized by a diversity of universities, with some focusing on traditions of radical education and on working for their region? Third, the standardized allocation of points privileged those disciplines deemed central to Denmark’s survival in the global knowledge economy and was not equitable in its effects across the sector. The implications of implementing a standardized points system in different faculties and disciplines were made very clear by the Danish humanities faculties. Although the leaders of humanities faculties lobbied hard, the eventual system, as shown above, easily fitted life sciences and rewarded their English-language, journal-based publication pattern, but it devalued key elements of the humanities publication pattern. Fourth, the competitive points system created a “treadmill.” Universities do not receive a fixed payment per research point; rather, a funding pot is divided between them according to their share of the year’s total points. Departments, faculties, and universities are pitched against each other in a continual quest to increase their output in order to sustain their relative share of the funding. As they

collectively increase the speed of the treadmill, they raise Denmark’s total publications output and the total points score, but the value of each point, or the return on this increased effort, declines. The effects in terms of stress and collapse were seen in the life sciences department above, but from the government’s standpoint it is a very cost-effective way of getting ever-increasing output from the sector. In terms of transforming the internal organization of universities, the points system became an important management tool. The setting up of a competitive system for internally allocating funding throughout the organization was symbolically and practically a means of demonstrating the existence of a new “unified” leadership. As seen above, the contract between the rector and the dean of humanities required the faculty to establish a competitive system for allocating funding between departments. The faculty leadership invested heavily in developing such a points system for both research and knowledge spreading and treated this as a sign of their ability to perform their new role and to demonstrate their loyalty and accountability to the top-down leadership structure. This new leadership system was predicated on two expectations, which were not always aligned, about the effects of the points system on the third scale – academics’ own “wise self-management.” First, it was expected that academics would quickly learn “what counts” and would adjust their behaviour. But leaders were not consistent about the response they expected: sometimes they wanted academics to adopt new values and behaviours; at other times they expected them both to change their conduct and to sustain the values associated with the previous system of governance. Two examples were noted above. First, the teaching payment based on students’ passing exams is meant to encourage academics to increase student throughput, yet government does not expect output-based funding to act as an incentive for lowering standards. Instead, the government looks to academics’ old (and now disincentivized) sense of professionalism to maintain educational quality. Second, whereas the research points system rewarded the number of publications, not their quality, one prodean felt that the premium for publishing in peer-reviewed international journals would act as an incentive to change the publication pattern of humanities’ academics and would thereby improve quality. Other leaders seemed to admit that the incentives built into the research points system ran counter to academics’ professional values and advocated a pragmatic approach verging on the cynical. They expected academics to respond by doing

enough of “what counts” to create a protective carapace under which they could continue to do “what matters.” A similar situation faced universities in the United Kingdom: Shore and Wright (1999) described “schizophrenic academics” who complied with the demands of audit culture, yet tried to continue researching and teaching in keeping with their own academic values as well – until they became too exhausted to sustain this double workload.

The second assumption, integral to the new form of governance, is that the system’s incentives would accord with academics’ own professional motivation. In one case, cited above, it did so. A woman professor in the humanities engaged successfully in the new competitive ethos and expended great effort on writing funding applications and engaging in new publishing practices. As a result, she was able to open up a new space, not available previously, both to develop a new research field and to establish a “flat” collegial environment. Hansen (2011) shows that such super-successful academics (also known as project barons) are successful precisely because they keep focused on their core research agenda throughout their careers. They find ways to bend the changing funding systems and organizational conditions to their research needs (rather than continually adapting their research to external incentives, as government expects). In this case the woman professor used new sources of competitive and external funding to create a research centre at arm’s length from the new “unified” leadership. Just at the time when universities were becoming coherent, centrally steered organizations, such super-successful academics acted as buffers between the mainstream and their centres and introduced new loosely coupled spaces into the organization. Most academics could not find this kind of buffered space in which they could maintain their academic integrity. Even where a research group in the life sciences tried to create a mutually supportive environment, they were still integrated into the financial incentives and decision-making systems of the department and the leader could not buffer them as effectively as did the super-successful leader in the humanities with her externally funded centre. In the life sciences, the system of governance did work across all three scales, and the academics knew that they had to focus their energy on “what counted” (journal publications and external funding) if they were to have a career in the department or even maintain their employment. The associated ideas of “performance,” “time,” and “competitive expansion” were so normalized, even hegemonic, that they were rarely discussed in public. Where people learned

to curb their ambition in order to perform according to the incentives and had lost sight of using research freedom to advance their discipline, the pressures were internalized and exhibited as stress. In the humanities, one interviewee adapted herself to the new conditions in order to test whether the supposedly transparent performance criteria would generate more gender-equal appointment decisions in practice. Another considered how to adopt the carapace of performance while continuing underneath with the full range of work that mattered. But he was very conscious of, and worried about, changes he perceived in his own behaviour and attitude that smacked of the niggling wage labourer who no longer had control of his own work environment. In the uproar in the faculty, academics took a principled stand against the measurement of their work in ways that did not accord with their professional ambitions and values. As one put it, this was an existential threat. The British professor of anthropology Marilyn Strathern commented (personal communication) about a similar problem when audit culture was being introduced in the United Kingdom: academics are trained to seek the truth, and they believe that this truth should integrally inform all aspects of their persona, so it seems like dissembling when a surface compliance is required that should be split off from an integral core. The conflict and confusion over the research points system seem to derive partly from the double meaning of governance. Two meanings of governance were in play throughout these events: the “old” meaning of an individual and organization keeping themselves in good order through their own wise self-command; and the “new meaning” where the government decides the mission and aims for organizations and individuals; sets up incentive systems like the competitive allocation of funding based, inter alia, on research points; and expects organizations and individuals to “voluntarily” order themselves in response. Much of the stress in the life sciences and the uproar in the humanities can be attributed to this shift from a “bottom-up” to a “top-down” form of governance. Expressed in terms of having to limit their own ambition or being “set outside the door” of their own house, the existential threat came from government’s trying to assume the power narrowly to redefine the purpose of universities and usurp academics’ responsibility for their own work. The uproar in the humanities shows how, where passion and points are at loggerheads, motivation and incentive come asunder. If the individual’s wise self-conduct, the third scale of governance, does not fall into line with the leaders’ organizational incentives

and the government’s league-table mission for the sector – what Hazelkorn calls “the academic arms race” (2008: 209) – the supposedly invisible workings of this top-down system of governance become exposed and can be contested.

Acknowledgments

Very many thanks to Dorothy Smith and Alison Griffith for inviting me to participate in the invitational workshop Governance on the Frontline, 15–18 October 2009, at York University, Toronto, Canada. It was an honour and an inspiration to work with Dorothy Smith and the members of the Institutional Ethnography network, and to get their feedback on an early draft of this chapter. I am very grateful to the interviewees in the two departments studied and to the life sciences department for inviting me to present this analysis. Many thanks to Claus Emmeche, not just for maintaining the blog and website Forskningsfrihed?, which is an invaluable site for discussing university reforms, but also for giving me detailed comments on this and earlier texts about the research points system. Finally, thanks to Jakob Krause-Jensen and the late Kirsten Marie Bovbjerg for giving me inspiration throughout the whole ‘stress project’ and extremely helpful feedback on this chapter, and I thank them, along with Jakob Williams Ørberg and Rebecca Boden, for their comments.

NOTES
1 The route of the concept of autonomous university governance from shared European origins through the intervening centuries varies in different countries, especially with the influence of Humboldtian ideas of university autonomy in Germany and Denmark (see Moutsios 2012).
2 The research involved analysing the evolution of the government’s points system through a trail of government documents, newspaper articles and blogs over three years. It also involved 21 interviews with managers and academics at different stages in their careers, participant observation at meetings and events, and the analysis of minutes, policies, and petitions generated within the faculties.
3 This view was given, in an interview conducted by Ørberg and Wright, by a former government official who had been instrumental in devising this exam-based output funding for teaching.

4 The Thomson ISI citation index is the most used. The way that the five major publishing firms are benefitting financially from government policies to publish in ‘top’ journals and count citations and journal impact is documented by Ciancanelli (2007).
5 The globalization pool included new money allocated to universities to lead Denmark’s engagement in the global knowledge economy. It was also the vehicle for returning to universities part of the 2 per cent public sector budget cut applied to universities (and all public sector budgets) each year. Money for upgrading laboratories came from profits from the state system of renting buildings to the universities.
6 The latest THE methodology gives a 30 per cent weighting to citation scores in Web of Science journals and a 6 per cent weighting for the number of articles per staff member published in those journals (see www.timeshighereducation.co.uk/world-university-rankings/2012-13/worldranking/methodology).
7 Shorett, Rabinow, & Billings (2003: 124) record that in the United States, corporations account for over half of all national funding for biomedical research and development and supply 14 per cent of funding for academic research in biotechnology areas. More than 25 per cent of life science faculty participate in industry relationships, as do 39 per cent of genetic researchers in clinical departments.
8 There is no academic tenure in Denmark. Lecturers and professors are on permanent appointments, associate professors and PhD students are on fixed-term appointments, and ‘external lecturers’ are casually employed.
9 In November 2009 the political parties agreed the university budget that was to come into effect in January 2010. Thus, with less than two months’ notice, Copenhagen University learnt that, although its basic grant would go up by 63 million kroner, so much of it had been earmarked for specific purposes that the university had a shortfall for funding staff salaries. The university leadership reduced the basic funding of faculties by 60–70 million kroner, and this translated into the cutting of 130 posts, mainly in the departments of biology, life sciences and geography (Düwel 2009; Copenhagen University 2009).
10 The humanities faculty’s economy was dependent on both the government’s basic funding for research and the government’s funding for teaching output. The humanities faculty had fewer than 300 externally funded research projects, whereas the life sciences faculty had 1,500, half funded by Danish public funds, a third by Danish private funds and 250 by the EU or other international sources. Life sciences had a third more staff than humanities, employed predominantly on research, with very

little teaching. Of the humanities staff, three-fifths worked on research, half were teaching, but a sixth were on casual teaching contracts.
11 As mentioned above, the government’s system of payment for university teaching operates on a similar confusion: it makes a payment to the university only when a student passes an exam, to act as an incentive for throughput, but it relies on academics’ (previous and now disincentivized) professional standards to uphold quality.

REFERENCES Ammitzbøll, L. 2007. Nødråb fra humanoria. Magisterbladet. 6–7 June. Auken, S. 2010. Hjernedød. Til forsvar for det borgerlige universitet. Copenhagen: Informations Forlag. Baggersgaard, C. 2007. Nødråb til dekan. Universitetsavisen. 29 March. Bovbjerg, K.M., ed. 2011. Motivation og mismod. Effektivisering og stress på offentlige arbejdspladser. Aarhus: Arhus Universitetsforlag. Carr, K. 2011. Improvements to excellence in research for Australia. Australian Government media release, 30 May. Retrieved http://archive. innovation.gov.au/ministersarchive2011/Carr/MediaReleases/Pages/ IMPROVEMENTSTOEXCELLENCEINRESEARCHFORAUSTRALIA.html. Ciancanelli, P. 2007. (Re)producing universities: Knowledge dissemination, market power and the global knowledge commons. In D. Epstein, R. Boden, R. Deem, F. Rizvi & S. Wright (eds), Geographies of Knowledge, Geometries of Power: Framing the Future of Higher Education. World Yearbook of Education 2008, 67–84. London: Routledge. Copenhagen University. 2009. Rektor’s briefing on the financial situation. 20 November. Retrieved www.humanities.ku.dk/about/management/rector.pdf. Copenhagen University, Aalborg University, Aarhus Business School, Copenhagen Business School, Roskilde University Center & University of Southern Denmark. 2008. Humanistundersøgelsen 2007. Humanisternes veje fra uddannelse til job. Copenhagen: Copenhagen University. Dean, M. 1999. Governmentality: Power and Rule in Modern Society. London: Sage. Dreyfus, H., & P. Rabinow. 1982. Michel Foucault: Beyond Structuralism and Hermeneutics. Brighton: Harvester. Düwel, L. 2009. Væksten der blev væk. Kureren. 30 November. Retrieved http://kureren.ku.dk/artikler/november_2009/vaeksten_der_blev_vaek/. Emmeche, C. 2009a. Hvem er danske universiteter? Universitetsavisen. 10 September. Retrieved http://universitetsavisen.dk/debat/ synspunkt-hvem-er-danske-universiteter.

Emmeche, C. 2009b. Mareridt, damage control eller forskningsrelevante kvalitetskriterier? Notat om faggruppernes forbehold overfor den bibliometriske forskningsindikator efter niveaudelingsprocessen og indtastning af tidskriftlisterne pr. 15/9–2009. Humanistisk Forums Blog. Retrieved http://humanioraforum.wordpress.com. Faurbæk, L. 2007. Humanistisk Forskningskvalitet. Rapport om det humanistiske kommunikationsmønster og internationale forskningsmodeller. Copenhagen: Humanities Faculty and Copenhagen University Library. Folketinget [Parliament]. 2003. Act on Universities. Act No. 403 of 28 May. Retrieved www.videnskabsministeriet.dk/cgi-bin/theme-list. cgi?theme_id=138230. FORSKERForum. 2009a. Fagligt oprør mod embedsmands-ranglist. 17 March. Retrieved http://www.forskeren.dk/?p=198. FORSKERForum. 2009b. Embedsmands-ranglist trukket tilbage. 27 March. Retrieved www.forskeren.dk/?p=218. Forskning og Innovationsstyrelsen (FI). 2004. Humanistisk viden i et vidensamfund. Copenhagen: Ministry of Science, Technology and Development. Retrieved www.fi.dk/publikationer/2004/ humanistisk-viden-i-et-vidensamfund. Forskning og Innovationsstyrelsen (FI). 2007. Kommissorium for styregruppen til udvikling af dansk kvalitetsindikator for forskning. 27 February. Forskning og Innovationsstyrelsen (FI). 2009a. Aftale mellem regeringen (Venstre og Det Konservative Folkeparti), Socialdemokraterne, Dansk Folkeparti og Det Radikale Venstre om ny model for fordeling af basismidler til universiteterne. 30 June. Retrieved www.fi.dk/forskning/ den-bibliometriske-forskningsindikator/aftale-om-basismidler-efterresultat.pdf. Forskning og Innovationsstyrelsen (FI). 2009b. Samlet notat om den bibliometriske forskningsindikator. 22 October. Copenhagen: Research and Innovation Agency. Retrieved http://static.sdu.dk/mediafiles// A/0/7/%7BA0719ADA-D762-418B-A97A-DB62C6630B95%7D22.%20 oktober%202009-%20Samlet%20notat%20om%20forskningsindikatorer.pdf. Forskningsfrihed? 2007. Debat: KBH: Målsystemet, der ikke kunne mål. Blog. 28 February. Retrieved http://professorvaelde.blogspot.ca/2007/02/debatkbh-mlesystemet-der-ikke-kunne.html. Gibbons, M., C. Limoges, H. Nowotny, S. Schwartzman, P. Scott & M. Trow. 1994. The Production of New Knowledge. London: Sage. Giddens, A. 1998. The Third Way: The Renewal of Social Democracy. Cambridge: Polity. Government of Denmark. 2006. Progress, Innovation and Cohesion. Strategy for Denmark in the Global Economy – Summary. Copenhagen: Globalization

Council. May. Retrieved www.globalisering.dk/multimedia/Pixi_UK_web_ endelig1.pdf. Hansen, B.G. 2011. Adapting in the knowledge economy: Lateral strategies for scientists and those who study them. PhD thesis, Copenhagen Business School. Hazelkorn, E. 2008. Learning to live with league tables and ranking: The experience of institutional leaders. Higher Education Policy 21 (2): 193–215. http://dx.doi.org/10.1057/hep.2008.1. Hesseldahl, M., H.E. Nørregård-Nielsen, K.M. Lauridsen, A.M. Skov, M. Kyndrup, I.W. Holm & P. Øhrgaard. 2005. Humanistiske kandidater og arbejdmarkedet. Rapport fra en uafhængig arbejdsgruppe. Copenhagen: Ministry of Science, Technology and Development. Retrieved http://fivu. dk/publikationer/2005/humanistiske-kandidater-og-arbejdsmarkedet/ humanistiske-kandidater-og-arbejsmarkedet.pdf. Humanities Faculty Secretariat. 2006. Minutes of the meeting of the Research Committee. 6 October. Humanities Faculty Secretariat. 2007a. Minutes of the meeting of the Research Committee. 15 March 2007. Humanities Faculty Secretariat. 2007b. Presentation of the Research Quality Model. 30 March. Humanities Faculty Secretariat. 2007c. Registration of Research Quality in the Humanities Faculty. 20 April. Information. 2008. Universiteternes virkelighed og journalistens billig grin. Information. 29 January. Retrieved www.information.dk/153867. Kofoed, K.L., & J.D. Larsen. 2010. Universitetsrangliste er en prestige Information. January. Retrieved www.information.dk/221702. Larsen, P.S., A.-M. Mai, H. Ruus, E. Svendsen & O. Togeby. 2009. Forvarsel. Er det slut med at forske på dansk? Politiken. 15 March. Retrieved http://politiken.dk/debat/analyse/ECE669351/ forvarsel-er-det-slut-med-at-forske-paa-dansk/. Latour, B. 1998. From the world of science to the world of research? Science 280 (5361): 208–9. http://dx.doi.org/10.1126/science.280.5361.208. Ministry of Science, Technology and Development. 2005a. Danmark skal vinde på kreativitet: Perspektiver for dansk uddannelse og forskning i oplevelsesøkonomien. Report of the Arbejdsgruppen vedr. oplevelsesøkonomi. Copenhagen: Ministry of Science, Technology and Development. Ministry of Science, Technology and Development. 2005b. Humanistiske uddannelser i tal. Copenhagen: Ministry of Science, Technology and Development. Ministry of Science, Technology and Development. 2009. Fordeling af globaliseringsreserven til forskning og udvikling 2010–2012. 5

November. Retrieved http://vtu.dk/lovstof/politiske-aftaler/fordeling-globaliseringsreserven-forskning-udvikling-2010-2012/. Moutsios, S. 2012. The European particularity. Working Papers on University Reform No. 18. February. Copenhagen: Danish School of Education, Aarhus University. Retrieved http://edu.au.dk/forskning/omraader/epoke/publikationer/workingpapers/. Ørberg, J.W. 2007. Who speaks for the university? Legislative frameworks for Danish university leadership, 1970–2003. Working Paper on University Reform No. 5. May. Copenhagen: Danish School of Education, Århus University. Retrieved www.dpu.dk/site.aspx?p=9165. Osborne, D., and T. Gaebler. 1992. Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector. New York: Plume. Oxford English Dictionary, The. 1989. 2nd ed. Oxford: Clarendon Press. Pedersen, O.K. 2011. Konkurrence staten. Copenhagen: Hans Reitzels Forlag. Petition to the University’s Governing Board. 2007. Concerning the introduction of a point system to measure research in the humanities faculty. 26 January. Pollitt, C.K., K. Bathgate, J. Caulfield, A. Smullen & C. Talbot. 2001. Agency fever? Analysis of an international policy fashion. Journal of Comparative Policy Analysis 3 (3): 271–90. http://dx.doi.org/10.1023/A:1012301400791. Rhodes, R.A.W. 1997. Understanding Governance: Policy Networks, Governance, Reflexivity, and Accountability. Buckingham: Open University Press. Richter, L. 2007. Humanister: Reformer dræber arbejdsglæden. Information. 25 July. Retrieved www.information.dk/137756. Richter, L. 2009. Sander: Ranglister over forskning er ikke så enkelt. Information. 19 March. Retrieved www.information.dk/185787. Richter, L., & K. Villesen. 2009. Ministeriets rangliste over forskning er fyldt med fejl. Information. 18 March. Retrieved www.information.dk/185737. Rose, N. 1989. Governing the Soul. London: Free Association. Schneider, J. 2009. An outline of the bibliometric indicator used for performance-based funding of research institutions in Norway. European Political Science 8: 364–78. http://ffarkiv.pbworks.com/f/BibliometricIndicator-Norway_JW.Schneider2009.pdf. Schneider, J., & K. Aagaard. 2012. “Stor ståhej for ingenting” – den danske bibliometriske indicator. In K. Aagaard & N. Mejlgaard (eds), Dansk Forskningspolitik efter Årtusindskiftet, 187–213. Aarhus: Aarhus Universitetsforlag. http://faggruppe68.pbworks.com/w/file/

fetch/54834966/Dansk_forskningspolitik_efter_a%CC%8Artusindeskiftet_ kapitel_8.pdf Shore, C., & S. Wright. 1999. Audit culture and anthropology: Neoliberalism in British higher education. Journal of the Royal Anthropological Institute 5 (4): 557–75. http://dx.doi.org/10.2307/2661148. Shorett, P., P. Rabinow & P.R. Billings. 2003. The changing norms of the life sciences. Nature Biotechnology 21 (2): 123–5. http://dx.doi.org/10.1038/ nbt0203-123. Sivertsen, G., & J. Schneider. 2012. Evaluering av den bibliometriske forskningsindikator. Rapport 17/2012. Oslo: Nordisk institutt for studier av innovasjon, forskning og utdanning (NIFU –Nordic Institute for studies in innovation, research, and education). Universitetsavisen. 2007. Lad som ingenting. Universitetsavisen. 24 April. Villesen, K. 2008a. Humanoria-konflikt skyldes universitetsloven. Information. 8 February. Villesen, K. 2008b. Humanoria-dekan: de ansatte klynker. Information. 8 February. Villesen, K. 2008c. Vrede over nyt pointsystem. Information. 25 January. Wright, S. 2005. Processes of social transformation: An anthropology of English higher education policy. In J. Krejsler, N. Kryger & J. Milner (eds), Pædagogisk Antropologi: Et fag i tilblivelse, 185–218. Copenhagen: Danmarks Pædagogiske Universitets Forlag. Wright, S. 2008. Governance as a regime of discipline. In N. Dyck (ed.), Exploring Regimes of Discipline: The Dynamics of Restraint, 75–98. EASA Series. Oxford: Berghahn Wright, S., & J.W. Ørberg. 2008. Autonomy and control: Danish university reform in the context of modern governance. Learning and Teaching: International Journal of Higher Education in the Social Sciences 1 (1): 27–57. http://dx.doi.org/10.3167/175522708783113550. Wright, S., & J.W. Ørberg. 2009. Prometheus (on the) rebound? Freedom and the Danish steering system. In J. Huisman (ed.), International Perspectives on the Governance of Higher Education: Alternative frameworks for coordination, 69–87. New York: Routledge.


Conclusion
Alison I. Griffith and Dorothy E. Smith

This chapter is called “Conclusion,” but in a sense there can be no definitive conclusion. Through research, institutional ethnography develops our knowledge of the extra-local relations permeating our everyday lives; it grows and builds on research; and research adds innovations to the ethnographic tools needed to explore beyond the local settings of direct observation. Research projects such as those presented in this book extend our understanding of institutional organization, of ruling relations, of how texts coordinate what people do and, of course, always with attention to how people experience and are active in these areas. A distinctive feature of institutional ethnography is that its discoveries, both substantial and methodological, are not confined to specific institutional settings or relations. Early on in its development, it was a surprise to us to discover how much we could learn from one another across different institutional areas such as post-secondary education, municipal development, and hospitals. A notion such as “institutional circuits,” a recurrent theme in this collection, is an example of what institutional ethnographers have discovered in linking research findings across different institutions. In the studies collected here, authors have paid careful attention to the actual activities of the workplaces. Our notion was that if we could trace the everyday work of people on the front line through to the recording of that work in new managerial terms, then we would be able to (1) show how front-line work was changing; (2) highlight the governing relations that were coordinating the changes; (3) identify the disjunctures between front-line work and what can be “objectively” recorded; and (4) bring into view the invisible work at the front line as people

managed the gaps between what people are actually getting done and producing the information required for managerial governing. Each chapter in this collection makes visible, though not necessarily explicitly, institutional circuits as a distinctive aspect of governing. The term locates circular processes that coordinate the everyday actualities of experience with the objectified categories and concepts of the institution. Institutional frames organize selectively what will be recorded or otherwise entered into the textual representations that make actualities institutionally actionable. Accountability circuits are a special type of institutional circuit focusing on making performance or outcomes produced at the front line accountable in terms of managerial categories and objectives. Institutional circuits, including managerial accountability circuits, have become an ordinary presence in front-line work. They appear on the computer screen in front of the teacher preparing report cards or admitting clerks filling in medical numbers. They are the standardized tests for students in grade 3 or the standardized publication outcomes that represent the productivity of university faculty. They are the standardized reports that go back to the funding agency. All are textual; the realities they constitute are virtual. At each moment of their appearance, people interact with them. They may change parts of them, add to them, and save their changes, and the modified text disappears only to reappear at the next step in the social relation it accomplishes – at which point someone else acts on the text that appears in front of them. In addition to computers, institutional circuits also appear on handheld tablets, smartphones, or other hardware that has been programmed to receive the text. Recognizing how people’s work is controlled, managed, and, more generally, coordinated through the medium of texts of many kinds is integral to the very possibility of taking ethnography beyond what can be learned from local observation while keeping it anchored in the local setting of people’s work (Smith and Turner 2014). Institutional ethnography’s discovery of text-work-text sequences has been foundational to how the studies collected here have been able to show institutional circuits organizing work in varying institutional settings. Though the particulars of the institutional circuits described in these chapters may be changing, these studies have developed ways of making observable and intelligible what is going on behind our backs. They tell us what to look for and how to trace the relations reorganizing the front line of work with people.

Research builds bases on which further research can be developed. The institutional ethnographies brought together in this book focus on the reorganizing of governing in contemporary society as people experience it in their work at the front line. In so doing, these studies suggest directions in which further exploring might travel, connecting what has been learned of how front-line work is being managed with the extended relations that institutional circuits are tied into. Our workshop discussions and our reading of the chapters that are assembled here have suggested to the editors two distinct regions for further research.
1. Any particular work-text-work sequence of action organized as an institutional circuit connects with relations (understood as sequences of action) beyond it. The topic identified by the heading “Extended Social Relations” (below) will open up the issue of exploring the complex of relations with which various institutional circuits coordinate, including issues around the technologies that play key roles in transliterating actualities into fitting textual representations.
2. There is a missing piece in these studies (with one or two exceptions). The notion of institutional circuits draws attention to how the textual realities represent the actualities of what people are doing at the front line. However, most of the studies represented in this volume focus on institutional employees. An exception is Sinding’s piece (Chapter Eight), which brings into view the work of someone on the other side of the front-line work relationship, in this case, a patient. This opens up another region to be explored, that of the work of clients, patients, and others being dealt with institutionally. Here, of course, institutional ethnography’s generous conception of “work” stretches the term beyond its ordinary equation with the job to include anything people do that takes time and effort and is intentional. Becoming aware of this missing piece points to directions for research that take up the work being done by the people served or otherwise dealt with institutionally as they participate in institutional courses of action. What is the work involved for them as they become objects of institutionally provided services? Think of the time spent sitting in the emergency department of a hospital waiting to see the physician – that is work. How does clients’ or patients’ work coordinate what Ellen Pence (2009) has described as “institutional fragmentation” (the multiple specialized institutional agencies or departments dealing

with individual cases)? How are the actualities they experience organized – even if only contingently – within the institutional circuits of changing public management? This topic is developed in the section that follows “Extended Social Relations,” under the heading “People at Work.”

Extended Social Relations

When introducing institutional ethnography as a method of inquiry, Dorothy used the figure of a woman looking up into a complex of relations organizing her everyday life and experience (Smith 1987: 171; 2006: 3). The figure stresses that the direction of inquiry is given by someone or some people’s experience in a particular local setting as it is organized by extra-local or translocal relations. When we take the position of the “small hero” looking up into these relations (we can imagine them as the mountain range to be explored), we find that complexes organized functionally and identified as distinct institutions are embedded in yet other relations – those of the institutionally relevant discourses, of government, and, beyond regional government, of the transnational governance of organizations such as the International Monetary Fund (IMF), the World Bank (WB), the United Nations (UN), and the Organisation for Economic Co-operation and Development (OECD).

The chapters in this book have explored varieties of the institutional circuits that bind front-line work and workers into higher-level textual media of governing – we have called them “boss texts” – that are hitched up into relations extending beyond those that have been brought into focus. Virtual texts are the currency here. The shadowy mountain range that we could see from our research is textually organized to work within a digital medium. The computer or hand-held tablet is a screen on which particular texts can be displayed one at a time. These virtual texts are in conversation with other computer-based informatics. While computer manufacturers and software developers may talk about information being stored in a digital cloud, the digital cloud is, in fact, located somewhere. That virtual data must be available to administrators and managers (located somewhere and some-when and at work) who read through the data – the textual reality that has been rendered at the front line by someone working with an institutional technology. The technology frames what is managerially visible and invisible. It standardizes

across different local actualities the textual representations that enter into the institutional circuits. These studies open directions for further exploration. Relations beyond the particular ethnographic focus are apparent in many if not most chapters. Among others are Rankin and Tate’s (Chapter Four) explication of the HSPnetTM for nurse educators, which was developed by the private sector in relation to the educational standards created by the BC Academic Health; Corman and Melon’s (Chapter Five) description of how the Canadian Triage and Acuity Scale is coordinated with another technology called the electronic Patient Care Record; and Darville’s (Chapter One) implication of the International Adult Literacy Survey (IALS) in changes in literacy education in Canada and its connections with the OECD, which had been taken up by the Canadian government (Kerr’s account in Chapter Three of standardized testing in Ontario suggests a similar tie-in). In exploring beyond levels of public sector organization framed by the studies in this book, it is important to keep in mind that we are still committed to the ethnographic practices of the field and hence to discovering just how these relations are organized by actual people. In some accounts of Canada’s connections to the OECD we might find, say, the use of the term “influence.” Immediately, our institutional ethnographic sensors are alerted. Here is a term that presents two problems. For one, it has no referent. It is never clear what is going on/being done by actual people at actual times and in actual places that authorizes the use of the term. Second, the use of the term introduces a causal logic, however modestly, and is hence at odds with an ethnographic practice that insists on always returning to people’s doings. An institutional ethnography looks for the implied but hidden presence of what actual people have been doing and how their doings have been coordinated, obscured by a term such as “influence.” The key step is to shift out of the mode of “no referents” (see Giltrow 1998: 341–2) to an investigation of what may be going on among people that “influence” overlays and hides.

Reflecting on the various papers in the volume suggests that research directions would orient differently in differing institutional settings. The issues of concern in setting the governing text described by Campbell (Chapter Two) have to do with establishing standardizing procedures for evaluating the effectiveness of aid provided to non-governmental organizations in various developing countries and hence with accountability consequential for funding. While clearly such procedures would

make it possible to compare the effectiveness of aid given to different countries, ranking is not the objective, whereas it clearly is the purpose of the IALS and of the Danish government’s imposition of techniques of ranking academics (see Wright’s Chapter Nine). The part played by national governments in promoting such commensuration is suggestive of where further examination might go to find out how the visible quality of the training and education of its labour force (see Darville’s Chapter One) or the commensurability of its university faculty (see Wright’s Chapter Nine) became policy interests and how such evaluative measurement gets hooked into the transnational complex of relations Darville’s study opens up. Reading these studies also suggests a search for the source of the various technologies and how they are actually produced and “sold.” Technologies are specific to the institutional context and the distinctive character of the front-line work realizing institutional objectives. Reporting procedures developed to maintain quality control over paramedics, as described by Corman and Melon (Chapter Five), are put together very differently from the template format used in the applications for Medicaid described by DeVault, Venkatesh, and Ridzi (Chapter Six). The work of paramedics is resistant to standardized representation; the reporting procedures described by Corman and Melon are mediated by the inscriptive work of paramedics who have to find ways of fitting into the reporting framework what they were actually having to do in response to a particular individual’s troubles and a particular situation. Applications for Medicaid, by contrast, have to fit a template format that “restrict[s] user input to checking boxes, circling options and filling in blanks.” DeVault, Venkatesh, and Ridzi describe this technique as the most coercive. The tight definition of responses functions like an interrogation; responses cannot be qualified, improved in accuracy, or elaborated. There are no options. The template is likely tied to the precisely written legislation, both probably written in detail within the public service itself. Completed applications must closely fit legislative specification and either are approved or fail. Though information from more than one individual may be accumulated for statistical purposes, the ranking of individuals in relation to one another is not one of the template’s functions – in contrast, say, with adult literacy or educational testing. However, there are also significant differences in what the representational technology must transliterate. Every situation paramedics encounter is unique; what they do has to be responsive to what they

confront within the scope of their skills and responsibilities. The reporting procedure formalizes those responsibilities, which include appropriate consultation with hospital services and physicians. By contrast, applications for Medicaid involve representations of actualities that are already in the standardized and measurable form of money or, like the applicant’s home, can be readily represented monetarily as an asset with a definable monetary value as real estate.

The technical specificity of the technologies being used to standardize and measure local institutional performance or outcomes invites further research. The technology for measuring academic performance imposed on Danish universities that Wright (Chapter Nine) describes looks fairly straightforward. It could be backed up by the use of more technical devices such as those of bibliometrics (techniques of the statistical evaluation and ranking of citations to an academic work and numbers and sites of publication). There remains, however, the fundamental problem, which Wright examines in the Danish setting, of disjuncture between techniques of measurement and their adequacy as representation. “The problem ... is that all research is not created equal. And the most common tool used to measure it – counting the number of published articles, analysing citations – fails to … account for the differences between disciplines and institutions” (Pearson 2012). Recently, Canadian universities have been ranked using technologies of measurement that avoid the problem of those fairly well adapted to the natural sciences but at odds with what faculty are “producing” in the humanities and social sciences (Wright in Chapter Nine; Pearson 2012). At least one consulting company in Canada, the Higher Education Strategy Associates (HESA), has devised “a new ranking system to measure research strength at Canadian universities,” which “evaluat[es] individual professors using the standards of their own discipline” (Pearson 2012; see also HESA 2012). The intervention of the HESA in devising a method of evaluating universities in Canada directs attention to the presence of managerial consultants with an interest in making developments and improvements in the technology being applied to institutional circuits in both the public and the private sectors. It seems likely that what we have been witnessing is itself a moving and changing process in which the application of given institutional technologies creates a form of trial-and-error learning to which consultants are attentive. Problems emerge that had not been anticipated; technologies can be elaborated and further consultation sold.

A casual reading of the business pages as well as of consultant companies’ websites also suggests the existence of a managerial technology discourse that intersects the corporate and public sectors. It is evidenced in the recurrence of acronyms that are not spelled out – such as ERP (enterprise resource platforms/planning), HR (human resources), and HCM (human capital metrics). Such practices locate a discourse in institutional ethnography’s ordinary sense of actual social relations among people coordinated in texts, probably mostly but not exclusively electronic ones. These could be explored through development of institutional ethnography’s methods of inquiry that would extend discoveries already made that were informed by concerns arising among people whose everyday work lives are being reorganized. Zurawski’s (Chapter Eight) study is suggestive in this respect, indicating where investigation might start in looking for how the work of employees in self-improvement coordinates with the managerial framework and objectives of the companies they work for.

People at Work

The second topic for further exploration that has arisen from the editors’ retrospective reading is one that is missing except, as mentioned earlier, in Sinding’s (Chapter Eight) account. It is not a criticism of the work that has been done to direct attention to a region that calls for further research. In many of the studies included, those at the front line are directly working with people. Nevertheless, we do not learn about how the introduction of new public management in the public sector enters into what is happening to the people being worked with. The presence of those indirectly caught up in the new managerial organization becomes visible in Janz’s (Chapter Seven) account of how introducing performance criteria into the work of front-liners working with people who have suffered brain damage imposes these criteria indirectly onto those they serve. Nichols (Chapter Seven) provides some analogous insight. Only Sinding’s (Chapter Eight) study, however, provides a direct account of the active participation of someone on the other side of the front-line institutional interchange. And there is much to be learned by exploring how new managerial practices enter into and reorganize the work of being a client, a patient, a prisoner, and so on. Developing this insight suggests that the work and coordinating practices of the people who are dealt with, served, and handled at
the front line is a potentially important topic to be researched ethnographically. Some of the people involved in people-work organization are actually at work producing as their actualities what are textually represented as the “outcomes” attributed to the institutional front-line workers: students in Ontario schools take the tests that are used to evaluate schools and teachers (Kerr in Chapter Three); the outcomes of employment training services provided to immigrant women rely extensively on work done by the women themselves (McCoy in Chapter Seven). Typically, the technologically standardized representational procedures imposed displace any recognition of people’s actual life situations in all their relational and economic complexities. No account is taken of the work involved as the people being dealt with participate, willingly or not, in the institutional process (see, e.g., in Chapter Seven, Nichols’s conception of “youth work” as a descriptor of the interdependent actions of welfare workers and homeless youth). As a field for further research, this directs attention to the fact that the work being done by the people served or dealt with at the front line of the public sector is integral to institutional processes.

A model is provided by a research proposal made by George Smith, Eric Mykhalovskiy, and Douglas Weatherbee (2006). It took up the issue of how health, housing, and welfare services provided to people with AIDS (at a time when little was known about how to treat the condition) were coordinated. They proposed to look not at trans-sector coordination at the institutional level, but at how the institutional functions did or did not coordinate with the work being done by people with AIDS to keep themselves going. Mykhalovskiy and Liza McCoy (Mykhalovskiy & McCoy 2002; Bresalier, McCoy & Mykhalovskiy 2002; McCoy 2002) went on to take up with the people receiving treatment how they worked in relation to the health care institutions, and Mykhalovskiy and McCoy (2002) introduced the concept of “healthwork” to locate the distinctive ways in which patients work in relation to these institutions. We might also be interested in taking up research on how the work of people being dealt with institutionally coordinates across what Pence (2009) has described as institutional fragmentation – discontinuities in the demands of different institutional agencies.

As stressed in the Introduction to this book, work organizations that deal with individuals are dealing with the always unique and uniquely situated. Unlike an assembly line that conveys sequences of items at identical stages through standardized processing at each stage and bears the
products of a given stage on into the next, people-work institutions have to deal with an individual who is the same person throughout whatever procedures he or she is subject to. People, perhaps students or patients or clients of other kinds, move between different sites of institutional encounters: they go to class, they sit in emergency departments waiting for treatment, or they follow the relevant coloured line on the floor or wall that leads them to the X-ray department. They manage the fares and scheduling that get them to the welfare department; they turn up, or don’t turn up, in court; they study to take the standardized tests that will be used to evaluate their schools and their teachers; they arrange their meal schedules so that blood and urine tests can be taken at appropriate bodily moments. And so on. Working with people is always in some way or another at odds with producing the standardization imposed by the institutional circuits in which the coordinated work on both sides of the “front line” gets represented as the textual reality.

Concluding the Conclusion

These suggestions have been evoked by the studies in this collection. Reorganization of the public sector in western societies has been happening and is still happening. An important first step is to become at least aware of what is only marginally visible if we watch news programs or read the news on the Internet or in newspapers. In this collection of ethnographies, the extended social relations beyond and embedding front-line work with people, mainly in the public sector, have been investigated by focusing on the technologies contriving institutional circuits and how they organize people’s work. By proposing research aimed at making visible both how such circuits are sourced and designed and the work of the people dealt with in people-work organizations, we are suggesting that these fine ethnographies can become the groundwork for further discoveries and further explications of what is going on largely behind our backs. Though different institutional research sites would make observable different further complexes of relations, building on the research represented in this book would open up the ways that changes at the transnational level, both in economic organization and in governance, translate into changes, direct or indirect, in people’s everyday lives. We would be making visible how our societies are being reorganized within changing complexes of relations that are ordinarily invisible.

REFERENCES

Bresalier, M., L. McCoy & E. Mykhalovskiy. 2002. From compliance to medical practice. In M. Bresalier & the Making Care Visible Group (eds), Making Care Visible: Antiretroviral Therapy and the Health Work of People Living with HIV/AIDS, 65–103. Toronto: Making Care Visible Group (M. Bresalier, L. Gillis, C. McClure, L. McCoy, E. Mykhalovskiy, D. Taylor & M. Webber).
Giltrow, J. 1998. Modernizing authority: Management studies and the grammaticalization of controlling interests. Journal of Technical Writing and Communication 28 (4): 337–58.
Higher Education Strategy Associates (HESA). 2012. Clients. Retrieved from http://higheredstrategy.com/clients/.
McCoy, L. 2002. Dealing with doctors. In M. Bresalier & the Making Care Visible Group (eds), Making Care Visible: Antiretroviral Therapy and the Health Work of People Living with HIV/AIDS, 1–36. Toronto: Making Care Visible Group (M. Bresalier, L. Gillis, C. McClure, L. McCoy, E. Mykhalovskiy, D. Taylor & M. Webber).
Mykhalovskiy, E., and L. McCoy. 2002. Introduction. In M. Bresalier & the Making Care Visible Group (eds), Making Care Visible: Antiretroviral Therapy and the Health Work of People Living with HIV/AIDS, xi–xxi. Toronto: Making Care Visible Group (M. Bresalier, L. Gillis, C. McClure, L. McCoy, E. Mykhalovskiy, D. Taylor & M. Webber).
Pearson, M. 2012. UBC comes in at head of the class among country’s research schools: New ranking system marks efforts and output of Canada’s universities. Vancouver Sun, 29 August: B2.
Pence, E. 2009. (In)visible Workings: Problematic Features Workbook. St Paul: Praxis International.
Smith, D.E. 1987. The Everyday World as Problematic: A Feminist Sociology. Milton Keynes: Open University Press.
Smith, D.E. 2006. Introduction. In D.E. Smith (ed.), Institutional Ethnography as Practice, 1–11. Lanham, MD: Rowman & Littlefield.
Smith, D.E., and S.M. Turner (eds). 2014. Incorporating Texts into Institutional Ethnographies. Toronto: University of Toronto Press.
Smith, G.W., E. Mykhalovskiy & D. Weatherbee. 2006. A research proposal. In D.E. Smith (ed.), Institutional Ethnography as Practice, 165–79. Lanham, MD: Rowman & Littlefield.


Contributors

Marie Campbell, PhD, is Professor Emerita at the Faculty of Human and Social Development, University of Victoria. Her publications include (with Janet Rankin) Managing to Nurse: Inside Canada’s Health Care Reform (2006) and (with Fran Gregor) Mapping Social Relations: A Primer in Doing Institutional Ethnography (2002). She was an international scholar at the American University – Central Asia, Bishkek, appointed through the Open Society Institute.

Michael Corman, MA, is an Assistant Professor of Sociology, School of Nursing, University of Calgary, Qatar. His PhD research explores the social organization of emergency medical services, specifically the work of paramedics. He has taught in the Department of Sociology at both the University of Calgary and Mount Royal University in Calgary, Alberta.

Richard Darville, PhD, is an Associate Professor, School of Linguistics and Language Studies, Carleton University. He has taught literacy in community college and prison programs in British Columbia and has been an activist in practitioners’ and advocacy organizations in British Columbia and nationally.

Marjorie DeVault, PhD, is a Professor of Sociology at the Maxwell School, Syracuse University. She is the author of Feeding the Family: The Social Organization of Caring as Gendered Work (1991) and Liberating Method: Feminism and Social Research (1999), and editor of People at Work: Life, Power, and Social Inclusion in the New Economy (2008).


Lauri Grace, PhD, is a Senior Lecturer in the Faculty of Arts and Educational Studies, Deakin University, Australia. Her publications include Vocational Education in Australia: The Power of Institutional Language (2008).

Alison I. Griffith, PhD, is a Professor in the Faculty of Education, York University, Toronto. She has published (with Dorothy E. Smith) Mothering for Schooling (2004); edited (with C. Reynolds) Education, Equity and Globalisation (2002); and edited (with E. St John and L. Allen-Haynes) Families in Schools: A Chorus of Voices (1997). She is involved in the SSHRC Insight Grant “Schools, Safety, and the Urban Neighbourhood” (2013–18).

Shauna Janz, MA, Department of Sociology, University of Victoria, is a musician and songwriter whose MA thesis research provided the impetus for her chapter in this collection.

Lindsay Kerr, PhD, University of Toronto, completed her PhD in 2010. Her dissertation will be published as Risk & Safety in Schools: Re-regulating Education. She has also published Between Caring & Counting: Teachers’ Take on Education Reform (2006).

Liza McCoy, PhD, is a Professor in the Department of Sociology, University of Calgary. She has published in the social organization of knowledge, which she has approached through investigations in the areas of immigration, health, education, and visual representation.

Karen Melon, MA, School of Nursing, University of Calgary, is a registered nurse certified in emergency nursing and a PhD student at the University of Calgary. Her MA thesis was an extensive institutional ethnographic study that critically examined hospital emergency care and health system restructuring. She is a co-author of “Beat the clock! Wait times and the production of ‘quality’ in emergency departments,” published in Nursing Philosophy (2013).

Naomi Nichols, PhD, is the Postdoctoral Fellow for the Canadian Homelessness Research Network and the Homeless Hub at York University, Principal Investigator on a five-year SSHRC project on youth and community safety, and the co-leader of a knowledge to action project in family health equity at the Hospital for Sick Children in Toronto.
Her book All My Life I’ve Slipped through the Cracks is under review by the University of Toronto Press.

Janet Rankin, PhD, is an Associate Professor in the School of Nursing, University of Calgary. Her book (co-authored with Marie Campbell) Managing to Nurse: Inside Canada’s Health Care Reform (2006) chronicles 30 years of the “managerial turn” in the organization of nursing services. Most recently, Rankin has been exploring the social organization of nursing education at its intersection with nursing practice.

Frank Ridzi, PhD, is an Assistant Professor of Sociology and Director of Urban and Regional Studies, Le Moyne College, New York. He previously served as founding director of the Center for Urban and Regional Applied Research and as Kauffman Entrepreneurship Professor.

Christina Sinding, PhD, is an Associate Professor, Faculty of Social Work, jointly appointed to the Department of Health, Aging and Society at McMaster University. She has published (with R. Gray) Standing Ovation: Performing Social Science Research about Cancer (2002). Her research explores what women diagnosed with cancer do to get (through) cancer treatment and services.

Dorothy E. Smith, PhD, Professor Emerita, University of Toronto, has published numerous papers and several books, starting in 1975 with a volume edited with Sarah David called Women and Psychiatry: I’m Not Mad, I’m Angry, followed by what is probably her best-known work, The Everyday World as Problematic: A Feminist Sociology (1987). Other book publications include The Conceptual Practices of Power: A Feminist Sociology of Knowledge (1990); Texts, Facts and Femininity: Exploring the Relations of Ruling (1990); Writing the Social: Critique, Theory and Research (1999); Institutional Ethnography: A Sociology for People (2005); (with Alison Griffith) Mothering for Schooling (2005); an edited collection of studies by institutional ethnographers, Institutional Ethnography as Practice (2006); and (with David Livingstone and Warren Smith) Manufacturing Meltdown: Reshaping Steelwork (2010).

Betty Tate, RN, BSN, MSN, is retired from the Collaborative Nursing Program at the Comox Valley Campus of North Island Community College, British Columbia.


Murali Venkatesh, PhD, is an Associate Professor and Director, Community and Information Technology Institute, Syracuse University. He has published (with R.V. Small and J. Marsden) Learning-in-community: Reflections on Practice (2003). He was a Senior Research Fellow, Center for Reflective Community Practice, MIT.

Susan Wright, PhD, is a Professor of Anthropology, Aarhus University, Denmark. She has published extensively in the area of anthropology and policy. She edited (with C. Shore and D. Pero) Policy Worlds: Anthropology and the Anatomy of Contemporary Power (2011).

Cheryl Zurawski, PhD, is an academic coordinator in Athabasca University’s Faculty of Business and a sessional lecturer in the University of Calgary’s Faculty of Education. She earned her PhD in education (adult education and human resource development) from the University of Regina in 2012.