International Library of Policy Analysis
Series editors: Iris Geva-May and Michael Howlett, Simon Fraser University, Canada

This major new series brings together for the first time a detailed examination of the theory and practice of policy analysis systems at different levels of government and by non-governmental actors in a specific country. It therefore provides a key addition to research and teaching in comparative policy analysis and policy studies more generally.

Each volume includes a history of the country’s policy analysis which offers a broad comparative overview with other countries as well as the country in question. In doing so, the books in the series provide the data and empirical case studies essential for instruction and for further research in the area. They also include expert analysis of different approaches to policy analysis and an assessment of their evolution and operation.

Early volumes in the series will cover the following countries: Australia • Brazil • China • Czech Republic • France • Germany • India • Israel • Netherlands • New Zealand • Norway • Russia • South Africa • Taiwan • UK • USA and will build into an essential library of key reference works.

The series will be of interest to academics and students in public policy, public administration and management, comparative politics and government, public organisations and individual policy areas. It will also interest people working in the countries in question and internationally.

In association with the ICPA-Forum and the Journal of Comparative Policy Analysis. See more at http://goo.gl/raJUX
POLICY ANALYSIS IN THE UNITED STATES
Edited by John A. Hird
International Library of Policy Analysis, Vol 12
First published in Great Britain in 2018 by Policy Press

Policy Press
University of Bristol
1-9 Old Park Hill
Bristol BS2 8BB
UK
t: +44 (0)117 954 5940
[email protected]
www.policypress.co.uk

North America office:
Policy Press
c/o The University of Chicago Press
1427 East 60th Street
Chicago, IL 60637, USA
t: +1 773 702 7700
f: +1 773 702 9756
[email protected]
www.press.uchicago.edu

© Policy Press 2018

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested.

ISBN 978-1-4473-3382-1 hardcover
ISBN 978-1-4473-4600-5 paperback
ISBN 978-1-4473-4601-2 ePub
ISBN 978-1-4473-4602-9 mobi
ISBN 978-1-4473-3383-8 ePdf

The right of John A. Hird to be identified as editor of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved: no part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise without the prior permission of Policy Press.

The statements and opinions contained within this publication are solely those of the editor and contributors and not of the University of Bristol or Policy Press. The University of Bristol and Policy Press disclaim responsibility for any injury to persons or property resulting from any material published in this publication.

Policy Press works to counter discrimination on grounds of gender, race, disability, age and sexuality.

Cover design by Qube Design Associates, Bristol
Front cover: image kindly supplied by www.istock.com
Printed and bound in Great Britain by CPI Group (UK) Ltd, Croydon, CR0 4YY
Policy Press uses environmentally responsible print partners
Contents

List of tables and figures  vii
Notes on contributors  viii
Editors’ introduction to the series  xiv
Introduction by John A. Hird  1

Part One: History, styles, and methods of policy analysis in the United States  7
one  Policy analysis in the United States, David L. Weimer  9
two  The evolution of the policy analysis profession in the United States, Beryl A. Radin  31
three  The argumentative turn in public policy inquiry: deliberative policy analysis for usable advice, Frank Fischer  55
four  Reflections on 50 years of policy advice in the United States, Laurence E. Lynn, Jr.  73

Part Two: Policy analysis by governments  91
five  The practice and promise of policy analysis and program evaluation to improve decision making within the U.S. federal government, Rebecca A. Maynard  93
six  Policy analysis in the states, Gary VanLandingham  113
seven  Policy analysis and evidence-based decision making at the local level, Karen Mossberger, David Swindell, Nicholet Deschine Parkhurst, and Kuang-Ting Tai  131
eight  Committees and legislatures, Philip Joyce  153

Part Three: Policy analysis outside of government  171
nine  Policy advisory committees: an operational view, Michael Holland and Julia Lane  173
ten  Public opinion and public policy in the United States, Saundra K. Schneider and William G. Jacoby  183
eleven  Political parties and policy analysis, Zachary Albert and Raymond J. La Raja  205
twelve  Policy analysis by corporations and trade associations, Erik Godwin, Kenneth Godwin, and Scott Ainsworth  223
thirteen  Policy analysis and the nonprofit sector, Steven Rathgeb Smith  245
fourteen  The media, Annelise Russell and Maxwell McCombs  265
fifteen  Think tanks and policy analysis, Andrew Rich  281

Part Four: Policy analysis education and impact internationally  295
sixteen  Public policy education in the United States, Michael O’Hare  297
seventeen  The status of the profession: the role of PhD and masters programs in public policy education, Nadia Rubaii  319
eighteen  The influence of policy analysis in the United States on the international experience, B. Guy Peters  339

Index  353
List of tables and figures

Tables
5.1  Selected evidence clearinghouses 2017  102
5.2  Sample snapshots of evidence on the effectiveness of education programs and practices reviewed by the What Works Clearinghouse  103
5.3  Illustrative evidence-guided funding streams  105
6.1  Relative strengths and weaknesses of state-level policy analysis organizations  121
7.1  Local government sources for research on new policies and practices (%)  141
8.1  Congressional staffing, 1979–2009  154
8.2  Selected items from GAO high risk list  161
12.1  Participation levels and likelihood of success of business and citizen organizations  228
17.1  Top 10 masters in public affairs according to USNWR, 2016  330

Figures
7.1  Frequency of policy analysis tool usage (% sometimes or often)  138
7.2  Resource utilization for decisions (% to a fair or large extent)  139
7.3  Influence on outcome of policy/management decision (% to a fair or large extent)  140
7.4  Extent of performance data usage by purpose (% used moderately or extensively)  142
Notes on contributors

Scott H. Ainsworth is Professor and Head of Political Science in the School of Public and International Affairs at the University of Georgia. Professor Ainsworth’s research focuses on rational choice models of politics, policy making, and the intersection of public institutions (Congress, courts, agencies, and so on) and private interests. He is the co-author of Lobbying and Policymaking: The Public Pursuit of Private Interests and Abortion Politics in Congress: Strategic Incrementalism and Policy Change and the author of Analyzing Interest Groups: Group Influence on People and Policies.

Zachary Albert is a PhD student in political science at the University of Massachusetts Amherst. His research examines political parties and polarization, campaign financing, and the development of public policy in an increasingly partisan political landscape, often using network analysis.

Frank Fischer is Professor Emeritus of Politics and Global Affairs at Rutgers University and is currently a research scholar at Humboldt University in Berlin, Germany. Professor Fischer’s research specialties are environmental policy and public policy; U.S. foreign policy; public policy analysis; politics of climate change; and scientific expertise and democratic governance. He is co-editor of the journal Critical Policy Studies, published by Routledge, and editor of the Handbook of Public Policy series for Edward Elgar. Professor Fischer has received a number of awards, including the Policy Studies Organization’s Harold Lasswell award for scholarship in the field of public policy.

Erik Godwin is a lecturer at Brown University’s Taubman Center for Public Policy and American Institutions. He is the co-author of Lobbying and Policymaking: The Public Pursuit of Private Interests. Godwin is also the Director of the Office of Regulatory Reform at the state of Rhode Island’s Office of Management and Budget and an instructor at the U.S. Office of Personnel Management.

Kenneth Godwin is Professor Emeritus of Political Science and Public Administration at the University of North Carolina Charlotte. Previously, he was named the Marshall Rauch Distinguished Professor of Political Science at the University of North Carolina Charlotte and he has served as the Rockefeller Environmental Fellow at Resources for the Future. Professor Godwin is the author or co-author of seven books concerning public policy issues and interest groups, including Lobbying and Policymaking: The Public Pursuit of Private Interests. From 2000 to 2006, he served as the co-editor of Political Research Quarterly.

John A. Hird is Professor of Political Science and Public Policy, and Dean of the College of Social and Behavioral Sciences, at the University of Massachusetts Amherst, where he was the founding director of its Center (now School) of Public Policy. His research focuses on the use of expertise in policy making, and he has authored or co-authored several books, including Superfund: The Political Economy of Environmental Risk and Power, Knowledge, and Politics: Policy Analysis in the States. He is currently engaged in an NSF-funded project on the use of science and other forms of evidence in regulatory policy making.

Michael Holland is the Executive Director of the Center for Urban Science and Progress at New York University. He has a background in research policy and the oversight of federal research programs at the White House Offices of Management & Budget and Science & Technology Policy, the US House of Representatives Science Committee, and the US Department of Energy.

William G. Jacoby is Professor in the Department of Political Science at Michigan State University, Editor of the American Journal of Political Science, and 2017 President of the Southern Political Science Association. Professor Jacoby is also a Faculty Research Associate at the Inter-university Consortium for Political and Social Research (ICPSR), where he is an instructor in, and the former Director of, the ICPSR Summer Program in Quantitative Methods of Social Research. His areas of interest include mass political behavior (public opinion and voting behavior) and quantitative methodology (measurement theory, scaling methods, and statistical graphics).

Philip Joyce is Senior Associate Dean and a Professor of Public Policy in the University of Maryland’s School of Public Policy. Professor Joyce’s teaching and research interests include public budgeting, performance measurement, and intergovernmental relations. He is the author of The Congressional Budget Office: Honest Numbers, Power, and Policymaking and co-author of Government Performance: Why Management Matters and Public Budgeting Systems (9th edition). Professor Joyce is a Fellow of the National Academy of Public Administration. He is the recipient of several national awards, including the Aaron Wildavsky Award for lifetime scholarship in public budgeting and finance.

Julia Lane is Professor in the Wagner School of Public Policy at New York University. She is also a Provostial Fellow in Innovation Analytics and a Professor in the Center for Urban Science and Policy. She also co-founded the Institute for Research on Innovation and Science (IRIS) at the University of Michigan. Professor Lane has published over 70 articles in leading journals, and authored or edited 10 books, including her most recent Big Data and Social Science Research: Theory and Practical Approaches, Privacy, Big Data, and the Public Good: Frameworks for Engagement, and The Handbook of Science of Science Policy (co-edited with Kaye Husbands-Fealing, Jack Marburger, Stephanie Shipp, and Bill Valdez, Stanford University Press, 2011).

Raymond J. La Raja is Professor of Political Science at the University of Massachusetts Amherst. His research interests include political parties, interest groups, elections, campaign finance, political participation, American state politics, public policy, and political reform. Professor La Raja is the co-founder and former co-editor of The Forum: A Journal of Applied Research in Contemporary Politics and he is a member of the Academic Advisory Board of the Campaign Finance Institute. He has co-authored or authored several books, including Campaign Finance and Political Polarization: When Purists Prevail and Small Change: Money, Political Parties and Campaign Finance Reform.

Laurence E. Lynn, Jr. is the Sydney Stein, Jr. Professor of Public Management Emeritus at the University of Chicago. Professor Lynn’s most recent books are Public Management: Old and New, Madison’s Managers: Public Administration and the Constitution (with Anthony M. Bertelli), and a textbook, Public Management: Thinking and Acting in Three Dimensions (with Carolyn J. Hill). For lifetime contributions to public administration research and practice, he was selected as a John Gaus lecturer by the American Political Science Association, received the Dwight Waldo and Paul Van Riper awards from the American Society for Public Administration and the inaugural H. George Frederickson award from the Public Management Research Association, and served as the Charles H. Levine Memorial Lecturer at American University.

Rebecca A. Maynard is University Trustee Chair Professor of Education and Social Policy at the University of Pennsylvania and former Commissioner for the National Center for Education Evaluation and Regional Assistance at the Institute of Education Sciences. Over her career, she has directed many large-scale social experiments to test policies and programs to improve health, education, welfare, and workforce outcomes. Her current research focuses on methods for integrating program evaluation and improvement science and on improving the quality and utility of research syntheses.

Maxwell McCombs is the Jesse H. Jones Centennial Chair in Communication Emeritus in the School of Journalism at the University of Texas at Austin. Professor McCombs is internationally recognized for his research on the agenda-setting role of mass communication, the influence of the media on the focus of public attention. Since the original Chapel Hill study with his colleague Donald Shaw coined the term ‘agenda setting’ in 1968, more than 400 studies of agenda setting have been conducted worldwide. In 2011 Professors McCombs and Shaw were awarded the Helen Dinerman Award of the World Association for Public Opinion Research for their continuing work in this area.

Karen Mossberger is Professor in the School of Public Affairs at Arizona State University. Her research interests include local governance, urban policy, digital inequality, evaluation of broadband programs, and e-government. Her most recent books are Digital Cities: The Internet and the Geography of Opportunity (with C. Tolbert and W. Franko), as well as the Oxford Handbook of Urban Politics (with S. Clarke and P. John), which includes contributions from around the globe.

Michael O’Hare is Professor of Public Policy in the Goldman School of Public Policy at the University of California Berkeley. He has been editor of the Curriculum and Case Notes section of the Journal of Policy Analysis and Management, is currently an associate editor of the Journal of Public Affairs Education, and has published frequently on quality assurance and best practices in professional teaching. Professor O’Hare’s research history has included periods of attention to biofuels and global warming policy, environmental policy generally, including the ‘NIMBY problem’ and facility siting, arts and cultural policy, public management, and higher education pedagogy.

Nicholet Deschine Parkhurst is a PhD student in public administration and policy in the School of Public Affairs at Arizona State University. Parkhurst’s research interests include American Indians, substance abuse prevention, health disparities, tribal governance, tribal consultation, American Indian public policy, and tribal telecom.

B. Guy Peters is the Maurice Falk Professor of American Government at the University of Pittsburgh. He has written extensively in the areas of public administration and public policy, both for the United States and comparatively. Professor Peters’s most recent books include Contemporary Approaches to Public Policy, Handbook of Public Administration in Latin America, and Governance and Comparative Politics. Professor Peters has received the Lifetime Achievement Award from NISPACEE and the Fred Riggs Award for Lifetime Achievement in Public Administration.

Beryl A. Radin is a member of the faculty at the Georgetown Public Policy Institute of Georgetown University. She is Editor of the Georgetown University Press book series Public Management and Change. Her government service included two years as a Special Advisor to the Assistant Secretary for Management and Budget of the US Department of Health and Human Services and other agencies. Much of her work has focused on policy analysis, intergovernmental relationships, and federal management change. Her most recent books are the second edition of her book on policy analysis, Beyond Machiavelli: Policy Analysis Reaches Midlife, and Federal Management Reform in a World of Contradictions. She has published nine other books and a wide range of articles.

Andrew Rich is the Executive Secretary of the Harry S. Truman Scholarship Foundation. As Executive Secretary, he directs the independent federal agency that provides merit-based Truman Scholarships to college students who plan to attend graduate school in preparation for careers in public service. Dr. Rich is the author of Think Tanks, Public Policy, and the Politics of Expertise as well as a number of wide-ranging articles about think tanks, interest groups, foundations, individual donors, and the role of experts and ideas in the American policy process. Before joining the Truman Scholarship Foundation, Dr. Rich was President and CEO of the Roosevelt Institute and Associate Professor and Chairman of the Political Science Department at the City College of New York (CCNY).

Nadia Rubaii is Associate Professor of Public Administration at Binghamton University and Co-Director of the Institute for Genocide and Mass Atrocity Prevention. Dr. Rubaii has extensive consulting experience with local governments, nonprofit organizations, and graduate public affairs programs around the world, with a particular emphasis on Latin America. Dr. Rubaii’s research focuses on how local government managers can more effectively respond to increasing diversity among employees and the population, how public affairs programs can most effectively prepare the next generation of public service professionals, and the opportunities and barriers to international accreditation in public affairs.

Annelise Russell is a PhD student in Government at the University of Texas at Austin. Her research interests include public policy within U.S. institutions, specifically Congress and the media, with an emphasis on how new media platforms serve elite interests. She currently serves as director of the undergraduate program for the Policy Agendas Project, which collects and organizes data to track changes in the national policy agenda.

Saundra K. Schneider is Professor in the Department of Political Science at Michigan State University and the Director of the ICPSR Summer Program in Quantitative Methods of Social Research at the University of Michigan. She is also the 2020 President of the Southern Political Science Association. Her current research examines the policy priorities of the American states, the role of administrative influences in social welfare program developments, the linkages between public opinion and public policy, and the effectiveness of governmental responses to natural disasters. Professor Schneider is the author of Dealing with Disaster: Public Management in Crisis Situations.

Steven Rathgeb Smith is the Executive Director of the American Political Science Association. Previously, he was the Louis A. Bantle Chair in Public Administration at the Maxwell School of Citizenship and Public Affairs at Syracuse University. He also taught for many years at the University of Washington, where he was the Nancy Bell Evans Professor of Public Affairs at the Evans School of Public Affairs and director of the Nancy Bell Evans Center for Nonprofits & Philanthropy. Dr. Smith has authored and edited several books, including Nonprofits for Hire: The Welfare State in the Age of Contracting, Governance and Regulation in the Third Sector: International Perspectives, and Nonprofits and Advocacy: Engaging Community and Government in an Era of Retrenchment.

David Swindell is the Director of the Center for Urban Innovation and Associate Professor in the School of Public Affairs at Arizona State University. His work focuses primarily on community and economic development, especially public financing of sports facilities, the contribution of sports facilities to the economic development of urban space, collaborative arrangements with public, private, and nonprofit organizations for service delivery, and citizen satisfaction and performance measurement standards for public management and decision making. Professor Swindell’s latest book, American Cities and the Politics of Party Conventions (with co-authors Eric Heberlig and Suzanne Leland), will be published in 2017.

Kuang-Ting Tai is a Graduate Research Assistant at the School of Public Affairs and Administration, Rutgers University-Newark.

Gary VanLandingham is Professor and Reubin O’D. Askew Senior Practitioner in Residence at the Reubin O’D. Askew School of Public Administration and Policy at Florida State University. His areas of specialization include evidence-based policy making, program evaluation, policy analysis, performance measurement, and Florida government. Prior to joining the Askew School, Dr. VanLandingham was founding Director of the Pew-MacArthur Results First Initiative, a national initiative that supports evidence-based policy making in states and local governments, and Director of the Florida Legislature’s Office of Program Policy Analysis and Government Accountability.

David L. Weimer is the Edwin E. Witte Professor of Political Economy at the La Follette School of Public Affairs at the University of Wisconsin-Madison. His research focuses broadly on policy craft and institutional design. He has done policy-relevant research in the areas of energy security, natural resource policy, education, criminal justice, and research methods. Professor Weimer is the author of Medical Governance: Values, Expertise, and Interests in Organ Transplantation and Behavioral Economics for Cost-Benefit Analysis: Benefit Validity when Sovereign Consumers Seem to Make Mistakes. He is co-author of Organizational Report Cards, Policy Analysis: Concepts and Practice (6th edition), and Cost-Benefit Analysis: Concepts and Practice (4th edition).
Editors’ introduction to the series

Professor Iris Geva-May and Professor Michael Howlett, ILPA series editors

Policy analysis is a relatively new area of social scientific inquiry, owing its origins to developments in the US in the early 1960s. Its main rationale is systematic, evidence-based, transparent, efficient, and implementable policymaking. This component of policymaking is deemed key in democratic structures, allowing for accountable public policies. From the US, policy analysis has spread to other countries, notably in Europe in the 1980s and 1990s and in Asia in the 1990s and 2000s. It has taken, respectively, one to two more decades for programmes of public policy to be established in these regions, preparing cadres for policy analysis as a profession. However, this movement has been accompanied by variations in the kinds of analysis undertaken as US-inspired analytical and evaluative techniques have been adapted to local traditions and circumstances, and new techniques shaped in these settings.

In the late 1990s this led to the development of the field of comparative policy analysis, pioneered by Iris Geva-May, who initiated and founded the Journal of Comparative Policy Analysis, and whose mission has been advanced with the support of editorial board members such as Laurence E. Lynn Jr., first co-editor, Peter deLeon, Duncan MacRae, David Weimer, Beryl Radin, Frans van Nispen, Yukio Adachi, Claudia Scott, Allan Maslove and others in the US and elsewhere.

While current studies have underlined differences and similarities in national approaches to policy analysis, the different national regimes which have developed over the past two to three decades have not been thoroughly explored and systematically evaluated in their entirety, examining both sub-national and non-executive governmental organisations as well as the non-governmental sector; nor have these prior studies allowed for either a longitudinal or a latitudinal comparison of similar policy analysis perceptions, applications, and themes across countries and time periods. The International Library of Policy Analysis (ILPA) series fills this gap in the literature and empirics of the subject. It features edited volumes created by experts in each country, which inventory and analyse their respective policy analysis systems. To a certain extent the series replicates the template of Policy Analysis in Canada, edited by Dobuzinskis, Howlett and Laycock (Toronto: University of Toronto Press, 2007). Each ILPA volume surveys the state of the art of policy analysis in governmental and non-governmental organisations in each country using the common template derived from the Canadian collection, in order to provide comparability across the series in terms of coverage and approach.

Each volume addresses questions such as: What do policy analysts do? What techniques and approaches do they use? What is their influence on policymaking in that country? Is there a policy analysis deficit? What norms and values guide the work done by policy analysts working in different institutional settings? Contributors focus on the sociology of policy analysis, demonstrating how analysts working in different organisations tend to have different interests and to utilise different techniques. Each volume begins with historical work on the origins of policy analysis in the jurisdiction concerned, and then proceeds to investigate the nature, types, and quality of policy analysis conducted by governments (including different levels and orders of government). It then moves on to examine the nature and kinds of policy analytical work and practices found in non-governmental actors such as think tanks, interest groups, business, labour, media, political parties, non-profits and others. Each volume in the series aims to compare and analyse the significance of the different styles and approaches found in each country and organisation studied, and to understand the impact these differences have on the policy process.

Together, the volumes included in the ILPA series provide the basic data and empirical case studies required for an international dialogue in the area of policy analysis, and an eye-opener on the nuances of policy analysis applications and implications in national and international jurisdictions. Each volume in the series is leading edge and has the promise to dominate its field and the textbook market for policy analysis in the country concerned, as well as being of broad comparative interest to markets in other countries. The ILPA is published in association with the International Comparative Policy Analysis Forum and the Journal of Comparative Policy Analysis, whose mission is to advance international comparative policy analytic studies. The editors of each volume are leading members of this network and are the best-known scholars in each respective country, as are the authors contributing to each volume in their particular domain. The book series as a whole provides learning insights for instruction and for further research in the area and constitutes a major addition to research and pedagogy in the field of comparative policy analysis and policy studies in general.

We welcome to the ILPA series Volume 12, Policy Analysis in the United States, edited by John A. Hird, and thank the editor and the authors for their outstanding contribution to this important encyclopedic database.

Iris Geva-May
Professor of Policy Studies, Baruch College at the City University of New York; Professor Emerita, Simon Fraser University; Founding President and Editor-in-chief, International Comparative Policy Analysis Forum and Journal of Comparative Policy Analysis

Michael Howlett
Burnaby Mountain Professor, Department of Political Science, Simon Fraser University, and Yong Pung How Chair Professor, Lee Kuan Yew School of Public Policy, National University of Singapore
Introduction

John A. Hird

Policy analysis traverses and aims to bridge the space between truth and power. Indeed, policy analysts are trained to be in a position to ‘speak truth to power.’ With its political origins in the United States during the progressive era, the operational origins of policy analysis occurred in the 1950s when both truth and power were more clearly defined and when those in power presumably were interested in truth. Policy analysis emerged from systems analysis, which was applied successfully to clearly delineated problems with unambiguous objective functions.1 The success of systems analysis and related methods gave rise to an early technocratic orientation to policy analysis, extrapolating from relatively straightforward problems to the belief that complex systems, such as the U.S. economy, could be fine-tuned through appropriate government intervention guided by analysis. Good policy outcomes would follow if only ‘the best and the brightest’ wielded the right tools. The country did not go to war, including the ‘war on poverty,’ without thinking it would win, and a belief in analysis bolstered that view.

This volume engages leading U.S. public policy scholars describing changes in policy analysis in the United States over time and in various sectors as well as projecting its future. Representing the U.S. portion of an international series on policy analysis, the first four chapters provide an overview of policy analysis and the profession in the United States in historical, philosophical and, in one instance, personal terms. The next four chapters focus on policy analysis in government, including federal, state, and local, as well as in the U.S. Congress. The following seven chapters outline policy analysis outside of government, involving public opinion, corporations and trade associations, the media, political parties, and so on. The final three chapters focus on public policy education, the role of graduate programs in public policy, and the influence of the United States on the international experience with policy analysis. As the concluding chapter by Guy Peters notes, the formal recognition of policy analysis as a field and profession began in the United States, though it has improved through international exposure and is now arguably more influential in other parts of the world.

The Vietnam war, ‘stagflation,’ the war on poverty, and Watergate diminished confidence in government and served to remind citizens and policymakers that expertise was insufficient to solve complex public problems. This in turn shifted the orientation of policy analysis away from purely technocratic solutions, engendering greater humility in the application of analysis to addressing public problems, not necessarily solving them. The presumption at the time was that both analysts and decision makers needed each other, and because political leaders frequently cooperated across party lines, there was greater potential for persuasion by analysis. The chapters by David Weimer and Beryl Radin trace the lineage of policy analysis and the role of policy analysts from its origins through contemporary use. Laurence Lynn’s chapter is a personal reflection on the development and implementation of policy analysis in the United States over the last 50 years, while Frank Fischer’s chapter outlines an alternative, ‘argumentative,’ role of policy analysis in policymaking. The chapters by Rebecca Maynard, Gary VanLandingham, Karen Mossberger et al., and Philip Joyce outline the use of policy analysis in federal and sub-national governments, while Michael Holland and Julia Lane cover its use in policy advisory committees. This era also saw rapid growth in schools of public policy, established to train policy analysts for agencies throughout federal and sub-national governments.2 The chapters by Michael O’Hare and Nadia Rubaii cover public policy analysis education.

Many became disillusioned that policy analysis in practice failed to live up to its promise. Policy analysis often came to be viewed by policymakers as late, off-target, incomplete, and therefore of questionable value.3 However, the presumption was that policy analysis—if it could overcome these hurdles—would be utilized because decision makers were thought to value analysis in weighing various policy options.4 This argument rests on the assumption that enough voters are paying sufficient attention to link politicians’ positions and votes with policy outcomes, and will therefore hold them accountable. Now, however, we face the prospect that no matter how well policy analysis performs its functions, policymakers may simply disregard analysis without political consequence.

Contingent truth, limited trust in elites, heightened political partisanship, wealth concentration, and technological changes have altered the role of policy analysis. Rapid and free exchange of information was once thought to contribute to an educated citizenry, yet the sheer volume of and ability to target information (and misinformation and disinformation) at the individual level produces an informationally polarized citizenry with seemingly orthogonal information conduits. Coupled with diminished trust in large institutions—the media, government, political parties—the curatorial role that such institutions once played in filtering fiction from fact and offering shared societal understanding and perspectives has faded. Significant partisanship in national policymaking diminishes the role of analysis as well. This volume’s chapters by Saundra Schneider and William Jacoby, Zachary Albert and Raymond La Raja, Annelise Russell and Maxwell McCombs, Steven Rathgeb Smith, Erik Godwin et al., and others underscore these points.

The combination of extreme partisanship in national politics and an under-informed public suggests that the future of policy analysis could be tenuous. If decision makers no longer need, or believe they need, analytical insights to make decisions, and if the public does not hold them accountable for policy outcomes, policy analysis will have limited impact. Policy analysis needs both truth and power to succeed.

Public policy is a massive and complex weave of activities, from various sectoral interventions—private, public, voluntary—to engagement at the local, sub-national, national, and international levels in virtually all areas of human activity. As with any complex system, change in one area engenders changes in another. With current (as of this writing, mid-2017) U.S. national policy holding a decidedly oppositional stance to the use of facts and evidence, not to mention science, in reaching policy decisions, responses have been many, from individuals to governments. For example, scientists held hundreds of marches across the nation on Earth Day in 2017, and reports indicate that more scientists are becoming engaged in public policy and even running for elected office. Court challenges to federal regulations still require confirmation that decisions are not arbitrary and capricious, which in many cases requires establishing fidelity to scientific research findings. More fundamentally, the locus of policy activity in some areas, such as climate change, has shifted within the United States from the federal government to cities and states, and internationally to other nations taking leadership roles. Other responses are likely, including the possibility that partisanship has reached its apex.

Because of the complexity of policy issues coupled with limited understanding of the impacts of various interventions, policymakers will always make decisions under conditions of uncertainty and ambiguity. Science is far better at illuminating problems than determining solutions; climate change and obesity are but two recent examples. Policy analysis—analyzing the disparate impacts of various means to address public problems—is exceedingly difficult and, because it is not scientific, has not been a research focus in the academy. Instead, as several chapters in this volume show, various forms of policy analysis emerge not mainly from government agencies or the scientific community but from lobbyists, trade associations, ‘think tanks,’ corporations, and other institutions that are rarely disinterested.

Several chapters also underscore inequities in the production of policy analysis. The chapter by Erik Godwin et al. highlights how business interests dominate the regulatory comment process, fueled by deep pockets and analytical expertise unmatched by citizen groups. Zachary Albert and Raymond La Raja note how increasing partisanship throughout a candidate’s career leads to political actors receiving information and analysis from divergent ideological sources. As well, increasing concentration of wealth means that well-heeled donors on the Left and Right can create ‘think tanks,’ publishing houses, and other outlets for specific ideological causes. This creates the potential for policy analysis to tip toward moneyed interests to the detriment of the broader public interest and can potentially exacerbate widening inequalities. Andrew Rich notes the influence of money on think tanks, once the home of frequently disinterested analysis and now far more ideological and politically conservative than in the past.

Policy analysis is as important as ever in the face of ‘alternative facts’ and frequent repudiation of evidence and science by political leaders. Disinterested analysis of policy proposals is critical to effective governance; U.S. Congressional Budget Office (CBO) budget scores still affect Congressional deliberations, even when the results are not congenial to the party in power. The need to distinguish fact from fiction and provide policymakers with dispassionate analysis is all the more important with the cacophony of voices hawking policy ideas. As several chapters note, recipients of policy analysis are not only elected officials, but also political parties, nonprofits, and other institutions in the policy sphere.
Yet paradoxically, today’s political environment, where ‘truth’ is frequently unmoored from facts and analysis is disparaged, also challenges the future of disinterested policy analysis. Policymakers and advocates have always cherry-picked analysis compatible with their understanding of how to change the world, but the outright rejection of evidence, analysis, and science should worry all proponents of democracy. The willful rejection of facts by some national political leaders as well as the rejection of broad scientific consensus forecloses the demand for policy analysis and serious political debate. Policy analysis has over the past 60 years moved from the expectation that truth would help define if not dictate policy outcomes to the point where some in power openly disregard evidence, analysis, and science. Whether policy analysis is ignored, becomes another tool to advance particular interests, or retains its original commitment to serve the broad public interest will depend on what policymakers seek from analysis. Speaking truth only matters if those in power are listening.

Notes

1 Stokey and Zeckhauser (1978) outline many policy analytical methods taught as part of policy analysis training. Earlier volumes include Hitch (1955), Eckstein (1958), McKean (1958), Hitch and McKean (1965), Schlesinger (1968), Schultze (1969), and Rivlin (1971).

2 Earlier policy school graduates worked mostly in the public and nonprofit sectors. Subsequently, and to the consternation of many concerned that the ethic of public service was being lost, graduates increasingly worked in the private sector as well. See the chapter by Rubaii in this volume.

3 Despite initial expectations, policy analysis rarely if ever determined policy outcomes. Policymakers on the Left and Right have long ignored analysis when it was inconvenient, such as President Johnson ignoring cost estimates of Medicare or Presidents Bush and Trump repudiating the implications and even veracity of climate science. Indeed, attempts to demonstrate the direct utilization of traditional policy analysis in driving policy decisions have found few examples (Shulock 1999; Hird 2017).

4 Policymakers are presumed to demand policy analysis because they seek to understand the anticipated outcomes of proposed policies since these policies will affect their constituents (Lupia and McCubbins 1994). It does not depend on policy makers’ concern for the public interest, though of course sometimes that is the case. Policymakers have an electoral self-interest in understanding the impacts of proposed policies.
References

Eckstein, O. (1958) Water Resource Development: The Economics of Project Evaluation, Cambridge, MA: Harvard University Press.
Hird, J. A. (2017) ‘How Effective Is Policy Analysis?’ in L. S. Friedman (ed) The Effectiveness of Public Policy Analysis, Berkeley, CA: University of California Press, pp 44–84.
Hitch, C. (1955) ‘An Appreciation of Systems Analysis,’ Journal of the Operations Research Society of America 3(4): 466–81.
Hitch, C. J. and McKean, R. N. (1965) The Economics of Defense in the Nuclear Age, New York: Atheneum.
Lupia, A. and McCubbins, M. D. (1994) ‘Who Controls? Information and the Structure of Legislative Decision Making,’ Legislative Studies Quarterly 19: 361–84.
McKean, R. N. (1958) Efficiency in Government Through Systems Analysis, New York, NY: J. Wiley.
Rivlin, A. (1971) Systematic Thinking for Social Action, Washington, DC: Brookings Institution.
Schlesinger, J. R. (1968) Defense Planning and Budgeting, Santa Monica, CA: RAND.
Schultze, C. (1969) The Politics and Economics of Public Spending, Washington, DC: Brookings Institution.
Shulock, N. (1999) ‘The Paradox of Policy Analysis: If It Is Not Used, Why Do We Produce So Much of It?’ Journal of Policy Analysis and Management 18(2): 226–44.
Stokey, E. and Zeckhauser, R. (1978) A Primer for Policy Analysis, New York: W.W. Norton & Company.
Part One: History, styles, and methods of policy analysis in the United States
ONE
Policy analysis in the United States

David L. Weimer
A narrow definition of policy analysis as a professional activity emphasizes client orientation and public perspective: ‘policy analysis is client-oriented advice relevant to public decisions and informed by social values’ (Weimer and Vining 2011, p 24). A full assessment of the role of policy analysis in the United States requires a somewhat broader definition that encompasses policy research by relaxing the requirement that the analysis be for a specific individual or institutional client. So, for purposes of this review, I define policy analysis as professionally provided advice relevant to public decisions and informed by social values. In addition to government employees who occupy positions specifically labeled as policy analyst, I include other public officials, such as public managers, who devote some of their professional time to policy analysis, as well as persons in the for- and nonprofit sectors whose professional duties involve providing policy-relevant advice. Under this definition many people in the United States work as policy analysts.

What factors have shaped the emergence of policy analysis as a common professional activity in public life?1 In this introductory chapter, I identify what I believe to be the four primary sources of demand for policy analysis in the United States: first, evidence-based support for reform (or advocacy); second, specialized expertise to inform legislative and executive decisions; third, a response to the increasing scope and complexity of government involvement in society; and fourth, a mechanism for politicians to discipline themselves and others from pursuing politically attractive policies that do not promote the public interest. For each of these sources of demand I briefly discuss its historical roots and sketch its implications for contemporary practice. I also note the incentives these demands have created for higher education to produce people with relevant policy-analytic skills.
Demand arising from the desire for reform (advocacy)

The United States has a plethora of nonprofit organizations with missions to bring evidence to bear in specific policy areas or in support of various governmental ‘reforms.’ As one person’s desired reform may be another person’s feared folly, perhaps advocacy would be a better term. Nonetheless, in retrospect, I believe that reform is the appropriate term for the first such organizations, the Municipal Research Bureaus, the first American ‘think tanks.’ Although reform remains a generally appropriate description of the goals of contemporary think tanks, increasing ideological differentiation sometimes brings competing reforms into conflict.

By the beginning of the 20th Century, the Progressive movement had scored a number of electoral victories in major U.S. cities. However, these victories were often reversed in subsequent elections by local political machines and consequently did not result in lasting improvements in terms of reduced corruption in, and increased efficiency of, municipal governments. An alternative approach, proposed by progressive reformers and funded originally by wealthy philanthropists, was the creation of organizations to gather data about municipal services and propose changes to operations that would make them more efficient and less prone to corruption. The first such organization, financed by the likes of Andrew Carnegie and John D. Rockefeller, was founded in 1906 in New York (Schacter 1995). It became the model for similar organizations in other cities (Gill 1944; Schacter 2002). Within 10 years, the model had spread to 21 major cities under names such as the Citizens’ Bureau in Milwaukee, the Civic Federation in Chicago, and the Survey Commission in Atlantic City (Gill 1944).

A primary tool of the bureaus was the systematic collection of data about the operation and organization of city governments through ‘surveys,’ which have been described as the empirical basis for the emerging field of public administration (Bertelli and Lynn 2006). These surveys were extensive. For example, the newly created Rochester Bureau of Municipal Research, supported by George Eastman and other local philanthropists, commissioned the New York Bureau of Municipal Research to do a survey of the City of Rochester government. The resulting report of 546 pages was packed with facts about municipal operations and recommendations to support its stated goals: ‘1. To get things done for the community through cooperation with persons who are in office, by increasing efficiency and eliminating waste. 2. To serve as an independent non-partisan agency for keeping citizens informed about the city’s business’ (New York Bureau of Municipal Research 1915).

The bureaus were certainly advocates for efficiency. However, they claimed that they were not trying to influence broad policy, but rather, foreshadowing contemporary rhetoric about evidence-based policy making, that they were providing neutral facts and advice aimed at improving efficiency in the administration of policy. Nonetheless, advice about how to reorganize and staff municipal governments clearly meets the definition of policy analysis, both narrowly focused on operational issues and broadly focused on institutional design. Further, ‘… while the bureau movement is based upon the fact-finding approach, bureaus must not be regarded as mere thinking machines. As the director of the Detroit agency has said: “The bureaus think, dream, consult, stimulate, educate, irritate and somehow change governmental outlook and public conscience of the city if they are good”’ (Gill 1944, p 37).

In addition to the spread of private municipal bureaus, a number of cities created their own units, usually called efficiency bureaus, to do research on city operations (Lee 2008). One of the earliest examples was a research agency established in Milwaukee by its socialist administration in 1910 (Gill 1944). The efficiency bureaus conducted studies similar to the surveys done by municipal bureaus and many evolved into the municipal offices that currently advise city officials on budgets and other fiscal matters.

Expanding the perspective beyond specific cities, a number of think tanks, now commonly defined as public policy research, analysis, and engagement organizations (McGann 2005), were created to provide research relevant to national policy. In 1916, the Brookings Institution began providing fact-based research relevant to national policy issues. Subsequently, a number of other think tanks were created, including the National Bureau of Economic Research in 1920, the American Enterprise Institute for Public Policy in 1938, Resources for the Future in 1952, and the Urban Institute in 1968. The growth of think tanks in the United States has accelerated over the last 20 years, with the total now exceeding 1,830 and almost 400 located in Washington, D.C. (McGann 2016, p 31). Although many of the early think tanks were created at least in part to advocate for some type of reform, until the 1980s most of the prominent think tanks could be described as either universities without students or contract research organizations, with the former providing more policy research and the latter more policy analysis (in the narrow definition). However, since then a third type, the advocacy tank, has proliferated. Advocacy tanks

combine a strong policy, partisan or ideological bent with aggressive salesmanship and an effort to influence current policy debates. Advocacy tanks synthesize and put a distinctive ‘spin’ on existing research rather than carrying out original research. What may be lacking in scholarship is made up for in their accessibility to policymakers (Weaver 1989, p 567).

Advocacy tends to be along identifiable ideological lines (McGann 2005, pp 11–13): conservative (for example, Heritage Foundation, American Enterprise Institute, Manhattan Institute), libertarian (for example, Cato Institute, Reason Foundation), center-Right (for example, Center for Strategic and International Studies), centrist (for example, Resources for the Future, National Bureau for Economic Research, Public Policy Institute of California), center-Left (for example, Urban Institute, Brookings Institution), and Left (for example, Center on Budget and Policy Priorities, Center for American Progress).

The growth and increased public and political prominence of think tanks have both positive and negative implications for policy analysis. On the positive side of the ledger, they provide venues for policy research and analysis more directly relevant to public policy issues than is often the case with academic policy research, which is usually driven by disciplinary rewards for generality rather than relevance to public policy decisions. Their sheer number also creates many opportunities for policy analysts to work outside of the constraints imposed by government employment. Their incentives to please donors and gain favorable media coverage encourage them to invest in translating research and analysis into information directly relevant to policy decisions and disseminating it widely.

On the negative side of the ledger, ideological orientations not adequately disciplined by widely accepted research standards may undercut the credibility of analysis generally and lead to ‘dueling studies’ rather than a search for the best available answers to policy-relevant questions. Concern about the reaction of potential donors may also affect think tank agendas. It has even been suggested that the dominance of think tanks at the ‘crossroads of the academic, political, economic, and media spheres’ has ‘undermined the value of independently produced knowledge ... by institutionalizing a mode of intellectual practice that relegates its producers to the margins of public and political life’ (Medvetz 2012, p 7). However one tallies the ledger, think tanks certainly play an important role in policy analysis in the contemporary United States.
Demand arising from the need for expertise From its very beginning, the executive branch of the federal government sought advice from committees of citizens—President Washington convened the Whiskey Rebellion Commission, among others, to supplement the meager capacity of the fledgling executive branch for gathering and applying information to policy problems (Smith 1992, p 14). As early as 1842, Congress attempted to limit expenditures on the numerous advisory committees that had been created, but use of advisory committees continued to grow in step with the expanding role played by the federal government in the economy (Croley and Funk 1997). By the time the current framework requiring that advisory committee membership be balanced, their functioning transparent, and their durations limited was put in place by the Federal Advisory Committee Act (P.L. 92-463) in 1972, the federal government had almost 1,500 advisory committees. The number had fallen to approximately 800 by 1978 (Petracca 1986, p 85) but had grown to over 900 by 2008 (U.S. GAO (Government Accountability Office) 2008, p 1) and currently exceeds 1,000 (U.S. GSA [Government Services Administration] 2015). A driving force for the use of advisory committees was the difficulty of maintaining specialized expertise within the bureaucracy, initially because of its limited capacity overall and later because civil service rules made hiring and firing more difficult and therefore less amenable to adding personnel with needed expertise quickly. The need for such expertise was especially urgent during wartime. In 1863, during the Civil War, Congress created the National Academy of Science at the request of President Lincoln to advise federal departments on scientific matters. The increasing demand for scientific expertise during World War I led to the creation of the National Research Council to provide administrative infrastructure for the National Academy of Science in 1916. Increasingly, the National Research Council, which now serves the National Academy of Science, the National Academy of Engineering, and the Institute of 12
Policy analysis in the United States
Medicine, undertakes projects requested by Congress as well as by administrative agencies (Morgan and Peha 2003). These projects often involve the production of policy analyses that assess social problems and compare possible alternative policies for addressing them. World War II created demand not just for expertise drawn directly from the natural sciences, but also for expertise on how to use scarce resources more efficiently both across and within organizations. This expertise, drawing on mathematics, statistics, and the new game theory, eventually became known as operations research. Motivated by the analytical successes of the British ‘Blackett’s circus’ during the Battle of Britain, operations research units were set up throughout the U.S. military (Fortun and Schweber 1993). They worked on topics ranging from intelligence questions, such as how to use serial numbers on captured equipment to estimate enemy rates of production (Champkin 2013), to operational issues, such as the size of convoys that would maximize the ratio of U-boat losses to merchant ship losses (Hitch 1953). The U.S. military maintained capacity in operations research following the War. It also became the primary client for newly established contract research organizations such as the RAND Corporation. Analysts in the Pentagon and the RAND Corporation were resources for the implementation of Program Planning and Budgeting Systems (PPBS) in the Department of Defense, which focused attention on programs rather than categories of inputs. For example, rather than focusing budgeting on categories of expenditures, such as personnel and equipment aggregated across programs, PPBS sought to organize budgets by purposes, such as air defense or logistical support. It was initiated in 1961 by Secretary Robert McNamara, who had been an operations researcher during the War and later President of the Ford Motor Company. The apparent success of PPBS in the Defense Department led President Johnson in 1965 to ask all agency heads to implement it. It had at best modest success, largely because the circumstances that yielded benefits in the Defense Department, including analytical capacity, relevant information, and clear goals, generally were not present in most other federal agencies at the time (Wildavsky 1969). Nonetheless, failed efforts to implement PPBS and its successors, such as Zero-Based Budgeting during the Carter Administration, required the addition of technically trained personnel, some of whom outlasted these top-down initiatives and contributed to the development of analytical capacity within agencies in the longer run. (See also the chapter by Lynn in this volume.) Several factors encouraged state and local governments to adopt PPBS (Harty 1971). The Ford Foundation funded the George Washington University StateLocal Finances Project, which worked with five states, five counties, and five cities to assess the feasibility and desirability of adopting PPBS. This effort was subsequently supported by the Department of Housing and Urban Development and encouraged through incentives in federal legislation. As was the case at the federal level, these efforts generally did not implement the full PPBS model and appear to have produced modest improvements in budgeting processes but 13
relatively little increase in general capacity in policy analysis (Hatry 1971, Exhibit 1).
Another sort of expertise began to be demanded by the federal government in the late 1960s as those implementing President Johnson’s Great Society searched for ways to address poverty. Large-scale social experiments, with random assignment of participants into treatment and control groups, offered the possibility of assessing whether programs could produce desired outcomes. Beginning with the New Jersey Income Maintenance Experiment in 1968, federal agencies have funded contract research organizations to conduct over 240 major social experiments (Greenberg and Shroder 2004). These experiments require expertise in research design, implementation, and statistical analysis, within both the contracted organizations and the bureaus that conceive, select, fund, and monitor them. Beyond the specific information these experiments created, they also contributed to the general capacity of the federal government to do and support policy research.
The New Deal greatly expanded the role of the federal government in the economy, both through programs implemented by existing federal bureaus, such as the Department of Agriculture, and newly created agencies, and through increased economic regulation by the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC), which replaced the Bureau of Corporations, and the Federal Communications Commission (FCC). In the 1970s, the federal role in health and safety, so-called social regulation, was expanded through the creation of the Occupational Safety and Health Administration (OSHA), the Consumer Product Safety Commission (CPSC), and the Environmental Protection Agency (EPA). The federal website (www.regulations.gov), created to increase the transparency of rulemaking, now carries notices, proposed rules, final rules, and other materials related to rulemaking for approximately 300 federal agencies.
The expanded federal role required personnel to fill in the details of laws passed by Congress and executive orders issued by the president. The expansion of federal rulemaking required agencies to add personnel with substantive and legal expertise to design and justify rules. These regulators often served as policy analysts, diagnosing problems with existing rules, proposing alternatives, and assessing the alternatives in terms of legislative (and agency) goals. Although the Administrative Procedure Act of 1946 (P.L. 79-404) regularized the process of notifying potentially interested parties through the general requirement for the publication of proposed rules in the Federal Register and accepting the comments of these stakeholders before the finalization of rules, much of the analysis of proposed rules, especially those proposed by agencies rather than independent commissions, was and is typically done in camera, or, when open, influenced by informal and idiosyncratic public participation (West 2009; Yackee 2012). This lack of transparency tends to obscure the analysis that underlies public proposals. Nonetheless, most rules are public policies that require at least minimal analysis if only in descriptive comparison with current policy.
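The random-assignment experiments described above rest on a simple core calculation: a difference in mean outcomes between treatment and control groups, with a standard error to convey sampling uncertainty. The following minimal sketch (in Python, with invented outcome data; real experiments must also contend with attrition, covariate adjustment, and far larger samples) illustrates that calculation:

```python
# Illustrative only: difference-in-means estimate from a randomized
# experiment, with a conventional large-sample standard error.
# The outcome data below are invented.
import math

treated = [412, 388, 455, 430, 401, 467, 395, 441]
control = [385, 362, 410, 398, 374, 405, 380, 391]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):  # sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

effect = mean(treated) - mean(control)
se = math.sqrt(variance(treated) / len(treated) +
               variance(control) / len(control))

print(f"estimated treatment effect: {effect:.1f}")
print(f"approximate 95% confidence interval: "
      f"({effect - 1.96 * se:.1f}, {effect + 1.96 * se:.1f})")
```

Because assignment to the two groups is random, the difference in means is an unbiased estimate of the program’s average effect; much of the specialized expertise described above lies in preserving that simple logic against the practical threats that arise during implementation.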
The American states also regulate. Indeed, they created agencies to regulate the professions, corporations, the insurance industry, public utilities, and some natural resources during a period when the federal role was limited largely to the regulation of railroads by the Interstate Commerce Commission. Although federal regulation has preempted the states in areas such as transportation and finance, states continue to regulate but they vary considerably in terms of the professional resources they make available for the task (Teske 2004). Often their work is supported by national associations, such as the National Association of Insurance Commissioners, that provide policy research relevant to the work of the state regulators.
In addition to augmenting in-house expertise through the use of advisory committees, many federal and some state agencies extend their analytical capabilities through contracts with consulting firms. The contracts allow agencies to procure the services of variously skilled personnel on a temporary basis. At one extreme, the contracted personnel may work independently to deliver a specific product, such as a policy analysis or a database. At the other extreme, contracted personnel may perform ongoing tasks as if they were part of the agency staff. In either case, the writing and monitoring of effective contracts requires some in-house expertise. Indeed, an irony of such privatization of analysis is that, for it to be effective, agencies may have to develop advanced expertise not only in contract management, but also in the substance of the issue (Vining and Weimer 1990).
In addition to the need to secure specialized expertise quickly, occasional agency downsizing and routinely strict limits on numbers of employees have also driven contracting for analysis. In periods of retrenchment, agencies often find it easier, or at least faster, to contract for expertise than to retrain employees, especially when confronted with new issues or policy responsibilities. The increasing use of such contracting, variously referred to as ‘third-party government, the hollow state, the shadow bureaucracy, the blended public workforce, articulated chains of third parties, public programs and their partners, steering rather than rowing,’ raises general concerns about accountability within U.S. public administration (Dubnick and Frederickson 2010, p i143). Specifically with respect to policy analysis, it means that the content of many analyses nominally done by government agencies may actually have been produced by private entities. It also means that any counts of policy analysts within the federal and state bureaucracies would likely grossly underestimate the number of people actually doing policy analysis on their behalf.
In a few cases, Congress has delegated analytical responsibility to nongovernmental actors. These delegations tend to be in policy areas, such as health, characterized by rapid technological or scientific advances and substantial complexity, which places a premium on the tacit knowledge of those actually working in the area. For example, Congress gave the Advisory Committee on Immunization Practices authority to specify the vaccines covered by the Vaccines for Children Program through which the Centers for Disease Control and Prevention make bulk
purchases of vaccines for distribution to state and local public health agencies (Weimer 2010a). The difficulty civil service systems face in maintaining specialized, cutting-edge expertise within the bureaucracy provides a rationale for delegation. For policies from which there will be identifiable losers, such as children who have negative side-effects from vaccination or the allocation of very scarce transplant organs, legislators may seek to insulate themselves from demands by delegating policy design to groups of professionals and other stakeholders rather than to a bureau that constituents would expect their representatives to influence on their behalf (Weimer 2006).
A prominent example of such stakeholder rulemaking is the Organ Procurement and Transplantation Network (OPTN), which has responsibility for developing the content of rules with literally life-and-death consequences for the procurement and allocation of cadaveric organs. The rules must be issued by the Department of Health and Human Services to have the force of law. However, no substantive rules have ever been issued by the department. Instead, the rules gain force because transplant centers must be members of the network in good standing to receive Medicare funding, and good standing in turn requires adherence to the rules. Participants in the OPTN draw on a near-universal longitudinal database covering all entrants to the transplant waiting lists, as well as on their own tacit knowledge about transplantation, to analyze rules continually and to change them incrementally (Weimer 2010b).
The increasing demands for specialized expertise have resulted in many individuals and organizations outside of government participating in policy analysis through advisory committees, contracts, and delegated responsibilities. This participation has been both a substitute for and a complement to analytical capacity and production within governments. At least at the federal level, on net it almost certainly has contributed to an increase in the quantity and technical sophistication of policy analysis done on a routine basis by government agencies over the last 50 years.
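Before turning to the demand arising from complexity, a small illustration may convey the flavor of the wartime operations research described at the beginning of this section. The serial-number problem noted above (Champkin 2013) has a famously simple frequentist solution: if k serial numbers have been observed and the largest is m, the total produced is estimated as m + m/k - 1. A minimal sketch in Python, using hypothetical captured serial numbers:

```python
# Illustrative only: the 'serial number' estimator that operations
# researchers applied to captured equipment to gauge enemy production.
# With k observed serial numbers and sample maximum m, the standard
# estimate of the total produced is m + m/k - 1.
def estimate_total(serials):
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

observed = [47, 112, 219, 81, 154]   # hypothetical serial numbers
print(f"estimated production run: {estimate_total(observed):.0f}")  # about 262
```

The intuition is that the average gap between observed serial numbers estimates the gap between the sample maximum and the true total, so the estimate improves rapidly as more equipment is captured.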
Demand arising from the need to manage complexity
The expanding role of government in the economy created a demand not only for specialized expertise, but also for analysts to support management of the concurrent complexity of public policy. Both legislators and executives required information to construct and execute increasingly complicated budgets. New programs often gave considerable discretion to public administrators, which required them to consider alternatives for implementation. As dramatically evidenced recently by the Patient Protection and Affordable Care Act 2010 (P.L. 111-148), complexity continues to arise from the shared governance required when the federal government attempts to induce change in policies under the direct purview of the states (Haeder and Weimer 2013).
General analytical support for executives and legislatures
The modern era of efforts to create general analytical capacity within the federal government followed the revelation of limited administrative capacity during World War I. In 1921 Congress created twin agencies, the Bureau of the Budget (now the Office of Management and Budget) and the General Accounting Office (now the Government Accountability Office), to provide neutral and competent support for improving federal fiscal policy (Mosher 1984). The Bureau of the Budget originally focused exclusively on reining in agency budgets, a sometimes politically difficult task when agencies had strong congressional constituencies, but later developed roles as ‘referee’ and ‘whipping boy.’ The former involved resolving jurisdictional disputes among agencies; the latter involved cutting earmarked funds of questionable value that were inserted in the budget to please particular constituencies—members of Congress could get credit for the insertions and place blame on the Bureau of the Budget for the removals (Wolf 1999). In 1970 the Bureau of the Budget was reconstituted as the Office of Management and Budget. An increase in the number of political appointees undercut its reputation for neutrality but did result in it playing a greater analytical role in policy development for successive administrations. It currently employs about 470 people.
The General Accounting Office (GAO) initially served as an auditor of the financial transactions of agencies. The immense volume of such transactions during World War II led it to refocus on agency procedures and practices, providing Congress with information about agency operations that would otherwise be opaque, typically through publicly available policy-analytic reports that included recommendations. Congress reinforced the policy analysis function of the GAO with the Legislative Reorganization Act of 1970 (P.L. 91-510), which calls for the GAO to conduct program evaluations and cost-benefit analyses. The office currently employs almost 3,000 people, many of whom do policy analysis. In a typical year, it makes almost 1,500 specific recommendations to improve government operations—it reports that 79% of the recommendations it made in fiscal year 2011 were implemented by the end of fiscal year 2015 (U.S. GAO 2016, p 31). Although the GAO certainly has been influential in many policy areas, its quantitative claims of efficacy may be too optimistic because recommendations often come from the agencies whose programs are being evaluated.
The Legislative Reorganization Act also gave more independence and greater responsibility for providing policy analysis in support of congressional committees to the Legislative Reference Service of the Library of Congress and renamed it the Congressional Research Service (Kravitz 1990). The Congressional Research Service (CRS), which employs about 600 people, conducts analyses for both congressional committees and individual members of Congress. Its relatively low profile but high level of service to members of Congress has contributed to its stability, which in turn enables it to serve as a repository of ‘institutional memory.’ In fiscal year 2015, it reported that its staff responded to ‘62,000 individual
requests; hosted over 7,400 congressional participants at seminars, briefings and trainings; provided over 3,600 new or refreshed products; and summarized over 8,000 pieces of legislation’ (U.S. CRS (Congressional Research Service) 2016, p iii). Unlike the GAO, the CRS generally does not make recommendations and only selectively makes its reports public.
Battles with the Nixon Administration over the impoundment of congressionally appropriated funds led Congress to strengthen its capacity for analyzing budgets. It created the Congressional Budget Office (CBO) through the Congressional Budget and Impoundment Control Act of 1974 (P.L. 93-344). The CBO is organized into eight divisions. Of these, by far the largest is the Budget Analysis Division, which analyzes the president’s budget and scores the fiscal implications of proposed legislation, among other tasks. Of the approximately 235 employees, over half hold positions with the formal title of analyst, with most focusing on budgetary and fiscal impacts of policies.
A fourth organization, the Office of Technology Assessment (OTA), provided Congress with analyses of issues with major technological or scientific components. Created in 1972 (P.L. 92-484), it was eliminated in 1995 along with reductions made to the staffs of all of the congressional support agencies after Republicans gained a majority in the House of Representatives after many years in the minority and wished to signal their commitment to deficit reduction. One factor in the demise of the OTA may have been that Congress could easily obtain substitute analyses from universities, think tanks, and professional societies; another factor may have been that its analyses, often done by analysts without deep connections to those working in substantive areas, sometimes challenged the preferred policies of powerful interests, undercutting the support from representatives sympathetic to these interests who did not rely on the OTA for the sorts of ongoing support provided by the other three congressional support organizations (Bimber 1996).
It appears that the GAO, the CRS, and the CBO have succeeded in, and established reputations for, providing analyses that embody neutral competence, which Kaufman (1956, p 1060) defined as the ‘ability to do the work of government expertly, and to do it according to explicit, objective standards rather than to personal or party or other obligations and loyalties.’ Unlike the administrative agencies that bear responsibility for advancing the president’s policy agenda, these congressional agencies are designed to serve both the majority and minority parties in each chamber. As the control of the chambers can change as often as every two years, neutrality, and the perception of neutrality, reduces the chances that changes of partisan control will threaten the existence of the agencies. The perception of neutrality is furthered by stable professional staffs and by the appointment of nonpartisan heads with long terms (Weimer 2005). For example, the Director of the CBO is appointed for a term of four years and the Comptroller General, who heads the GAO, is appointed for a term of 15 years. Although the Director of the CRS does not have a fixed term, appointment is made by the Librarian of Congress in consultation with the Joint Committee on the Library rather than directly by congressional leadership (Brudnick 2013).
These organizations contribute to the policy analysis that ‘… clearly does flow through congressional communication networks’ (Whiteman 1995, p 181).
The states have created agencies to support legislative work analogous to those serving the federal government. In some cases the states actually led the way. The first professional, nonpartisan organization to provide drafting and research support to a state legislature was the Wisconsin Legislative Reference Bureau, which was created in 1901. Only a few states followed during the next 40 years, but in 1941 California created the Legislative Analyst’s Office, which has substantial analytical capacity and served as a model for the creation of similar offices in other states as well as for the CBO. In addition to the sorts of services provided by the CBO and the CRS, it also provides institutional memory relevant to policy debates, a function that has increased in importance with the adoption of term limits for legislators (Radin 2013). Based on information provided by the Council of State Governments and the National Conference of State Legislatures, Hird (2005) identified 105 nonpartisan agencies providing analytical and research support to state legislatures. Of 80 agencies responding to questions about their activities, 81% reported policy analysis as a major activity.
A particularly influential organization supporting policy making by a state legislature is the nonpartisan Washington State Institute for Public Policy (WSIPP). Beginning with its efforts to identify and assess alternatives to prison expansion, WSIPP has conducted numerous meta-analyses of evaluations of various types of programs and uses the estimated effect sizes to conduct sophisticated and influential cost-benefit analyses of alternative portfolios of programs for the Washington State legislature (Vining and Weimer 2009). Over the last four years, 23 states have adapted the WSIPP tool for their own use through the Results First Initiative,2 a joint effort of the MacArthur Foundation and the Pew Charitable Trusts. The initiative helps states organize stakeholders as a constituency for the use of cost-benefit analysis and provides technical assistance for implementing the WSIPP tool with state-specific data.
Some larger cities also have created nonpartisan organizations that provide analysis across the range of substantive issues. The most prominent example is the Independent Budget Office of the City of New York. Created in 1989 during a major revision of the city charter, it produces issue briefs and other policy analyses in addition to carrying out its responsibilities for fiscal forecasting and providing assessments of the mayor’s preliminary and final budgets. Although a general assessment of the prevalence of such agencies at the local government level is not available, it does appear that the related activity of performance measurement is pervasive in local government departments (Melkers and Willoughby 2005).
Analytical support within agencies
All the major federal agencies have offices that provide policy analysis in support of the agencies’ missions. Prior to World War II, agency heads generally relied on their legal staffs for agency-wide policy analysis; subsequently, they began
creating offices that specialized in providing policy analysis. For example, during the 1930s the Secretary of the Interior relied on the Office of the Solicitor for policy analysis; it was not until 1947 that an office with departmental-wide responsibility for policy analysis was created and eventually became the Office of Policy Analysis in 1974 (Nelson 1989).
The coordinating role at the department level is especially important. For instance, consider the Office of the Assistant Secretary for Planning and Evaluation in the large and complex Department of Health and Human Services. Analysts play a role in policy development, including the construction of alternatives and the assessment of their consequences. In addition to also playing roles in oversight and research, other obviously analytical roles, and ‘firefighting’—the immediate response to requests for information relevant to urgent issues—guidance for new employees identifies the role of ‘desk officer,’ which involves coordinating policy relevant to particular programmatic areas (Assistant Secretary for Policy and Evaluation n.d.). As a long-time member of the staff reflected: ‘Coordination with other staff analysts in the budget office, the legislative office, and other staff offices is critical. Assistant secretaries don’t like to be a lone voice of dissent on a policy issue’ (Greenberg 2003). In many circumstances public managers must play the role of policy analyst in choosing among alternatives for implementing policies to be effective in pursuing the missions of their agencies; in the central analytical offices of major departments, it appears that analysts must play the role of manager in coordinating with other analysts and program personnel to be effective.
Policy analysis staffs carrying out functions similar to those of analysts in federal agencies can also be found in most large state agencies. The extensive use of for-profit and nonprofit organizations to deliver social services means that policy-relevant information and even analytical capacity are often distributed within policy networks consisting of diverse organizations sharing some common goal (Milward and Provan 2000). One can speculate that the role of ‘desk officer’ for analysts in state agencies has grown in importance along with the increase in provision of social services by other organizations. One should also recognize that some organizations within the network may be primary sources for, or contributors to, policy analysis.
Policy networks may also play a role in shaping the process of policy diffusion across multiple jurisdictions by providing analysis that can be used by advocates during consideration of adoption and by administrators during implementation. For example, policy-relevant information produced by local, state, and national organizations has played an important role in promoting the diffusion of drug courts, which seek to break the link between substance abuse and criminal behavior through greater reliance on treatment rather than punishment. A private organization, the National Association of Drug Court Professionals, championed drug courts through information sharing, which has led to their adoption in over 2,000 jurisdictions since the feasibility of the concept was first demonstrated in 1989 (Hale 2011).
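As an aside, the chain of reasoning in the WSIPP approach described in the preceding section can be conveyed with a toy calculation: effect sizes from several program evaluations are pooled, and the pooled estimate is monetized and compared with program cost. The sketch below (in Python; every number is invented, the pooling uses simple inverse-variance weighting, and WSIPP’s actual models are far more elaborate) illustrates the logic:

```python
# Illustrative only: a toy version of pooling evaluation results by
# inverse-variance weighting and converting the pooled effect into a
# benefit-cost ratio. All numbers are invented.
effects = [(-0.12, 0.05), (-0.08, 0.04), (-0.15, 0.06)]  # (effect size, standard error)

weights = [1 / se ** 2 for _, se in effects]
pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)

# Assumed dollar value of a one-unit improvement per participant, and
# assumed program cost per participant (both hypothetical):
value_per_unit = 40_000
cost_per_participant = 3_000

benefit = -pooled * value_per_unit  # negative effects (e.g., less recidivism) count as benefits
print(f"pooled effect size: {pooled:.3f}")
print(f"benefit-cost ratio: {benefit / cost_per_participant:.2f}")
```

In this invented example the program returns roughly $1.40 in benefits per dollar of cost; the value of the meta-analytic step is that the ratio rests on the weight of many evaluations rather than on any single study.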
Politicians wishing to tie their own (or their colleagues’) hands
The imposition of procedural rules on decision processes by politicians has been viewed normatively as a way of ensuring fairness (Stewart 1975) and positively as a way of controlling bureaus through the forced generation of information and the enfranchisement of specific constituencies (McCubbins et al. 1987). In the case of requirements for the routine production of policy analysis, politicians can also be thought of as trying to counter future incentives to support policies favored by influential stakeholders but not necessarily in the public interest. They may be willing to tie their own hands and, perhaps more importantly from their perspective, the hands of their future colleagues in this way because of uncertainty about what policies will be considered in the future. Whereas the political cost of forgoing a constituent benefit in the present is likely to be high, the perceived political cost of adopting a rule to make such constituent deference in the future more difficult may be low behind a ‘veil of ignorance’ about what future demands will actually materialize (Weimer 1992). In other words, politicians who favor analysis in general may oppose the conduct of ad hoc analyses in specific cases when they anticipate the possibility that the analyses will not support their preferred policies. Nonetheless, they may accept the risk that analyses in the future will sometimes not support their preferred policies because the analyses will also help them prevent policies that they oppose.
Procedural rules requiring analysis in specified circumstances have been adopted by both Congress and the president. The environmental impact statement process is a prominent example of a congressionally imposed rule requiring analysis. It fits most obviously within the conventional view of procedural requirements as the empowerment of constituencies, but can also be viewed as Congress setting the status quo of required analysis that would have to be explicitly overruled in future legislation. A more mundane example is the routine requirement of cost estimates for pending legislation.
Under the National Environmental Policy Act (P.L. 91-190) adopted in 1970, federal agencies must prepare environmental impact statements before taking any action that would significantly affect the human environment. Although environmental impact statements typically require substantial analytical resources, they have been found to have a number of common deficiencies that undercut their value as policy analyses, including failure to compare alternatives and consideration of only a narrow scope of impacts (Gregory et al. 1992). Nonetheless, the preparation of environmental impact statements involves analysts in agencies and consulting firms as well as in the interest groups that choose to mount challenges to them during administrative processes or in court.
Congress sought to tie its own hands in the consideration of water resource projects, which historically were one of the most common ingredients in ‘pork barrel’ legislation. Early in the last century Congress began requiring predictions of costs and benefits for proposed projects, and by mid-century it required the Army Corps of Engineers to conduct cost-benefit analyses (Hammond 1966).
The imposition of the hurdle of having to show positive net benefits made it somewhat easier to weed out the worst projects. The process became sufficiently important that the Bureau of the Budget compiled guidelines on the application of cost-benefit analysis to water and related land projects (Bureau of the Budget 1952).
Cost-benefit analysis has since become an important part of agency rulemaking. It evolved out of a series of executive actions intended to force agencies to consider the likely consequences of their proposed rules (Copeland 2011). President Nixon created Quality of Life reviews of rules dealing with the environment, consumer and occupational safety, and public health that were overseen by the Office of Management and Budget (Shultz 1971); President Ford required agencies to consider the impacts of rules on inflation (Executive Order 11821); and President Carter required agencies to conduct cost-effectiveness analyses of proposed rules relative to plausible alternatives (Executive Order 12044). The application of cost-benefit analysis to all major agency rules was initiated by President Reagan in 1981 (Executive Order 12291). A proposed rule involving more than $100 million in annual costs, benefits, or transfers requires an assessment of its costs and benefits through a Regulatory Impact Analysis (RIA), which is subject to review by the Office of Management and Budget. RIAs, which often run to hundreds of pages, are summarized in proposed rules published in the Federal Register.
Subsequent presidents have maintained the requirement for cost-benefit analysis while extending the scope and analytical requirement of the RIA. President Clinton expanded the rules subject to analysis by creating the category of economically significant rules to include rules that potentially affect in a material way a sector of the economy, productivity, competition, employment, the environment, public health or safety, or state and local governments (Executive Order 12866). President G. W. Bush’s Office of Management and Budget (OMB) and Council of Economic Advisers formalized OMB guidance for conducting cost-benefit analyses, including the requirement that a reasonable number of alternatives to the proposed rule be considered (OMB (Office of Management and Budget) 2003). President Obama expanded the scope of cost-benefit analysis of rules by directing agencies to conduct retrospective assessments to determine if continuation of rules in place was economically justified (Executive Order 13563) and to submit their plans for reviews to the OMB (Executive Order 13610).
As executive orders apply only to administrative agencies, independent regulatory commissions (those agencies that adopt rules by majority voting of commissioners appointed for fixed terms) such as the FCC and the SEC are not required to conduct RIAs. Congress sought to extend the RIA requirement to the independent regulatory commissions through a provision of the Contract with America Advancement Act (P.L. 104-121) by requiring that they file summaries of the analyses with the Comptroller General. It appears, however, that these summaries are pro forma and largely uninformative about how cost-benefit analyses are conducted (Sherwin 2006). President Obama urged the independent
regulatory commissions to conduct the analyses mandated for other agencies (Executive Order 13579).
Congress has imposed other analytical requirements on rulemaking. The Regulatory Flexibility Act (P.L. 96-354) seeks to force agencies to anticipate the effects of rules on small entities such as businesses with few employees and local governments serving small populations. The Paperwork Reduction Act (P.L. 104-13) requires agencies to justify the collection of data from the public. The Unfunded Mandates Reform Act (P.L. 104-4) nominally requires analysis of rules that involve costly mandates on the private sector, state or local governments, or Native American tribes. However, because its conditions overlap with RIA requirements, it generally does not require substantial additional analysis in support of proposed rules.
Compliance with procedural requirements for analysis forces the development of analytical capacity within government. To the extent that the analyses become influential, either through informing policy choice or justifying it, various interests outside of government have an incentive to invest at least in the capability for critiquing these analyses. In this way, analysis contributes to democratic participation, perhaps more so than to immediate problem solving (Shulock 1999).
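Although actual RIAs run to hundreds of pages, the arithmetic at their core is straightforward: project a rule’s annual benefits and costs, discount them to present value, and report net benefits. The following minimal sketch (in Python; the cash flows are invented, and the 3% and 7% real discount rates reflect the convention in the OMB guidance cited above) illustrates the calculation:

```python
# Illustrative only: net benefits of a hypothetical rule, discounted
# at the 3% and 7% real rates conventionally reported in Regulatory
# Impact Analyses. All cash flows are invented.
benefits = [0, 120e6, 140e6, 150e6, 150e6]   # annual benefits, years 0-4
costs    = [200e6, 30e6, 30e6, 30e6, 30e6]   # annual costs, years 0-4

def present_value(flows, rate):
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

for rate in (0.03, 0.07):
    net = present_value(benefits, rate) - present_value(costs, rate)
    print(f"net benefits at {rate:.0%}: ${net / 1e6:,.1f} million")
```

Reporting results at both rates is a simple sensitivity analysis: a rule whose net benefits are positive only at the lower rate signals that the case for it depends heavily on how future benefits are valued.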
Supply response to policy analysis demands
Prior to the 1970s those entering careers as policy analysts did so with training from many different fields of study, including public administration, law, substantively oriented professional programs such as urban planning and public health, operations research and statistics, business, and social science disciplines. Especially within public administration and the substantively oriented professions, specializations in policy analysis developed to provide training more oriented toward problem solving.
Beginning in the late 1960s, graduate programs designed to provide preparation specifically for careers in policy analysis began to be established at prominent U.S. universities. These programs, which were created as alternatives to traditional public administration programs that focused primarily on ‘the institutional and operational aspects of government’ (Engelbert 1977, pp 228–9), emphasized interdisciplinary approaches to policy design and implementation. The movement toward stand-alone programs in public policy benefitted from Ford Foundation grants awarded in 1973 to programs located in seven major universities and the RAND Corporation. The foundation decided ‘to bypass traditional public administration schools and look to new public policy programs—already underway,’ all of which displayed a conviction that ‘a blend of social sciences can be fashioned to educate people to analyze and manage complex problems’ (Ford Foundation 1976, p 5).
The difference between public administration and policy analysis programs diminished substantially over time. The increasing inclusion of policy analysis
training within public administration schools resulted in its recognition by the National Association of Schools of Public Affairs and Administration as a major subject area in 1974 (Engelbert 1977). At the same time, two factors pushed the new policy analysis schools to be more like public administration programs. First, the faculty of the schools recognized the importance of implementation in determining the extent to which policies actually produce their intended outcomes. Organizational processes and leadership, which are traditional topics within public administration, are critical to implementation—administrators must often fill in details of policy designs as they put them in place, determine options for change and their likely consequences, and take the initiative to change existing policies when they do not appear to be yielding desirable outcomes. Second, the university incentives that many programs faced led them to introduce public management concentrations to expand their student markets beyond the relatively small numbers of students both interested in policy analysis careers and quantitatively prepared to develop technical skills demanded in many policy analysis positions. The new schools of public policy, along with several major research organizations, founded the Association for Public Policy Analysis and Management in 1979 to promote the practice and teaching of policy analysis and public management. Currently among its institutional members are over 70 training programs based in U.S. universities. It sponsors the Journal of Policy Analysis and Management and holds annual research conferences that attract both academics and practitioners. A number of other organizations, including the Society for Risk Analysis and the Society for Benefit-Cost Analysis, also seek to engage both those who conduct and those who teach specific types of policy analysis.
Conclusion
The four demands for policy analysis will not abate. Indeed, demands arising from the need for expertise and the management of public sector complexity can reasonably be expected to continue to increase. Further, the analytical capacity arising from these demands has been embedded in both governmental and nongovernmental organizations. Therefore, the prevalence of policy analysis in the United States is unlikely to decrease and may very well continue to increase. Yet, in an environment of crass public discourse and heightened partisan polarization, one can question whether policy analysis actually contributes to public policies that improve society. Policy analysis abounds, but to what avail?
Governments routinely face choices about how to use scarce resources. Often these choices do not involve great political controversy. We may disagree over whether refuse should be collected by public employees working for a government agency or by private employees working for a private contractor. We may even disagree about what weights should be placed on the appropriateness, reliability, and cost-effectiveness of services. Nonetheless, we would likely value analysis that helped us understand the tradeoffs in things we value across the various
alternatives. In some cases, despite our differences in party or ideology, we may want even very important decisions to be informed by neutrally competent analysts—my guess is that most of us want decisions about how to minimize the harm caused by infectious diseases to be based on epidemiological expertise rather than purely ideological predisposition. Although there seems to be a greater willingness in today’s polarized environment to challenge official statistics and predictions that previously were taken at face value, there also continues to be a willingness to take the products of some analyses as the basis for common assumptions in budgeting and other policy making. Projections and cost estimates by legislative support agencies like the CBO may be accepted because of the credibility or consistency of their source, or simply because without some common assumptions policy making would just be too chaotic.
Analyses that must be done during the federal rulemaking process, the primary source of the detailed substantive content of public policy in the United States, almost certainly are not the primary basis for the choice of rules. Nonetheless, the analyses probably improve the social value of adopted rules overall by forcing consideration of alternatives and by helping to weed out those with large negative net benefits. The capacity developed by agencies to comply with analytical requirements may also have spillover effects to other agency tasks, such as the identification of problems deserving attention.
Many politicians have embraced the rhetoric of the value of evidence-based policy making. Of course, good policy analysis is evidence-based whenever relevant evidence exists or can be assembled within the constraints under which the analysis must be produced. Fortunately, the large policy research enterprise in the United States generates considerable evidence relevant to policy through experimental and quasi-experimental program evaluations, natural experiments, and creative observational studies. The digital revolution has already reduced the costs of finding evidence, and ‘big data’ may yet provide new ways of creating it. When relevant evidence does exist, the rhetoric of evidence-based policy making may provide a bridge over ideological differences. This may be happening today with respect to criminal justice policy. Both conservatives and liberals share the goal of reducing crime. However, conservatives have tended to favor the punishment goals of isolation, deterrence, and retribution, while liberals have tended to favor rehabilitation. Increasing incarceration costs have made conservatives in many states open to consideration of alternatives to long prison terms; continuing recidivism has made liberals more concerned about how increasingly scarce program funds are used. Evidence-based interventions that appear to reduce both correctional costs and recidivism may therefore be able to gain support across the ideological spectrum.
Technological change has dramatically changed the availability of data and information since policy analysis first arose as a recognized profession. Early practitioners and students spent time physically searching for information in libraries, archives, and visits to organizations. The telephone was often the
most valuable tool for gathering information needed in short-term analyses. Computation was costly, often involving decks of punch cards submitted to mainframe computers. Those trained and practicing today have ready access to amazing amounts of data and information through the Internet and World Wide Web. They also have personal computers with great varieties of accessible software that put sophisticated statistical and computational capabilities at their fingertips. These technological changes have reshaped the world of the policy analyst from one in which overcoming the scarcity of data and information was routinely a challenge to one in which sifting through the great abundance of data and information to find that which is valid, relevant, and useful is more often the challenge.
The onset of ‘big data’ makes the reshaping of the analytical world even more dramatic, opening up new possibilities for analysis but also risking false inferences—an inconsistent statistical estimator remains inconsistent no matter how large the sample size. The ease of computation offers great possibilities for doing more sophisticated statistical estimation and prediction, assessing more complex relationships through techniques such as computable general equilibrium and agent-based models, and taking better account of uncertainties in predictions through Monte Carlo simulations. However, the availability of data and the ease of computation without solid grounding in the relevant underlying theory risks misuse, or what I have heard Aidan Vining colorfully refer to as the problem of ‘chimps with uzis.’
Just as technological change offers opportunities for, and poses challenges to, rationalist policy analysis done primarily by professionals (Weimer and Vining 2011), it does so as well for participatory policy analysis in its various forms (Durning 1993). The cost of participation generally discourages all but the most interested individuals from participating in public decisions and contributes to relatively more influence for stakeholders with more resources. Most obviously, the Internet facilitates transparency and reduces the cost of access to existing administrative processes, such as rulemaking. More dramatically, social media opens up the possibility for new fora in which individuals can potentially engage in the analytical process at very low cost. Advocates for participatory policy analysis thus have a new set of tools available. Can they develop protocols that elicit meaningful participation in policy analysis despite manipulation by organized interests and disruption from Internet trolls? Will such participation lead to policy choices that are better for the broader society?
In conclusion, the logics of the historical demands for policy analysis in the United States will likely continue. Finding ways to make it more useful should continue to be a major concern for public affairs scholars.
Notes
1 Parts of this chapter originally appeared in David L. Weimer (2015) ‘La Evolución del Análisis de Políticas en Estados Unidos: Cuatro Fuentes de Demanda’, Foro Internacional 40(2): 540–75. I thank Angela Evans, John Hird, Laura Flamand, Reynaldo Ortega, and the faculty of the Centro de Estudios Internacionales, El Colegio de México, for helpful comments.
2 http://www.pewtrusts.org/en/projects/pew-macarthur-results-first-initiative.
References
Assistant Secretary for Policy and Evaluation (n.d.) All about ASPE: A Guide for ASPE Staff.
Bertelli, A. and Lynn, L. Jr. (2006) Madison’s Managers: Public Administration and the Constitution, Baltimore, MD: Johns Hopkins University Press.
Bimber, B. (1996) The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment, Albany, NY: State University of New York Press.
Brudnick, I. (2013) Legislative Branch Agency Appointments: History, Processes, and Recent Proposals, CRS Report for Congress, August 6.
Bureau of the Budget (1952) Circular No. A-47, December 31.
Champkin, J. (2013) ‘The German Tank Problem’, Significance 10(5): 28.
Copeland, C. (2011) Cost-Benefit and Other Analysis Requirements in the Rulemaking Process, Congressional Research Service, August 30.
Croley, S. and Funk, W. (1997) ‘The Federal Advisory Committee Act and Good Government’, Yale Journal on Regulation 14(2): 452–557.
Dubnick, M. and Frederickson, H. G. (2010) ‘Accountable Agents: Federal Performance Measurement and Third-Party Government’, Journal of Public Administration Research and Theory 20(1): i143–59.
Durning, D. (1993) ‘Participatory Policy Analysis in a Social Service Agency: A Case Study’, Journal of Policy Analysis and Management 12(2): 297–322.
Engelbert, E. A. (1977) ‘University Education for Public Policy Analysis’, Public Administration Review 37(3): 220–36.
Ford Foundation (1976) Ford Foundation Program in Public Policy and Social Organization, New York: Ford Foundation.
Fortun, M. and Schweber, S. S. (1993) ‘Scientists and the Legacy of World War II: The Case of Operations Research (OR)’, Social Studies of Science 23(4): 595–642.
Gill, N. (1944) Municipal Research Bureaus: A Study of the Nation’s Citizen-Supported Agencies, Washington, D.C.: American Council on Public Affairs.
Greenberg, D. and Shroder, M. (2004) The Digest of Social Experiments, Washington, D.C.: Urban Institute Press.
Greenberg, G. (2003) ‘Policy Advice at the Department of Health and Human Services, Then and Now’, Journal of Policy Analysis and Management 22(2): 304–7.
Gregory, R., Keeney, R. and von Winterfeldt, D. (1992) ‘Adapting the Environmental Impact Statement Process to Inform Decisionmakers’, Journal of Policy Analysis and Management 11(1): 58–75.
Haeder, S. and Weimer, D. (2013) ‘You Can’t Make Me Do It! State Implementation of Insurance Exchanges under the Affordable Care Act’, Public Administration Review 73(1): S34–47.
Hale, K. (2011) How Information Matters: Networks and Public Policy Innovation, Washington, D.C.: Georgetown University Press.
Hammond, R. (1966) ‘Convention and Limitation in Benefit-Cost Analysis’, Natural Resources Journal 6(2): 195–222.
Hatry, H. (1971) ‘Status of PPBS in Local and State Governments in the United States’, Policy Sciences 2(2): 177–89.
Hird, J. (2005) Policy Analysis in the States: Power, Knowledge, and Politics, Washington, D.C.: Georgetown University Press.
Hitch, C. (1953) ‘Sub-Optimization in Operations Problems’, Journal of the Operations Research Society of America 1(3): 87–99.
Kaufman, H. (1956) ‘Emerging Conflicts in the Doctrines of Public Administration’, American Political Science Review 50(4): 1057–73.
Kravitz, W. (1990) ‘The Advent of the Modern Congress: The Legislative Reorganization Act of 1970’, Legislative Studies Quarterly 15(3): 375–99.
Lee, M. (2008) Bureaus of Efficiency: Reforming Local Government in the Progressive Era, Milwaukee, WI: Marquette University.
McCubbins, M., Noll, R., and Weingast, B. (1987) ‘Administrative Procedures as Instruments of Political Control’, Journal of Law, Economics and Organization 3(2): 243–77.
McGann, J. G. (2016) 2015 Global Go To Think Tank Index Report, Think Tanks and Civil Society Program, Philadelphia, PA: University of Pennsylvania, February 9.
Medvetz, T. (2012) Think Tanks in America, Chicago, IL: University of Chicago Press.
Melkers, J. and Willoughby, K. (2005) ‘Models of Performance-Measurement Use in Local Governments: Understanding Budgeting, Communication, and Lasting Effects’, Public Administration Review 65(2): 180–90.
Milward, H. and Provan, K. (2000) ‘Governing the Hollow State’, Journal of Public Administration Research and Theory 10(2): 359–80.
Morgan, M. and Peha, J. (eds) (2003) Science and Technology Advice for Congress, Washington, D.C.: Resources for the Future.
Mosher, F. (1984) A Tale of Two Agencies: A Comparative Analysis of the General Accounting Office and the Office of Management and Budget, Baton Rouge, LA: Louisiana State University Press.
Nelson, R. (1989) ‘The Office of Policy Analysis in the Department of the Interior’, Journal of Policy Analysis and Management 8(3): 395–410.
New York Bureau of Municipal Research (1915) ‘Government of the City of Rochester, N.Y.: General Survey, Critical Appraisal, and Constructive Suggestions’, Prepared for the Rochester Bureau of Municipal Research, Albany, NY: J.B. Lyon Company (Printers).
Office of Management and Budget (2003) OMB Circular A-4, ‘Regulatory Analysis’, September 17.
Petracca, M. (1986) ‘Federal Advisory Committees, Interest Groups, and the Administrative State’, Congress & The Presidency 13(1): 83–114.
Radin, B. (2013) Beyond Machiavelli: Policy Analysis Reaches Midlife (2nd edn), Washington, D.C.: Georgetown University Press.
Schacter, H. (2002) ‘Philadelphia’s Progressive-Era Bureau of Municipal Research’, Administrative Theory & Praxis 24(3): 555–70.
Sherwin, E. (2006) ‘The Cost-Benefit Analysis of Financial Regulation: Lessons from the SEC’s Stalled Mutual Fund Reform Effort’, Stanford Journal of Law, Business, and Finance 12 (Fall): 1–60.
Shulock, N. (1999) ‘The Paradox of Policy Analysis: If It Is Not Used, Why Do We Produce so Much of It?’, Journal of Policy Analysis and Management 18(2): 226–44.
Shultz, G. (1971) ‘Memorandum for the Heads of Departments and Agencies’, Office of Management and Budget, October 5.
Smith, B. L. R. (1992) The Advisors: Scientists in the Policy Process, Washington, D.C.: Brookings Institution Press.
Stewart, R. (1975) ‘The Reformation of Administrative Law’, Harvard Law Review 88(8): 1669–813.
Teske, P. (2004) Regulation in the States, Washington, D.C.: Brookings Institution Press.
U.S. Congressional Research Service (2016) Annual Report of the Congressional Research Service of the Library of Congress for Fiscal Year 2015, January.
U.S. General Services Administration (2015) Federal Advisory Committee Act (FACA) Management Overview (http://www.gsa.gov/portal/content/104514).
U.S. Government Accountability Office (2016) Summary of GAO’s Performance and Accountability Report: Fiscal Year 2015, GAO-16-3SP.
Vining, A. and Weimer, D. (2009) ‘Assessing the Costs and Benefits of Social Policies’, in D. Weimer and A. Vining (eds), Investing in the Disadvantaged: Assessing the Benefits and Costs of Social Programs, Washington, D.C.: Georgetown University Press, 1–16.
Weaver, K. R. (1989) ‘The Changing World of Think Tanks’, PS: Political Science & Politics 22(3): 563–78.
Weimer, D. (2010a) Medical Governance: Values, Expertise, and Interests in Organ Transplantation, Washington, D.C.: Georgetown University Press.
Weimer, D. (2010b) ‘Stakeholder Governance of Organ Transplantation: A Desirable Model for Inducing Evidence-Based Medicine?’, Regulation & Governance 4(3): 281–302.
Weimer, D. and Vining, A. (2011) Policy Analysis: Concepts and Practice (5th edn), Upper Saddle River, NJ: Pearson.
West, W. (2009) ‘Inside the Black Box: The Development of Proposed Rules and the Limits of Procedural Controls’, Administration and Society 41(5): 576–99.
Whiteman, D. (1995) Communication in Congress: Members, Staff, and the Search for Information, Lawrence, KS: University Press of Kansas.
Wildavsky, A. (1969) ‘Rescuing Policy Analysis from PPBS’, Public Administration Review 29(2): 189–202.
Wolf, P. (1999) ‘Neutral and Responsive Competence: The Bureau of the Budget, 1939–1948 Revisited’, Administration and Society 31(1): 142–67.
Yackee, S. (2012) ‘The Politics of Ex Parte Lobbying: Pre-proposal Agenda Building and Blocking during Agency Rulemaking’, Journal of Public Administration Research and Theory 22(2): 373–93.
TWO
The evolution of the policy analysis profession in the United States
Beryl A. Radin
What does it mean to be a policy analyst? Despite the growth of the field in the United States over the past several decades, this is not a profession that the general public understands. Indeed, as time has gone by since the profession developed, there sometimes appears to be less agreement about the activity than there was in its earliest days. It is obvious that policy analysis has not gained a place in the world of professions equal to that of law, medicine, or engineering. Since many of the original policy analysts worked for the government, it was difficult for those outside of the field to differentiate between public administrators and policy analysts.1
There are many reasons for this situation, but at least one of them is the inability of this field to define itself in the context of a changing economic, political, and social environment in the United States and, at the same time, to find a way to describe and appreciate the contributions of the fluid profession. For the answer to the question ‘What is a policy analyst?’ is different today than it was in the 1960s, when the profession first defined itself. In order to understand these changes, one must view these developments as an example of the intellectual history of the development of a profession that—in its practice—must directly confront the broader changes in American society. Policy analysis, perhaps more than the traditional professions (for example, law, medicine, or engineering), directly confronts the changing political values of the day.
The America of the 1960s was characterized by a strong belief in progress, abundance, and the possibilities of public sector change. The economic, political, and social fabric of the society supported a reliance on expertise and experts. Soon afterward, however, the climate of opinion within the society had shifted. Americans were less confident about the future, focused on the realities of scarcity, and pulled back from public sector responsibilities. And as many policy issues confront ideological and value differences, there has been an attempt to avoid them through reliance on technocratic solutions. Yet, at the same time, many in the society are much less willing to defer to so-called experts than they were in the past. According to one experienced practitioner, by the 1990s the terms ‘fear, paranoia, apprehension, and denial’ described the life of a policy analyst (quoted in Beam 1996). But at the same time that this shift has occurred, ironically the number and range of jobs in the policy analysis field have increased markedly during that period. Yet the 21st century has brought less clarity about what a policy analyst would or could do. In addition,
while new definitions of the field have emerged, there are still examples of past approaches to policy analysis in both the practice and training of policy analysts. There are those who bemoan the modifications in the profession and continue to point to a golden day when decision makers listened to experts and seemed to take the work of the policy analysts seriously (Williams 1998). Yet there are others who believe that the dimensions of the field in the 1990s were more appropriate to the institutions, values, and processes of American democracy than were the more elite practices of the world of the 1960s. These modifications, it could be argued, have produced a profession that is classically American as it has become more pluralistic and open, whereas the earlier version of the field had more in common with societies that had traditions of elitism and secrecy. Yet as the 21st century developed, it sometimes appears that pluralism and openness have led to increased conflict and inability to deal with multiple perspectives.
Policy analysis in three eras
One can depict the realities of the policy analysis profession as occurring during three periods: 1960 to 1989, 1990 to 2009, and the 2010s. Each of these eras approached many of the same questions that confront the policy analysis practitioner today: the conflict between the imperatives of analysis and the world of politics; the analytic tools that have been used, created, or discarded over those 50 years; the relationship between decision makers and analysts as the field has multiplied and spread; and the assumptions about the availability and appropriateness of information that can be used in the analytic task. Each also reflects the shifts in society’s attitudes toward public action, the availability of resources to meet public needs, and the dimensions of policy making. Using three points of time to frame this contrast does provide a way to emphasize the shifts that have occurred in the profession. This treatment thus is not a history but a contrast between three generations. But all of these generations continue to be active today. Thus, the newer generations can expect the approaches of the past to continue in some form.
During these three eras, policy analysts operate with quite different standards of success, reflecting the essentially normative character of the profession. For some, success comes when decision makers seek and accept their advice. For others, the reward is involvement—even tangentially—in the decision process. Some measure success as the ability to do interesting work and apply a range of analytical methodologies. Still others define their success in terms of their ability to advance substantive policy positions or at least to influence the views of others involved in the decision-making process. Some analysts believe that they must include all possible options and strive for a comprehensive approach to policy problems. Others, by contrast, begin with a set of values (or even a specific agenda) and define their roles as advocacy analysts.
Although policy analysts initially focused on the stage of the policy process when alternative approaches to issues were being formulated, it is clear that the scope of the role has widened. Policy analysts are now involved in bringing policy
issues to the policy agenda, working with the decision makers (particularly at the legislative level) in the actual adoption of policy, ascertaining the details of implementation, and engaging in the evaluation process.
Era one: 1960–89

Although policy analysis as a distinct field and emerging profession is relatively recent, the practice of providing policy advice to decision makers is an ancient art. Our tendencies to focus on the uniqueness of the contemporary American environment have often evoked approaches that ignore the lessons of history. A field that seeks to create its own special identity works hard to differentiate itself from the past.

Creating a profession

The early stages of policy analysis grew out of the experience of World War II. In the post-World War II period, social scientists began to play a role in the decision-making process. The imperatives of war had stimulated new analytic techniques—among them systems analysis and operations research—that sought to apply principles of rationality to strategic decision making. The precepts developed in the natural sciences found expression in the social sciences in two modes: positivism (using the concept of laboratory experimentation to seek to differentiate the true from the false)2 and normative economic reasoning (using the concept of a market made up of individual rational actors as the basis for prescription). Although still in an embryonic form, the computer technology of that period did allow individuals to manipulate what were then considered large data sets in ways that had been unthinkable in the past. In a sense, the techniques that were available in the late 1950s cried out for new forms of use. Yehezkel Dror, one of the earliest advocates for the creation of policy analysis as a new profession, described the early phases of this search for new expressions as ‘an invasion of public decision-making by economics’ (1971, p 226). Further, he wrote, ‘Going far beyond the domain of economic policy making, the economic approach to decision making views every decision as an allocation of resources between alternatives, that is, as an economic problem’ (Dror 1971, p 226).

DoD: organizing a policy analysis office

The office that was established in the Department of Defense to carry out Secretary of Defense Robert McNamara’s analytical agenda became the model for future analytic activity throughout the federal government. As the office developed, its goal of providing systematic, rational, and science-based counsel to decision makers included what has become the classic policy analysis litany: problem identification, development of options or alternatives, delineation of objectives and criteria,
evaluation of impacts of these options, estimate of future effects, and—of course—recommendations for action. In many ways, the establishment of this office represented a top-level strategy to avoid what were viewed as the pitfalls of traditional bureaucratic behavior. Rather than move through a complex chain of command, the analysts in this office—regardless of their rank—had direct access to the top officials in the department. Their loyalty was to the secretary of the department, the individual at the top of the organization who sought control and efficiencies in running the huge department. This office was viewed as an autonomous unit, made up of individuals with well-hewn skills who were not likely to have detailed knowledge of the substance of the policy assignments given them. Their specializations were in the techniques of analysis, not in the substantive details of their application (see the chapters by Lynn and Weimer in this volume). There was an assumption that the policy analysts who were assembled would constitute a small, elite corps, made up of individuals who would be expected to spend only a few years in the federal government. Their frame of reference and support system were atypical for a federal employee. Sometimes called the ‘Whiz Kids,’ this staff was highly visible: both its Planning-Programming-Budgeting System (PPBS) activity and its general expertise came to the attention of President Lyndon Johnson. In October 1965, the Bureau of the Budget issued a directive to all federal departments and agencies, calling on them to establish central analytic offices that would apply the PPBS approach to all their budget submissions. Even during these days, however, there were those who were skeptical about the fit between the PPBS demands and the developing field of policy analysis (Wildavsky 1969, pp 189–202).

Creating a new profession

The policy analysis institutions of the early 1960s shared many of the problems and characteristics of those who practiced the ancient art of providing advice to decision makers, but though the functions contained similarities, something new had occurred. What had been a relatively informal set of relationships and behaviors in the past had moved into a formalized, institutionalized world. Although the process began with responsibility for the PPBS process, it became obvious very quickly that the policy analysis task had moved beyond a single analytic technique. Much still depended on the personal relationships between the analyst/adviser and the decision maker/ruler, but these organizations took their place in public, open, and legally constituted organizations. This field was conceptualized as integral to the formulation stage of the policy process—the stage where analysts would explore alternative approaches to ‘solve’ a policy problem that had gained the attention of decision makers and had reached the policy agenda. Both decision makers and analysts saw this early stage as one that was separable from other aspects of policy making. It did not focus on the imperatives of adopting the preferred alternative (particularly
in the legislative branch) or on the details of implementing an enacted policy inside an administrative agency. Instead, it focused on the collection of as much information and data as was available to help decision makers address the substantive aspects of the problem at hand. Less than two years after the diffusion of PPBS throughout the federal government, Yehezkel Dror sounded a clarion call that defined a new profession in what has become a classic article in the Public Administration Review. Published in September 1967, Dror’s ‘Policy Analysts: A New Professional Role in Government’3 sought to differentiate policy analysis from systems analysis and to define the characteristics of policy analysts as government staff officers. Dror emphasized the confidential relationship between the decision maker and the analyst. He envisioned a relatively small staff, with a minimum of between 10 and 15 persons but a preference for a staff of between 25 and 30. By the early 1970s, the field began to assume both visibility and self-definition. Through support from private foundations, between 1967 and 1970 graduate programs in public policy were begun at Harvard, the University of California at Berkeley, Carnegie-Mellon, the RAND Graduate Institute, the University of Michigan, the University of Pennsylvania, the University of Minnesota, and the University of Texas at Austin (Heineman et al. 1990; Wildavsky 1997). Almost all of these programs were at the master’s degree level, focused on training professionals to enter the policy analysis field (see chapters by O’Hare and Rubaii in this volume). There were sufficient examples of the new breed within federal agencies for Arnold Meltsner to begin a study of their experience in the early 1970s. His work Policy Analysts in the Bureaucracy (1976) suggested that policy analysts used two different types of skills—technical and political. The individual who emphasizes technical skills has a ‘desk and shelves … cluttered with papers. His books will be the same as those of academic social scientists’ (Meltsner 1976, pp 19–20). That individual ‘is convinced that he is objective, a scientist of sorts’ (Meltsner 1976, p 23) and sees the policy analyst as someone who can do high-quality, usually quantitative, policy-oriented research that satisfies him and his peers. By contrast, the individual who emphasizes political skills is, in many ways, more like a traditional bureaucrat. But that individual wants to be able to work on the major problems facing the agency. ‘They want to be where the action is’ (Meltsner 1976, p 32). Meltsner’s work indicated that a number of attributes contributed to the variety of behaviors; some had to do with the analyst’s own skill level and personal comfort with various approaches. Others had to do with client expectations.

The profession after a decade

By the mid 1970s, the policy analysis field did seem to be moving toward a professional identity. It had its own journals, a range of university-based
professional schools, and individual practitioners (largely in government agencies) who identified themselves as policy analysts. Yet many issues remained unresolved that would surface as problems for the future. It appeared that policy analysis functions legitimately included planning, evaluation, and research, but the relationship between policy analysis and budgeting was not clear. Some agencies had followed the PPBS path and directly linked the analytical activity to fiscal decision making. Others separated the two functions. Many policy analysis shops consciously separated their responsibilities from those of the operating divisions in the department or agency, following Dror’s advice. Even those analysts who exercised political skills did not focus on the implementation stage of the policy process. If they accentuated politics at all, they highlighted the political realities of the agenda-setting stage of the process, looking to the agenda of their political masters and to the political intrigue involved in achieving passage of a proposal. It was a rare analyst who believed that effectiveness was related to the ability to raise implementation issues in the analysis, requiring interaction with the operating units. Some policy analysts found that they were assuming two quite different roles in their activity. Not only were they the neutral analysts, laying out all possible options, but they were also advocates for their preferred positions, which they believed best represented the ‘national interest.’ Unlike advisers in other societies, American policy analysts often described their role as including concern about the best policy option that was in the ‘public interest.’ Despite the difficulty of defining a single ‘public interest’ in the diverse American society, this language was often used by analysts. This created a tension that was not easy to manage (see Nelson 1989, p 401). Finally, as the field took shape, the definition of success in the practice of policy analysis was not always obvious or agreed on. Was success the ability to convince the client decision maker to adopt your recommendation? Was an analyst successful when he or she helped the client understand the complexity and dimensions of a policy choice situation? Or was the analyst successful when the work that was performed was of a quality that was publishable and well received by peers within the profession? These problems would continue to define the profession in the coming years.
Era two: 1990–2003

The world that faced the early policy analysts in the federal government from 1960 to 1989 was very different from that confronting the policy analyst in the 1990s. The years that had elapsed since policy analysis activity began in the 1960s were characterized by dramatic economic, social, and political changes in American society. The environment of possibility and optimism of the early 1960s was replaced by a mood of scarcity and skepticism. Indeed, by the end of the 1960s, the experience of the Vietnam War and demands for equality within
the society forced many to acknowledge that there were large cleavages among groups within the nation. When it became clear that policy analysis had become a part of the decision-making process and language, multiple actors within the policy environment began to use its forms and sometimes its substance. As a result, policy analysis has become a field with many voices, approaches, and interests. As it has matured, the profession has taken on the structure and culture of American democracy, replacing the quite limited practice found only in the top reaches of government bureaucracies, which characterized the early stages. The impact of these macro-level changes was not always obvious to policy analysis practitioners in the government. Perhaps the most direct effect of the modifications was in the range of possibilities for policy change that were before them. As Alice Rivlin has noted, the crusading spirit attached to the early days of the field was closely linked to the large number of new programs that were being created (Rivlin 1998). Soon, however, the agenda for many analysts focused mainly on changes in existing policies, not on the creation of new policies and programs. Policy analysts did experience other shifts in their world. In the early 1960s, beginning with the PPBS activity in the Department of Defense, it was assumed that policy analysis units would be established at the top of organizations, looking to the top executives and senior line staffers as the clients for analysis.4 The focus of the activity was upward, and the separate nature of the policy analysis unit minimized concern about the organizational setting in which the analysis took place. By the mid 1970s, policy analysis activities had dramatically increased throughout the structure of federal government agencies. Most of the units began in the top reaches of departments or agencies, but as time went on, policy analysis offices appeared throughout federal agency structures, attached to specific program units as well as middle-level structures. As it became clear that the terms of policy discourse would be conducted (at least superficially) in analytic terms, those who had responsibility for subunits within departments or agencies were not willing to allow policy shops attached to the top of the organization to maintain a monopoly on analytic activity. By the mid 1980s, any respectable program unit had its own policy staff—individuals who were not overwhelmed by the techniques and language of the staff of the original policy units and who could engage in debate with them or even convince them of other ways of approaching issues. Both budget and legislative development processes were the locations of competing policy advice, often packaged in the language and form of policy analysis. As a result, staffers appeared to become increasingly socialized to the policy cultures and political structures with which they were dealing. Those staffers who stayed in an office for significant periods of their careers became attached to and identified with specific program areas and took on the characteristics of policy area specialists, sometimes serving as the institutional memory within the department (Rein and White 1977).
The institutionalization and proliferation of policy analysis through federal departments (not simply at the top of the structure) also contributed to changes in the behavior of those in the centralized policy analysis units. Those officials became highly interactive in their dealings with analysts and officials in other parts of the departments. Increasingly, policy development took on the quality of debate or bargaining between policy analysts found in different parts of the agency. In addition, policy analysis staff found they shared an interest in activities performed by other offices; in many ways, staffers behaved more like career bureaucrats than the in-and-out analysts envisioned in the early writings on the field. Longevity and specialization combined to create organizational units made up of staff members with commitments to specific policy areas (and sometimes to particular approaches to those policies). As a result of the spread of the function, staff implicitly redefined the concept of ‘client.’ Though there had always been some tension over the client definition—it could be your boss, the secretary, the president, the society, or the profession—the proliferation of the profession made this even more complicated. In part, this was understandable as analysts developed predictable relationships with programs and program officials over time. Not only were clients the individuals (or groups) who occupied top-level positions within the organization, but clients for analysis often became those involved in the institutional processes within the agency as well as the maintenance of the organization itself (see Nelson 1989). Analysts, not their clients, became the first-line conduit for policy bargaining. The relationships among policy analysts became even more important by the late 1970s, when the reality of limited resources meant that policy debate was frequently focused on the modification of existing programs, not on the creation of new policies. Analysts were likely to be working on issues where they already had a track record, either in terms of substantive policy approaches or involving relationships with others within the organization. By the mid 1980s, policy analysis activities across the federal government took on the coloration of the agencies and policy issues with which they worked. The policy office in the Department of Health and Human Services (HHS) had an approach and organizational culture that was distinct from that in the Department of Labor or in the Environmental Protection Agency (EPA). Agencies with highly specialized and separate subunits invested more resources in the development of policy analysis activities in those subunits than in the offices in centralized locations. EPA’s responsibilities in water, air, and other environmental problems were different enough to make such an approach effective. Some policy offices were organized around substantive policy areas (for example, HHS had units in health, social security, social services, and welfare) while others were created around functional approaches to analysis.
Policy analysis beyond the bureaucracy

As the policy analysis function proliferated within the federal government, it also spread outside the government to include a variety of actors engaged in the policy process. The dynamic that spawned the proliferation of policy units within the bureaucracy similarly stimulated developments outside those organizations. If the policy discourse were to be conducted in the language of analysis, then one needed to have appropriate resources available to engage in that discussion. One could find a discernible increase in interest in policy analysis in the U.S. Congress by the 1970s (Malbin 1980). Members of Congress sought to develop their own analytic capacity. Indeed, one observer noted that the statutory changes of that decade ‘constitute[d] the congressional declaration of analytical independence from the executive branch’ (Robinson 1992, p 184) and, in some cases, the resulting capacity actually exceeded that of the executive branch (Robinson 1989, p 2). Despite the interest in policy analysis, the expectations of legislative institutions were viewed as different from those of executive branch policy analysts. Carol H. Weiss noted that at least some congressional staffers treat analysts ‘less as bearers of truth and clarity than as just another interest group beating on the door’ (Weiss 1989, p 411). She has suggested that the structure of Congress does act as a deterrent to the use of analysis—tangled committee jurisdictions, tenure of committee members, shortage of time, the oral tradition, staff fragmentation, and, of course, the dynamics of decision making in a legislative body. Yet analysis was valued: staff on committees brought with them respect for analytic procedures, and members of Congress found that they had to be in a position to evaluate the work of others, to stay on top of the analytic work produced by agencies, and to help their bosses avoid surprises.

Think tanks

Think tanks were seen to be influential in American society during this period (see chapters by Weimer and Rich in this volume). Think tanks do vary in the degree to which they take positions. When they do take positions, they behave much like interest groups with an advocacy posture. Some groups are identified with a general policy direction, but the work that is produced by individuals within the organization does not fall into a clear or predictable approach. Some think tanks have actually become extensions of government agencies because agencies now depend on them to do their essential work. In an era of government staff reductions and contracting out, there has been an increasing use of outside groups as surrogate federal officials, relying on procurement mechanisms such as standing-task-order agreements to carry out these relationships (Metcalf 1998; also see the chapter by Rich in this volume).
Policy analysis and interest groups

Because policy debate does often deal with analytical findings, it is not surprising that many interest groups established their own in-house policy analysis capabilities. To help carry out their well-known policy agendas, these groups have found it useful to have staff who are comfortable with analytical techniques, methodologies, and research findings. Some interest groups have made investments in staff who are able to critique the work of others, particularly emphasizing areas in which so-called objective analysis actually reflects differences in values or where methodological determinations skew the recommendations or findings. Other groups have actually conducted their own analyses, even encouraging original data collection to support their positions or, at the least, to show that there is an alternative to the ‘official’ analysis that has been performed (Primus 1989). Ironically, although analysts in interest groups are thought to be far from the image of the policy analyst without a policy agenda, sometimes they may be valuable sources of information to others in society. But it is not always easy to draw the boundaries between advocacy and fairness, or between commitment and objectivity. The value or policy perspectives of interest group analysts are often much clearer than are those of either government or think tanks. In those settings, values may be embedded in the analytic techniques used or in the way the problems are framed. Some groups stake their reputation on the quality of information they produce and often become the source of information for decision makers as well as members of the press. Their experience—focusing on the production of rigorous, high-quality work that is organized around a commitment to low-income citizens—suggests that it is possible to exhibit both commitment and rigor. The spread of the policy analyst role across government and nongovernmental locations has created new opportunities that were unthought of in the 1960s. Students who graduated from the policy schools found jobs in the field and learned that they were, indeed, part of a new profession. In addition, during this period policy shifts took place in many domestic policy areas that devolved new authorities and responsibilities to players at the state and local level, which stimulated a new demand for analytic activity outside of Washington (see the chapter by Godwin, Godwin, and Ainsworth in this volume).

A response to complexity: institutionalizing the policy analysis function

As the years progressed, there were major changes in several of the attributes that were associated with the early stages of the policy analysis field. As fiscal scarcity and limited budgets became the reality in many policy areas, changes occurred in the availability of funds for new or enlarged programs. Ideological shifts demanded changes in the assumptions of policy analysts working inside government. Experience with analysis, both positive and negative, contributed to a growing sophistication both inside and outside government. At the same time, staffing in a number of policy analysis offices was reduced.
One analysis suggested that the size of eight federal offices declined from 746 individuals in 1980 to 426 in 1988 (May 1998). The bureaucratization and spread of the government policy analysis function created a reality that was quite different from that envisioned by Dror and the field’s early practitioners. The personal relationship between the top official in the organization and a middle-level policy analyst became tenuous, if it existed at all. An analyst was much more likely to provide advice to another analyst, who in turn provided it to yet another analyst. Rarely did an analyst assume the role of personal counselor to the top decision maker. These changes suggest a broader range of clients than that described by Meltsner (1976). Clients are not only decision makers; the client relationship may also be defined by the institutional processes of governing (planning, budgeting, regulation development, legislative drafting), the maintenance of the organization, and—perhaps as a result of strong attacks on the programs of some domestic agencies—the analyst himself or herself. In addition, the work of the analyst demands a broader range of skills than those described by Meltsner (see Radin 1992). The technical and political skills that he described continued to be required, but so also were skills in organizational analysis, specific knowledge of the program area, and evaluation techniques.

The growth of evaluation

The early policy analysis practitioners emphasized analytic functions that were prospective (that is, they analyzed new program possibilities that were being considered). The evaluations conducted during the early years of the profession were designed as evaluations of experiments that tested new ways to deliver or define programs. But shifts in the policy environment diminished the number and size of new programs, and attention instead concentrated on fine-tuning existing programs. Thus it is not surprising that later analytic shops would also be concerned about employing retrospective analytic techniques (examining the effects of program interventions through various evaluation techniques). By the 1970s, evaluation activities became more commonplace in the policy analysis units, although some agencies did create offices for evaluation that were separate from the policy analysis shops. Evaluation activities were also found in other settings; for example, several federal offices of inspector general created units that undertook evaluations. Though these evaluations did not adopt the classic social science approach to evaluation, they emphasized the production of information that would be of direct use to decision makers (Thompson and Yessian 1992). During this period, significant resources were available to think tanks, research centers, and consulting firms to undertake federally financed evaluations. A series of social experiments were conducted to test policy and program possibilities before their enactment (Weiss 1998, p 13), but because the experiments had fairly
long time requirements, their results were usually not available in a timely fashion, at least for the decision makers who had commissioned the studies. By the 1980s, many evaluation studies were much more modest in scale and usually sought to produce findings that would be timely for decision makers. A series of evaluation projects in the education field were mandated by Congress, seeking to provide information that would be useful to the reauthorization process (the decision cycle that required Congress to renew, revise, or eliminate programs with a finite life cycle). Evaluations were conducted to examine different approaches to welfare reform. Though specifying a user of the evaluation did not avoid all the problems common to the evaluation enterprise, the information produced by several of these efforts did play a role in the decision-making process (see Radin 1984).

Moving toward implementation

The policy analysts of the 1960s emphasized the early stages of the policy process—what was viewed as the formulation stage. Some analysts focused on work that would bring issues to the policy agenda; most, however, moved into action when they were confronted with a policy problem and sought to formulate alternatives that would advise the decision maker. By the 1980s, policy analysts were found in later stages of the policy process. Many began to pay more attention to issues related to implementation and to raise those implementation concerns when policies were being devised. By the mid 1970s, both academics and practitioners in the field had realized that a significant number of problems (or what some viewed as failures) associated with the Great Society programs had occurred because program designers had not considered the implementation of these efforts when they constructed new programs or policies (see, for example, Nakamura and Smallwood 1980).

The academic response

Aaron Wildavsky’s essay ‘Principles for a Graduate School of Public Policy,’ written in 1976, discussed the academic dimensions of the policy analysis field. As he described the process of organizing the Graduate School of Public Policy at Berkeley, Wildavsky highlighted the importance of problem solving ‘rather than fact grubbing’ in the creation process (Wildavsky 1976, p 408). As the schools aged through the 1980s and into the 1990s, however, it was difficult to sustain the spirit of experimentation within them. There was some tendency for the faculty to revert to the academic disciplines from which they came.
Era three: 2003–present

The field of policy analysis that exists in the 21st century is quite different from that found in the two earlier phases. The world of the 1960s that gave rise to this
field in the United States often seems unrelated to the world we experience today. These shifts have occurred as a result of a range of developments: technological changes, changes in the structure and processes of government both internally and globally, new expectations about accountability and transparency, economic and fiscal problems, and increased political and ideological conflict. It is interesting that policy observers have increasingly been using the phrase ‘wicked problem’ to describe problems that are difficult or impossible to solve.5 But at the same time that these developments require shifts in the way that the field operates, they also continue many of the expectations from the past. Until circa 2000, the literature on policy analysis was focused almost entirely on the experience within the United States. The exception to this pattern was found in a 1984 book authored by Brian Hogwood and Lewis Gunn, Policy Analysis for the Real World. They noted:

The problem of generating suitable and readily available teaching material for British courses has not yet been overcome. Although our primary interest was in producing policy analysis materials for British students, our experience has made us aware of the limitations of much American literature for American students, particularly those who have previously thought of policy analysis as merely American politics rehashed, or as arcane mathematical techniques (Hogwood and Gunn 1984, pp v–vi).

By the first decade of the 21st century, the concerns expressed by Hogwood and Gunn were being addressed by a range of scholars around the world. The introduction of these issues produced a literature that indicated that the field was rich and varied and, in particular, attentive to the context of the systems in which policy analysis took place. Evidence of a global perspective on policy analysis was found in the programs of a variety of professional meetings (for example, the International Political Science Association created a new section on Comparative Policy Analysis) and in the publication of a number of journals (for example, the Journal of Comparative Policy Analysis), textbooks, and edited volumes from around the world. The developments in the 21st century suggest that there are many ways to sort out topics relevant to the policy analysis field. Some topics carried over from the earlier eras—types of policy issues, the diverse relationships between analysts and clients, the type of analysis required, its time frame, the stage of the policy process at which it occurs, where in the system it occurs (for example, whether it takes place inside government or outside government), the impact of the structure of the government involved, the placement of analysis in central agencies vs. program agencies, whether analysts and clients are career or political actors, the appropriate skill set found in analysts, and the boundaries between policy analysis and management.
For the first time it became clear that it was important to give attention to the structure of the relevant government and the history of its operations. Thus, variation in policy analysis approaches can be attributed to the structure of government (for example, whether it is a centralized or federal system) or to the historical demands of eliminating colonialism, achieving democracy, or responding to the end of the Soviet Union. For many, the policy analysis field was evidence of American exceptionalism. But if anyone had been pushed to make such comparisons, they would likely have emphasized the differences in political structure between a parliamentary system, where the executive branch is viewed as a part of the legislative branch, and the shared-power institutional design found in the United States among the legislative, executive, and judicial branches. Variations that are found between parliamentary systems and the U.S. system have made it more difficult to generalize about a single approach that undergirds the policy analysis function. Policy analysis in the United States has moved far beyond government. It is found in all nodes of the policy system; not only does it occur inside of government, but it is found in interest groups, nongovernmental organizations, and state and local agencies. The shifts that have taken place in the field in the United States have changed the view of clients; clients are now not limited to top officials within the agencies. The proliferation of policy analysts and policy analysis offices throughout the nooks and crannies of an agency means that the offices found at the top reaches of the departments or agencies no longer have a monopoly on the production of analysis. As a result, analysts attached to these offices now have to negotiate with analysts in other parts of the department. In some cases, ideology—not information—has become the currency of policy debate. In addition, clients have moved from individual officials to collective bodies or even institutional processes within the agency. These developments have spawned an acknowledgement that analysis can take place in all of the stages of the policy process. Analysis plays a role in bringing policy issues to the policy agenda and is involved in the formulation of policy details, in ascertaining the details of implementation, and in the evaluation process. In some instances, analysts associated with particular policies or programs have personally moved from an analytic role to an implementation role (being actually involved in managing a program). The focus on implementation has led to a blurring of boundaries between management and policy development; this has also been supported by the contemporary interest in performance management, which sometimes links evaluation activities to performance assessment.
Where are we today in the United States?

Level of formalization of the analytical process

The image of the policy analyst as a quasi-academic staffer whose product is a set of written documents no longer fits many practicing U.S. policy analysts; however,
there are still those who conform to the original model of the profession. Yet there are an increasing number of analysts who make their contribution to the process through meetings and other forms of more informal interaction. In some settings the stylized approach to analysis, ending with formal recommendations, has been replaced by interactions in which the analyst is one of a number of participants in policy discussions. In addition, there is a sense of urgency about the need for decisions, a blurring of lines between managers and analysts, and a blurring of differences between analysts and more traditional academic researchers. The availability of information on the Internet allows broader access than was possible in the past, when information was largely controlled by the analysts. The proliferation of policy analysis both inside and outside of government has supported an approach to information and data that moves far beyond a positivist orientation. As Carol Weiss (1983) described it, it is difficult to think about information without also acknowledging interests and ideology. While in the past U.S. policy analysts seemed to have discovered ways to minimize the impact of the politicization of issues on their task, this had become more difficult to do by the second decade of the 21st century. Often the demand for analysis was effectively a request by decision makers for justifications for their political agendas. As such, the clients for analysis were not interested in thinking about alternatives to those positions, much the way lawyers use evidence to support their arguments. These changes (and others) have contributed to the existence of a policy analysis profession in the United States that has a number of attributes. Perhaps the most dramatic set of changes dealing with the globalization of the field in the United States has taken place in the classroom. The boundaries between policy analysis and management have become increasingly blurred and, at the same time, policies are actually implemented by state and local governments and private organizations, which may not be willing to share information about their experiences. Perhaps most importantly, these changes have contributed to a significant modification in the skills that are viewed as essential for individuals entering the field.

Networks as clients

Yet another development has occurred in the 21st century that has had a major impact on the world of policy analysts and the way they think about clients. Views about decision-making processes have moved to quite a different approach. The views of an individual client were often framed within a hierarchical decision-making model. In that model, decisions followed the traditional hierarchical structure, and thus the assumed client would have the authority and power to make a decision. By the end of the 20th century, however, policy analysts often assumed that decision making was the result of a bargaining process. Thus, the proliferation of analysts and analytic organizations fits nicely into the bargaining relationships
between multiple players, most of whom were located somewhere within the governmental structure. By the first decade of the 21st century, another approach was added to the decision-making repertoire: the use of networks. Although networks have captured the interest of scholars in a variety of fields both in the United States and abroad, it is not always clear how they operate as a formal decision-making approach. McGuire and Agranoff describe a network as a structure that ‘includes governmental and nongovernmental agencies that are connected through involvement in a public policy-making and/or administrative structure through which information and/or public goods and services may be planned, designed, produced, and delivered’ (McGuire and Agranoff 2011, p 266). Their analysis focuses on the operational limitations of networks caused by power imbalances, overprocessing, and policy barriers. They note that networks represent both formal and informal approaches; many of them do not possess formal authority to act, while others merely exchange information and identify problem-solving approaches. When networks contain a mixture of actors with different resources, it is not always clear how those without formal authority can operate within those relationships. These are issues that are embedded in situations where the network itself is the policy analyst’s client. Since the network is not an entity with clear or simple goals, how does the analyst determine the interests of the body when—by design—it contains players drawn from multiple interests and settings? And many of those interests represent substantive policy conflicts. These conflicts can emerge from the combination of public sector and private sector players, representatives of interest groups, multiple public agencies, and players from the various nodes of the intergovernmental system. McGuire and Agranoff note that it is difficult to conclude that networks are replacing governmental agencies or that networks are controlling government. It is not always clear where the boundaries lie between government agency and network. But they (and others) suggest that it is important for researchers to look at the reality of who actually makes decisions in and through networks. While there is fairly widespread acknowledgement that decisions are often made by networks (rather than by individual officials or by government alone), it is not clear how these developments affect the policy analysis process.

The economic environment

Policy analysts have always been concerned about questions related to the cost of proposed action. The early interest in cost-benefit analysis not only pushed analysts to think about the overall cost of an action but also gave them a framework to think about who pays and who benefits from decisions. In that sense, the analytic approach of cost-benefit analysis required analysts to think about the consequences of their recommendations for real people and to acknowledge that there might be differential costs and benefits to different categories of individuals.
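The arithmetic behind that framework can be stated compactly. As an illustrative sketch, using the standard textbook formulation rather than anything presented in this chapter, suppose benefits B_t and costs C_t accrue in year t and are discounted at a social discount rate r over a horizon of T years:

\[
\mathrm{NPV} \;=\; \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}},
\qquad
\mathrm{BCR} \;=\; \frac{\sum_{t=0}^{T} B_t/(1+r)^{t}}{\sum_{t=0}^{T} C_t/(1+r)^{t}}.
\]

Because B_t and C_t are aggregates, a proposal can show a positive net present value while still imposing net losses on particular groups, which is why the framework pushed analysts to ask who pays and who benefits rather than only whether total benefits exceed total costs.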
During the early years of the profession, analysts rarely based their decisions only on cost calculations. Recommendations were not expected to rest only on the lowest-cost alternative; rather, analysts tried to trade off multiple values and to determine the ratio between costs and benefits. It certainly helped that the environment of the early 1960s was largely one of growth and possibilities. By the end of the 1960s, analysts were operating in a Vietnam War environment where there was an acknowledgement that it was hard to have both guns and butter. But it was still possible to ask questions that involved determinations of the abstraction, ‘the public interest.’ During the second phase of the profession, beginning in the mid 1970s, the proliferation of policy analysis offices made it obvious that there were multiple interests at play in most policy decisions. Analysts worried about both costs and benefits but calculated them in terms of their clients. It was the political process that eventually made the trade-offs between the interests of multiple actors through bargaining and negotiation. As the 21st century began, 9/11 and two subsequent wars interacted with more conservative views that advocated a limited role for government and sought to lower tax rates. Increasingly, analysts inside of government saw themselves as writing briefs for decision makers that justified their preferred positions. These and other factors brought a fiscal condition marked by high debt, resistance to tax increases, and skepticism about the effectiveness of government action. This created a climate in which cost was discussed without thinking about benefits. It pushed analysts to think in short-term rather than long-term frameworks. Debate was often posed in ideological terms, and issues were captured by highly politicized interests. Debates became increasingly based on budget numbers, not on real assessments of program effectiveness.

Changing skills and staff

The original depiction of policy analysts was conceptualized by Arnold Meltsner in the 1970s, based on his interviews with the first generation of policy analysts in the federal government. His framework differentiated between types of clients, emphasizing the dynamics that occurred as a result of the proliferation of policy analysis inside of government agencies. However, this typology did not reflect the growth of the policy analysis profession outside of government. Today a large percentage of policy analyst jobs are found outside of government. Many organizations are looking for staff who have substantive knowledge of specific policy areas. Individuals come into policy analyst positions because of their specialized knowledge of a policy sector. In addition, organizations seem to be seeking written communication skills, collaborative teamwork, and computer-related skills. They also focus on candidates’ oral communication skills as well as evidence of their organizational abilities. A number of organizations are looking for individuals who have the ability to deal with policymakers. If the organization is involved in international work, it often seeks evidence of international experience as well as language skills. If a policy analysis organization is identified
with particular ideologies or policy positions, it might require potential staffers to show their commitment to those positions. During the 21st century, changes have occurred both in the world of practice and in the academy. The modifications seem to be more visible and pronounced in the academic world as the field reaches midlife. While policy analyst practitioners represent diverse and varied roles and skills, the academic face of the field seems to have swung back to more traditional academic perspectives. As a result, there does appear to be a disjuncture between the world of practice and the academy. Several patterns seem to be evidence of this retreat to traditional academic perspectives. First, the differentiation between policy research and policy analysis has become much less clear (Vining and Weimer 2010). Policy research approaches draw on broad social science research and do not focus on the relationship between the policy analyst and the client he or she is advising. In that sense, the work is less sensitive to the advising role of policy analysts. Second, faculty members are increasingly recruited from traditional academic fields and have been evaluated through the performance criteria of those fields. Third, fewer faculty members than in earlier years either come to the academy with experience as practitioners or are encouraged to spend time in a practitioner role during sabbaticals or other forms of leave. Fourth, the academic work of the faculty tends to retreat into technocratic postures and to avoid policy issues that require attention to framing questions and problems related to defining appropriate strategies. Much of the work that is accomplished may be important methodologically but does not confront the types of policy problems where there are limited formal data sets.
Conclusion

At least three different generations can be identified in this depiction of the development of the policy analysis field in the United States. It is not surprising that this field is constantly developing and moving, since it operates within a highly turbulent environment. After more than 50 years, the policy analysis field has taken its place in the contemporary world of decision making, but it is also clear that the original view of the policy analysis activity as a process of advising the prince or ruler is very limited. Policy analysis is now a field in the United States and around the globe with multiple languages, values, and forms, and with multiple individuals and groups as clients. Its conversations take place across the policy landscape. Though its diversity is frustrating and dissonant to some, it also mirrors American society itself. Instead of being a profession captured and controlled by an elite few, it is now democratized, with an eclectic set of practices and people who represent very different interests. As the field has become more complex, however, it is increasingly difficult to establish a clear sense of professional norms and a definition of success.
As the profession has changed over time, policy analysts have struggled to figure out what criteria they should use to evaluate their own work. Analysts have fewer opportunities to point to new programs and take pride in the role they played in bringing them to life. More often than not, analysts confine their activity to small-scale issues and to working at the margins of policy debates. Given the range of actors and analysts involved in policy discussions, it is difficult to point to one’s individual contribution to the process. Analysts continue to confront the balancing act that has been a part of the profession since its early days: the conflict between responsiveness to the client and the values of rigor and fairness. Although few analysts would cling to abstract concepts of neutrality and objectivity, recognizing that values and interests are implicit in most policy discussions, they do want to produce work and advice that are as complete and balanced as possible. The norms established for the academic expression of the field are not always sensitive to the real world of decision making. Though this has been a challenge for analysts since the early days of the profession, the blurring of the boundary lines between analysis and advocacy has made this task more difficult in recent years. There seems to be a disconnect between the analyst’s perception of self-worth (often drawn from Era one) and the real contribution that the individual makes in the nooks and crannies of the policy process. Many analysts seem to believe that formal studies or reports are the only types of products that are noteworthy. They seem to need a language to describe what they do and to convince themselves—as well as others—that they contribute to the process. They are challenged to think about ways of developing standards for their work that reflect their diverse experiences. What are the standards against which work should be measured? Should they be drawn from a peer review model? Should accountability revolve around questions of utilization? How should one deal with the perception that this is an elitist profession? Nor has there been a commitment to encourage self-reflection on the part of seasoned professionals and to give them the tools to think about their contributions. There has been very little attention given to patterns of career development in the field, the kinds of skills that are now appropriate for the profession, and the expectations about packaging or marketing the results of the analytic process. Students should be familiar with techniques such as organizing and running focus groups; they have to figure out ways to gauge the intensity level around certain issues, assessing not only classic political feasibility questions but also the level of venom around some policy areas. They are required to be sensitive to the demands of making presentations to busy clients and to acknowledge that many clients participate in networks with multiple players and diverse perspectives.
Future changes in the policy environment

Policy analysts can expect to deal with several shifts in the policy environment. These include continuing fiscal scarcity, globalization, devolution, and political conflict, all of which challenge the framework that was developed in the early years of the profession.

Fiscal scarcity

For many years, analysts have operated within the confines of budget deficits, the parameters of policy possibilities being constrained by very limited resources. In the short run, the decision rules that have been adopted for creating budgets will continue to constrain activity at the national level and largely emphasize short-term proposals.

Globalization

Increasingly, policy issues have global dimensions, and analysts will be required to think about the ways that policy decisions framed in the context of the U.S. system will affect policies in other countries. Conversely, when thinking about policy issues in the United States, analysts are challenged to think about the effects of international perspectives on domestic policy.

Decentralization and devolution

Increased interest in federal systems and other forms of decentralization, the growth of the European Union, and the end of the Soviet Union have all created complex political and social systems that share some attributes with the United States. These developments seem to have contributed to interest in analytic and advising processes that had not been contemplated in traditional parliamentary or centrally controlled systems. Over the past several decades, devolution of responsibilities to state and local governments and reliance on contracting with the private sector have shifted the points of leverage for policy change away from Washington. Devolution of policies to state and local levels has also created new opportunities and interest in policy analysis by nongovernmental groups.

Political and ideological conflict

It appears that the current environment of ideological and partisan conflict is likely to continue into the future. Given the intensity of the conflict in the contemporary setting, it is likely that—at least in the short run—policy analysts will continue to confront the pressures of ideological and partisan debates. The highly charged
political and partisan environment has created increased pressure to focus on the views of the client rather than the lessons that emerge from balanced analysis.

A commitment to democratic values

There have been commentators throughout the evolution of the policy analysis field who have expressed fears about the monopoly on analysis found in the early years of practice. Analysts are now attached to a pluralistic and fragmented policy world. Though this has opened up the profession, it has also made it more difficult to define in rigorous terms the attributes that make up ‘good’ policy analysis or to find effective ways of dealing with networks. More people are now aware of the limitations of focusing only on efficiency values as they establish criteria for assessing options. Technological developments and a commitment to open information have eroded the tendency for analysts to hold information close at hand. Efforts to democratize data have increased the ability of various players to have information available to them. Concern about transparency has become stronger and has had an impact on the work of policy analysts. Today the situation is very complex. We are a field with multiple languages, values, and forms, and with multiple individuals and groups as clients. We confront conflict between analysts who strive for objectivity and those who begin with an advocacy posture. Often our work involves dueling between analysts. Our conversations take place across a very broad policy landscape. We operate with many different definitions of success. Some of our efforts meet these expectations and others do not. But policy analysis in the United States today has taken its place in the contemporary world of decision making.

Notes
1 This chapter draws on Beryl A. Radin (2013) Beyond Machiavelli: Policy Analysis Reaches Midlife (2nd edn), Washington, DC: Georgetown University Press.
2 Heclo (1972, p 117) suggests that developments in political science did not always support an enthusiastic move toward policy advising.
3 Reprinted in Dror (1971, pp 225–34).
4 Much of this discussion is drawn from Radin (1997).
5 The term was originally coined by Rittel and Webber (1973).
References
Beam, D. (1996) ‘If Public Ideas Are So Important Now, Why Are Policy Analysts So Depressed?’, Journal of Policy Analysis and Management 15(3): 430–37.
Dror, Y. (1971) Ventures in Policy Sciences, New York: Elsevier.
Heclo, H. (1972) ‘Modes and Moods of Policy Analysis’, British Journal of Political Science 2(1): 83–108.
Heineman, R., Bluhm, W., Peterson, S., and Kearny, E. (1990) The World of the Policy Analyst: Rationality, Values, and Politics, Chatham, NJ: Chatham House Publishers.
Hogwood, B. and Gunn, L. (1984) Policy Analysis for the Real World, Oxford: Oxford University Press.
Malbin, M. (1980) Unelected Representatives: Congressional Staff and the Future of Representative Government, New York: Basic Books.
May, P. (1998) ‘Policy Analysis: Past, Present, and Future’, International Journal of Public Administration 21(6–8): 1098.
McGuire, M. and Agranoff, R. (2011) ‘The Limitations of Public Management Networks’, Public Administration 89(2): 266.
Meltsner, A. (1976) Policy Analysts in the Bureaucracy, Berkeley, CA: University of California Press.
Metcalf, C. (1998) ‘Presidential Address: Research Ownership, Communication of Results, and Threats to Objectivity in Client-Driven Research’, Journal of Policy Analysis and Management 17(2): 153–63.
Nakamura, R. and Smallwood, F. (1980) The Politics of Policy Implementation, New York: St. Martin’s Press.
Nelson, R. (1989) ‘The Office of Policy Analysis in the Department of the Interior’, Journal of Policy Analysis and Management 8(3): 395–410.
Primus, W. (1989) ‘Children in Poverty: A Committee Prepares for an Informed Debate’, Journal of Policy Analysis and Management 8(1): 23–34.
Radin, B. A. (1984) ‘Evaluations on Demand: Two Congressionally Mandated Education Evaluations’, in R. Gilbert (ed), Making and Managing Policy, New York, NY: Marcel Dekker.
Radin, B. A. (1992) ‘Policy Analysis in the Office of the Assistant Secretary for Planning and Evaluation in HEW/HHS: Institutionalization and the Second Generation’, in C. Weiss (ed), Organizations for Policy Analysis: Helping Government Think, Newbury Park, CA: Sage Publications, 144–60.
Radin, B. A. (1997) ‘Presidential Address: The Evolution of the Policy Analysis Field: From Conversation to Conversations’, Journal of Policy Analysis and Management 16(2): 204–18.
Rein, M. and White, S. (1977) ‘Policy Research: Belief and Doubt’, Policy Analysis 21(1): 239–71.
Rittel, H. and Webber, M. (1973) ‘Dilemmas in a General Theory of Planning’, Policy Sciences 4: 155–69.
Rivlin, A. M. (1998) ‘Does Good Policy Analysis Matter?’, Remarks at the Dedication of the Alice Mitchell Rivlin Conference Room, Office of the Assistant Secretary for Planning and Evaluation, Department of Health and Human Services, Washington, D.C., February 17.
Robinson, W. (1989) ‘Policy Analysis for Congress: Lengthening the Time Horizon’, Journal of Policy Analysis and Management 8(1): 2.
Robinson, W. (1992) ‘The Congressional Research Service: Policy Consultant, Think Tank, and Information Factory’, in C. Weiss (ed), Organizations for Policy Analysis: Helping Government Think, Newbury Park, CA: Sage Publications, 184.
Thompson, P. and Yessian, M. (1992) ‘Policy Analysis in the Office of Inspector General, U.S. Department of Health and Human Services’, in C. Weiss (ed), Organizations for Policy Analysis: Helping Government Think, Newbury Park, CA: Sage Publications, 161–77.
Vining, A. and Weimer, D. (2010) ‘Foundations of Public Administration: Policy Analysis’, Public Administration Review.
Weiss, C. (1983) ‘Ideology, Interests and Information: The Basis of Policy Positions’, in D. Callahan and B. Jennings (eds), Ethics, the Social Sciences, and Policy Analysis, New York: Plenum Press.
Weiss, C. (1989) ‘Congressional Committees as Users of Analysis’, Journal of Policy Analysis and Management 8(3): 411.
Weiss, C. (1998) Evaluation: Methods for Studying Programs and Policies, Upper Saddle River, NJ: Prentice Hall.
Wildavsky, A. (1969) ‘Rescuing Policy Analysis from PPBS’, Public Administration Review 29: 189–202.
Wildavsky, A. (1976) ‘Principles for a Graduate School of Public Policy’, Journal of Urban Analysis; reprinted in A. Wildavsky (1979) Speaking Truth to Power: The Art and Craft of Policy Analysis, Boston, MA: Little, Brown & Co., 407–19.
Wildavsky, A. (1997) ‘Rescuing Policy Analysis from PPBS’, in Classics of Public Administration, Fort Worth, TX: Harcourt Brace College Publishers, 275–88.
Williams, W. (1998) Honest Numbers and Democracy, Washington, D.C.: Georgetown University Press.
THREE
The argumentative turn in public policy inquiry: deliberative policy analysis for usable advice

Frank Fischer
The argumentative turn in the development of the field in the United States emerged from a practical problem. Early in the development of policy analysis it became apparent that much of the work was of questionable value or even useless to policymakers. Toward the end of the 1970s scholars were becoming increasingly aware of the significant disjunction between social research and the practical world of policy decision making. Explaining this disjunction became a pressing question for the newly emerging discipline. It was in response to this disturbing problem that the argumentative turn had its origins.1 The discussion here opens with the basic issues involved in the effort to establish a discipline capable of offering effective advice for policy decision makers. The focus then shifts to problems that appeared in the field’s initial stages from the application of empirical social scientific methods to policy problem solving. Two competing perspectives for confronting these difficulties are outlined, one political in nature and the other dealing with epistemological and methodological issues. The focus then turns to the methodological orientation as delineated in the argumentative turn. Finally, after presenting a logic of policy argumentation, the chapter closes with a discussion of the implications of the approach for the relation between citizens and policy analysts.
Knowledge, politics, and policy: early developments

The role of political and policy advice is not a new question in politics and governance. The significance of politically relevant counsel, together with the question of who should supply it, is found in the earliest treatises on political wisdom and statecraft. It is a prominent theme in the philosophy of Plato, in Machiavelli’s discussion of the role of the Prince, Saint-Simon and Auguste Comte’s theory of technocracy, the ‘Brain Trust’ of Franklin Roosevelt’s New Deal, the writings of U.S. policy intellectuals during the Great Society of the 1960s, and modern-day think tanks since Ronald Reagan and Margaret Thatcher, just to name a few of the more prominent instances (Fischer 1990). Today it is a prominent topic in the discourses of policy theory. From the outset, a basic theme running through such writings has focused on replacing or minimizing political debate with more rigorous or systematic modes
of thought. Within this tradition we recurrently find the idea that those with knowledge should rule (Fischer 1990). By the time of Saint-Simon and Comte such writings involved a call for a more ‘positivist’ epistemological form of knowledge, emphasizing testable theory rather than the ‘negativism’ of earlier philosophers based on critique and speculative reason. Whereas philosophical critique was seen to provide no concrete foundation for social progress, the accumulation of positivist knowledge provided the basis for the building of a harmonious, prosperous society. Policy analysis is in the first instance an American story, with roots in the Progressive Era of American politics around the turn of the 20th century. During this period, there was an explicit call for technocratic expertise in politics. Herbert Croly, a prominent Progressive thinker of the period, became a devotee of the theories of Comte (Levy 1985). Drawing on Comte’s theory of technocracy, he promoted the role of policy expertise as an approach to political reform designed to deal with the pressing social and economic problems brought on by rapid urban industrialization. Specifically, he and many leading Progressives called for the adoption of the principles and practices of Taylorism and ‘Scientific Management’ to guide such reforms (Wiebe 1967).2 In this view, a ‘neutral’ scientific method, as opposed to political argumentation, would show the way to the good society. The influence of this ‘Progressive’ creed is attested to by the election of two Progressive presidents in the United States, one a Republican—Theodore Roosevelt—and the other a Democrat—Woodrow Wilson. These elections and the politics surrounding them reflected the would-be ‘value-free’ nature of scientific management and the ‘nonideological’ approach to good government through expertise that it prescribed. Indeed, Wilson, himself a political scientist (and often considered one of the founders of the American discipline of public administration), called for scientifically explicating the efficient practices of Prussian bureaucracy and applying them—independently of culture or context—to American government. In the process, this new field of inquiry and its practices would replace the traditional emphasis on legal-rational bureaucratic authority with social scientific principles of organization and management (Wilson 1887; Weber 1978). Technocracy received renewed support in the 1920s from Thorstein Veblen (1933) and his call to replace the capitalist price system with the decisions of efficient engineers. The goal for Veblen was to eliminate waste and encourage efficiency in the American economy through the methods and practices of experts. Pinning his hopes on the political possibilities posed by a newly emerging professional class, he called on the ‘absentee owners’ of corporate America to transfer their power to reform-minded technocrats and workers. Throughout this early work the term ‘technocracy’ was seen as a positive new direction for government. The message was straightforward: replace the talk of business owners or politicians with the analysis of the experts. Some technocratic thought was influenced by emerging socialist governments emphasizing planning, although it
was often difficult for this enthusiasm to be explicitly expressed, especially when it related to the newly formed Soviet Union. A major expansion of Progressive ideas came with the New Deal ‘Brain Trust’ of the 1930s, which added a new dimension of expert advice. Particularly influential were the writings of John Maynard Keynes, holding out the possibility of technically managing the economy (Graham 1976). This period saw a large-scale influx of economists and social scientists into Washington, employed in policy planning and related activities throughout the growing administrative state. It also featured a change in the politics of the policy process, or what came to be called the New Deal ‘Liberal Reform Strategy’ (Karl 1975). The political strategy brought experts and politicians together, through so-called ‘Blue Ribbon’ advisory commissions, to identify and analyze contemporary social and economic problems. The reports of these commissions were then turned into central components of the liberal reform agenda by party politicians and subsequently offered to voters in electoral campaigns. In the postwar period the emphasis on the role of empirical social science in political advisement took full form in the development of the field of policy analysis, from Harold Lasswell to Aaron Wildavsky. Although policy analysis emerged in less grandiose terms than Lasswell had envisioned, including his recognition of the need to include more than empirical investigation, the field and its practices that emerged in the 1960s were dominated by positivist-based methods. A worry here was the fact that the public seemed to be disturbingly uninformed and often irrational in its thinking. A major manifestation of this was fascism in Europe during World War II. Another was the emergence of various studies in the United States that showed relatively high levels of ignorance on the part of the public, raising important questions about the viability of democratic politics in an increasingly complex society. It was a theme that resonated with Dewey’s (1927) earlier apprehensions about the ability of the public to meaningfully engage issues in a technological society. In fact, Lasswell emerged from the context of Progressivism and was decisively influenced by American Pragmatism in the form of Dewey’s approach to social problem solving. To address Dewey’s problem of the public, Lasswell outlined what he referred to as the ‘policy sciences of democracy.’ It was an orientation designed to provide politicians, policy decision makers, and the public with the relevant knowledge needed to render informed decisions. Toward this end, Lasswell (1951) advanced a grand multidisciplinary approach that included qualitative disciplines like anthropology as well as economics and political science. The approach recognized that policy analysis requires two dimensions: it needs not only a contextual understanding of how policy processes work, but also a body of causal knowledge related to the particular problems at issue—poverty, environment, and so on. Taken together, this has proven to be no small task.
Technocratic expertise and the rise of counter-expertise: policy argumentation

By and large, policy analysis neglected Lasswell’s broader theoretical orientation. Taking root in the United States, the actual practice started out with a much narrower technical or technocratic perspective, influenced by World War II operations research in combination with postwar developments in systems analysis. The field, as we know it today, emerged with two US wars in the 1960s—the War on Poverty and the Vietnam War—before traveling abroad. Both were driven by political forces committed to ‘rational’ policy analysis. The War on Poverty started out with a noble goal, namely to end poverty in American society. This was based on a straightforward theory of policy and policy analysis. It assumed a relatively direct connection between knowledge and decision making; scientifically tested social scientific findings would speak directly to the problems at hand. It assumed that facts and values could be neatly separated and therefore that policy research could be largely value-free. And it assumed that better arguments based on such knowledge could diminish, if not eliminate, political disagreement. When politicians and the public possess empirically demonstrated facts and solutions, so went the contention, what was there left to argue about? The approach also presumed, in theory, that we have adequate causal problem-oriented knowledge to intervene effectively. That is, it presumed that we know the reasons why problems such as poverty exist and that such knowledge can be translated into action programs (Farmbry 2014). But experience rather quickly showed this to be a serious misjudgment. Much of the War on Poverty, based on research funded earlier by the Ford Foundation, tended to deal more with the consequences of poverty than with attacking its underlying causes. And the Vietnam War, as Secretary of Defense Robert McNamara (1996) later admitted, was largely based on a misunderstanding of Vietnamese culture and political motivations, with disastrous results for both the Vietnamese and the United States. In this early phase, there was a great flurry of activity devoted to policy formulation, using causal findings about poverty from various disciplines, packaged as a comprehensive program (Wood 1993). The simplicity of the policy thinking was captured by the ‘Moon/Ghetto’ metaphor (Nelson 1977), which assumed a straightforward equivalency between engineering and social engineering: if the country could put a man on the moon, it could eliminate poverty in an ‘affluent’ society. At the same time, the Vietnam War was pursued technocratically by the Pentagon, using elaborate systems-based planning methods and a reliance on empirical measures, such as body counts and kill ratios. The enthusiasm for such methods established policy analysis as both an attractive academic specialty and an emerging job category (an interesting story unto itself). There was a ‘policy analysis explosion’, as the editors of Society magazine titled a symposium on the subject (Nagel 1979). Moreover, this was academically contagious. Both policy theory and policy analysis started to travel to various European countries, even if only slowly at first. The Netherlands, a land
long known for its import/export industries, was one of the first to take up this literature and bring the enterprise to Holland, thanks in significant part to Robert Hoppe. And slowly other European countries began to follow suit, especially in northern Europe. This gave rise to ongoing exchanges between European and American policy scholars. Some Europeans even made their careers by more or less representing U.S. theories in their own countries. But many also made new contributions of their own, especially in matters related to policy implementation and evaluation. Policy-analytic theory, however, encountered some unexpected and very challenging questions along the way, particularly as these related to the promise of problem solving. Early on, the ‘utilization’ of knowledge proved to be more difficult than conventional policy theory suggested. Not only did the United States fail to solve the poverty problem, it flat-out lost the Vietnam War, despite the policy roles of the ‘best and brightest,’ as the phrase went (Halberstam 1972). The concern was captured by James Schlesinger (1969, p 310), later Secretary of Defense, when he testified to Congress that ‘everyone is, in principle, in favor of policy analysis but few are hopeful that its conclusions will be utilized in real-world-policy making.’ And the Chairperson of the Council of Economic Advisors, Charles Schultze (1968, p 89), concluded that ‘What we can do best analytically, we find hardest to achieve politically.’ These concerns gave rise to a new specialization and journal in the field concerned with the question of utilization. The journal Knowledge explored both practical case studies concerning the uses—successes and failures—of policy findings and broader discussions of the relation of knowledge to society more generally. Along the way, others such as Rittel and Webber (1973) spoke of ‘wicked problems,’ in which not only are the solutions missing, but the very definition of the problem is unknown or uncertain. Today, scholars also refer to ‘messy problems’ to capture the uncertainty and risks associated with complex problems like climate change or the global financial crisis (Ney 2009).
In search of usable advice: political and disciplinary responses

There were, broadly speaking, two competing responses to these setbacks, one political and the other academic. On the political front, conservatives singled out the methods and practices of policy analysis as ‘metaphysical madness,’ an argument that picked up steam from the Nixon to the Reagan years. Send these ‘so-called experts’ back to the ivory tower, these conservatives argued: bring back lawyers and businessmen, viewed as closer to the practical realities of Main Street. However, these conservatives—both politicians and intellectuals—did not end up rejecting policy analysis; rather, they turned it to conservative purposes. Instead of using policy analysis to create new social programs, they discovered that they could also use it to cut or eliminate the same programs. Counseled by conservative policy intellectuals, these administrations shifted first to ‘evaluation research’ directed to program outcomes rather than policy formulation inputs.
Insofar as social programs often show mixed results, given the complexity of social problems, they discovered that they could use rigorous, empirical evaluations to provide arguments for cutting programs out of the budget. It is not hard to challenge programs with uncertain or mixed outcomes, or to challenge them on methodological grounds. Questions could be raised about how the data were obtained, which definitions and assumptions were employed, the nature of the sample groups, what kinds of statistical tests were employed, and the like. Indeed, this gave rise to a kind of ‘politics of methodology.’ To further facilitate this orientation, the economists’ method of cost-benefit analysis was elevated by conservatives to serve as the test for all programs. Given the difficulties in monetizing social benefits as opposed to costs (which are generally much easier to identify), many liberal programs—especially social programs—have trouble surviving a cost-benefit test. A cost-benefit test can thus function to filter out deliberation on social-democratic issues by adding a business-friendly overlay on the decision process. This is particularly the case for programs justified in the name of the ‘public interest’ or ‘common good.’ It is not that deliberative argumentation vanishes; the decision makers always deliberate among themselves. It is rather the scope of the deliberation that is at issue—in particular, who is excluded. In this regard, cost-benefit analysis and evidence-based decision making tend to be used to justify political decisions made behind closed doors. For conservatives, usable policy knowledge involves a mix of program performance data and comparisons of costs and benefits, presented as the essence of rationality in decision making. This approach was further supported by ideas about ‘evidence-based’ policy analysis, which emerged in Britain during the Blair years before spreading to other parts of the world. In short, empirical objectivity became the privileged mode of thinking, as regularly reflected in the rhetoric of politicians and policymakers. This orientation is not all that different from the earlier approaches the conservatives were criticizing. But this was about partisan politics, not usable knowledge per se. The competing response to the problem of usable knowledge came mainly from academic policy scholars, who recognized the need to confront the fact that policy analysis—though widely commissioned as an approach to problem solving—was not widely used. Research showed that some two-thirds of policy research was never used in any way by the decision makers who paid for it. This led deLeon (1989) to comment ironically that policy analysis would not be able to pass a cost-benefit analysis. One of the first efforts to put this problem in theoretical perspective came from Collingridge and Reeve (1989), two British researchers who referred in their book, Science Speaks to Power, to the ‘unhappy marriage’ between social science and policy. They developed a two-category model—one, an ‘over-critical model’ in which scientific contributions lead only to continuing technical debate and thus fuel political conflict; the other, an ‘under-critical model’ in which analysis serves to legitimate predetermined political positions. These two models align
with how conservatives—as well as many politicians generally—have used analysis: either to hamper programs they do not like with further expert debate (such as in the case of climate change) or to justify their existing positions (for example, fiscal austerity). The problem led Lindblom and Cohen (1979) to ask: What is ‘usable knowledge’? Insofar as giving up on policy analysis in a complex and uncertain world seemed unwise, the challenge was to discover what sort of knowledge might be used. The first consideration was the fairly obvious recognition that politicians, in whichever country we find them, are wired differently from professional policy analysts. While policy analysts seek to be problem solvers, this is only a secondary concern of politicians. Uppermost in their minds is not whether their decisions will solve a particular policy problem, but rather what impact their actions will have on their ability to stay in office and/or to make further career gains. For this reason, politicians choose personally trusted advisors to ensure that only ‘safe’ information reaches them. Despite the orientation of many policy-analytic scholars, politicians’ advisors are often chosen not for their policy-analytic expertise but for their ability to protect their patrons’ interests. This tendency, of course, is not new, but we still tend to overlook its deeper implications. More typically, policy analysts look for ways to break through to decision makers—to bridge the gap—rather than to fully acknowledge and confront the depth of the divide. Most mainstream practitioners—in the United States, Europe, and elsewhere—still approach policy analysis as a form of rational—or at least ‘semi-rational’—problem solving. Even if the process is not entirely ‘rational,’ so the argument goes, positivist analysts should strive to make it more rational. But this has largely failed; we need to rethink our practices. As a move in this direction, Lindblom and Cohen (1979) found the solution to the problem in turning away from an exclusive emphasis on professional-analytic inquiry to what they called ‘ordinary knowledge.’ In this view, the idea would be to improve our ‘ordinary’ policy knowledge rather than further pursue—in vain—the effort to supply scientifically validated policy knowledge. This move would, at the same time, break down the divide between analysts and decision makers (practitioners who communicate and trade in ordinary knowledge). It would, as such, help decision makers on their own terms, rather than only on those of academic social science (Fischer 2000). Another approach that moved in a similar direction was advanced by Carol Weiss (1977). In response to the failures to produce usable knowledge, she called for a shift from the problem-solving focus to what she called an ‘enlightenment function.’ In this view, policy analysis is seen to play a less technical, more intellectual role. Rather than seeking technical solutions—which go wanting anyway—the task is more appropriately understood as helping decision makers think about the problems they face—that is, improving their understanding with the help of relevant findings and analytical perspectives (Fischer 2009). This role might not be the direct one that policy analysts have sought, but it is not to be underestimated. For one thing, policy concepts have clearly penetrated political
discourse; politicians speak of a ‘culture of poverty,’ a ‘broken windows’ theory of crime, a ‘clash of civilizations,’ and so on. Indeed, how to think about and frame issues is what social scientists do well. Seen this way, social scientific policy inquiry is scarcely a failure. It is on this perspective that the argumentative turn has sought to build an alternative methodological orientation.
The argumentative turn

The concept of the ‘argumentative turn’ in policy analysis was first introduced by Frank Fischer and John Forester in 1993. They set out a new orientation in policy analysis that represented a shift away from the traditional empirical-analytic approach to problem solving to include the study of language and argumentation as essential dimensions of theory and analysis in policy making and planning. The book was instrumental in stimulating a large body of work in policy research in both the United States and Europe over the subsequent decades. Since the publication of the book, the emphasis on argumentation has converged with other developments in the social sciences focused on discourse, deliberation, social constructivism, narration, and interpretive methods (see Fischer and Gottweis 2012). In pursuit of an alternative approach to policy inquiry, the argumentative turn links postpositivist epistemology with social and political theory and the search for a relevant methodology.3 At the outset, the approach emphasized practical argumentation, policy judgment, frame analysis, narrative storytelling, and rhetorical analysis, among other things (Gottweis 2006). From the early 1990s onward, argumentative policy analysis matured into a major strand in the contemporary study of policy making and policy theory development. As one leading policy theorist put it, this perspective became one of the field’s competing theoretical perspectives (Peters 2004). Over these years the argumentative turn expanded to include work on discourse analysis, deliberation, deliberative democracy, citizen juries, governance, expertise, participatory inquiry, local and tacit knowledge, collaborative planning, the role of the media, and interpretive analysis, among others. Although these research emphases are scarcely synonymous, they all focus attention on communication and argumentation, especially the processes of utilizing, mobilizing, and assessing communicative practices in the interpretation and praxis of policy making and analysis (Fischer 2003; Gottweis 2006). First and foremost, argumentative policy inquiry challenges the view that public policy analysis can be a value-free endeavor.4 Whereas neopositivist approaches generally embrace a technically oriented rational approach to policy making—an attempt to provide unequivocal, value-free answers to the major questions of policy making—the argumentative approach rejects the idea that policy analysis can be a straightforward application of scientific techniques.5 Instead of a narrow focus on empirical measurement of inputs and outputs, it takes the policy argument as the starting point of analysis. Without denying the importance of empirical analysis, the argumentative turn
seeks to understand the relationship between the empirical and the normative as they are configured in the processes of policy argumentation. It thus concerns itself with the validity of empirical and normative statements but moves beyond the traditional empirical emphasis to examine the ways in which they are combined and employed in the political process. This orientation is especially important for an applied discipline such as policy analysis. Insofar as the field exists to serve real-world decision makers, policy analysis needs to be relevant to those whom it attempts to assist. The argumentative turn, in this regard, seeks to analyze policy to inform the ordinary-language processes of policy argumentation, in particular as reflected in the thought and deliberation of politicians, administrators, and citizens (Lindblom and Cohen 1979). Rather than imposing scientific frameworks (theoretical perspectives generally designed to inform specific academic disciplines) on the processes of argumentation and decision making, policy analysis thus takes the practical argument as the unit of analysis. It rejects the ‘rational’ assumptions underlying many approaches in policy inquiry and embraces an understanding of human action as intermediated and embedded in symbolically rich social and cultural contexts. Recognizing that the policy process is constituted by and mediated through communicative practices, the argumentative turn therefore attempts to understand both the process of policy making and the analytical activities of policy inquiry on their own terms. Instead of prescribing procedures based on abstract models, the approach labors to understand and reconstruct what policy analysts actually do, to understand how their findings and advice are communicated, and to understand how such advice is understood and employed by those who receive it. This requires, then, close attention to the social construction of the normative—often conflicting—policy frames of those who struggle over power and policy. These concerns take on special significance in today’s increasingly turbulent world. Contemporary policy problems facing governments are more uncertain, complex, and often riskier than they were when many of the theories and methods of policy analysis were first advanced. Often poorly defined, such problems have been described as far ‘messier’ than their earlier counterparts—for example, climate change, health, and transportation (Ney 2009). These are problems for which clear-cut solutions—especially technical solutions—are missing, despite concerted attempts to identify them. In all of these areas, traditional approaches—often technocratic—have proven inadequate or have failed. Indeed, for such messy policy problems, science and scientific knowledge have often complicated problem solving, becoming themselves sources of uncertainty and ambiguity. They thus generate political conflict rather than helping to resolve it. In a disorderly world that is in ‘generative flux,’ research methods that assume a stable reality ‘out there’ waiting to be discovered are of little help and prone to error and misinterpretation (Law 2004, pp 6–7). The remainder of the chapter explores a number of aspects of the approach, particularly as they pertain to giving policy advice. While discussion of policy expertise and advice-giving in politics is in no way new, the practice of expertise
and advice changed over the latter part of the 20th century (see the chapter by Radin in this volume). The author seeks to show that the argumentative turn can be usefully employed to interpret these changing patterns, and also to offer a number of potentially useful prescriptions. Toward this end, the discussion begins with a brief presentation of the long history of thought about political expertise and advice, moving from political philosophy to social science. It then examines the limits of the social scientific approach and presents the new argumentative approach that has emerged as its challenger, especially as reflected in the field of policy analysis. The advantages of a postempiricist argumentative approach to policy inquiry are then presented, and several practical contributions to policy deliberation based on its tenets are outlined. The discussion closes with an examination of the further postpositivist challenges posed by policy deliberation and the argumentative approach.
The logic of policy argumentation

The turn to argumentation offers a useful alternative to the mainstream technocratic approach to policy analysis and its problems. But an argumentative approach is in itself not enough to qualify as critical policy analysis. The defining characteristic of critical policy analysis is the task of assessing standard techno-empirical policy findings against higher-level norms and values. Such an approach, in other words, requires that policy arguments be submitted to a higher-level normative critique. To clarify this task, we can start by outlining a logic of policy argumentation. Following Majone (1989, p 63), we can understand a policy argument to involve a complex mix of empirical findings and normative interpretations linked together by an ‘argumentative thread.’ Whereas conventional policy analysis typically focuses on the statistical analysis of the empirical elements, the goal of a critical policy analysis is reflexive deliberation. To be reflexive means not only to focus on the problems and the decisions designed to deal with them, but also to examine the normative assumptions on which they are based. Toward this end, the aim is to explore and establish the full range of components that the argumentative thread draws together. This includes ‘the empirical data, normative assumptions that structure our understandings of the social world, the interpretive judgments involved in the data collection process, the particular circumstances of a situational context (in which the findings are generated or the prescriptions applied), and the specific conclusions’ (Fischer 2003, p 191). Beyond technical issues, the acceptability of the policy conclusion in such an analysis depends on this full range of interconnections, including tacit elements withheld from easy scrutiny. Whereas it is commonplace for empiricists to maintain that their empirical-analytical orientation is more rigorous and therefore superior to less empirical, more normative-based methods, ‘this model of critical policy argumentation actually makes the task more demanding and complex.’ The policy researcher
‘still collects the data, but now has to situate or include them in the interpretive framework that gives them meaning’ (Fischer 2003, p 191). In Evaluating Public Policy (Fischer 1995; also see Fischer 2012; Fischer and Gottweis 2012), the author has presented a multi-methodological scheme for integrating these levels of analysis, along with concrete case illustrations. Drawing on Toulmin’s (1958) informal logic of argumentation, the approach offers a four-level logic of interrelated discourses that moves analysis from concrete questions concerning programmatic efficiency or effectiveness, up through the relevant situational context of the program and an assessment of the program’s implications for the societal system, to the abstract normative concerns about the impact of the policy on a particular way of life. More specifically, the questions range from whether a policy program works, whether or not it is relevant to the particular situation to which it is to be applied, and how it fits into the structure and process of the existing societal system, to a critique of this system in terms of the basic normative ideals that do or do not undergird it (Fischer 1995; 2003). Such a critique, from a Habermasian (1987) perspective, is motivated by the ‘emancipatory interest’ inherent to human inquiry. It is this additional set of deliberations—contextual, societal, and normative critique—that defines critical policy inquiry. Such deliberations across the four levels offer the critical policy analyst a framework for organizing a dialectical communication between policy analysts and the participants involved in a deliberation. It is the case, to be sure, that many of the dominant policy players will not be interested in engaging in such an extended deliberation, as it would run the risk of exposing basic ideological beliefs to discussion and criticism. But this in no way renders the perspective irrelevant. Critical policy analysis is advanced to serve a larger public, not just the immediate decision makers and stakeholders. It can be used, in this regard, as a probe for testing opposing arguments, thus permitting other parties to a deliberative struggle—particularly affected parties—to construct better arguments that both enrich the quality of the communicative exchanges and, in the process, increase the chances of more effective and legitimate outcomes. In this approach, good policy advice depends not only on empirical evidence, but also on establishing understandings that can help to forge consensus. Here the best decision is frequently not the most efficient one. Rather, it is the one that has been deferred until all disagreements have either been discursively resolved or placated, at least enough to be accepted in specific circumstances. Even though apparently less than optimally efficient, such decisions have a greater chance of being politically usable and thus implementable. Because deliberation can eliminate or diminish the counter-reactions of those who would otherwise seek to block a particular decision, it also serves to keep the political group together. The trust and good faith that result from such deliberation carry forward to future policy decision making. The policy argument, then, is never an objective category per se. It is instead a politically negotiated social construction. As such, we can understand policy
knowledge to be a ‘hybrid’ fusion, a kind of ‘co-production,’ of empirical and normative elements (Jasanoff 2006; Fischer 2009). More than just a matter of efficiency, policy statements are fundamentally normative constructs about why and how an action should be done. As Schram and Neisser (1997) have demonstrated, a policy also conveys, directly and indirectly, a deeper narrative about the particular society of which it is part. It is a point clearly illustrated by Schneider and Ingram’s (1993) discussion of the social construction of a policy’s target population. In particular, they lay out the ways in which such constructions—for instance, who are the deserving and undeserving recipients of public monies—are typically employed by political actors to manipulate public opinion to gain electoral advantage.
Concluding perspective: policy epistemics

The argumentative perspective seeks to move beyond the divide between the empirical and the normative by opening up to critical scrutiny the practices of science itself. Of essential importance here is the fact that the normative elements lodged in the construction of empirical policy research rest on interpretive judgments and need to be made accessible for examination and discussion. Recognizing that the social meanings underlying policy research are always interpreted in a particular sociopolitical context—whether the context of an expert community, a particular social group, or society more generally—a fully developed deliberative approach focuses on the ways such research and its findings are themselves built on normative social assumptions that, in turn, have implications for political decision making. That is, they are embedded in the very understandings of the objects and relationships that policy science investigates. Indeed, the very construction of the empirical object to be measured is sometimes at stake. For this reason, empirical policy science cannot exist independently of normative constructions. While introducing deliberation as a complement to the analytic process is an important advance over a narrow technical orientation, from the postpositivist argumentative perspective it can only be understood as a platform from which the next step can be taken. Beyond a complementary approach, deliberation has to be moved into the analytical processes as well. Rather than understanding empirical and normative inquiry as separate activities that can potentially inform one another, we need to see them as a continuous process of inquiry along a deliberative spectrum ranging from the technical to the normative. As deliberation and interpretive judgment occur in both empirical and normative research, they need to be approached as two dimensions along a continuum. It is important to concede that deliberative policy inquiry enters relatively uncharted territory here. From a range of practical experiments, such as consensus conferences and citizen juries, it is evident that citizens are more capable of participating in deliberative processes than is generally recognized (Fischer 2009). But questions about the extent of participation, as well as when and where it
might be appropriate, pose complicated issues and thus need to assume a central place on the research agenda. Particularly challenging is the question of the degree to which citizens can actually engage in the analytic aspects of inquiry. Various research projects demonstrate that they can participate in at least parts of it (Wildavsky 1997), but where should we draw the line? Reconceptualizing empirical and normative inquiry along an interpretive continuum offers an approach for pursuing the answers to these questions. Elsewhere the author has proposed that such investigation be taken up as a component of a new research specialization called ‘policy epistemics’ (Fischer 2000). The development of policy epistemics would draw extensively on work in social constructivism, the newer interpretive sociology of science, and the post-Kuhnian work in the philosophy of science. However, where the constructivist approaches in sociology and philosophy focus on the conduct of science, policy epistemics and the turn to arguments extend the perspective to the goals of an applied social science—policy analysis—and the task of giving advice to decision makers. The orientation, as such, stands between professional policy research and action-oriented policy making. It thus explores the ways in which research speaks to different normative perspectives in the world of action—what kinds of knowledge(s) are relevant to particular situations, what the normative implications of particular empirical findings are, how empirical policy findings and normative perspectives can be brought together in a deliberative process, and the like. These are questions that draw on a constructivist perspective but, at the same time, require different practical modes of thought and deliberation. Policy epistemics, then, brings together the relevant work in interpretive policy analysis, deliberative experimentation, discursive practices, and policy narration to explore ways in which the empirical/normative divide can be opened up to facilitate a closer, more meaningful deliberation among citizens and experts. As such, policy epistemics focuses on the ways people communicate across differences, the flow and transformation of ideas across the borders of different fields, how different professional groups and local communities see and inquire differently, and the ways in which differences become disputes. Following the lead of Willard (1996), it takes the ‘field of argument’ as a unit of analysis. As polemical discussion, argumentation is the medium through which people—citizens, scientists, and decision makers—maintain, relate, adapt, transform, and disregard contentions and the background consensus. Whereas traditional policy analysis has focused on advancing and assessing technical solutions, policy epistemics investigates the way interpretive judgments work in the production and distribution of knowledge. In particular, it examines the social assumptions embedded in research designs, the specific relationships of different types of information to decision making, the uses of information, the different ways arguments move across disciplines and discourses, the translation of knowledge from one community to another, and the interrelationships between discourses and institutions.
Elaborating our understanding of the epistemic dynamics of public controversies would allow for a more enlightened understanding of what is at stake in a particular dispute, including the evaluation of both competing viewpoints and the effectiveness of policy alternatives. In doing so, the differing, often tacitly held, contextual perspectives and values could be juxtaposed, the viewpoints and demands of experts, special interest groups, and the wider public directly compared, and the discursive dynamics among the participants scrutinized. This would by no means sideline or exclude scientific assessment; it would only situate it within the framework of a more comprehensive evaluation. These are questions that underlie the kinds of concerns and problems that confront policy decision makers and citizens. There is no shortage of literature illustrating the ways in which failures in policy making are often attributable to simplistic technocratic understandings of the relationship between knowledge and politics. It is just to these sorts of differing rationalities underlying citizens’ and experts’ responses to each other that policy epistemics would turn our attention. While such work would not need to absorb the efforts of the field as a whole, it could become one of the important specializations in policy analysis. Those who engage in it could work to facilitate democracy while at the same time studying more closely the processes that are involved in attempting to extend it. Short of providing policy solutions per se, it would go a considerable distance toward making an applied policy discipline more relevant to the needs and interests of both political decision makers and citizens. And no less important, as Rorty (1979) has put it, it would help us ‘keep the conversation going.’ Indeed, keeping the political conversation about policy going has emerged as one of the crucial issues of the day, given the rise of distorted forms of communication such as ‘fake news’ and ‘post-truth.’ For this reason, when we look at policy argumentation in the United States today, we can only conclude that the tasks of deliberative policy analysis are as important as—indeed, more important than—they have ever been. Although argumentation has always been part of the policy process, as outlined here, it has become even more salient, and thus more important, in contemporary times. In the current context of extremely polarized political contestation, argumentation has begun to spin out of control. Given that politics in the United States is currently divided into two extremes—Right and Left—fueled by talk radio and think tanks on both sides of the divide, argumentation has become more rhetorical, aggressive, and accusatory. Our ability to understand the logic of valid argumentation is thus crucial to analytically sorting out the competing claims. Given the very detrimental impact that these new realities have on American political culture and on the ability of both citizens and politicians to meaningfully talk to one another about the many problems plaguing the country, an understanding of the nature of deliberation, in particular the relation of normative ideological claims to empirical evidence, is crucial to the future of democratic governance.
Notes
1 This chapter is a revised version of two earlier essays: Fischer, F. (2015), as well as an article that appeared in French in the French Political Science Review.
2 In line with the enthusiasm of the era, the leading sociologist Lester Ward even argued that legislators should have training in the social sciences to be qualified for office, if not be social scientists themselves.
3 Postpositivism refers here to a long-established tradition in the social sciences that approaches the social world as uniquely constructed around social meanings inherent to social and political action. Explanation cannot meaningfully proceed as context-independent or value-free. Thus the pursuit of methods based on the epistemologies of the ‘hard sciences’ leads to a misrepresentation, and thus a misunderstanding, of the social objects under investigation.
4 For theoretical discussion of the postpositivist perspective in policy analysis, see Hawkesworth (1988), Forester (1999), Yanow (2000), Hajer and Wagenaar (2003), Fischer (2003), Lejano (2006), Wagenaar (2011), and Fischer and Gottweis (2012). For empirical applications, see Fischer (1993), Hajer (1995), Schram and Neisser (1997), Bevir and Rhodes (2003), Gabrielyan (2006), Mathur (2008), Needham (2009), Dubois (2009), Orsini and Smith (2010), Plehwe (2011), and Sandercock and Attili (2012).
5 Like all concepts, the concepts of ‘positivism’ and ‘neopositivism’ have their limitations. Nonetheless these concepts have a long tradition in epistemological discussions in the social sciences. The term ‘neopositivist’ is employed to acknowledge that there have been a number of reforms in the ‘positivist’ tradition that recognize the limitations of earlier conceptions of the approach, taken to refer to the pursuit of an empirically rigorous, value-free, causal science of society. That is, there is no one neopositivist approach. The term is employed as a general concept to denote an orientation that continues to strive for empirically rigorous causal explanations that can transcend the social context to which they apply, but recognizes the difficulties encountered in achieving such explanations. Neopositivist policy analysts (Sabatier, for example) typically argue that while policy research cannot be fully rational or value-free, analysts should nonetheless take these to be standards toward which to strive. For general references to these debates see Bernstein (1971), Hawkesworth (1988), and Fischer (2009).
References
Bevir, M. and Rhodes, R. (2003) Interpreting British Governance, London: Routledge.
Collingridge, D. and Reeve, C. (1989) Science Speaks to Power: The Role of Experts in Policy Making, London: Frances Pinter.
deLeon, P. (1989) Advice and Consent: The Development of the Policy Sciences, New York: Russell Sage Foundation.
Dewey, J. (1927) The Public and Its Problems, New York: Swallow.
Dubois, V. (2009) ‘Toward a Critical Policy Ethnography: Lessons from Fieldwork on Welfare Control in France’, Critical Policy Studies 3(2): 221–39.
Farmbry, K. (ed) (2014) The War on Poverty: A Retrospective, Lexington, MA: Lexington Books.
Fischer, F. (1990) Technocracy and the Politics of Expertise, Newbury Park, CA: Sage Publications.
Fischer, F. (1993) ‘Policy Discourse and the Politics of Washington Think Tanks’, in F. Fischer and J. Forester (eds), The Argumentative Turn in Policy Analysis and Planning, Durham, NC: Duke University Press, 21–42.
Fischer, F. (1995) Evaluating Public Policy, Belmont, CA: Wadsworth.
Fischer, F. (2000) Citizens, Experts, and the Environment: The Politics of Local Knowledge, Durham, NC: Duke University Press.
Fischer, F. (2003) Reframing Public Policy: Discursive Politics and Deliberative Practices, Oxford: Oxford University Press.
Fischer, F. (2009) Democracy and Expertise: Reorienting Policy Inquiry, Oxford: Oxford University Press.
Fischer, F. (2012) ‘Debating the Head Start Program: The Westinghouse Reading Scores in Normative Perspective’, in M. Hill and P. Hupe (eds), Public Policy, London: Sage Publications.
Fischer, F. (2015) ‘In Pursuit of Usable Knowledge: Critical Policy Analysis and the Argumentative Turn’, in F. Fischer, D. Torgerson, A. Durnova and M. Orsini (eds), Handbook of Critical Policy Studies, Cheltenham: Edward Elgar Publishing.
Fischer, F. and Gottweis, H. (2012) The Argumentative Turn Revisited: Public Policy as Communicative Practice, Durham, NC: Duke University Press.
Forester, J. (1999) The Deliberative Practitioner, Cambridge, MA: MIT Press.
Gabrielyan, V. (2006) ‘Discourse in Comparative Policy Analysis: Privatization in Britain, Russia and the United States’, Policy and Society 25(2): 47–76.
Gottweis, H. (2006) ‘Argumentative Policy Analysis’, in J. Pierre and B. G. Peters (eds), Handbook of Public Policy, Thousand Oaks, CA: Sage, 461–80.
Graham, O. L. (1976) Toward a Planned Society: Roosevelt to Nixon, New York: Oxford University Press.
Habermas, J. (1987) The Theory of Communicative Action, Cambridge: Polity.
Hajer, M. (1995) The Politics of Environmental Discourse, Oxford: Oxford University Press.
Hajer, M. and Wagenaar, H. (2003) Deliberative Policy Analysis: Understanding Governance in the Network Society, Cambridge: Cambridge University Press.
Halberstam, D. (1972) The Best and the Brightest, New York: Random House.
Hawkesworth, M. E. (1988) Theoretical Issues in Policy Analysis, Albany, NY: SUNY Press.
Jasanoff, S. (2006) ‘Ordering Knowledge, Ordering Society’, in S. Jasanoff (ed), States of Knowledge: The Co-production of Science and Social Order, London: Routledge, 13–45.
Karl, B. (1975) ‘Presidential Planning and Social Science Research’, in Perspectives in American History, 3, Cambridge, MA: Charles Warren Center for Studies in American History.
Lasswell, H. (1951) ‘The Policy Orientation’, in H. Lasswell and D. Lerner (eds), The Policy Sciences, Stanford, CA: Stanford University Press.
Law, J. (2004) After Method: Mess in Social Science Research, New York: Routledge.
Lejano, R. (2006) Frameworks for Policy Analysis: Merging Text and Context, New York: Routledge.
Levy, D. (1985) Herbert Croly of the New Republic: The Life and Thought of an American Progressive, Princeton, NJ: Princeton University Press.
Lindblom, C. E. and Cohen, D. (1979) Usable Knowledge: Social Science and Social Problem Solving, New Haven, CT: Yale University Press.
Majone, G. (1989) Evidence, Argument, and Persuasion in the Policy Process, New Haven, CT: Yale University Press.
Mathur, M. (2008) Citizen Deliberation in Urban Revitalization: Discursive Policy Analysis of Developmental Planning Processes in Newark, Saarbrucken: VDM Verlag.
McNamara, R. (1996) In Retrospect: The Tragedy and Lessons of Vietnam, New York: Vintage Books.
Nagel, S. (1979) ‘Policy Analysis Explosion’, Transaction 16(6): 9–10.
Needham, C. (2009) ‘Interpreting Personalization in England’s National Health Service: A Textual Analysis’, Critical Policy Studies 3(2): 204–20.
Nelson, R. H. (1977) The Moon and the Ghetto, New York: W.W. Norton.
Ney, S. (2009) Resolving Messy Problems: Handling Conflict in Environment, Transport, Health and Aging Policy, London: Earthscan.
Orsini, M. and Smith, M. (2010) ‘Social Movements, Knowledge and Public Policy: The Case of Autism Activism in Canada and the U.S.’, Critical Policy Studies 4(1): 38–57.
Peters, B. G. (2004) Review of Reframing Public Policy: Discursive Politics and Deliberative Practices, Political Science Quarterly 119(3): 566–67.
Plehwe, D. (2011) ‘Transnational Discourse Coalitions and Monetary Policy: Argentina and the Limited Powers of the Washington Consensus’, Critical Policy Studies 5(2): 127–48.
Rittel, H. W. and Webber, M. M. (1973) ‘Dilemmas in a General Theory of Planning’, Policy Sciences 4: 155–69.
Rorty, R. (1979) Philosophy and the Mirror of Nature, Princeton, NJ: Princeton University Press.
Sandercock, L. and Attili, G. (2012) ‘Multimedia and Urban Narratives in the Planning Process: Film as Policy Inquiry and Dialogue Catalysts’, in F. Fischer and H. Gottweis (eds), The Argumentative Turn Revisited: Public Policy as Communicative Practice, Durham, NC: Duke University Press.
Schlesinger, J. (1969) Testimony to U.S. Congress, Senate Subcommittee on National Security and International Operations, Planning-Programming-Budgeting Hearings, 91st Cong., 1st session.
Schneider, A. and Ingram, H. (1993) ‘Social Construction of Target Populations: Implications for Politics and Policy’, The American Political Science Review 87(2): 334–47.
Schram, S. and Neisser, P. T. (eds) (1997) Tales of the State: Narrative in Contemporary U.S. Politics and Public Policy, New York: Rowman and Littlefield.
Schultze, C. L. (1968) The Politics and Economics of Public Spending, Washington, D.C.: Brookings.
Toulmin, S. (1958) The Uses of Argument, Cambridge: Cambridge University Press.
Veblen, T. (1933) The Engineers and the Price System, New York: Viking.
Wagenaar, H. (2011) Meaning in Action: Interpretation and Dialogue in Policy Analysis, New York: M.E. Sharpe.
Weber, M. (1978) Max Weber: Selections in Translation, W. G. Runciman (ed), Cambridge: Cambridge University Press.
Weiss, C. (1977) ‘Research for Policy’s Sake: The Enlightenment Function of Social Research’, Policy Analysis 3(4): 531–45.
Wiebe, R. H. (1967) The Search for Order, 1877–1920, New York: Hill and Wang.
Wildavsky, A. (1997) But Is It True? A Citizen’s Guide to Environmental Health and Safety, Berkeley, CA: University of California Press.
Willard, C. A. (1996) Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy, Chicago, IL: University of Chicago Press.
Wilson, W. (1887) ‘The Study of Administration’, Political Science Quarterly 2: 197–222.
Wood, R. C. (1993) Whatever Possessed the President? Academic Experts and Presidential Policy, 1960–88, Cambridge, MA: MIT Press.
Yanow, D. (2000) Constructing Race and Ethnicity in America: Category-Making in Public Policy and Administration, New York: M.E. Sharpe.
FOUR
Reflections on 50 years of policy advice in the United States Laurence E. Lynn, Jr.
For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? … Or what king, going to make war against another king, sitteth not down first, and consulteth whether he be able with ten thousand to meet him that cometh against him with twenty thousand? (Luke 14: 28–31)
Birth of a profession

On August 25, 1965, President Lyndon B. Johnson sent a directive to the heads of all federal departments and agencies. They were to use a planning, programming, and budgeting (PPB) system modeled on the system Secretary of Defense Robert S. McNamara had installed at the Department of Defense in 1961 as a basis for their agency’s annual budget recommendations. PPB’s beating heart was policy analysis. Using PPB is simply good management, LBJ suggested (Lynn 2015).

More important, LBJ’s mandate was good politics. Launching a well-publicized effort to demonstrate financial discipline and restraint in pursuing the administration’s policy priorities would increase the prospects that Congress would support the president’s pending budget requests. At the time, Lyndon Johnson was caught in a political vise. He had committed his administration to a War on Poverty and a Great Society and, as well, to military success in Vietnam. Congressional leaders of his party were, however, refusing to consider tax increases to pay for these priorities. Brookings economist Charles Schultze, then head of the U.S. Bureau of the Budget, had already directed agency heads to create task forces to identify possible budgetary savings to make room for higher-priority activities.

Two months later, Johnson’s PPB mandate abruptly superseded Schultze’s earlier directive. This pivot was a product of bureaucratic politics. Schultze and Joseph Califano, formerly an aide to McNamara and now on the White House staff, proposed the PPB mandate to head off a move by other Johnson aides that the two men thought unwise: committing LBJ to the promulgation of a comprehensive set of long-term goals for America. Schultze and Califano had discussed introducing PPB to domestic policy planning on a pilot basis, but they pushed instead for going all-in. In a fraught political environment, LBJ embraced their bold proposal.
Three decades later, Peter deLeon claimed that public policy analysis in its various guises, along with the emergent ‘policy sciences,’ ‘have become prevalent, … virtually ingrained in the woof and warp of government’ (deLeon 1997, p 7). Five decades on, David Weimer says in his overview essay in this volume that policy analysis is now a recognized profession in the United States, an institutionalized field of scholarship, teaching, and practice.

The birth and coming of age of this profession have been brilliantly assayed in the magisterial writings of Beryl Radin and Weimer in this volume and in many other publications, and the field’s literature adds insights and detail to the story. I will not attempt to improve on these accounts. Instead, because the five-decade period from LBJ’s mandate to now coincides with my own career in the profession—I have participated in it as a scholar, teacher, and practitioner1—I will offer here personal reflections on the field’s emergence and its raison d’être in American governance, illustrated here and there with a few personal anecdotes.

Unlike many fields of study and practice, ours does not systematically incorporate the history of its ideas and its seminal literature into the training of its professionals, probably because there is no consensus on such things. In what follows, I will suggest what I think the seminal events, ideas, and literatures are and why their importance merits greater prominence in our training.
Policy based on calculation? It’s what Jesus would do.

The notion that rulers should sitteth down, counteth, and consulteth before making consequential policy and operational decisions has biblical origins (and surely even more ancient ones—Bronze Age Babylon is inconceivable without calculation). Doing so is also simply common sense, the exercise of which has, as Weimer notes, a long history in the United States. In my reckoning, the field’s American history includes a number of political, intellectual, and institutional milestones in its evolution toward where we are now.

• Thomas Jefferson commissioned Meriwether Lewis and William Clark and a team of scientifically trained experts, through the agency of the American Philosophical Society, to map the lands included in the Louisiana Purchase and gather information to inform American policies for settling the West.
• President William Howard Taft created a Commission on Economy and Efficiency to study ways to improve the efficiency of proliferating government organizations, including the notion of a national budget.
• Beginning in 1912, municipal research bureaus began informing local governments about the needs of a growing, urbanizing, industrializing America, including the concept of a municipal budget and a wide range of new public services.
• Theodore Roosevelt inaugurated social and economic regulation and expanded the delegation of administrative authority to administer public services.
• The Budget and Accounting Act 1921 interposed the president and a new Bureau of the Budget between federal spending agencies and the Congress, laying the foundation for a powerful executive branch role in policy making (Schick 1970). It also authorized creation of the General Accounting Office (GAO) to audit federal financial accounts.
• In 1927, three private research and training organizations were amalgamated into the Brookings Institution (whose scholars were, ironically, to oppose the New Deal as bad fiscal policy).
• President Herbert Hoover’s Research Committee on Social Trends surveyed America’s development using the scientific mood and method ‘to study simultaneously all of the fundamental social facts which underlie all our social problems’ in order to correct the ‘undiscriminating emotional approach and … insecure factual basis in seeking for constructive remedies of great social problems’ (Hoover 1933).
• The New Deal inaugurated an American welfare state with sharply expanded roles for the federal government in economic and social affairs. From 1933 to 1943, when it was abolished by Congress, the National Resources Planning Board (NRPB), eventually located in the newly created Executive Office of the President, conducted an ambitious policy planning program.
• Through the 1940s and 1950s the Department of the Interior developed a tradition of policy planning and analysis initiated during the long tenure of Harold Ickes as Secretary. (I became head of the successor to the original Program and Policy Planning Office in 1973.)
• In 1945 Project RAND was instituted at the Douglas Aircraft Company under contract to the War Department’s Office of Scientific Research and Development to analyze the potential relationship between military planning and weapon systems development. Project RAND came under the aegis of the U.S. Air Force in 1948 as the RAND Corporation, which, over the next several years, developed the Planning-Programming-Budgeting System (PPBS) that McNamara would later install at the Pentagon.
• In 1946 the Council of Economic Advisers was added to the Executive Office of the President to provide expert economic analysis and advice to the president regarding a wide range of economic policy issues in the aftermath of the Great Depression and World War II.
• An Office of Policy Planning that was to become a fixture in the Office of the Secretary of State was created by George F. Kennan in 1947.
• In 1951 Daniel Lerner and Harold Lasswell published a seminal edited volume entitled The Policy Sciences.
• During the 1950s and 1960s, according to then-Comptroller General Elmer Staats, the Bureau of the Budget undertook numerous activities that prefigured PPB: review of the cost-benefit analysis in water resource programs required by Congress; long-range budget projections featuring high, low, and most likely estimates for use in policy making; a systematic budget preview process conducted by the Bureau of the Budget; a mission-related functional framework
for budget preview and special analyses of selected programs; lawfully sanctioned performance- and cost-based budgeting; and selective development of formal agency program planning (Staats 1968).
• In 1961 Secretary of Defense Robert S. McNamara instituted the RAND Corporation-conceived PPBS, which would become the model for LBJ’s 1965 mandate (discussed above).
• The Congressional Budget and Impoundment Control Act 1974 authorized creation of the Congressional Budget Office (CBO), which, along with the GAO and the Congressional Research Service (CRS, formerly the 1914-vintage Legislative Reference Service), enabled Congress to be a strong counterweight to the U.S. Office of Management and Budget (OMB, formerly the Bureau of the Budget) in providing information and analysis for policymakers.
• Congress initiated and enacted the Government Performance and Results Act (GPRA) in 1993, accelerating the movement of government toward performance management at all levels.

In this selective recounting, several important facts (which will be amplified below) emerge.

1) Contemporary policy analysis institutions have evolved over the course of American history. LBJ’s 1965 mandate was classic incrementalism: evolution, not revolution.
2) Far from being antithetical to Madisonian politics, as critics have claimed, institutionalized policy analysis is a consequence of Madisonian politics—of ambition checking ambition—and one of its legitimizing elements. Demonstrably efficient government is good politics at every level, and being efficient requires analysis.
3) Strengthening the capacity of governments to act reasonably and effectively by affording their executives (and legislators) direct access to good information and analysis enables the separation of powers and republican governance.
What were they thinking?

One Saturday morning in 1968, I and some members of my staff were finishing some tables of calculations that McNamara had asked my boss to have prepared for him. “I told him,” my boss told me, “to call you directly if he wants to see them.” The call came, and off I went with the table he wanted to see. McNamara was alone in his office, standing behind the enormous desk once used by General John J. Pershing. When I approached, he spread his arms wide and grinned, calling my attention to the chaos of documents in front of him. He sat down and I stood beside him as he looked at the table I’d brought. He scanned the dozens of numbers for several moments then stabbed one of them with his finger. “That doesn’t look right,” he said. I looked at the number he was pointing to. I had no idea what might be wrong. “That’s OK,” he said, “just
check it.” He thanked me and I left to instruct my staff to check the numbers. McNamara was right. It was a typo, and we corrected it. I was a 31-year-old policy analyst working in the Systems Analysis Office, and I had, in Alice Rivlin’s memorable phrase, ‘a place at the table.’
A great debate

That McNamara’s reliance on PPBS instigated the emergence of policy analysis as a profession is generally acknowledged. The whys and hows of that emergence are not so well known, but they are important to understanding why, 50 years later, that profession is integral to American public governance.

There was a clear and compelling model for the kind of advice-giving illustrated by the anecdote above: providing policymakers with independent, real-time access to analytic resources. “At this time,” Winston Churchill said at the outset of World War II, “each department presented its tale on its own figures and data. [Some departments], though meaning the same thing, talked different dialects.”2 So, he went on:

One of the first steps I took … was to form a statistical department of my own … I now installed my friend and confidant of many years [P.M.S. Blackett, who was to win the Nobel Prize for physics in 1948] ... with half a dozen statisticians and economists whom we could trust to pay no attention to anything but realities. This group of capable men, with access to all official information, was able … to present me continuously with tables and diagrams. They examined and analyzed with relentless pertinacity … and also pursued all the inquiries which I wanted to make myself (Enthoven and Smith 1971).

The analysis of military operations by Blackett and his colleagues (collectively known as Blackett’s Circus) gave impetus to the emerging field of operations research and, through the RAND Corporation, to the founding ideas of policy analysis. McNamara took the same Churchillian step, and in the same spirit, in 1961.

It is noteworthy that policy analysis in McNamara’s Pentagon was the responsibility of the Office of Systems Analysis. The word ‘systems’ was chosen deliberately to convey the idea that military forces should be analyzed and planned as systems of interacting components designed to produce a specific, and measurable, type of capability. Seeing complex policies and programs as ‘systems’ is also at the heart of domestic policy analysis.

The first question I took up as a policy analyst was this: What is the most efficient way to create the capability to move military personnel and equipment to theaters of combat rapidly enough to forestall long-drawn-out hostilities? Approximate, and reasonable, answers could be obtained using linear programming models that generated least-cost mixes of aircraft, ships, and pre-positioned equipment
to meet military planners’ rapid deployment schedules. Use of these models had the added advantage of generating ‘shadow prices’ (values, analogous to marginal costs, assigned to the resource constraints designed into them). One of these resource constraints was the overseas airfields and ports needed to facilitate deployment.

At the time, the Department of State was negotiating overseas base rights with some of our foreign allies. State’s senior officials sought McNamara’s endorsement of the large appropriations needed to ‘buy’ these rights from our allies. Their view seemed to be: We must pay whatever is necessary. To McNamara, that sounded like: Base rights are priceless. Really? No, not really. We could calculate shadow prices on base rights to support our war plans using alternative ways of procuring and using ships, aircraft, pre-positioned equipment, and foreign bases. No single overseas facility was priceless because there were alternatives.

One afternoon my boss, Alain Enthoven, and I entered a small inner sanctum at the State Department where we were to brief two-time presidential candidate W. Averell Harriman, then head of State’s iconic Office of Policy Planning, and key members of his staff. We explained why, if, say, Okinawa were unavailable, we could still meet military objectives by buying more, or more capable, aircraft and ships or by using other bases more intensively. The costs of doing that, we suggested, were in effect the value of Okinawa. Our message to a giant of American diplomacy and Democratic politics: think about this issue in a different way; there are alternatives. We left guessing that one of our auditors was already calling a military colleague at the Pentagon and saying, “Now I know what you’re up against!”
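The logic of those deployment models is easy to sketch with modern tools. The toy linear program below is a minimal illustration, not the Systems Analysis Office's actual model, and every figure in it is hypothetical; it shows how a least-cost airlift/sealift mix falls out of the optimization and how the solver's dual values supply the shadow prices just described (here via SciPy's linprog).

```python
# A minimal, illustrative sketch -- not the Pentagon's actual model.
# All figures are hypothetical. It shows how a linear program yields a
# least-cost airlift/sealift mix and how the dual values on its
# constraints are the 'shadow prices' described in the text.
from scipy.optimize import linprog

# Decision variables: x = [airlift sorties, sealift sailings]
cost = [30.0, 300.0]          # hypothetical cost per sortie / per sailing

# Requirement: deliver at least 1,000 tons within the deployment window.
#   5*x_air + 40*x_sea >= 1000   ->   -5*x_air - 40*x_sea <= -1000
# Resource constraint: overseas airfield throughput caps sorties.
#   x_air <= 60
A_ub = [[-5.0, -40.0],
        [ 1.0,   0.0]]
b_ub = [-1000.0, 60.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

print("least-cost mix (sorties, sailings):", res.x)   # [60.0, 17.5]
print("total cost:", res.fun)                         # 7050.0
# The marginals are the shadow prices: the change in the optimal cost
# per unit relaxation of each constraint. The one on the airfield cap
# is the implicit value of one more sortie's worth of base access.
print("shadow prices:", res.ineqlin.marginals)        # approx [-7.5, -7.5]
```

With these numbers the airfield-capacity constraint binds, and its marginal says how much the optimal cost would fall for each additional sortie the base could handle: the quantified sense in which no single base is priceless.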
A bit of bravado

Based on her experiences as a policy analyst and planner in the U.S. Department of Health, Education, and Welfare in the 1960s, Alice Rivlin noted unapologetically that ‘a bit of bravado is necessary to overcome the inertia of government, to get attention, and to win a place at the decision table’ (1971, p 4, italics added).3 Policy analysts had learned, however, that ‘educators, doctors, civil servants, … even generals’ are ‘knowledgeable, necessary, and not always wrong.’ As to their contributions, Rivlin believed that ‘analysts have probably done more to reveal how difficult the problems and choices are than to make the decisions easier.’

Aaron Wildavsky, having founded the Berkeley School of Public Policy, described ‘the art of policy analysis’ as follows:

Policy analysis must create problems that decision-makers are able to handle with the variables under their control and in the time available ... by specifying a desired relationship between manipulable means and obtainable objectives. … Unlike social science, policy analysis must be prescriptive; arguments about correct policy, which deal with the
future, cannot help but be willful and therefore political (Wildavsky 1979, p 16).
Policy analysis? Or muddling?

In his curiously overlooked 1968 book The Politics and Economics of Public Spending, Charles Schultze, the policy analysis movement’s progenitor, articulated its goals (Schultze 1968, pp 19–24):

• ‘careful identification and explanation of goals and objectives of policies and programs’ in order to ‘widen but not homogenize’ agency perspectives on their programs and on alternative ways of achieving them;
• analysis of the outputs of public programs;
• measurement of total social as well as program costs projected into the future;
• an analysis of the costs and effectiveness of alternative ways to achieve public policy objectives; and
• conducting these types of analysis as a systematic part of the budget review process.

To further clarify PPB’s intent, Schultze compared it to Charles Lindblom’s 1959 articulation of ‘The Science of Muddling Through’ (Lindblom 1959).4 Lindblom emphasized, said Schultze, the deep, subtle, complex, and difficult-to-specify conflicts among our social values. ‘We discover our objectives and the intensity that we assign to them only in the process of considering particular programs or policies’ (Schultze 1968, p 38, italics in original). In effect, ends and means are, as a practical and political matter, chosen simultaneously through processes such as successive limited approximations and partisan mutual adjustment by which we deliberate on concrete policy proposals (Lindblom 1979). These choices, moreover, may change with each round of deliberations.

Schultze conceded that Lindblom’s depiction of political choice seemed antithetical to the problem-solving rationality of decision making informed by policy analysis. He insisted, however, that it was not. ‘It is not really important,’ he said, ‘that the analysis be fully accepted by all participants in the bargaining process’ (1968, p 75). Analysis is intended to help—the word ‘help’ is important—sort out arguments about the facts and, far from depoliticizing choice, to steer debate toward issues where there are real differences of value and, therefore, where political judgment is most needed. Schultze recognized, furthermore, that it may be necessary to ‘guard against the naiveté [of policy analysts] who ignore political constraints … and [as well] the naiveté of the decision maker’ who ignores resource constraints and believes that nobility of purpose produces good policy outcomes.

Schultze’s point is simply that the scarcity of resources available to governments requires policymakers to explore how to get the most out of those resources. Policy analysis assists that exploration, particularly in the context of budget making. If ‘muddle’ we must, let’s do it with good analysis.
One of the first muddles McNamara’s policy analysts confronted was why all military personnel rotations between the United States and our overseas bases were made by the Navy’s World War II-vintage troopships. The analysts quickly showed—you could do it on the backs of envelopes—that it would be cheaper to rotate the troops by air; that way, DoD could save the cost of the military personnel needed to fill the sealift pipeline. A no-brainer, no? No! A bureaucratic battle was necessary to get the point across, but it was an early policy analysis success.

Policy analysis wasn’t magical thinking or an affront to politics; it was analytical thinking of potentially significant practical value. It caught on. When I joined the systems analysis staff in 1965, all three military services had their own think tanks to enable them to match wits with McNamara’s ‘whiz kids.’ A decade later, when I joined the faculty of the expanding Kennedy School of Government at Harvard, numerous contract research firms were assisting the government with policy analysis and research and program evaluation.
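The structure of that back-of-the-envelope troopship calculation is worth making explicit. The sketch below uses entirely hypothetical numbers, chosen only to show the form of the argument: slow sea transit keeps paid personnel idle 'in the pipeline,' and that personnel cost can swamp any difference in per-seat transport cost.

```python
# Back-of-the-envelope sketch of the troopship vs. airlift comparison
# described above. Every number is hypothetical; the point is the
# structure of the calculation, not the figures.

rotations_per_year = 500_000      # hypothetical troop moves per year
sea_days, air_days = 14, 1        # one-way transit time
pay_per_person_day = 30           # hypothetical fully loaded pay cost ($)
sea_fare, air_fare = 150, 250     # hypothetical per-seat transport cost ($)

def annual_cost(days_in_transit, fare):
    # Pay for personnel tied up in the transit pipeline, plus transport.
    pipeline_pay = rotations_per_year * days_in_transit * pay_per_person_day
    transport = rotations_per_year * fare
    return pipeline_pay + transport

print(f"sealift: ${annual_cost(sea_days, sea_fare):,}")   # $285,000,000
print(f"airlift: ${annual_cost(air_days, air_fare):,}")   # $140,000,000
```

Even though the hypothetical airfare is higher per seat, the pipeline savings dominate, which is exactly the shape of the analysts' envelope arithmetic.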
It’s ‘metaphysical madness’! Seriously?

Despite the (to me) evident common sense of the experience-based rationale for policy analysis of Schultze and Rivlin, and its evident political popularity, controversies began to erupt in academia. Critics piled on. Policy analysts were not only ignorant of how the political world works; they posed a danger to democracy, to community values, and to the rule of law.5

• Harvard law professor Laurence Tribe (1972) criticized policy ‘science’ for its overemphasis on economic assumptions and instrumental rationality and for neglecting ‘distributive ends, procedural and historical principles, and the values … associated with personal rights, public goods, and communitarian and ecological goals’ (p 105).
• Richard Nelson (1977) criticized what he called ‘traditional’ policy analysis for being ‘relatively blind to exactly the kind of disagreements, and conflicting interests which need to be perceived in order to guide search for solutions that, over the long run, do not harm significant values or groups’ (pp 75–6). He proposed extending the traditional model of policy analysis, de-emphasizing ‘choice and decision’ and paying greater attention to
‘the saliency of organizational structure ... and ... the open-ended evolutionary way in which policies and programs do and should unfold. … [C]entral high-level decision making ... [is] immune to ... social sciences’ standard research methods. … There is little hope to comprehend basic patterns of governance and policy making within the thin slices of time typically considered by contemporary social and policy sciences’ (p 79).
• Observing with cold contempt the ‘metaphysical madness’ of ‘policy scientists’ aiming to supplant politicians and statesmen (and inadvertently conceding the premise of the policy analysis movement, viz., that analysts are influential),
Edward Banfield sniffed that the popularity of social-science-based policy analysis in both the executive and legislative branches could not be justified by ‘the existence of a body of knowledge about how to solve social problems’ (Banfield 1977, p 6). ‘Professionals,’ he observed, ‘because of their commitment to the ideal of rationality, … are chronically given to finding fault with institutions ... and, by virtue of their mastery of techniques of analysis, to discovering the almost infinite complexity and ambiguity of any problem [until problems are viewed as] too complicated for ordinary people to deal with ...’ (p 18).

In retrospect, I can identify three diagnoses of what the fuss was all about.

It was the economists

To non-economists, it is in the DNA of economists to be hegemonic and, in their assertiveness, obnoxious. Their ‘metavalue’ of efficiency, which they seem to regard as tantamount to ‘the welfare of society’ or ‘value maximization’, is regarded in turn as absurdly reductive (Radin 2013, p 150). An economist—I, for example—might retort, “If you’re not for ‘value maximization’, what are you for?” Or “Muddling is not a value.” But that’s not fair. The legitimate issue is whether economic thinking is, because of its inherent limitations—yes, its reductiveness—inimical to good public policy. Perhaps societal values are more likely to be maximized by other processes and ways of thinking.

It helps the economists’ case that efficiency has terrific political salience. If you build an efficient government, they will vote for you. The important point, however, is that, though economic reasoning is central to policy analysis, 50 years of analysis has made it clear that the exploration of alternatives to identify those most likely to be practical and effective requires knowledge and insights from disciplines and cognate fields ranging from anthropology to zoology. Even economists have grasped that.

It was the ‘policy scientists’

In 1951, Lerner and Lasswell’s The Policy Sciences laid out a framework for the emergence of what would become ‘a discernible professional activity’ that some regarded as a ‘discipline’ (Brewer 1974). Two decades later, a new journal, Policy Sciences, appeared, edited by Edward Quade, the RAND mathematician behind PPBS and author of the 1975 text Analysis for Public Decisions (Quade 1975), publicized as providing ‘analytic methods as alternatives to traditional public policy decision-making methods’ (italics added).

Because ‘systems analysis’ was one of the ‘policy sciences’ and because Quade was closely associated with this new field, seeing what Nelson called ‘traditional policy analysis’ as the same thing as ‘the policy sciences’ (as Tribe and Banfield,
quoted above, and perhaps even Lindblom, seemed to do) was not a stretch. Both were oriented toward introducing systematic analysis into policy-making processes. But conflating the two and labeling it all ‘metaphysical madness’ is ridiculous. Contrast Schultze’s and Rivlin’s language for the ambitions of policy analysis—resources are scarce, analyze alternatives, get to the table—with Brewer’s 1974 declaration: ‘There is an obligation to alert society’s participants to an impending problem; the policy scientist has a right to clarify and elucidate the competitiveness and complementarity of interests and objectives of those acting on behalf of society and of the society being acted upon ...’ (Brewer 1974, p 11, italics added). Some might consider that quote a better example of metaphysical madness than the more circumspect claims of the policy analysts.

It was, and is, academia

All things considered, I’ve come to believe that the ‘problem,’ if there is one, with the policy analysis profession is its having become too entwined with academia and its values and incentives. Radin apparently agrees. ‘[T]here are some disturbing signs,’ she writes, ‘that the field has retreated into the ivory tower of assumed technocratic postures that limit its use by decision makers’ (Radin 2013, p 224).

During the decade I spent practicing policy analysis in the federal government from 1965 to 1974—this is how I was socialized into the profession—I encountered many policy-oriented academics. I recall that I found most of them out to lunch. When I became an academic myself, I learned why.

The intellectual environments in government had been focused on public policy issues of real political importance. Policy deliberations could be intense. The higher the stakes, the greater the intensity. In our desire to speak truth to power, the question we policy analysts had to confront was, ‘What is the truth we need to speak?’ That is, what claims are we willing to make to policy makers based on our analysis? What do we want them to think and why? Will our claims and reasons stand up to partisan scrutiny? McNamara famously told his policy analysts, “You do the analysis. I’ll do the politics.” He was an exception.

In academia, discussions of actual public policy issues were mainly staged affairs: seminars, colloquiums, guest lectures. The intensity, such as it was, was about academic politics, and the contending interests were factions within the disciplines, fields of study, and methodologies. The stakes weren’t allocations of public resources or policy outcomes but allocations of academic recognition. The rewards weren’t policy achievements but academic achievements and awards. I didn’t fight them. I joined them. I got pretty good at it. But it wasn’t the same thing. Academia isn’t the public sector. Policy analysis isn’t policy research.

My argument here starts with a story about another field. In the early postwar years, the established (also for about five decades) field of public administration came under withering attack, not for its ‘metaphysical madness’ but for its lack of intellectual rigor, its propensity to embrace simplistic formulas for how to govern, and its inattention to the findings of academic research. Anthony Bertelli
and I reconsidered those criticisms and came to realize that the half-century-old profession had gained considerable intellectual insight into how American government actually works and real wisdom concerning the strengths and weaknesses of our Madisonian republic (Bertelli and Lynn 2006). So why the sudden smackdown? The answer is that the training of public administrators and the creation of knowledge to inform practice, once the responsibility of private research and training institutions—the Institute for Public Administration, the Brookings Institution, the bureaus of municipal research—had been appropriated during the New Deal by elite universities like Columbia, Harvard, and Syracuse, which had finally grasped that Big Government was creating opportunities for universities to provide credentials and legitimacy to its administrators. In no time, a profession that enjoyed healthy respect in the real world of governing was plunged into an identity crisis as Dwight Waldo, Herbert Simon, Robert Dahl, and other scholars laid waste to its training and traditions and unscientific ‘principles’ (later to be rehabilitated as intellectually quite respectable).

Much good has resulted from that takeover. Universities do help legitimize the profession by credentialing its practitioners. The application of social science concepts and methods beginning in the 1930s enriched the intellectual resources available to public administrators as students and practitioners. Professional associations became forums for intellectual exchange between practitioners and professors.

In our profession, something similar happened. Ford Foundation largesse led to the establishment of professional schools of public policy and public affairs at the nation’s elite institutions and brought the whole system of academic incentives and rewards to bear on what constituted knowledge for practice (Lynn 1996). The policy analysis paradigm of Schultze and Rivlin became subject to robust skepticism from the scholars quoted above. The distinction between ‘policy analysis’ as an administrative technology of governance and ‘policy research’ as its offline counterpart became blurred.

Arguably the policy analysis profession has also enjoyed benefits from university association. Public policy faculties are interdisciplinary, their journals and professional associations are healthy, and the professional degrees have earned respect in public personnel systems. At the graduate professional degree level, students strongly identify with public policy and the skills needed to use it as a tool of governance. The public management contingent among public policy professionals has spawned professional organizations that arguably have eclipsed their public administration counterparts in their influence on practice.

While the field’s academic foundations are a source of strength, it is fortunate, and essential, that the field’s intellectual life is leavened by government organizations such as the GAO, CRS, and CBO, private policy research institutions such as Brookings, the Urban Institute, the Center on Budget and Policy Priorities, and the American Enterprise Institute, client-oriented policy research firms such as
MDRC and Mathematica Policy Research, and innumerable counterparts to these organizations at state and local levels of government. Academia nonetheless tends to create distractions from, and disincentives to engage in, intellectual activity grounded in the gritty work of governing in real time. Incorporating practitioners in the profession’s governance and professional forums and publications is an effective antidote to academic decadence.
On reflection: some verities

Beryl Radin’s chapter in this volume, as well as her other work, highlights transformation: of the institutions that demand and supply policy analysis and advice and of the character of practice, all wonderfully illuminated by the personal experiences of participants. I believe, nonetheless, that there are some verities that can link the field’s past, present, and future. Five decades ago, there was an excitement, an animating spirit, a belief that policy analysts had something important to contribute to the quality of American governance and public policy. That founding spirit must be sustained. It can be summarized in the following propositions.

Policy analysis must see itself as committed to enhancing the performance of America’s Madisonian scheme of politics and governance. No politics–administration dichotomy for us, no running government like a business, no ‘neutral competence.’ Policy analysts are conduits of knowledge and useful ideas to those who govern us at all levels of society. This is emphatically not a commitment to Lasswell’s ‘policy science of democracy.’ Madison argues in The Federalist No. 51 that ‘the great security against dangerous concentrations of power consists in giving to each branch of government the necessary constitutional means and personal motives to resist encroachments of the others. … [A]mbition must be made to counteract ambition.’6 By ambition, Madison meant that ‘each department should have a will of its own.’ It follows that efforts by policy analysts to strengthen the capacity of the executive branch of government are not tantamount to promoting tyranny, as the field’s postpositivist critics have alleged. Those same policy analysts are also engaged in strengthening the capacity of legislatures to check the ambitions of executives by evening the analytic playing field (which was distinctly uneven when I was a practitioner). Courts have privileged policy analysis by endorsing it when finding that governments have shown they are acting reasonably and, therefore, deserve judicial deference.

In the same vein, though American politics is seldom regarded as anything close to rational, the fact is that our political processes depend far more on the giving of reasons than is commonly acknowledged. Rulemaking requires it. Budget making requires it. Legislative oversight requires it, and so does electoral politics. The media demand it. So do citizens in their personal transactions with their governments. The transcendent question in political discourse is: Why? The business of policy analysis is to address such questions.
The rule of law requires reasoning. Managerial judgments and actions are subject to judicial review for their compliance with constitutional and statutory law and other lawful acts. U.S. Appellate Judge David Tatel puts the general doctrine of judicial deference this way:

Congress delegates authority to administrative agencies not to authorize any decision at all, but to permit agencies to apply their expertise. The reason-giving requirement allows courts to determine whether agencies have in fact acted on the basis of that expertise. … At a minimum, the requirement of reasoned justification ensures that the best arguments that can be marshaled in favor of a policy are enshrined in the public record, so that anyone seeking to undo the policy will have to grapple with and address those arguments. In this way, the reason-giving requirement makes our government more deliberative. And even when it slows agency action, it makes government more democratic (Tatel 2010, pp 7–8).

Resources to serve public and community interests through the instrumentalities of government are limited. That means that budget making is the backbone of American public governance. It follows that the raison d’être of policy analysis must include, as Schultze emphasized, consideration of how limited resources can be deployed most effectively in the service of public interests. That in turn requires analysis of alternatives. That requires somebody to figure out what we’re trying to accomplish, if not Lindblom’s ‘muddlers’ then the administrators who are charged with getting the job done and answering to lawmakers and judges for what they’ve done.

Ideologues of whatever stripe are the most likely to be offended by policy analysis being a basis for advice to policy makers (unless, of course, they can rig analysis in their favor). Policy analysis is conducted in a scientific spirit in which all claims are submitted to the possibility of disconfirmation based on evidence. Reasons, and the evidence supporting them, are the mother’s milk of policy analysis.

The policy analysis profession in the United States encompasses both client-oriented policy analysis and social-science-based public policy research, as Weimer points out. But the imperatives of front-line policy analysis in a world of Madisonian politics are markedly different from the imperatives of university-based policy research and from the imperatives governing policy research in soft-money-driven organizations. The norm of intellectual integrity must be respected in all of these settings, but the inevitable differences among their methods and their outputs must also be taken into account. So, too, must the pathologies that are inevitable in each of them.

Policy analysis is not a religion. It is an invincibly secular, multi-ethnic human activity. It follows that the institutions of professional policy analysis, such as the Association for Public Policy Analysis & Management (APPAM), must, as mechanisms of professional accountability, be diverse. While credentialing is the
province largely of universities, the profession’s intellectual ethos and content must reflect the robust involvement of executive agencies of government, think tanks, institutes, and research centers partially dependent on soft money and donations, consultancies, legislative entities such as the CBO, the CRS, the GAO, and similar entities in state and local governments. Policy makers need honest brokers, people who will see to it that all responsible possibilities are carefully and honestly considered, that uncertainties are identified, that assumptions are tested, that ‘what if?’ questions are raised and addressed. It can be dangerous work. It can be exhilarating. It can be a waste of time. That it is essential is now established.

Finally, our profession must continue to acknowledge that counting and calculation are an important part of what we do. Perhaps this goes without saying, but I’ve often heard groans when serial number crunchers are at the podium. I understand that; at times, the Journal of Policy Analysis and Management (JPAM) has seemed more like a journal of multiple regression analysis than a forum with the intellectual vitality associated with essays and case analyses. But policy deliberation needs numbers. The era of ‘big data’ and data analytics is here, and that means new opportunities and challenges for policy analysts. We must be able to enlighten policymakers on the magnitude and extent of things. Sometimes that can still be done on the backs of envelopes, but more often it cannot.

A word on our profession’s literature must be added to round out these reflections. When I was a doctoral student in economics, reading the discipline’s classics was mandatory so that we understood the history of its ideas and how and why they evolved the way they did. It is equally important for the public policy doctoral students I’ve taught lately, whether they aspire to academic careers or to service in governments. For the bookshelf of classics for our profession, I nominate these 11 books as seminal to understanding the conceptual, organizational, political, and administrative dimensions of policy analysis:

Friedmann, J. (1987) Planning in the Public Domain: From Knowledge to Action, Princeton, NJ: Princeton University Press.
Hitch, C. J. and McKean, R. N. (1967) The Economics of Defense in the Nuclear Age, Cambridge, MA: Harvard University Press.
Kingdon, J. W. (1995) Agendas, Alternatives, and Public Policies (2nd edn), New York: Harper Collins (republished in 2003 in the series ‘Longman’s Classics in Political Science’, with a new foreword).
Majone, G. (1989) Evidence, Argument, and Persuasion in the Policy Process, New Haven, CT: Yale University Press.
Morone, J. A. (1990) The Democratic Wish: Popular Participation and the Limits of American Government, New York: Basic Books.
Pressman, J. and Wildavsky, A. (1979) Implementation (2nd edn), Berkeley, CA: University of California Press.
Rivlin, A. M. (1971) Systematic Thinking for Social Action (The 1970 H. Rowan Gaither Lectures in Systems Science), Washington, DC: The Brookings Institution.
Schultze, C. L. (1968) The Politics and Economics of Public Spending (The 1968 H. Rowan Gaither Lectures in Systems Science), Washington, DC: The Brookings Institution.
Simon, H. A. (1997) Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations (4th edn), Glencoe, IL: Free Press.
Vickers, S. G. (1983) The Art of Judgment: A Study of Policymaking, New York: Harper & Row.
Wilson, J. Q. (1989) Bureaucracy: What Government Agencies Do and Why They Do It, New York: Basic Books.

In retrospect, much of the intellectual Sturm und Drang that seems to have been ignited by LBJ’s 1965 mandate now reads as almost inadvertent satire of reality, produced by those who either never had to wrap their minds around how to design and administer a public program so that it works or who were distracted by esoteric arguments about ‘policy science.’ As I noted above, when you stir too much academic self-absorption into the bricolage that is policy analysis, a tasty buffet can become over-seasoned with ideological rancor, interdisciplinary rivalries, turf battles, and narrowly academic ambition.

Not to worry: 50 years on, policy analysis has a place at the tables where policies are made and implemented. We have reached the point the medical profession once had to reach: we do a lot more good than harm.

Notes
1 These reflections draw on Lynn (1969, 1980, 1989, 1999, 2001, 2015).
2 Quoted by Enthoven and Smith (1971, pp 87–8). Of this group of experts, it has been noted that ‘several of [them were] more than mildly eccentric and frequently irritating to the naval and military commanders they advised’ (Budiansky 2013). Not a few of McNamara’s ‘whiz kids’ were similarly irritating. I, however, was not.
3 This section closely follows a discussion in Lynn (1999).
4 At: http://urban.hunter.cuny.edu/~schram/lindblom1959.pdf.
5 The following section is adapted from Lynn (1999).
6 Madison quotations are from The Federalist, Modern Library College Editions (softcover), published by Random House, n/d.
References
Banfield, E. C. (1977) ‘Policy Sciences as Metaphysical Madness’, in R. C. Goodwin (ed), Statesmanship and Bureaucracy, Washington, DC: American Enterprise Institute, pp 1–35.
Bertelli, A. M. and Lynn Jr., L. E. (2006) Madison’s Managers: Public Administration and the Constitution, Baltimore, MD: Johns Hopkins University Press.
Brewer, G. D. (1974) ‘The Policy Sciences Emerge: To Nurture and Structure a Discipline’, The RAND Paper Series, P-5206, p iii, https://www.rand.org/content/dam/rand/pubs/papers/2008/P5206.pdf.
Budiansky, S. (2013) Blackett’s War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare, New York: Alfred A. Knopf.
deLeon, P. (1997) Democracy and the Policy Sciences, Albany, NY: State University of New York Press.
Enthoven, A. C. and Smith, K. W. (1971) How Much Is Enough? Shaping the Defense Program, 1961–1969, New York: Harper & Row.
Hoover, H. (1933) ‘Statement on the Report of the President’s Research Committee on Social Trends’, 2 January. Online by Gerhard Peters and John T. Woolley, The American Presidency Project, http://www.presidency.ucsb.edu/ws/?pid=23396.
Lindblom, C. (1959) ‘The Science of Muddling Through’, Public Administration Review 19(2): 79–88.
Lindblom, C. (1979) ‘Still Muddling, Not Yet Through’, Public Administration Review 39(6): 517–26.
Lynn Jr., L. E. (1969) ‘System Analysis—Challenge to Military Management’, in D. I. Cleland and W. R. King (eds), Systems, Organizations, Analysis, Management: A Book of Readings, New York: McGraw-Hill, 216–31.
Lynn Jr., L. E. (1980) ‘Policy Analysis: The User’s Perspective’, in G. Majone and E. S. Quade (eds), Pitfalls of Analysis, John Wiley for the International Institute for Applied Systems Analysis.
Lynn Jr., L. E. (1989) ‘Policy Analysis in the Bureaucracy: How New? How Effective?’, Journal of Policy Analysis and Management 8(3): 373–77.
Lynn Jr., L. E. (1996) Public Management as Art, Science, and Profession, Chatham, NJ: Chatham House.
Lynn Jr., L. E. (1999) ‘A Place at the Table: Policy Analysis, Its Postpositivist Critics, and the Future of Practice’, Journal of Policy Analysis and Management 18(3): 411–25.
Lynn Jr., L. E. (2001) ‘The Making and Analysis of Public Policy: A Perspective on the Role of Social Science’, in D. Featherman and M. Vinovskis (eds), Social Science and Policymaking: A Search for Relevance in the Twentieth Century, Ann Arbor, MI: University of Michigan Press, 187–217.
Lynn Jr., L. E. (2015) ‘Reform of the Federal Government: Lessons for Change Agents’, in R. Wilson, N. Glickman, and L. E. Lynn, Jr. (eds), LBJ’s Neglected Legacy: How Lyndon Johnson Reshaped Domestic Policy and Government, Austin, TX: University of Texas Press.
Nelson, R. R. (1977) The Moon and the Ghetto: An Essay on Public Policy Analysis, New York: W.W. Norton.
Quade, E. (1975) Analysis for Public Decisions, Waltham, MA: American Elsevier.
Radin, B. (2013) Beyond Machiavelli: Policy Analysis Reaches Midlife, Washington, DC: Georgetown University Press.
Rivlin, A. M. (1971) Systematic Thinking for Social Action, Washington, DC: The Brookings Institution.
Schick, A. (1970) ‘The Budget Bureau That Was: Thoughts on the Rise, Decline, and Future of a Presidential Agency’, Law and Contemporary Problems 35: 519–39.
Schultze, C. L. (1968) The Politics and Economics of Public Spending, Washington, DC: The Brookings Institution.
Staats, E. B. (1968) ‘Perspective on Planning-Programming-Budgeting’, The GAO Review (Summer): 12.
Tatel, D. (2010) ‘The Administrative Process and the Rule of Environmental Law’, Harvard Environmental Law Review 34: 1–8.
Tribe, L. H. (1972) ‘Policy Science: Analysis or Ideology?’, Philosophy and Public Affairs 2: 66–110.
Wildavsky, A. (1979) Speaking Truth to Power: The Art and Craft of Policy Analysis, Boston, MA: Little, Brown.
Part Two Policy analysis by governments
FIVE
The practice and promise of policy analysis and program evaluation to improve decision making within the U.S. federal government Rebecca A. Maynard
This chapter draws on a 40-year history of patchwork efforts to use data and data analysis to inform the development of public policy and to shape its implementation. The chapter begins with a description of the evolution of the policy process in the United States, drawing largely on experiences within the U.S. Departments of Health and Human Services (DHHS), Education (DOED), and Labor (DOL). All three agencies have been major supporters of and contributors to advances in the methods of policy analysis and the generation and use of program evaluation to guide decision making. The chapter focuses on the roles of these agencies in laying the groundwork for the current emphasis on evidence-based policy making, in part because of their leadership roles and in part because of the author’s first-hand experience working with them. The chapter ends with a discussion of the movement over the last decade to create and use credible evidence on the impacts and cost-effectiveness of programs, policies, and practices as the foundation for more efficient and effective government, and with reflections on the potential implications of the recent change in administration for the future of that movement.
Evolution of policy development in federal agencies

Serious attention to program evaluation and public policy analysis in the United States dates to the late 1960s, as the federal government, under the leadership of President Lyndon B. Johnson, committed to the ‘War on Poverty.’ Interest in and commitment to using the power of government to combat poverty and its root causes created an imperative for more and better evidence to understand the social, economic, and education needs of society. President Johnson’s commitment in 1964 was ‘... not only to relieve the symptoms of poverty, but to cure it and, above all, to prevent it’ (Johnson 1964).

The War on Poverty included policies aimed at three areas of concern: (1) economic security and health care for the elderly, disabled, and poor; (2) economic opportunity through employment for all; and (3) equity in educational opportunity (Matthews 2014). These commitments were underscored by four major legislative actions that, together, resulted in unprecedented levels of public investment in the health and welfare of American society flowing from legislative
actions (Johnson 1964). Key among these actions were amendments to the Social Security Act 1965, which created Medicare and Medicaid and expanded social security benefits coverage; the Food Stamp Act 1964, which institutionalized what had been a pilot program to improve nutrition among poor families; the Economic Opportunity Act 1964, which established Head Start and programs to improve workforce preparation of young people (for example, Job Corps, VISTA, and the federal work-study program); and the Elementary and Secondary Education Act (Title I), which provides financial support to school districts serving populations with high concentrations of poor children (Klein 2015; U.S. Department of Health, Education, and Welfare 1969).
Rising demand for policy analysis and program evaluation

These expansions in federal support for and management of public policies created a demand for monitoring and evaluation work and, thus, for a cadre of professionals to carry it out—namely, program evaluation and policy analysis professionals. This new class of professionals needed skills in using data for routine monitoring of this growing number of large, nationally dispersed programs and policies. They also needed analytic skills and methods to support simulations of the likely response to modifications in policies or their contexts of implementation (Bourguignon and Spadaro 2006; Citro and Hanushek 1991) and to support rigorous evaluations of the net impacts and cost-effectiveness of the programs and policies.

Policy analysis and evaluation tools are applied for a variety of purposes, including to generate evidence on the benefits (or lack thereof) of public policies; to develop effective program monitoring systems; to generate credible estimates of fiscal impacts of policies and policy changes and their distributions across stakeholder groups; and to estimate take-up rates and costs of program and policy changes. The policy analyst’s job is to generate ‘professionally provided advice relevant to public decisions and informed by social values’ (Weimer and Vining 2017). In contrast, the job of program and policy evaluators is to generate empirical evidence to inform the policy process; their work thus prioritizes information on the effectiveness (or lack thereof) of programs, policies, and practices (Scriven 1993; Madaus et al. 1983).

Over the 40 years following the ‘birth’ of the field, a small band of ‘warriors’ from the philanthropic, policy, and research communities fought and won many battles to increase and improve the availability and use of reliable evidence about what does and does not work to address the needs of poor and vulnerable populations (Gueron and Rolston 2013). As a consequence of the breadth of governmental policy intervention, there are many areas of overlap across agencies, creating zones of shared responsibility for the generation and use of monitoring and evaluation. For example, in the areas of health, welfare, and education, much of the responsibility rests with the DHHS, DOED, and DOL. Moreover, each of these agencies has primary responsibility for initiatives that also align
with the missions of the other agencies—such as Head Start (DHHS), Dropout Prevention Initiatives (DOED), and Job Corps (DOL). While the missions and leadership of the federal agencies are largely controlled by the Executive Branch of the government, the agencies are responsible to Congress for carrying out mandated policies and programs, they depend on Congress for financial resources to support mandated and discretionary policies and programs, and they are called on routinely to advise both Congress and the Executive Branch in deliberations about national priorities and responses to them.
Evolving structures for policy analysis and program evaluation

As the programs growing out of the War on Poverty took shape, the foundation for the current level and structure of government oversight was laid. Initially, many of the analysis and evaluation needs were localized within a single government agency and tightly focused on monitoring particular programs and policies. For example, Congressional champions for programs like the Elementary and Secondary Education Act, which provided supplemental educational resources to districts, tied that support to specific accountability and evaluation provisions (Hogan 2007; Weiss 1998).

Similarly, heavy reliance on social scientists in research universities can be traced to the War on Poverty. For example, the template for Head Start emerged from the work of a panel of experts that included prominent pediatrician Robert Cooke and prominent child psychologist Ed Zigler (Administration for Children and Families 2015), while the template for the negative income tax experiments (see further below) is credited to economists from the Massachusetts Institute of Technology and the University of Wisconsin (Ross 1970; Levitt and List 2009). Evaluations of these programs in their early years were funded through the public agencies responsible for their management. As academic researchers were drawn into this applied research, however, they began tapping into private sources of funding to supplement the federally funded work. For example, throughout its history, the Head Start program has been evaluated periodically through both federally supported studies and through field-initiated research carried out by academics or fledgling research organizations (McKey 1985; Currie and Thomas 1993, 1998).

By the late 1960s, we had also witnessed the birth of large-scale demonstrations to test the implementation feasibility, assess the impacts, and estimate the costs and benefits of bold new public policies not yet rooted in legislation. The sizes of these evaluation enterprises were such that they spawned the birth of large social science research centers, both within the academy and within free-standing research organizations. Two prominent examples are the National Health Insurance Experiments (Newhouse and RAND Corporation 1993) and the negative income tax experiments (Kershaw and Fair 1976; Watts and Rees 1977a, 1977b).

These large public investments in program demonstration projects to test bold new ideas were accompanied by serious efforts to vet their promise. Insofar
as the foundational arguments for some of these ideas were economic, the early empirical methods used in their vetting drew heavily on economic theory and early versions of predictive analytics, commonly referred to as microsimulation (Ballas and Clarke 2001; Citro and Hanushek 1991). Notably, these early modeling efforts took advantage of emerging access to high-capacity computers and large-scale, nationally representative data sets from the Census Bureau and the Bureau of Labor Statistics, as well as administrative records of federal programs like Aid to Families with Dependent Children (AFDC), Medicaid, Unemployment Insurance, and Food Stamps. The majority of the evaluations were conducted through emergent research organizations housed within major universities (for example, the University of Wisconsin, in the case of the negative income tax experiments) and in the emerging research firms and institutes (for example, RAND, in the case of the National Health Insurance Experiment).
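To make the idea of microsimulation concrete, the sketch below applies a stylized negative-income-tax-style benefit formula to a synthetic, weighted microdata file and compares the cost and take-up of two parameter settings. It is a minimal illustration only; the records, benefit formula, and parameter values are invented for this example and do not reproduce any actual federal model.

```python
import random

random.seed(0)

# Stand-in for a nationally representative microdata file: each record
# carries annual earnings and a survey weight (people represented).
population = [
    {"earnings": random.lognormvariate(10.0, 0.8), "weight": 1500}
    for _ in range(10_000)
]

def benefit(earnings, guarantee, taper):
    """Negative-income-tax-style transfer: a guarantee reduced at a
    taper rate as earnings rise, floored at zero."""
    return max(0.0, guarantee - taper * earnings)

def simulate(guarantee, taper):
    """Weighted aggregate cost and number of recipients for one policy."""
    cost = sum(benefit(p["earnings"], guarantee, taper) * p["weight"]
               for p in population)
    recipients = sum(p["weight"] for p in population
                     if benefit(p["earnings"], guarantee, taper) > 0)
    return cost, recipients

# Compare current rules against a hypothetical, more generous reform.
for label, (guarantee, taper) in [("baseline", (8_000, 0.5)),
                                  ("reform", (10_000, 0.4))]:
    cost, recipients = simulate(guarantee, taper)
    print(f"{label}: cost ${cost / 1e9:.1f}bn for "
          f"{recipients / 1e6:.1f}m recipients")
```

Modern microsimulation models work on the same principle, but with actual Census or administrative microdata, far richer behavioral responses, and calibrated parameters.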
Resources to support policy analysis and program evaluation

For more than 100 years, the federal government has had agencies responsible for monitoring federal policies and programs and for providing policy analysis support—namely, the forerunner of the current Government Accountability Office (GAO) and the Congressional Research Service. 'The U.S. Government Accountability Office (GAO) is an independent, nonpartisan agency that works for Congress' (Government Accountability Office n.d.). Founded in 1921 under the Budget and Accounting Act, the agency became independent of the Executive Branch in 1941. Its primary function has been and continues to be monitoring how the federal government invests taxpayer dollars, including examining the effectiveness and cost-effectiveness of federal policies and programs.

'The Congressional Research Service (CRS)[, which dates to 1914 and is housed in the Library of Congress,] serves as shared staff to Congressional committees and Members of Congress' (Congressional Research Service n.d.). CRS staff engage in all stages of the legislative process, from early consideration of issues before the legislature to oversight of enacted laws and a range of agency activities. In essence, CRS serves as a reference service that provides Congress with information, including monitoring and evaluation results, to guide its policy development and evaluation processes.

The expanded role of the federal government in direct services and financial support for programs that occurred in the latter half of the 20th century created a demand for rigorous monitoring and evaluation of government policies and practices across government agencies, as well as branches within them. This changed the scale and scope of the roles of GAO and CRS and led to the creation of two new agencies—the Congressional Budget Office (CBO) and the Office of Management and Budget (OMB). The CBO, which was created in 1974 as a nonpartisan agency, employs a staff of 'economists and budget analysts charged with producing dozens of reports and hundreds of cost estimates for proposed legislation' (Congressional Budget Office n.d.). The OMB, which was
established in 1970, 'serves the President of the United States in implementing his vision across the Executive Branch' (Office of Management and Budget n.d.). In addition to having responsibility for developing the President's budget proposal to Congress, OMB also has responsibility for measuring the quality of agency programs, policies, and procedures, including inter-agency coordination of initiatives, where warranted.
Increased demand for more and better evaluations of federal programs

The mandates of these federal agencies, together with increases in the number and scale of social welfare and anti-poverty programs, contributed to increased demand for more and better evaluation of federal programs, as well as for improvements in the infrastructure to support them. Much of this demand arose from within the federal government and, thus, was supported by the government. Furthermore, during this period, there were, for the first time, robust planned investments in program and policy development and testing by the federal government that engaged the nonprofit and academic sectors. For example, although the government convened the expert panel that conceived Head Start as implemented under the War on Poverty and subsequently supported extensive monitoring and evaluation of the program, many other organizations have also supported evaluations of Head Start over the years (Zigler and Valentine 1979; Zigler and Muenchow 1992; McKey 1985).

In the 20 years following the launch of the War on Poverty, the growing array of social service programs across government was subjected to many evaluations—some funded by the federal government, but increasingly often supported in other ways. Importantly, these evaluations varied widely in their focus, scale, and quality. For example, most of the more than 600 publications reporting on evaluations of Head Start within its first 20 years were based on small-scale studies supported by state and local governments, philanthropies, or other nonprofit entities, or conducted by academic researchers with no or very limited external support (McKey 1985). As a result, many were not especially well focused and few generated evidence on the effectiveness of the program. Similarly, the expansion of government investments in youth employment programs following the launch of the War on Poverty (for example, Job Corps and various programs funded under the Youth Employment Demonstration Projects Act) was accompanied by funding for evaluations of the programs (Betsey et al. 1985). However, as with Head Start, the vast majority of the published findings from the evaluations were judged by a National Academy panel to lack credibility (Boruch 1985). Moreover, the quality of the evidence was not widely different among studies funded under different auspices. The fraction of studies judged to have credible evidence (about 30%) was only slightly above that in a prior review of studies funded by the Office of Economic Opportunity (Rossi 1969)—a finding that contributed to the National Academy Committee's call
for major improvements in standards for and approaches to program evaluation (Boruch 1985).
Evolution of the policy analysis and evaluation professions

The War on Poverty and the ensuing more active role for the federal government in social, economic, and education policy sparked the birth of professional organizations and journals focused on policy analysis and program evaluation. For example, the Eastern Evaluation Research Society (EERS), the Association for Public Policy Analysis and Management (APPAM), and the Network of Schools of Public Policy, Affairs, and Administration (NASPAA) all emerged in the 1970s to serve program evaluation and policy analysis professionals. At about the same time, universities began offering courses in evaluation methodology (Stufflebeam 2001). These professional organizations and the post-secondary education institutions they serve have played critical roles in shaping the training and support of policy analysts and program evaluation experts. These organizations also are professional homes for many of the policy analysts and evaluators working across sectors, including graduate schools of public policy and administration, policy offices at all levels of government, major philanthropic organizations, and the large number of for-profit and nonprofit research organizations.
The work of policy analysts and program evaluators

Until recently, policy analysts relied primarily on published research and readily accessible program administrative data to judge existing policies and practices, craft new options for consideration, and estimate the likely benefits and costs of program and policy alternatives. In contrast, program evaluators have tended to immerse themselves in a wide range of field-based evaluation projects, large and small. These studies often rely on experimental or quasi-experimental methods to estimate impacts (or lack thereof), and they quite often include complementary descriptive and qualitative components (Creswell 2003; Maynard 2006; Gueron and Rolston 2013).

Up until the early to mid 1980s, policy analysts' work tended to be driven by the interests of specific constituent groups, such as a Congressional committee or a policy office within a federal agency like the DOL or DHHS. Moreover, up until the mid 1980s, policy analysts were fairly trusting of the quality of evidence reported in accessible publications. However, by the late 1980s, the profession had matured and there were several well-established research organizations specializing in program evaluation and policy analysis—for example, Abt Associates, Mathematica Policy Research, MDRC (formerly, the Manpower Demonstration Research Corporation), RAND, SRI International (formerly Stanford Research Institute), and Westat. Moreover, faculty scholars at prominent universities including Chicago, Harvard,
Wisconsin, and Stanford were actively involved in large-scale, randomized, controlled trials of health, welfare, employment and training, and education programs. Some of these academic researchers also were heading major university-based research centers that were capable of supporting modest to large-scale program evaluations and carrying out sophisticated policy analyses. Importantly, by this time, philanthropic organizations, such as the Ford and Rockefeller Foundations, were joining the ranks of those demanding and supporting large-scale field tests of program and policy innovations.

Change was slow but steady, due to the vision, commitment, and grit of relatively few individuals who were intent on moving public policy analysis and evaluation from a profession dominated by academics whose work prioritized personal interests and professional incentives for publishing in peer-reviewed journals, to one dominated by individuals focused on producing useful, credible evidence to inform policy and practice, irrespective of their home base. Indeed, this dominance by academic researchers was viewed by some as the root cause of the dearth of credible evidence to guide policy decisions, such as how the DOL should respond to the disappointing early results of the Job Training Partnership Act 1982 (Doolittle et al. 1993).
Greater acceptance of social experiments

By the late 1980s the culture was shifting toward acknowledgement that claims of effectiveness based on randomized controlled trials generally had stronger warrants than did claims derived from evaluations based on other sample designs and analysis methods (Greenberg et al. 1999). The reason is simple: having a control group generated by a lottery provides the basis for knowing what would have happened to individuals in the study sample who were subjected to the focal program or policy had they, instead, continued under the otherwise prevailing conditions (Gueron and Rolston 2013). By this time, several notable social experiments had demonstrated quite convincingly the benefits of having experimental as opposed to quasi-experimental or non-experimental evidence to support policy decisions, particularly when policies are controversial and/or the stakes are high. For example, it proved to be very useful to have 'irrefutable' evidence from experiments to inform the highly controversial questions of whether welfare recipients should be subjected to mandatory job search as a condition of their eligibility for benefits (Bloom and Blank 1990), whether teenage mothers with infants should be required to work or be in school to receive their benefits (Maynard 1995), and whether it is cost-effective for the government to invest in alternative schools and other forms of employment training for high school dropouts (Dynarski and Gleason 1998; Christenson and Thurlow 2004).

Importantly, by the late 1980s, experimentation had gained a strong, if not widespread, standing among important segments of the evaluation, policy, and practitioner communities as a feasible, responsible, and ethical way to incentivize states to come up with cost-effective strategies for administering federally funded
welfare, and employment and training programs (Rossi 1987; Myers and Dynarski 2003; Fraker and Maynard 1987). This was facilitated by the fact that the evaluation community had become more 'adept at ... convinc[ing] staff that ... [random assignment] would avoid unknown and incalculable uncertainty' (Gueron and Rolston 2013, pp 256–7). Over the ensuing years, the fraction of funded program and policy evaluations in the welfare, education, and employment and training policy spaces relying on experimental designs increased substantially. For example, the first major experimental design evaluation funded by the DOED was a large-scale, multi-site, multi-program evaluation of dropout prevention programs (Dynarski et al. 1998); the DOL funded experimental evaluations of the Job Training Partnership Act (Bloom et al. 1987); and the DHHS supported many experimental design evaluations of welfare policy changes, including welfare policies for teenage mothers (Kelsey et al. 2001; Maynard 1995) and the Welfare-to-Work waiver experiments of the 1990s (Freedman et al. 2000). In fact, the Welfare-to-Work programs were among the first federal initiatives to grant waivers from federal policy guidelines conditional on generating experimental evidence to substantiate (or not) the cost neutrality of the state-favored policies (Harvey et al. 2000).
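The logic of the lottery-generated control group can be illustrated with a toy simulation. In the sketch below, which uses entirely synthetic data and invented numbers, an unobserved trait ('motivation') drives both program enrollment and earnings; random assignment recovers the true impact, while comparing voluntary enrollees to non-enrollees does not.

```python
import random

random.seed(1)
TRUE_IMPACT = 500  # the program's true effect on earnings, by construction

# Unobserved motivation drives earnings; analysts never observe it directly.
sample = []
for _ in range(100_000):
    motivation = random.gauss(0, 1)
    earnings = 20_000 + 3_000 * motivation + random.gauss(0, 2_000)
    sample.append((motivation, earnings))

# Experiment: a lottery assigns people to the program, independent of motivation.
treated, control = [], []
for motivation, earnings in sample:
    if random.random() < 0.5:
        treated.append(earnings + TRUE_IMPACT)
    else:
        control.append(earnings)

# Non-experimental comparison: the motivated enroll themselves.
enrolled = [e + TRUE_IMPACT for m, e in sample if m > 0]
others = [e for m, e in sample if m <= 0]

mean = lambda xs: sum(xs) / len(xs)
print(f"experimental estimate:  {mean(treated) - mean(control):7.0f}")  # ~500
print(f"self-selected contrast: {mean(enrolled) - mean(others):7.0f}")  # ~5,300
```

The self-selected contrast is roughly ten times the true impact because motivation, not the program, drives most of the difference; the lottery breaks that link.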
Paradigm shift

By the turn of this century, we had entered a new era. As early as 1981, President Reagan issued an executive order requiring impact analysis of all major federal policies and programs 'to reduce the burdens of existing and future regulations, increase agency accountability for regulatory actions, provide for presidential oversight of the regulatory process, [and] minimize duplication and conflict of regulations' (Federal Register 1981). These requirements have persisted (U.S. Department of Health and Human Services 2016), but more noteworthy is the fact that it is becoming commonplace for policymakers to request and expect to use evidence to guide their planning and monitoring efforts, even though they recognize that the available evidence is often lacking in depth, breadth, or direct applicability. It also has become more routine to expect systematic tracking of the inputs and outputs associated with public policies and programs. A byproduct of this monitoring has been the creation of rich data sets, albeit of varying degrees of utility and accessibility, for purposes other than those for which they were commissioned. But the most notable change has been the increasing interest in using evidence quite explicitly in policy decisions about how to allocate resources, how to monitor and reward performance, and where and how to invest evaluation dollars (Haskins and Baron 2011; Mervis 2013).

These changes would not have been possible had there not also been decisions across federal agencies and offices to create clearinghouses to collect, review, and make publicly available the findings of prior evaluations they judge to have generated credible evidence of the likely impacts of programs, policies, and practices in their sphere of responsibility. Although these efforts began nearly
20 years ago (Reidy et al. 1998), it was well into the Obama administration before they reached a level of maturity that made them integral to the work of Congressional committees, the OMB, and the Program and Planning Offices of the DHHS, DOED, and DOL. It remains to be seen whether this momentum will continue with the transition to the Trump administration and its promise to adopt a more free-market approach to decision making that relies on a quite different definition of 'evidence' (Gordon and Haskins 2017).

The Institute of Education Sciences, under the leadership of its first Director, Grover (Russ) Whitehurst, set the tempo for the movement to create evidence platforms with its launch of the What Works Clearinghouse in 2002 (Whitehurst 2003). Importantly, numerous other public and nonprofit agencies have followed suit. Across the federal government, there are now a number of evidence clearinghouses. These include clearinghouses focused on education, teen pregnancy prevention, child development and welfare, and criminal justice, as well as more generic platforms like the Campbell Collaboration created by the research and evaluation communities (Table 5.1). For the evidence-based policy movement as we know it today to continue, it will be important that these types of infrastructure investments be sustained—something that may or may not fit with the Trump administration's view of the role of evidence in the policy process.

These platforms have made it more practical for government and nongovernmental agencies to incorporate evidence criteria into their program funding decisions. For example, the Every Student Succeeds Act 2015 (ESSA) includes provisions regarding allowable uses of federal education monies that depend on the amount and rigor of the evidence on expected impacts (Gross 2016). Similarly, the Office of Adolescent Health's Teen Pregnancy Prevention Program is now linking much of its program funding to evidence requirements (Office of Adolescent Health 2016).
Table 5.1: Selected evidence clearinghouses 2017

| Name | Sponsoring agency | Weblink |
| --- | --- | --- |
| Best Evidence Encyclopedia | Johns Hopkins University, Center for Data Driven Reform in Education | www.bestevidence.org/overviews/overviews.htm |
| Campbell Collaboration | Campbell Collaboration | www.campbellcollaboration.org/lib/ |
| Clearinghouse for Labor Evaluation and Research (CLEAR) | U.S. Department of Labor | http://clear.dol.gov/ |
| Crime Solutions | Office of Justice Programs | www.crimesolutions.gov/default.aspx |
| Systematic Reviews | I-3: International Initiative for Impact Evaluations | www.3ieimpact.org/en/evidence/systematic-reviews/ |
| Evidence for ESSA | Johns Hopkins University | http://evidenceforessa.org |
| Find Youth Info | The Interagency Working Group on Youth Programs | https://youth.gov/evidenceinnovation/program-directory |
| Clearinghouse on Families and Youth | National Commission on Families and Youth | http://ncfy.acf.hhs.gov/ |
| Home Visiting Evidence of Effectiveness (HomVEE) | U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research, and Evaluation | http://homvee.acf.hhs.gov/ |
| Teen Pregnancy Prevention Review Study Database | U.S. Department of Health and Human Services | http://tppevidencereview.aspe.hhs.gov/StudyDatabase.aspx |
| Benefit-cost Results | Washington State Institute of Public Policy | www.wsipp.wa.gov/BenefitCost |
| What Works Clearinghouse (WWC) | U.S. Department of Education, Institute of Education Sciences | http://ies.ed.gov/ncee/wwc/ |
| Workforce Systems Strategies | U.S. Department of Labor, Employment and Training Administration | https://strategies.workforcegps.org/ |

Source: This table builds on Maynard et al. (2016), Table 1.

Needed improvements in the evidence base

Despite the widespread recognition of the importance of credible evidence to guide public policy development, the clear majority of policy decisions are still made with little credible supporting evidence to guide how we disburse the more than $6 trillion spent annually in the United States on health, welfare, economic security, and defense (Kettl 2015). There are three primary reasons: (1) it takes time and resources to accumulate credible evidence on the broad range of issues relevant to health, social welfare, and education; (2) it takes time and resources to integrate evidence into decision-making processes; and (3) sometimes other considerations, such as values and competing priorities, do and should trump the evidence.

Not only is the current evidence base lacking in coverage, but much of it is obsolete and a shamefully large share of it is methodologically flawed—particularly much of that based on studies conducted prior to the turn of the century. For example, major public policy issues in education revolve around teacher training and compensation policies. A recent review by the What Works Clearinghouse of four widely touted strategies to improve the teaching force—Teach for America (TFA), the Teacher Advancement Program, the New Teacher Induction Model, and My Teaching Partner—identified 49 potentially relevant studies. While seven of 24 studies of TFA met the Clearinghouse's evidence standards, there was only one study meeting standards for each of the other three programs (author's review of the What Works Clearinghouse Intervention Report website). This is in spite of the fact that the standards of evidence applied by the Clearinghouse are not especially stringent, allowing inclusion of studies using either randomization or matching to create the comparison group (Institute of Education Sciences 2016).

More generally, the vast majority of the published literature the What Works Clearinghouse has identified as relevant to the topic areas it has reviewed fails to meet its standards for credibility. For example, an examination of the website identified 9,292 citations for studies of relevance to topics the Clearinghouse had
reviewed. However, only 10% of the studies yielded evidence on the effectiveness of the interventions of interest that was judged to be sufficiently credible to warrant reporting (Table 5.2). Most of the studies did not measure impacts and, of those that did, a high fraction used data and/or analytic methods that resulted in estimates of questionable credibility. On the positive side, however, over half of the studies that met evidence standards were well-designed and well-implemented randomized controlled trials. Notably, these findings on the prevalence and quality of evidence in education are similar to those for evaluations in the areas of social welfare and employment and training (Coalition for Evidence-Based Policy 2013).
Table 5.2: Sample snapshots of evidence on the effectiveness of education programs and practices reviewed by the What Works Clearinghouse

| | Number of studies: Total | Number of studies: Teacher excellence topic area | Percentage of studies identified: Total | Percentage of studies identified: Teacher excellence topic area |
| --- | --- | --- | --- | --- |
| Studies identified | 9,292 | 179 | 100% | 100% |
| Relevant methods and designs | 2,501 | 67 | 27% | 37% |
| Met evidence standards | 944 | 34 | 10% | 19% |
| without reservations | 601 | 21 | 6% | 12% |
| with reservations | 343 | 13 | 4% | 7% |

Source: Tabulations by author of data from the What Works Clearinghouse website's Review of Individual Studies Tool: https://ies.ed.gov/ncee/wwc/ReviewedStudies

Reliance on evidence in funding decisions

It is easy to be disheartened by these statistics. However, the good news is that these evidence platforms exist to help policymakers, practitioners, and the public learn what is and is not known about the effectiveness of various policies and programs that are in place or being considered. Indeed, where credible evidence does exist, it has become increasingly important and more widely used to inform policy. For example, the decision by Congress to appropriate $1.5 billion over four years to support the Maternal, Infant, and Early Childhood Home Visiting Program was driven by a 30-year history of research on one program model, Nurse–Family Partnership, all of it produced by the program's developer (Goodman 2006; Schmit et al. 2015; Haskins and Margolis 2014; Olds et al. 2013). But the focus on evidence did not stop with congressional action to fund the programs. The legislation that authorized the program required that 75% of the funds be spent on one of 11 programs that had scientific evidence of their effectiveness (U.S. Department of Health and Human Services, n.d.a). As of January 2017, the list of 'effective programs' had grown to 19 (U.S. Department of Health and Human Services, n.d.b). Reflecting the growing commitment to rigorous evidence for monitoring and future policy
decision making, Congress also appropriated funding to continually evaluate four of the most prevalent of the approved models (Michalopoulos et al. 2015; U.S. Department of Health and Human Services, n.d.c). And states and philanthropic organizations are funding evaluations of many of the less well-known program models, including some evidence-based models that have been adapted to local conditions (U.S. Department of Health and Human Services, n.d.d).

The most recent shifts in the policy arena have entailed integrating evidence use throughout the design, development, testing, and institutionalization processes. We see this featured in evidence frameworks developed to guide federal funding decisions, such as the Common Evidence Guidelines for Education Research issued by the National Science Foundation and the Institute of Education Sciences (Earle et al. 2013) and the Framework for Design-Based Implementation Research (Means and Harris 2013; Fishman et al. 2013). These frameworks, together with strong encouragement from the OMB's Office of Evidence-based Initiatives (Mervis 2013; Nussle and Orszag 2014), have substantially increased familiarity with and buy-in from the policy analysis community (Khagram and Thomas 2010; Thyer and Myers 2011). Indeed, across the board, increasing numbers of funders and shares of funding for public programs and policies are being shaped by considerations related to evidence on expected impacts, returns on investment (benefit-cost), and efficiency (cost-effectiveness). Across government, the level and strength of evidence on the likely impact of public policies, practices, and programs is shaping what is funded, the level of funding, and requirements for program monitoring and evaluation. Many of the evidence-based programs recently launched in the DHHS, DOED, and DOL, as well as programs overseen by the Corporation for National and Community Service within the Executive Branch, carry with them evidence requirements for award of funds, as well as requirements for rigorous evaluations to substantiate (or not) their effectiveness (Table 5.3).
Table 5.3: Illustrative evidence-guided funding streams

Maternal, Infant, and Early Childhood Home Visiting Program (http://mchb.hrsa.gov/programs/homevisiting/): The Health Resources and Services Administration (HRSA), in close partnership with the Administration for Children and Families (ACF), funds states, territories, and tribal entities to develop and implement voluntary, evidence-based home visiting programs using models that are proven to improve child health and to be cost-effective. These programs improve maternal and child health, prevent child abuse and neglect, encourage positive parenting, and promote child development and school readiness.

Social Innovation Fund (http://www.nationalservice.gov/programs/social-innovation-fund/about-sif): Authorized by the Edward M. Kennedy Serve America Act in April of 2009, the Social Innovation Fund (SIF) is a program of the Corporation for National and Community Service (CNCS). The SIF's portfolio represents $241 million in federal grants and more than $516 million in non-federal match commitments. To date, the SIF's Classic program has awarded grants to 27 grant-making organizations and 189 nonprofits working in 37 states and the District of Columbia. Through the SIF's Pay for Success program, over 30 jurisdictions across the United States are engaged in testing and implementing Pay for Success projects.

First in the World (http://www2.ed.gov/programs/fitw/index.html): The FITW program is funded by the U.S. Department of Education to support the development, replication, and dissemination of innovative solutions and evidence for what works in addressing persistent and widespread challenges in postsecondary education for students who are at risk of not persisting in and completing postsecondary programs, including, but not limited to, adult learners, working students, part-time students, students from low-income backgrounds, students of color, students with disabilities, and first-generation students.

Investing in Innovation (http://www2.ed.gov/programs/innovation/index.html): The Investing in Innovation Fund was established under section 14007 of the American Recovery and Reinvestment Act 2009 (ARRA). The purpose of this program is to provide competitive grants to applicants with a record of improving student achievement and attainment in order to expand the implementation of, and investment in, innovative practices that are demonstrated to have an impact on improving student achievement or student growth, closing achievement gaps, decreasing dropout rates, increasing high school graduation rates, or increasing college enrollment and completion rates. These grants encourage eligible entities to work in partnership with the private sector and the philanthropic community, and to identify and document best practices that can be shared and taken to scale based on demonstrated success.

Trade Adjustment Assistance Community College and Career Training (http://doleta.gov/taaccct/): In 2009 the American Recovery and Reinvestment Act amended the Trade Act 1974 to authorize the Trade Adjustment Assistance Community College and Career Training (TAACCCT) Grant Program. On March 30, 2010 President Barack Obama signed the Health Care and Education Reconciliation Act, which included $2 billion over four years to fund the TAACCCT program. TAACCCT provides community colleges and other eligible institutions of higher education with funds to expand and improve their ability to deliver education and career training programs that can be completed in two years or less, are suited for workers who are eligible for training under the TAA for Workers program, and prepare program participants for employment in high-wage, high-skill occupations. The Department is implementing the TAACCCT program in partnership with the Department of Education.

Workforce Innovation Fund (https://www.doleta.gov/workforce_innovation/): The Workforce Innovation Fund Grant Program is authorized by the Full-Year Continuing Appropriations Act, 2011 (P.L. 112-10). These funds support innovative approaches to the design and delivery of employment and training services that generate long-term improvements in the performance of the public workforce system, both in terms of outcomes for job seekers and employer customers and in cost-effectiveness. The program has three goals: (1) provide services more efficiently to achieve better outcomes, particularly for vulnerable populations and dislocated workers; (2) support both system reforms and innovations that facilitate cooperation across programs and funding streams in the delivery of client-centered services to job seekers, youth, and employers; and (3) build knowledge about effective practices through rigorous evaluation and disseminate effective strategies to the broader workforce system.

Source: Adapted from Maynard et al. (2016), Table 1.

Looking forward

Over the past 40 years, the fields of policy analysis and program evaluation have evolved from craft occupations, tailored to the local employment setting and available resources, to mature professions with explicit expectations about formal training requirements and approaches to the design, implementation, conduct, and dissemination of the work. Professionals in these fields now face more demanding preparation for their jobs. However, they also can expect to enjoy a more central role in the development, enactment, improvement, and evaluation of programs, policies, and practices. Forty years ago, most individuals entering these fields lacked formal training in policy analysis or program evaluation; rather, most had training in a discipline such as sociology, economics, political science, or statistics. Today, policy analysts and evaluators are more likely to have had formal training in a school of public policy or public administration. Many also will have an advanced degree in applied economics, quantitative sociology, or
political science. Increasingly, they are also likely to be grounded in principles of both quantitative and qualitative research methods, rather than one or the other.

Up through the end of the Obama administration, all signs had been that the trends toward more and smarter use of evidence of various forms to inform the policy process would persist, with a premium placed on highly credible evidence of expected impacts from the alternatives under consideration. And even should the recent change in administrations result in a roll-back in the demands for evidence, it will do so in the context of a nation with much greater capacity than it had even 10 years ago to generate credible evidence, owing to the explosion of program administrative data and of data warehouses that make extraction and sharing cheaper and quicker. It also is a world in which liberal and conservative policymakers alike have become much more accustomed to using evidence.

Importantly, the field of evaluation now operates on higher standards for evidence. It has made methodological advances and developed evaluation tools that routinize many complex and tedious analytic tasks in the design and conduct of rigorous research—for example, tools to conduct complex power analyses (Liu et al. 2006; Dong and Maynard 2013; Dong et al. 2016); do hierarchical modeling (Raudenbush and Bryk 2002); conduct randomization of sample members (Schochet 2015); perform complex data mining (Baker and Inventado 2014); and do complex processing and analysis of qualitative data sets (Ritchie et al. 2013). Thus, there is higher overall quality in the evaluations being conducted.
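As a concrete illustration of the kind of calculation such power-analysis tools automate, the following is a minimal sketch (not the PowerUp! tool itself) of the minimum detectable effect size for a simple two-arm, individually randomized trial; the sample sizes and parameter values shown are illustrative assumptions.

```python
from math import sqrt
from statistics import NormalDist

def mdes(n_total, prop_treated=0.5, alpha=0.05, power=0.80, r_squared=0.0):
    """Minimum detectable effect size, in standard-deviation units, for a
    two-arm individually randomized trial under the usual normal
    approximation. r_squared is the share of outcome variance explained
    by baseline covariates; good covariates shrink the detectable effect."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80
    p = prop_treated
    return multiplier * sqrt((1 - r_squared) / (p * (1 - p) * n_total))

# With 1,000 participants split evenly, impacts below roughly 0.18
# standard deviations would likely go undetected; covariate adjustment
# lowers that threshold.
print(round(mdes(1_000), 3))                 # 0.177
print(round(mdes(1_000, r_squared=0.5), 3))  # 0.125
```

Production tools extend this core arithmetic to clustered and longitudinal designs, where intraclass correlations and the number of sites matter as much as total sample size.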
Continued expansion and improvement of the policy analysis and program evaluation enterprise requires investment in the data infrastructure. We now live in a world where massive amounts of data are generated at all levels of government, most of which stay within a limited network of offices or agencies. The speed and quality of evaluations could be greatly enhanced by improvements in local data systems and by strategic integration across data systems (Fantuzzo and Culhane 2015). However, a major barrier to sharing and integration is a lack of trust in the process (Manzi 2012), which may require the promulgation of principles and practices for evaluation, much as we have for statistical agencies (Citro 2014).

We are approaching consensus on what constitutes credible evidence, we have strong methods for generating that evidence, and we have a pipeline of skilled evaluators. We also have templates for the types of integrated data systems needed to support quick-turnaround and ongoing analyses, and we have several strong platforms for identifying, synthesizing, and disseminating evidence. At the same time, government agencies and the publics they serve have increasingly been demanding to see the evidence before they lend their support to new policies, programs, or practices. However, it remains to be seen how enduring these trends will be across administrations. As of this printing, there are serious efforts by leaders in the evidence movement on both sides of the political divide to advise the Trump administration on ways the advances in evidence-based policy making can also be useful to its goals (for example, Feldman 2017). Although we have seen some shifts in emphasis in evidence use in policy development and management, we have seen no evidence that recent trends toward more and better use of evidence will not continue. Strong public policy depends on the vigor and vigilance of policy analysts and program evaluators and on their commitment and capacity to work with policymakers and the public to promote sharing of the evidence we have and to note the gaps. The biggest unknown is how the quite different values held by different administrations will influence policy design and implementation processes.

References

Administration for Children and Families (2015) 'History of Head Start', Washington, DC: Office of Head Start, U.S. Department of Health and Human Services, https://www.acf.hhs.gov/ohs/about/history-of-head-start.
Baker, R. S. and Inventado, P. S. (2014) 'Educational Data Mining and Learning Analytics', in Learning Analytics, New York: Springer, 61–75.
Ballas, D. and Clarke, G. P. (2001) 'Modelling the Local Impacts of National Social Policies: A Spatial Microsimulation Approach', Environment and Planning C: Government and Policy 19(4): 587–606.
Betsey, C., Hollister Jr, R. G., and Papageorgiou, M. R. (eds) (1985) Youth Employment and Training Programs: The YEDPA Years, National Academies of Sciences.
Bloom, B. and Blank, S. (1990) 'Bringing Administrators into the Process', Public Welfare 48(4): 4–12.
Boruch, R. (1985) 'Implications for the Youth Employment Experience for Improving Evaluation Research and Applied Social Policy', in C. Betsey, R. G. Hollister Jr, and M. R. Papageorgiou (eds), Youth Employment and Training Programs: The YEDPA Years, National Academies of Sciences.
Bourguignon, F. and Spadaro, A. (2006) 'Microsimulation as a Tool for Evaluating Redistribution Policies', Journal of Economic Inequality 4(1): 77–106.
Christenson, S. L. and Thurlow, M. L. (2004) 'School Dropouts: Prevention Considerations, Interventions, and Challenges', Current Directions in Psychological Science 13(1): 36–9.
Citro, C. F. (2014) 'Principles and Practices for a Federal Statistical Agency: Why, What, and to What Effect', Statistics and Public Policy 1(1): 51–9.
Citro, C. F. and Hanushek, E. A. (1991) Improving Information for Social Policy Decisions: The Uses of Microsimulation Modeling, Volume II, Technical Papers, Washington, DC: National Academy Press.
Coalition for Evidence-Based Policy (2013) 'Randomized Controlled Trials Commissioned by the Institute of Education Sciences since 2002: How Many Found Positive versus Weak or No Effects', http://coalition4evidence.org/wp-content/uploads/2013/06/IES-Commissioned-RCTs-positive-vs-weak-or-null-findings-7-2013.pdf.
Congressional Budget Office (n.d.) 'Congressional Budget Office: About Us', https://www.cbo.gov/about/overview.
Congressional Research Service (n.d.) 'Congressional Research Service: Overview', https://www.loc.gov/crsinfo/about/.
Creswell, J. W. (2003) Research Design: Quantitative, Qualitative and Mixed Methods Approaches, Thousand Oaks, CA: Sage Publications.
Currie, J. and Thomas, D. (1993) Does Head Start Make a Difference?, No. w4406, Cambridge, MA: National Bureau of Economic Research.
Currie, J. and Thomas, D. (1998) School Quality and the Longer-term Effects of Head Start, No. w6362, National Bureau of Economic Research.
Dong, N. and Maynard, R. (2013) 'PowerUp!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-experimental Design Studies', Journal of Research on Educational Effectiveness 6(1): 24–67.
Dong, N., Reinke, W. M., Herman, K. C., Bradshaw, C. P., and Murray, D. W. (2016) 'Meaningful Effect Sizes, Intraclass Correlations, and Proportions of Variance Explained by Covariates for Planning Two- and Three-Level Cluster Randomized Trials of Social and Behavioral Outcomes', Evaluation Review 40(4): 334–77.
Doolittle, F., Bell, S., Bloom, H., Cave, G., Kemple, J., Orr, L., Traeger, L., and Wallace, J. (1993) 'Summary of the Design and Implementation of the National JTPA Study', New York: Manpower Demonstration Research Corporation.
Dynarski, M. and Gleason, P. (1998) How Can We Help?: What We Have Learned from Evaluations of Federal Dropout-Prevention Programs, Princeton, NJ: Mathematica Policy Research.
Dynarski, M., Gleason, P., Rangarajan, A., and Wood, R. (1998) 'Impacts of Dropout Prevention Programs: Final Report (School Dropout Demonstration Assistance Program Evaluation Research Report)', Princeton, NJ: Mathematica Policy Research.
Earle, J., Maynard, R., and Neild, R. (2013) 'Common Evidence Guidelines for Education Research', Washington, DC: Institute of Education Sciences and National Science Foundation, http://ies.ed.gov/pdf/CommonGuidelines.pdf.
Fantuzzo, J. and Culhane, D. P. (eds) (2015) Actionable Intelligence: Using Integrated Data Systems to Achieve a More Effective, Efficient, and Ethical Government, New York, NY: Palgrave Macmillan.
Federal Register (1981) Regulatory Impact Analysis Requirements: Executive Order 12291, https://www.archives.gov/federal-register/codification/executive-order/12291.html.
Feldman, A. (2017) Strengthening Results-focused Government, Washington, DC: Brookings Institution, https://www.brookings.edu/wp-content/uploads/2017/01/es_20170130_evidencebasedpolicy.pdf.
Fishman, B. J., Penuel, W. R., Allen, A. R., Cheng, B. H., and Sabelli, N. (2013) 'Design-based Implementation Research: An Emerging Model for Transforming the Relationship of Research and Practice', National Society for the Study of Education 112(2): 136–56.
Fraker, T. and Maynard, R. (1987) 'The Adequacy of Comparison Group Designs for Evaluations of Employment-related Programs', Journal of Human Resources 22(2): 194–227.
Freedman, S., Knab, J. T., Gennetian, L. A., and Navarro, D. (2000) 'The Los Angeles Jobs-First GAIN Evaluation: Final Report on a Work First Program in a Major Urban Center', New York, NY: Manpower Demonstration Research Corporation.
Gordon, R. and Haskins, R. (2017) 'How the New Budget Could Hurt One of the Smartest Ideas in Policymaking', Politico, March 31, http://www.politico.com/agenda/story/2017/03/the-trump-administrations-misleading-embrace-of-evidence-000385.
Government Accountability Office (n.d.) 'Government Accountability Office: About Us', http://www.gao.gov/about/.
Greenberg, D., Shroder, M., and Onstott, M. (1999) 'The Social Experiment Market', The Journal of Economic Perspectives 13(3): 157–72.
Gross, B. (2016) 'What Works? Evidence-Based Policymaking Under ESSA', Maximizing Opportunities Under ESSA, SEA of the Future (6): 25–37.
Gueron, J. M. and Rolston, H. (2013) Fighting for Reliable Evidence, New York: Russell Sage Foundation.
Harvey, C., Camasso, M. J., and Jagannathan, R. (2000) 'Evaluating Welfare Reform Waivers under Section 1115', The Journal of Economic Perspectives 14(4): 165–88.
Haskins, R. and Baron, J. (2011) 'Building the Connection Between Policy and Evidence', in R. Puttick (ed), Using Evidence to Improve Social Policy and Practice, Alliance for Useful Evidence, 25–51.
Haskins, R. and Margolis, G. (2014) Show Me the Evidence: Obama's Fight for Rigor and Results in Social Policy, Washington, DC: Brookings Institution Press.
Hogan, R. L. (2007) 'The Historical Development of Program Evaluation: Exploring Past and Present', Online Journal for Workforce Education and Development 2(4): 5.
Institute of Education Sciences (2016) What Works Clearinghouse Procedures and Standards Handbook, Version 3, Washington, DC: Institute of Education Sciences.
Johnson, L. B. (1964) 'Remarks Upon Signing the Economic Opportunity Act', August 20, online by Gerhard Peters and John T. Woolley, The American Presidency Project, http://www.presidency.ucsb.edu/ws/?pid=26452.
Kelsey, M., Johnson, A., and Maynard, R. A. (2001) The Potential of Home Visitor Services to Strengthen Welfare-to-work Programs for Teenage Parents on Cash Assistance, Princeton, NJ: Mathematica Policy Research, Incorporated.
Kershaw, D. and Fair, J. (1976) The New Jersey Negative Income Tax Experiment (Vol. 1): Operations, Surveys, and Administration, New York: Academic Press.
Kettl, D. F. (2015) 'The Job of Government: Interweaving Public Functions and Private Hands', Public Administration Review 75(2): 219–29.
Khagram, S. and Thomas, C. W. (2010) 'Toward a Platinum Standard for Evidence-Based Assessment by 2020', Public Administration Review 70(s1): S100–6.
Klein, A. (2015) 'ESEA Reauthorization: The Every Student Succeeds Act Explained', Education Week, November 30.
Levitt, S. D. and List, J. A. (2009) 'Field Experiments in Economics: The Past, the Present, and the Future', European Economic Review 53(1): 1–18.
Liu, X., Spybrook, J., Congdon, R., Martinez, A., and Raudenbush, S. (2006) Optimal Design for Multi-level and Longitudinal Research, Ann Arbor, MI: University of Michigan, Survey Research Center of the Institute of Social Research.
Madaus, G. F., Stufflebeam, D., and Scriven, M. S. (1983) 'Program Evaluation', in Evaluation Models: Viewpoints on Educational and Human Services Evaluation, Boston, MA: Kluwer-Nijhoff, pp 3–22.
Manzi, J. (2012) Uncontrolled: The Surprising Payoff of Trial-and-error for Business, Politics, and Society, New York: Basic Books.
Matthews, D. (2014) 'Everything You Need to Know about the War on Poverty', Washington Post, January 8, https://www.washingtonpost.com/news/wonk/wp/2014/01/08/everything-you-need-to-know-about-the-war-on-poverty/?utm_term=.0f4881a0575b.
Maynard, R. (1995) 'Teenage Childbearing and Welfare Reform: Lessons from a Decade of Demonstration and Evaluation Research', Children and Youth Services Review 17(1–2): 309–32.
Maynard, R. A. (2006) 'Presidential Address: Evidence-based Decision Making: What Will It Take for the Decision Makers to Care?', Journal of Policy Analysis and Management 25(2): 249–65.
Maynard, R., Goldstein, N., and Nightingale, D. S. (2016) 'Program and Policy Evaluations in Practice: Highlights from the Federal Perspective', in L. Peck (ed), New Directions for Evaluation, San Francisco, CA: Jossey-Bass/Wiley, pp 109–34, DOI: 10.1002/ev.20209, http://onlinelibrary.wiley.com/doi/10.1002/ev.20209/
McKey, R. H. (1985) 'The Impact of Head Start on Children, Families and Communities', Final Report of the Head Start Evaluation, Synthesis and Utilization Project, http://files.eric.ed.gov/fulltext/ED263984.pdf.
Means, B. and Harris, C. J. (2013) 'Towards an Evidence Framework for Design-Based Implementation Research', Yearbook of the National Society for the Study of Education 112(2): 350–71.
Mervis, J. (2013) 'An Invisible Hand Behind Plan to Realign US Science Education', Science 341(6144): 338–41.
Michalopoulos, C., Lee, H., Duggan, A., Lundquist, E., Tso, A., Crowne, S., Burrell, L., Somers, J., Filene, J. H., and Knox, V. (2015) 'The Mother and Infant Home Visiting Program Evaluation: Early Findings on the Maternal, Infant, and Early Childhood Home Visiting Program. OPRE Report 2015-11', Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.
Myers, D. and Dynarski, M. (2003) 'Random Assignment in Program Evaluation and Intervention Research: Questions and Answers', http://files.eric.ed.gov/fulltext/ED478982.pdf.
Newhouse, J. P. and RAND Corporation, Insurance Experiment Group (1993) Free for All?: Lessons from the RAND Health Insurance Experiment, Cambridge, MA: Harvard University Press.
Nussle, J. and Orszag, P. (eds) (2014) Moneyball for Government, Washington, DC: Disruption Books.
Office of Adolescent Health (2016) 'Teen Pregnancy Prevention Program', Washington, DC: U.S. Department of Health and Human Services, https://www.hhs.gov/ash/oah/oah-initiatives/tpp_program/about/#.
Office of Management and Budget (n.d.) 'Office of Management and Budget: About Us'.
Olds, D., Donelan-McCall, N., O'Brien, R., MacMillan, H., Jack, S., Jenkins, T., and Pinto, F. (2013) 'Improving the Nurse–family Partnership in Community Practice', Pediatrics 132(2): S110–17.
Raudenbush, S. W. and Bryk, A. S. (2002) Hierarchical Linear Models: Applications and Data Analysis Methods (vol. 1), Thousand Oaks, CA: Sage.
Reidy, M., Goerge, R., and Lee, B. J. (1998) Planning for Human Service Reform Using Integrated Administrative Data, Chicago, IL: Chapin Hall at the University of Chicago.
Ritchie, J., Lewis, J., Nicholls, C. M., and Ormston, R. (eds) (2013) Qualitative Research Practice: A Guide for Social Science Students and Researchers, Thousand Oaks, CA: Sage.
Ross, H. (1970) 'An Experimental Study of the Negative Income Tax', Child Welfare 49(10): 562–69.
Rossi, P. (1987) 'The Iron Law of Evaluation and Other Metallic Rules', Research in Social Problems and Public Policy 4: 3–20.
Rossi, P. H. (1969) 'Practice, Method, and Theory in Evaluating Social Action Programs', in On Fighting Poverty, New York: Basic Books, 217–34.
Schmit, S., Schott, L., Pavetti, L., and Matthews, H. (2015) 'Evidence-Based Home Visiting Programs in Every State at Risk if Congress Does Not Extend Funding', Center for Budget and Policy Priorities, http://www.cbpp.org/sites/default/files/atoms/files/3-10-14hv.pdf.
Schochet, P. Z. (2015) 'Statistical Theory for the RCT-YES Software: Design-based Causal Inference for RCTs, Second Edition (NCEE 2015–4011)', Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Analytic Technical Assistance and Development, http://ies.ed.gov/ncee/edlabs.
Scriven, M. (1993) 'Hard-won Lessons in Program Evaluation', New Directions for Program Evaluation 58: 1–107.
Stufflebeam, D. (2001) 'Evaluation Models', New Directions for Evaluation 89: 7–98.
Thyer, B. A. and Myers, L. L. (2011) 'The Quest for Evidence-based Practice: A View from the United States', Journal of Social Work 11(1): 8–25.
U.S. Department of Health and Human Services (n.d.a) 'Health Resources Services Administration, Administration for Children and Families, Home Visiting', https://mchb.hrsa.gov/maternal-child-health-initiatives/home-visiting-overview.
U.S. Department of Health and Human Services (n.d.b) 'Evaluating Home Visiting Programs: State and Local Examples', https://www.childwelfare.gov/topics/preventing/prevention-programs/homevisit/evaluate-programs/examples/.
U.S. Department of Health and Human Services, Administration for Children and Families (2013) 'Summary of Child Welfare Waiver Demonstrations', https://www.acf.hhs.gov/sites/default/files/cb/waiver_summary_final_april2013.pdf.
U.S. Department of Health and Human Services (n.d.c) 'The Maternal, Infant, and Early Childhood Home Visiting Program: Partnering with Parents to Help Children Succeed', https://www.acf.hhs.gov/sites/default/files/ecd/home_visiting_issue_brief_2015.pdf.
U.S. Department of Health and Human Services (2016) 'Regulatory Guidelines for Impact Analysis', Washington, DC: U.S. Department of Health and Human Services, https://aspe.hhs.gov/sites/default/files/pdf/242926/HHS_RIAGuidance.pdf.
U.S. Department of Health, Education and Welfare (1969) 'History of Title I ESEA', U.S. Department of Health, Education and Welfare, http://files.eric.ed.gov/fulltext/ED033459.pdf.
Watts, H. W. and Rees, A. (eds) (1977a) New Jersey Income Maintenance Experiment (vol. 2): Labor Supply Responses, New York: Academic Press.
Watts, H. W. and Rees, A. (eds) (1977b) New Jersey Income Maintenance Experiment (vol. 3): Expenditures, Health and Social Behaviors; and the Quality of Evidence, New York: Academic Press.
Weimer, D. and Vining, A. (2017) Policy Analysis: Concepts and Practice, 6th Edition, New York, NY: Routledge.
Weiss, C. H. (1998) Evaluation: Methods for Studying Programs and Policies (2nd edn), Upper Saddle River, NJ: Prentice Hall.
Whitehurst, G. J. (2003) The Institute of Education Sciences: New Wine, New Bottles, http://files.eric.ed.gov/fulltext/ED478983.pdf.
Zigler, E. and Valentine, J. (1979) Project Head Start: A Legacy of the War on Poverty, New York: Free Press.
Zigler, E. and Muenchow, S. (1992) Head Start: The Inside Story of America's Most Successful Educational Experiment, New York: Basic Books.
SIX
Policy analysis in the states

Gary VanLandingham
Most assessments of policy analysis in the United States focus on the federal government and mention activity in the 50 American states only in passing. This oversight is unfortunate, as a large volume of policy analysis occurs at the state level, both within these governments and by a wide range of nonprofit and private organizations including think tanks, research institutes, and advocacy organizations. This activity supports the high volume of innovation that occurs at the state level, a portion of which is subsequently adopted at the federal level. States have been called, with good reason, 'laboratories of democracy' (Osborne 1990). For example, Minnesota was the first state to authorize charter schools, which since have been authorized across the country. In 2006 Massachusetts developed a universal health insurance program that served as the model for 'Obamacare'—the 2010 federal Affordable Care Act. More recently, in 2010, Wyoming became the first state to require energy companies to identify the chemicals used in hydraulic fracturing drilling (fracking), California and other western states have established a cap-and-trade system to reduce carbon emissions, Wisconsin has eliminated collective bargaining rights for most of its public employees, and Kansas has enacted major income tax cuts in the hopes of spurring economic growth (California Environmental Regulation Agency 2015; Samuels 2015; Vockrodt 2015; Moncrief and Squire 2013). Whether one supports or opposes such actions, it is hard to argue that these state innovations have been inconsequential. Given the ongoing political gridlock at the federal level, states and other sub-national governments are likely to continue to be the drivers of policy innovation in the United States for the foreseeable future. Neglecting the state-level policy analysis that often supports such activity would thus provide a limited and misleading view of the field.

The history of policy analysis within the states has been greatly shaped by the institutional changes that have occurred within these governments over the past 50 years (see the chapter by Lynn in this volume). Given the diversity among the states, it is not surprising that their policy analysis organizations and activities also vary widely. While state-level policy analysis has grown rapidly, it has also fragmented, and many policy analysis organizations face important challenges. This chapter discusses these trends and the potential future of policy analysis in the states.
Development of policy analysis in the states

While policy analysis in the states is robust, this is a relatively recent development. In fact, before the mid 20th century most states had very limited policy analysis capacity and were considered to be backwaters. While there were notable exceptions, such as the actions taken by some states early in the 20th century to regulate corporations, from the 1930s through the 1960s the federal government served as the primary driver of policy innovation through its response to national challenges such as the Great Depression, World War II, the Cold War, the Civil Rights era, and the War on Poverty (Moncrief and Squire 2013; Radin 2006). In part, the federal government had assumed its leading role because state governments had significant institutional weaknesses. As noted in a recent text: 'If the national government was usurping some of the policy-making roles traditionally left to the states and their localities, it was largely due to the states' intractability in exercising their responsibilities, and inability to meet the needs of their citizens … And when states did change, it was often too little, too late' (Moncrief and Squire 2013, pp 5–6).

A key problem was that many states had outmoded political systems dominated by local interests that failed to address larger problems facing their jurisdictions. For example, until the mid 1960s, Florida's legislature was among the least representative in the country. Each of the state's 67 counties was guaranteed at least one seat in the legislature, despite the fact that their populations ranged from less than 10,000 to over 1 million. As a result, less than one fifth of the state's voters could elect a majority of both chambers of the state legislature, and it was controlled by the 'Pork Chop Gang'—a coalition of legislators from rural counties who were primarily known for supporting segregation, minimizing taxation, and resisting government reforms. The Florida constitution, written in the 1880s in reaction to the post-Civil War Reconstruction era, also placed significant limitations on the legislature, which was to meet only once every other year for a maximum of 60 days. This situation was far from unique—most state legislatures operated under severe restrictions on their operations and lacked the capacity needed to serve as independent and equal branches of government (Coburn and deHaven-Smith 2002; Kurtz 2010; NCSL (National Conference of State Legislatures) 2000).

This situation began to change in the 1960s, when major reforms to state governments, coupled with growing interest in institutionalizing analysis within government, brought substantial changes to the level and quality of policy analysis within the states. A key stimulus to these reforms was a series of U.S. Supreme Court decisions (Baker v Carr 1962; Reynolds v Sims 1964) which found that the malapportionment of state legislatures was so great that it violated the equal protection requirements of the federal constitution. States were required to create
legislative districts that represented more equal populations, which, in many states, had the effect of shifting power from rural to urban areas and opening legislative membership to a new generation of lawmakers who were interested in addressing the problems facing their states (NCSL 2000).

Concurrently, a major effort was initiated to reform state legislatures, led by influential groups including the Ford Foundation, the Carnegie Corporation, the Citizens Conference on State Legislatures, and the Eagleton Institute of Politics at Rutgers University. These groups highlighted the institutional weaknesses of state legislatures and recommended actions to increase the professionalism of these bodies, including holding more frequent and longer sessions, raising legislator pay to enable a wider range of citizens to serve, and significantly increasing legislative staffing. In response, many states shifted from biennial to yearly sessions and some adopted year-round cycles in which legislators met throughout the year (Moncrief and Squire 2013; CSG (Council of State Governments) 2010; Kurtz 2010).

As part of these reforms, most states substantially increased their internal supply of policy analysts. By 1974, two thirds of states reported that their central budget office conducted program effectiveness analyses (Mushkin 1977), and almost all states expanded their legislative staffing to include attorneys, budget and policy experts, researchers and program evaluators, information technology and communications specialists, and political advisors. These staff were often located in central agencies that served members in both chambers and parties, such as fiscal offices that analyzed budget proposals and program evaluation offices that examined the operation of executive branch agencies. By 2009, state legislatures reported employing approximately 28,000 professional staff, which enabled members to assess and generate policy and budget proposals independently of the executive branch (Hird 2005; Kurtz 2010; NCSL 2000, 2015b).

Actions by the federal government also supported the development of policy analysis within the states. Responding to the increased desire to incorporate analysis into government processes, the Government Accountability Office promoted staff training within state legislative and executive branches to better ensure that federal dollars were being used effectively and efficiently. The federal government also spurred the development of several large data initiatives—including the Uniform Crime Reporting system, the Medicaid Management Information System, the Family Support Information System, the Social Services Information System, and the Client Information System—to support analysis capacity in the states (Mushkin 1977).

Additionally, several multi-state organizations were established or expanded to become important policy analysis resources for state governments. These included the National Conference of State Legislatures, which serves legislators and staff; the National Governors Association, which represents these chief executives; and the Council of State Governments, which serves officials in all three branches. In addition, similar organizations were formed to serve specialized groups of officials, such as the National Association of State Fiscal Officers and the National Association of Chief Information Officers. These organizations
disseminate policy-relevant information through meetings and research reports, and play a key role in spurring the diffusion of innovations among states (Berry and Berry 2007; Moncrief and Squire 2013).

In recognition of this growing state policy analysis capacity, as well as in response to national political trends, the federal government has devolved significant responsibility for many programs to the states, such as job training, public assistance, and environmental regulation. These changes increased the range and complexity of issues facing states, but also served to support their high level of policy innovation (NCSL 2000).
Current status of policy analysis in states

A high level of policy analysis now occurs throughout state governments, although this activity varies greatly across the nation, in part due to states’ substantial differences in wealth, level of taxation, number of employees, party control, political culture, and balance of power between their governors and legislatures (Hird 2005; Kraft and Furlong 2007; Lieske 1993; Weimer and Vining 2011). Key nodes of activity within most states include legislative units, executive branch offices, and external bodies such as nonprofit think tanks and university centers (Mayne et al. 1999).

Policy analysis by state legislatures

State legislatures have made a substantial investment in staffing to provide their members with access to research and data analysis services. In addition to committee and caucus staffing, many legislatures have established units that perform policy analysis studies. As noted by Hird (2005), these units fall into three major categories: central research offices that respond to specific questions posed by legislators and committees; committees that provide bill and fiscal analysis services during legislative sessions but perform more substantive policy analyses during the interim between sessions; and specialized offices whose full-time core mission is conducting substantive policy analyses (Carter-Yamauchi 2014; NLPES (National Legislative Program Evaluation Society) 2015).

A 2014 survey of this third set of legislative units, conducted by the National Legislative Program Evaluation Society and completed by offices from 39 states, showed that they share similarities but also exhibit important differences. While a few have existed for over 50 years, most were created in the 1970s when legislatures were rapidly expanding their research capacity. However, three states (Alaska, Maine, and North Carolina) have created legislative policy analysis offices since 2000. The specific responsibilities of the offices also vary. In addition to conducting qualitative and quantitative policy analyses, some also perform investigations, and a few have a role in developing, assessing, or validating their states’ performance measurement systems. In most states, the offices’ research agenda is primarily set by the legislature, either through provisions in statute or
appropriations provisos or by directives from legislative leadership or governing committees. However, in some states individual legislators or committees may request studies. Somewhat reflecting the population size of their parent states, the offices’ staffing varies substantially. Most have between 10 and 25 staff, but five have 50 or more analysts. Similarly, the number of reports published by these offices ranges from under 10 to over 50 policy analyses annually (NLPES 2015).

The activities of these legislative policy analysis offices can be portrayed by profiling units in two states: the Florida Office of Program Policy Analysis and Government Accountability (OPPAGA) and the New Mexico Legislative Finance Committee (LFC).

The Florida OPPAGA is one of the larger state legislative policy analysis offices. It has the mission of supporting the state’s legislature and ‘provides data, evaluative research, and objective analyses to assist legislative budget and policy deliberation’ (OPPAGA (Florida Office of Program Policy Analysis and Government Accountability) 2015). The Office was created in 1994 after the state’s Auditor General had resisted repeated legislative requests to broaden its mission beyond its traditional financial and compliance auditing. OPPAGA was staffed by transferring 88 positions from the Auditor General, and it was given the directive to conduct program evaluations and policy studies and to support the development of a statewide performance measurement system (Roth 2015).

OPPAGA is authorized to conduct studies on the full range of state government activities. In 2014, its reports addressed a wide array of issues, including options for increasing lottery revenues, the effectiveness of Florida’s system for detecting Medicaid fraud and abuse, and the status of efforts to remove barriers to business creation and expansion. In addition to these long-term studies, the office also conducts a large number of short-term analyses that are not published but are transmitted to legislators as confidential memoranda. In 2010, the last year for which full data are available, the office published 65 policy analysis and evaluation reports and projected that it would issue 165 confidential research memoranda (OPPAGA 2009, 2010, 2015). In recognition of its work, OPPAGA has received awards from the American Society for Public Administration’s Center for Accountability and Performance and from the National Legislative Program Evaluation Society for the quality of its analyses and impact on public policy (NLPES 2015).

The New Mexico LFC was created in 1957 as the fiscal and management arm of the New Mexico legislature. In 1991, the LFC also assumed responsibility for conducting program evaluations from the Office of the State Auditor. LFC’s mission is to ‘provide the Legislature with objective fiscal and public policy analyses, recommendations and oversight of state agencies to improve performance and ensure accountability through the effective allocation of resources for the benefit of all New Mexicans’ (LFC (New Mexico Legislative Finance Committee) 2015a). Among the LFC’s responsibilities is to develop a budget proposal that the legislature considers as an alternative to the governor’s budget recommendations. The LFC also issues an annual policy analysis report that examines key issues facing the state and actions that the legislature may wish to address during its next
session. In addition, the LFC issues quarterly reports that analyze the performance measures reported by each agency and recommends actions that should be taken to address problem trends. For example, in 2015 the LFC noted that the percentage of juvenile justice clients who subsequently enter the adult corrections system was increasing, and it recommended specific steps that the Children, Youth and Families Department should take to address this problem (LFC 2015b).

The LFC conducts in-depth policy analyses of state issues and programs and released 11 such reports in 2014. For example, one report noted that the state’s graduation rate was substantially below the national average and used a benefit-cost analysis model to assess the fiscal and social impact of this problem; this analysis found that successful actions to graduate 2,600 more students annually would generate about $700 million in net benefits over the students’ lifetimes (LFC 2014). In recognition of its high quality of work, the LFC was awarded the 2015 Excellence in Evaluation Award by the National Legislative Program Evaluation Society (NLPES 2015).

Policy analysis by states’ executive branch agencies

A variety of executive branch organizations produce policy analysis studies. A 2015 survey of executive budget offices by the National Association of State Budget Officers found that 35 reported conducting program evaluations using performance data, and 19 offices reported performing economy and efficiency studies (NASBO 2015). Several states also have established units, such as Colorado’s C-Stat, Maryland’s ‘StateStat’, and Washington State’s ‘Results Washington’, that seek to improve outcomes and operational efficiency by tracking performance indicators and using these metrics in regular discussions with agency managers (Behn 2014); a minimal sketch of this kind of indicator review appears at the end of this subsection.

Individual executive branch agencies within states often have research units that conduct policy analyses and evaluations. While there are no comprehensive data available on these offices, established networks of these units in some policy areas demonstrate their proliferation. For example, Statistical Analysis Centers operate in every state and the District of Columbia to collect, analyze, and disseminate criminal and juvenile justice data, and just under half the states have established Sentencing Commissions, which analyze court data and make recommendations to improve the consistency of penalties assessed for criminal violations (Justice Research and Statistics Association 2015; National Association of Sentencing Commissions 2015). As noted by Mayne et al. (1999), the focus of policy analyses by executive branch organizations tends to be operational in nature, concentrating on strategies to improve efficiency and effectiveness through changes to existing programs. Due to political considerations, few such studies assess program outcomes or consider issues that agency leadership does not wish to raise.
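The basic operation behind both the LFC’s quarterly performance reports and executive ‘stat’ programs is to compare reported measures against targets and flag deteriorating trends for management attention. The sketch below illustrates that check; the measure names, targets, and values are hypothetical placeholders, not drawn from any state’s actual system.

```python
# Minimal sketch of a quarterly performance-measure review of the kind that
# 'stat' programs and legislative quarterly report cards perform.
# All measure names, targets, and values are hypothetical placeholders.

quarterly_results = {
    # measure name: (target, observed values for the last four quarters)
    "juvenile_to_adult_corrections_rate_pct": (10.0, [9.2, 9.9, 10.8, 11.5]),
    "permit_processing_time_days": (30.0, [31.0, 29.5, 28.0, 26.5]),
}

def flag_problem_trends(results):
    """Return measures that miss their target and are trending the wrong way.

    Assumes lower values are better for every measure, which holds for the
    placeholder rates and processing times above.
    """
    flagged = []
    for measure, (target, values) in results.items():
        missing_target = values[-1] > target
        worsening = values[-1] > values[0]
        if missing_target and worsening:
            flagged.append((measure, target, values[-1]))
    return flagged

for measure, target, latest in flag_problem_trends(quarterly_results):
    print(f"{measure}: latest value {latest} exceeds target {target}; "
          "recommend corrective action in the next quarterly report")
```

In practice, of course, such reviews layer agency context and analyst judgment on top of this mechanical screen; the flagged measures are starting points for discussion with agency managers, not conclusions.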
Policy analysis by state universities and think tanks

Another key source of policy analysis in the states is the large number of university-based and private organizations that generate policy studies intended to influence the policy-making process (Hird 2005). The number of such state-based think tanks has grown substantially in recent years. While a 1996 study of think tanks identified 187 such entities in the 50 states and the District of Columbia (Rich 2006), the author’s 2014 update using the same methodology found that this number had increased to 659 state-level organizations (see the chapter by Rich in this volume on think tanks).

Many of these organizations combine their policy research activities with an advocacy focus. In fact, many state think tanks are formal affiliates of national organizations that have explicit ideological aims. For example, Americans for Prosperity—a national organization founded by conservative/libertarian billionaire businessman David Koch that supports free markets and entrepreneurship and advocates for lower taxes, reduced government spending, and limited regulation—has affiliated think tanks in 35 states that advocate for similar actions (FactCheck.org 2015; AFP (Americans for Prosperity) 2015). Such ideologically focused think tanks are often highly political and operate as part of policy networks that combine policy research with lobbying, fundraising, and electoral support for candidates who support their agendas.

Other affiliated groups of think tanks support similar causes but are not as politically focused. For example, the Government Research Association, a network of citizen and business-oriented think tanks that seeks ‘improvement of government and the reduction of its costs’, lists affiliates in 19 states that share similar aims (GRA 2015). On the other side of the political spectrum, the U.S. Public Interest Research Group, ‘a consumer group that stands up to powerful interests whenever they threaten our health and safety, our financial security, or our right to fully participate in our democratic society,’ has affiliated think tanks in 47 states (USPIRG (U.S. Public Interest Research Group) 2015).

In contrast, other state-level think tanks, particularly those located in public universities, seek to operate as politically neutral information centers. These organizations typically have a specific policy area focus. For example, the LeRoy Collins Institute at Florida State University focuses on analyzing financial issues facing governments within Florida. In recent years the Institute has issued a series of reports examining challenges in the state’s tax system, the funding status of Florida’s state and local public pension systems, and financial and accountability issues relating to the state’s community development special districts (Florida State University LeRoy Collins Institute 2015).

Some university-based think tanks provide policy analysis services that extend beyond their home states. For example, the Center for Evidence-based Policy within the Oregon Health and Science University works with policymakers in more than 20 states to provide evidence intended to guide health policy decisions. The Center’s Medicaid Evidence-based Decisions Project supports a network of health agencies in 17 states by providing comprehensive research reviews
on the effectiveness of health-related interventions; the participating states use this information to inform their Medicaid benefit design and coverage policy choices, such as whether to fund different angiography procedures for diagnosing obstructive coronary artery disease. The Center’s Drug Effectiveness Review Project provides similar reviews of the medical literature to inform the drug coverage policy decisions of 13 states’ Medicaid and public pharmacy programs (Center for Evidence-Based Policy 2015).

Likewise, the Washington State Institute for Public Policy (WSIPP), housed at The Evergreen State College, carries out non-partisan research at the direction of the state legislature and WSIPP’s Board of Directors. The Institute evaluates state programs and conducts comprehensive research reviews to identify ‘what works’—those programs that high-quality evaluations have shown to be effective in achieving desired outcomes such as increased high school graduation rates and reduced recidivism of persons released from prison. WSIPP also has developed a benefit-cost model that analyzes the long-term return on investment that the state could achieve through investments in a broad range of program alternatives across five social policy areas: criminal justice, substance abuse, mental health, child welfare, and Pre-K to secondary education. These reports have been influential in informing the Washington legislature’s policy choices, and, as discussed below, the benefit-cost model has been adopted by 23 other states that participate in the Pew-MacArthur Results First Initiative (WSIPP (Washington State Institute for Public Policy) 2015; Pew 2017). A simplified sketch of the benefit-cost arithmetic underlying such models appears below.

The relative strengths and weaknesses of the different sources of state-level policy analysis are summarized in Table 6.1, with the major differences being the types of issues each source tends to focus on, their ability to obtain state data, their degree of organizational independence, and their access to policymakers.
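To make the arithmetic behind such benefit-cost models concrete, the following sketch discounts a projected stream of per-participant benefits to present value and compares it with program cost. The discount rate, cost, benefit, and time horizon are hypothetical placeholders rather than WSIPP or Results First estimates.

```python
# Simplified sketch of the benefit-cost logic used by models such as WSIPP's:
# discount projected future benefits to present value and compare them with
# the per-participant cost of the program. All figures are hypothetical.

def present_value(annual_benefit: float, years: int, discount_rate: float) -> float:
    """Present value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

def benefit_cost(cost: float, annual_benefit: float, years: int,
                 discount_rate: float = 0.03) -> dict:
    """Net present benefit and benefit-cost ratio for one participant."""
    pv = present_value(annual_benefit, years, discount_rate)
    return {"net_benefit": round(pv - cost, 2),
            "benefit_cost_ratio": round(pv / cost, 2)}

# Hypothetical dropout-prevention program: $4,000 per participant, projected
# to yield $900 a year in higher earnings and avoided public costs for 40 years.
print(benefit_cost(cost=4_000, annual_benefit=900, years=40))
# A positive net benefit and a ratio above 1.0 would mark the program as a
# candidate for 'evidence-based' investment; real models also weigh the
# strength of the underlying evaluation evidence for each program.
```

The actual models are far richer, monetizing multiple outcome streams and adjusting for evaluation quality, but the core comparison of discounted benefits against costs is the same.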
Impact of state-level policy analysis organizations

A discussion of the role of policy analysis in state government would be incomplete without considering its use by key stakeholders. Predictably, available research suggests that state-level policy analyses are most likely to have impact when conducted within the organization that is making the examined policy choice (for example, when a legislative research unit conducts an analysis to inform an upcoming legislative budget decision), in part because internal researchers have the advantage of direct access to policymakers, which aids communication of results and helps build confidence in the reliability of the analysis (Mayne et al. 1999). Similarly, research indicates that state-level analyses that include readily actionable recommendations have a relatively high degree of use, as do studies that incorporate benefit-cost analysis (VanLandingham 2011; White and VanLandingham 2015).
Table 6.1: Relative strengths and weaknesses of state-level policy analysis organizations

Legislative
Relative strengths:
• Organizational separation from state agency operations can increase objectivity of their analysis of the executive branch
• Ready access to legislative policymakers and understanding of the legislative process can enhance relevance of studies
• Typically have access to a wide range of agency data
• Studies often focus on fiscal impact, which can be useful in the budget process
Relative weaknesses:
• May have incomplete knowledge of agency operations
• Results may be seen as political, particularly if legislative and executive branches are controlled by different parties
• Focus on legislative priorities may limit utility to other stakeholders
• May avoid issues that are politically sensitive to the legislature

Executive and agency
Relative strengths:
• Location within executive branch and understanding of agency processes may increase relevance of studies to agency and executive leadership
• Ready access to agency data and officials may enhance research quality
• Studies often focus on operational issues that are within agency control, generating actionable recommendations for program improvement
Relative weaknesses:
• May avoid inquiries or reporting findings that are politically sensitive to the executive branch; findings may be challenged as self-serving
• Focus on meeting agency or executive information needs may narrow scope of analysis to operational matters
• Analysis units may be subject to disproportionate cutbacks when executive leadership changes or budget shortfalls arise

External think tank
Relative strengths:
• More likely to focus studies on program and social outcomes rather than operational issues
• May have access to specialized expertise not readily available to executive and legislative agencies
• Research not as subject to time constraints
• Separation from state entities may increase objectivity and perspective
• May use communication strategies to promote findings (press conferences, social media) that are not as available to government analysts
Relative weaknesses:
• May have limited access to government data and incomplete knowledge of program operations
• Results may not be timed to decision-making windows
• May have limited access to policymakers, hindering communication of results
• Findings may be seen as political or self-serving if generated by an ideological advocacy organization
• Topics may not be attuned to political viability or relevance

Source: Adapted from Mayne et al. (1999, p 32).
Hird (2005) examined the impact of analyses by legislative research offices by surveying state legislators; this analysis concluded that the offices were moderately influential in policy outcomes. Larger offices and those that performed more analytical and longer-term studies were more highly rated by the surveyed legislators than smaller offices and those that provided mostly descriptive analyses.
Interestingly, this study also found that legislative research units in states that were controlled by a single political party tended to be smaller and focused primarily on fact gathering, possibly because the dominant parties in such states wished to ensure that the research offices did not produce politically inconvenient analyses that the minority party could use to challenge their policy agendas.

VanLandingham (2011) similarly examined legislative policy analysis offices and found that perceptions of impact were closely related to these offices’ organizational placements. Units that were located in legislative committees or independent offices were viewed more positively by senior legislative staff than those housed in legislative auditing offices, largely because the former worked more closely with legislative stakeholders and provided more readily actionable reports. Offices located in states that had experienced recent changes in party control were viewed as less influential than those in states with more stable legislative environments, likely because changes in party control disrupted the offices’ relationships with key legislators and staff.

The impact of policy analyses generated by external entities appears to vary by source. A nationwide survey of senior legislative staff (VanLandingham 2006) assessed the perceived credibility and usefulness of alternative sources of policy analysis to their state legislatures. The National Conference of State Legislatures was rated as ‘credible’ by 92% of survey respondents and as ‘useful’ by 87% of these senior staff. Over half of these respondents also viewed information provided by federal agencies, other national organizations such as the Council of State Governments, and industry groups as highly or somewhat credible and useful. In contrast, less than half of the survey respondents viewed information provided by partisan offices, think tanks, and the media as highly or somewhat credible and useful. Clark and Little (2002) found that legislators in states where members frequently move on to higher offices, those serving in part-time legislatures, and those from states without term limits were most likely to rely on information from national organizations.

State agency use of policy analysis has not been as widely studied. Lester (1993) found that agency officials reported relying primarily on policy information from their peers in other agencies, newspapers, counterparts in federal agencies, and the governor’s office; policy analysis from research organizations and universities was not relied on heavily. A 2015 study of state agencies’ use of information to improve their operations found that use of analysis varied by type of agency—natural resource, substance abuse, and mental health agencies reported the highest use of such analyses (Jennings and Hall 2015).
Challenges facing state-level policy analysis

While policy analysis in the states is currently robust, it faces several challenges. Increased political polarization, legislative term limits, and changes in party control have fundamentally changed the environment in which the work occurs in many states. State-level policy analysts also operate in an increasingly crowded
information marketplace, making it more difficult for them to gain the attention and support of policymakers. These challenges have negatively affected many well-respected policy analysis organizations in recent years.

Limits on legislative service, commonly called term limits, have fundamentally changed the policy environment in many states. These limitations, which prohibit elected officials from serving in the state legislature for more than a specified number of years, are in effect in 15 states (NCSL 2015a). These restrictions have typically been created through citizen initiatives, generally out of a desire to increase the diversity of candidates and a belief that long-serving legislators have too much power and/or are under the control of special interests (Bowser 2005). These limits vary by state, with six states instituting lifetime limits on service—the most restrictive being Michigan, with six years for its house and eight for its senate (NCSL 2015a). While research on the impact of term limits is limited, it is generally agreed that they reduce institutional knowledge and professionalism in legislatures (Squire 2012; Percival 2015).

Mirroring the national trend, the political environment of many states has also become more ideologically polarized over time (Shor 2014). Many states have also experienced changes in party control of their executive and legislative branches in recent years (NCSL 2015a). Collectively, these trends can make it harder for state policy analysts to build and maintain relationships with influential policymakers. Overall, these changes have had significant implications for policy analysis organizations, which can find themselves in much less supportive political environments. While policy analysis offices have a mission of ‘speaking truth to power’ as neutral and objective honest brokers, several have reported that they now operate in a ‘shoot the messenger’ environment in which they become suspect to interest groups and political leaders who wish to receive only analysis results that support their desired actions (Radin 2006; Denhardt 1981).

Florida illustrates how these challenges can play out. Term limits became effective there in 2000 and resulted in growing deference by the legislature to the governor in policy and budget decisions as experienced legislators were increasingly forced out of office. Beginning in the mid 2000s, the governor repeatedly proposed eliminating OPPAGA after it had issued several reports critical of his policy proposals. The state’s term limits took full effect in 2010, when the last of the long-serving members (those who had moved to the Senate in the early 2000s after being termed out of the House of Representatives) were forced to leave office. The 2011 Legislature substantially reduced funding for its research units, markedly changed the environment in which they work, and eliminated some units altogether. OPPAGA’s budget was cut by approximately 25% and its staffing was reduced from 80 to 48 analyst positions. OPPAGA’s organizational independence was also significantly curtailed: the requirement that its director be confirmed by the full legislature for a four-year term was eliminated, and the director now serves at the pleasure of the Speaker of the House and the President of the Senate. Much of OPPAGA’s work is no longer publicly available but is transmitted directly to legislative
leadership and is exempt from public records disclosure. The Auditor General, which conducts financial audits, experienced a similar staffing cut, and its state agency oversight duties were reduced, with the frequency of routine operational audits of agencies decreased to once every three years. Two other legislative analysis offices were defunded—the Technology Review Workgroup, which examined the state’s information technology systems, and the Legislative Committee on Intergovernmental Relations, which examined the impact of state policy on local governments (NCSL 2010; Roth 2015).

Other well-regarded state policy analysis units have experienced similar setbacks in recent years. For example, the Oregon Progress Board, which tracked progress towards the economic, social, and environmental goals established in the state’s strategic plan, was eliminated in 2009 (Weber et al. 2014), and the Kentucky Long-Term Policy Research Center, which examined policy issues facing that state, was defunded by the state’s legislature in 2010 (Lexington Herald-Leader 2010). Illustrating the political nature of such actions, the Ohio Legislative Office of Education Oversight was eliminated in 2005, reportedly after an incoming House Speaker objected to a report the office had issued on charter schools, an issue of personal interest to the new presiding officer (Zajano 2005).
Whither policy analysis in the states?

State-level policy analysts are facing much more complex environments than they did in the past. In the field’s infancy, there was a general consensus that more information and greater transparency were needed to improve policy making, and that analysts who faithfully filled this information gap would be heard and supported. The experiences of the past 40 years have lifted the veil of naïveté on which this perspective was based, but have also provided much greater insight into the roles that analysts can successfully play within the policy process.

Strategic use of these insights is increasingly important because the environment in which state policy analysts must operate is becoming more challenging. Many analysts now face legislative term limits that erode institutional memory and capacity, much greater political polarization that focuses on winning policy debates with little concern for protecting independent sources of information, and increased competition from other sources of often ideologically based policy research. State policy analysis offices operate in minefields in which missteps can be fatal. They must be able to quickly prove their worth as honest information brokers to new actors who rapidly shift in and out of political positions, and they can easily be seen as political impediments to be removed if the information they provide is unwelcome. Moreover, where they once were among the only sources of analysis available to state policymakers, state policy analysts now operate in a crowded analytical space, where information is available from many sources including federal and state agencies, university think tanks, and national and state advocacy groups (Radin 2000; Hill 1997), as well as a growing discordant stew of new media websites, bloggers, and opiners that often have little regard for facts.
The growing diversity in policy analysis providers may be viewed in alternative ways—as a positive outcome that supports the democratic process in the states by enabling groups of citizens to compete in the policy arena, or as an outsourcing of policy analysis to interest groups. A meta-analysis of the role of think tanks in the policy process (Results for Development 2014) found that think tanks encourage demand for policy alternatives and support open, democratic political systems. However, others have argued that many of these outside groups have explicit policy agendas and blur the line between research and advocacy, resulting in disputes over both facts and conclusions as well as contributing to ideological polarization (Radin 1997).

At least three alternative futures for policy analysis in the states appear feasible. In the first scenario, the current status quo would continue, with state-level policy analysis provided both by a rich mixture of organizations within the governments and by a wide range of external entities, including university research centers and advocacy-oriented think tanks. In this future, policy analysts working within state governments would continue to experience the challenges created by messy politics; their ride would be bumpy at times, but their offices would generally survive in something closely resembling their current form. As exemplified by New Mexico’s LFC, such offices would need to be flexible and politically sensitive, continually seeking to build trust, understanding, and reciprocity with key stakeholders while preserving their independence and ability to ‘speak truth to power’ (Radin 1997).

In a second scenario, the current negative trends would continue and policy analysis offices within states would be increasingly marginalized and politicized. As exemplified by the experiences in Florida, Kentucky, and Oregon, many internal policy research offices may find their role increasingly limited to providing short-term and descriptive research that is politically non-threatening (Hird 2005); substantive analysis would be largely outsourced to favored external entities that are trusted to deliver the conclusions desired by political elites. In this future, policy analysts would often find themselves providing justifications for the preferred policy choices of leaders rather than serving as trusted advisors who provide objective information to inform those choices.

In a third scenario, policy analysis within states would survive but evolve into new forms. In this future, the mission of policy analysis units would broaden to include serving as knowledge curators in addition to their traditional function as research generators. This new role would seek to address one of the inherent challenges facing policy analysts—the decision makers they serve often require quick answers to many questions, yet original research to address these matters is very time-intensive. While a huge amount of information that could help to inform these policy choices can be quickly accessed via the web, much of this material is of dubious quality, including, unfortunately, many published policy analysis studies. To address this problem, policy analysts can use resources developed by a growing number of research intermediaries to curate relevant materials and quickly provide valid and useful information to decision makers.
These intermediaries include a large number of research clearinghouses that have been created to conduct systematic reviews of available research and develop lists of ‘what works’—those policy choices that rigorous analyses and evaluations have found to be effective. The clearinghouses include entities such as CrimeSolutions.gov, operated by the U.S. Department of Justice; Blueprints for Healthy Youth Development, operated by the University of Colorado, Boulder; and the Coalition for Evidence-Based Policy. Using structured protocols, the clearinghouses assess and rate interventions based on the level of rigorous evidence that exists about the outcomes they generate. The Pew-MacArthur Results First Initiative, a partnership of The Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation, has further consolidated this information by creating a centralized database that enables policy analysts to quickly identify the ratings issued by eight clearinghouses that examine social policy interventions. The Results First Initiative has also developed a benefit-cost model that enables analysts to use this evidence base to predict the long-term return on investment that their governments could achieve by funding various social programs.

Such curation resources can enable state policy analysts to expand the range of services they can provide to their policymakers in relatively short time frames. For example, legislative and executive branch policy analysis offices in 20 states, including the New Mexico LFC, have used tools provided by Results First to analyze the effectiveness of their states’ investments in criminal justice and other social program areas. New Mexico has used the LFC’s analyses to inform decisions to direct over $80 million to evidence-based programs that were predicted to generate strong returns on investment for state residents (VanLandingham and White 2017). The ability to provide such analysis has contributed to LFC’s strong reputation as well as the ongoing support it enjoys from policymakers. Other state policy analysis organizations may wish to adopt such a curation function to help them address the challenging environments in which they operate.

References

Americans for Prosperity (2015) ‘States’, http://americansforprosperity.org/.

Behn, R. (2014) The PerformanceStat Potential: A Leadership Strategy for Producing Results, Washington, DC: Brookings Institution Press.

Berry, F. and Berry, W. (2007) ‘Innovation and Diffusion Models in Policy Research’, in P. A. Sabatier (ed), Theories of the Policy Process (2nd edn), Boulder, CO: Westview.

Bowser, J. (2005) ‘The Effects of Legislative Term Limits’, in The Book of the States 2005, Lexington, KY: The Council of State Governments, 114–15.

California Environmental Protection Agency (2015) ‘Cap-and-trade program’, www.arb.ca.gov/cc/capandtrade/capandtrade.htm.

Carter-Yamauchi, C. (2014) ‘The History and Evolution of Legislative Committee Systems’, RACCS Newsletter, Fall 2014, Denver, CO: NCSL.
Center for Evidence-Based Policy (2015) ‘About Us’, www.ohsu.edu/xd/research/centers-institutes/evidence-based-policy-center/evidence/index.cfm.

Coburn, D. and deHaven-Smith, L. (2002) Florida’s Megatrends—Critical Issues in Florida, Gainesville, FL: University of Florida Press.

Council of State Governments (2010) The Book of the States 2010, Lexington, KY: The Council of State Governments.

Denhardt, R. (1981) ‘Towards a Critical Theory of Public Organization’, Public Administration Review 41(6): 633.

FactCheck.org (2015) ‘Americans for Prosperity’, http://www.factcheck.org/2014/02/americans-for-prosperity-3/.

Florida Office of Program Policy Analysis and Government Accountability (2009) Business Plan 2009–10, Tallahassee, FL: OPPAGA.

Florida Office of Program Policy Analysis and Government Accountability (2010) ‘Reports, 2010’, www.oppaga.state.fl.us/ReportsYearList.aspx?yearID=22.

Florida Office of Program Policy Analysis and Government Accountability (2015) ‘About OPPAGA’, www.oppaga.state.fl.us/shell.aspx?pagepath=about/about.htm.

Florida State University LeRoy Collins Institute (2015) ‘Welcome to the LeRoy Collins Institute’, http://collinsinstitute.fsu.edu/.

Government Research Association (2015) ‘About GRA’, www.graonline.org/about-the-governmental-research-association/.

Hird, J. (2005) Power, Knowledge, and Politics: Policy Analysis in the States, Washington, DC: Georgetown University Press.

Jennings, E. Jr. and Hall, J. (2015) ‘State Agency Attention to Scientific Sources of Information to Guide Program Operations’, in A. Shillabeer, T. Buss, and D. Rousseau (eds), Evidence-Based Public Management: Practices, Issues, and Prospects (1st edn), Armonk, NY: M.E. Sharpe.

Justice Research and Statistics Association (2015) ‘Statistical Analysis Centers’, www.jrsa.org/sac/.

Kraft, M. and Furlong, S. (2007) Public Policy: Politics, Analysis, and Alternatives (2nd edn), Washington, DC: CQ Press.

Kurtz, K. (2010) ‘75 Years of Institutional Change in State Legislatures’, in The Book of the States 2010, Lexington, KY: The Council of State Governments, 85–91.

Lester, J. (1993) ‘The Utilization of Policy Analysis by State Agency Officials’, Science Communication 14(3): 267–90.

Lexington Herald-Leader (2010) ‘State Research Center’s Demise Comes as Surprise’, April 7.

Lieske, J. (1993) ‘Regional Subcultures of the United States’, Journal of Politics 55(4): 888–913.
Mayne, J., Divorski, S., and Lemaire, D. (1999) ‘Locating Evaluation: Anchoring Evaluation in the Executive or the Legislature, or Both or Elsewhere?’, in R. Boyle and D. Lemaire (eds), Building Effective Evaluation Capacity: Lessons from Practice, New Brunswick, NJ: Transaction Publishers, 38.

Moncrief, G. and Squire, P. (2013) Why States Matter: An Introduction to State Politics, Plymouth, UK: Rowman and Littlefield.

Mushkin, S. (1977) ‘Policy Analysis in State and Community’, Public Administration Review 37(3): 245–53.

National Association of Sentencing Commissions (2015) ‘Sentencing Commission/Council Contact List’, http://thenasc.org/images/NASCComCouncilContactList20150807.pdf.

National Association of State Budget Officers (2015) ‘2015 Budget Processes in the States’, https://higherlogicdownload.s3.amazonaws.com/NASBO/9d2d2db1c943-4f1b-b750-0fca152d64c2/UploadedImages/Reports/2015%20Budget%20Processes%20-%20S.pdf.

National Conference of State Legislatures (2015a) ‘Ensuring the Public Trust 2015: Public Policy Evaluation’s Role in Serving State Legislatures’, Denver, CO: NCSL.

National Conference of State Legislatures (2015b) ‘Size of State Legislative Staff—1979, 1988, 1996, 2003, 2009’, www.ncsl.org/research/about-state-legislatures/staff-change-chart-1979-1988-1996-2003-2009.aspx.

National Legislative Program Evaluation Society (2015) ‘NLPES Awards Program’, www.ncsl.org/legislators-staff/legislative-staff/program-evaluation/nlpes-awards.aspx.

New Mexico Legislative Finance Committee (2014) Cost-Effective Options for Increasing High School Graduation and Improving Adult Education, Santa Fe, NM: LFC.

New Mexico Legislative Finance Committee (2015a) ‘About LFC’, www.nmlegis.gov/lcs/lfc/lfc_about.aspx.

New Mexico Legislative Finance Committee (2015b) Children, Youth and Families Department, Key Quarterly Performance Measures Report, 1st Quarter 2015, Santa Fe, NM: LFC.

Osborne, D. (1990) Laboratories of Democracy, Boston, MA: Harvard Business School Press.

Percival, D. (2015) Smart on Crime: The Struggle to Build a Better American Penal System, Boca Raton, FL: CRC Press.

Radin, B. (1997) ‘Presidential Address: The Evolution of the Policy Analysis Field: From Conversation to Conversations’, Journal of Policy Analysis and Management 16(2): 204–18.

Radin, B. (2000) Beyond Machiavelli: Policy Analysis Comes of Age, Washington, DC: Georgetown University Press.

Radin, B. (2006) ‘Professionalization and Policy Analysis: The Case of the United States’, in H. K. Colebatch (ed), The Work of Policy: An International Survey, Lanham, MD: Lexington Books.
Results for Development (2014) Linking Think Tank Performance, Decisions, and Context, Washington, DC: Results for Development.

Rich, A. (2006) ‘State Think Tanks’, database from Andrew Rich, received via personal communication, April 3.

Roth, T. (2015) Personal interview, October 10.

Samuels, R. (2015) ‘Walker’s Anti-union Law has Labor Reeling in Wisconsin’, Washington Post, February 26.

Shor, B. (2014) ‘How U.S. State Legislatures Are Polarized and Getting More Polarized (in 2 Graphs)’, Washington Post, January 14, www.washingtonpost.com/blogs/monkey-cage/wp/2014/01/14/how-u-s-state-legislatures-are-polarized-and-getting-more-polarized-in-2-graphs/.

Squire, P. (2012) The Evolution of American Legislatures: Colonies, Territories, and States 1619–2009, Ann Arbor, MI: The University of Michigan Press.

The Pew Charitable Trusts (2017) ‘Where We Work’, www.pewtrusts.org/en/projects/pew-macarthur-results-first-initiative/where-we-work.

U.S. Public Interest Research Group (2015) ‘About Us’, www.uspirg.org/page/usp/about-us.

VanLandingham, G. (2006) A Voice Crying in the Wilderness—Legislative Oversight Agencies’ Efforts to Achieve Utilization, unpublished doctoral dissertation, Florida State University, Tallahassee, FL.

VanLandingham, G. (2011) ‘Escaping the Dusty Shelf: Legislative Oversight Agencies’ Efforts to Promote Utilization’, American Journal of Evaluation 32(1): 85–97.

VanLandingham, G. and White, D. (2017) ‘New Mexico Results First: Lessons From a Long-Term Evidence-Based Policymaking Partnership’, in J. Owen and A. Larson (eds), Researcher–Policymaker Collaboration: Strategies for Launching and Sustaining Successful Partnerships, New York, NY: Routledge.

Vockrodt, S. (2015) ‘Brookings Institution: Kansas Income Tax Cuts Were Not a Good Idea’, http://www.pitch.com/news/article/20562140/brookings-institution-kansas-income-tax-cuts-were-not-a-good-idea.

Washington State Institute for Public Policy (2015) ‘Benefit Cost Results’, www.wsipp.wa.gov/BenefitCost.

Weber, B., Worcel, S., Etuk, L., and Adams, V. (2014) Tracking Oregon’s Progress: A Report of the Tracking Oregon’s Progress (TOP) Indicators Project, Corvallis, OR: Oregon State University.

Weimer, D. and Vining, A. (2011) Policy Analysis: Concepts and Practice (5th edn), New York: Routledge.

White, D. and VanLandingham, G. (2015) ‘Cost-benefit Analysis in the States: Status, Impact, and Challenges’, Journal of Cost-benefit Analysis 6: 369–99.

Zajano, N. (2005) Personal interview, March 15.
SEVEN
Policy analysis and evidence-based decision making at the local level

Karen Mossberger, David Swindell, Nicholet Deschine Parkhurst, and Kuang-Ting Tai

Since the municipal reform movement at the turn of the 20th century, data and analysis have been tools for change and innovation at the local government level. With the advent of new technologies and sources of data, local governments are actively exploring how to use these effectively for evidence-based decision making. While there is a lack of systematic research on how policy analysis is being used at the local level, local governments have some special opportunities for experimentation and learning in this area.

Local governments deliver many core services in the United States, ranging from public safety to human services delivery. They are also engaged in many different policy areas, including land use planning for purposes of economic development, health and safety, and even morality policy (Sharp 2005). As the level of government closest to citizens, local governments may involve citizens in processes for collecting, sharing, and discussing data and analysis. So, while states are often thought of as laboratories of democracy in the federal system, as VanLandingham notes in his chapter in this volume, cities represent an even broader range of experimentation, with varied tax and service preferences as well as managerial and governance approaches nested within the frameworks of their home states.

While this chapter highlights the uses of the traditional tools of analysis at the local level, we will also consider performance management data, open data, and predictive analytics using ‘big data.’ After examining some case studies demonstrating local applications of policy analysis and data, we consider how these recent trends may influence the use of analysis at the local level.
Context for local policy making in the United States

In order to understand the use of policy analysis and evidence at the local level, we first offer a basic description of the policy and institutional landscape. Substantial variation in policy responsibilities and capacities exists across local governments, based on several factors: type of government, state law, and city size. Local government is an important provider of services in many policy areas in the United States, accounting for just under one third of non-defense spending (Wolman 2012). This includes public safety, housing and community amenities, education, transportation, recreation, and culture as domains where
local government spending accounts for a significant portion of total public expenditures; but local governments are also engaged in many other policy areas as well (Wolman 2012). Because local governments in the United States depend to a great extent on own-source revenues (in comparison with other developed nations), they also have strong incentives to undertake local economic development, encouraging investment and employment within their boundaries (Wolman 2012; Mossberger and Stoker 2001).

In the 2012 Census of Governments, there were 90,056 local governments in the United States. This total included approximately 3,000 counties, 20,000 municipalities, and 16,000 townships, all of which are general-purpose local governments with diverse policy responsibilities. In terms of special-purpose governments, the United States was home to approximately 13,000 independent school districts. Additionally, there were 38,000 special districts that focused on a single policy area, which could be economic development, transportation, parks, sewer-sanitation, water conservation, mosquito abatement, or library districts, to name but a few (Hogue 2013).

The 2012 Census of Governments also shows that the numbers of local governments vary considerably by state. Illinois is home to almost 7,000 local governments while Hawaii has only 21 (the national average is 1,801 per state). Controlling for population shows considerable variance as well. North Dakota turns out to love local governments quite a bit, with 383.8 units of local government per 100,000 population, while Hawaii has the fewest with only 1.5 units per 100,000 population. While local government employment far outstrips federal employment, this varies significantly, from a high of 286 local government employees per 10,000 population in Wyoming to a low of 63 per 10,000 population in Delaware. Local governments, therefore, vary greatly in size and capacity.

This array of local governments exists within a constellation of what are essentially 50 systems of local government, regulated by state laws that affect the structures, powers, and finances of local entities (Frug and Barron 2008). Municipalities and counties exist throughout the United States (whereas towns and townships tend to cluster in New England and the Midwest). While a court decision in 1868 known as Dillon’s Rule established that local governments are created and regulated by the states, state governments often allow a fair amount of discretion to local governments for policy decisions. Dillon’s Rule mandates that local governments can only exercise authority specifically granted by state statute. But in states that have adopted ‘home rule’ legislation, eligible local governments can undertake activities as long as they are not specifically prohibited from doing so by state law (Krane et al. 2001; Frug 1980). The majority of states—36 of 50—provide for some level of home rule, most often for municipalities (Richardson et al. 2003, cf. Kubler and Pagano 2012), especially in states with larger populations.

Historically, the municipal reform movement at the turn of the 20th century introduced the council-manager form of government to promote professional management and to counter corrupt machine politics at the local level. Today,
there are varied forms of local government in the United States, even within a single state. The council-manager form has a professional chief executive appointed by an elected city council, modeled after corporate forms of governance (Judd and Swanstrom 2015). In mayor-council cities, the elected mayor is the chief executive. In 2011 council-manager governments accounted for 54% of municipalities with populations between 5,000 and 249,999. However, they comprised only about 38% of cities over 250,000 (ICMA (International City/County Management Association) 2011). There has also been some blending of models over time, with some mayor-council governments providing positions for chief administrative officers (Frederickson et al. 2004). Growing professionalization of local government management has increased the capacity to utilize analyses and evidence.

There are also a number of professional organizations that spread ideas across local governments and their officials, including groups of mayors, managers, cities, counties, and townships. These are represented by such organizations as the National League of Cities, the International City/County Management Association (ICMA), the National Civic League, the Alliance for Innovation, and the Engaging Local Government Leaders organization. Furthermore, several recent developments have emerged among funding entities focusing on evidence-based decision making at the local level of government, including initiatives at the Arnold Foundation and Bloomberg Philanthropies’ ‘What Works Cities’ effort.

As the above description demonstrates, local government is a complex and varied phenomenon in the United States, embedded within a decentralized system of federalism with varying degrees of autonomy for cities. This means that the functions, responsibilities, and capacities of local government vary. Adoption of new practices often depends on voluntary diffusion rather than federal or state requirements. Yet there have been organizations supporting the adoption of analytical and data-driven practices, and some local governments have been at the forefront in promoting such uses.
Historical development of evidence-based policy making at the local level

The rise of council-manager government at the beginning of the 20th century focused on core values of professionalism and efficiency (Wollman and Thurmaier 2012). Concurrent with the city manager movement was the establishment of new municipal research organizations to promote the spread of ‘scientific management’ based in part on the collection of data to inform policy and practice. The most well-known example was the New York Bureau of Municipal Research, which influenced public administration as a field (Stivers 1995, 1997; see also Weimer and Peters in this volume). Municipal research bureaus spread to other cities as well, and the empirical methods and training programs spread through teaching in public administration at universities (McDonald 2010). The federal government supported the Advisory Commission on Intergovernmental Relations (ACIR) for almost three decades to help identify promising local government practices.
Observation, data collection, and analysis were important to these organizations’ efforts to affect management and policy. While the municipal reform movement set the stage for the use of analysis and data, the specific techniques used today in policy analysis spread to local governments in the 1960s and 1970s. Ideas often diffuse across local and state governments because of federal incentives or emulation of federal programs (Eyestone 1977), and this was the case with policy analysis. The use of the Planning-Programming-Budgeting System (PPBS) by the federal government in the 1960s encouraged the adoption of policy analysis at the local level (see Weimer and Peters in this volume for a history of PPBS at the federal level). Initially, local governments employed policy analysis in budgeting even as the use of PPBS declined. Federal grants during the 1960s, such as the Community Renewal Program, required some use of systematic analysis by local governments (Kraemer 1973; Mushkin 1977).

Diffusion also occurs as adopters take cues from leading innovators and professional associations (Walker 1969; Balla 2001; Mossberger 2000; Shipan and Volden 2012). Larger cities, notably New York and Los Angeles, were early proponents of policy analysis, working with think tanks such as Rand (Kraemer 1973). Professional associations, such as ICMA, also promoted the use of policy analysis by setting professional standards and norms through their publications and national training (see as an example the ICMA publication by Kraemer 1973). Even with these influences, however, one survey conducted by ICMA in 1971 reported that only 16% of local governments with populations between 5,000 and 100,000 had staff units doing analytical work (Mushkin 1977).

There are other sources for local policy analysis, however. Local governments rely on universities or consultants to fill many of their analytic needs, at least on major issues. For example, many cities hire university researchers to help project the economic impact of large-scale public investments (see Rosentraub and Swindell 2009). Other local actors, including unions, chambers of commerce, local think tanks, or advocacy groups, may also conduct analysis to persuade policymakers (or voters in the case of referenda). For example, there have been multiple analyses of pension reforms in Phoenix, with differing views of the costs and benefits of ballot proposals from locally based think tanks and university research institutes (Chieppo 2013; Reilly 2016; Gilroy et al. 2016). Ex post evaluations also may be funded or undertaken by local organizations, such as Duke Power contracting for an external study measuring the satisfaction, economic impact, and market impact of hosting the 2012 Democratic National Convention in Charlotte, North Carolina (Heberlig et al. 2017). Analysis that originates outside of local governments may be formally or informally used by local officials, by the media, and by others in policy debates.

Use of policy analysis at the local level may also take advantage of resources outside of the locality. National think tanks produce research and policy analysis with implications for local governments, including some think tanks focused on urban policy and local government, such as the Urban Institute that President
Policy analysis and evidence-based decision making at the local level
Johnson founded in 1968 to produce research on federal antipoverty programs. In subsequent years, the Urban Institute has conducted research on tax policy, lowincome housing, health care reform, welfare reform, and urban change, among other topics (Urban Institute n.d.b). Another national research organization geared toward local issues is the Brookings Institution’s Metropolitan Policy Program, which makes recommendations on policies related to regional economic growth, innovation, and demographic change. Professional associations also distribute findings from research and policy analysis to their members, such as ICMA and the Alliance for Innovation (discussed in more detail in the following sections). Other professional associations disseminate research as well. The National League of Cities offers applied research and analysis on a number of topics, including economic development, finance, governance, housing, infrastructure, the sharing economy, sustainability, and urban development (NLC (National League of Cities) 2016). The National Association of Counties includes on its website research on transportation and infrastructure, justice, civic engagement, community and economic development, health, and more (National Association of Counties 2015). All four of these organizations are dues-based organizations in which individuals or jurisdictions join and pay a fee. Other organizations have corporate sponsors to support their efforts. One example of this model is the Smart Cities Council, which advocates for cities to adopt smart technologies in order to improve service delivery and efficiencies (Smart Cities Council, n.d.). They produce research to help cities and provide these through multiple networks, conferences, and webinars. In addition to internal expertise, professional associations, and think tanks, many cities have built close working partnerships with local universities. Sometimes, these relationships are at an individual level with specific faculty members. Other institutions have built research centers dedicated to working with local governments in a wide range of capacities and policy domains. Arizona State University is home to the Center for Urban Innovation. The University of Pennsylvania has the Institute for Urban Research. Rutgers University is home to the Center for Urban Policy Research. These represent only a small sample of the university-based research centers bridging academic research with the applied needs of practitioners trying to solve local problems. Still, significant challenges remain for closing the gap between university-based expertise and local government needs (Gordon 2016; Wang, Bunch, and Stream 2013).
Data-driven policy making

Professional associations, philanthropies, and think tanks have promoted greater use of data for decision making. Local governments now confront more information and evidence than ever before, with performance data, open data, and big data. How is this related to the traditional practice of policy analysis, or to new forms of analysis at the local level?
Performance management has expanded in local governments over the past few decades (Ejersbo and Svara 2012). Contracting out and managed competition have increased demands for measuring outputs and outcomes (Swindell and Kelly 2005). Calls for 'reinventing government' and more strategic management in the 1990s likewise advocated measuring results at the local level (Ejersbo and Svara 2012; Osborne and Gaebler 1992). Performance measurement has been a focus of the Urban Institute (Urban Institute n.d.a), and ICMA created its Center for Performance Analytics (ICMA n.d.) to provide technical assistance for local governments to upload and analyze standardized performance data. Advice on performance management is also available through other professional associations, such as the National League of Cities (NLC 2014) and the Government Finance Officers Association (Government Finance Officers Association 2016).

While the aims of performance measures are often managerial, the data collected may have implications for larger debates about programs and policies. As Radin points out in her chapter, the need to implement programs prevents the drawing of sharp boundaries between management and policy (see also Moynihan 2014). Mayors recognize the need for this information too. The National League of Cities conducted a content analysis of 100 mayoral 'state of the city' speeches in 2016 and found that '[M]ayors are helping their cities see the value of using technology and data to drive decisions and make their city governments more efficient and effective' (Langan et al. 2016, p 1).

In recent years, some cities have put their performance data online for citizens as well, often in the form of dashboards or open data portals. Open data refers to governmental information collected by public agencies that is available to the public without restrictions (Janssen et al. 2012; Veljković et al. 2014; Meijer et al. 2014). Just as policy analysis initially spread at the local level due in large part to federal initiatives, the open data movement accelerated after the Obama administration's 2009 Memorandum on Transparency and Open Government, which required federal agencies to make diverse datasets available online (White House 2009). Portland and San Francisco were local government pioneers in this area at the end of 2009, but by November 2015, 49 of the 100 largest cities had some type of open data portal (Tai and Mossberger 2015). Chicago, for example, has an extensive open data portal with more than 1,000 datasets, including performance metrics (City of Chicago, n.d.). The most accessed datasets include information on city employees and salaries, crime, building permits, affordable housing developments, and business licenses.1 The site allows users to search and filter data, and to visualize it with charts or maps. Portals vary widely in the information provided and their usability (Robinson et al. 2010; Tai and Mossberger 2015).

The literature on open data has discussed uses by citizens and software developers, but one question that has not been raised is whether the greater availability of data enhances policy analysis at the local level. Open data offers local governments a new resource for comparative analysis of data and policy options across departments and across cities. It is also a source of data for civic organizations, local universities, and others who want to conduct useful analyses of local issues. Bloomberg Philanthropies (headed by the former New York City mayor) has launched a national network called 'What Works Cities' to 'accelerate cities' use of data and evidence' to 'improve services, inform local decision-making and engage residents' (Bloomberg Philanthropies 2016). Assistance and best practices for open data are prominent topics in the network.

Some cities have turned to big data and predictive analytics to identify trends and solutions for managing traffic, crime, neighborhood services, and more. Big data has been defined as 'large, diverse, complex, longitudinal and/or distributed datasets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available' (White House 2014, pp 2–3). While big data use has become common in the private sector, utilization in the public sector is more recent and limited (DeSouza 2014).
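For analysts, the practical significance of these portals is that most expose a web interface for programmatic access in addition to browsable pages; Chicago's portal, for instance, is built on Socrata and serves records through its SODA interface. The following is a minimal sketch of pulling records from such an endpoint; the dataset identifier and the column name in the filter are hypothetical placeholders, not references to an actual dataset.

    # Minimal sketch: query a Socrata-style open data portal (SODA interface).
    # The dataset ID and the 'issue_date' column are hypothetical placeholders;
    # real identifiers are listed on each city's portal.
    import requests

    BASE = "https://data.cityofchicago.org/resource"
    DATASET_ID = "xxxx-xxxx"  # hypothetical: replace with a real dataset ID

    resp = requests.get(
        f"{BASE}/{DATASET_ID}.json",
        params={
            "$limit": 1000,                         # SODA paging parameter
            "$where": "issue_date > '2015-01-01'",  # filter on a hypothetical column
        },
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()  # a list of dicts, one per row
    print(f"Fetched {len(records)} records")

A civic group or university researcher could feed the resulting records directly into a comparative analysis across departments or cities, which is precisely the kind of use the open data literature has so far left underexplored.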
Contemporary policy analysis practices among local governments

Research focused on policy analysis at the local level is limited. Much of the existing literature emerged in the 1970s, as policy analysis was beginning to spread through local governments (Schneider and Swinton 1979; Mushkin 1977; Kraemer 1973; Dror 1976). To develop a better understanding of local government use of policy analysis, data, and partnerships today, ASU's Center for Urban Innovation joined with ICMA and the Alliance for Innovation over the summer of 2016 to conduct two studies of local government administrators. The larger survey targeted a sample of 603 local managers on a range of emerging innovation issues (hereafter the Innovation Survey). Municipal governments accounted for 81.3% of Innovation Survey respondents and county governments for 18.7%. Their jurisdictions ranged in population from 69 up to 3.8 million. The smaller project interviewed 56 local administrators from among the approximately 400 member jurisdictions of the Alliance regarding their use of policy analysis, data, and analytical partnerships (hereafter the Policy Survey).

The smaller Policy Survey had the advantage of addressing a range of analytical tools and data, asking the 56 city managers: '… thinking about the last two years, how often does your organization utilize information from any of the following tools?' The survey provided nine options for respondents to score on a four-point scale: never, rarely, sometimes, and often. Figure 7.1 illustrates the percentage of respondents who indicated 'sometimes' or 'often'. The most commonly used tool from this list is performance measurement (94%). In conversations with the respondents (reinforced in the literature), public sector use of performance measurement tends to be primarily for internal budgeting and planning. At the other end of the scale, only 38% of the managers indicate that they utilize 'big data' as a tool in their organizations. Given that these respondents were sampled from the membership of the Alliance for Innovation, self-selecting into that organization as managers committed to cutting-edge techniques, this low usage rate of big data is somewhat surprising. It suggests that usage among the wider array of jurisdictions (those without such innovation proclivities) is likely even lower. Overall, however, the use of various forms of data and analysis was common among Alliance for Innovation respondents, with over 70% reporting that they used each of the tools other than big data at least sometimes.
Figure 7.1: Frequency of policy analysis tool usage (% sometimes or often)
[Bar chart on a 0–100% scale showing usage rates for nine tools: 'big data', formal program analysis, citizen surveys, formal policy analysis, cost-effectiveness analysis, economic impact analysis, benchmarking, cost-benefit analysis, and performance measurement]
Source: Policy Survey, Alliance for Innovation members.
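The percentages in Figure 7.1 collapse the survey's four-point scale into a binary indicator (at least 'sometimes'). A minimal sketch of that computation follows, using hypothetical responses and column names rather than the survey's actual data or code.

    # Sketch: share of respondents answering "sometimes" or "often" per tool.
    # The responses below are hypothetical, for illustration only.
    import pandas as pd

    # One row per manager, one column per tool.
    responses = pd.DataFrame({
        "performance_measurement": ["often", "sometimes", "often", "rarely"],
        "big_data": ["never", "rarely", "sometimes", "never"],
    })

    share_using = (
        responses.isin(["sometimes", "often"])  # True where usage is at least "sometimes"
        .mean()                                 # column-wise proportion of True values
        .mul(100)                               # convert to percentage
        .round(0)
    )
    print(share_using)  # e.g., performance_measurement 75.0, big_data 25.0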
The results in Figure 7.1 illustrate usage of these tools rather than the sources of policy or data analysis. The Policy Survey of city managers explored external partners and sources as well as internal capacity. The survey first asked respondents: 'Thinking back over the past two years, have you utilized any of the following resources to help inform you on a policy or program issue confronting your organization?' Respondents answered on a four-point scale: not at all, to a very limited extent, to a fair extent, or to a large extent. The survey then asked the same question (with the same answer set) but focused on organizational or management decisions confronting their organizations. Figure 7.2 highlights the results for these two questions.

The results illustrate that Alliance managers rely most heavily on internal staff for research on both policy and management decisions. Notably, professional associations are the second most relied-on resource for management decisions and third for policy issues. While between one quarter and one third of respondents turned to universities as a resource in one or both of these areas, 15.4% say they never use universities as a resource on policy issues and 27.5% say they never use universities for managerial decisions. These figures are nearly identical for managers who never use academic journals. This highlights untapped opportunities for universities to help their neighboring communities address real issues confronting them.
Figure 7.2: Resource utilization for decisions (% to a fair or large extent)
[Paired bar chart on a 0–100% scale comparing management decisions and policy issues across seven resources: think tanks, academic journals, local universities, private consultants, professional associations, other government agencies, and internal staff]
Source: Policy Survey, Alliance for Innovation members.
A critical issue is the use of analysis and data in decision making. The Policy Survey also asked city managers: 'When you have used information from these sources, to what extent did the information from that source typically influence the outcome of a policy or management decision?' Figure 7.3 illustrates the results for those who use the various sources. Again, and unsurprisingly, information provided by internal staff was not only the most commonly used but also the most influential. Influence among the remaining sources follows the same pattern as Figure 7.2, with private consultants, other governments, and professional associations clustering together. Information from local universities did better relative to usage, but think tanks and academic journals exhibited the least influence on decision outcomes, perhaps because those sources were more general and not tailored to the specific contextual situations facing individual jurisdictions.

The Innovation Survey, with its broader sample, explored several aspects of how local governments 'learn' and utilize information. For instance, almost two thirds of the local administrators (63.7%) agreed or strongly agreed that their organization regularly obtains information on successful new practices from other local governments, indicating a robust network of policy and administrative practice diffusion. This result also suggests significant room for improvement: the remaining one third of organizations could be better integrated into these knowledge networks. A slightly smaller majority (57.7%) agreed or strongly agreed that their organization regularly shares information on successful new practices with other local governments.
Figure 7.3: Influence on outcome of policy/management decision (% to a fair or large extent)
[Bar chart on a 0–100% scale showing influence ratings for seven sources: academic journals, think tanks, local universities, professional associations, other governments, private consultants, and internal staff]
Source: Policy Survey, Alliance for Innovation members.
The Innovation Survey focused on four emerging or prevalent policy areas in which local governments are currently engaged: performance analytics, public engagement, regulation of the sharing economy, and infrastructure financing. Table 7.1 reports the jurisdictions whose executives utilize each of a range of potential sources of research on these policy areas. The results highlight several interesting patterns. First, on the initial three policy areas, local government executives rely most heavily on research provided by their professional associations, but they turn chiefly to external consultants and internal staff regarding infrastructure financing. International examples provide the least input, which is perhaps not surprising given the unique structure of local governance in the United States. Local government executives also tend not to rely significantly on academic publications.

Use of performance data is nearly universal among respondents in the Policy Survey of Alliance members and is a topic that the larger ICMA Innovation Survey of Local Governments addressed in some detail. The Innovation Survey asked respondents whether their jurisdiction collects performance data to help assess the quality of service provision. Only about two in five (41.7%) indicated that they do; the remaining 58.3% do not collect such data. Jurisdiction population is related to the likelihood of collecting performance data: the mean population of communities that do collect the data was 119,421, compared with only 17,855 for those that do not (t = 5.05; p
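The population comparison above is a standard two-sample test. The following is a hedged sketch of that kind of computation, applying Welch's t-test to illustrative numbers rather than the survey's actual data.

    # Sketch: two-sample t-test on jurisdiction population, grouped by whether
    # the jurisdiction collects performance data. Values are illustrative only.
    from scipy import stats

    # Hypothetical population samples for the two groups of jurisdictions.
    collects = [250_000, 119_000, 80_000, 45_000, 310_000]
    does_not = [18_000, 9_500, 25_000, 12_000, 21_000]

    # Welch's test does not assume equal variances, a prudent choice given
    # how skewed city population distributions tend to be.
    t_stat, p_value = stats.ttest_ind(collects, does_not, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")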