albert hirschman’s legacy: works and discussions
4

Nicoletta Stame

Possibilism and Evaluation
Judith Tendler and Albert Hirschman
Foreword by Luca Meldolesi

Reconstructing the theoretical roots of interpretive social science, this text shows how Hirschman’s possibilism lies at the base of the original way Tendler practiced evaluation and anticipated many current developments. The continuing vitality of their thought enables us to trace the outlines of possibilist evaluation.

Albert Hirschman affirmed that “Judith Tendler’s fine insights into the differential characteristics and side-effects of thermal and hydropower, and of generation and distribution, contributed in many ways to the formation of my views.” Judith Tendler, in turn, wrote that Hirschman had taught her “to look where I never would have looked before for insight into a country’s development,” and that in Albert’s work a researcher who was “patient enough” would find “a rich complexity of both success and failure, efficiency alongside incompetence, order cohabiting with disorder.”

Sociologist Nicoletta Stame (PhD, State University of New York and Catania University) is emerita professor at Sapienza University of Roma, past president of the European Evaluation Society and Associazione Italiana di Valutazione, a member of the editorial board of Evaluation, and vice-president of A Colorni-Hirschman International Institute.

www.peterlang.com
ADVANCE PRAISE FOR
Possibilism and Evaluation: Judith Tendler and Albert Hirschman “Drawing on unconventional evaluation strategies of Albert O. Hirschman and Judith Tendler, this book offers important insights regarding how to conduct evaluations which open up possibilities for public action. In contrast to the pessimism of conventional evaluations which highlight ‘constraints’ and ‘structural barriers’ to development, Stame offers a relatively optimistic approach urging continuous learning to transcend the rigid guidelines of orthodox evaluation paradigms. A must read for both development scholars and practitioners who are curious about how development efforts unfold amidst uncertainty and generate ‘surprises’ which must be culled to appreciate the complexities of the development process.” —Bish Sanyal, Ford International Professor of Urban Development and Planning and Director, SPURS/HUMPHREY Program, Massachusetts Institute of Technology
“This authoritative account by an eminent evaluation scholar reminds us why, when confronting today’s evaluation challenges, we should never forget insights from past masters. Both Hirschman and Tendler combined practice and research in ways that cast a penetrating light on problems that remain critical for evaluators today. These include complexity, causality and above all how to engage stakeholders and citizens in evaluations that aim to remain true to democratic principles. An accessible and thought-provoking read.” —Elliot Stern, FAcSS, Emeritus Professor of Evaluation Research, Lancaster University
“This book is exceptional. It does a brilliant and innovative job of taking the economic and evaluative writings from two authors, juxtaposing them, and showing how the writings of each reinforce and support the writings of the other. I recommend this book to you without reservation.” —Ray C. Rist, George Washington University; Former President, International Development Evaluation Association
“In her new book, Possibilism and Evaluation: Judith Tendler and Albert Hirschman, Dr. Nicoletta Stame gives us a fresh perspective on Tendler and Hirschman, two provocative thinkers who were unafraid to stand apart in their thinking and practice, and whose scholarship was full of prescient ideas that are still ‘leading edge’ in evaluation today. Tendler and Hirschman covered significant territory not only worth remembering, but truly inspiring for us today, and their work beckons evaluators to reflect on questions such as: How can we harness the power of Theory of Change in evaluation to break away from linear logic modeling and capture a flexible perspective that leaves room for ambiguity? How can we be open to see and study what has worked in the face of adversity? How do we represent human creativity in our evaluations? Stame shows connections between this still avant-garde work and that of evaluation scholars like Rogers, Schwandt and Patton, and with approaches and methods such as mixed methods, qualitative evaluation, Appreciative Evaluation, and others. We are grateful to Stame for bringing into the light the stimulating and robust body of work of Tendler and Hirschman, with their unflinching challenging of norms that still bind evaluators in many quarters today. The complex concept of possibilism in evaluation is an important contribution to evaluation scholarship—evaluation that invites respect of people, ethical practice, and creative insight toward what makes communities resilient and thriving.” —Tessie Tzavaras Catsambas, CEO/CFO, EnCompass LLC; Former President, American Evaluation Association
Possibilism and Evaluation
Albert Hirschman’s Legacy: Works and Discussions
Luca Meldolesi, Series Editor
Vol. 4
The Albert Hirschman’s Legacy series is part of the Peter Lang Political Science, Economics, and Law list. Every volume is peer reviewed and meets the highest quality standards for content and production.
PETER LANG New York • Berlin • Brussels • Lausanne • Oxford
Nicoletta Stame
Possibilism and Evaluation Judith Tendler and Albert Hirschman Foreword by Luca Meldolesi
PETER LANG New York • Berlin • Brussels • Lausanne • Oxford
Library of Congress Cataloging-in-Publication Control Number: 2022028057
Bibliographic information published by Die Deutsche Nationalbibliothek. Die Deutsche Nationalbibliothek lists this publication in the “Deutsche Nationalbibliografie”; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de/.
ISSN 2576-9723 (print) ISSN 2576-9731 (online) ISBN 978-1-4331-9848-9 (hardcover) ISBN 978-1-4331-9849-6 (ebook pdf) ISBN 978-1-4331-9850-2 (epub) DOI 10.3726/b19919
© 2022 Peter Lang Publishing, Inc., New York
80 Broad Street, 5th floor, New York, NY 10004
www.peterlang.com
All rights reserved. Reprint or reproduction, even partially, in all forms such as microfilm, xerography, microfiche, microcard, and offset strictly prohibited.
Table of Contents

Foreword vii
Introduction 1
1 Evaluation and development 19
2 Interpretive social science: The core of an anthology 39
3 Interpretive social science and morality 47
4 Hirschman, possibilism and evaluation 65
5 Hirschman’s production line on projects and programs 79
6 Possibilism, change and unintended consequences 93
7 Doubt, surprise and the ethical evaluator: Lessons from the work of Judith Tendler 103
Appendix A Albert Hirschman and the World Bank 125
Appendix B Remembering Judith 133
Bibliography 137
Index of Names 147
Index of Subjects 151
Foreword
During the 1960s, although sometimes dealing with what we now call evaluation issues, Albert Hirschman found it convenient to use his academic position as a pioneer in development economics to keep himself out of the evaluation debate that accompanied the expansion of U.S. international aid in Latin America. In the meantime, Judith Tendler, the only real student Hirschman had ever had, and still in touch with her teacher, was developing research relevant to the evaluation being conducted in a number of international institutions. In retrospect, it can be said that the attitude of both represented a possibilist stratagem, because it gave them a position of effective freedom in their work even as they operated close to officialdom in the field of evaluation, and this allowed them to positively influence its evolution. Furthermore, this is an intellectual legacy that can be very useful today: in the more open climate resulting from the cultural and political evolution of our times, the legacy of Judith Tendler and Albert Hirschman can be used as a source of ideas that helps us not only to better understand the various aspects of the problem of evaluation that were rediscovered later, but also to bring into focus the possibility of promising future developments.

Luca Meldolesi
Introduction
My relationship with Albert and Judith (and with their thoughts on evaluation)

In my relationship with these two great personalities—Albert Hirschman (1915–2012) and Judith Tendler (1939–2016)—there was always something that was left unsaid. From their contribution to the understanding of how change can actually come about, I have especially drawn on what might be relevant to the evaluation of development policies and programs—a practice that they considered to be of lesser importance than either their theoretical work or the actions to be evaluated. This was true even though they themselves were evaluators, Albert in particular when he evaluated the World Bank projects he spoke about in Development Projects Observed, 1967 (henceforth DPO), and Judith in her work with various international development agencies over a long period of time. One reason, I think, is this. Even though they were (de facto) great innovators in the field, they never wanted to commit themselves to a battle over evaluative approaches (or paradigms). For them it went without saying that what was meant by evaluation (and/or appraisal) was the sort of mainstream practice required by the agencies, and they considered their analytical work on development programs
and policies to be on another level altogether.1 Judith, for example, distinguished between “evaluations,” something similar to monitoring, and “studies,” which allow comparative analysis and cross-functional comparisons,2 and recommended concentrating on the latter (Tendler, 1983).3 When she was asked to make an evaluation, she made it known that on such occasions, alongside the current practice recommended by the international agencies, there was also (fortunately) an opportunity to study development. Albert distinguished (1965, p. 4) between “existing project appraisal techniques” (that measure the productivity of a project with respect to a pre-established objective) and his own way of studying linkages and unexpected consequences through the observation and comparison of projects and their characteristics. He would then specify that what he was arguing should be understood as supplementing current techniques rather than replacing them. Yet in doing so, both of them showed that evaluation could be done in a way that was different from the mainstream. In fact, starting from observation in the field, comparisons, linkages, and unexpected consequences are really key tools in a type of evaluation that works toward improvement, “for a better world.” It is what I call “possibilist evaluation.” After having worked on the Marshall Plan for the U.S. Federal Reserve Board, Albert served as “economic and financial advisor to the National Planning Council” in Colombia in 1952–53. As he recalled in “A Dissenter’s Confession,” “the World Bank had recommended me for this post, but I worked out a contract directly with the Colombian government. The result was administrative ambiguity that gave me a certain freedom of action” (1986, p. 7).4 But when a
1 For example, Judith was not pleased with the title of the collection of her writings that I edited—Progetti ed effetti: Il mestiere di valutatore (1992) [Projects and effects: the profession of evaluator]. While I intended to present, through her, a type of evaluation that differed from the mainstream, Judith probably felt that presenting her as an evaluator was inadequate. 2 On cross-functional comparison, see Chapters 1 and 7 below. 3 Although she later defined her approach as “lesson-learning evaluation research” (2007; now in 2018, p. 245). Tendler (1998a, p. 1) wrote, for example: “My observations are based on some exposure to the IIG programs, on my experience in carrying out evaluation of better-performing public-sector programs, and with teaching graduate students both at MIT and in the field about how to do this kind of evaluation.” 4 Not least because he could not agree to the request to write a top-down development plan, as he “confessed” in the section entitled “Revolting against a Colombian Assignment” (1986, p. 8).
dictatorship took power there through a coup, he set himself up on his own as a business consultant, and then entered American academia through The Strategy of Economic Development (1958), a text that came out of his Colombian experience and which had an extraordinary influence on development economics. Later, in Journeys Toward Progress (1964), he reflected on the practices of “reformmongers,” particularly those who were close to him—the master reformmongers Lleras Restrepo and Celso Furtado to whom he dedicated the book. And when he wanted to look at what happened during the implementation of development projects he had a difficult confrontation with the World Bank (henceforth WB), which had itself commissioned the work.5 In fact, the report he prepared (A Study of Selected World Bank Projects: An Interim Report, 1965) was subjected to such defensive criticism by the WB that the book based on his work (DPO, mentioned above) was published through the Brookings Institution, not the Bank. After his experience as a consultant for international agencies, Albert became a great innovator in social thinking, drawing inspiration from his own observations of reality. Judith, on the other hand, who worked as a consultant for international agencies in the 1970s and 1980s, was able to establish a dialogue with them, but at the same time jealously maintained her independence, both in calling on them to adhere to their stated tasks6 and in inviting them to be true to their democratic foundations.7 Her way of looking at reality, along with her adherence to what she had directly observed through field research and direct involvement, earned her esteem and respect for her ideas—even if her interlocutors...stopped there. In this introduction I will attempt to argue my case concerning the legacy of Albert and Judith’s contribution to “possibilist evaluation.”8 I will begin by comparing their intellectual trajectories, moments of intersection and parallel paths,
5 I have reconstructed the episode of that evaluation in Appendix A, “Hirschman and the World Bank.” 6 For example, in her report for the World Bank in New Lessons from Old Projects (Tendler, 1993; now in 2018, p. 165). 7 For example, in her report for the Inter-American Foundation in Fitting the Foundation Style (Tendler 1981; now in 2018, p. 117 ff.). 8 When I speak of “possibilist evaluation” I am referring to the way the two authors conducted the evaluations they were involved in, utilizing the “conceptual tools” of possibilism (Hirschman, 1971b, p. 29). But I also address evaluations conducted by others in which aspects of possibilism emerge (possibilism-compatible evaluations). This is the perspective of Chapter 4.
with reference to their formulations concerning development and possibilism.9 I will then analyze the reasons behind what I see as the enduring vitality of their thinking, reinterpreting some aspects of it that can be found both in current debates and in the work of many evaluators (who nevertheless rarely refer10 to Albert and Judith). And in doing so I will refer to points that are then addressed in later chapters.
Roads that cross and roads that diverge

In the mid-1960s when Albert was teaching at Columbia University, Judith asked him to be supervisor for her doctoral thesis. Hirschman was at the culmination of his three-dimensional work on development—The Strategy on economic development, Journeys Toward Progress on policy-making, and Development Projects Observed on development aid. As he wrote in 1994 (now 2015) in his Preface to the second edition of the latter:

The concept—or fantasy—of a unified “trilogy” emerged in my mind primarily during the writing of the present book. Over and above the overt purpose of my work—the analysis of development and the advice on policy—I came to see it as having the latent, hidden, but overriding common intent to celebrate, to “sing” the epic adventure of development—its challenge, drama, and grandeur (2015, p. xvi).
In DPO in particular, Albert had indicated the principles that ought to motivate an evaluation whose aim is to support development. Implementation, he said, should be treated as “a long voyage of discovery in the most varied domains, from technology to politics” (1967, p. 32). And along the way, the creativity (the hiding hand) that emerges should be recognized and appreciated, and various forms of uncertainty must be taken into account along with latitude and time differences, project characteristics (trait-taking and trait-making), and the centrality of side effects.
9 For an overall reference to their work see, for Hirschman, Meldolesi (1995), and for Tendler, Tendler (1992) and Tendler (2018). All of Tendler’s papers cited in this volume are available at www.colornihirschman.org/judith-tendler-publications.
10 Robert Picciotto, Osvaldo Feinstein, Philipp Lepenies, Mita Marra, Laura Tagle, Valeria Aniello, Michael Woolcock and David Ellerman have on the other hand invoked a number of aspects of Hirschman’s work relevant to evaluation.
Introduction | 5 While Albert was observing World Bank projects, Judith was writing her doctoral thesis on electric power in Brazil. Between the two of them there was a rich exchange of views on the themes of development and technology—as well as of a personal nature (including some letters revealing discouragement11). At the end of this period Albert said that Judith’s work on technology had inspired him: This manuscript was largely written in New York between February and July 1966. During this period, I profited greatly from exchanging ideas and draft chapters with Judith Tendler who was then writing her doctoral dissertation on electric power in Brazil. Miss Tendler’s fine insights into the differential characteristics and side- effects of thermal and hydropower, and of generation and distribution, contributed in many ways to the formation of my views. (“Acknowledgements” in Hirschman, 1967, p. xi)
Judith, in turn, said that Albert had taught her “to look where I never would have looked before for insight into a country’s development,” and that in Albert’s work a researcher who was “patient enough” would find “a rich complexity of both success and failure, efficiency alongside incompetence, order cohabiting with disorder” (1968, p. xi).12 In taking leave of his “exclusive 18-year commitment” to development (and Latin America), and based on the trilogy and on a series of illuminating articles—starting with titles such as “Obstacles to Development: A Classification and a Quasi-Vanishing Act” (1971d) or “The Search for Paradigms as a Hindrance to Understanding” (1971e)—Albert sketched out in the introduction to A Bias for Hope “an underlying methodology and perhaps philosophy” for possibilism (1971, p. ix). It was here that he listed inverted sequences, cognitive dissonance, blessings in disguise and unintentional consequences as “possibilist devices” (1971b).13 Even though Albert and Judith always stayed in touch, from 1970 onward their paths diverged.14 Albert was led by the progress of his own work to broaden
11 In that period they exchanged letters in which Judith declared she was in deep water in spite of her efforts in her research on electrical energy in Brazil, and Albert answered that he was in a similar situation, but that it would be a mistake to give up on their respective tasks.
12 This is the common thread running through all her work. See for example the introduction to Tendler and others (1988) on Bolivian cooperatives.
13 See Chapter 4.
14 In a series of letters that I was able to view at the Mudd Library at Princeton University, where the Hirschman Papers are kept, Judith informed Albert in great
the sphere of his scientific interests to encompass social science in general, and toward developed countries. The insights in DPO on latitude led him in fact to produce Exit, Voice, and Loyalty (1970). At that point Harvard called—an offer that he didn’t feel he could refuse. Later, however, partly as a result of the coups in Latin America (and the decline in U.S. interest in development economics), his well-known aversion to teaching prompted him to consider possible alternatives, and he therefore grasped the opportunity offered by the Institute for Advanced Study at Princeton, which initially invited him as a “visitor” and then named him “Professor of Social Science” (singular).15 It was at the Institute that The Passions and the Interests (1977) took shape, together with other important texts—but nevertheless his interest in Latin America remained, above all with regard to the return to democracy that began gradually to appear after the dark years of dictatorship. Judith began a career as a consultant and evaluator for the international development agencies—U.S. Agency for International Development (USAID), World Bank (WB), United Nations Development Program (UNDP), Inter-American Foundation (IAF), etc.16—that continued into the mid-1980s. It was a job that involved both fieldwork and work within the agencies themselves, more attuned to the participatory and democratic style of some (IAF),17 and more detached when it involved observing the bureaucratic practices of others. This emerged later in the book Inside Foreign Aid (1975), an extremely rare analysis of the workings of international aid as seen from inside an agency (USAID). In all
detail about what she was doing, and they exchanged views and advice. And she made no secret of what a pleasure it would be for her to work with him again. In a letter dated July 5, 1979, for example, she wrote: “It looks like there will be a replay in September of the 1974 Echeverria mission of which I was a part—to evaluate programs and policies toward the rural poor in the Northeast in the interim period. I probably won’t have time to go. Is that something that will interest you? If it would (I would easily have you invited) I might reconsider” (Hirschman Papers, box 77, folder 8).
15 See the paper “Our Idea of Social Science” (School of Social Science, 1979), which I will speak about below in Chapter 2.
16 In Progetti ed effetti (Tendler, 1992, p. 281 ff.), see Judith’s bibliography containing a long list of evaluation reports and consultations for program evaluations for various agencies.
17 See the preface to Tendler (1981) in which Peter Hakim praised Judith’s approach to evaluation.
these cases, Judith felt close to the situations she observed, to beneficiaries as well as implementers of projects, and she sought a dialogue with the agencies based on the things she had seen that actually worked despite the many difficulties.18 She too had a strong sense of admiration for the results achieved: “The evaluator should treat any successes with a sense of awe...[and] explain what is happening in the project against a background of what is predictable and what is a surprise” (1982b; now 2018, p. 134). And she grew passionate about her work when she thought she had identified a new phenomenon that provided her with an opportunity to help the progress of the ongoing initiative.19 During this period, scattered through her various evaluation reports and the research projects she conducted, her ideas were emerging about how evaluation should be done. It was important to look at the results actually achieved rather than the goals set by projects, and to be able to grasp unexpected successes (where there were presumed obstacles) as well as unexpected failures (where conditions were presumed favorable). It was also important to analyze the environment of the task at hand and the “personality” of the programs, taking into account the organizational logic of programs and agencies, and to bring the redistributive aspects to the fore through targeted research methods. In the late 1970s, on the occasion of the tenth anniversary of ’68, Albert decided to explore the oscillation between moments of high public interest (in which social movements prevail) and periods of retreat into the private sphere (and consumption)—to examine, in other words, the pleasure and disappointment that each of these social “cycles” can elicit (Shifting Involvements, 1982). This text was also the prelude to a new voyage of observation concerning development projects,20 this time funded by an agency he felt close to
18 It is symptomatic that the “Suggestions to evaluators,” a distillation of evaluative wisdom aimed at those accustomed to mainstream practice, is contained in a text written for USAID (1982b; now 2018, p. 129). Similarly, it is symptomatic that in a text written for IAF (1983a) Judith emphasizes the importance of carrying on with what is already being done well as it suits the democratic and participatory nature of the organization.
19 In a letter to Albert dated July 5, 1979 Judith speaks of advice she gave to the WB (who did not believe in a positive outcome that had in fact been verified in a program she evaluated).
20 In the preface to Getting Ahead Collectively (1984a, p. vii) Albert writes: “It was not part of my intention to ‘evaluate’ the Foundation and its work. I simply used it as a convenient means of access to the ‘grassroots’ and wish to thank it here for letting itself be so used.”
(IAF),21 headed by someone he esteemed (Peter Hakim). Referring to the alternation between public and private and to some of his other ideas (such as inverted sequences or “the way one thing leads to another”), Hirschman based his reasoning on his observation of cooperatives operated by people who had had past experience with collective movements. This was the origin of the “principle of conservation and mutation of social energy,” the topic of the first chapter of Getting Ahead Collectively (1984a). It was at that time that there was a re-convergence of the trajectories of the two authors,22 as may be seen in the volume Direct to the Poor: Grassroots Development in Latin America, (Annis & Hakim, 1988), a collection of articles on programs financed by the IAF. The book opens with Hirschman’s “principle of conservation and mutation of social energy.” In the introduction the editors point to this essay as posing the fundamental questions concerning how the energy for collective action is born, reborn and applied, while the other chapters offer explorations of this. Among these, in fact, is the article by Judith (et al, 1988) on Bolivian cooperatives, a masterpiece of “possibilist” methodology that addresses successful cooperatives that lacked preconditions considered necessary, obstacles that did not prevent the achievement of goals, mixed successes and failures, differential redistributional effects of different crops, etc. When asked by Picciotto (perhaps partly to make up for Albert’s past issues with the WB23) to do a job for the World Bank or alternatively to indicate a student of his who might do it in his place, Hirschman replied that “(...fortunately) there is no Hirschman school,” but that Judith Tendler “has used some of my ideas in a remarkably creative fashion and [...] in general has an intellectual
21 When Reagan abolished it, Hirschman wrote the article “A self-inflicted wound” (1984b) (see Adelman, 2013, p. 595), to indicate that a well-functioning organization had been abolished. 22 In a letter to Albert and Sarah dated October 16, 1982, Judith describes her remarkably positive experience with the Bolivian cooperatives financed by the IAF, along with some of the local IAF operatives and their ability to interact with the poor and appreciate their food, stories and way of life. And she also speaks of Kevin Healy and Peter Hakim, the editors of the book Direct to the Poor. Knowing of Albert’s idea of studying IAF projects she suggests that he visit some of the ones in Bolivia, “I think that Kevin is one of the best IAF representatives, and has ferreted out the more innovative projects to finance.” 23 I have reconstructed the episode below, in Appendix A, “Albert Hirschman and the World Bank.”
affinity with my way of thinking.” Judith in fact did the job, which led to New Lessons from Old Projects (1993), demonstrating her originality within the possibilist framework (among other things, she made full use of her method of comparing successful programs with others, not least to criticize the WB for having abandoned projects that actually had good prospects of success). But the 1980s were also the years in which Judith transformed herself from a consultant into a university professor. Beginning in 1985 at MIT24 she began to develop her “teaching cum research,”25 an activity that gave her great satisfaction as well as a greater opportunity to pass on her particular way of studying development. Her course included doing collective research—her students learned from her experience and from interacting with each other (making their problems explicit).26 At the same time, her work was increasingly based on the fieldwork carried out by groups of students under her direction, the supervision of doctoral theses, and her interaction with ex-students who had joined international organizations. In all these cases, in her work as in her teaching, her primary concern was how to formulate the questions that could guide a research
24 In 1985 she was invited as a visitor by MIT’s Department of Urban Planning, and the following year she was abruptly made full professor (an unusual procedure in an academic career, and reserved by MIT for very few special cases). From then on Judith worked in the doctoral program of the Department of Urban Studies and Planning (DUSP).
25 This is covered extensively in Chapter 7. To understand what it consists of, see the syllabi of her courses, which I have reproduced in Long is the Journey, No. 3, 2017: https://colornihirschman.org/dossier/article/83/syllabuses-of-the-course-analyzing-projects-and-organizations.
26 I collected some accounts of these lessons in the seminar held in Judith’s memory that we organized at the Sapienza (CORIS department), on February 14, 2017. See the remarks by Tito Bianchi and Sunil Tankha, in https://colornihirschman.org/dossier/article/82/memories-by-ex-students. Judith’s students have created a network among themselves, and they organize “Meetings on the Tendler Network.” The first of these was held in New York in March 2019. The invitation stated that “The meeting’s goal is to foster the kind of research that was pioneered by our friend, mentor, and colleague Judith Tendler. […] Professor Tendler relied on fieldwork to examine how development projects and organizations could be made to work better. This meeting will allow development practitioners and scholars from multiple disciplines to present new research, discuss how the research that Judith did (and many of us currently do) can thrive in the world today, and explore connections for collaboration.”
project. Not seeking to confirm the existing literature, but springing from the doubts that can emerge from one’s own observations or from reports written by others (academics or practitioners). Her typical question “What surprised you?” was then translated into carefully articulated questions on why something had worked well and something hadn’t, or on how an aspect seen as negative could change into something positive, and vice versa. At the turn of the 2000s, with Wolfensohn’s presidency, an interest in Hirschman’s ideas grew at the WB, as reflected in the work of Ellerman (2005), which was also critical of the official WB line. In those years Judith was involved, along with her doctoral students, in work on Brazil’s Northeast, which led her to become more concerned with the political dimension of development issues (Good Government in the Tropics).
The relevance of possibilism in evaluation

Despite the fact that Albert and Judith are poorly known among evaluators, their ideas are better received today (than when they were formulated) thanks to a shift in perspective that is taking place in the field of evaluation, one that relates to understanding change. If the purpose of evaluation is still to understand which initiatives work best (where and why) in bringing about the desired change, the experience of over fifty years of history shows that in practice things have often played out (and are still playing out) differently than expected, and in a more complicated way. Alongside the traditional question “does the program work?” the complementary question is now also being asked—“how does change happen when a program is implemented?” This takes us back to the distinction traced by Hirschman between projects as blueprints and projects as privileged elements for understanding how development occurs,27 and to the distinction between voluntaristic change and change as an effect of unintended consequences (1971b, p. 37).28 If change is what matters, this point of view maintains, everything that happens around a project needs to
27 Whose process of implementation, as mentioned, “may often mean in fact a long voyage of discovery in the most varied domains, from technology to politics” (1967, p. 32). See below, Chapter 5. 28 See Chapter 6.
be considered—i.e., implementation paths that are very different contextually, expected and unexpected consequences, complexities and uncertainties, etc. While the question “Did the program work?” presumes a traditional way of thinking in which the program is seen as a hypothetical intended change with an answer provided by testing the hypothesis, the question “How does change happen?” is an open-ended question that leaves room for many interpretations. In reality, this second question presupposes an awareness that challenges any idea of regularity and generalizability. In fact, change—Albert argues—always comes in different forms, with new constellations of events and factors. Indeed, the quest for models and regularities can implicitly (involuntarily) carry obstacles to the perception of change. Herein lies the difference between “probabilism” and “possibilism,” and the conviction, considered blasphemous in traditional thinking, that the possible is wider than the probable because it glimpses additional opportunities for change beyond those that arose in the past and are highly unlikely to reappear—but which nevertheless often served as the basis for constructing a “Theory of Change.” The topicality of possibilism therefore lies in its having offered answers right from the moment the question was asked. And therefore in being equipped for new challenges. Let us see how.
Theories of Change. When an evaluation design is put together today, the first question is about the project’s theory of change (ToC). The need to reconstruct the theory of change underpinning every project has been a way of dealing with the difficulty of attributing an outcome to poorly conceived or poorly implemented initiatives.29 But it is also an occasion to burden the evaluation with a “scientistic problem,” in which the theory of change underlying any project becomes a hypothetical causal explanation30 to be verified in the course of the evaluation (bringing with it all the difficulties of reconciling the “scientific method” of research with the practical demands of evaluation…). Despite notable advances in the way evaluative explanation is conceived—notably by contrasting contribution with attribution (Mayne, 2012), and by identifying a variety of evaluative questions beyond that of causal explanation (Stern et al., 2012)—we nonetheless remain within the bounds of “probabilistic” reasoning. In fact, by defining the ToC as a scheme that incorporates a causal
29 Connell and Kubisch (1998), Weiss (1997). 30 Hypotheses based on identified obstacles to be overcome or preconditions that must exist, which possibilism suggests we should doubt.
contribution (perhaps then questioning whether it is “robust”: Mayne, 2017), we are still moving in the realm of expected change, without putting ourselves in a position to incorporate alternative avenues that the situation, on closer inspection, may offer, and which ultimately might actually make change possible.31 Not surprisingly, the discussion of theories of change has since turned toward the need to formulate flexible, open-ended theories that allow for continuous iteration between different hypotheses. Valters (2014), for example, having argued that ToC do not develop as predicted, that there can be more than one of them, and that the focus should be on process rather than product, on learning rather than accountability (p. 24), goes so far as to say that the function of the ToC is to challenge the assumptions of the program (p. 9). Taking these developments into account, Kim Forss, according to whom “conducting an evaluation is an exercise in change awareness,” wonders32 how it is that in handling ToC, the question of how change happens, in what ways and with what timing, is not being asked.33 After listing a whole series of ways change can happen as revealed in the literature (developmental, transitional, transformational, planned or emergent, episodic and continuous, gradual and punctuational, etc.) Forss links the analysis of change to issues of serendipity and uncertainty. These are themes addressed by other important contributors as well, from those who have taken up the Hirschmanian problem of implementation (Woolcock, 2012; Andrews et al., 2017) to those who have praised “ambiguity” (Dahler-Larsen, 2018). Here Albert and Judith’s possibilism can bring with it an important additional contribution, not only in its starting from a position of doubt and surprise, opening the field to the multiplicity of ways a desired change can come about, but also in helping to reveal new forms, untried exit routes, which the observation of a situation on the ground will help us identify and pursue (this was where Judith excelled). This is the lesson from Journeys: “to show how a society can begin to move forward as it is, in spite of what it is and because of what it is” (1963, p. 6). And it is the lesson from Albert that Judith constantly takes up, as she does for example in Comments on Partnership for Capacity Building in Africa (1998b).
31 Mayne (2021, p. 59) believes that simply asking why there was change is doing research, whereas doing evaluation is asking whether the intervention contributed to the change.
32 See the chapter “From measuring impact to understanding change,” part of a volume on long-term perspectives in evaluation (Forss et al., p. 199, 2021).
33 On timing, see the now classic Woolcock (2012).
It is noteworthy, moreover, that for Albert the link between “ambiguity” and change is explicit:

Some of the very ingredients of the old order can be shown to be ambivalent and to possess some progress- and growth-promoting potential. Hence the close attention I pay to possible blessings in disguise and the collector’s interest I take in constellations which permit strength to be drawn from alleged weaknesses do not spring from infatuation with paradox; rather they are dictated by the essence of the process of change as I am able to understand it. (1963, p. 7).
Explaining and Understanding. The question “Did the program work?” asks for an explanation (i.e., the cause of the change), on the implicit assumption that the program is the cause. Once this explanation has been formulated, we expect consistency with regard to future programs—that is, the same change. The question “How does change happen,” on the other hand, aims to understand what was new and unexpected each time. Those wishing to answer the first question will seek to draw on a knowledge of the theory concerning the field of action of the program, and will look for it in sociology, psychology, political theory, geology, etc. But those who want to answer the second question have the opposite problem—how to be surprised when faced with things that do not turn out the way the theory would have them; to develop, as Albert puts it, the ability to perceive change. This involves “overcoming obstacles to the perception of change,” such as the ones Albert identified in development (1971e)—prejudices, style of work, ideologies, types of leadership, etc. Concerning leaders of underdeveloped countries, whether revolutionaries or reformers, Albert says that they “would do a better job if they trained themselves to overcome the obstacles to the perception of change when it happens” (1971e, p. 341). Because change can happen unintentionally. As Judith says, “good government need not be intentional,” good results can fail to be perceived, and the meaning of some successes can fail to be noticed. Precisely for this reason it is essential that “such experiences be unearthed, pored over, and interpreted back accurately to those who invent advice. This will enable inadvertency to be turned into intention the next time around” (Tendler, 1997, p. 163–5).
The Anthropocene and Unintended Consequences. One way of thinking about how change occurs that has recently come to the fore concerns the issue of the Anthropocene and policies to combat its challenges. Considered a typical aspect of the period we are living in, the interconnection between the economic, social
and ecological effects of any given action has evoked system thinking. This shift has been seen as a “paradigmatic revolution”—from a program- to a system-perspective. A shift, that is, from the idea of change as a linear development from intervention to outcome in a particular sphere (voluntaristic change) to the idea of unexpected change occurring within a system of integrated spheres—understanding the system as characterized by complexity, emergence, reverse causality, and unexpected consequences. This “paradigm shift” brings with it two problems. On the one hand, there is the risk of forgetting that evaluation has already come to grips with complexity (in theory-based approaches, realist, democratic and participatory evaluation),34 and of thus undervaluing an important legacy that could be used in criticizing ever-present mainstream approaches.35 In the second place, while conceding that they can be positive or negative, unintended (or unexpected) consequences36 are most often invoked in the second sense—economic development destroying nature, globalization promoting social inequality, etc. This observation brings to the agenda what is known as “transformational change” or integrated development, more of an aspiration than a clear idea of what it might mean, and which in any case alludes to voluntary action (by the state, by international bodies), rather than to the idea that things can (somehow) fix themselves.37 When the program perspective was all there was, approaches had already been put forward that took context into account, and that allowed for a variability of effects as well as the necessary complexity in articulating judgments about successes and failures. Now, with the emergence of the systemic perspective, which places context and the interconnectedness of different spheres at the heart of evaluation design, work should be done on unintended consequences so as to identify which ones to exploit (and to what extent) for positive change. Albert also thought that voluntaristic change and change as an effect of unintended consequences could be intertwined, but in a different sense. For possibilism, unexpected consequences produce opportunities for development in
34 In Stame (2022), I analyzed different ways of thinking about the system, and critiqued the identification of system with holism.
35 Therefore, it would not represent a paradigmatic revolution (a la Kuhn), but the opposition between different approaches that coexist.
36 See Chapter 6.
37 See van den Berg, Magro, Adrien (2021), particularly the chapters by van den Berg and Uitto.
situations in which “the outline of previously hidden possibilities of change can begin to be perceived” (1971b, p. 37). Not, therefore, as simple opposition to forces that are difficult to overcome, leading to further frustration (fracasomania), but as the facilitation of viable paths, which must obviously be identifiable (by an evaluator trained in possibilism). This is also the topic Judith addresses as “missed opportunities,” when evaluators feel constrained by the rationale of the project and, even when activities or situations have been identified in which developmental goals could be achieved, they do not adequately exploit these findings (1982b, p. A-2; now 2018, p. 130).
The Hiding Hand. The theme of unexpected change also points to a renewed interest among those concerned with development policy in the Hirschmanian theory of the “hiding hand.” This is the providential hand that benevolently hides the difficulties that may arise in a project, suggesting either that the tasks will not be difficult or that in any case it will be possible to address them as part of an ongoing routine. In this way actions are undertaken that would otherwise never even have been initiated (or that once started, would have been abandoned), and which instead, through the creative reaction of their protagonists in such a situation, allow the identification of innovative solutions. Here then is another case of unexpected change needing to be taken into account. The creativity such change is based on relies on the ability to exploit opportunities that previously were not considered either by the actors or by the consultants and theorists. As Hirschman says, “creativity always comes as a surprise to us; therefore we can never count on it and we dare not believe in it until it has happened. In other words, we would not consciously engage upon tasks whose success clearly requires that creativity be forthcoming” (1967, p. 11). But this is not the way the concept has recently been revisited, once again in the context of answering the perennial questions about project outcomes. Under a “probabilistic” approach, the goal is to make a numerical calculation of how many times the hiding hand will be “benevolent” and how many “malevolent” (even predicting a ratio of one quarter vs. three quarters!).38
The Ethical Dimension. A further reason for the wider (though still insufficient) acceptance of Albert and Judith’s ideas is the increased interest at present in the ethical side of evaluation. Not least as a result of environmental disaster and a heightened sensitivity to social inequalities based on ethnicity, gender, race,
38 See Flyvbjerg (2016) and the debate that followed in World Development. A critique of this approach, based on Hirschman’s possibilist framework, can be found in Lepenies (2017).
sexual orientation, etc., the subject of ethics in evaluation, once held to be marginal, is today at the center of attention (van den Berg, Hawkins, Stame, 2021). We no longer simply advise professional behavior that will “do no harm,” but have realized that every evaluation creates opportunities for “preventing bad” and “doing good.” This change in perspective brings with it a critique of the “scientistic” approach of the mainstream, according to which the evaluator should not discuss values, which are inherent in the project. Instead, there has been a revival of the kind of social science that, as Albert says, is based on the interconnectedness of “proving and preaching”—between analytical argument and reasoning of a moral nature.39 And this attitude is linked to the passion for change, to the urge to discover what positive aspects may be present in a situation. As I write in Chapter 7, “with an eye to the public interest [Judith] emphasized that through comparative analysis it was possible to highlight what had worked vs. what had not, thus offering the public some keys to improving policy.”40 This new approach allows us to appreciate the stance Judith always held, as an ethically oriented professional, in explicitly stating the values she pursued (helping the poor, improving the operation of the public sector), and especially in identifying research methods that would support her perspective.41
39 See Chapter 3, which gradually reconstructs how the subject of morality was handled by Geertz and Hirschman in what they called “interpretive social science.”
40 Originally Stame (2019, p. 453).
41 See Chapter 7 and Stame (2018).
The Texts

The chapters that make up this volume were written at different times and for different occasions. Taken together they also reflect an evolution in my way of dealing with evaluation. The chronological order I have put them in reflects movement from a stage when it was important to define evaluation in general, in its role as a process linked to practical programs and projects, to the present stage in which evaluation takes greater account of the links that run between various domains and the problems that this, with its greater emphasis on ethics, poses for efforts to contribute to change. Initially, I tried to reconstruct the way of doing evaluation that I had discovered in Judith’s work following in the footsteps of Albert’s possibilism, which
contrasted with the decision-making logic of international agencies and their inability to learn from project experiences or to understand which projects were successes and which failures. Thus Chapter 1, “Evaluation and Development” (written in 1992), creates a counterpoint between a mainstream mode of evaluation and insights from Judith’s experience as a consultant to these same agencies. Subsequently, I worked on the development of various alternatives to mainstream approaches, such as theory-based evaluation, realist evaluation, and positive-thinking approaches (Stame 2001, 2002, 2004, 2014). In all these cases I recognized elements that had been anticipated by Albert—complexity, mechanisms and motivations, and the field’s interdisciplinary nature (Chapter 4: “Hirschman, Possibilism and Evaluation”). In a certain sense, the idea was to reconstruct a Hirschmanian influence that had been unconscious. The new challenges facing evaluation required that these innovative approaches make more conscious use of the theoretical foundation that our authors had handed down to us. So I made a journey backwards, investigating some of the aspects of Albert’s work that ought to help us navigate current debates: possibilism and interpretive social science (Chapter 2: “Interpretive Social Science: the core of an anthology”); the relationship between research and morality (Chapter 3: “Interpretive Social Science and Morality”) which is at the root of the debate on ethics in evaluation; the theme of complexity, which recurs in the opposition between projects and programs (Chapter 5: “Hirschman’s ‘Production Line’ on Projects and Programs”);42 and the theme of unexpected consequences, a warhorse of the “systemic” approaches (Chapter 6: “Possibilism, Change and Unintended Consequences”).43 Finally, not least in the light of the work I did in choosing the articles for the anthology Beautiful Pages by Judith Tendler,44 I have reasserted Judith’s role
42 I later wrote an article on this topic, “Program, complexity, and system when evaluating sustainable development” (Stame, 2022).
43 Chapters 2, 3 and 5 were written as presentations at the Conferences on Hirschman’s Legacy organized by A Colorni-Hirschman International Institute.
44 All Judith’s writings were uploaded to the MIT website on the occasion of her Festschrift (2011) and may also be read at www.colornihirschman.org. I have drawn on them in a new publication: Beautiful Pages by Judith Tendler (Tendler, 2018). I have selected from her long and detailed studies and evaluation reports the pieces of brilliant synthesis that use specific examples to illustrate the main points of her way of looking at “program successes and failures.” Extended excerpts are presented in Chapter 7. I should add that the title “Beautiful Pages” echoes that of a famous collection of Cattaneo’s writings: “The Most Beautiful Pages of Carlo Cattaneo,
as a forerunner of today’s trends in approaches to evaluation. By taking account of her teaching experience and her most recent writings, I have been able to more fully reconstruct her work, which is particularly relevant today in the way she interweaves theory and practice, methodology and ethics (Chapter 7: “Doubt, Surprise and Ethics: Lessons from Judith Tendler’s Work”). Here, it seemed to me, the characteristics of possibilist evaluation were becoming clearer. Albert theorized possibilism, but it was Judith who developed the practical side and built the edifice of possibilist evaluation. It is therefore no coincidence that this work begins and ends with her. For the sake of completeness, I have added Appendix A, a reconstruction of the episode of the evaluations conducted by Albert for the World Bank, an experience that constitutes a preamble to all subsequent developments. And I have closed with an affectionate remembrance of Judith (Appendix B).
Acknowledgments

Each of the chapters collected in this volume received helpful comments in its previous life from many conference participants, editors, and readers: it would be difficult to thank them individually. A special mention is due to Elliot Stern, Osvaldo Feinstein, Bish Sanyal, Salo Coslosky, Laura Tagle, and Bruno Baroni. And, of course, to Luca Meldolesi. Lastly, I want to express my gratitude to Michael Gilmartin, who translated the Introduction, Chapters 1, 4, and 7, and Appendix A, all originally written in Italian.
Selected by Gaetano Salvemini.” But only later did I realize how appropriate the phrase was for Judith. In fact, in a letter to Albert dated July 5, 1979, she wrote that she wished she had the time to polish her writings until they were beautiful.
1
Evaluation and development1
In discussions concerning reform of the welfare state and efforts to achieve an increase in public spending productivity, there are two opposing attitudes that need to be overcome. One of these is a journalistic scandal culture, which grows out of (frequent) examples of bad government, diversion of funds, or underworld infiltration, and which aims more at selling the news than at finding ways out of these situations. Then there is a technicism that presents reforms as black boxes, mechanisms that easily slot into place once they have been approved, all the inevitable mishaps along the way having already been foreseen. Judith Tendler’s work responds to this twofold challenge in an original way. It focuses on analyzing the implementation of public policy and development projects in situations of urban poverty and in underdeveloped countries with the aim of finding out why
1 A previous version of this chapter appeared as the introduction to the book Progetti ed effetti (1992) by Judith Tendler, which I edited and which unites two book chapters, one article, and three research reports by this author, all written before the end of the 1980s. I would like to thank Liliana Bàculo, Albert Hirschman, Luca Meldolesi, Annarita Olivetti, and Bish Sanyal for discussion and comments, and the CNR for a financial contribution.
organizations (international, governmental, or volunteer) manage to learn certain lessons and not others, and how to influence that process. The question of organizational learning is now increasingly at the center of debate in the administrative research centers of the large American universities, whose influence on the improvement of the bureaucratic structures of the major industrial countries cannot be ignored. And the opinion is widespread in such circles that theories and methodologies of ex ante analysis, monitoring and ex post evaluation of public policies must exit from a phase of technicism and institutional ritualism if they are to be up to the task of solving problems that recur in ever-changing ways and require specific and flexible solutions.2 Two decades ago, in an article that remains a classic in the methodology of evaluation in its infancy, Donald Campbell (1969) distinguished between trapped (or “self-deceived”) reformers and experimental reformers. The former are so devoted to their proposals that they set out evaluations that can only confirm the positive results of the action, fearing any experiment that might show that such results can also be obtained by other means or that the problems remain open despite the intervention. They do not easily change their mind, and are not inclined to adopt methods of investigation that are experimental or quasi-experimental. The latter, on the other hand, are certain only that there are problems to be solved, and are ready to experiment with new pathways until the most suitable (at least for the moment) is found—this regarding both the proposed reforms and the methods of investigation. Today, now that our glorious confidence in reform has waned somewhat and much progress has been made in evaluation methods, we are still asking what it means to be an experimental reformer. Judith Tendler can certainly count herself among those who have helped shed light on this question, starting with the experimentation side. Her work aims to bring out reasonable solutions to the problems faced by development projects through an “iterative” evaluation methodology (Tendler, 1982b, p. A-10). This combines state-of-the-art experimentation with true comparative virtuosity, aiming to produce knowledge that concerns not only the projects being evaluated, but how to design successive evaluations as well, presenting both projects and evaluations as “learning processes.”3 2 I have considered these problems in Stame (1990) and Meldolesi and Stame (1989). For an “instructive” presentation of the American system (and a comparison with those of Sweden and Japan) see Crozier (1988). 3 In recent studies on development projects, the “learning process approach” has been gaining ground, cf. Gow and Morss (1988), Korten (1980), Rondinelli (1983). This
refers mainly to learning on the part of the beneficiaries of aid, while for Tendler it is equally important to encourage learning on the part of the development agencies.

It could be said that the drive to explore this issue has been present in all Tendler's research activity, from her early work on electricity in Brazil, in which she observed that difficult problems could be solved by administrations normally considered inefficient and not up to instructing anyone, to her later experience with institutions considered efficient but which nevertheless struggled to grasp the lessons within their reach. Development projects are presented as a form of investment aimed at achieving a determined result within a specific time frame, and the entity that carries them out must respond concerning their economic and social return to the financing body (government, international agency, bank) if its legitimacy (and continued existence) is to be recognized. Fortunately, this has made it a matter of course for regular budgets to be drawn up and, as a result, evaluation—a specific form of social research—has become an established practice, taught in methodology manuals and nurtured by experts. And since every development project—like every public project—generates conditions that differ in part from those that were expected and gives rise to new knowledge, the distinctive feature of an ex-post evaluation is the observation of what actually happened (effects that were foreseen or unforeseen, positive or negative, achieved thanks to the project or despite all the foreseeable and unforeseeable difficulties). In the understanding of these effects, therefore, lie the strengths and weaknesses (of the project, of the organizations running it, of the recipients), and finally any suggestions for future initiatives. On the other hand, argues Judith Tendler, given the way interventions are organized and evaluations designed, it is not always possible to make full use of the potential in the knowledge that emerges during ex-post evaluation—to the extent that in some respects what is going on can be described as wasteful. It is wasteful not only when there are specific errors in forecasting, execution or management that allow for misappropriation or abuse, but also when development agencies prematurely abandon their projects at the first sign of opposition, without waiting for the actual work to proceed. Since they are therefore unable to see what reactions their programs would provoke, they cannot appreciate how those interested in their continuation would defend them, or what could be changed to enable them to move forward. It is wasteful when the organizations implementing projects fail to understand the signs right in front of them and persist
in simply doing things the way they always have. It is wasteful when evaluators continue to amass heaps of data without realizing that they could use it to solve the specific problems they are facing. Prior to assuming leadership of the Program on Project Evaluation in the Department of Urban Studies and Planning at the Massachusetts Institute of Technology (MIT), Tendler was a consultant to various international and U.S. development agencies for nearly two decades. Closely following the implementation of dozens of projects; observing the interrelations between their political, economic, technological and social aspects and the specificity of the ways they mix in each experience and give rise to unexpected forms of social change; and dwelling on “unexpected successes”—these are some of the key stages through which Tendler developed an original approach to the pathways of economic development, an approach strongly linked to the construction of a corresponding evaluation method. It is also worth mentioning that the reach of her work is not restricted to the specific subject of development projects, since it can also be extended to the analysis of welfare policies in industrialized countries (and beyond).
An evaluation methodology for possible development Tendler’s work program when she began to study Latin America in the 1960s was summed up in her far from perfunctory thanks to Hirschman. The Strategy of Economic Development (Hirschman, 1959) had taught her to look where she would not have looked before, and to find “a rich complexity of both success and failure, efficiency alongside incompetence, order cohabiting with disorder” (Tendler, 1968, p. xi). This lesson, first and foremost, enabled Tendler to avoid following in the footsteps of the economists from developed countries. These, if they had not already abandoned the field reeling from “cultural shock” at the first signs of the “paroxysm” of development, which they saw only as chaos and disorganization, were often quick to prescribe recipes suitable for any latitude—perhaps of a stamp opposite to those left by whatever rival economists had preceded them. In addition it was a lesson that proved very useful in containing the skepticism of many critics, who were generally inclined to blame the interests of developed countries for the successive failures experienced in the Third World. The idea was to study how underdeveloped countries themselves were dealing with their problems and going about solving them, the point being to understand what measures
were most suitable for mobilizing unused resources and hidden rationalities. In a nutshell, the aim was to show development's possibilities as well as the particular forms it could take in different situations. It was to this perspective that Tendler would bring the original contribution of her particular vantage point. In the 1960s, when the theoretical debates on development were still raging and Kennedy's ideas on the "alliance for progress" were gaining ground (according to which "all good things [namely, investment, democracy, security, etc.] go together"4), Tendler was investigating the different production technologies used in large infrastructure projects and their effect on capacity for self-promotion and the mobilization of social energies (Tendler 1968, p. 175). In the 1970s, when the agencies realized that capital investment did not "automatically" lead to an improvement in the conditions of the poorest classes and ushered in "new policies" meant to establish a satisfactory relationship between growth, political action, and income distribution, Tendler (1975) was focusing on institutional aspects of aid normally neglected in project analyses. From modes of decision-making to forms of implementation, these raise crucial questions about the relationship of projects to the context in which they intervene. And then in the 80s, when the talk was no longer about development but about debt reduction and public sector spending, international agencies were blocked by various catastrophist theories5 and non-governmental organizations (NGOs) presented as bottom-up development agents were bursting onto the international cooperation scene, Tendler was calmly analyzing the negative and positive aspects of international agency and local government policies along with the new institutions of voluntarism, trying to identify possible forms of cooperation or replacement that would enhance the capabilities of each of them. In this way they would at the same time breathe new life into development policies and restore confidence in the ability to learn from previous attempts to alleviate poverty. In recent times, however, something seems to have changed. In the wake of a new perception of underdeveloped countries (no longer a homogeneous world of underdevelopment, but a variegated set of different parts, some of which are in rapid expansion, as in Southeast Asia) and of positive political changes in some

4 An interesting analysis of American aid policy in the Kennedy era and the birth of AID may be found in Packenham (1973, pp. 59–75).
5 As Tendler notes (1987, p. 47) the researchers had become "incapable of acting—pessimistic about things working out, and worried that we will harm the very subjects of our concern."
areas, a discussion has been reopened about the possibilities for development and democracy,6 for which investigative tools such as those developed by Tendler are more appropriate than ever.
Decision-making logic and the definition of objectives It is axiomatic in the literature on decisions that the objectives of any action have to be “clear” and “reachable” so as to be easily controlled and reproduced. Although this paradigm has now drawn much skepticism (and is the subject of great debate), the main efforts in evaluation methodology remain directed at identifying analytical procedures for distinguishing among purposes,7 and some ex ante evaluation techniques, such as cost-benefit analysis, have been greatly “refined.” According to this approach, international aid agencies, when deciding to finance a project involving the construction of a facility, the provision of a service or the granting of a loan, are concerned with choosing the most cost-effective among the various alternatives presented. Naturally, the granting of this aid is expected to have a developmental effect. Development goals include economic growth, increased prosperity and social justice, and the acquisition of productive and organizational skills. Yet the difficulty of predicting and accurately assessing such outcomes (and the fear that the perverse effect of increasing social inequality may also occur) has led aid agencies to relegate development effects to the background and focus almost exclusively on output, to be measured through a number of indicators such as the number of people attending training courses, the quantity of goods produced, the number of people vaccinated, the number of hectares under cultivation, etc. This preference for anything that could be quantitatively measured undoubtedly enabled the accumulation of much knowledge about development aid, but it gave rise to a series of negative consequences—negative both for the institutional
6 See for example the annual report of the Economic Commission for Latin America and the Caribbean of the UN (ECLAC, 1990), which provides a picture of the Latin American situation in which emphasis is also placed on a series of “successes.” 7 See for example World Bank manual (Casley & Lury, 1982) that distinguishes between “output” (quantity of products or services), “outcome” for the population (changes in the conditions of life) and “impact” (on the economic and social life of the population).
aims of the agencies, since the purpose of development in the end becomes increasingly elusive, and for the efficiency of the projects themselves, since opportunities that do arise for finding solutions to recurring problems are not seized. Tendler focuses her attention on this in the book Inside Foreign Aid (1975). The first critical consideration in this text deals with a relatively unusual theme—the organizational environment in which decisions to disburse aid are made. The starting point is the surprising observation that in a world where the scantiness of aid resources is well known, the prevailing attitude is that development assistance funds are unlimited. This is due to the decision-making reasoning regarding output, which is defined as a set amount of resources to be transferred in a given amount of time. The optimal solution is in fact deemed to be big projects that transfer large amounts of resources in a short period of time, since savings from economies of scale are expected from the shorter working time per dollar transferred. "A donor organization's sense of mission," Tendler notes, "relates not necessarily to economic development but to the commitment of resources, the moving of money" (1975, p. 88), without bothering to ensure that such a transfer is qualitatively and not just quantitatively necessary (1975, p. 92). It might be added that the eyes of public opinion are entirely on the increase or decrease of funding, and "the announcements about development assistance that carry the most political impact and drama—excepting those related to scandal—have to do with significant increases or cutbacks in aggregate funds rather than with the content of programs" (1975, p. 91). Tendler's objection is that in this way a number of resulting negative effects are overlooked, effects to which all the actors involved (the "task environment") contribute precisely because they all scrupulously adhere to this same decision-making approach. When making their decisions the donors dismiss projects based on simpler technologies which, requiring less construction effort, would promote more learning on the part of the beneficiaries. In order to receive the higher funding, the beneficiaries accept aid linked to imports and thus damage local industry even when it would be capable of supplying the desired facilities. And the agencies entrusted with project implementation know that their efficiency—along with their employees' chances of career advancement—is measured by their ability to "move money." In the proverbial end-of-year race (to meet the budget), seeking not to lose funding already allocated, they scout for projects that can be financed (helping potential beneficiaries to overcome procedural difficulties and the inadequacy of their own planning skills) without entering too much into the actual merits of the projects.
Hence, this investigation into the aspects of the organization that are “less visible” (and therefore normally overlooked by the agencies’ critics when they go looking for the “causes” of their inefficiency) involves a parallel investigation into the inefficiency of resource allocation. Tying up a large sum in fact reduces flexibility in project implementation and precludes the possibility of funding other projects. These are missed opportunities resulting from the choices that have been made. For example, if supplies are financed in foreign currency, the price of goods produced by local industry will not be used as a bargaining chip between local industry and the borrowing institution. The only concern will be about how to divide up the market so as to secure foreign financing. Instead, the international agency could have introduced price incentives as a way of deciding which local industries should be awarded the contract. To do this it would have had to remove from the list of imports those articles whose local price was already lower (or could be made so through a tender) than the foreign price. Not doing it or anything like it represents a major missed opportunity to stimulate local businesses toward (more) competitive behavior. It could therefore be said that the whole book is dedicated to something negative—to the “process of neglecting better alternatives” (1975, p. 72). In other words, why certain problems are not perceived as problems (1975, p. 94). This involves an exploration of the theory of rational action. Behavior of the type considered here usually receives two kinds of explanations. According to the first, the decision is irrational because it was made in ignorance or bad faith (guided, for example, by the intention to finance U.S. industries irrespective of market conditions). According to the second, the decision is rational but is boycotted by the bureaucracy, which is pursuing particularized strategies. Here, however, a third hypothesis is introduced. This is that the decision was made rationally with respect to the procedures and performance standards established in the organizational environment of development aid, and with respect to implementation, which generally conforms to them. But the decision-making logic regulating the entire organizational environment conceals a danger. By keeping quiet about “non-measurable” problems and concentrating entirely on organizational output, this reasoning ends up making things worse while failing to shed any light at all on opportunities and solutions that kept cropping up along the way… Similar limitations on a decision-making rationale conditioned by output are apparent in the analysis of rural poverty projects launched by the World Bank and other agencies in the 1970s. Even though they were aimed directly at the poor (rather than seeking a simple trickle-down effect, as in previous projects)
they were formulated in such a way as to link equity too closely with growth, and always in one direction. And here again, establishing the achievement of the equity goals (in health, education, democratic participation, etc.), which had even been a reason to promote the policy, was considered too difficult. Therefore, to make projects acceptable, equity goals were equated with growth goals. (Because these had the advantage of having measurable outcomes: increased income, increased life expectancy according to a careful application of the theory of human capital, increased agricultural productivity, etc.). This "dislocation" between goals and outcomes meant that the solidarity aspect was seen as an eventual "boon," an added benefit of growth, rather than as a goal worthy of achieving on its own. But then when it was realized that even these outcomes were beyond reach (because of the difficulty of isolating the interests of the rural poor from those of the elites, the inability to reach the poorest strata, or simply because agricultural production wasn't shooting up), the close tie that was pre-established between equity and growth led to the bold proposal to abandon projects without worrying about poverty alleviation—which was still an open question. After taking pains to show that equity would lead to an increase in growth, the absence of growth ended up compromising the pursuit of equity. Focusing as it does on decisional logic, Tendler's argument tends to exclude, or at least greatly reduce, the weight of any explanation framed in terms of "premeditation." It is not so much that the poor were meant to be excluded as a result of pressure from rural elites, but rather that their exclusion was a result of this "dislocation"—the belief that it was easier to get approval for projects which otherwise would have run into opposition from these same elites. If instead they had made the effort to define equity as a goal to be pursued in its own right—according to criteria of redistributive justice or social betterment—they would not have been so quick to give up on these projects. There would have been less of the waste caused by their rapid abandonment, and more effort would have been made to correct errors along the way. For example, certain agricultural aid that benefits only small producers but excludes farm workers should have been replaced, or flanked, by measures aimed at everyone working in agriculture. In this way, the project itself would be better tuned to the purpose of reaching the poor, which is actually not an unachievable goal. A second limitation to this decisional logic concerns the inability to appreciate the weight of politics. Normally agencies avoid dealing with aspects such as those just mentioned because they do not want to get mixed up in politics, which is seen not as a sphere of action where positive and negative sides can be distinguished, but only as a source of lobbying and corruption. (Or rather, Tendler
adds [1982a, p. 1]—because the political dimension “was too institutional for the economists and too broad for the specialists”). Concealing the weight of politics however, causes projects to miss out on essential support that they need from start to finish. For example, at the decision-making stage, the glacial pace of fund disbursement is proverbial, a result of the absence of pressure groups of sufficient strength. In the implementation phase it makes a big difference if projects are able to mobilize some local politicians who are rooted in their environment and know how to maneuver within it. The issue is exemplified by the “electoral” conditioning development projects are subjected to. Everyone can cite cases of projects abandoned because the person who proposed them was not re-elected, and the next-in-line has different… projects in mind. In this regard, however, Tendler undertakes a subtle comparison of situations that have different effects on projects depending on the technologies employed, the type of workforce that will be temporarily unemployed, and the tolerance for quality degradation of certain facilities. In this way, alongside cases where it is not in the interests of the newly-elected politician to get mixed up in projects linked to the reputation of his or her predecessor, there are others where the new person can find “considerable political exchange value” (1982a, p. 31) in the completion of projects initiated by preceding governments. To conclude, Tendler says, politics can have stabilizing as well as destabilizing effects on development projects. But this is not where it ends. Isolation from politics can have a different kind of negative effect. A preference for situations with the least instability leads to an appreciation of those in which elections are not held at all, or to favoring technocratic, administrative and even military systems, and we arrive at the paradox of supporters of democratic principles at home who are at the same time skeptical of transitional processes toward democratic systems in developing countries. All the same, Tendler points out, conditions of instability exist even under military regimes, and of rigidity in democratic regimes, depending on the tasks at hand and the organizational environment. Development projects must therefore not be politically isolated, but must examine to what extent they “are disrupted under different kinds of political systems” (1982a, p. 34).
Defining objectives and opportunities not to be missed Two points can thus be made about decision-making. The first concerns the criteria for judging the achievement of “unclear” objectives. Since “accept[ing] project outputs as prima facie evidence of achievement of project objectives” (1982b, p. A-9, now in 2018, p. 137) can lead to the negative consequences mentioned, it is important to be able to look at what was done independently of the objectives, not least because “unanticipated success (…) may be obscured by the fact that the project failed in its stated objectives” (1982b, p. A-7; now in 2018, p. 135). Thus, alongside the effort to refine quantitative objectives and the advice to learn to use all available tools to measure the results obtained (quantity of goods or services produced, distribution by social and territorial groups, etc.), Tendler indicates a second direction. This is attempting to make explicit the qualitative criteria for success that apply to the many issues touched upon by the project, without worrying about the boundaries that exist between different disciplinary competencies. It is striking that her clarifications on this point in the humble form of “Suggestions to Evaluators,” were inspired by the author’s observation of the work of private voluntary organizations (PVOs) describing themselves as alternatives to the large international agencies (and local governments). These groups in fact state that their interest is in the process and not the result (which would be the exclusive focus of the large donors), and in “implementing (…) a process through which poor people learn to gain control over their lives.” From this they deduce that their projects “therefore cannot be judged by the output measures of traditional evaluations” (1982b, p. 4). In other words, if the goal is development there may not be evaluation. But in this way, Tendler argues, these organizations on the one hand assign themselves a series of unrealistic goals that they cannot achieve (so that they can easily be judged to have failed), and on the other, they have no concrete vision of the successes they can actually achieve. At the same time, they are unable to interact successfully with the other institutions operating in the sector, both public and private. And it can also happen that in order not to be submerged by the mass of data and documents collected in their field activites, PVO operatives try to escape into “objective” parameters similar to those used by evaluators in the “big agencies.” For the precise purpose of getting past these difficulties, Tendler goes so far as to propose “considering PVOs as agencies for development,” since if they were seen in this way it would be possible to take on
board their advantages and at the same time integrate them into a context of general commitment to development. Thus, in a sort of miniature manual for workers in the field, “Suggestions to Evaluators,” Tendler proposes some criteria for analyzing the object of an evaluation, such as how the decision-making process works (democracy), who, in terms of social stratification, benefits from the project (equity), and which parts of the project work well (efficiency). The author used these criteria again and again to analyze the various productive activities Bolivian cooperatives were engaged in, the effects of setting up Brazilian hydroelectric plants on local industry and on the regions where they are located, the development of the informal economy in several African and Asian countries, and so on. The second point Tendler raises about decision-making concerns opportunities not taken because some observed outcomes did not fall within the assessable objectives of the project or because the evaluation was not taken seriously enough. All this comes under the heading of “missed lessons” from a project, those not transformed into new policies and programs. On the other hand, ex post evaluation, in addition to going beyond a simple analysis of whether the project objectives have been achieved (and researching what other objectives have been achieved instead), can also explore the question of opportunities not to be missed. For example, instead of simply studying how various social groups are affected by a project’s redistributive measures, the ex post analysis can identify activities or resources that allow it to better focus on the poor, and to see whether these opportunities have actually been acted upon (1982b, p. A2; now in 2018, p. 130). In fact, this was exactly what Tendler herself concentrated on in discussing the potential contribution of local politicians in the implementation of projects aimed at poverty, or missed opportunities for the growth of local industries in large infrastructure projects. When evaluation is aimed at formulating proposals and suggestions for new projects it should not feel constrained by the rationale or content of the project being evaluated.8 On the contrary, the evaluator’s task is to work relentlessly to discover exit routes and alternative proposals so that missed opportunities are minimized.
8 On relations between evaluation and politics see Palumbo (1987), in particular the article by Weiss (1987).
The nature of the task and cross-functional comparisons Beginning with her studies of electric power in Brazil and Argentina, Tendler discovered that a good part of the outcome of a development process depends on the technologies chosen for a certain task, since “technologies vary as to their political vulnerability, their ability to draw out and train competent talent, and their capacity to brook the coexistence of politically antagonistic institutions” (Tendler, 1968, p. 6). Faced with the identical goal of providing the country with a modern electricity grid, the greater successes of Brazil’s hydro-based system over Argentina’s thermal-based system were attributed by Tendler to the fact that hydro technology requires major mobilization during production and thus ensures a consensus for the project. Moreover, the problems it poses in the course of its implementation stimulate a search for appropriate solutions, whereas thermal technology instead requires more rigid operational planning and does not stimulate internal energies (Tendler, 1965). Finally, in exploring the Brazilian system in depth, Tendler (1968) realized that much of its success was due to the recombining that took place when different tasks were broken down among different social actors, as a way of exploiting the potential of each and at the same time neutralizing institutional shortcomings. The actual output, in fact—what people see—which mobilizes energy and instills belief in development, was the task of the nationalized industry, which was in this way legitimized along with the political forces that had promoted it. The production process had also witnessed a healthy coexistence of politicians with technicians, who agreed to handle energy production because it was a specialized and well-defined activity. Distribution, on the other hand, which is not visible and which had been considered parasitic in nature, was entrusted to a multinational company that was always on the verge of being nationalized and therefore reluctant to make investments. This caused no great damage, however, given the greater tolerance in distribution for underperformance due to poor maintenance. Various lessons on implementation can be drawn from these first results on the role of the technology utilized, the type of task to be carried out, and the connections between different activities. In her “Suggestions to Evaluators” Tendler had advised analyzing “the nature of tasks and activities in terms of their compatibility with (a) certain decision-making processes (participatory or arbitrary); (b) certain benefit distributions (equitable vs. skewed); and (c) certain degrees of control (decentralized, centralized, specialist, non-specialist)” (1982b, p. 150). If,
in fact, each project presents a particular combination of all these elements, then in addition to knowing how to analyze them (as suggested above), it is essential also to be able to see how they fit together, understanding whether elements that would be negative for the task of the project might be acceptable in another project with a different task, or whether elements that seem positive in themselves ought not to be integrated with others.9 An illuminating example of the application of these criteria is offered by Tendler herself in the chapter on Bolivian cooperatives financed by the Inter-American Foundation (Tendler et al., 1988, p. 85). A comparison of projects involving four cooperatives immediately reveals their similar results: (a) direct and visible successes (they lasted for more than a decade and benefited the cooperators as well as the external community); (b) indirect and less visible positive results (they gave the farmers a voice and allowed them access to the state and to trade with large private companies); (c) failures: poor management (irregular accounting and corruption), failure to expand, and an elite unable to reinvent itself. This dry list comes alive as soon as it is put through the litmus test that is the nature of the task, in turn specified in the type of productive activity and type of cultivation. We thus see that each of these defects, whose causes may be sought in the history of the group and the general conditions of society, is more or less serious depending on how it combines with other structural factors, such as "the sequence in which activities were undertaken, the social structure of the communities, the varying characteristics of the principal crops grown and the traits of the various activities undertaken by the coops" (Tendler et al., 1988, p. 115). And cases where these defects are serious are distinguished from cases where they are not because the task tolerates them, and even from those in which the defects can play a positive role.10 For example, arrested growth is serious if construction work is necessary in which many hands are needed, less serious if the activities undertaken have spillover effects anyway (they are public assets, they only have an effect if everyone takes part in them), and sometimes it can even be an advantage (as in the case of credit and commercial activities, which if undertaken rapidly on a large scale cannot avoid inefficiencies and embezzlement). The analysis becomes even more subtle when Tendler gets into the specifics of individual crops, each of which is situated in a more or less socially stratified environment and requires production and marketing activities whose effects can promote either solidarity or "stratification." Thus we learn, for example, that cacao production has leveling effects, while sugar cane, produced in combination with rice, perpetuates the division between rich cane producers and poor rice producers. Tendler achieved these results thanks to a methodology of comparative analysis that became increasingly complex. The first comparison (between the Brazilian and Argentine systems of providing electrical energy) is intra-functional—it holds the task fixed and compares the two technologies. Different planning methods involve different actors, either political or technocratic, and the same is true for different operating methods as well, which involve either experts or the general public. By the same token, maintenance needs and the possibility to make changes during the project have a different social impact depending on the degree of flexibility allowed and lead to different general processes of social learning. The second comparison (between production and distribution in the Brazilian energy industry, and between a public company and a foreign private company) is cross-functional.11 It compares two tasks and their respective actors, methods of operation, energies involved, and all other induced effects. Intra-functional comparison enlightens us on the different outcomes (direct and indirect effects)

9 In a contribution to the literature on "institutional development," which applies the problem of bounded rationality (Leibenstein, Simon) to the world of aid, Israel (1987) addressed the need to distinguish the degree of "specificity" of the tasks to be performed: "the higher the degree of specificity, the more intense, immediate, identifiable and focused will be the effects of a good or a bad performance" (p. 49). Although this work demonstrates the theoretical advancement that has taken place in the field, I think that the analysis of specificity would really benefit by moving beyond the stage of meticulous task classification using the factor-combining procedure proposed by Tendler.
10 This work takes up some aspects of the article in which Hirschman (1971d), invoking a more complex analysis of the development process, rejected the idea that there were obstacles or prerequisites for any country embarking on such a path, and identified not only the obstacles that must be removed, but also those that need not be removed, or at least not immediately, and those that can instead be transformed into advantages. See Chapter 6.
11 In an unpublished paper from 1984, in which she presented her work to her colleagues at MIT, Tendler states that she addressed three functional areas (infrastructure, agricultural development, and small industry and the informal sector) in such a way that her analytical work always highlighted what could be learned by comparing them. This contrasts with much of the literature on development plans and projects, which often restrict analysis to functional categories (such as health, agriculture, or transport) and therefore lose part of the powerful intuitive potential that can come from cross-functional comparison.
of major choices, while cross-functional comparison advances the argument by investigating combinations of different factors, such as the functional separation of domestic and multinational firms. These are comparisons that bring out the lights and shadows in any situation. In the end they are able to give an account of the advantages of a particular combination only after having thoroughly investigated, for each, both the direct effects (level of production, functionality of the facilities, satisfaction of demand, autonomy from the outside) and the indirect effects (decision-making capacity, managerial capacity, participation, self-expression, use of local political capacity, etc.). In this way it becomes clear who is best suited to doing what (whether state, private or volunteer, separately or in collaboration, whether a domestic or foreign business, a hierarchical or democratic organization, etc.). It is possible to determine what division of labor is most rational, which activities it is best to pursue and which to abandon, and what results should be sought.12 But comparisons also show how to move with agility between quasi-experimental procedures such as those analyzed up to now, and procedures that might be called “metaphorical,” in which two different situations are compared so that one is called upon to shed light on aspects of the other, especially those that have remained obscure. This is what Tendler unveiled when the World Bank asked her to evaluate policies for the alleviation of rural poverty (1982a). The situation appeared to have come to a standstill—results regularly showed a failure to achieve the desired goals, to the point of suggesting to many agencies that projects should be suspended. So Tendler appealed to her “metaphorical” process, stating explicitly that she needed a comparison with urban poverty projects as a way of establishing context—that is, so as to better understand and develop her argument. Because the most novel feature of the policies she observed seemed to be the targeting of the poor, she looked for clues in another experience of targeting, President Johnson’s “war on poverty,” a policy aimed at urban black ghettos after the riots of the 1960s. Of this she analyzed the salient characteristics, comparing them with the projects aimed at the rural poor, and several lines of analysis emerged for evaluating rural policy. 12 In Tendler (1987, p. 47) it is argued that one of the difficulties of evaluating whether NGOs benefitting from the Ford Foundation’s Livelihood, Employment, and Income-Generation (LEIG) program are in fact, as they claim, able to do better than the public sector depends partly on “a lack of comparative knowledge” that might have been available if international agencies had not abandoned poverty alleviation programs prematurely.
In the first place, it was easy to aim urban policies at the poor because in large American cities they were isolated from the rich, which was not the case in the rural landscape of underdeveloped countries. A ghetto is a concentration of poor people, so that a policy aimed at ghettos cannot affect the rich and would be a net benefit to the poor (barring, of course, the diversion of funds at the local political level or the existence of various forms of boycotts). In a given rural area, on the other hand, with a more mixed population, policies involving credit, infrastructure or services would be to everyone's advantage, and the elites, with their greater opportunities of access to institutions, might even benefit more than other citizens. All this points to the conclusion that funding should go to projects that can make the most of poor people's condition of isolation—either because the projects are of interest only to them (such as the provision of low-quality services or goods that elites are not interested in), or because they provide goods whose enjoyment by the rich does not impede enjoyment by the poor, but rather requires the participation of all (such as certain health campaigns), or else because they are intended for particular frontier areas where there is greater social uniformity. (Although in some cases, Tendler goes on to suggest, if these conditions cannot be met, it will even be necessary to try to neutralize the elites by giving them something in return). In the second place, Johnson's urban policies won much sympathy because they were formulated in the context of achieving civil rights. The projects' goal was to furnish services like homes, healthcare, transportation, education, etc., because possession of these basic assets was considered the citizens' right. They were part of the civil rights movement of the 1960s, benefiting a hitherto neglected sector of the population and at the same time addressing what was a blight on society as a whole. Even those who for selfish reasons might be against it still looked favorably on the formation of a more educated industrial proletariat. Projects concerned with rural poverty, on the other hand, as we have seen, were formulated based on the opposite principle—the reasoning behind them was not framed in terms of justice, but of profitability. They offered services which in certain contexts the poor were not even able to utilize, while at the same time alarming the elites, who were faced with the specter of dangerous competition that might emerge from below. In this way, then, the comparison with a situation in which concerns for equity and democracy carried an aura of legitimacy was able to show that rural projects could be relaunched if efficiency were better coordinated with equity and democracy—this was the aim that Tendler had in mind when she drew up the report on poverty alleviation policies.
Recurring difficulties and unexpected successes: What to suggest? In Tendler’s expectations, each new project would be considered as an open- ended situation, in such a way that factors which in previous settings had been causally or sequentially linked could now be combined with different aspects and result in different configurations. At the same time, her entire line of reasoning was aimed at identifying possible lessons that would be useful in the design of future projects, and at encouraging learning on the part of the promoting organizations (even if, as we might expect from the analysis of missed opportunities, it is obvious that these were by no means taken for granted ex ante, and it is also obvious that some kinds of organizational learning were more difficult than others). Specifically, the discussion was about how to look at successes and failures, and the lessons that could be taken from them. First and foremost, Tendler taught, it is important to know how to be surprised by unexpected events. But you have to be surprised by the right things. We can begin with difficulties. Her 45th piece of advice to evaluators is not to dwell on recurring difficulties (“faulty maintenance, lack of coordination between agencies, lack of funds for operating costs, schools without teachers, health clinics without doctors” 1982b, p. A-7; now in 2018, p. 135), because these should be expected. Instead, be surprised when these things don’t happen, and try to explain how that could have happened. This is the procedure she followed with the Bolivian cooperatives, when she compared a group of them that “had various traits and problems that we usually associate with failure” (Tendler et al., 1988, p. 85), and was surprised to find that in reality these elements of inadequacy were oddly combined with others that represented clear success. But then there are successes in their own right—such as the strongest cases among the poverty alleviation projects of the Ford Foundation’s Livelihood, Employment, and Income-Generation (LEIG) program, based on funding small- scale production and activities in the informal economy, often carried out by women’s groups (Tendler, 1987). It is a comparison with other projects and, in addition to the painstaking work of collecting statistical data (demographic, productive, social stratification) and direct observation, it was carried out according to a precise research orientation. Actually, one of Tendler’s most important suggestions is that where there are successes they must not be taken for granted as normal cases of the project working well. “The evaluator should treat any successes with a sense of awe” (1982b,
p. A-6; now in 2018, p. 134), distinguishing between what was predictable and what instead was a surprise. This way, after identifying some common traits in the successful cases (such as, for example, the restricted localization of a production sector or activity, preferably concerning already known occupations, a leadership closely connected to powerful political institutions, an urban context, firmly established outlet markets, etc.—1987, p. iv), Tendler identified in the program a series of "opportunities where LEIG planners often do not expect to find them," such as monopsonistic buyers, the support of powerful figures in the public sector, and non-specialist staff able to provide specialized services such as credit, etc. (1987, p. 32). Certain positive results, like community cooperation in infrastructure construction, are less surprising than others and perhaps come about through a new "constellation of factors" (for example, because of organizational disorder—which is, among other things, a running theme of Tendler's research on electricity in Brazil and international aid organizations—and not in spite of it) or due to a "sequential order" of events different from what was expected. An attempt is required to explain what happened, what capacities and resources it was possible to utilize, what connections between productive sectors were established and why. An effort must also be made to understand whether there was something in a given project that predisposed it for success without falling back on simplistic explanations "having to do with the quality of the program leader," for example (not least because, as Tendler suggests [1982b, p. A-7; now in 2018, p. 134], even the question of leadership has to be addressed in the context of the nature of the project. "Some types of projects are more apt to attract good leaders than others; some types of projects do well even with mediocre leadership"). Thus, when we come back to these comparisons and quasi-experiments in setting up new projects, and ask ourselves "what lessons do the most successful programs teach us," the explicit message is that the outcomes obtained (negative or positive) should not be transformed into obstacles or prerequisites, because it is highly unlikely that they will recur in the future with the same characteristics. The point of the lessons, rather, is to expand our knowledge, to lead us to imagine how new combinations of the elements in play would work. This reasoning is exemplified in the LEIG program—if "minimalist" credit worked well, it was because it possessed certain attributes that made it easy and induced good performance in practice. This does not mean that minimalist credit has to be applied everywhere, but that the choice of tasks in program building should be made following these criteria (the ones we saw that concerned cross-functional comparison). Similar reasoning was applied to the cooperatives, where financing was
recommended for projects that favored cooperation, or that would tolerate management that was not innovative, or that required expert leadership—without advising all cooperatives to grow cocoa, conduct credit activities or install a rice mill. It could be said that the purpose of the whole exercise should be twofold. On one hand, the evaluator has to become an expert, focusing on the various aspects (economic, technological, social, political) that concern all projects, and on the different combinations (and interrelations) they display each time. And on the other, the evaluator has to identify some general criteria that seem to lead to success in any situation and try to picture by analogy how they might be applied to new projects. This is what Tendler summarized, in a rather appealing Hirschmanian expression, as being intellectually curious—understanding how, in reality, “one thing leads to another.”
2
Interpretive social science: The core of an anthology1
The context: The school of social science

The anthology Interpretive Social Science was edited by Paul Rabinow and William M. Sullivan twice: in 1979 with the subtitle "A Reader," and in 1987 with the subtitle "A Second Look." This is perhaps the only volume where contributions by Albert Hirschman and Clifford Geertz can be found side by side. Hirschman often made reference to this anthology. In a letter to the director of the Institute for Advanced Study (IAS), Hirschman proudly wrote that the anthology contained five contributions by people who had been members of the School of Social Science (SSS) (Geertz, Hirschman, Taylor, Kuhn, Rabinow), regretting that the introduction did not mention it. The book is also testimony to the closeness of Robert Bellah's work to what was at the time being developed at the School of Social Science at the IAS. The focus of the argument is on the current status of the social sciences and the need to abandon the positivist outlook that claimed to link them to the natural sciences model. "We propose a return to (the) human world in all its lack

1 A previous version of this text appeared in Long is the Journey n. 2, 2016. https://colornihirschman.org/dossier/article/53/nicoletta-stame-an-anthology-two-looks.
of clarity, its alienation, and its depth, as an alternative to continuing to search for a formal deductive paradigm in the social sciences” (SSS, 1979, p. 8). What is called an “interpretive turn” is the need to add new concepts and intellectual tools to the skills of social scientists and to enlarge their perspective, not to repeat old disputes between the natural sciences and the humanities. Rabinow, an anthropologist, had worked with Geertz, while Sullivan, a philosopher, had worked with Bellah. Clifford Geertz and Robert Bellah had known each other since their graduate studies at Harvard in the ‘50s, where they had to navigate under Talcott Parsons’ grand systematization of the social sciences. Although they had not often been in contact recently, Geertz (in After the Fact, 1995, p. 124) had been impressed by Bellah’s “breadth of learning and, something not entirely common in the social sciences, his moral seriousness.” Bellah avowedly shared Geertz’s definition of culture: “believing (…) that man is an animal suspended in webs of significance that he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law, but an interpretive one in search of meaning” (Geertz, 1973, p. 5). Geertz had become the first professor of the newly established SSS at the IAS in 1970. Together with Carl Kaysen, who had been in charge of constituting the School of Social Science, Geertz was looking for new professors to invite, and considered Bellah a suitable candidate. Unfortunately, this proposal was opposed by the board of directors at the IAS, who—from their natural science standpoint2 —did not consider Bellah’s work sufficiently “rigorous.” The board’s position, though strongly resented by Geertz (1995, p. 125) and responsible also for the departure of Kaysen, was nonetheless unable to block the development in the SSS of a basis for the interpretive approach in the social sciences, especially beginning in the mid-1970s, when Albert Hirschman joined the SSS. From that moment on, the collaboration, and friendship, between Hirschman and Geertz was very close, and was decisive in getting the new enterprise off the ground. The two main protagonists soon gathered around them a group of research assistants (William Sewell, a historian from Chicago; Quentin Skinner, an English political scientist; Wolf Lepenies, a sociologist from Berlin) and external 2 The Institute for Advanced Study in Princeton, NJ, was founded in 1930. Among its founders was Einstein. At first it included only the School of Mathematics (of which Einstein, Oppenheimer, von Neumann and others were members). Later, the School of Natural Sciences and the School of Historical Studies (dedicated to the history of antiquity) were added. The social sciences found hospitality in that “forum” only starting in the 1970s, as I mention in the text.
collaborators like Robert Darnton, a historian from Princeton University. At the end of the 1970s, two projects were initiated that aimed precisely at lending impetus to the "interpretive turn." The first was the actual launch of the program of the SSS, in which scholars were invited to develop ideas in their specific fields of interest compatible with the interpretive social science perspective, and to create bridges that would connect with developments going on outside their fields. In a document entitled Our idea of social science (School of Social Science, 1979),3 the aim of interpretive social science is spelled out—to "criticize and refine the prevailing theories and methodologies of the human sciences" (p. 7), which are characterized by "overspecialization, present-mindedness, and unwarranted scientism without much compensating capacity to provide satisfactory solutions to the pressing social and economic problems of the day" (p. 7). Having observed that "to approach the analysis of social phenomena with the usual categories of statistical and nomological explanation is to give unwarranted priority to the externalities of behavior, its etiology and regularities," the document goes on to state that a "full understanding of most social phenomena requires that, as well as studying causation at least as much attention should be paid to the linguistic, conventional, ritual and other symbolic systems in terms of which the agents and groups we study in the human sciences describe, theorize and appraise their own conduct" (p. 8). As Geertz would say in his "Retrospective preface," a text written on the 25th anniversary of the founding of the SSS, "the aim was and (reworked, revised, reconsidered, and reasserted) still is, not just to measure, correlate, systematize, and settle, but to formulate, clarify, appraise, and understand. A small off-line enterprise in a glamorous up-market place, the loosening up of things, not their solidification seemed the way to go" (2001a, p. 4). They did not want to create a school that would stand in opposition to other schools, but an initiative that would promote the development of trends already present in various disciplines along with the dialogue between them, and to "refine" prevailing theories, which clearly could not be ignored. This program, therefore, required a search for the conditions that would foster debate, appealing to the right people (who could demonstrate an elective affinity with the interpretive viewpoint), and engaging them in the development of their own work in their own area of experience, in order to then be able to identify
3 This text, written to obtain funding, was prepared by Sewell and Skinner with Geertz and Hirschman’s supervision.
the contributions that could come from each. This meant cultivating a certain type of behavior (“what was needed was an attitude, not a program—another program—and certainly not a paradigm”) characterized by an interest “in empirical work, conceptually informed, not (…) in methodology [or] system building […], [by] careful, at least reasonably dispassionate argument, not in ideological ax grinding” (2001a, p. 8). And also by trespassing, either by moving among many disciplines in addressing a topic (as Hirschman, Darnton, and Geertz did) or by dealing directly with the relationship between disciplines (as Sewell and Lepenies4 did), and by including a moral and political thrust in research.5 Finally, to round out this tendency, a certain penchant arose for “confusing, incomplete, contradictory objects of study” (SSS, 1979, p. 9). The second project was a seminar on “Tradition and Interpretation: the Sociology of Culture” directed by Bellah at Berkeley in 1976/77, in which Rabinow and Sullivan took part, and out of which the anthology Interpretive Social Science emerged.6
What the anthology is about
The two editions of the anthology reflect an ongoing trend among social scientists who share the interpretivist approach. Both editions include a first part, where a handful of topics of general interest are discussed by philosophers in some way linked to the hermeneutic approach, and a second part consisting of “Interpretations” by authors participating in various ways in the sought-after turn. The two editions have some writings in common and others that differ—in the second edition the general part is reduced, while there is a wider presence of illustrative writings, testifying to the fact that the interpretive approach was conquering new ground. Both editions share an introduction by Rabinow and Sullivan in which the rationale of the book is clarified. Firstly, it is reiterated that interpretive social science is “not simply a new methodology” or a new school or dogma, but a unity of themes. It replaces the distinction between the social sciences as descriptive 4 Lepenies was working at the time on his text Between Literature and Science: The Rise of Sociology. 5 See Chapter 3. 6 In 1980 another symposium was organized at Berkeley on “Morality and the social sciences” in which Albert Hirschman also participated. See Chapter 3.
and the humanities as normative, since “each situation is always and at once historical, moral and political,” and “science, like any human endeavor, is rooted in a context of meanings which is itself a social reality, a particular organization of human action defining a moral and practical world” (Rabinow & Sullivan, 1987, pp. 20–21). The introduction to the first edition then accounts for the division of the book into the two parts, and for the selection of the chapters. The opening section presents a review of the dominant interpretive currents in the humanities, and serves to situate interpretive social science according to two perspectives:
- The first, “deconstructing the positivist idea of science” (Taylor, Ricoeur), is a critique of the neo-positivist quest for a unified science and formal, structural models of explanation. “The aim is not to uncover universal laws but rather to explicate context and world” (1987, p. 14). - The second perspective is “Reconstruction: Reason as action, criticism as recovery” (Gadamer, Habermas, Foucault). Interpretive social science is compared with currents that look at a type of practical reason able to interpret the human world through the re-appropriation of tradition (Gadamer) and communicative reason (Habermas).
Part II presents chapters “exemplifying the interpretive approach” (Rabinow & Sullivan, 1979, p. 18), which the editors briefly summarize. Finally, the Introduction to the second edition provides reasons for the changes, deletions7 and new chapters.8 The major changes are in the second part of the book, which is much richer due to what are referred to as developments and advances in the new approach. According to the editors, the new chapters reflect advances by the interpretive approach in two main directions: the theory of knowledge (Dreyfus & Dreyfus, Schön), and the importance of moral issues
7 Actually, the introduction to the second edition does not give specific reasons for the deletions, which are probably various and can only be guessed at. In fact, taking into account existing relations between the natural sciences, the social sciences, and the humanities, the eliminated chapters did diverge somewhat from the purposes of the anthology we have mentioned. The chapter by Kuhn, for example, intended to overcome the denial of an interpretive dimension in the natural sciences, while Fish’s advocated an interpretive concern for context in literary studies. 8 Those of Taussig, Rosaldo, Schön, Dreyfus & Dreyfus, Jameson.
(Taussig, Rosaldo). In particular, they refer to “new readings that heighten the contrast between knowledge seen as a technical project, with all its affinities to the technological organization of life, and knowledge seen, in the human sciences, as inescapably practical and historically situated” (Rabinow & Sullivan, 1987, p. 2).9 The chapters by Geertz, Hirschman, and Bellah remain unchanged in the second edition (and thus reflect the editors’ basic intentions). They offer interpretations of specific cultural products combined with the individual theories of interpretation supported by these authors in opposition to traditional approaches. Taken together they contribute to making interpretive social science a remarkably vital and compelling intellectual enterprise.
Fragments of interpretive social science
In “The Search for Paradigms as a Hindrance to Understanding,”10 Hirschman puts the accent “on the kind of cognitive style that hinders, or promotes, understanding” (Hirschman, 1987, p. 178). In a comparison of two books on economic development in Latin America, Hirschman provides a critique of the dogmatic cognitive style, arguing that when laws and models are imposed on an actual setting, claiming that it fits them, hopeful developments that might suggest other interpretations are either not perceived or are considered exceptions and therefore not relevant. In the case under scrutiny, most paradigms and models tended to “[lay] down excessive constraints on the conceivable moves of individuals and societies” (p. 186), thus “propounding that there are only two possibilities—disaster 9 For example, Schön’s chapter, “The Art of Managing: Reflection-in-Action within an Organizational Learning System,” is on the professional competence of managers, traditionally seen as “management science” or as an “art” (or even a craft). Schön rejects the first definition and addresses the second. Art, in this case, is not intuition but “reflection in action,” the reflective conversation managers have with themselves about how to adapt their knowledge to new and uncertain situations they find themselves in. Managers should be helped to reflect on their activities, to properly interpret what they are doing. Otherwise their actions will remain mysterious. A reflective practitioner is the interpreter of a context that is richer than expected, and the social scientist, decoding the manager’s actions, will help him or her become a “developer of management science.” 10 Originally published in World Politics, vol. 22, n. 3, 1970, and republished in A Bias for Hope (Hirschman, 1971e).
or one particular road to salvation” (ibid).11 Hirschman criticizes the attitude of social scientists who “are happy enough when they have gotten hold of one paradigm or line of causation” and whose guesses are often further off the mark than the “experienced politician whose intuition is more likely to take a variety of forces into account” (p. 192). Changes on a large scale are “unpredictable, because they took the very actors by surprise, and not repeatable, because once the event has happened everybody is put on notice and precautions will be taken by various parties so that it will not happen again” (ibid). Such changes are better characterized by uniqueness than over-determination. The lesson for the social scientist is “an understanding of the experience,” which is what “made it at all possible to build under these trying circumstances” (p. 194). Individuals and society find ways out of their predicaments. “He who looks for large-scale social changes must be possessed, like Kierkegaard, by the ‘passion for the possible’ rather than relying on what has been certified as probable by factorial analysis” (ibid). Geertz’s article “Deep Play: Notes on the Balinese Cockfight” is a classic of his symbolic anthropology. The chapter is divided into three parts. Part one is a description of what happens during the cockfight (the symbolism of masculinity and value, the roles in the preparation, the betting). It is a scene of animal savagery, male narcissism, status rivalry, blood sacrifice. Part two deals with the meaning of the fight—a dramatic representation of status preoccupations: “in deep [games], where the amounts of money are great, much more is at stake than the material gain: namely esteem, honor, dignity, respect, in a word […], status.” “Almost all the matches are sociologically relevant” (Geertz, 1987, p. 218–9). Part three explains what it means to treat the cockfight as a text (or another form of art) to be interpreted, and not a rite or a pastime to be described, as other anthropologists would do. The cockfight is a metaphor that makes everyday experience understandable—it represents the “disquietfulness” of the Balinese, through the dramatic form of a mock war mediated by a hate for animals. It does not reinforce status distinctions—as the functionalists would have it—but “provides a metasocial commentary upon the whole matter of assorting human beings into fixed hierarchical ranks.” Its function is interpretive: “it is a Balinese reading of Balinese experience, a story they tell themselves about themselves” (1987, p. 234).
11 They are thus unaware that there may be other ways. Even purgatory—the author adds, half-seriously—represents a third way.
Hence the need to understand how the symbolic forms work in order to organize perceptions (meanings, emotions, concepts, attitudes) in concrete situations. Symbolic forms can be handled sociologically. The culture of a people is a set of texts, and “the anthropologist strains to read over the shoulders of those to whom they belong” (1987, p. 239). If societies contain their own interpretations, “one has only to learn how to gain access to them” (p. 240). The two editions feature two different chapters by Bellah, dealing with the same topic—American middle class culture and the contrast between two cultural traditions, biblical religion and the republican politics of individualism. They are serial analyses that Bellah himself was developing at the time. The chapter in the first edition, “New Religious Consciousness and the Crisis in Modernity,” refers to the cultural and political upheaval of the 1960s in reaction against what were seen as the negative consequences of the current embodiment of those traditions. The chapter in the second edition, “The Quest for the Self: Individualism, Morality, Politics,” refers to later developments when these “habits of the heart” were reconsidered. Which is to say, the quest for the self can oscillate between a Tocquevillian relationship with the community and an Emersonian absolute autonomy of the individual.12 “The chasm between person and society that individualism creates” (Bellah, 1987, p. 383) could be bridged by a wise reconciliation between the two cultural traditions, reinterpreted in current terms. Bellah defines his study as “more an interpretation than a description” (Bellah, 1979, p. 346). The culture of the specific social group of his interest is not seen as an undifferentiated whole. There are “variations, lines of cleavage, and even deep conflict.” It would be better therefore to speak of “meanings, interpretations and even ideologies, as well as cultures.” All this shows that “it is not only the interpretive social scientist who is in search of meaning, it is also the individuals and groups that he is studying who are in search of meaning” (1979, p. 361). Contrary to what some might think, and despite repeated attempts to imitate the natural sciences model, this anthology has not lost its relevance even after almost three decades. Its perspective is still capable of offering a strong antidote to such renewed hindrances to understanding.
12 Bellah is referring to Alexis de Tocqueville (1805–1859) and Ralph Waldo Emerson (1803–1882).
3
Interpretive social science and morality1
Social science and morality
At a moment when moral issues are increasingly relevant to the life of our societies (growing inequalities worldwide, ecological collapse, renewed imperialist policies, terrorism and local wars), questions about the place of morality in social science have multiplied. Forty years ago, this same question was raised by the group of social scientists from different disciplines that we discussed in Chapter 2, who had converged on the idea of interpretive social science. These researchers openly dared to challenge an omission that was imprinted in the very origins of their disciplines. This episode of responsible creativity is worth revisiting, not least because it sheds much light on our problems. Interpretive social science criticized the principles of “value-freedom” (Weber), “objectivity,” and the “detachment” of the researcher that were considered true cornerstones of mainstream social science. Rather, it envisioned a moral commitment on the part of the researcher that was part of the research 1 An earlier version of this paper was presented at the First Conference on the Hirschman Legacy, Boston, 2017, and published in Meldolesi L. and Stame N., eds. (2018), For a Better World, Roma, IDE.
itself. In Geertz’s words: “as ‘interpretivists,’ self-declared and self-understood, we were interested in work that reached beyond the narrowed confines of a fixed and schematized ‘scientific method,’ one that connected up with moral, political, and spiritual concerns” (2001a, p. 8). It was not enough to appreciate subjective values (not fearing, incidentally, the accusation of relativism)—values, meanings, and behaviors were to be submitted to serious analysis. This was implicit in the way these social scientists addressed a question that had no relevance in mainstream social science, “what is the morality that should guide social science?” In the words of Bellah et al. (1983, p. 17), “at stake is the issue of how empirically described life and ethical vision can be brought into relation.” Positivist epistemology excluded morality from social science on the basis of certain presuppositions—the object of research was defined as “facts, not values,” the goal of research as “descriptive, not normative,” and the attitude of the researcher as “detached, not involved.” Ethical considerations were therefore considered part of the realm of the humanities, not the sciences (and these included the social sciences). Geertz first, and then the other social scientists who identified themselves as interpretive, contested these dichotomies. They linked together the various spheres of life and of inquiry that both positivist social science and hermeneutic philosophy, each for their own good reasons, wanted to keep separate.2 These authors were not comfortable in the paradigm wars, nor with the strong dichotomies (scientism vs. subjectivism) just mentioned. Indeed, considering “social scientific research as a variety of moral experience” (Geertz, 2000, p. 23) is a key claim that refers as much to morality as normally understood as it does to how morality should be treated specifically in one’s field of research. In this type of literature, in fact, two kinds of content are attributed to morality. On the one hand, it is essential to understand the meaning people attribute to their actions—that is, their own mores—because “persons are moral agents, they question themselves and take responsibility for the stances they adopt” (Sullivan, 1983, p. 306). On the other, what it means to live a human
2 Geertz (2000, p. 145) criticizes the schematic opposition between the natural and human sciences posited by the hermeneutic philosopher Taylor, the idea that between them there is a gulf, a dichotomy instead of a mere difference. While praising Taylor’s contribution to defending the integrity and vitality of the human sciences (including sociology) against the attacks of positivism, Geertz criticizes him for not having distinguished ruptures and discontinuities within the natural sciences.
life needs defining (Sullivan, 1983, p. 304). These two types of content may refer to both the object of research and the attitude of the researcher. Given the pervasiveness of the “amoral” stance in all social disciplines, there can be no single way of combining morality and social science. Among interpretive social scientists two main strategies can be detected:
- Recognizing that ethical orientations have always been present in scientific research, although “in disguise” (as Hirschman put it), and exposing them: “ethical orientations are present, disguised or not, everywhere in the enterprise of social science” (Bellah et al., 1983, p. 8). - Keeping the tension between the two poles of these oppositions open. This is what Bellah calls “to criticize the weaknesses of modern thought from within its own assumptions” (ibid, p. 9).
In what follows I will examine how four social scientists tackled the topic from the standpoint of their own disciplines (anthropology, sociology, economics, political science), in order to create a space for morality within a social science better equipped to address the problems of the day. I will present the position of each author with reference to:
a. The relationship between morality and social science b. How to understand moral issues—mores vs. ethical values c. The main topic dealt with—the research object with respect to the researcher/subject relationship d. The strategy for dealing with the topic—keeping open the opposition between morality and social science vs. recognizing what already exists
In conclusion I will present some comparisons between their approaches to the issue.
Clifford Geertz
In 1968,3 Clifford Geertz wrote an essay entitled “Thinking as a moral act: ethical dimensions of anthropological fieldwork in the New States” that opened with
3 At that time he had not yet met Hirschman, but had cited The Strategy of Economic Development.
a quote from Dewey: “thought is conduct and […] it is to be morally judged as such” (2000, p. 21).4 In that essay Geertz affirms that social science research, contrary to the tenets of the “scientific method” and of the “detached observer,” is a moral experience: “methods and theories of social science are not being produced by computers but by men and women […] operating not in laboratories but in the same social world to which the methods apply and the theories pertain” (p. 22). “Social research [is] a form of conduct” and “implications [should] be drawn for social science as a moral force” (p. 23).
Geertz discusses two instances of the ethical dimensions of anthropological fieldwork (his own area of research), which found its own raison d’être in the age of imperialism. These are the personal dilemmas of the researcher that have usually been kept under control by what he ironically calls the anthropologist’s “vocational stoicism.” The first instance refers to “the imbalance between the ability to uncover problems and the power to solve them” (p. 37). There is nothing unusual about dilemmas like having to choose between different effects of an intervention, and it is pointless to pretend that social scientists are indifferent to their moral implications. The example concerns agrarian reform. This is a perennial problem, which Geertz has analyzed in Indonesia and Morocco, two Muslim countries that are very different: “In both situations (there is) a radical short-run incompatibility between the two economic goals which together comprise what agrarian reform in the long run consists of: technological progress and improved social welfare. […] In Indonesia,[…] this contradiction expresses itself in terms of an extraordinarily labor-intensive, but, on the whole, highly productive mode of exploitation. […] Technological progress of any serious scope […] means the massive displacement of rural labor, and this is unthinkable under the present conditions” (p. 25). In the Moroccan situation, on the other hand, “there is a split between large-scale […] modern farmers and very small-scale four- and five-acre traditional dirt farmers” (p. 27). The dilemma that it presents distinguishes between, on the one hand, a continuation of the situation which “over and above its social injustice,
4 The essay appeared originally in the Antioch Review.
(is) not one that is likely to endure very long in the post-colonial world, and indeed has now already begun to alter. On the other, a disappearance of such farmers and their replacement by small peasants threatens […] a fall in agricultural output and foreign exchange earnings which […] cannot (be) regarded with equanimity” (p. 27). Apparently both countries have ‘chosen’ “higher levels of rural employment over economic rationalization.” But “this sort of ‘choice’ is, for all its welfare attractions, a most dubious one, given a physical setting where advanced techniques are necessary not just to prevent the decline of output but to avoid a progressive deterioration of the environment to levels for all intents and purposes irreversible” (p. 27).
Observing that “technological progress and improved social welfare pull very strongly against one another,” Geertz notes that this “is not confined […] to the area of agrarian reform; it is pervasive […] in education […], in politics […], in religion […]” (p. 28–29). All these problems remain “on a rather impersonal, merely professional level,” and are dealt with, more or less well, “by conjuring up the usual vocational stoicism” (p. 29), as when social scientists protest that “‘I don’t give advice, I just point at the roots of the problem’” (p. 39). The second instance of the ethical dimension in fieldwork involves what Geertz calls “the ethically ambiguous character” of the “inherent moral asymmetry of the field situation” (p. 33–34), “the inherent moral tension between investigator and subject” (p. 37), things that the usual vocational stoicism finds it harder to neutralize. “The relationship between an anthropologist and an informant rests on a set of particular fictions half seen-through” (p. 34), what Geertz calls the “anthropological irony” (p. 29), which is not included in the traditional conception of the detached researcher. “After awhile one even develops a certain resignation toward the idea of being viewed, even by one’s most reliable friends, as much as a source of income as a person. One of the psychological fringe benefits of anthropological research—at least I think it is a benefit—is that it teaches you how it feels to be thought of as a fool and used as an object, and how to endure it” (p. 30).
The anthropologist comes to represent “an exemplification […] of the sort of life-chances [the informants] themselves will soon have, or if not themselves then surely their children” (p. 31). This is what Geertz calls “‘the touching faith
problem.’ It is not altogether comfortable to live among people who feel themselves suddenly heir to vast possibilities they surely have every right to possess but will in all likelihood not get” (p. 31). The anthropologist “is left ethically disarmed, […][with] a passionate wish to become personally valuable to one’s informant—i.e., a friend—in order to maintain self-respect. The notion that one has been marvelously successful in doing this is the investigator’s side of the ‘touching faith’ coin: one believes in cross-cultural communion (one calls it ‘rapport’) as one’s subjects believe in tomorrow” (p. 33). “The anthropologist is sustained by the scientific value of the data being gathered,” while the informant’s interest “is kept alive by a whole series of secondary gains. […] But if the implicit agreement to regard one another […] as members of the same cultural universe breaks down, none of these more matter-of-fact incentives can keep the relationship going very long” (p. 34).
This is the awkward situation faced by people who are eager to deny “their personal subjection to a vocational ethic” that supposedly comes from “failing to have emotions nor perceiving them in others,” and who “insist that social scientists are unmoved by moral concerns altogether—not disinterested but uninterested” (p. 39), and invoke their “detachment,” “relativism,” and the “scientific method” (p. 38). Geertz admits to the “difficulties of being at one and the same time an involved actor and a detached observer” (p. 39). And yet he reminds us that “anthropological fieldwork as a form of conduct does not permit any significant separation of the occupational and extra-occupational spheres of one’s life. […] In the field, the anthropologist has to learn to live and think at the same time” (p. 39). Thus “the central question to ask about social science is […] what does it tell us about the values by which we—all of us—in fact live?” (p. 38). Geertz’s answer is that social science can offer moral values by keeping open the tension between judgments that are normally considered to be in opposition. It is a suggestion:
- “to combine two fundamental orientations toward reality—the engaged and the analytic—into a single attitude,” analyzing with commitment, and
- “to look at persons and events (and oneself) with an eye at once cold and concerned,” which represents a “sort of research experience [that] has rather deeper, and rather different, moral implications for our culture than those usually proposed” (p. 40).
In conclusion:
“A professional commitment to view human affairs analytically is not in opposition to a personal commitment to view them in terms of a particular moral perspective […] The flight into scientism or, on the other side, into subjectivism, is but a sign that the tension cannot any longer be borne […]. These are the pathologies of science, not its norm […]. To attempt to see human behavior in terms of the forces which animate it is an essential element in understanding it, and […] to judge without understanding constitutes an offense against morality” (p. 41).
Robert Bellah
In 1980, Robert Bellah organized a conference at Berkeley on “Morality as a Problem of the Social Sciences,” based on the idea that the moral dimension is a constitutive characteristic of social science itself. In acknowledging that “engagement on values, in one form or another, is inevitable for those doing research,” his intention was to lend new visibility to “a way of thinking that existed but was never discussed as such” (Bellah et al., 1983, p. 8). The conference was meant to answer two broad questions: “a) Why has this interest and concern about moral issues in social science occurred? In other words, what are social science’s difficulties? Why are past guidelines unsatisfactory, suspect, pallid, wrong, or whatever? b) If we are to abandon the stance (or some would say, pretension) of value-neutrality, how can we act so as to assure that social science doesn’t disintegrate into ideologies? In other words, what kind of moral theories can we use and how can we use them and still retain legitimation in our own and others’ eyes?” In a 1983 book entitled Social Science as Moral Inquiry (Haan et al., 1983) that collected some of the contributions to the seminar,5 Bellah et al. (1983, p. 8) warn that thinking along these lines will lead to a reconsideration of the “role of social science in social policy” and the very character of the various disciplines (since “the failure—of economics, psychology, anthropology and history—to deal adequately with the ethical dimension has precipitated questioning and doubt and stimulated the beginning of new formulations”). For his own part, Bellah (1983) leans toward the strategy of reviving the moral stances that have always been present, albeit disguised, in social theory, and sets out to provide a reformulation of social science going as far back as the tradition of Aristotelian social and moral thought. He finds a continuity from
5 And added other chapters as well, including Hirschman’s: cf. below.
ancient social inquiry to modern social science as regards the relationship between morality and social thought, and he opposes it to the contemporary idea that it is possible to speak of a social science only when such concepts as detachment, value-neutrality, etc., have been established. Following Aristotle, who considered social inquiry as “a practical science, one indelibly linked to ethical reflection,” Bellah sees the terms “moral sciences” and “social sciences” as interchangeable (1983, p. 360). The notion of “social science as practical reason” (p. 361) sets the stage: “The purpose of social science (is not) to provide the most effective means to predetermined ends. Social science as practical reason must, on the contrary, consider ends as well as means as the objects of rational reflection” (p. 362). While the ancients (Plato, Aristotle) were concerned with what a good life is, the modern Machiavelli was “not interested in how the world should be, but how it actually is” (p. 362)—which is considered the starting point of social science. Yet Machiavelli had no less passionate ethical ends in view (the unity and independence of Italy). And the same could be said for other giants in modern political thought. Hobbes envisaged the role of an absolute state with the moral aim of survival. Tocqueville, who spoke of a new political science, was gripped by the passion for liberty. And even Marx, for all his “scientific” socialism, was moved by moral passion. Coming to his own disciplinary field, sociology, which is a comparatively recent product of social thought, Bellah contrasts the ethical aims of the main figures in the field with their claims of “establishing a genuine scientific sociology” (p. 373). Durkheim, the father of positivist sociology (according to whom social facts should be considered as things) was imbued with deep morality. His idea of “society” as prior to the individual, “had profound and political implications that determined the practical meaning of his science” (p. 366), not to speak of his practical activities as an educator. At the same time, he thought “about society ‘scientifically,’ deriving the ethical ends of action from empirical investigation” (p. 367). And Weber, talking about science as a vocation, had “eloquently argued that the relation between scientifically discoverable means and ethical ends is extrinsic and that science had nothing whatever to say about ends” (p. 368). Yet in his work there is an obvious conflict between the ethics of responsibility (the use of legitimate force) and the ethics of ultimate ends (brotherly love), between power and the religion of salvation, between science and ethics (p. 369). This mixture of ethics and science allows Bellah to borrow a concept from Weber
that would become central to his own thinking—that of tradition,6 which would even incorporate the germane Weberian concepts of charisma and rationality.7 “In modern societies, both the general social tradition and the tradition of social thought are multiple, diverse and partially in conflict” (p. 372). And the individual will continually move among all of them, using one tradition (e.g., social inquiry) to criticize another (e.g., the norms of society) or “reflecting on the logical coherence of, and the empirical evidence for, different traditional views […] In this process of reception, practice and reflection it is quite arbitrary to decide what is cognitive and what normative, when we are being scientific and when ethical. Indeed intellectual acuteness and ethical maturity in this area go hand in hand. Wisdom is the traditional word that includes both” (p. 373).
But such wisdom was repressed when, following Parsons’ “professionalization of science,” sociologists like Collins denied pursuing any practical benefit, but rather “a coherent, powerful, and verified set of explanatory ideas” (p. 374). Since our heads are filled with traditions and false consciousness, Collins maintained, “a distinction between value judgments and logical and descriptive statements” (p. 375) is essential. Helping us to see people as animals maneuvering for their interests, and making us “aware of the plurality of realities, the multiplicity of interests, and the tricks used to impose one reality upon others” (ibid), social science will free us from these illusions. In rejecting this attitude, Bellah clarifies the reasoning behind his strategy of combining morality and social science by exposing what had always been there. “It is extremely unlikely,” he argues, “that sociology can ever be a paradigmatic science in Kuhn’s sense […]. What creates coherence and continuity in social science is not consensus around a theoretical paradigm” (like the one proposed by Collins as the last word) “but concern for practical problems in the world” (p. 377). “Social science is not cumulative, and we still have much to learn from the ancients” (p. 380).
6 Sullivan (1983), in the same book, reconsiders the communitarian tradition in American political thought, contesting the liberal tradition that ignores the moral dimension, and reclaiming the earlier notion of society (republican) and of responsible citizens that existed alongside the liberal tradition. 7 Here Bellah alludes to the three types of legitimate power according to Weber: traditional, rational, and charismatic.
If we understand that “in the social sciences we study the same kinds of beings that we are” (p. 376), we cannot put ourselves outside or above what we study, and “we can undertake our inquiry only by continuing our dialogue with those we study and relative to whom we are as much students as teachers” (p. 377). “If social science is to be practical in (the) classic sense of the word, it means something very different from technological application on the model of the natural sciences. It means, above all, the participation of the social scientist in the process of self-understanding” (p. 378).
Albert Hirschman
The contribution of Hirschman to this intellectual episode is highly significant. In a single essay, “Morality and the social sciences: a durable tension,” Hirschman completes the journey between the two strategies of combination envisaged by Bellah et al. From admitting that morality can exist in disguise, he ends up advocating a new social science based on the interconnection between “proving and preaching”—that is, keeping the tension open between analysis and moral commitment, in consonance with the thought of his colleague and friend Clifford Geertz. The story of Hirschman’s article goes back to the Berkeley conference promoted by Bellah. Initially Hirschman was uncertain whether, and how, to attend the seminar.8 In the end he did participate, and spoke about shifting involvements9—the oscillation between the pursuit of happiness through consumption (private life), subsequent disappointment and enthusiasm for public action, and renewed disappointment and a return to the private sphere—a way of looking at the meaning people attribute to their actions (“the fact that man is reflective,” he explains, “means, in addition to other things, that there is a possibility of changing tastes”). In a comment on the seminar (Remarks on the Berkeley Conference, 1980),10 Hirschman clarified that his way of treating shifting 8 At that time Hirschman was highly concerned with moral issues, especially with reference to events prompted by Latin American dictatorships. See the notes on Universities and Human Rights written for the American Academy of Social Sciences. (Hirschman Papers, box 8, folder 10). 9 Hirschman was at the time preparing a book under the title, “Private happiness vs. public happiness” that later appeared as Shifting Involvements (Hirschman, 1982). A recollection of the original title is found in the Italian translation, called Felicità privata e felicità pubblica. 10 Hirschman Papers, box 8, folder 7.
Interpretive social science and morality | 57 involvements—which considers changes in behavior to be internally produced (and therefore not exogenous)—a llows a more attractive way of viewing man than as the usual homo economicus maximizer. “Man,” he maintained, “is not the economists’ rational actor, but a clumsy idealist, someone with passions and interests.” And in arguing this, he realized then and there that he was making a moral argument “in disguise.” Bellah, who was familiar with Hirschman’s intellectual and practical background,11 liked what he had to say, and asked him to write “a kind of autobiographical reflection of the moral implications of your own work over a fairly extended period of time.”12 Hirschman did not follow this suggestion literally, but—as usual with his economist colleagues in mind, trapped within their models13 —started thinking about how the theme of morality had fared within economic theory and social science in general, and here again he found oscillations and turning points. He then prepared a statement of his own14 that later became the article “Morality and the Social Sciences: a Durable Tension,” published both in the anthology on the Berkeley conference (Haan et al., 1983) and in his Essays in Trespassing: Economics to Politics and Beyond (1981).15 Here he develops his argument in three steps. First, he analyzes how social science evolved through an anti-moralist stance based on the purported incompatibility between moralizing and analytical- scientific activity, the “separation between heart and head (brains)” (p. 23). To this end, he follows two paths which, in a note entitled Moral and amoral thinking in economics16 written in preparation for the article, he labeled “history” and 11 Bellah was certainly thinking of the motivations behind some of Hirschman’s writings, such as National Power and the Structure of Foreign Trade, or Exit, Voice, and Loyalty. 12 Letter dated April 24, 1980 (Hirschman Papers, box 8, folder 6). 13 A similar critique of restricted economic models, and the need to enlarge the economics perspective, can be found in McPherson’s (1983) chapter in the same book. 14 Given on the occasion of his receiving the Frank E. Seidman Distinguished Award in Political Economy at Memphis, Tennessee (September 25, 1980). 15 In a letter to Walter Lippincott, publisher at Cambridge University Press, Hirschman says that this article “makes a good ending for the book, and justifies (along with other pieces) the slightly pretentious ‘and beyond’ of the subtitle” (Hirschman Papers, box 58, folder 10). In a similar vein, Hirschman suggested as the title of a collection of his essays that appeared in Italian and included this article, “L’economia politica come scienza morale e sociale” (Luca Meldolesi, ed., Liguori, Napoli, 1984). 16 Hirschman Papers, box 8, folder 9.
“epistemology,” respectively. The “history” of social thought refers to the “amoral birthmark of social science,” when social science emerged through a separation from morality: Machiavelli, Mandeville, Smith. According to this view, society is kept together not by love or benevolence but by interest. The “epistemology” of social thought refers to the fact that social science has advanced through new discoveries that were counter-intuitive, shocking. As Hirschman later said in the article, “one of social science’s favorite pastimes is to affirm the hidden rationality of the seemingly irrational […] [and] defend as moral or useful or at least innocent social behavior that is widely considered to be reprehensible” (1983, p. 24). This trend is clearly recognizable in what Hirschman saw as the paradox of amorality—the “‘imperialist’ expeditions of economics into areas of social life outside the traditional domain of economics […] with the predictable result that, like the consumer or producer of the economics textbook, the actors involved, be they criminals, lovers, parents, bureaucrats or voters were all found to be busily ‘maximizing under constraints’” (p. 25). Second, he recognized a recent “resurgence” of morality, which corrected some of the limitations of economic theory and acknowledged that moral behavior is needed for society to work. In micro-economics, the need to correct certain forms of market failure has been addressed by adherence to a code of professional ethics, or by recognition of the importance of trust (p. 26), while in the macro-economy some form of benevolence in the relationship between classes has been advocated to combat inflation (p. 27). In introducing the idea of morality “in disguise,” Hirschman went back to the “trained incapacity” (Veblen) of social scientists, which Geertz had in mind as well: “when one has been groomed as a ‘scientist’ it takes a great deal of wrestling with oneself before one will admit that moral considerations of human solidarity can effectively interfere with those hieratic, impersonal forces of supply and demand” (p. 30). Therefore, given the difficulty of reconciling moralizing and analytical understanding, “one effective way for social scientists to bring moral concerns into their work is to do so unconsciously!” (Hirschman, 1983, p. 31), which had happened to him when he was writing Exit, Voice, and Loyalty.17 In the Remarks on the Berkeley Conference he had gone even further: I tend to think, in general, that moralizing social science is going to be successful to the extent that it adopts this sort of disguise. This is one way of reformulating 17 This is a reference to the introduction to the German edition of Exit, Voice, and Loyalty.
the Weberian doctrine of Wertfreiheit (value-freedom), and is also the way we can have the best of both worlds: continue to enjoy the democratic benefits of the contention that social science must be positive and value free and yet smuggle in, as it were, some strong moral messages.18 I do not pretend that this is the only way of incorporating moral judgments into social science; just that it is worthwhile to think not only what are the moral considerations that belong to the field, but also how they should be marshaled. Perhaps it is in this case that, like happiness, morality in the social sciences eludes a direct quest.
In the note on Moral and Amoral Thinking there is an important passage, not included in the article, that seems to cast doubt on the merits of “morality in disguise,” in which he says that “amoral myopia keeps us from noticing allied phenomena.”19 The following is a list of similar misunderstandings, which also concern him directly:
- The tunnel effect:20 “mistaken for the opposite of envy when it is actually info effect.” Elsewhere21 he had praised the hope contained in the tunnel effect, something that sociologists had not grasped, being too focused on relative deprivation.22
- Voice: it was not understood that this “means bringing [into the analysis] face-to-face relations with love and hate as opposed to anonymous exit.”
- “Relational exchange”: this is a reference to Carol Gilligan’s (1983) presentation at the Berkeley conference in which—after having criticized the mainstream theory of moral development for being based only on a (male) ethic of fairness and rights and for having suppressed a morality of responsibility and care—she had instead advocated a personality that included both these characteristics.23
18 This would resemble Bellah’s interpretation of Weber, as we have seen above. 19 In Remarks on the Berkeley Conference, referring to his economist colleagues and the fact that even elaborate theories of consumption had never considered the idea of disappointment, he said, “my contention [is] that I explain more than they do.” 20 Hirschman (1981b). 21 Cf. letter to Claus Offe, dated September 15, 1988 (Hirschman Papers, box 5, folder 16). 22 See also Chapter 4. 23 See chapter by Gilligan in Haan et al. (1983). See also his presentation of Gilligan (Hirschman Papers, box 55, folder 5).
- “Fusion of striving and attaining: these activities get neglected by economists who need cost-benefit split.” This is a clear reference to his critique of Olson’s (1971) Logic of Collective Action (written at a time when the civil rights movement was raging), according to which social movements should not exist if the collective “benefit” of the result can be obtained without the “cost” of the commitment (Hirschman, 1982, p. 77–79).
It is therefore not surprising that, in the article, the affirmation that “morality in the social sciences eludes a direct quest” leads Hirschman to an observation that is almost paradoxical for someone who had just been singing the praises of morality “in disguise.” “It seems […] impractical and possibly even counterproductive,” he wrote, “to issue guidelines to social scientists on how to incorporate morality into their scientific pursuits” for the simple reason that “morality […] belongs in the center of our work,” and only if “social scientists are morally alive and make themselves vulnerable to moral concerns […] they will produce morally significant works, consciously or otherwise” (p. 31). This admission brings him to the third step, an abrupt shift towards a “more ambitious, and probably utopian thought,” in which Hirschman imagines “a kind of social science that would be very different from the one most of us have been practicing: a moral-social science where moral considerations are not repressed or kept apart but are systematically commingled with analytic argument without guilt feelings over any lack of integration; where the transition from preaching to proving and back again is performed frequently and with ease; and where moral considerations need no longer be smuggled in surreptitiously nor expressed unconsciously but are displayed openly and disarmingly. Such would be, in part, my dream for a ‘social science for our grandchildren.’”
Are we witnessing self-subversion?24 24 Self-subversion, which Hirschman theorized about in the later book A Propensity for Self-subversion (1995), means critical reflection on his own ideas and writings. He had already done it with regard to National Power and the Structure of Foreign Trade (in “Beyond asymmetry: critical notes on myself as a young man and some other old friends,” 1978). In the morality article he does it for the first time within a single article, in a way similar to what he would later do within a single book, The Rhetoric of Reaction. Finally, in A Propensity, he would use the same approach for all his main books (with the exception of Passions and Interests) but not his articles. Thanks to Luca Meldolesi for this note.
Charles Anderson
At that time, even within public policy analysis one could find voices arguing for enlarging the discipline’s field of interest on the basis of ethical principles. Charles Anderson, a political scientist and expert on Latin America, from whom Hirschman (1963) had borrowed the idea of “reformmongering,” had just written the article “The place of principles in policy analysis,” which Hirschman praised as follows: “It is a rather eloquent statement arguing that policy analysis cannot just take policymakers’ preferences as given, as though they were consumer tastes, but must inquire into moral principles such as justice. The fact that this paper was published as lead article in an ordinarily staunchly positivist journal (the American Political Science Review) is highly significant.”25 Anderson criticized contemporary theories—those that had directly influenced “evaluation with a positivist approach”—that reduced policy evaluation to a mere “technical appraisal of the impact of public programs” (Anderson, 1979, p. 711). According to these theories “values cannot be justified in terms of objective criteria. Hence they must be regarded as ‘preferences’ on the part of the policy maker. ‘Technical’ or ‘rational’ policy analysis can only begin once relevant values have been stipulated” (1979, p. 712). Anderson, on the contrary, considered policy evaluation as “the process of making deliberate judgments on the worth of proposals for public action” (1979, p. 711). Criticizing the instrumental conception of rationality, he stated that “to be regarded as ‘reasonable’ a policy recommendation must be justified as lawful, it must be plausibly argued that it is equitable and that it entails an efficient use of resources” (1979, p. 712–3). For this to be the case, it must be based on “a repertoire of basic concepts including authority, the public interest, rights, justice, equality and efficiency [which] as standards of policy evaluation […] are not simply preferences. They are, in some sense, obligatory criteria of political judgment” (1979, p. 713).
It is remarkable how a scientist like Anderson—unlike the many academics who aim to colonize adjacent research sectors (perhaps held to be of inferior status) with their methodologies—shows a genuine interest in the world of evaluation,
25 Letter to Norma Haan dated January 7, 1980 (Hirschman Papers, box 8, folder 6) in which Hirschman asked whether there was a possibility of inviting Anderson to the seminar, which did not happen.
at a time when at least a few authors (Scriven, House) were indicating a concern to bring morality into their own field.
A framework for morality in social science
The contributions we have considered come from four authors who were reasoning about the relationship between morality and social science from within their own disciplinary domains. For all of them, the strategy for overcoming the dichotomy between morality and social science had to hold together the principles of their discipline and the broadening of the boundaries of that same discipline. Bellah and Anderson, from the perspective of “ancient” disciplines such as Aristotelian social inquiry (Bellah) or political science (Anderson), reclaimed a continuity with an old tradition where morality was a legitimate topic of research. Geertz and Hirschman, from the position of the “modern” social sciences, sought to emphasize the inconsistencies of an “amoral” perspective in the human sciences, and proposed to keep the tension open between the two poles of morality and analysis—“analyzing with commitment” for Geertz, “proving and preaching” for Hirschman. Each of them identified some form of morality at the center of social research, whether it referred to people’s behavior, the principles governing society, or the relationship between the researcher and the object of study. And the strategies proposed reflected the theoretical position of each thinker, showing how deeply the link between morality and social science ran in their own way of thinking, and the originality of each contribution. (See Table 3.1 below). At the same time, this intellectual episode includes some striking similarities. Geertz and Hirschman alluded to the “trained incapacity” of the social scientist, and looked for stratagems to overcome it. Bellah and Hirschman recognized that moral issues had slipped into social research “in disguise.” Bellah and Anderson rejected an instrumental use of policy analysis. Hirschman and Anderson criticized the idea that moral values could be considered as consumers’ or politicians’ preferences, not to be subject to social analysis. What is interesting in this exercise is the breadth of arguments that have been brought to the task. On one hand, morality can come into play because it is a trait of people’s behavior, and as such it is the object of social science. Geertz looks at the meaning people attribute to their acts when facing ethical dilemmas. Hirschman analyzes how people think about their own enthusiasm
or disappointment regarding public or private life. Bellah is interested in the way people move between general social tradition and the tradition of social thought. Then again, morality can also enter the field by virtue of the very nature of social research, because the scientist and the object of research belong to the same social matrix (Bellah). This comes out as much in the irony of the asymmetrical relationship between the anthropologist and the informant, analyzed by Geertz, as it does in Hirschman’s introspection in reflecting on the way he addresses the topic of exit and voice.
Table 3.1. Aspects of the relationship between morality and social science in four authors/disciplines
Authors and disciplines: Geertz (anthropology); Bellah (sociology); Hirschman (economics); Anderson (political science)
Relationship between morality and social science
- Geertz: social research as a moral experience
- Bellah: social science as practical science—the link between ethical and cognitive issues
- Hirschman: morality is at the center of the social scientist’s work
- Anderson: the centrality of ethical principles in policy analysis
How to understand moral questions: (a) custom; (b) ethical principles
- Geertz: (a) the meaning people attribute to their own actions when faced with ethical dilemmas
- Bellah: (b) what goodness is
- Hirschman: (a) people’s behavior (and his own)
- Anderson: (b) public interest, autonomy, rights, justice
Main theme: (a) the object of research; (b) the researcher/subject relationship
- Geertz: (a) the dilemma between uncovering problems and solving them; (b) the asymmetrical relationship between researcher and subject
- Bellah: (a) ethical values such as community, responsibility; (b) the relationship between scientist and subject (same social matrix)
- Hirschman: (a) shifting involvements, exit and voice, passions and interests, and his own attitude of self-subversion
- Anderson: (a) recommendations should be based on basic concepts; policy goals should not be viewed as preferences
Strategy: (a) keep the tension open; (b) refer to what has always been there
- Geertz: (a) keep the tension open: analyzing with commitment
- Bellah: (b) show what was already present, starting in antiquity
- Hirschman: (a) keep the tension open between proving and preaching
- Anderson: (a) keep open the opposition between the concept of instrumental rationality and sound public interest criteria; (b) refer to ancient social investigation
4
Hirschman, possibilism and evaluation1
It is surprising how often theses and procedures dear to Hirschman are echoed, perhaps unconsciously, in recent papers concerning evaluative research—from theory-based evaluation to positive thinking approaches. The aim in these new developments is to challenge theories and methodologies that are taken for granted in the evaluation mainstream (the policy cycle, the synoptic rationality of programs, precise rules for implementation, outcome analysis based on expected effects).2 My purpose in proposing a long-distance dialogue between these new trends and Albert Hirschman is to present this author as both a precursor and a valid interlocutor whose work can strengthen the theoretical basis of the new approaches. While Hirschman’s contribution to evaluation is today recognized mainly in his work on development projects (Hirschman, 1967), other important insights have been less thoroughly explored. On one hand, there are ideas concerning the 1 An earlier version of this paper appeared in Rassegna Italiana di Valutazione, n. 62, 2015. 2 But which instead resurrect widely contested theories in the field of policy analysis. We cannot assume, clearly, that they will so easily disappear from the cultural horizon of our times.
66
| Possibilism and Evaluation: Tendler and Hirschman
behavior of the actors involved in the evaluation of any policy (decision-makers, implementors, beneficiaries), who are invoked when we set out to work on program theories (as in theory-based and realist evaluation). And on the other hand there is the possibilist attitude, which allows an enhancement of the perception of change (as in positive approaches).
Simple and complex programs

For the most part, the dispute in evaluation—especially in the debate over so-called impact evaluation—is centered on how to view complexity, in both the programs themselves and the situations in which they are launched. On one side are those who would like to reduce the complex to the simple: establish a precise causal nexus, identify the main variables (independent, intervening, dependent) and eliminate disrupting effects so as finally to be able to pin down the single element or project—the cause, that is—that will get the desired result. On the other are those who think complexity should be faced up to and that we need to find the right methods to do so (Stame, 2006). Patricia Rogers (2008) introduced into evaluation the distinction3 between simple, complicated (composed of a number of parts that interact in a predictable way), and complex (composed of elements that evolve, with emergent aspects and initially unpredictable tipping points). This gave rise to different suggested approaches (Stern et al., 2012) that take into account aspects of both complexity and complicatedness.

This problem is also linked to that of regularity. If a program is simple it will be easy to recognize those aspects of it that can be reproduced on a large scale, and its effects will be generalizable. A complicated program requires an understanding of how its parts can be combined, but it is conceivable that similar combinations might also be found in other situations. A complex program, on the other hand, will be different from others, its effects will not be generalizable, and the lessons learned in evaluating it will need to be adapted to the different contexts in which something similar is to be implemented. Pawson (2006, p. 168) defined programs as "complex systems thrust amidst complex systems."

Hirschman (1967, p. 172) immediately noted the complexity of programs, and remarked that every project is a thing in itself ("every project turns out to represent a unique constellation of experiences and consequences, of direct and indirect effects"), and that a plurality of effects must be taken into account—economic, social (redistributive, empowering for the beneficiaries), and political. And his entire discourse on crises and on triggering processes ("'pressure points' sparking more activities") is nothing other than the idea of tipping points traversed by complex programs. But he also warns against a characteristic slide from the complex to the systemic, which is indeed precisely what some recent developments in evaluation tend to do (Byrne, 2013; Mowles, 2014). When Hirschman states that paradigms are an obstacle to understanding, what he has in mind are paradigms that set out to explain everything on the basis of a model and fail to consider the different paths the actors can take. It is an argument that strengthens Pawson's position when he challenges the systemic perspective that "the whole is greater than the sum of its parts" with its inverse—"a more analytic principle that one cannot understand system properties without a working knowledge of their parts" (Pawson, 2013, p. 59). Thus the conclusion is that complexity does not entail a system.
3 Initially suggested by Glouberman and Zimmermann (2002).
Mechanisms

This term burst forcefully onto the scene with realist evaluation (Pawson, 2006), which places the CMO configuration (context/mechanism/outcome) at the center of the analysis. Here evaluation is "goal-free"—the important thing is not what was meant to be achieved, but what is produced when the subjects in a determined context (social, cultural, etc.) react to the mechanism triggered by the program and achieve a certain outcome. For Pawson, as for Weiss (1997), evaluation research should seek to uncover the possible mechanisms that make a program work, and investigate how subjects react to those mechanisms, or trigger them so as to achieve the best possible result.

There are questions about what exactly a mechanism is.4 Pawson himself gives various definitions, which basically fall within the Mertonian meaning of "middle range theories."5 Merton refers to "theories applicable to limited conceptual ranges—theories, for example, of deviant behavior, the unanticipated consequences of purposive action, social perception, reference groups, social control, the interdependence of social institutions" (Merton, 1968, p. 51) and, at the same time, to concepts that are "sufficiently abstract to deal with different spheres of social behavior and social structure, so that they transcend sheer description or empirical generalization" (Merton, 1967, p. 68). Accordingly, once a certain mechanism has been identified, the aim is not necessarily to verify its consistent presence, but to understand when and how it operates ("what works best, where, for whom, in what circumstances" is the program of the realist evaluator), because the same mechanism can have different effects in different contexts. This defense of diversity finds additional support in Hirschman, when he states that certain situations can trigger opposite mechanisms. Pawson evaluated programs based on such mechanisms, like naming and shaming, incentives, and mentoring (Pawson, 2006). He then studied mechanisms that operate during the implementation of social service programs when the subjects are confused (about whether to join, whether to stay involved, whether to support them), calling them "invisible mechanisms" to emphasize how unpredictable they could be (Pawson, 2013).

The term mechanism is familiar from Hirschman's language. He identifies social processes that are not exclusively concerned with planned action, also examining how people behave when a change (caused by an action or other factors) creates a situation that people think they have to react to. The problem is to understand which of the alternatives that present themselves in certain determined circumstances can trigger opposite mechanisms, but also alternate or integrated ones. This way, when something that isn't working needs to be remedied, a mechanism of exit and/or voice can be triggered (Hirschman, 1970)—either as alternation or as mutual reinforcement. Or, when a subject obtains something but realizes that what he or she wanted was something else, this can unconsciously bring about an oscillation between the predominance of public interest on the one hand and private on the other (Hirschman, 1982). And even concerning relative deprivation,6 according to which subjects feel uneasy when they haven't received what they think is due them, Hirschman asserts that this thesis should be seen as the inverse of the tunnel effect and complementary to it. That is to say, in a period of rapid economic expansion there is greater tolerance of inequality and less protest because of the belief—as in the case of drivers stopped in a tunnel—that once the neighboring line has moved, it will be our turn. But this makes the disappointment (and thus the relative deprivation) even greater if the expected improvement does not occur (1981b).
4 See Barbera (2004), Elster (1989), Hedstrom and Swedberg (1998).
5 See Pawson (2010).
6 This thesis is known to be at the origin of many theories of social deviance.
Implementation, or what we learn along the way

Evaluation has always alternated between periods (or policy areas) in which a focus on processes prevails, and periods (or policy areas) in which the focus is entirely on outcomes. Today we live in a moment when outcomes are in vogue (Evidence Based Policy, EBP; New Public Management, NPM). The contrast is spurious, of course, because a result cannot be achieved independently of a process, and a process is valid (or not) based on the result. Conclusion—it is necessary to look at both and see how they are connected.

On the other hand, the increased attention given to project outcomes is not only a question of expected and unexpected effects. Outcomes belong to the sphere of decision-making (held to be crucial—results must meet objectives), while implementation is often seen as residual (and perhaps "overly practical"). Nothing could be more wrong, many evaluators have noted. Weiss (1997), for example, in her theory-based evaluation, speaks of implementation theory and program theory (the latter concerning the mechanism by which the recipients make the program work). In addition, many other authors (beginning with Pressman and Wildavsky, 1975) have pointed to the fact that it is in its implementation that a program becomes what it is, since the decision to launch it is compounded by the decisions of the people who have to implement it. Decisions, that is, such as: All of it or only part of it? And which part? In one way rather than another? Slavishly following the guidelines (compliance)? Or by exercising discretion to tailor the project to the situation (innovation, street-level bureaucracy)?7

The rationale for this point is based on a typical Hirschmanian idea which concerns the alternatives confronting an action to be taken. Hirschman (1963) opposed the idea of stark contrasts (success/failure) and suggested looking at the positive and negative aspects of a given action in order to strengthen the former and weaken the latter. With this aim, he studied the alternatives that come out of a single point of origin (how one thing does or doesn't lead to another), and how they play out over time. What is important in Hirschman's way of seeing things is that it is the subjects who decide what to do using their own heads, acting repeatedly at their own risk with the resources at their disposal, and therefore also doing things not foreseen by the programmers. This is why he suggests following programs closely—to understand what happens at turning points, moments when decisions must be made, and because it is this that can lead to new ways of doing certain things and to processes of learning.
7 I myself have used a similar setup in my evaluation of the social services in some of Rome's municipalities (Stame, Lo Presti, Ferrazza, 2009).
Intrinsic and extrinsic motivation

One criticism of mainstream evaluation theory is that the subjects benefiting from a program are generally viewed as passive objects, representing a "target" to be shot at with arrows in the form of incentives or other stimuli so that they will do things they wouldn't otherwise do. This way of thinking has been strongly criticized, both by constructivist approaches (empowerment evaluation, participatory evaluation, etc.) that focus mainly on the behavior of actors based on their own values (perhaps without lingering over the actual intentions of the decision-makers), and by theory-based approaches to evaluation (Weiss, Pawson) that deem it essential to observe how actors situated in determined contexts can react in different ways to stimuli from incentives and other forms of persuasion. "Positive thinking" approaches (see below), in turn, draw on what people do voluntarily, on the good things that have happened for them in a program. Burt Perrin (2014) linked this theme to the need to look at the intrinsic motivations for an action, which interact with the extrinsic ones.

This theme reflects an issue that Hirschman (1982) raised in opposition to the theory of the free rider as a rational actor (Olson, 1971). According to this theory collective action has a cost, so that the free rider (who doesn't want to pay it) is acting rationally if he or she can benefit without participating. Hirschman instead maintained that participation can become enjoyable in itself, not only due to the well-known theory that the means become the ends (as Merton says), but also because "with some activities there is not a net distinction between pursuit and goal-attainment, partly because [sometimes, during the action] goal-attainment is forever evanescent."8
8 Hirschman, Letter to Coleman, January 30, 1984 (Hirschman Papers, box 54, folder 1).
Enlarging the sphere of the possible—generative thinking

Having concerned myself with approaches to evaluation that go by the name of "positive thinking"—that is, appreciative inquiry (AI), developmental evaluation, most significant change, positive deviance, etc. (Stame, 2014; Stame and Lo Presti, 2015), I have argued that they allow for a form of learning different from what the mainstream envisions (double-loop rather than single-loop learning), and that they are positioned to interact better with positive action theories (and to oppose pessimist theories based solely on extrinsic motivations). The main idea of AI is to move beyond the way of thinking based on a negative, on a deficiency for which a predetermined solution is offered, and to focus instead on success, on what has been done well, which will become a stimulus for projecting in the mind a more desirable future. AI expressly states that "it is necessary to widen the sphere of the possible," and as David Mac Coy9 pointed out to me, this is the hallmark of a "generative" theory.

In this way I rediscovered a strong affinity with Hirschman's possibilism in its two variants. One of these involves appreciating that many more things were possible than had been anticipated by the program under analysis. The other, with an eye on the future, concerns cultivating the optical illusion of anticipating what could possibly happen so as to then ensure that it does, engaging "with continuity and tenacity in the work of extending the boundaries of what is (or is perceived to be) possible in order to widen the margins of discretion that can be used by the process of change" (Meldolesi, 1995, p. 120). But Hirschman (1971a, p. 359) then adds to this the question of attitude. A researcher who is not "possessed [...] by 'the passion for what is possible' rather than rely[ing] on what has been certified as probable by factor analysis" will not be able to understand concrete situations in which people have solved the problems they were facing.
Successes and failures

The issue of successes and failures has come back into the spotlight with evaluative syntheses, and more generally with the Evidence Based Policy (EBP) Movement. In the 2000s there were renewed echoes of the refrain "nothing works, it's all a failure." This suggested implicitly that programs worked poorly, and that evaluations weren't reliable (in the sense, for example, that programs are declared to be in good shape when they aren't, or that when they are, it can't be demonstrated). To overcome this, the use of "rigorous" methods was proposed (randomized experiments, counterfactual analyses). But these can only be used in simple programs (bringing us back to the problem of complication, complexity, and uncertainty).

In Stame (2010b) I recalled that the topic of failures had already been addressed through the identification of other possible cases—not only methodological, but also those concerning ("official") program theory and the (preordained) way in which it was implemented. I considered that what appear to be failures might actually be successes, and that various stakeholders in a program might hold alternative theories that would explain much better why, in spite of everything, something worked. I also observed that in the course of implementation actors can choose what to do—modifying the initial program and thus showing, perhaps unintentionally, that there can be different ways of implementing it. In that paper I made use of the Hirschmanian idea that there isn't only one single way. Generally there are alternate routes—you need to train your eyes to spot them. And I capitalized on the principle of the "hiding hand," that "quasi provocation"—as he himself put it—that had cast him out of favor at the World Bank (WB). In fact, if the difficulties of implementing a certain project had been evident from the start, it might never have been undertaken at all, and thus the possibility of solving problems with non-traditional methods would never have been discovered (Hirschman, 1967).

Another reason I was fascinated by the issue of failures (and successes) was that I remembered Hirschman's polemic against the "fracasomania" of many Latin Americans. It is not just a question of being able to see successes, as "positive thinking" approaches do. It is also essential to demystify the obstacles, since in any situation it is often possible to take alternate routes not initially foreseeable, and practice is required to be able to recognize them (because what the theory indicates as obstacles may not be—they may sometimes be neutralized or even turn out to be features that are useful for the task at hand: Hirschman, 1971d).
9 See also Mac Coy (2014).
Lessons learned and how to use them

The issue here is the generalizability of results. Mainstream approaches (counterfactual) to impact evaluation seek to find out what works in one area in order to generalize it elsewhere. Thus what international development aid agencies claim to be doing is generalizing and scaling up. They are obviously well aware that this is too ambitious, and are in practice forced to settle for second best—which amounts to identifying and implementing the lessons learned from an evaluation. Thereafter, however, they fall back into the temptation to use these lessons directly in designing new projects—in other words… generalizing. Fortunately, however, the conviction has become widespread that lessons learned can only be taken as general guidelines, which then must be adapted to the specific context of the intervention situation by the participants themselves (decision makers, operators, beneficiaries).10

Hirschman was particularly sensitive on this point. He had a legitimate concern that what he had learned in the course of his research would be interpreted as "applicable lessons" for a new "policy"—indeed, he was sure that if this happened it would cause a disaster. What he thought of as "lessons" were the original and unique ways individuals or communities found for dealing with problems and constructing remedies and exit routes. "When change turned out pretty well, it was often a one-time unrepeatable feat of social engineering, an outcome that only gives confidence that a similar unique constellation of circumstances can occur again; but trying to repeat the sequence of events formulaically in another context won't work" (1994, pp. 314–15).11 Similarly, speaking of his work for the WB,12 he admitted that even though "its operational usefulness is limited," it nevertheless "might help to develop the 'sense of analogy and relevance' of those called upon to make project decisions."13 The invitation to adopt a possibilist attitude has here taken on the semblance of the promotion of a new comparative awareness.14
10 Cf. also the DFID report: Stern et al., 2012.
11 This sentence summarizes Hirschman's responses to issues raised in a seminar on his work held at MIT in 1994.
12 See in this volume Appendix A: "Hirschman and the World Bank."
13 Letter to Kamark, dated April 11, 1966 (Hirschman Papers, box 57, folder 1). This is, among other things, a form of understanding very much akin to the principles of the realist synthesis (Pawson, 2006).
14 Comparative analysis is a topic extensively developed by Judith Tendler. See Chapters 1 and 7.
The uses of evaluation

One of the never-ending issues in evaluation concerns the uses it is put to. According to the mainstream position an evaluation should end with the results of the analysis and the recommendations of the evaluator (directly translatable into new decisions). This is referred to as "instrumental" use. Nevertheless, right from the start things didn't go the way they were supposed to. The evaluators complained that the decision-makers ignored the evaluations, and the decision-makers argued that the evaluators were "cranking out" reports that were too academic. Numerous authors took the trouble to suggest how evaluations might become more readable (brevity, lots of graphics, etc.), while others wanted to force organizations to take evaluations into account (by setting up implementation procedures, knowledge management systems, etc.). Others still (Weiss, 1998; Leviton & Hughes, 1981) identified different types of uses for evaluations—"cognitive" (serving to better clarify the program), "enlightenment" (useful for increasing knowledge of the specific area of action, and reusable at a later date), and "symbolic" (done out of simple compliance).

The ideas of Carol Weiss have always struck me as appealing. In this particular case, the instrumental use is undoubtedly what links evaluation most directly with mainstream theory and the political cycle, while envisaging uses of different types situates interest in evaluation within a wider landscape. In the history of the evaluation of WB development projects that provides the background of Hirschman's DPO, on the other hand, I see an opposition between the different types of use. At the WB the concern was only with an instrumental use (tangible outcomes to be generalized, perhaps in the form of an evaluation manual), while Hirschman refused to translate his evaluation results into "operational lessons." It was nevertheless true, in his view, that Development Projects Observed could be put to a cognitive use, since it was an investigation of the "personalities of the programs," and gave importance to the technological aspects of investments as well as to their consequences—expected and unexpected. And it even contained a possible enlightenment use—from unexpected consequences to the discovery of an exit-voice mechanism (Picciotto, 1994). Hence, DPO showed concretely how a single evaluation could lend itself to different uses if it was carried out with a passion for the possible, which was certainly not lacking in Hirschman.
Trespassing

Hirschman's work is often characterized by instances of trespassing from one discipline into another—from development economics to politics to anthropology, sociology,15 etc. He starts with a carefully delimited topic (often from an unusual "angle") and analyzes it through the eyes of different disciplines (this is the convergence of different disciplines on a common argument that Ellerman calls disciplinary triangulation).16 In addition, the relationship between evaluation and the other social sciences has become a recent evaluation warhorse (Vaessen & Leeuw, 2010). The need to draw on other disciplines (if evaluation even is a discipline—another much debated issue17) is closely linked to the emergence of evaluation approaches based on theory.18 The main points observed are as follows:
• Evaluation itself is a practical activity (we talk about communities of practice). As such, it should be analyzed using the tools of organization theory and policy analysis (Dahler-Larsen, 2012; Saunders, 2012).
• Programs are theories of action. It is essential to take into account the theoretical positions developed in economics (public economics, consumption theory), sociology (theory of reference groups), psychology (cognitive dissonance), etc. [Here, however, it should be noted that the line of reasoning should also be developed in the opposite direction. New theories can in fact be formulated during evaluation or as a consequence of it. This is what Hirschman did with latitude (DPO, 1967) and with the transformation of social energy (Getting Ahead Collectively, 1984)].
• Then there is the question of research methods (often—and erroneously—undervalued). Along with inventing its own tools, evaluation utilizes numerous research methods from some disciplines and none from others.
15 Hirschman (1981a). Actually, he never paid much attention to sociology as such, because he felt that sociologists were too "intent on their probabilistic assertions." Nevertheless, Hirschman did engage with well-known sociologists, such as James Coleman, Alessandro Pizzorno, Daniel Bell, and Michel Crozier. And thus not only with those who might be expected to share his "interpretive" social science system, such as Bellah or Schön. On this subject, see the anthology edited by Rabinow and Sullivan (1987), which I discuss in Chapter 2.
16 It is what Ellerman (2005) constructed regarding international aid policies. He criticized the role of the agencies, beginning with the World Bank (for expecting the countries receiving development aid to behave according to their directives), and maintained that development would come only through "helping people help themselves." In other words, by establishing a relationship between a helper (who helps rather than imposing) and a doer (who then does things based on what he or she decides). This is the principle of "indirect aid," which Ellerman derived from Hirschman's The Strategy, and of which he also found important useful elements in psychology (Carl Rogers), education (John Dewey), and community action (Saul Alinsky).
17 There are different positions on this: it is a transdiscipline, serving all of them (Scriven); it is a practical discipline (Saunders); it is a discipline characterized by its subject matter. And then there are authors who see it only as a praxis, which cannot have theoretical aspirations. (This is usually argued by those who want to make evaluation an offspring of their discipline, be it economics, psychology, sociology, education, epidemiology, etc.).
18 Cf. Stame (2010a).
Conversely, some disciplines seek to exercise hegemony over evaluation, in particular economics (what else?)—both in interpreting programs using the theory of rational choice, and in claiming there is a hierarchy of methods of inquiry with randomized experiments at the top and ethnographic studies at the bottom, thus moving gradually from the quantitative to the qualitative. This pretension to hegemony is contested from both sides. Everything we have said up to now refutes the idea that programs can be evaluated solely on the basis of constructs derived from rational choice (political cycle, instrumental use, etc.). And it is obvious that randomized experiments are not the exclusive monopoly of economics. They are also used in sociology (the Bureau of Applied Social Research) and psychology (Campbell's original discipline, which he then applied to evaluation). These should be utilized with caution—when programs are simple, contexts stable, and causation linear. And furthermore, even the economists have to use other research methods to account for behaviors that their experiments cannot explain. This is basically what Hirschman, an economist by training, was trying to make clear in his many works that moved from "economics to politics and beyond," as a book of his from 1981 is subtitled, to the extent that one of his anthologies is entitled How Economics Should be Complicated (2020).19
19 This anthology appeared for the first time in Italian as Come complicare l’economia, edited by Luca Meldolesi, Bologna, Il Mulino, 1988.
To conclude

This roundup of different evaluation approaches and themes allows us to see that they engaged Hirschman far more than is normally recognized. And it also means (luckily) that if this were admitted and appreciated, many aspects of evaluation theories would draw life and strength from it. Mainstream approaches to evaluation, still the most widely practiced, are based on theories of rationality (and mono-causal explanation) that are often taken for granted, which Hirschman, along with other authors,20 managed to rebut in his analyses. Theories like the "hiding hand" or the exit-voice relationship, which show the potentialities of action and the versatility of non-codified behaviors, could (should, actually) become common currency for anyone setting out to look at the ever-changing results of programmed actions, a guide to understanding how to transform such theses into opportunities for social development and the expansion of human capacities in given circumstances and conditions. In conclusion, the contribution of an author like Hirschman can be invaluable, both for specific analyses that can strengthen discoveries made in the course of evaluations, and because his "possibilist" way of thinking allows us to question the intellectual apparatus of mainstream evaluation, as intrusive as it is coercive, while at the same time offering a very useful supporting and/or alternative point of view.
20 I refer to the idea of interpretive social science, an alternative—for Hirschman and Geertz—to mainstream social theory, which was positivist and excessively quantitative. See Chapter 2.
5
Hirschman’s production line on projects and programs1
Hirschman addressed development projects and programs in a large body of writings, and through these he revisited and refined his original ideas. The list is as follows:
• The Strategy of Economic Development (1958, henceforth The Strategy). It includes passages on foreign aid as an exogenous factor in a country's development. It is argued that investments in social infrastructure enabled by foreign aid can play a rebalancing function, relieving tensions brought about by industrialization (1958, p. 204).
• A Study of Selected Bank Projects: Some Interim Observations (1965, henceforth Study), his controversial report to the World Bank after visiting the eleven projects they had selected jointly. It concentrates on themes that are "policy oriented and may, therefore, be of immediate interest to the Bank," "topics on which I have views at some variance with current Bank policies" (p. 2).2
• Development Projects Observed (1967, henceforth DPO), containing basic themes in the theory of possibilism—the principle of the hiding hand, uncertainty, latitude, the centrality of side effects.
• "Foreign aid: a critique and a proposal" (written with R. M. Bird in 1968, now in Hirschman 1971a, henceforth "Foreign Aid").
• Getting Ahead Collectively (1984a, henceforth GAC), case studies of Latin-American cooperatives financed by the Inter-American Foundation.3 During the course of the analysis there was a theoretical development, from the theory of shifting involvements to the principle of the "creation and conservation of social energy."
• "A Hidden Ambition" (preface to the 1994 edition of DPO, now in Hirschman 2015), speaking of self-subversion concerning the concept of latitude.
1 This paper was prepared for the Second Conference on the Hirschman Legacy (World Bank, Washington, 2018) and published in Meldolesi L. and Stame N., eds., A Bias for Hope, Roma: IDE, 2019.
2 On his relationship with the World Bank during the preparation and then the writing of this paper, see below in Appendix A: "Hirschman and the World Bank."
3 On Hirschman's relationship with the Inter-American Foundation and its director, Peter Hakim, see the Introduction to this volume.
Hirschman's "cognitive style"

Hirschman's way of dealing with projects offers a clear example of his cognitive style, something that he acquired in his junior partnership with Eugenio Colorni and then worked on, innovating frequently, throughout the course of his life. At the beginning of The Strategy (1958, p. v), quoting Whitehead, Hirschman states that "the elucidation of immediate experience is the sole justification of any thought; and the starting point for thought is the analytic observation of the components of this experience." As Colorni had taught him, what often sets an observation in motion is surprise, "an insight he considered worth probing. […] Of course, preliminary observations are never out of context; they are a part of an appraisal of concrete reality, and of the state of the art. But at a certain point it is the 'capacity of being surprised which is important.' This is what engenders research (and the elation that comes with it) and what provides the drive for [Hirschman] to analyze the initial perception in depth and come up with [cautious] generalizations" (Meldolesi, 1995, p. 46).

This had two consequences. First, Hirschman never entered into structured debates. When he was dealing with a topic that in addition to being of interest to him was the object of controversy, he did not opt for one side vs. the other, nor try to make them converge on a third, but contributed laterally, presenting his own ideas, derived from his own concerns. Secondly, Hirschman persevered in developing his own thinking—from the original surprise (based on unexpected consequences, blessings in disguise, inverted sequences, cognitive dissonance) he moved toward new discoveries and then tested their validity, possibly in different areas. Eventually, if he considered it useful, he would specify the necessary qualifications and the cognitive limits he had identified.4 As he himself put it, "the talent I have is not just to come up with an interesting observation; it is more a question of going to the bottom of such an observation and then generalize to much broader categories. I suppose that this is the nature of theorizing" (Hirschman, 1990, p. 156).

All this is different from the traditional way of doing research, both basic research and applied social research. Hirschman deviates from basic research because he starts with an observation of experience rather than a traditional research design (hypothesis, data collection, inference),5 and then moves continuously between probing and theory development. He departs from applied social research because his aim is not simply to test received theories in the practical world, but to extend his thinking to other fields, and possibly to recognize where it "holds" and where it doesn't.6
4 Later this same logical procedure led him to carefully verify what he had written (or was writing) to the point of updating it by applying the thesis of self-subversion (Hirschman, 1995).
5 See, for example, King, Keohane, Verba (1994).
6 Here there are similarities with Merton's idea that applied social research not only verifies, but also fructifies theory. This means "account[ing] for the discrepancies and coincidences between the 'ideal pattern' and the 'actual pattern' of relations between basic and applied social science" (1973, p. 94). This can be done by introducing concepts related to variables overlooked by the common sense of policymakers and by viewing policies from the perspective of subjects who are affected when a policy has unexpected and often unwanted consequences. It happens when it is understood that there are "total systems of interrelated variables." In the "ideal pattern" contested by Merton, "behavior is construed as a series of isolated events. Yet many of the untoward consequences of policy decisions stem from the interaction between variables in a system" (1973, p. 95).
Meldolesi (1995, p. 41) characterizes Hirschman’s cognitive style as stemming from “his sense of responsibility for his work and his determination not to ‘follow the stream’ […] his willingness to remold his own outlook to correspond (as far as possible) to concrete developments—at the cost of dealing with the same subject many different times from many different points of view.” In the following pages I will address how Hirschman navigated two structured debates that were raging at the time he was engaged with development projects: (a) projects vs. programs, and (b) qualitative vs. quantitative analysis.
The structured debate on projects and programs

In the 1950s, the World Bank was debating whether to continue funding development projects or to start funding programs. The issue therefore was "projects or programs?" "Projects" meant financing a specific intervention that had a well-defined goal, targeted specific beneficiaries, and had a predetermined itinerary that was easy to follow. It was believed, in other words, that they were simple. For this reason, supporters of projects—as Hirschman notes ("Foreign Aid," 1971c)—criticized programs because, since they touched on macroeconomic issues, they might end up favoring some groups rather than others and become useless or redundant. "Programs," on the other hand, according to their supporters, were able to overcome what was seen as the limited effectiveness of single projects. Currie (1950, p. 5) maintained that "it appears essential that the attack on the problem be incorporated in a comprehensive, overall program that provides for simultaneous action on many fronts. Economic, political and social phenomena are so inter-related and interwoven that it is difficult to effect any significant and lasting improvement in one sector of the economy while leaving the other sector unaffected." The WB held an intermediate position. It preferred "to base its financing on a national development program, provided that it was properly worked out in terms of projects by which the objectives of the program are to be attained" (Alacevich, 2009, p. 81).

According to Alacevich, the debate over programs vs. projects is parallel to that between balanced growth (Currie) vs. unbalanced growth (Hirschman). However, Hirschman's views on the matter were more nuanced. Hirschman criticized Currie on specific policy issues regarding the Colombia case, not on the general point of projects vs. programs. Hirschman was certainly no stranger to programs and plans, beginning with his collaboration on the Marshall Plan, and from his experience with Monnet's plan for France's post-war reconstruction to the Colombian experience itself.7 Indeed, in Strategy (p. vii) he had stated that after all his hope was "[to make] planning and programming activities more effective."
Hirschman on projects

Hirschman first dealt with projects and programs in the last chapter of The Strategy, in the section "The role of foreign capital and aid," where he considered foreign aid (both projects and programs) in the light of its contribution to economic development, as analyzed in the previous chapters. Then, he proposed to the WB that he visit a selected group of projects. He was interested not because projects were at the time generally considered straightforward and easily managed, but for reasons that were if anything the opposite, linked to his own way of thinking. His purpose was fundamentally cognitive. As Hirschman states in DPO (p. 1), "the development project is a special kind of investment. The term connotes purposefulness, some minimum size, a specific location, the introduction of something qualitatively new, and the expectation that a sequence of further development moves will be set in motion. […] privileged particles of the development process, and the feeling that their behavior warrants watching at close range led to the present inquiry." This is a definition of projects that reflects his method of working, along with his interest in upstream and downstream linkages. In fact, while some projects may come close to the idea of a blueprint, others "imply lengthy 'voyages of technical and socio-economic discovery'" (Study, p. 3). This distinction is further developed in DPO (p. 32) where Hirschman distinguishes between, on the one hand, projects that come close to the concept of "a set of blueprints, prepared by consulting engineers, which, upon being handed to a contractor will be transformed into three-dimensional reality within a reasonable time period"8 and, on the other, projects that are "of a completely different nature," and are "affected by a high degree of ignorance and uncertainty [where] 'project implementation' may often mean in fact a long voyage of discovery in the most varied domains, from technology to politics."
7 See Caballero Argaez (2008).
8 See Rondinelli's critique (1983) of the blueprint approach to evaluation.
Working in this way on a reduced scale and on a history offered a better understanding of a complex situation and contributed to the discovery of new trends. "Immersion in the particular proved, as usual, essential for the catching of anything general, with the immersion-catch ratio varying of course considerably from one project to another" (DPO, p. 2). In fact, looking at the small scale through the lens of possibilism and pluralism and generalizing at the middle range helps reveal that there are more alternatives than expected, and it makes people feel freer, both in analysis and in action. On the other hand, none of this kept Hirschman (Strategy, p. 208) from favoring "a revision of the deeply ingrained idea that aid to underdeveloped countries should always be for specific projects." "Project lending and direct investment serve a most useful purpose" [unbalancing], but nevertheless "there is [also] a need for general balance of payments assistance on a stand-by basis" [rebalancing] since it is impossible to foresee how fast the pressures a country's economy experiences in the course of its development will result in the appropriate domestic resource shifts.
Hirschman on programs

Once again addressing the "programs vs. projects" alternatives in "Foreign Aid" (1971c), Hirschman put forward a harsh critique of programs and their "largely neglected political implications and side effects" (p. 200) which clearly deviates from the traditional ways the topic was dealt with in the structured debate. Hirschman is concerned with the asymmetric position (a "semicolonial situation," p. 207) between donor and recipient, as shown by the type of bargain they enter into in both projects and programs. With projects, the recipient country "substitutes to some extent the donor's investment preference for its own in so far as the use of aid funds is concerned […]. Nevertheless, the aid permits the country to achieve a position in which it is unequivocally better off than without aid" (p. 200–1). With programs, on the contrary, "the donor may require that the recipient country change some of its ways and policies as a condition for receiving the funds" (p. 200). This may refer to choices about investments and consumption, monetary policies, prices, etc., which will "make one group of the recipient country worse and another better off than before" (p. 201). Hence, the program will be attacked by the circles being hurt, who will present it "as being damaging to the national interest as they define it" (p. 203).
Hirschman’s production line on projects and programs | 85 Then Hirschman’s criticism of programs takes a different turn, and brings to the task some of his cherished tools: Doubt and irony. “For the commitments entered into in the course of the program aid negotiations to be faithfully adhered to, the recipient government ought to be so convinced of the correctness of the politics to which it commits itself that it would have followed these policies even without aid” (p. 204). In this case, the donor would be rewarding virtue, a modest purpose to say the least. Instead, donors justify their legitimacy by arguing that aid plays a big part in the difference between stagnation and growth, and claim the role of “bringing virtue into the world” (p. 205). But this is rarely the case. More often, “aid-hungry governments” will enter the bargain for much more mundane reasons, and reservations and resistance will soon materialize in “half-hearted implementation or sabotage of the agreed-to policies.” Cognitive dissonance and latitude. Once the recipient country has committed itself to the bargain, it may convince itself that it has done so in the national interest (cognitive dissonance9). But this will not happen without shocks. Certain types of policy commitments, such as specific monetary and exchange rate measures, “are highly visible, verifiable, measurable and at their best irreversible,” and therefore offer less latitude in implementation and are less prone to sabotage. Here the cognitive dissonance works better. Other economic and social policies (privatization, land reforms, etc.), that have more latitude in implementation, may be “rendered inoperative through bureaucratic harassment or through lack of administrative energy.” In this way “less and less attention is paid to economic growth and social justice, supposedly the principal objectives of aid” (p. 207). Hidden rationalities. The recipient country may feel that a contract it has signed has put it in a subordinate position. Reacting to this, it may for instance not feel obliged (contrary to the donor’s expectation) to follow the donor’s lead in international political issues. It may deem questionable the donor’s claim of superior knowledge when even in the donor’s own country the suggested policies may have run into problems. It may feel resentment when in decision-making sessions concerning program implementation the apex of the country’s political structure has to interact with a simple mission staff member of the donor agency who is not at the same level of authority. Hirschman concludes by saying,” the explicit or implicit conditioning of aid on changes in politics of the recipient countries
9 On cognitive dissonance as a type of inverted sequence, and thus an element of possibilism, see Chapter 6.
86
| Possibilism and Evaluation: Tendler and Hirschman
should be avoided. This does not mean that the donor cannot make his opinions and preferences known; but it does imply that elaborate arrangements should be made to divorce the exchange of opinions about suitable economic policies from the actual aid-giving process” (p. 211). In any case, Hirschman’s attitude toward programs must be understood in the context of his comments about the “two functions of government: planning and experimenting.” “Once development plans have been introduced, […] they usefully restrict governments’ freedom of choice; however, they can also be over- restrictive. […] Once more accurate studies have been undertaken, it is likely that several elements of the plan have to be radically revised. Thus the initial plan should make clear which projects are final and which require further research” (Meldolesi, 1995, p. 52).10 In Strategy (p. 205) the point is developed further. “The contemporary fashion of drawing up comprehensive development plans or programs is often quite unhelpful. For the very comprehensiveness of these plans can drown out the sense of direction so important for purposeful policy-making. A plan can be most useful if, through its elaboration, a government works out a strategy for development. While the choice of priority areas must of course proceed from an examination of the economy as a whole, it may be best, once the choice is made, to concentrate on detailed concrete programs for these areas, as in the first Monnet Plan for France’s postwar reconstruction.”
Hirschman on qualitative vs. quantitative analysis

The opposition is generally seen as that between "big number and generalizability" (quantitative, nomothetic) and "small number and specificity" (qualitative, idiographic). The "quantitatives" generalize by drawing averages on given variables/indicators (comparing apples with apples and oranges with oranges), which, according to "qualitatives," misses important information that may exist at the margins (positive deviance). "Qualitatives" go into depth in specific cases, which, according to "quantitatives," leads to subjectivity. Currently there is a tendency to bridge the gap between qualitative and quantitative methods with mixed methods approaches, which means devising research designs that combine the different types of research, in order to compensate, at least in part, for their respective limitations.11

Hirschman's way of navigating between the poles of this opposition suits his own intention of finding possible paths for development. In DPO and GAC he carried out field work involving a small number of different cases chosen for their diversity, with the aim of drawing some kind of generalization (middle range theory). As mentioned, Hirschman believed that looking at development on a small scale (the "privileged particles of the development process"—that is, projects) would allow the "catching of anything general" (DPO, p. 2). To this purpose, he devised an original way of doing field work based on interesting histories that would allow comparative case studies that were different from both traditional case studies and qualitative analysis in general, but not incompatible with quantitative analysis.

In the projects, his way of "getting to the bottom of an observation" was in some ways different from that of his other "production lines." It was no longer simply a question of turning to other domains of inquiry (as he did, for instance, in Exit, Voice, and Loyalty). Here it was a matter of exploiting the opportunity to go to the field and get into contact with real situations. The direct contact with the actors in development (the people fighting for their "private or public happiness," GAC) and with the officers of the international agencies12 (which he speaks of in Study, DPO and GAC), made him aware of the many existing alternatives for dealing with problems and how to expand the possibilities for development.

His intention to study how people manage to find solutions to their problems had been clear from the moment of his selection of projects to visit—they had to be from a diverse range of economic sectors and geographical areas and they had to have met with difficulties13 (DPO, p. 2). Then he proposed to analyze them in depth, on the site, meeting the people concerned with their implementation, the beneficiaries and other people involved (Study, p. 6), in order to clarify the problems they had experienced and how they had addressed them. This is the reason he needed a small number of projects, but ones still diverse enough so that he could recognize the variety of ways uncertainty could be mastered. In other words, he was not interested in how a single project had fared, as a traditional case study would have been. Instead, given the uncertainty that ran through all the projects examined, he wanted to compare the different ways this uncertainty had been dealt with within individual projects—each with its specific "personality profile"—by their managers, so that lessons could be drawn for development. At the same time, it was a way of distancing himself from the nomothetic search for prerequisites and obstacles that plagued development studies.14 In DPO (p. 4) he states, "my purpose was not to establish for all projects general propositions that would almost certainly be empty, but to inquire whether significantly different experiences with projects might be traced to what, for a want of a better term, may be called their 'structural characteristics'" (intended here as technological attributes and organizational or administrative properties). Nevertheless, his original way of conducting case studies and qualitative analyses was presented as complementary to the "scientific determination of projects [which is] already within reach" (DPO, p. xvi), even though he was critical and ironical concerning the limits of cost-benefit analysis (CBA), the Planning Programming Budgeting System (PPBS) and similar tools. He was far from dismissing the utility of quantitative analysis, even if skeptical of its aspirations to scientificity, generalizability and the like. After all, he was trained as a statistician, and had written a number of articles in that capacity.
10 Note the difference between this and the WB's position mentioned above. Here it is not a question of an optimal combination of productive factors in planning programs and projects, as with the WB, but of planning (programs) and at the same time experimenting (mainly in the implementation of projects, which involves actually paying attention to the unexpected).
11 On this subject see also Stame (2021).
12 For instance, country officers of the WB usually have a good grasp of "the general political situation in countries for which they are responsible, but they cannot be sufficiently familiar with the specific political and interest group constellations around a given project. This knowledge can only be acquired by on-the-spot inquiry, and it is actually absorbed by the engineers and economists involved in project appraisal and later on in progress reports." Hirschman's suggestion is to "seek this knowledge out more systematically and to bring it to bear on project appraisal and supervision" (Study, p. 6).
13 He later realized, however, that "all projects are problem-ridden; the only valid distinction appears to be between those that are more or less successful in overcoming their troubles and those that are not" (DPO, p. 2).
14 On the subject of obstacles and prerequisites see Chapter 6.
Observations drawn from project comparison

A comparison showing how projects tackled similar problems in different ways led him to observations that later shaped the central ideas in DPO—uncertainty and latitude. If, as we have seen, the projects that had run into difficulties were selected because they belonged to different sectors and geographical areas, then the difference between them clearly came from a combination of their historical and operational characteristics. This was their "personality." Here we see the importance of the observation driving the analysis. The observation concerns certain "structural characteristics" (the kind of technology, the task, the prevalence of production vs. administration) that may create problems (conflict, bureaucracy, inefficiency, clientelism) that are tackled and/or solved with specific devices. "Such a view stresses the importance for development of what a country does and of what it becomes as a result of what it does, and therefore contests the primacy of what it is (geography, history etc.)" (DPO, p. 4), the famous prerequisites or obstacles in other words.

Take the question of administration, a typical source of headaches. Administration is presented as a supply uncertainty (DPO, p. 42). Ongoing problems such as "unstable or incompetent management, outside interference, debilitating conflicts" are linked to social and political factors, but the way they can be dealt with also depends on the characteristics of the project. Here Hirschman lists different types of conflicts, and how they can be handled according to the projects' "personality profiles."

Conflicts within the project—Intergroup conflict. The Nigerian railways, a big national project with little latitude, was unable to avoid intergroup conflict because of a recruitment policy that had favored a single group. This same difficulty was avoided in the alternative system, the highways, where operations could be assigned to small firms that were free to select personnel in such a way as to minimize intergroup friction. Moreover, this difficulty did not emerge with regard to electric power (Uganda) or telecommunications (Ethiopia), where smaller staffs belonged to an elite technical (operations and maintenance) corps.

Conflicts between projects and the political/social/bureaucratic status quo. Projects whose activities are wholly new to the country (airlines, power generation) or that are spatially isolated may not encounter such conflict. But projects that are affected by product uncertainty or whose tasks may overlap with existing agencies may be paralyzed by such agencies. Nevertheless, Hirschman found cases in which an agency succeeded in overcoming such opposition by initially grounding itself in a single task and later becoming multipurpose.

Aggressive actions originating externally and directed against the project agencies (DPO, p. 48). Some projects are more exposed to interference than others. This depends on "the appetites projects arouse and the built-in defenses they can muster." Projects with less latitude that utilize advanced technology, such as electric power, telecommunications, etc., "cannot be staffed with the sole purpose of rewarding friends and extending one's influence" (DPO, p. 49). The opposite is true of projects such as railways or irrigation that are not so technologically sophisticated, but maintain a flow of operating expenses for personnel and material inputs after construction that promises "indefinite enjoyment of the to-be-conquered benefits." These are the types of problems that led to the WB policy of "insulation from politics." But given the different situations a project can find itself in, Hirschman considers such insulation as not suited to all cases. In Study (p. 7) he lists cases in which it is not advisable. In the case of ethnic conflict, for example, the isolated agency may become the fief of one group. In other cases "it may mean that the agency is cut off from direct access to the central seat of political power and is therefore not able to develop the political power and influence it requires to accomplish its task (as in India)."
Traditional evaluation vs. Hirschmanian observation
Having been offered the opportunity to do an evaluation, Hirschman was keen to point out the difference between his way of working and the usual way, which he described as "ascertaining the rate of return and, then, applying feel, instinct, 'seat-of-the-pants' judgment," "achieving an overall appraisal of the individual World Bank-financed projects" and ranking them "along a scale that would measure their overall financial or economic results" (DPO, p. 7). Albert criticized evaluation reports which, although based solely on appraisal data (indicating rate of return or meeting a quantitative target), claimed to provide an evaluation without having carried out a proper study. He instead conducted fieldwork—observing, listening to people, comparing—from which it was possible to derive indications for future programs. But when it came to writing a report, which meant moving from observations to working out new concepts (uncertainty, latitude), he felt he needed to clarify his position on the type of research he had completed. He did this in two steps.
Hirschman’s production line on projects and programs | 91 The first was presenting his work as supplementary. “Everything I say here is intended as a supplement to existing project appraisal techniques, never as an attempt to supersede them” (Study, p. 4). Supplementing meant adding to the mainstream approach (“comparing results with expectations”) his own aim of “developing in general whatever lessons could be learnt from a close observation of the history of these projects” (Study, p. 1). This meant providing missing material in order to arrive at a judgment that otherwise would be based only on appraisal techniques. This was because “the economic return from the investment tells only a small part of the total story of the project. An important, and sometimes much the more important part of that story resides in the problems which the project has faced in the course of its career and in the successes that were achieved in tackling or solving these problems” (Study, p. 2), which he defines elsewhere as “the major opportunities that are in store” for a project (Study, p. 4). The same concept is found in DPO (p. 6). “I do not advocate shelving CBA, etc., I expect the concepts I have derived from observation to have some uses in project evaluation as additional elements of judgment.” This was a critique of the limitations of traditional project appraisal. “There is far more to project evaluation than any ranking on a one-dimensional scale can convey.” In the second place, presenting his analysis as different from the “seat-of-the pants judgment” he had just waxed ironic about, he defined his way of working as “an attempt to reclaim at least part of this vast domain of intuitive discretion for the usual processes of the raison raisonnante” (DPO, p. 7). That is to say, at least in part replacing the intuitive judgment that complements the techniques of project appraisal with well-founded reasoning. All this led to enlarging the scope of the analysis by taking into account social effects—that is, “effects on the distribution of income and wealth (and also of political power)” (Study, p. 4–5). This was the subject of the last chapter of DPO, “The Centrality of Collateral Effects”, where he admitted that he had developed “the bearing of the […] notions (of uncertainty and latitude) on project appraisal” (DPO, p. 6).
6
Possibilism, change and unintended consequences1
Speaking of change in the Introduction to A Bias for Hope, Hirschman distinguished between "voluntaristic change" and change resulting from "unintended side effects" (set in motion by actions with other purposes). Voluntaristic change, "brought about consciously by some change agent, be he a revolutionary or an agricultural extension officer," is the outcome of an intentional action, as in the blueprint approach to development.2 According to Hirschman's counter-intuitive logic, voluntaristic change might turn out to be less "revolutionary" than expected, because "the imagination of the change agent is severely limited by his immediate experience and historical precedent" (1971a, p. 37). Change as a result of unintended side-effects, on the other hand, can be more "revolutionary," because it is more difficult to intercept and to block by forces opposed to change. In both cases, Hirschman had found a way to oppose the preconceived pessimistic attitudes, incapable of grasping the most promising aspects of reality, 1 Prepared for the Third Conference on the Hirschman Legacy (Berlin, October 2019) and published in Meldolesi L. and Stame N., eds., A Passion for the Possible, Roma: IDE (2020). 2 On the possibility of projects as blueprints and the critique of the blueprint approach in evaluation, see Chapter 5.
of those he called "fracasomaniacs." In the case of voluntaristic change, any deviation from the expected causal sequence was generally seen as a disruption. Concerning development, Hirschman had been struck by "exaggerated notions of absolute obstacles, imaginary dilemmas, and one-way sequences." On the contrary, he tried to find "avenues of escape from such straitjacketing constructs in any individual case that comes up" (1971a, p. 29).3 He went so far as to imagine that the alternatives found could be even better than those planned. And well he might, if we consider the role that Hirschman attributed to surprise. In the case of unintended side effects, he followed the distinguished ancestry of the concept of unintended consequences, noting, however, that the original meaning had been lost through its slippage into a concept of "perverse effects," which he would later analyze in depth in Rhetoric of Reaction (Hirschman, 1991, p. 36). As an alternative to these bleak (negative, depressing) attitudes, he devised three conceptual expedients (blessings in disguise, inverted sequences, unintended consequences) by which change can occur, and he placed them at the foundation of possibilism. The first two speak of change that can occur in ways alternative to voluntaristic planning, but which remain in the same field. The third speaks of a change that takes place in an area different from the one in which the action started—that is, as a side effect.
Alternate routes to action for change
Blessings in disguise. This is the idea that in certain situations it is possible to turn a negative event into an asset or a stimulus. Theories of development are full of supposed regularities and their inevitable consequences. These imply that bringing about change requires certain positive prerequisites or conditions (such as political consensus, a given culture or family structure, etc.), or the elimination of certain obstacles (conflict, centralism, family structure, etc.). Hirschman mocks such rigid notions whenever he finds beneficiaries or implementers able to find solutions to their problems independently of such preconditions (1971a, p. 29). Two articles exemplify this attitude. The first is "Obstacles to Development: A Classification and a Quasi-Vanishing Act" (1971d).4 Here Hirschman criticizes 3 This was precisely the essence of the possibilist approach. 4 Originally written for Economic Development and Cultural Change (1965, n. 3) and reproduced in A Bias for Hope (1971a).
the use of the concept of the "obstacle to be overcome," which is no more than a generalization of what has already happened in developed countries, and finds that in some conditions some so-called obstacles "may turn into assets," and that there are "obstacles whose elimination turns out to be unnecessary" or "postponable." The second article is "The Search for Paradigms as a Hindrance to Understanding" (1971e) (see above, Chapter 2). It will be recalled that here Hirschman criticizes the "imposition" of models and paradigms on concrete situations, which leaves no room for seeking alternative interpretations and solutions. Inverted sequences. This is an expansion of the criticism of necessary prerequisites, or of supposed cause-effect relationships, strengthened by reference to the theory of cognitive dissonance (Festinger, 1957). This states that changes in beliefs, attitudes and even in personality can be caused by certain actions instead of being a prerequisite for them. Similarly, inverted sequences can be alternatives to certain "orderly" sequences that are predicted for development (1971b, p. 30). In other words, what was considered a cause of change may become its effect. Numerous examples of inverted sequences are offered in Getting Ahead Collectively (Hirschman, 1984, pp. 6, 9)—improvements in squatter-occupied housing become an asset in securing title of ownership rather than its consequence, and education can be induced by development rather than being a precondition for it. In these cases, contrary to what was expected, a situation developed in a positive way so that the obstacles could in fact be overcome. Change happened in a sequence different from what was expected, and solutions were found that suited the purpose because the creativity of the agents involved was given free rein. This occurred, therefore, because people made appropriate use of the surprising situation they found themselves in—an important subjective element that complemented the objective one. Unintended consequences of human action. Here (1971b, pp. 34–35) Hirschman refers to a notion that "has of course a distinguished ancestry" (Vico, Mandeville, Smith), and has "rich potential for understanding and expecting social change." Not only has this concept not been properly exploited, he notes with disappointment, it has even been criticized by social reformers who believe in equilibrium, rationality, optimality and think that "social change is something to be wrought by the undeviatingly purposeful actions of some change agents." Instead, Hirschman argues, "change can also occur because of originally unintended side effects of human actions which might even have been expressly directed toward system maintenance."
In fact, the concept of unintended consequences had not only acquired the meaning given it by its distinguished ancestry but—with the launch of new development projects and socio-economic programs—had also come to be used frequently in the opposite sense. It had lost the meaning of unexpected favorable discovery (serendipity)—the focus, as we shall see later, of the modern reworking of the concept by Merton (1936, 1968)—and had more often come to refer to an undesirable consequence or perverse effect. This was a tendency that Hirschman saw as "a betrayal of the idea of unintended consequences because it cancels the open-endedness (the open-endedness to a variety of solutions) and substitutes it by total predictability and fear" (Hirschman, 1998, p. 93).
Unintended consequences today
What is the situation today? Because of a broad collective sensitivity to complexity and a feeling of uncertainty, unintended consequences are being talked about everywhere. But certainly not in the sense intended by their distinguished ancestry. The blueprint approach and fracasomania continue to go hand in hand (to the battle cry of "nothing works!") and the original meaning of unintended consequences is denied, or at least deemed irrelevant. In guidelines for evaluating complex programs5 funded by international agencies that aim to solve specific problems (poverty, inequality, inefficiency in governance, etc.), complexity is seen as something inescapable but difficult to control. For this reason, we are asked to be alert to "challenges" and "unintended consequences," seen as "wicked problems"6 that need to be circumscribed and limited—"unintended" becomes "undesirable." The distinguished ancestry is therefore contested in several ways. First, because the concept is linked to the idea of surprise. Morrell has dedicated an entire book to this topic. Change is seen as the realization of an intentional chain of desired events with a predetermined order, and unintended consequences are viewed as a deviation from the desired effect, and therefore as failure. Referring 5 See for example Vahamaky and Verger (2019). 6 This is a reference to the famous term introduced by Rittel and Webber (1973) regarding urban planning problems. When there are many interacting systems as well as social and institutional uncertainties, and only imperfect knowledge is possible, one must settle for more or less good solutions, depending on how one understands those problems.
specifically to the complexity of systems and the uncertainty involved in making appropriate decisions, Morrell distinguishes between "unforeseen" and "unforeseeable" consequences, considering the surprise created in these situations to be disruptive, and maintains that "there are techniques [...] to minimize surprise [unforeseen] and maximize adaptability to what we cannot predict [unforeseeable]" (Morrell, 2010, p. 26). Second, there is even an aversion to the idea that anything good can exist if it is not intended. De Zwart (2015), who is often cited as having rescued the idea of unintended consequences, thinks that it was possible to conceive of them as useful only in the era when there was a "spontaneous order," as was the case at the time the concept originated, but not now, when there is a "planned order" in which what is intended is clearly established. Therefore, unintended consequences are unfavorable because they are unintended. Contrary to Hirschman, who urges us not to squander the positive legacy of our distinguished ancestry, de Zwart considers this legacy negative because it is outdated. Nevertheless, counter-intuitively, de Zwart is concerned to account for a significant exception—when actors anticipate unintended effects, but nonetheless pursue the action to avoid an even worse evil. For this reason he feels the need to distinguish between unintended consequences that are "anticipated (but unwelcome)" and those that are "unanticipated," and lays out a table crossing the variables of anticipation and desirability. He leaves empty the cell of unanticipated and intended (welcome) (which would represent the serendipity celebrated by the distinguished ancestry), and instead includes only three combinations: intended/anticipated (success of planned action), unintended/unanticipated (failure of planned action), unintended/anticipated (lesser evil). Third, with regard to development interventions, positive unintended consequences have even been considered irrelevant because they allegedly result simply from "luck"! Indeed, the Manual for the Implementation Completion Results Report of the Independent Evaluation Group—World Bank Group (IEG-WB, 2017) declares: "As the Bank is an objectives-based institution, achievements against the project development objectives (PDOs) are paramount. Thus, the benchmark for evaluation is the project's own stated objectives—not any absolute standard or someone else's conception of what good performance is. What if there are unintended positive outcomes? If a project achieves positive outcomes that were not part of the objectives statement, then credit is not given for those positive outcomes in the Outcome rating. The rationale for this is that neither the Bank nor IEG want
to encourage 'serendipity'—the fortuitous chancing upon favorable outcomes without realizing that they were materializing."
Unintended consequences and change
These positions separate unintended consequences from change, thus frustrating the very foundation of possibilism. Nevertheless, as we shall now see, the distinction between unanticipated and undesirable consequences may offer more leeway than detractors of the concept of unintended consequences thought. We should remember that a distinction between the undesirable and unanticipated had already been proposed in Merton's classic 1936 essay, considered the forerunner of the introduction of the concept of unintended consequences into modern social science. Merton in fact proposed a new object of study, "the unanticipated consequences of purposive social action." The idea expressed here is that actions with a purpose (whether "a. unorganized" or "b. formally organized") can produce unforeseen effects that: "should not be identified with consequences which are necessarily undesirable (from the standpoint of the actor). For though these results are unintended, they are not upon their occurrence always deemed axiologically negative. In short, undesired effects are not always undesirable effects. The intended and anticipated outcomes of purposive action, however, are always, in the very nature of the case, relatively desirable to the actor, though they may seem axiologically negative to an outside observer. This is true even in the polar instance where the intended result is 'the lesser of two evils'"7 (Merton, 1936, pp. 895 and 896, italics added).
Later, in a 1972 book (Evaluation Research: Methods for Assessing Program Effectiveness), Carol Weiss dealt with “unintended consequences” in her typical style. “The program has desired goals,” she wrote. But “there is also the possibility that it will have consequences that it did not intend. The discussion of unanticipated results usually carries the gloomy connotation of undesirable results, but there can also be unexpected good results and some that are a 7 As we can see, the idea of the “lesser evil” was already present in Merton, pace de Zwart.
mixture of good and bad" (Weiss, 1972, p. 32, italics added). As we see, Weiss distinguished between the anticipated and intended consequences of a program (of purposive action), criticized the identification of unanticipated with undesirable, and recognized that the unanticipated may be good. But then, she ironically prefigured a further possible identification between the anticipated (although improbable) and the desirable. "Reformers trying to sell a new program are likely to have listed and exhausted all the positive results possible" so that if they happen to be achieved it cannot be said that they were unanticipated (Weiss, 1972, p. 33). Vedung (1997, p. 54), in turn, using the example of an energy conservation program in Sweden, designed an "effects tree" in which he reasoned that the anticipated/unanticipated effects of an intervention could be favorable/unfavorable—both within the area of the intervention (main effects) and outside of it (side effects). In all these cases, changes are viewed as the end result of an action taken for a desired purpose. It should also be noted—and this is the particular contribution of possibilism—that in many cases, positive unintended consequences were brought about by actors who, while implementing the programs, found new solutions to the problems they were facing. This emerges clearly in the work of Judith Tendler, as well as in positive approaches to evaluation (Stame, 2016). Overall, we are dealing with a vast set of alternatives, for which the differentiation of unintended consequences between unanticipated and undesirable is still useful. Positive change can come about not only as some anticipated and intended result of a purposeful action, but also as something desirable but unanticipated (brought about by other means, for example) or whose positive effects occur because of other actions. (See Table 6.1)
Table 6.1. Consequences of voluntary social action and change

A (anticipated, desirable): Success. Merton: desirable for the actor. Vedung: favorable within the area of interest.
B (anticipated, undesirable): Lesser evil. Merton: value-oriented action (of some actor): lesser evil. De Zwart: lesser evil.
C8 (unanticipated, desirable): Blessing in disguise. Merton: unanticipated, not necessarily undesirable. Hirschman: blessing in disguise, inverted sequence. Weiss: unanticipated good outcome. Vedung: positive side effects. Appreciative inquiry, positive deviance.
D (unanticipated, undesirable): Failure, perverse effect. Merton: effect undesirable for some. Vedung: perverse effect in the greater area.
But if we now turn to Hirschman’s observation about the unintended consequences of other actions (the sense intended by the distinguished ancestry), then the table acquires a broader political significance. (See Table 6.2)
8 While in de Zwart’s table cell C (unanticipated and intended/welcome) is empty (see above), in this table it is rather full.
Table 6.2. Side effects of an action carried out for a different purpose

A (side effect: stability; intended effect: maintaining the system): No change.
B (side effect: stability; intended effect: change): Futility, Jeopardy.9
C (side effect: change; intended effect: maintaining the system): Unintended consequences according to the distinguished ancestry.
D (side effect: change; intended effect: change): More change than anticipated? Weiss's irony.
In conclusion
At the end of Development Projects Observed (1967, p. 167) Hirschman wrote: "Some of the most characteristic and significant events in the career of projects are ambiguous […] and the decision whether to give them a positive or a negative sign in the course of project appraisal requires considerable knowledge of the country no doubt, but also—and this is what I have been trying to convey—an awareness of the ways in which projects create entirely new openings for change." How, then, to make the most of the "rich potential for understanding and expecting social change"? The Hirschmanian conceptual expedients we began with undoubtedly show us some of the ways.10 But other voices that we have heard also use the concept of unintended consequences in ways faithful to its origins, assigning value to the alternative courses an action can take. What they have in common is being open to doubt and being prepared in the face of surprise. But in view of the traditional training of researchers, this attitude cannot 9 Categories from The Rhetoric of Reaction (Hirschman, 1991). 10 It is possible to find other examples in Hirschman's thinking—like the process becoming the purpose (the happiness of pursuit, in Shifting Involvements), or implementation as a learning process (Development Projects Observed), etc.
be taken for granted—it needs to be carefully cultivated. As Merton warned, a mind must be educated, properly prepared, to be able to discover instances of serendipity (Merton & Barber, 2004).11 This is a viewpoint that calls to mind Hirschman's appeal to appreciate creativity, and to nurture our imagination.12 So the problem becomes one of putting the openness of possibilist analysis together with the possibilist ability to use this analysis in work aimed at change—for autonomy and for democracy. But in any event, this will require unceasing effort. Because what we are really talking about is a capacity for self-reflection that has to be cultivated—always being on the alert, reflecting on what we are experiencing, and focusing as we go on the complexities and potential developments. And being ready to act so as not to let opportunities slip through our fingers.13
11 I am grateful to Veronica Lo Presti for this observation. 12 See the Introduction to this volume. 13 We touch here on the more general theme of theory of knowledge, which was at the center of Colorni’s work and to which Hirschman was indebted. See Colorni (2020), and Luca Meldolesi’s introduction to that volume.
7
Doubt, surprise and the ethical evaluator: Lessons from the work of Judith Tendler1
Judith Tendler is little known in the international community of evaluators even though her practical work and theories on development foreshadowed many of the ideas that are today at the center of debate. I had engaged with her work previously as editor of an anthology of her writings (Tendler, Progetti ed effetti, 1992), but the occasion for a welcome “return to the scene of the crime” was offered to me by the online publication of her entire oeuvre (books, articles, research, evaluation reports) as part of her Festschrift (2013). Thus I edited a new selection, Beautiful Pages by Judith Tendler (Tendler, 2018), that required me to thoroughly “immerse myself” in these precious materials, both old and new. This in turn gave me a chance, one thing leading to another, to better appreciate Tendler’s work as an evaluator and teacher, and inspired me to try to bring out its continuing vitality as a source of food for thought on current problems—those that most concern evaluators today.
1 An earlier version of this chapter appeared in Evaluation, 25 (4), 2019, pp. 449–468.
Success and failure
Tendler's work hinges on the central evaluation question, "Were the projects successful?" or—as it is put nowadays—"What works?" But her research into what was successful came from her own way of understanding development, and not from any simple evaluation technique, which is what the second formulation generally alludes to today.
Looking also elsewhere2
Tendler's approach was based on doubt (not taking for granted what in a project would be considered success, or failure) and on evidence (collected in field work, by "observing"3 projects). She was "looking at" the unexpected ("what surprised you?" she kept asking). This also meant being interested in cases that are not average, or "not representative":4 they could be positive cases, but also negative, although her preference for looking at the positive ones, apart from her avowed mission, was also meant to counteract the almost exclusive tendency of other researchers to testify to
2 It might seem that "looking elsewhere" is in conflict with the first step in Hirschman's reasoning, which in fact is observing development projects. But this is not the case. Hirschman taught Tendler (and all of us) to focus attention on a delimited theme (the development project) but at the same time (as mentioned in Chapter 5) to dig deep, to explore the consequences of this observation without restraint. This meant, therefore, continuing to untangle the knot and "also look elsewhere"—which includes speculative activity based on imagination, hypotheses, further investigation, conjecture, analogies, comparisons, etc. This procedure sometimes led him to ideas that at first sight seem far from the initial observation, but are in reality linked to it—which is what happened in the genesis of Exit, Voice, and Loyalty. Thanks to Luca Meldolesi for this insight. 3 "Observation" is a term Tendler borrowed from Hirschman (and his Development Projects Observed) for characterizing her practice as different from traditional evaluation. 4 She opposed analyses characterizing good countries or bad, on the average, for example the comparison in vogue at the time between Latin American and South East Asian countries (Tendler, 1997, p. 3).
failures.5 In "Bringing Hirschman Back In" (Tendler & Freedheim, 1994, p. 177) she (and her co-author) "confess to a bias toward the positive, but that does not worry us because it may help to counterbalance the stronger bias toward the negative in the sea of literature that surrounds us." Tendler was keen to discover "how some projects could have worked well," "how they were able to perform well despite the presence of such adversity," as she stated in New Lessons from Old Projects (1993). Here she noticed that past evaluations of the OED (WB) "have been more illuminating about the causes of failure than about the causes of success," which has meant throwing "more light on what not to do than on what to do"; and then she announced that her study would seek to do the opposite (now in Tendler, 2018, p. 165).6 However, she knew that projects that had worked well were "episodes of good performance that had come and gone, as distinct from consistently 'good' agencies, component or projects," and that good performance had "less to do with the inherent capabilities of an agency itself, than with a set of other factors": the task, outside pressures, built-in incentives to perform, "the involvement of keenly interested actors and organizations at the local level" (Tendler, 2018, pp. 166–167). Then she proposed to inquire into these aspects, with no disciplinary or topical limitation. I would characterize her approach as "looking also elsewhere": at other aspects of projects than the expected, at other disciplines. She suggested to "look open-handedly at what the organization [or the project, we could add] had accomplished, regardless of its objectives: then compare the reality and the objectives. Does the reality shed light on the objectives?" (Tendler, 2018, p. 135). A peculiar kind of goal-free evaluation. Sometimes she approached things the other way round, as when she criticized applied social research in development for not utilizing for a given subject the findings of basic research on developed countries: in this case, the "elsewhere" to look at were theories in other literatures, and other contexts. For example, the literature on the public sector in developed countries analyzes the behavior of street-level bureaucrats (Lipsky, 1980), and admits of different rationalities along with the stereotype of the ritualistic bureaucrat, while for Less Developed Countries (LDC) there is only the picture of ritualistic and lazy bureaucrats.
And in general, "even though the political-science literature has long left behind a view of the state as monolithic and unitary, the applied development literature nevertheless sounds as if it still sees the world that way" (Tendler, 2018, p. 175). She then proposed to look at the public sector in LDC with the same eyes as the public sector in developed countries, and she identified cases of responsive workers and flexible agencies, and considered under what conditions this had happened. Analogously, in Good Government in the Tropics (Tendler, 1997, p. 5), she noted that the literature on industrial performance and workplace transformation, which refers to developed countries, fosters dedication among workers, while the donor community assumes that civil servants in the LDC are inevitably self-interested. In Social Capital and the Public Sector (Tendler, 1995) she asks: "when the subject of worker participation in restructuring of large firms is in such vogue in the U.S. literature, why hasn't this interest in workers and their associations spilled over into the development field?" (now in Tendler, 2018, p. 177).
Received theories of change and surprising findings
Being well aware of what mainstream theory maintained in the cases under scrutiny, Judith was not interested in confirmation or falsification (that is, in theory testing). Instead she asked what happened when the theory did not cover what she found: she was aiming at what was missing, where things could go in a different direction. And when she had found something positive and unexpected, she inquired into the consequences that had occurred in people's well-being and policy improvement, and the set of interrelated aspects that the positive surprise suggested considering. In Tendler's way of looking at things there are neither prerequisites for success nor predetermined obstacles, but instances of success or failure to be accounted for. Comments on evaluations of bid-financed rural credit programs in six countries (Tendler, 1970) deals with the results and recommendations of six evaluations of country studies conducted for the Inter-American Development Bank. In contrast to "unsurprising" failures (that would confirm the presence of obstacles)—Judith Tendler suggests (Tendler, 2018, p. 11)—considerable attention should be given to the unexpected cases of failure, where all or most of the prerequisites were in place. Similarly, successes should not be considered as exceptions (to the rules, and therefore the theories), and why they happened should be explained.
"[The problem behind] this kind of evaluation is that one knows, by definition, the answers to why things worked or didn't work before one starts. The evaluation tends, therefore, toward categorization rather than toward a more open-ended and analytic exploration. One tallies up the problems and the achievements, and then places them into their appropriate box: existence of the classic prerequisites, lack of them, big pushes, exogenous circumstances, and exceptions to the rule. It has long been recognized, however, that prerequisites often turn out to be the result rather than the cause of development, that progress on one front often sparks—rather than being dependent on—progress on another front, that 'big push' successes often turn out to be a function of factors unrelated to the push, and that exceptions to the rule often, upon close examination, lead to the discovery of new 'rules'" (Tendler, 2018, pp. 9–10).
This sounds like a sweeping criticism of linear theories of change and some impact evaluation methodologies, and an acknowledgment of instances of complexity and contribution that require appropriate theory building and research methods. Such an attitude can be perceived through all of Tendler's writings, sometimes even in their evocative titles: Trust in a rent-seeking world (Tendler & Freedheim, 1994), which runs against main tenets of new institutionalism, or The trouble of goals with small farmers programs (and how to get out of it) (Tendler, 1973), which is a critique of the notion that efficiency, equity and the like necessarily go together. She favored keeping the tension open between the social and the economic. In this particular way of challenging contemporary program theories, it is as if she wanted to do what she had reproached the big donors for failing to do. In What ever happened to poverty alleviation? (Tendler, 1987), she criticized the Ford Foundation for having prematurely abandoned state-sponsored poverty-alleviation programs—of which she found cases of success—for an exclusive focus on markets and macroeconomic policies, as suggested by current economics theories, which "resulted in an abundant chronicling of failures and what caused them, but very little understanding of the more successful efforts and their ingredients" (now in Tendler, 2018, p. 162).
Comparative analysis
Tendler rings an alarm bell for evaluators with her disregard for "evaluation," and her preference for "studies." She does so in the name of the research method she used most of the time, a highly personal way of doing comparative analysis. In Impressions on evaluation (Tendler, 1983, p. 2) she states that
evaluations are limited to the process of making decisions about projects, and addressed to a specific commissioner. Studies, on the contrary, can make comparative analyses across sectors, finding out "what the impact of a project has been, in terms of general lessons that might be learned from this particular project experience." With an eye to the general interest, she stresses that comparative analysis is where it is possible to show what worked versus what didn't, thus offering the general public clues for improving policies. Comparative analysis was conducted through an iterative process, aiming at discovering "why a particular problem did not occur, or if it did occur it did not prevent improved outcomes," and for any such instance looking at where (and why) things worked well or badly (2007; now in 2018, p. 249). This is in fact the way she challenged traditional oppositions posited in the literature. Take the private/public dichotomy: she showed a need for understanding where and when something (private or public) was good and where not, and opportunities for combination; or she would not object that non-governmental organizations (NGOs) are closer to the people than public administrations, but would suggest also looking at cases where they are not (Tendler, 1995, now in 2018, p. 172). But comparative analysis had to be crafted for the task. The correct way of comparing was not dictated by the unit of analysis per se (as in the conventional tenet of comparing apples with apples and oranges with oranges), as it could be if comparing agencies with agencies, projects with similar projects, NGOs with NGOs. On the contrary, if the goal was that of finding out what worked and drawing lessons for improving the life of the poor, Tendler would compare private and public organizations, large and small. The focus could be the issue (urban vs. rural poverty), the task (administrative vs. productive), technical aspects (labor intensive vs. technology intensive). Here we can find an anticipation of contemporary cross-evaluation "thematic assessments," and the reasons for realist synthesis, that is to say the aggregate knowledge that is gained not by cumulating results on the same variable but by comparing horizontally across mechanisms and contexts. For example: In her original research on electric power in Brazil, she compared hydro-electric vs. thermo-electric systems, and concluded that "the construction and operation of hydro as opposed to thermal in a developing country, although incurring a greater real capital cost than is usually believed, also confers real benefits on the economy, in the form of stimulation of local production, in the creation of skills, and in training for planning" (Tendler, 1965, p. 237).
In What to think about cooperatives (Tendler et al., 1988; now in 2018, p. 139 ff.), she compared four cooperatives (or groups of cooperatives) with respect to the tasks allocated to different types of participants (technicians, workers, leaders, etc.) and the capital needed, since some crops required more capital investment, hence discriminating between farmers and poor laborers. In New directions in rural roads (Tendler, 1979; now in 2018, p. 84) she compared labor-based rural roads to equipment-based construction. She showed the advantages for a poverty alleviation policy of labor-based roads, which required traditional skills and greater involvement, and were cheaper. And then she assessed where they were most suitable. In What ever happened to poverty alleviation (Tendler, 1987; now in 2018, p. 160), a paper where she argues at length for comparative analysis, she compares public sector enterprises with NGOs (of different kinds) and assesses their performance as to the number of people reached, the competence acquired in performing their tasks, and the influence on policies affecting poor people. She then analyzes the common traits, such as mode of organization and objectives, of the programs that went well. In Rural projects through urban eyes (Tendler, 1982a; now in 2018, p. 122 ff.) Tendler assessed the results of rural projects by keeping in mind the lessons that had been learned in the evaluation of the urban projects of the War on Poverty. For example, urban projects were more successful when focused on areas of concentrated poverty, in which rich people were not interested, and this could happen also in dispersed rural areas. Rural projects were more subject to the political influence of the rich, unless their success required universal participation, as with vaccination programs.
Judith Tendler was so convinced of the usefulness of comparative analyses for finding out what works in development projects that in What ever happened to poverty alleviation, she suggested that “if the Foundation’s programs are to strive toward impact, then they will also have to create a record of what has worked and what has not.” And she concluded that funding these comparative evaluation studies would also “restore academic prestige, and therefore power, to this particular subject matter” (Tendler, 1987; now in 2018, p. 162)—surprisingly a way of showing consideration for her craft of evaluation.
Framing the evaluative questions
Tendler did not stop at contemplating discrepancies between theories and discoveries. Whether she was commissioned to do research (as in Comments on
partnership for Capacity Building in Africa, Tendler, 1998b) or originated her own research (as in Good Government in the Tropics, Tendler, 1997), when the cases she was looking at revealed that the relevant mainstream theory underlying the projects did not hold, she would change the questions to be asked, and look at places where things worked differently (a form of possibilism, à la Hirschman, that allowed her to enlarge the scope of alternative ways to development). In The rule of law (Tendler, 2007), a paper dealing with the Brazilian intervention of the WB and the UK Department for International Development (DFID), Tendler framed the report around the need to overcome the supposed trade-off between economic growth and reducing inequality, and then identified promising aspects for inquiry: "I identify four themes together with research questions, implications, and case illustrations" (now in 2018, p. 246). They are: linkages and spillovers; the interaction between the rule of law and economic development; institutionalizing the mediation of conflict; modernizing the state, discretion, commitment and reform fractions. For each of these themes she formulated questions based on the result of the comparative analysis. With reference to industrial sectors, under what circumstances and in which kinds of sectors is the impact greater and more broadly distributed? With reference to regulatory systems, why have some regulatory actions had these positive-sum outcomes for economic development and for improving the rule of law, while others have not? In Social capital and the public sector (Tendler, 1995), she challenges the idea of a clear demarcation between the public sector (rife with corruption, laziness, etc.) and the outside world of private associations where social capital thrives. And she deplores the fact that there has been so little research on cases where government performs well, and has favored the formation of social capital. Thanks to the pathological view of government bureaucracy, "the question of how positive Social Capital Formation has been able to take place in the public sector becomes quite a mystery—one of the basic ingredients of a good research question" (now in 2018, p. 175). Then, for example, she proposes to inquire into the different groups that compose a single agency, in order to understand how the ascendance of one group may be more favorable to social capital formation (and she notes "I have seen this happen many times in my own field-work"), or to inquire into how public sector trade unions could "play a major role in making [and not only on breaking] the reforms needed to be undertaken by Less Developed Countries today" (p. 176).
An ethically-oriented democratic professional
Building on Dzur's distinction between types of professionalism (social trusteeship, technocratic professionalism, democratic professionalism), Schwandt (2018) has recently inquired into what ethos of public good can characterize specific kinds of democratic professionals, with special reference to evaluators. Tendler's profile fits that of the democratic professional perfectly. Her ethos of public good was openly spelled out. It was based on two pillars: (1) helping the poor, improving people's lives and ability to solve economic and social problems; (2) improving the working of the public sector in the service of the people. And it was reflected in her personal attitude toward commissioners of evaluations and beneficiaries of programs alike, and in the way she constructed her methodological tools. Tendler's attitude to her clients was one of complete autonomy: respectful of their work, but not hesitant to make her criticism heard. She felt the need to say what she thought worked and what did not in projects with goals of poverty alleviation, fostering economic development, and building the capacity of the public sector to provide services to citizens. Her judgments would rely on her findings, not on the programs' assumptions, which she often challenged. She knew what she was talking about, based on her reading of specialized literature and on what she had "observed" in her work in the field—interviewing local operatives and professionals, public sector managers and front-line workers, and meeting and discussing with beneficiaries. The accuracy of her reflections, and the authority that came from her intellectual environment (Bianchi, 2018), helped keep the relationship with her clients going, although one can question whether the lessons that she drew from her work were indeed learned by her interlocutors. There are of course differences in this regard. While, for instance, Peter Hakim at the Inter-American Foundation (IAF) openly acknowledged the value of her work,7 as did Picciotto in his WB role heading the Operations Evaluation Department (OED),8 most big clients appeared to take fewer lessons on board, notwithstanding their
7 In the Introduction to Tendler's (1981) Fitting the Foundation Style: the case of rural credit, Peter Hakim of IAF writes: "she commented thoughtfully and provocatively on the Foundation's strengths and weaknesses, its style of operation, and its approach to grassroots development." 8 In his "Foreword" to New Lessons from Old Projects (Tendler, 1993), noticing Tendler's heterodox view of institutional design options, Picciotto states that "the insights generated by this study should encourage additional investigation in a neglected
continuous engagement with her, and the indirect influence that she has certainly exerted on field operators and territorial team leaders. This difference between IAF and big donors was well perceived by Tendler, as can be seen in Fitting the foundation style: the case of rural credit (Tendler, 1981), where she appreciated the comparative advantage of the former in its flexible methods, which suited its "canons of behavior." In fact, IAF funded primarily NGOs, supported organizations in which the poor participated in decision-making, and "believes strongly in a donors-grantees relationship with little intervention from the donor" (now in 2018, p. 108). Hence she advised IAF against adopting "technically rigorous" evaluation methods that were not suited to its scope and mission. She suggested rather that IAF concentrate on "how can evaluation be done in a way that maintains the Foundation comparative advantage and respects its way of dealing with grantees—rather than playing havoc with those ways in the attempt to do 'respectable' evaluation work" (p. 107).
Helping the poor
Tendler was concerned with improving the way of life of the people targeted by the projects and enhancing their capacities to solve problems. She had a special sense for discovering which programs favored the poor, from the perspective of distributive justice, equity and democracy. And she took into account the consequences of different resources or project characteristics for development. For example, different tasks (e.g. production vs. administration) are better or worse for poor people's empowerment; some crops are more "democratic" than others (coffee better than rice); some projects allow for more participation than others. New Lights on rural electrification: the evidence from Bolivia (Tendler, 1980; now in 2018, p. 88 ff.) contrasts productive goals vs. social goals in a program of rural electrification that is supposed to favor the poor: social goals are best suited, and they should therefore be pursued with greater vigor. Her sensitivity to these issues translated into equity-oriented research methods. In a section of her "Suggestions to evaluators" (1982b, p. A1 ff., now in 2018, p. 128 ff.), titled "Participation, benefit distribution, innovation, cost," she advised evaluators to:
area of project evaluation" https://media.entopanlab.it/storage/achii/media/judith-tendler/1990-1999/NLOP-complete.pdf, p. ix.
- Watch for examples of, or opportunities for, targeting on the poor by type of activity—e.g., low-status activities and goods, absence of elite interest in participation in the activity, class-based organizations, and so on. Are these opportunities being exploited, and if not, how might they be? (suggestion 13)
- With respect to project activities involving women, determine to what extent they augment women's income-earning capacities and other forms of power, and to what extent they reinforce women's traditional role as homemaker. If the latter is the case, make suggestions as to what changes in project design would be appropriate. (suggestion 15)
- Where there is a community contribution to projects, ascertain its distributional burden. For example, voluntary labor might fall disproportionately on the poor while contributions in cash or kind might fall, disproportionately, on the rich. (suggestion 16)
“Looking at groups that have done well in regions or countries undergoing particularly adverse economic conditions”—Tendler (1983, p. 58) wrote in Impressions on Evaluation—would provide inputs for improving the lives of poor people. And she added: “I think the topic is a neglected one because we are so committed to the idea that the poor sectors of the population suffer more than most, and that they are undergoing particular stress these days. It is contrary to the nature of our political commitment to dwell on success—because it may be looked at as saying that things are not that bad after all. I think it is extremely important to overcome our ignorance about how the successful groups have survived, because it will help us to make better decisions about what kinds of projects to support under adverse economic conditions.”
This thrust to offer encouraging insights was not thwarted when her findings showed that the expected benefit did not take place. In The Fear of Education (Tendler, 2002) she discusses the current belief that in a globalized economy there is a necessary link between education of the workers and profitability: on the contrary, she had found that in some developing countries and in some industrial sectors there is an economic rationality in keeping education low. This led her to suggest finding a better reason for educating the workers: "researchers of political economy and policy reform might explore the historical experiences of other countries—including in other times—to find ways out of the trap" (now in 2018, p. 243). For Judith Tendler, expanding literacy is a moral issue that
justifies finding out how best to achieve this goal, using a comparative method that allows drawing lessons from different situations, and different times.
Improving the working of the public sector in the service of the people
Tendler attributed a crucial role to the public sector in policies for alleviating poverty. She was looking for the contribution of responsible public servants as well as of dedicated policy-makers. She looked at both of them as people, not as anonymous social categories, and she was happy when she could find examples that supported her perspective. In this connection, she criticized two types of literature. First, the literature which was premised on the superiority of the market mechanism vis-à-vis the inefficiency of the state (depicted as full of bureaucracy, laziness, corruption). Good Government in the Tropics (Tendler, 1997) is moved by disappointment at how theories of development stemming from that literature have shaped programs (bad theories = bad programs): and she demonstrates that good research can dismiss both the social theory (in general and applied to LDC), and the program theory.9 This book is based on her research on North Eastern Brazilian regions, where she found cases of good government in different sectors—rural preventive health, employment-creating works, construction programs, agricultural extension, assistance to small enterprises. In particular, she reflected on her findings of government workers dedicated to their work, state government creating a sense of mission around these programs, workers voluntarily carrying out a larger variety of tasks than usual, trust between public servants and citizens, and good relationships between central government, local government and civil society (Tendler, 1997, pp. 14–15). Second, Tendler criticized the literature on social capital, booming after the seminal Putnam (1992) publication, which assumes that it is an offspring of civil society (associationalism) against the government. She showed that government can help the creation of civil society associations that in their turn can mobilize people to obtain better service delivery. She believed in the necessary alliance between agencies, NGOs, workers' associations, and social actors. Examples of cases in which social capital was the result of the collaboration of state agencies and 9 Similar remarks can also be found in New Lessons from Old Projects (Tendler, 1993), where Tendler analyzes the role of the public sector in rural development.
civil society groups had already been found in What ever happened to poverty alleviation (Tendler, 1987). These positive findings, of course, could not hide the ups and downs of politics. The wheel turns, so that achievements obtained with great effort can later be (almost) wiped out.10 Tendler's own emphasis on program history in part reflects these considerations. But tracking down such cases (temporary though they were) also served to justify her skepticism of the suggestion—so prevalent in international agencies—to keep away from politics,11 and to recognize the push that local and regional policy-makers had given to democratic reforms. In Rural projects through urban eyes (1982a, now in 2018, p. 123) she noticed "the positive effect that supportive and powerful political interests may have on the project." Elsewhere she suggested finding ways for politicians to be interested in good performance, by pushing for results, for instance aligning terms of service with the length of programs (four years), as in New Lessons from Old Projects (1993, now in 2018, p. 170).
Training evaluators and students
Tendler was very generous in sharing her experience with practitioners and students, in helping them become comfortable with the real world and with messy organizations. The main aspects of her pedagogic mission were spelled out for evaluators, as expressed in "Suggestions to evaluators," already referred to (Tendler, 1982b, p. A1 ff, now in 2018, p. 128 ff.). Appreciating people's contribution. In recommending field work approaches, Tendler suggested how to elicit a fruitful dialogue:
- The way of asking questions: "don't ask people what the impact was; ask, rather, 'what happened' and then ask 'what happened next?'" (suggestion 18)
- Of whom to ask questions, when and where: "ask the people what difference the new practice made in their lives; don't ask only the project staff" (suggestion 27). "Junior and field staff have more contact with and understanding of beneficiaries than persons in managerial position (…) try to
10 Perhaps the case of Ceará itself demonstrates this, based on more recent investigations. 11 This position echoes Hirschman’s criticism of the recommendation of “insulating from politics” held by many international development agencies (Hirschman, 1967, p. 48).
talk to these persons away from the office. The best opportunity for this is to take jeep trips with these persons to visit faraway beneficiaries or project sites" (suggestion 54). "Much of what is to be learned about the project will come from interviews and not documents (…) evening hours should be taken advantage of (…) 'hanging around' in the communities where the project takes place, eating and drinking with local people or local staff" (suggestion 57).
Venerating success. Success and failure are the result of a combination of elements:
- "The evaluator should treat any successes with a sense of awe. Do not be content to say that something worked well, but venture an explanation as to why it worked. Explain what is happening in the project against a background of what is predictable and what is a surprise" (suggestion 39). "Certain problems experienced by projects are recurrent and therefore not a surprise—e.g. faulty maintenance, lack of coordination between agencies, lack of funds for operating costs, schools without teachers, health clinics without doctors—(…) look for cases where the expected problems are not recurring, and then try to explain why they did not appear" (suggestion 45).
- Equity: "Do some activities seem more appropriable by elites than by others?" (suggestion 9). "what aspects of the project, if any, seem to be reaching the poorest stratum of the population? Why?" (suggestion 11).
- Possibilism: "Be alert to the possibility that certain achievements will have been made because of the disorderliness of the organization, and not despite it" (suggestion 50).
Tendler had this experience in mind when in 1985 she took up a teaching position in the Department of Urban Studies and Planning at the Massachusetts Institute of Technology (MIT). In this role she had the opportunity to train researchers to be experts in the kind of studies she advocated, launching the method she called “teaching cum research.”12 Apart from the memories of people who have benefited from her direct teaching (Bianchi, 2018), she left hints of this approach in revealing documents. Read, for instance, the syllabus of her course “Analyzing projects and organizations” (Fall
1995):13 “This course teaches students how to understand real organizational environments [chaos] and to be comfortable and analytical with a live organization”: do not “retreat to the tidier world of numbers and written words.” For this reason, the course would teach students to use spoken words (and to be able to integrate qualitative and quantitative research methods): “Being attentive to what people say: developing skills in using spoken words as raw data for analyzing the behavior of organizations, and for understanding the meaning of numbers. Paying more attention to the spoken word can help us decide how to choose the numbers and the writings that will be most important for good and interesting analysis.”
12 In The Rule of Law (Tendler, 2007; now in 2018, p. 248), there is a detailed exposition of her methodology.
13 In https://colornihirschman.org/dossier/article/83/syllabuses-of-the-course-analyzing-projects-and-organizations.
At the same time, she would treat the class as a collective, asking students to interrogate themselves (“what you are really interested in”) while interrogating reality, and to immerse themselves in multidisciplinary literature. Students were also advised to engage with problems before getting to general principles or rules of proceeding. One way of doing this was learning from the problems of their classmates: “during the discussion of the project of any particular one of your classmates certain suggestions that are applicable to your work will emerge.” Engaging with problems, in Tendler’s experience, was a way of facing the chances for development instead of being overwhelmed by predicaments. In a paper commenting on a program that was intended to promote a Partnership for Capacity Building in Africa (PCBA) (Tendler, 1998b, p. 1–3) she stated: “the characterization of the African development experience seems too monolithic—painted as a failure, as rife with corruption, administrative and political incompetence, and with civil society nowhere to be seen, either because it is repressed or too weak to make a stand. (…) What is problematic about the monolithically negative portrayal for the PCBA is that such an initiative needs to be built on an understanding of why some things have worked (within countries, as well as between countries) and others have not. (…) The monolithically negative portrayal is also problematic because it is like typical donor portrayals of Africa that many Africans have found insulting. In that sense, it is somewhat jarring in a report claiming to represent a more African view. (…)
The ‘systemic’ nature of the portrayal of Africa’s problems—all bad things go together in an analytically neat and closed circle of underdevelopment and incapacity—hinders one’s ability to figure out how to intervene, to identify the point of entry. The similarly ‘systemic’ nature of the report’s recommended approach—‘integrated’ and ‘not piecemeal’ (p. 3)—also seems unrealistic in light of what has been learned from past experience with ‘integrated’ approaches. If the analysis of Africa’s problems could show contrasts, contradictions, and jagged edges, including some bright spots, this would help indicate paths of entry into the problem. (…) Finally, the portrayal of the African continent as basket-case implies a kind of African ‘exceptionalism’ that, regardless of its justifiability, makes it difficult to learn from and apply lessons being learned in other, non-African countries. If Africa is so uniquely bad, and so resistant to development as a continent, then the solution must also be unique. There’s not much to be learned, that is, from the experience of other continents.”
Lessons
At the end of this long journey through Tendler’s ideas and approaches, it is time to draw some lessons from her work and thought. In her time she was an eccentric figure. She “did development differently” (to borrow the title of an apposite blog) when development evaluation was mainly perceived as a technical expertise about the return on investments. Although, as we have seen, she was highly respected, the full range of her heuristic tools was not shared by her colleagues or commissioners: this is perhaps the reason why she wrote so much about how to do things. Today we live in a different landscape. Now that complexity, the emergent traits of programs, and recursive causalities are on all evaluators’ agendas, Tendler’s contribution can at last be appreciated in all the depth and wealth of her insights. These concluding remarks sum up, first, some traits that vindicate her as a forerunner of positions held by evaluators today, and, second, those traits that attest to the enduring vitality of some of her dissident views, which may help orient us in current debates.
Tendler as a forerunner
Fieldwork and mixed methods. It is well accepted nowadays that evaluators should do fieldwork where the program is implemented, possibly through participatory
methods. Notwithstanding this, it is striking that an opposition continues to be asserted between qualitative and quantitative research methods, between contingency and regularity, subjectivity and objectivity—dichotomies often reinforced by proponents of Evidence-Based Policy. Tendler went to the field where programs were implemented, and observed what was happening during implementation and how people reacted to programs, in order to draw lessons and possibly generalize about good results. In doing fieldwork and utilizing participatory tools she was respectful of people’s attitudes and values: she took local knowledge seriously, she appreciated what people said and did, and tried to incorporate it in her judgment of the changes that she was observing. In fact, going to the field was part of her theoretical approach, based on an openness to surprise whereby a diverse and complex reality could push her to challenge received theories. Conversely, once Tendler identified a problem through her fieldwork engagement, her evaluation design would also consider utilizing quantitative research methods. In a very real sense, Tendler anticipated mixed-method evaluation designs.14
Positive thinking and learning. Like present positive thinking approaches (Appreciative Inquiry, Most Significant Change and others: Stame, 2014; Stame and Lo Presti, 2015), Tendler maintained that learning comes more from successes than from failures—although, as we have seen, she did not forget the cases of failure (which, however, in her view, were cases in which the “pre-conditions” for success had not functioned as expected). To those who would object that you also learn from failure—from trial and error—it can be pointed out that Tendler teaches us to distinguish between knowing and learning. Knowing has no limitations: a good evaluator has to be able to distinguish what went wrong from what went right in a project (Did it work as you expected? What worked that you didn’t expect? What didn’t work as you expected?), and she or he should be able to become familiar with the domains that are required in a particular case: trespassing disciplinary boundaries, as Hirschman would say. This is knowing. But what Tendler would maintain is that you learn more from positive than from negative cases, from what worked when it was not expected, or in ways that were unexpected. Negative cases of failure—she would say—are depressing; they tell you what not to do. Positive cases of success are encouraging, because they show that there are ways out of predicaments. Hence she suggested treating a success with “awe.” This is learning how to improve things, which is the real concern of evaluation.
14 See Stame (2021).
I find a consonance with this way of thinking in Lauren Kogan’s recommendation not to conflate accountability and learning, a conflation she sees as implicit in the Results-Based Management approach sponsored by the main foreign aid donors. Kogan (2018, p. 100) questions the assumption “that accountability and learning are essentially equivalent.” Accountability (which, in my understanding, can be equated with knowledge) tells us “whether a particular project worked” (p. 102) but, “on its own, does not improve aid” (p. 104), while learning is about “how social change works” (p. 102) and “is what improves lives.” Measuring the success of a program against the funders’ objectives “should be viewed as a secondary purpose of evaluation—as a means to an end, in which the ‘end’ is learning how to effectively promote development” (p. 105).
Implementation as a long journey of discovery in the most varied domains. As we know, Hirschman’s famous statement on project implementation (that it “may often mean in fact a long voyage of discovery in the most varied domains, from technology to politics,” DPO, 1967, p. 32) alluded to Tendler’s original research on electrical energy in Brazil.15 But in her subsequent experiences Tendler was able to refine this way of proceeding. Today there is interest in implementation across large sections of the evaluation community. Two main areas come to mind. On the one hand, there is a strand of developmental evaluators who draw directly on Hirschman’s teaching. For example, the group that comes together under the name PDIA (Problem Driven Iterative Adaptation) uses Hirschman’s quote as the premise of a text setting out their approach (Andrews et al., 2017). Here, the “long journey of discovery” is part of the proposed iterative process, which opposes the traditional logic of the project cycle. At the same time, attaching importance to what happens in the course of program implementation is one of the issues at the heart of the debate on theories of change. One need only think of realist evaluation, which aims to refine program theories of how interventions work “through highly elaborate implementation processes, passing through many hands and unfolding over time” (Pawson, 2006, p. 83).
15 See the Introduction to this volume.
Tendler as a different voice
As we have seen, Tendler’s positions were often at odds with current social theories and professional guidelines. Think of her findings on social capital or public
sector performance, and of her keeping a distance from traditional evaluation practice. These views were based on strong premises: doubt about received theories, and commitment to a supportive social science, devoted to development. We can trace this orientation by referring to topics that are widely discussed by evaluators at present: theories of change, independence and ethics, and evaluation designs and methods.
On theories of change. Well before Theory-Based Evaluation was thought of, Tendler was concerned with the theory behind programs, something she often contested in the light of what she observed in the course of her fieldwork. Elaborating program theory was a natural process with her, and she was not intimidated by the authority of the literature or of a higher-level science. Instead, the actions and feelings of her interlocutors in the field led her to look for alternative theories. As Merton would have said, based as it was on unexpected consequences (the “surprise”), her approach to theory was one of “theory fructifying” instead of “theory testing.”16 Nowadays program theories and theories of change have become common currency in evaluation. However, they have often been expressed through rigidly assumed causal relationships that are hardly distinguishable from logical frameworks. A reaction to such a rigid (technical) way of utilizing program theories has been expressed by Peter Dahler-Larsen: “the conventional design of the TBE process helps bury or conceal ambiguities that might otherwise lead to interesting and productive heuristic insights,” where by ambiguities is meant “the coexistence of multiple interpretations of a phenomenon among reasonable people while there is not necessarily an easy way to choose between the interpretations or eliminate some of them.” By embracing ambiguity—Dahler-Larsen (2018, p. 7) maintains—“evaluators can help promote collective sense-making about complex interventions as part of a democratic evaluation process.”
16 According to Merton (1973), fructifying theories—as opposed to testing them—means “to account for the discrepancies and coincidences between the ‘ideal pattern’ and the ‘actual pattern’ of relations between basic and applied social science” (p. 94). This happens by introducing concepts which refer to variables overlooked in the common sense view of the policy-maker; viewing policies from the perspective of others affected by the policy when there are unanticipated, and often undesired, consequences; and understanding that there is a ‘total system of interrelated variables.’ In the ‘ideal pattern’ that Merton contests, “behavior is construed as a series of isolated events. Yet many of the untoward consequences of policy decisions stem from the interaction between variables in a system” (p. 95).
Tendler’s attitude could be a good antidote to this new rigidity. For Tendler, it often meant—as already noted—finding inverted sequences instead of linear processes, considering things from a different angle, appreciating the relevance of side-effects of programs for the higher goal of social advancement. Here we can identify another lesson. More and more often, realizing that the world is complex and linear theories do not hold, evaluators face unexpected results. Traditionally, the focus has mainly been on negative consequences: the destruction of the environment due to big infrastructure projects, the growing inequality due to globalization, and so on. Much less attention is given to how people, located in specific contexts, and utilizing their own capacities and resources, may react creatively to unexpected difficulties.17 At least, this was the situation until recently, when a new awareness began to emerge. As Larson has argued, since a program operates within a complex adaptive system (CAS), it may not achieve the expected changes, because of the usual grievances (delayed starts, confusion about purpose and roles, lackluster uptake), but “the converse is also true. Programs may succeed precisely because they are sensitive to the complex adaptive nature of the context in which they work” (Larson, 2018, p. 358).18 Asking CAS-sensitive evaluation questions “leads the evaluator to focus on how the implementers responded to the unexpected challenges and opportunities from CAS properties” and to “draw a convincing narrative of how program outcomes and impacts were (or were not) achieved.”
17 This aspect holds a central place in Hirschman’s theory of the Hiding Hand (Hirschman, 1967, p. 8 ff.).
18 Larson shares many of Tendler’s positions, such as the “observation” of how a CAS works, the triangulation of interviews and quantitative data, and the reframing of evaluation questions.
On independence and ethical issues. A characteristic of an ethical evaluator is his or her independence. As we have seen, Tendler fulfills the role of the democratic evaluator as categorized by Schwandt. Her independence was based on her moral status: working for the public good (helping the disadvantaged, the public sector), coupled with her confidence in the quality of her desk and field research. She was committed to providing strong evidence that changes were possible and people could be empowered. This orientation was reflected in her modus operandi: research tools (questions, observations, statistics) able to detect whose interests the evaluated programs were serving. She was aware of power asymmetries of money, gender,
generation, technical capacities, and leadership roles, and was interested in discovering whether they were addressed and could be overcome. In this sense, Tendler would support the contribution that Rogers (2016) has brought to the current debate on equity in evaluation. In reviewing recent evaluations, Rogers identified ways of expressing a judgment based on equity principles, and offered a concrete example of how it is possible to face the ethical challenges that we are confronted with. This can be articulated along all the phases of an evaluation: from the theory of change, which should be formulated by involving the beneficiaries; to formulating evaluation questions (“to what extent have results contributed to decrease inequities between the best-off and the worst-off groups?” p. 206); to the evaluation design, including the way social differences are described or measured and the need to select samples that represent the worst-off groups; to evaluation management that follows a collaborative enquiry process that properly addresses issues of power imbalance; and to how resources are allocated between evaluators, who are paid, and community members, who are not. As Rogers argued, and Tendler would agree, all these aspects require a will to “contrast the spontaneous tendencies to creaming the benefits of programs to the better-off groups, and to keep the intended beneficiaries of programs distant from the benefits as well as from the evaluation.” If properly equipped with relevant research tools, this would ensure “that [the] program is not something done to people but something which supports them to be agents of their own development” (Rogers, 2016, p. 203).
On evaluation designs and methods. Many tools of Tendler’s reflective practice have been mentioned that matched her ambition to learn the lessons for development from evaluation. Let me just recall two of them.
First: Reframing the evaluation questions. In the debate stirred by the thrust for impact evaluation, there has been a focus on the different evaluation questions that require appropriate approaches and designs (Stern et al., 2013). However, the definition of the questions, and the choice of the appropriate approach, have always been considered a matter to be decided at the start. This is despite issues that might surface in the course of implementation and suggest the need to modify established designs. Tendler’s approach is different. Based on what is known about “implementation as a journey of discovery in the most varied domains…,” she would start from direct observation and the problems that emerged, and then frame the evaluation questions and designs. Thus, her evaluation design evolved in line with her observations, in a reverse sequence from the usual evaluation cycle and time-plan. Unexpected
difficulties opened up opportunities for deeper inquiry and, hopefully, new ways to achieve success.
Second: Cross-domain comparisons. In the wake of Evidence-Based Policy and Evidence-Based Medicine, there has been an upsurge of systematic reviews, syntheses, meta-analyses and the like. These aim to sum up the results of similar programs, in order to identify what is statistically meaningful. Notwithstanding the importance of cumulating knowledge about “what works” (and, as we have seen, Tendler herself had suggested the Ford Foundation do this in 1987), she would share the criticism of the limitations of that kind of research. As Pawson (2006, p. 72) noted, by their “process of simplification, standardization and aggregation […] meta-analysis eliminates most of the evidence that is capable of telling us how interventions work and how we might account for their differential effectiveness.” Judith Tendler also concurred with Pawson’s focus on middle range theory in dialogue with “realist synthesis.” As the latter is based on refining program theories about how interventions work “through highly elaborate implementation processes, passing through many hands and unfolding over time” (Pawson, 2006, p. 83), Tendler’s cross-domain comparative method—based as it is on the merit of finding analogies and differences between mechanisms that work in fields that are separate but interacting—adds a further input to the current debate.
To conclude
Reflecting on the approach, practice and ethically oriented work of Judith Tendler can help us better understand the practice of evaluators today, gain the courage to challenge professional rituals and theoretical conformism, and add content and meaning to the improvement mission of evaluation.
Appendix A Albert Hirschman and the World Bank
Albert Hirschman had relationships of various kinds with the World Bank (WB). After working on the Marshall Plan for the U.S. Federal Reserve Board, he became economic and financial advisor to the Colombian government, at the suggestion of the WB, during the years 1952–54. But it is his experience as a project evaluator, resulting in the book Development Projects Observed (DPO, 1967), that is particularly instructive concerning his attitude towards the Bank and the margins of independence offered by an international institution of that type—both to its own officers and its external collaborators.
1. Having established himself as a development economist (with The Strategy—1958) and as an analyst of Latin-American policy-making (Journeys—1963), Hirschman intended to pursue his work further, and he realized that studying the development projects of the WB (the agency most committed to the development field at the time) might be “the right fit.” Meanwhile, the WB was looking for an “external and independent evaluator” to review the outcomes of Bank-funded projects. Two different rationales thus came together temporarily—that of the scholar and that of the evaluation client. It was clear from the beginning that the client did not fully trust the scholar (and vice versa). The WB preferred to be only marginally involved, even financially, in the initiative, and the research
was sponsored by the Brookings Institution (which subsequently published DPO) and financed by the Ford Foundation and the Carnegie Corporation. Under this odd combination (which Hirschman probably did not mind in the least1), the job took on all the characteristics of an evaluation and thus of a client-evaluator working relationship—sui generis though it was. Hirschman, along with some WB operatives, decided which projects to study, organized official visits accompanied by local WB personnel, and wrote the report A Study of Selected World Bank Projects: Some Interim Observations for the WB (1965).2 His notion of evaluation was very different, however, from what was current practice at the time.
• Hirschman was an “independent” evaluator in the true sense of the word, not only because he was “external,” but as a free thinker … which he was indeed.
• He did actual fieldwork. He worked side by side with local operatives, listening to as many voices as possible and trying to grasp what was really going on and what choices were being made. (This is what he meant by “observing”3 projects, as in the title of the book that followed, DPO.)
• He selected the cases to evaluate not because they could easily be compared (same sector, same place, etc.), but based on other criteria more consistent with the object of the investigation. That is, they had to have been in place for a long time (so that medium-term effects, if not ex post, could be assessed), they had to have introduced new technology, and they had to be experimental in character (representing the true nature of development projects).4
• He did not just investigate the expected direct effects, but also the history of the projects—the problems they had brought to light, the processes they
had set in motion, how they had resolved the resulting conflicts, and what solutions had been found (indirect effects). In this way, Hirschman tested the validity of the theories the projects were based on, and also developed new ones.5
Finally, he summarized what he had understood from a theoretical perspective. His main points were the following:
Uncertainty. Not everything can be foreseen in advance. We should not try to reduce uncertainty, but show the opportunities that arise within it. (“Instead of repressing the uncertainties,” Hirschman wrote, “the Bank should, in my opinion, make a sustained effort at visualizing them,” 1965, p. 4). At the same time, not all programs have the same degree of uncertainty. Some kinds of infrastructure are blueprints.6 Others create favorable conditions, but then it is not known what the beneficiaries will make of them (for example, after irrigation is completed, what crops will the farmers actually plant?). Other projects may be modified during the course of the work as a result of social, economic or political changes (which is what happened, for example, in Nigeria). What appears to be a failure may instead mean that the crisis that was created triggered an unforeseen response from the beneficiaries and thus, in the end, offered a different solution. It is important to understand the problems the project encountered and the successes it had in dealing with them or solving them. Hirschman tried to examine unanticipated positive aspects, or conflicts that may have led to new solutions (indirect effects). For example, frustration over delays may be offset by solving the problems that caused them. A project will have effects in various spheres—social, political, etc. ROI (Return on Investment), cost-benefit analysis, and similar techniques tell only part of the story. It is also necessary to think of different possible outcomes. Problems exist, but also “major opportunities that...lie too far ‘above the call of the project’s duty’ to be included in conventional benefit calculations” (1965, p. 4).
1 Since it featured an administrative ambiguity similar (mutatis mutandis) to what he had experienced in Colombia and which, as he later wrote, “gave me a certain freedom of action” (Hirschman, 1986, p. 7).
2 Hirschman Papers, Box 57, folder 4.
3 On Hirschmanian “observation,” see Chapter 5.
4 Initially, Hirschman wanted to choose only infrastructure projects, but then, at the request of the WB, he also included projects that concerned funding agencies—this was how the Cassa del Mezzogiorno came to be selected, along with an Indian agency.
5 This is C. Weiss’s idea that evaluation is not only theory testing but also theory building.
6 Hirschman, 1965, p. 3. “The term ‘project’ conjures up the notion of a set of blueprints which can be handed to a contractor; but while some projects, for example, in thermo-electric power and telecommunications, come fairly close to this concept, others, notably in irrigation and agriculture in general, imply lengthy ‘voyages of technical and socio-economic discovery.’” This is the first formulation of the oft-cited version that appears in DPO, p. 17 (see the Introduction and Chapter 5).
Subsequently, these results were what gave Hirschman the idea of the “hiding hand”—if the difficulties of implementing a certain project had been seen earlier, perhaps the project would not have been undertaken at all, and thus the possibilities of solving things by non-traditional methods would not have been discovered. Finally, linked to these results, Hirschman also put forward some suggestions—not about which projects to focus on, but about the questions that ought to be asked, the aspects to be taken into account and, in passing, even how the project staff should behave.
2. Such an attitude seemed “alien” to the WB stakeholders, who offered the classic organizational response of clamming up when criticized “from outside”… in spite of having requested an independent evaluation. Typical of their reactions were the following.
(a) Criticism of the research method:
• Hirschman had chosen his cases badly because they were not representative of anything (in Uganda, for example). Here it should be remembered that the cases were chosen together with WB operatives. And as we have seen, this was done according to precise criteria—that they had been operational for at least five years, and that they were experimental projects involving innovative initiatives. This criticism instead assumes that there is only one criterion, that of statistical representativeness in order to construct an average. Representative of what? … one might ask.
• The method doesn’t work since the cases are not comparable. (A very naive criticism that fails to take into account relevant developments in social research—such as those recently reflected in realist evaluation.)
• Hirschman’s analyses are academic, typical of a truth-seeker who delights in presenting different alternatives. At the WB, however, we are decision makers and must choose what is viable.
(b) Criticism of the results:
• He says things we already know.
• He presents the ideas of only one side (the others) without listening to ours (the WB).
• He doesn’t know that we have already corrected some things.
• He says things that are incorrect (there was disagreement, for example, with how the Peruvian case had been analyzed).
In particular, the first criticism is linked to a refusal to accept what later became the principle of the “hiding hand.” This was rejected on two grounds:
(a) “It is not true that we didn’t know about the difficulties—we took them into account as risks, weighing them against the advantages.” Anyone supporting this criticism is missing the point of Hirschman’s argument. In fact, the “hand” he theorized conceals both the difficulties one faces (which are always unknown) and the ability to solve the problems generated by them (the creative ability cannot be captured). Looking only at the difficulties, one will in fact see them as simply the risks inherent in any choice, while it is the second aspect that is overlooked—that is, the ability of the actors to solve the resulting problems in a novel way. This aspect was not appreciated by the WB programmers because they could only conceive of one way to solve things, the way the project intended. On the contrary, this is something that needs to be verified, which means recognizing the presence of agency—of people who mobilize their own resources, think with their own heads, and make the pleasant discovery that they can do things they didn’t believe they were capable of.
(b) “It is not true that we didn’t take context into account. Careful studies were made before the project was launched.” Hirschman never denied this, but he pointed out that the economists and engineers working in the field often become aware of other aspects of the context without putting them to good use. They don’t share what they know with their superiors, and as a result the WB knows nothing about them. The Bank should instead make the most of knowledge acquired in the field: “the need is simply to seek this knowledge out more systematically and to bring it to bear on project appraisal and supervision” (1965, p. 6).
On the other hand, unlike the authors of these criticisms, other WB directors were more favorable and would have liked Hirschman to write a final operational chapter on ex ante project evaluation. He initially wrote to Kamarck (WB)7 that he was the first to admit “that [the report’s] operational usefulness is limited,” allowing that a more operational chapter might have been useful, but requested that it be written by one of them. In 1967, in a letter to van der Tak (WB), he further stated his approval of the official summary of the Study that had been prepared in the meantime. He added that he would welcome a memo based on it “that would
request and stimulate project analysts to look out for relevant characteristics of projects, both my own and possibly others as well.”8 All told, then, despite the treatment he had received, Hirschman still thought it a good thing that the WB should use his work. Later, however, seeing that this was not happening, he let the matter drop. Moreover, in 1967 he published DPO for the Brookings Institution, opening with the chapter on the hiding hand, at which point responsibility for it became his alone. Some years later, in the preface to the second edition of DPO, he wrote that the hiding hand had almost been a provocation… Actually, he explained, something that does not go as planned should not be analyzed only as a risk that should be anticipated, because it could also contain a “blessing in disguise.”
Finally, I would add that reading these comments brought to mind current practice in many other international institutions. An evaluation is not considered complete unless it contains recommendations. But these are then scrutinized against a checklist asking if they are correct, if they ought to be put into practice, or if corrective actions have already been taken. Such documents are known as evaluation management systems, and they fall under the knowledge management of institutions. I saw an enlightening example of this myself when I was part of a panel on the evaluation of EU research framework programs, and I noted that the standard response was almost always that “corrective measures have already been taken”!
3. In the light of all this, one cannot help but once again admire in retrospect Hirschman’s ability to cope. More generally, it should be said that at the time the WB had not been sufficiently shaken by external criticism or internal crises to feel it had to take into account suggestions from studies such as Hirschman’s, and was able to disregard them.9 But things certainly changed later on. In 1985, in fact, Robert Picciotto (then in the Asia Department) wrote a paper (“From project lending to adjustment lending,” WB, January 7, 1985)10 based entirely on the ideas in DPO and their links with Exit, Voice, and Loyalty.11
7 Letter dated November 4, 1966 (Hirschman Papers, box 57, folder 1).
8 Hirschman Papers, box 57, folder 1.
9 See Ledermann (2012) on the use of evaluation depending on decision-making contexts.
10 This paper and the correspondence between Hirschman and Picciotto may be found in the Hirschman Papers, box 59, folder 2.
11 Among other things, Picciotto in this paper offers a rather “Hirschmanizing” definition of projects: “by giving concrete shape and discipline to a development undertaking projects motivate people, induce cooperation and make fuller use of available resources. Through forward and backward linkages, projects also direct resources to new areas of economic activity and create a momentum for change and innovation.”
He later wrote to Hirschman about the new situation at the WB, or at least his own perception of it. Though initially, as an engineer, he was completely taken with the idea of applying quantitative methods to projects, he now realized the limitations of that approach. He regretted that few economists had followed Exit, Voice, and Loyalty, and asked Hirschman if anyone had applied Exit to game theory (sic) and whether there had been any use of the DPO typology in the many ex post evaluations of projects (Picciotto should have known this better than Hirschman!). Following this, Picciotto suggested that Hirschman send one of his students to the Project Policy Department to set up some kind of hands-on research on how the exit-voice mechanism might work in development projects. Hirschman (October 14, 1985) answered that it would be interesting “implanting exit-voice mechanisms into the organization of Bank-financed projects.” But he added that, “unfortunately (or, I rather tend to think, fortunately) there is no Hirschman school of economic development, and I cannot point to a large pool of disciples where one might fish someone, [with the exception of] Judith Tendler, who has used some of my ideas in a remarkably creative fashion and who in general has an intellectual affinity with my way of thinking.”12 Picciotto was stung by this answer. But of course he had made an ill-advised request, seeking to involve Hirschman in a project of his own, while Hirschman only wanted to know if his ideas had been useful to others and how they had been utilized (and also because he had liked Picciotto’s paper, mentioned above). Recently, Michele Alacevich (2012) wrote an essay recounting Hirschman’s history at the WB, highlighting the Bank’s negative reactions and the irreconcilability of their demands with Hirschman’s ideas. Picciotto commented that this was true for the 1960s, but that it was only part of the story, because even then there had been favorable but subterranean voices within the WB, and in more recent years there had been more open adherence to Hirschman’s thinking on development. His 1985 paper would indicate this, and it is true that Wolfensohn himself (WB president in the 1990s) said that DPO was his livre de chevet, and
that his “Comprehensive Development Framework” was inspired by it.13 This allowed new openings within the WB (including a seminar on Hirschman promoted by Osvaldo Feinstein, who also invited Luca Meldolesi). But within the WB, the person who really worked along a track inspired by Hirschman (and in particular by The Strategy) was David Ellerman, who wrote a book entitled Helping People Help Themselves (2005), and whose subtitle reads, significantly, From the World Bank to an Alternative Philosophy of Development Assistance.
12 As mentioned in the Introduction, Judith later carried out this work, which eventually resulted in “New Lessons from Old Projects” (Tendler, 1992). See also Chapter 7.
13 A paper written by Wolfensohn that proposes programming development aid in an integrated way, taking into account the concurrent funding from different international, national and NGO agencies, and coordination between different policy areas. It is questionable, however, whether this holistic framework—as Wolfensohn himself called it—would have appealed to Hirschman.
Appendix B Remembering Judith1
1 Delivered at the memorial “Remembering Judith Tendler,” MIT-DUSP, October 14, 2016.
I first met Judith in 1990. I had read Development Projects Observed, in which Hirschman said (1967, p. xi) that Judith’s “fine insights into the differential characteristics and side-effects of thermal and hydropower, and of generation and distribution, contributed in many ways to the formation of [his own] views,” and I wanted to meet her. I was beginning at the time to develop an interest in the evaluation of public programs and I was looking around for approaches to evaluation that differed from the mainstream. Thus it happened that one time at Princeton Albert suggested that, since I would later be going to Cambridge, Mass., I ought to look up Judith. I found her in her office, very busy, and all I could do was tell her why I was looking for her. So Judith invited me to her home and showed me a huge collection of mimeographs of evaluation reports, studies on international administrations, and guidelines for practitioners. I took away as much as I thought would fit in a suitcase. Back home, I soon discovered that this amazing haul, heavy as it was, was truly worth the trouble—there was so much theoretical and practical wisdom in those papers! It was difficult to choose among them in
putting together the book that I later edited (Progetti ed effetti, Naples: Liguori, 1992), which in the Italian evaluation community is still considered a milestone. What struck me as something that might be effective in our situation (regarding the Mezzogiorno, and public administration) was her way of looking at the “rich complexity of both success and failure, efficiency alongside incompetence, order cohabiting with disorder” (1968, p. xi), both within an organization and in development projects. More generally, it struck me that this sound theory might make sense for the situations we in Italy were working and teaching in—small businesses, cooperatives, welfare agencies, local administrations…
From there a friendship and partnership developed. In 1994 Judith came to Naples for a round of seminars that we had organized at the Institute of Philosophical Studies. Her four sessions, entitled “Toward a Model of Good Governance,” had enormous resonance among the students and colleagues of Liliana Bàculo, Luca Meldolesi and myself. Thereafter, some of these same students took SPURS (Special Program for Urban and Regional Studies) courses at MIT, often participating in Judith’s sponsored research programs on development projects, including fieldwork in Ceará, Northeast Brazil. Some of them have since become development practitioners, both in Italy and in international organizations, or have served in administrative roles, and they consistently acknowledge that it was Judith’s teaching that inspired their work.
Continuing with my work on theories of evaluation, and especially focusing on recent developments that come under the heading “positive thinking approaches” (appreciative inquiry, developmental evaluation, most significant change, positive deviance), I came back to Judith’s work. I identified Judith as the pioneer of these approaches. It was Judith, in fact, who anticipated what is now recognized as the actual need to overcome the negative feeling that “nothing works,” suggesting instead2 that “the evaluator should treat any successes with a sense of awe. Do not be content to say that something worked well. Explain what is happening in the project against a background of what is predictable and what is a surprise” (1982b, p. A-6). To do this, Judith showed with her special skills how to address problems of equity, participation and democracy, to say nothing of gender issues. At the same time, in order not to be intimidated by so-called “robust” methodologies that sometimes hide promising results rather than exploiting their potential, she utilized and promoted a vast arsenal of research tools—from ways of producing relevant statistics to how to actually interview
people. In all this, to borrow Hirschman’s words, Judith offered a wonderful example of how to reduce the tension between morality and social science (but without eliminating it).
2 As mentioned in Chapters 1 and 7.
Bibliography
Adelman, J. (2013). Worldly Philosopher: The Odyssey of Albert Hirschman. Princeton, NJ: Princeton University Press.
Alacevich, M. (2009). The Political Economy of the World Bank: The Early Years. Stanford, CA: Stanford University Press.
Alacevich, M. (2012). Visualizing Uncertainties, or how Albert Hirschman and the World Bank Disagreed on Project Appraisal and Development Approaches. World Bank, Policy Research Working Paper 6260.
Anderson, C. W. (1979). “The Place of Principles in Policy Analysis,” The American Political Science Review, 73(3), 711–723.
Andrews, M., Pritchett, L., & Woolcock, M. (2017). Building State Capabilities: Evidence, Analysis, Action. Oxford: Oxford University Press.
Annis, S., & Hakim, P. (Eds.). (1988). Direct to the Poor: Grassroot Development in Latin America. Boulder, CO and London: Lynne Rienner Publishers.
Barbera, F. (2004). Meccanismi Sociali: Elementi di Sociologia Analitica. Bologna: Il Mulino.
Bellah, R. N. (1979). “New Religious Consciousness and the Crisis in Modernity,” in P. Rabinow & M. Sullivan (Eds.), Interpretive Social Science: A Reader. Berkeley, CA: University of California Press.
Bellah, R. N. (1983). “The Ethical Aims of Social Inquiry,” in N. Haan, R. N. Bellah, P. Rabinow, & W. M. Sullivan (Eds.), Social Science as Moral Inquiry. New York, NY: Columbia University Press.
Bellah, R. N. (1987). “The Quest for the Self: Individualism, Morality, Politics,” in P. Rabinow & M. Sullivan (Eds.), Interpretive Social Science: A Second Look. Berkeley, CA: University of California Press.
Bellah, R. N., Haan, N., Rabinow, P., & Sullivan, W. M. (1983). Introduction. In N. Haan, R. N. Bellah, P. Rabinow, & W. M. Sullivan (Eds.), Social Science as Moral Inquiry. New York, NY: Columbia University Press.
Bianchi, T. (2018). In Good Intellectual Company: Judith Tendler and Hirschman’s Tradition. In L. Meldolesi & N. Stame (Eds.), For a Better World. Roma: IDE.
Byrne, D. (2013). Evaluating Complex Social Interventions in a Complex World. Evaluation, 19(3), 217–228.
Caballero Argaez, C. (2008). “Albert Hirschman en Colombia y la Planeación del Desarrollo,” Desarrollo y Sociedad, primer semestre.
Campbell, D. T. (1969). Reforms as Experiments. American Psychologist, 24, 409–429.
Casley, D. J., & Lury, D. A. (1982). Monitoring and Evaluation of Agricultural and Rural Development Projects. Baltimore: Johns Hopkins University Press, for the World Bank.
Colorni, E. (2021). “The Philosophical Illness” and Other Writings, edited by L. Meldolesi. New York, NY: Bordighera Press.
Connell, J. P., & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress and Problems. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), New Approaches to Evaluating Community Initiatives, vol. II. Washington, D.C.: The Aspen Institute.
Crozier, M. (1988). Comment réformer l’Etat? Trois Pays, Trois Stratégies; Suède, Japon, Etats-Unis. Paris: La Documentation Française.
Currie, L. (1950). Some Prerequisites for Success of the Point 4 Program. Annals of the American Academy of Political and Social Science, 27, 102–108.
Dahler-Larsen, P. (2012). The Evaluation Society. London: Sage.
Dahler-Larsen, P. (2018). Theory-based Evaluation Meets Ambiguity: The role of Janus Variables. American Journal of Evaluation, 39 (1), 6–23.
de Zwart, F. (2015). Unintended but not Unanticipated Consequences. Theory and Society, 44, 283–297.
ECLAC (Economic Commission for Latin America and the Caribbean). (1990). Changing Production Patterns with Social Equity. UN-ECLAC, Santiago, Chile.
Ellerman, D. (2005). Helping People Help Themselves: From the World Bank to an Alternative Philosophy of Development Assistance. Ann Arbor, MI: University of Michigan Press.
Elster, J. (1989). Nuts and Bolts for the Social Sciences. New York, NY: Columbia University Press.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Palo Alto, CA: Stanford University Press.
Flyvbjerg, B. (2016). The Fallacy of Beneficial Ignorance: A test of Hirschman’s Hiding Hand. World Development, 84, 176–189.
Forss, K., Lindkvist, I., & McGillivray, M. (Eds.). (2021). Long Term Perspectives in Evaluation. New York, NY: Routledge.
Geertz, C. (1973). The Interpretation of Cultures. New York, NY: Basic Books.
Geertz, C. (1987). Deep Play: Notes on the Balinese Cockfight. In P. Rabinow & M. Sullivan (Eds.), Interpretive Social Science: A Reader. Berkeley, CA: University of California Press.
Bibliography | 139 Geertz, C. (1995). After the Fact. Cambridge, MA: Harvard University Press. Geertz, C. (2000). Available Light. Princeton, NJ: Princeton University Press. Geertz, C. (2001). School Building: A Retrospective Preface. In J. W. Scott with D. Keats (Eds.), Schools of Thought. Princeton, NJ: Princeton University Press. Gilligan, C. (1983). Do the social sciences have an adequate theory of moral development? In N. Haan, R. N. Bellah, P. Rabinow, W. M. Sullivan (Eds.), Social Science as Moral Inquiry. New York, NY: Columbia University Press. Glouberman, S., & Zimmermann, B. (2002). Complicated and Complex Systems: What Would Successful Reform of Medicare Look Like? Commission on the Future of Health Care in Canada, Discussion Paper 8. Gow, D. D., & Morss, E. R. (1988). The Notorious Nine: Critical Problems in Project Implementation. World Development, 16 (12), 1399–1418. Haan, N., Bellah, R. N., Rabinow, P., Sullivan, W. M. (Eds.). (1983). Social Science as Moral Inquiry. New York, NY: Columbia University Press. Hedstrom, P., & Swedberg, R. (1998). Social Mechanisms. Cambridge, CA: Cambridge University Press. Hirschman, A. O. (1958). The Strategy of Economic Development. New Haven, CT: Yale University Press. Hirschman, A. O. (1963). Journeys Toward Progress: Studies of Economic Policy-Making in Latin America. Washington, DC: Twentieth Century Fund. Hirschman, A. O. (1965). A Study of Selected Bank Projects: Some Interim Observations. Mimeo: World Bank. Hirschman, A. O. (1967). Development Projects Observed. Washington, D.C.: The Brookings Institution, second ed. 2015. Hirschman, A. O. (1970). Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations and States. Cambridge, MA: Harvard University Press. Hirschman, A. O. (1971a). A Bias for Hope. New Haven, CT: Yale University Press. Hirschman, A. O. (1971b). “‘Political economics’ and possibilism”. In A. O. Hirschman, A Bias for Hope. New Haven, CT: Yale University Press. Hirschman, A. O. (1971c). Foreign Aid: A Critique and a Proposal. In A. O. Hirschman, A Bias for Hope. New Haven, CT: Yale University Press. Hirschman, A. O. (1971d). Obstacles to Development: A Classification and a Quasi-Vanishing Act. In A. O. Hirschman, A Bias for Hope. New Haven, CT: Yale University Press. Hirschman, A. O. (1971e). Underdevelopment, Obstacles to the Perception of Change, and Leadership. In A. O. Hirschman, A Bias for Hope. New Haven, CT: Yale University Press. Hirschman, A. O. (1977). The Passions and the Interests. Princeton, NJ: Princeton University Press. Hirschman, A. O. (1978). Beyond Asymmetry: Critical notes on myself as a young man and some other old friends. International Organization, 32(1), 45–50. Hirschman, A. O. (1981a). Essays in Trespassing: Economics to Politics and Beyond. Cambridge, U.K.: Cambridge University Press. Hirschman, A. O.(1981b). The changing tolerance for income inequality in the course of economic development. In A. O. Hirschman (Ed.), Essays in Trespassing: Economics to Politics and Beyond. Cambridge, U.K.: Cambridge University Press.
Hirschman, A. O. (1982). Shifting Involvements: Private Interests and Public Action. Princeton, NJ: Princeton University Press.
Hirschman, A. O. (1983). Morality and the social sciences: A durable tension. In N. Haan, R. N. Bellah, P. Rabinow, & W. M. Sullivan (Eds.), Social Science as Moral Inquiry. New York, NY: Columbia University Press.
Hirschman, A. O. (1984a). Getting Ahead Collectively: Grassroots Experiences in Latin America. New York, NY: Pergamon Press.
Hirschman, A. O. (1984b). A Self-Inflicted Wound. In New Republic, January.
Hirschman, A. O. (1986). A Dissenter’s Confession. In A. O. Hirschman (Ed.), Rival Views of Market Society. Cambridge, MA: Harvard University Press.
Hirschman, A. O. (1987). The search for paradigms as a hindrance to understanding. In P. Rabinow & M. Sullivan (Eds.), Interpretive Social Science: A Second Look. Berkeley, CA: University of California Press. Originally in A. O. Hirschman, A Bias for Hope, New Haven, CT: Yale University Press.
Hirschman, A. O. (1990). Interview with Albert O. Hirschman. In R. Swedberg (Ed.), Economics and Sociology. Princeton, NJ: Princeton University Press.
Hirschman, A. O. (1991). The Rhetoric of Reaction: Perversity, Futility, Jeopardy. Cambridge, MA: Harvard University Press.
Hirschman, A. O. (1994). Responses and Discussions. In L. Rodwin & D. A. Schon (Eds.), Rethinking the Development Experience: Essays Provoked by the Work of Albert O. Hirschman. Washington, D.C.: The Brookings Institution.
Hirschman, A. O. (1995). A Propensity for Self-Subversion. Cambridge, MA: Harvard University Press.
Hirschman, A. O. (1998). Trespassing: Places and ideas in the course of a life. In A. O. Hirschman, Crossing Boundaries. New York, NY: Zone Books.
Hirschman, A. O. (2015). A hidden ambition. Preface to the second edition of Development Projects Observed. Washington, D.C.: The Brookings Institution.
Hirschman, A. O. (2020). How Economics Should be Complicated. Ed. by L. Meldolesi. New York, NY: Peter Lang.
IEG-WB. (2017). Guidelines for Reviewing World Bank Implementation Completion and Results Report. Washington, D.C.: World Bank Group.
Israel, A. (1987). Institutional Development. Baltimore, MD: Johns Hopkins University Press.
King, G., Keohane, R. O., & Verba, S. (1994). Designing Social Inquiry. Princeton, NJ: Princeton University Press.
Kogan, L. (2018). What have we learned? Questioning accountability in aid policy and practice. Evaluation, 24 (1), 98–112.
Korten, D. C. (1980). Community Organization and Rural Development: A Learning Process Approach. Public Administration Review, 40 (6), 480–511.
Larson, A. (2018). Evaluation amidst complexity: Eight evaluation questions to explain how complex adaptive systems affect program impact. Evaluation, 24 (3), 353–362.
Ledermann, S. (2012). Exploring the necessary conditions for evaluation use in programme change. American Journal of Evaluation, 33 (2), 159–178.
Bibliography | 141 Lepenies, P. H. (2017). Statistical tests as a hindrance to understanding. World Development, 103, 360–365. Lepenies, W. (1988). Between Literature and Science: The Rise of Sociology. Cambridge, U.K.: Cambridge University Press. Leviton, L., & Hughes, E. (1981). Research in the Utilization of Evaluations: A Review and Synthesis. Evaluation Review, 5(4), 525–548. Lipsky, M. (1980). Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York, NY: Russel Sage Foundation. Mac Coy, D. (2014). Appreciative Inquiry and Evaluation –Getting to What Works. Canadian Journal of Program Evaluation, 29 (2), 104–127. Mayne, J. (2012). Contribution analysis: Coming of age? Evaluation, 18 (3), 270–280. Mayne, J. (2017). Theory of Change Analysis: Building Robust Theories of Change. Canadian Journal of Program Evaluation, 32 (2), 155–173. Mayne, J. (2021). Contribution analysis and the long- term perspective: Challenges and Opportunities. In K. Forss, I. Lindkvist, & M. McGillivray (Eds.), Long Term Perspectives in Evaluation. New York, NY: Routledge. McPherson, M. (1983). Want formation, morality and some interpretive aspects of economic inquiry. In N. Haan, R. N. Bellah, P. Rabinow, W. M. Sullivan(Eds.), Social Science as Moral Inquiry. New York, NY: Columbia University Press. Meldolesi, L. (1995). Discovering the Possible: The Surprising World of Albert Hirschman. Notre Dame, IN: Notre Dame University Press. Meldolesi, L., & Stame, N. (1989). Come si controlla (altrove) il lavoro del pubblico impiego, Mondoperaio, n. 10. Meldolesi, L., & Stame N.(Eds.). (2018). For a Better World, First Conference on Hirschman Legacy (Boston), Roma: IDE. Meldolesi, L., & Stame, N. (Eds.). (2019). A Bias for Hope, Second Conference on Hirschman Legacy (World Bank, Washington), Roma: IDE. Meldolesi, L., & Stame, N. (Eds.). (2020). A Passion for the Possible. Third Conference on Hirschman Legacy, (Berlin), Roma: IDE. Merton, R. K. (1936). The Unanticipated Consequences of Purposive Social Action. American Sociological Review, 1 (6), 894−904. Merton, R. K. (1967). On Theoretical Sociology. New York, NY: Free Press. Merton, R. K. (1968). Social Theory and Social Structure. New York, NY: Free Press. Merton, R. K. (1973). Technical and moral dimensions of policy research. In The Sociology of Science. Chicago, IL: University of Chicago Press. Merton, R. K. & Barber, E. (2004). The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science. Princeton, NJ: Princeton University Press. Morrell, J. A. (2010). Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. New York, NY: The Guilford Press. Mowles, C. (2014). Complex, but not quite complex enough: The turn to the complexity sciences in evaluation scholarship. Evaluation, 20 (2), 160–175. Olson, M. (1971). The Logic of Collective Action. Cambridge, MA: Harvard University Press.
Packenham, R. (1973). Liberal America and the Third World. Princeton, NJ: Princeton University Press.
Palumbo, D. J. (Ed.). (1987). The Politics of Program Evaluation. Beverly Hills, CA: Sage.
Pawson, R. (2006). Evidence Based Policy: A Realist Perspective. London: Sage.
Pawson, R. (2010). Middle range theory and program theory evaluation: From provenance to practice. In J. Vaessen & F. Leeuw (Eds.), Mind the Gap: Perspectives on Policy Evaluation and the Social Sciences. New Brunswick, NJ: Transaction Publishers.
Pawson, R. (2013). The Science of Evaluation. London: Sage.
Perrin, B. (2014). Think positively! And make a difference through evaluation. Canadian Journal of Program Evaluation, 29 (2), 48–66.
Picciotto, R. (1994). Visibility and disappointment: The new role of development evaluation. In L. Rodwin & D. Schon (Eds.), Rethinking the Development Experience: Essays Provoked by the Work of Albert O. Hirschman. Washington, DC: Brookings Institution.
Pressman, J., & Wildavsky, A. (1975). Implementation. Berkeley, CA: University of California Press.
Putnam, R. (1992). Making Democracy Work. Princeton, NJ: Princeton University Press.
Rabinow, P., & Sullivan, M. (Eds.). (1979). Interpretive Social Science: A Reader. Berkeley, CA: University of California Press.
Rabinow, P., & Sullivan, M. (Eds.). (1987). Interpretive Social Science: A Second Look, 2nd ed. Berkeley, CA: University of California Press.
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155–169.
Rogers, P. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14 (1), 29–48.
Rogers, P. (2016). Understanding and supporting equity: Implications of methodological and procedural choices in equity-focused evaluations. In S. Donaldson & R. Picciotto (Eds.), Evaluation for an Equitable Society. Charlotte, NC: Information Age Publishing.
Rondinelli, D. A. (1983). Development Projects as Policy Experiments. London; New York, NY: Methuen and Co.
Saunders, M. (2012). The use and usability of evaluation outputs: A social practice approach. Evaluation, 18 (4), 421–436.
School of Social Science. (1979). Our idea of a social science, mimeo.
Schwandt, T. (2018). The concerted effort to professionalize evaluation practice: Whither are we bound? In J. E. Furubo & N. Stame (Eds.), The Evaluation Enterprise: A Critical View. New York, NY: Routledge.
Stame, N. (1990). Valutazione ex post e conseguenze inattese. Sociologia e Ricerca Sociale, XI (30), 3–35.
Stame, N. (2001). Tre approcci principali alla valutazione. In M. Palumbo (Ed.), Il Processo della Valutazione. Milano: Franco Angeli.
Stame, N. (2006). Theory-based evaluation and varieties of complexity. Evaluation, 10 (1), 58–76.
Stame, N. (2010a). U.S. sociology and evaluation: Issues in the relationship between methodology and theory. In J. Vaessen & F. L. Leeuw (Eds.), Mind the Gap: Perspectives on Evaluation and the Social Sciences. New Brunswick, NJ: Transaction Publishers.
Stame, N. (2010b). What Does not Work? Three Failures and Many Answers. Evaluation, 16 (4), 371–387.
Stame, N. (2014). Positive Thinking Approaches to Evaluation and Program Perspectives. Canadian Journal of Program Evaluation, 29 (2), 67–86.
Stame, N. (2016). Valutazione Pluralista. Milano: Franco Angeli.
Stame, N. (2018). Strengthening the Ethical Expertise of Evaluators. Evaluation, 24 (4), 438–451.
Stame, N. (2019). Doubt, Surprise, and the Ethical Evaluator: Lessons from the Work of Judith Tendler. Evaluation, 25 (4), 449–468.
Stame, N. (2021). Mixed methods e valutazione democratica. Rassegna Italiana di Valutazione, n. 76, 53–70.
Stame, N. (2022). Program, Complexity, and System when Evaluating Sustainable Development. Evaluation, 28 (1), 58–71.
Stame, N. (Ed.). (2007). Classici della Valutazione. Milano: Franco Angeli.
Stame, N., & Lo Presti, V. (2015). Positive thinking and learning from evaluation. In S. Bohni-Nielsen, R. Turksema, & P. van der Knaap (Eds.), Evaluation and Success. New Brunswick, NJ: Transaction Publishers.
Stame, N., Lo Presti, V., & Ferrazza, D. (2009). Segretariato Sociale e Riforma dei Servizi: Percorsi di Valutazione. Milano: Franco Angeli.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the Range of Designs and Methods for Impact Evaluation. London: DFID Working Paper No. 38.
Sullivan, W. (1983). Beyond Policy Science: The Social Sciences as Moral Sciences. In N. Haan, R. N. Bellah, P. Rabinow, & W. M. Sullivan (Eds.), Social Science as Moral Inquiry. New York, NY: Columbia University Press.
Tendler, J. (1965). Technology and Economic Development: The Case of Hydro vs. Thermal Power. Political Science Quarterly, LXXX (2), 236–253.
Tendler, J. (1968). Electric Power in Brazil: Entrepreneurship in the Public Sector. Cambridge, MA: Harvard University Press.
Tendler, J. (1970). Comments on Evaluations of Bid-Financed Rural Credit Programs in Six Countries. For the Inter-American Development Bank. Centre for Latin American Studies, University of California, Berkeley. https://media.entopanlab.it/storage/achii/media/judith-tendler/1970-1979/Bid-Financed_Rural_Sep_1970.pdf
Tendler, J. (1973). The trouble with goals of small farmer programs (and how to get out of it). Small farmer credit analytical papers, U.S.-A.I.D. Spring review of small farmer credit. https://media.entopanlab.it/storage/achii/media/judith-tendler/1970-1979/TROUBLEsfcredit-AID_1973.pdf
Tendler, J. (1975). Inside Foreign Aid. Baltimore, MD: Johns Hopkins University Press.
Tendler, J. (1979). New Directions in Rural Roads. USAID. https://media.entopanlab.it/storage/achii/media/judith-tendler/1970-1979/Rural_%20Roads_1979.pdf
Tendler, J. (1980). New Light on Rural Electrification: The Evidence from Bolivia. USAID. https://media.entopanlab.it/storage/achii/media/judith-tendler/1980-1989/Rural_Electrification_Bolivia.pdf
Tendler, J. (1981). Fitting the Foundation Style: The Case of Rural Credit. Inter-American Foundation. https://media.entopanlab.it/storage/achii/media/judith-tendler/1980-1989/Fitting_IAF.pdf
Tendler, J. (1982a). Rural Projects through Urban Eyes: An Interpretation of the World Bank's New-Style Rural Development Projects. World Bank Staff Working Paper n. 532. Washington, DC: The World Bank. https://media.entopanlab.it/storage/achii/media/judith-tendler/1980-1989/UrbanEyes-SWP532.pdf
Tendler, J. (1982b). Turning Private Voluntary Organizations into Development Agencies: Questions for Evaluations. USAID Program Evaluation Discussion Paper n. 12, Washington, DC. https://media.entopanlab.it/storage/achii/media/judith-tendler/1980-1989/pvos_AID_1982.pdf
Tendler, J. (1983). Impressions on Evaluation. Report for the Inter-American Foundation. https://media.entopanlab.it/storage/achii/media/judith-tendler/1980-1989/Impressions_on_Evaluation_Jul_1983.pdf
Tendler, J. (1987). Whatever Happened to Poverty Alleviation? New York, NY: Ford Foundation LEIG Program.
Tendler, J. (1992). Progetti ed Effetti (ed. by N. Stame). Napoli: Liguori.
Tendler, J. (1993). New Lessons from Old Projects: The Workings of Rural Development in Northeast Brazil. World Bank Operations Evaluation Study. Washington, DC: The World Bank. https://media.entopanlab.it/storage/achii/media/judith-tendler/1990-1999/NLOP-complete.pdf
Tendler, J. (1995). Social Capital and the Public Sector. Mimeo. https://media.entopanlab.it/storage/achii/media/judith-tendler/1990-1999/blurred.pdf
Tendler, J. (1997). Good Government in the Tropics. Baltimore, MD: Johns Hopkins University Press.
Tendler, J. (1998a). Research Questions for the Ford Foundation's Innovation in Government Programs. https://media.entopanlab.it/storage/achii/media/judith-tendler/1990-1999/Ford-IIG_4-98.pdf
Tendler, J. (1998b). Comments on Partnership for Capacity Building in Africa. Mimeo. https://media.entopanlab.it/storage/achii/media/judith-tendler/1990-1999/Afri-wb398.pdf
Tendler, J. (2002). The Fear of Education. Mimeo. https://media.entopanlab.it/storage/achii/media/judith-tendler/2000-2011/fear_of_education.pdf
Tendler, J. (2007). The Rule of Law, Economic Development and Modernization of the State in Brazil. Proposal for the WB and DFID. https://media.entopanlab.it/storage/achii/media/judith-tendler/2000-2011/DFID-WB_Tendler.pdf
Tendler, J. (2018). Beautiful Pages by Judith Tendler (ed. by N. Stame). Roma: IDE. https://media-manager.net/storage/achii/media/judith-tendler/2000-2011/beautiful_pages_by_judith_tendler.pdf
Tendler, J., with Healy, K., & O'Laughlin, C. M. (1988). What to think about cooperatives: A guide from Bolivia. In S. Annis & P. Hakim (Eds.), Direct to the Poor: Grassroots Development in Latin America. Boulder, CO and London: Lynne Rienner Publishers.
Tendler, J., & Freedheim, S. (1994). Bringing Hirschman back in: A Case of Bad Government Turned Good. In L. Rodwin & D. Schön (Eds.), Rethinking the Development Experience: Essays Provoked by the Work of Albert Hirschman. Washington, DC: Brookings Institution. Republished as "Trust in a Rent-Seeking World: Health and Government Transformed in Northeast Brazil." World Development, 22 (12), 1994.
Vaessen, J., & Leeuw, F. (Eds.). (2010). Mind the Gap: Perspectives on Policy Evaluation and the Social Sciences. New Brunswick, NJ: Transaction Publishers.
Vahamaky, J., & Verger, C. (2019). Learning from Results-Based Management Evaluations and Reviews. OECD Development Cooperation Working Paper 53.
Valters, C. (2014). Theories of Change in International Development: Communication, Learning, or Accountability? London: ODI-LSE.
Van den Berg, R., Hawkins, P., & Stame, N. (Eds.). (2022). Ethics for Evaluation: Beyond "Doing No Harm" to "Tackling Bad" and "Doing Good." New York, NY: Routledge.
Van den Berg, R., Magro, C., & Adrien, M.-H. (2021). Transformational Evaluation for the Global Crises of our Time. IDEAS.
Vedung, E. (1997). Public Policy and Program Evaluation. New Brunswick, NJ: Transaction Publishers.
Weiss, C. H. (1972). Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice Hall.
Weiss, C. H. (1987). Where Politics and Evaluation Research Meet. In D. J. Palumbo (Ed.), The Politics of Program Evaluation. Beverly Hills, CA: Sage.
Weiss, C. H. (1997). Theory-based Evaluation: Past, Present and Future. In D. J. Rog (Ed.), Progress and Future Directions in Evaluation. New Directions for Evaluation, n. 76. San Francisco, CA: Jossey-Bass.
Weiss, C. H. (1998). Have We Learned Anything New About the Use of Evaluation? American Journal of Evaluation, 19 (1), 21–33.
Woolcock, M. (2012). Using Case Studies to Explore the External Validity of 'Complex' Development Interventions. Evaluation, 19 (3), 229–248.
Index of Names
A
Alacevich, M. 82, 131
Alinsky, S. 75
Anderson, C. 61–64
Andrews, M. 12, 120
Aniello, V. 4
Aristotle 54

B
Baculo, L. 19, 134
Barbera, F. 67
Baroni, B. 18
Bell, D. 75
Bellah, R. 39–42, 44–46, 48–49, 53–57, 62–63
Bianchi, T. 9, 111, 116
Byrne, D. 67

C
Caballero Argaez, C. 83
Campbell, D. 20, 76
Coleman, J. 70, 75
Collins, R. 55
Connell, J. P. 11
Coslosky, S. 18
Crozier, M. 20, 75
Currie, L. 83

D
Dahler-Larsen, P. 76, 121
Darnton, R. 41–42
de Zwart, F. 97–98, 100
Dewey, J. 50, 75
Dreyfus, H. L. 43
Dreyfus, S. E. 43
Durkheim, E. 55
Dzur, A. 111

E
Ellerman, D. 4, 10, 75, 132
Elster, J. 67
Emerson, R. D. 46

F
Feinstein, O. 4, 18, 132
Forss, K. 12
Freedheim, S. 105, 107

G
Gadamer, H.-G. 43
Geertz, C. 16, 39–45, 48–52, 58, 62–64
Gilligan, C. 59
Gilmartin, M. 18

H
Haan, N. 53, 57, 59, 61
Hakim, P. 6, 8, 80, 111
Healy, K. 8
Hedstrom, P. 67
Hobbes, T. 54
Hughes, E. 74

J
Jameson, F. 43

K
Kamark, A. 73, 129
Kaysen, K. 39
Keohane, R. 81
King, G. 81
Kogan, L. 120
Kubisch, A. C. 11
Kuhn, T. 14, 39, 43, 55

L
Larson, A. 122
Ledermann, S. 130
Lepenies, P. 4, 15
Lepenies, W. 40, 42
Leviton, L. 74
Lippincott, W. 57
Lipsky, M. 105
Lo Presti, V. 69, 71, 102, 119

M
Mac Coy, D. 71
Machiavelli, N. 54, 58
Mandeville, B. de 58, 95
Marra, M. 4
Marx, K. 55
Mayne, J. 11–12
McPherson, M. 57
Meldolesi, L. 4, 19–20, 47, 57, 60, 71, 76, 79–80, 82, 86, 93, 103–104, 132, 134
Merton, R. 67–68, 70, 81, 96, 98, 100, 102, 121
Morrell, J. A. 96–97
Mowles, C. 67

O
Offe, C. 59
Olivetti, A. 19
Olson, M. 60, 70

P
Parsons, T. 40, 55
Pawson, R. 66–68, 70, 73, 120, 124
Perrin, B. 70
Picciotto, R. 4, 8, 74, 111, 130–131
Pizzorno, A. 75
Plato 54
Pressman, J. 69
Putnam, R. 114

R
Rabinow, P. 39–40, 42–44, 75
Reagan, R. 8
Ricoeur, P. 43
Rittel, H. W. J. 96
Rogers, C. 75
Rogers, P. 65, 123
Rondinelli, D. A. 20, 83

S
Sanyal, B. 18–19
Saunders, M. 75–76
Schön, D. 43, 75
Schwandt, T. 111, 122
Scriven, M. 61, 75
Sewell, W. 40–42
Skinner, Q. 40–41
Smith, A. 58, 95
Stern, E. 11, 18, 66, 73, 123
Sullivan, M. 39–40, 42–44, 49, 55, 75
Swedberg, R. 67

T
Tagle, L. 4, 18
Tankha, S. 9
Taussig, M. 43–44
Taylor, C. 39, 43, 48
Tocqueville, A. de 46, 54

V
Vahamaky, J. 96
Van der Tak, H. 129
Vedung, E. 99–100
Verba, S. 81
Verger, C. 132
Vico, G. B. 96

W
Webber, M. M. 96
Weber, M. 48, 54–55, 59
Weiss, C. 11, 30, 67, 69–70, 74, 98–101, 127
Whitehead, A. N. 81
Wildavsky, A. 69
Wolfensohn, J. 10, 131–132
Woolcock, M. 4, 12
Index of Subjects
A
Administration 89, 112
Alternative routes/courses of action 24, 69, 94
Ambiguity 2, 12–13, 121
Amorality 49, 58
Anthropological irony 51, 63
Appraisal
Asymmetry of fieldwork 51

B
Blessings in disguise 5, 13, 81, 94
Blueprint approach 83, 93, 96
Bureaucracy 26, 89, 110, 114

C
Case studies 80, 88
CBA (cost benefit analysis) 88, 91
Change 10, 13–15, 22, 45, 68, 73, 93–97, 100, 121–123
Cognitive dissonance 5, 76, 81, 85, 95
Cognitive style 44, 80, 82
Comparison 2, 28, 31–34, 89, 124
Complexity 14, 17, 66, 96, 118
Conflicts 89–90, 127
Conservation of social energy 8, 80
Context 14, 66–68, 73, 122, 129
Cooperatives 8, 32, 109
Creativity 4, 15, 95
Culture 40, 46

D
Decision-making logic 24–28
Democratic professional 111
Development goals 24, 97
Distribution 5, 31, 33

E
Evaluation approaches 14, 17, 66, 70–71, 77
  Positive thinking 65, 70–72, 119, 174
  TBE (Theory Based Evaluation) 14, 17, 65, 69–70
Evaluative questions 11, 109, 123
Evaluator independence 3, 121
Evidence Based Policy (EBP) 69, 102
Experiments 20, 72, 76
Explain 13, 36–37, 67, 76, 106

F
Failure 32, 71, 96, 100, 104–108, 119
Fieldwork 6, 9, 50, 90, 118–119, 126
Foreign aid 79–84
Fracasomania 15, 72, 94, 105

H
Hiding hand 15, 72, 77, 129–130

I
Implementation 3, 12, 19, 22, 25, 28, 31, 68–69, 83–85, 119–124
Indirect (side-) effect 34, 67, 127
Informant 51–52
Institute for Advanced Study (IAS) 6, 39–40
Inter-American Foundation (IAF) 6–8, 111–112
International agencies 2–3, 6, 17, 23–25, 30–36, 89–90, 115
Interpretive social science 17, 39–45, 47–49
Interviews 111, 116, 134
Inverted sequences 5, 81, 95, 100, 122

L
Learning 12, 20, 33, 36, 70–71, 117–120
Lesser evil 97, 100
Lessons learned 66, 72–73

M
Mechanism 17, 67–69, 108, 131
Middle range theory 67, 87, 124
Mixed methods 87, 118
Morality 47–49, 53–62
Motivation 70–71

N
NGOs (non-governmental organizations) 23, 108–109

O
Observation 25, 29, 80–81, 87–91, 104, 123
Obstacles, how to overcome 8, 13, 37, 72, 94–95, 106

P
Paradigms 14, 42, 44–45, 48, 55, 67, 95
PDIA (Problem Driven Iterative Adaptation) 120
Perverse effect 24, 94–96, 100
Plans 83, 86
Policy analysis 61–63
Politics 27–28, 85, 90, 115
Possibilism 5, 10–15, 65, 71, 84, 93–94, 98–99, 116
Poverty 19, 23, 27, 34–37, 107–111
Prerequisites for change 37, 88–89, 95, 106–107
Probabilism 11
Problems 20–26, 36, 50, 55, 71–73, 87, 89, 96, 111, 116–118, 126–129
Production 33, 89, 112
Professional ethics 16, 51, 53, 58
Programs 7, 10, 14, 30, 37, 61, 66–77, 82–86, 99, 107–109, 111, 124, 127–130
Projects 7–10, 19, 21–38, 79–84, 87–91, 104–114, 125–131
Public sector 16, 23, 37, 105–111, 114
PVOs (private voluntary organizations) 29

Q
Qualitative analysis 87
Quantitative analysis 87–88

R
Rational action 26, 57, 70
Reflective practitioner 44, 123
Relative deprivation 59, 68
Research 9, 16, 21, 42, 47–53, 62–64, 76, 81, 87, 105–120, 123, 128
Risk 129–130
ROI (return on investment) 127

S
School of Social Science (SSS) 39–42
Scientific method 11, 48, 50, 52
Serendipity 12, 96–98, 102
Shifting involvements 56, 64
Simple, complicated, complex 66
Social capital 110, 114, 120
Sociology 13, 54–55, 63, 75–76
Statistical representativeness 128
Studies 2, 31, 106–109
Successes and failures 5, 7–8, 14, 29, 36–37, 69, 71, 97, 100, 104–106, 116, 119
Surprise 12–14, 36, 45, 80, 94, 96–97, 104, 116, 121
Symbolic anthropology 45
System 14, 42, 66–67, 118, 122

T
Tasks 7, 15, 25, 31–33, 89, 108–109
Teaching 9, 116
Technology 5, 31, 89, 120
Theory of change 11, 121–123
Tradition 43, 55, 62
Traditional social science 11, 44, 51, 81, 101
Trained incapacity 58, 62
Trespassing 42, 75, 119
Tunnel effect 59, 68

U
Uncertainty 12, 72, 80, 89–91, 127
Unintended consequences 10, 13, 93–99
Urban politics 34–37, 109
USAID (United States Agency for International Development) 6
Uses of evaluation 74

V
Values 16, 48, 52–53, 61–64, 119

W
Wicked problems 96
World Bank 6, 25–26, 34, 75, 82, 97, 125–132