EXPERT FAILURE
The humble idea that experts are ordinary human beings leads to surprising conclusions about how to get the best possible expert advice. All too often, experts have monopoly power because of licensing restrictions or because they are government bureaucrats protected from both competition and the consequences of their decisions. This book argues that in the market for expert opinion we need real competition in which rival experts may have different opinions and new experts are free to enter. But the idea of breaking up expert monopolies has far-reaching implications for public administration, forensic science, research science, economics, America’s military-industrial complex, and all domains of expert knowledge. Roger Koppl develops a theory of experts and expert failure, and uses a wide range of examples — from forensic science to fashion — to explain the applications of his theory, including state regulation of economic activity.

Roger Koppl is Professor of Finance in the Whitman School of Management at Syracuse University in New York, and a faculty fellow in the University’s Forensic and National Security Sciences Institute. His work has been featured in The Atlantic, Forbes, and The Washington Post.
CAMBRIDGE STUDIES IN ECONOMICS, CHOICE, AND SOCIETY

Founding Editors: Timur Kuran, Duke University; Peter J. Boettke, George Mason University

This interdisciplinary series promotes original theoretical and empirical research as well as integrative syntheses involving links between individual choice, institutions, and social outcomes. Contributions are welcome from across the social sciences, particularly in the areas where economic analysis is joined with other disciplines such as comparative political economy, new institutional economics, and behavioral economics.

Books in the Series
Terry L. Anderson and Gary D. Libecap, Environmental Markets: A Property Rights Approach (2014)
Morris B. Hoffman, The Punisher’s Brain: The Evolution of Judge and Jury (2014)
Peter T. Leeson, Anarchy Unbound: Why Self-Governance Works Better Than You Think (2014)
Benjamin Powell, Out of Poverty: Sweatshops in the Global Economy (2014)
Cass R. Sunstein, The Ethics of Influence: Government in the Age of Behavioral Science (2016)
Jared Rubin, Rulers, Religion, and Riches: Why the West Got Rich and the Middle East Did Not (2017)
Jean-Philippe Platteau, Islam Instrumentalized: Religion and Politics in Historical Perspective (2017)
Taisu Zhang, The Laws and Economics of Confucianism: Kinship and Property in Preindustrial China and England (2018)
Expert Failure
ROGER KOPPL
Syracuse University
University Printing House, Cambridge CB2 8BS, United Kingdom One Liberty Plaza, 20th Floor, New York, NY 10006, USA 477 Williamstown Road, Port Melbourne, VIC 3207, Australia 314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India 79 Anson Road, #06–04/06, Singapore 079906 Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence. www.cambridge.org Information on this title: www.cambridge.org/9781316503041 DOI: 10.1017/9781316481400 © Roger Koppl 2018 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2018 Printed in the United States of America by Sheridan Books, Inc. A catalogue record for this publication is available from the British Library. ISBN 978-1-107-13846-9 Hardback ISBN 978-1-316-50304-1 Paperback Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
To Maria, who brings joy
Contents
Acknowledgments
1 Introduction
2 Is There a Literature on Experts?
    Introduction
    A Simple Taxonomy
    Berger and Luckmann
    Defining “Expert”
3 Two Historical Episodes in the Problem of Experts
    Introduction
    The Socratic Tradition
    Expert Witnesses in Law
4 Recurrent Themes in the Theory of Experts
    Power
    Ethics
    Reflexivity
    The Well-Informed Citizen
    Democratic Control of Experts
    Discussion
    Market Structure
    Information Choice in the Context of the Literature on Experts
    Closing Remark
5 Notes on Some Economic Terms and Ideas
    Spontaneous Order
    Competition
6 The Division of Knowledge through Mandeville
    Introduction
    The Division of Knowledge
7 The Division of Knowledge after Mandeville
    Vico to Marx
    Menger to Hayek
    After Hayek
8 The Supply and Demand for Expert Opinion
    The Economic Point of View on Experts
    Identifying the Commodity and Defining “Expert”
    Information Choice Theory
    Honest Error and Willful Fraud
    The Economics of Experts Fills a Niche
    The Demand for Expert Opinion
    The Supply of Expert Opinion
9 Experts and Their Ecology
    Motivational Assumptions of Information Choice Theory
    The Ecology of Expertise
10 Expert Failure and Market Structure
    Two Dimensions of Expert Failure
    Identity, Sympathy, Approbation, and Praiseworthiness
    Observer Effects, Bias, and Blinding
11 Further Sources of Expert Failure
    Normal Accidents of Expertise
    Complexity and Feedback
    Incentive Alignment
    The Ecology of Expertise
    Professions
    Regulation
    Monopsony and Big Players
    Comments on the Market for Ideas
    Epistemic Systems Design
12 Expert Failure in the Entangled Deep State
    Expert Failure and America’s Entangled Deep State
    Closing Remarks
References
Index
Acknowledgments
Leonard Read taught us that no one knows how to make a pencil. Something similar can be true of a book. My name appears on the title page, but the true authors of this book are the participants in several scholarly communities I have had the honor of engaging, as well as many journalists, reformers, forensic scientists, public defenders, intellectuals, family, friends, acquaintances, and passing strangers. I will single out a few persons for thanks, but I know I have unwillingly omitted others no less deserving of gratitude and acknowledgment. I thank them all. Unfortunately, however, I cannot blame them for anything wrong with this book. My wife, Maria Minniti, has been an unfailing source of love, support, and stinging criticism. Peter Boettke has provided encouragement and helpful commentary. I have profited from an ongoing conversation on experts with David Levy and Sandra Peart. Stephen Turner helped me work out the contours of the literature on experts. Ronald Polansky improved my understanding of Socrates and the Apology. I have profited from exchanges with Simon Cole and from opportunities and generous help he has given me. Alvin Goldman has given me some helpful lessons in social epistemology. He is not to blame for my more synecological vision, however. Lawrence Kobilinsky extended himself to eliminate errors and omissions from my first published paper on forensic science, which led to my work on the general problem of experts. Our conversations have been helpful and stimulating. Michael Risinger has provided many stimulating comments and conversations, along with general support and encouragement. Phillip Magness and David Singerman both helped me understand J. M. Keynes’s opinions on eugenics and eugenic policy. Phil also guided me to source materials on that topic. Francisco Doria provided a manuscript copy of After Gödel, which was later published as Gödel’s Way. Chico
has also helped me with computability theory. Santiago Gangotena suggested the metaphor “experts all the way down.” I received help from many other people. A few of them are Trey Carson, Jim Cowan, Alexander Wade Craig, David Croson, James Della Bella, William Butos, David Colander, Christopher Coyne, Abigail Deveraux, Itiel Dror, Teppo Felin, Christine Funk, Paul Giannelli, Nathan Goodman, Colin Harris, Keith Harward, Steve Horwitz, Keith Inman, Stuart Kauffman, Dan Krane, Robert Kurzban, Moren Lévesque, David Lucas, Thomas McQuade, Barkley Rosser, Norah Rudin, Meghan Sacks, John Schiemann, Vernon Smith, the late Ion Sterpan, William Thompson, Richard Wagner, and Lawrence Yuter. I learned a lot at the Wirth Institute’s third biennial Austrian School of Economics Conference, “Austrian Views on Experts and Epistemic Monopolies,” which was held in Vancouver on October 15–16, 2010. Participants included David Croson, Arthur Diamond, Laurent Dobuzinskis, Rob Garnett, Steve Horwitz, Leslie Marsh, Sandra Peart, Emily Skarbek, Diana Thomas, and Alfred Wirth. I appreciate the support Alfred Wirth and the Wirth Institute have given to this and other “Austrian” conferences. Visits to George Mason University’s Mercatus Center as the F. A. Hayek Distinguished Visiting Professor helped my work on this volume by providing a stimulating work environment and many helpful conversations. The discussion of Berger and Luckmann in Chapter 2 draws on Roger Koppl, “The social construction of expertise,” Society, 47 (2010), 220–6. Parts of Chapter 3 are close to parts of Roger Koppl, “Shocked disbelief,” in F. A. Doria (ed.), The Limits of Mathematical Modeling in the Social Sciences, 2017. Some parts of Chapter 8 and Chapter 9 were adapted from Roger Koppl, “Information choice theory,” Advances in Austrian Economics, 17 (2012), 171–202. The discussion of computability in Chapter 9 is close to some passages in Roger Koppl, “Rules vs. discretion under computability constraints,” Review of Behavioral Economics, 4(1) (2017), 1–31. The discussion of the ecology of expertise draws on Roger Koppl, “Epistemic systems,” Episteme: Journal of Social Epistemology, 2(2): 91–106. Chapter 10 draws on Roger Koppl, “The rule of experts,” in Peter Boettke and Christopher Coyne (eds.), Oxford Handbook of Austrian Economics, Oxford: Oxford University Press, 2015. I gratefully acknowledge the financial support of Syracuse University’s Whitman School of Management.
1 Introduction
Politicians, partisans, and pundits were surprised and traumatized by the election of Donald Trump as President of the United States. Anger at experts seems to have contributed significantly to his victory (Easterly 2016). Brexit was led in part by Michael Gove, who exclaimed, “I think the people in this country have had enough of experts” (Lowe 2016). Whatever one’s opinion of Trump or the European Union, ordinary people in Western democracies have cause to be angry with experts. The Flint water crisis is an example. On April 25, 2014, the city of Flint, Michigan changed its municipal water supply in a manner that produced impotable brown water (Adewunmi 2017). “Flint water customers were needlessly and tragically exposed to toxic levels of lead and other hazards” (Flint 2016, p. 1). “Flint residents began to complain about its odor, taste and appearance” (Flint 2016, p. 16). “On 1 October 2015, 524 days after the switch to the Flint River, the Genesee County Health Department declared a public health emergency and urged Flint residents to refrain from drinking the water” (Adewunmi 2017). On January 24, 2017, Michigan state environment officials indicated that lead levels in Flint water no longer exceeded the federal limit (Unattributed 2017). The journalist Bim Adewunmi says, however, “By the time I left Flint on 22 February this year [2017], the water was still not safe enough to drink directly out of the faucet, according to the politicians and charity workers I spoke to, and the residents’ feelings on the matter had remained at a simmer about it” (Adewunmi 2017). In March 2016 an official Michigan task force investigating the Flint water crisis found that the Michigan Department of Environmental Quality (MDEQ) “bears primary responsibility for the water contamination in Flint” (Flint 2016, p. 6). The report found that the Michigan Department of Health and Human Services and the United States Environmental Protection 1
Agency (EPA) shared a substantial portion of the blame (Flint 2016, p. 1). The task force’s report chronicles the actions of Dr. Mona Hanna-Attisha and Marc Edwards to produce change, in part by documenting important facts such as elevated lead levels in the blood of Flint children. Noting that “the majority of Flint’s residents are black,” Adewunmi (2017) records the opinion of many that anti-black racism contributed significantly to the crisis. In this tragedy, state experts charged with ensuring water quality instead allowed a malodorous, polluted, and toxic liquid to flow into Flint homes and poison its people. Unfortunately, the Flint water crisis is but one of many examples of expert failure. In 2009 two judges in Pennsylvania, Mark A. Ciavarella Jr. and Michael T. Conahan, pled guilty to fraud and tax charges in a scheme to imprison children for money (Chen 2009). The case was dubbed “kids for cash.” These two judges were experts. They were experts in the law giving their opinions of the guilt or innocence of children coming before them and deciding what punishments were just. They took $2.6 million in “kickbacks” (Urbina 2009) from two private detention centers to which they sent convicted juveniles. To get their kickbacks, they sent children to jail. In what is presumably one of the more egregious cases, thirteen-year-old DayQuawn Johnson, who had no previous legal troubles, “was sent to a detention center for several days in 2006 for failing to appear at a hearing as a witness to a fight, even though his family had never been notified about the hearing and he had already told school officials that he had not seen anything” (Urbina 2009). Ciavarella and Conahan sent children into detention facilities at twice the state average and seem (at least in the case of Ciavarella) to have declined to even advise them or their parents of the children’s right to an attorney (Chen 2009; Urbina 2009). Their scheme went on for years before they were finally caught and arrested. Social work provides further examples of experts causing harm to ordinary persons. In the United States, social services can be intrusive and arbitrary. In 2014 a woman in South Carolina was jailed for letting her nine-year-old daughter play unsupervised in a public park that was “so popular that at any given time there are about 40 kids frolicking” (Skenazy 2014). Another woman reports that her children, “between the ages of 10 and 5,” were taken from her after she was widowed. She chose to leave them unsupervised in the house for “a few hours” at a time while she attended college classes (Friedersdorf 2014). She says officials entered her home and removed her children without attempting to reach her, even though her children knew where she was: “Over the two years during which the case
dragged on, my kids were subjected to, according to them, sexual molestation (which was never investigated) and physical abuse within the foster care system. They were separated from each other many times, moved around frequently, and attended multiple schools.” She was subject to arbitrary conditions for the return of her children. She reports, for example,

I was required to allow CPS [Child Protective Services] workers into my home to conduct a thorough “white glove” type inspection. According to the court order, if any of the workers felt anything was amiss, the return of custody would be delayed or denied. I was told to sweep cobwebs and scrub the oven to their satisfaction, which I did, obsequiously.
Her experience led her to conclude that the system “was not about protection, but power” (Friedersdorf 2014). After Kiarre Harris acquired the legal right to home-school her children, Child Protective Services (CPS) arrived at her door with uniformed police in tow. If early reportage is correct, the CPS workers told Harris they had a court order to remove the children, but were unable to supply the order when Harris asked to see it. She declined to surrender her children without seeing the order. She was arrested for obstructing a court order and her children were moved to foster care (Buehler 2017; Riley 2017). One report (Williams and Lankes 2017) cautioned that “there could be more to the situation” and that Harris “has a history of domestic violence, including using a knife.” Only much later in the article, however, was the substance of this supposed knife-wielding violence revealed: “In 2012, a woman complained that Harris kicked, scratched and gauged her vehicle with a knife. That complaint was classified as criminal mischief. It’s not clear if that was the domestic dispute referenced in the CPS petition.” The “domestic violence” that Williams and Lankes (2017) soberly note seems to have been nothing more than an unsubstantiated claim that Harris keyed someone’s car or otherwise damaged it. So far, The Buffalo News has been unable to find more damning evidence of past crimes or irregularities in Harris’s life. Even if facts clearly unfavorable to Harris should eventually emerge, it seems unlikely that her arrest and the precipitous removal of the children were necessary or appropriate. It seems far more likely that the removal has been harmful to the children. In this as in other cases, CPS seems to have been arbitrary, imperious, and oppressive. In a post on social media Harris claims that one court document includes the vague remark, “Respondent seems to have a problem with whatever school the children
are attending.” She says the same document remarks, “Respondent recently posted a comment on social media ridiculing the school system and people who attend school or graduate from school” (http://thefreethoughtproject.com/mother-arrested-homeschooling-children/). The quoted contents of the court document seem irrelevant to the charge of neglect, and they seem to reflect more concern for the interests of the public school system than for the children’s welfare. Home-schooling advocates have identified the pattern seemingly at work in this and other cases. It is, they say, “common for the [affected school] district to not file families’ homeschool notices, then report them to CPS.” Once the child stops coming to class, the school marks them as absent. “Although the child is not ‘absent’ and is being instructed elsewhere, often the school will continue to mark them so, which is why Child Protective Service gets involved” (Hudson 2017). A British report on family courts (Ireland 2012) that gained national attention at the time of its release found grave deficiencies in the UK system of social services. The report was funded by the Family Justice Council, which is described in it as “an independent body, funded by the Ministry of Justice.” It summarized the evaluations of the psychological assessments submitted to Family Court in 126 cases. Admittedly, the report gives the opinions of some experts about the job done by other experts. Much of it is the credentialed casting aspersions on the non-credentialed. Nevertheless, some of the findings provide a clear indication that state experts have played an arbitrary, obnoxious, and intrusive role in the lives of many residents of the United Kingdom. The arbitrary nature of the psychological assessments being made is suggested by the report’s finding that more than 40 percent of the case findings reviewed failed to adhere to required procedural norms (Ireland 2012, p. 21). Thus, the representatives of the state failed to adhere to state-mandated procedural norms more than 40 percent of the time. “Key findings focus on the fifth of psychologists who, by any agreed standards, were not qualified to provide a psychological opinion, coupled with nearly all expert witnesses not maintaining a clinical practice but seeming to have become full time ‘professional’ expert witnesses” (Ireland 2012, p. 30). One news report (Reid 2012) provides several horror stories supporting the view that family-court practice in the United Kingdom is abusive and harmful. In one case, “after a woman was found by a psychologist to be a ‘competent mother,’ the social workers are said to have insisted on commissioning a second expert’s report. It agreed with the first. They then commissioned a third, which finally found that the mother had a
‘borderline personality disorder.’ All three of her children were taken away for adoption.” Reid notes two institutional facts that seem to explain why “No other country in Western Europe removes so many children from their parents.” First, there were financial incentives to remove children. “The last Labour government set adoption targets and rewarded local councils with hundreds of thousands of pounds if they reached them.” Although these targets had been “scrapped” by the time Reid wrote, “Social workers now [in 2012] get praise and promotion if they raise adoption numbers. David Cameron is also demanding more adoptions – and that they are fast-tracked” (Reid 2012). Second is secrecy. The 1989 Children Act “introduced a blanket secrecy in the family courts,” thereby encouraging “a lack of public scrutiny in the child protection system” (Reid 2012). People have suffered from expert error and abuse with state health and environmental experts, state schools, state-controlled or regulated healthcare, and the criminal justice system. Joan C. Williams (2016) is probably right to say that the “white working class” in America “resents professionals,” including lawyers, professors, and teachers, in part because “professionals order them around every day.” Health economics expert Jonathan Gruber, sometimes dubbed the “Obamacare architect,” famously said that the “stupidity of the American voter” was essential to the passage of “Obamacare,” i.e., the Affordable Care Act of 2010 (Roy 2014). Experts advised the US government to send young Americans to die in Iraq because that nation had weapons of mass destruction, when in fact no such weapons were there. Monetary policy expert Alan Greenspan, former head of the Federal Reserve System, reported his “shocked disbelief” over the Great Recession in testimony before Congress. He confessed that the crisis had revealed a “flaw” in his model of capitalism (Greenspan 2008). America’s economic experts were unable to prevent economic crisis. During and after the crisis, many large organizations received bailouts while many ordinary Americans were left with underwater mortgages, unemployment, or both. The evils of eugenics are an important example of expert error and abuse. As I will note again in Chapter 4, we cannot view such evils as entirely in our past. Ellis (2008) has explicitly called for a “eugenic approach” to fighting crime (p. 258) that would dictate the “chemical castration” of “young postpubertal males at high risk of offending” (p. 255). The experts will tell us which young men are at risk of offending in the future and castrate them as a preventive measure. The Center for Investigative Reporting (Johnson 2013) has found that “Doctors under contract with the California Department of Corrections and Rehabilitation
sterilized nearly 150 female inmates from 2006 to 2010 without required state approvals.” They seem to have been more likely to pressure inmates thought to be at risk of re-offending: “Former inmates and prisoner advocates maintain that prison medical staff coerced the women, targeting those deemed likely to return to prison in the future.” A former inmate who worked in the infirmary of Valley State Prison in 2007 “said she often overheard medical staff asking inmates who had served multiple prison terms to agree to be sterilized.” At least as recently as 2010, medical experts in California prisons pressured women to accept tubal ligation as a preventive measure when they thought such women to be at risk of offending again in the future. It seems fair to conclude that for many people the “problem of experts” discussed in this book is urgent and concrete. I have noted the destructive role of experts in the lives of ordinary people. I oppose the rule of experts, in which monopoly experts decide for non-experts. I sympathize with the people over the experts, technocrats, and elites. Such sympathies may seem populistic. I fear populism, however, and value pluralistic democracy. Because my sympathies might seem populistic to at least some readers, I should probably explain why, in my view, populism and the rule of experts, at least in their more extreme forms, are equally inconsistent with pluralistic democracy. Populist rhetoric is often, though not always, anti-expert (Kenneally 2009, de la Torre 2013). Boyte (2012, p. 300) is probably right to say that “Populism challenges not only concentrations of wealth and power, but also the culturally uprooted, individualized, rationalist thinking characteristic of professional systems, left and right.” Populism is usually a revolt against the “elites,” and that term is usually construed to include state experts and technocrats. We have seen Michael Gove disparage experts. The official site of the French Front National has warned against placing “the destiny of the people in the hands of unelected experts” (Front National 2016). The founder of Italy’s “5 Star Movement” has sharply criticized “supposed ‘experts’” in “economics, finance, or labor” who would presume to speak for the movement. The party’s platform, he said, would be “developed online” by “all of its members” and it would be “a space where everyone really counts for one” (Grillo 2013). Mudde (2004) defines populism as “an ideology that considers society to be ultimately separated into two homogeneous and antagonistic groups, ‘the pure people’ versus ‘the corrupt elite,’ and which argues that politics should be an expression of the volonté générale (general will) of the people” (p. 543). Populism, Mudde explains, “has two opposites: elitism and
pluralism” (2004, p. 543). Elitism “wants politics to be an expression of the views of the moral elite, instead of the amoral people. Pluralism, on the other hand, rejects the homogeneity of both populism and elitism, seeing society as a heterogeneous collection of groups and individuals with often fundamentally different views and wishes” (pp. 543–4). Bickerton and Accetti (2015, pp. 187–8) describe “populism and technocracy” as “increasingly . . . the two organizing poles of politics in contemporary Western democracies.” They note, however, that both poles are opposed to “party democracy,” which they define as “a political regime based on two key features: the mediation of political conflicts through the institution of political parties and the idea that the specific conception of the common good that ought to prevail and therefore be translated into public policy is the one that is constructed through the democratic procedures of parliamentary deliberation and electoral competition.” Thus, “despite their ostensible opposition, there is also a significant and hitherto unstudied degree of convergence between populism and technocracy consisting in their shared opposition to party democracy.” I will argue in Chapters 6 and 7 that knowledge is often dispersed, emergent, and tacit. (It is often, I will say, “synecological, evolutionary, exosomatic, constitutive, and tacit.”) This view of knowledge is consistent with pluralistic (or “party”) democracy. Knowledge is dispersed. Each of us has at best a partial view of the truth. Plural perspectives are thus inevitable and good. In a pluralist democracy, competing partial perspectives on the truth have at least a chance to be heard and to influence political choices. Decisions in a political system – be it populist, elitist, or something else – that override or ignore plural perspectives will be based on knowledge that is at best limited, partial, biased. If knowledge were uniform, explicit, and hierarchical, then we might consider whether it could be best to determine which system of knowledge is the true one upon which all political decision making should be based. In this case, some might seek wisdom in the experts while others might turn to a party or leader embodying popular wisdom, and there would be no “neutral” way to adjudicate the dispute between them. If my more egalitarian view of knowledge is correct, however, then plural democracy is more likely to be the least worst system of political decision making. Thus, my sympathy for ordinary people against elites, experts, and technocrats is not, after all, populistic. Fear of populism is justified. But we should recognize that the rule of experts is also an “escape from democracy” (Levy and Peart 2017). If we are to preserve pluralistic democracy, all of us in the scribbling professions of scholarship, journalism, and policy analysis should recognize that
experts often harass, harry, and harm ordinary people. Examples are legion. I have given a few in this chapter. Popular anger with and repudiation of experts should not be dismissed as irrational fear or ignorant anti-intellectualism. It is all too well justified. There is a problem of experts and it matters. In this volume, I address the problem of experts. I offer an economic theory of experts. My theory is “economic” because it adopts the economic point of view (Kirzner 1976). It is not a theory of the “economic aspects” or “economic consequences” of experts or expertise. It is a theory of experts on all fours with the theories of philosophers such as Mannheim (1936) and Foucault (1980), science and technology scholars such as Turner (2001) and Collins and Evans (2002), and sociologists such as Berger and Luckmann (1966) and Merton (1976). In my theory, an expert is anyone paid for their opinion. Here, “opinion” means only the message the expert chooses to deliver, whether or not they sincerely believe the message to be true. If you are paid for your opinion, you are an expert. If you are not paid for your opinion, you are not an expert. More precisely, if you are paid for your opinion, you occupy, in that contractual relation, the role of “expert.” Thus, “expert” is a contractual role rather than a subset of persons. As I will attempt to show, this definition of “expert” creates a class of economic models that is distinct (though not disjoint) from other classes of economic models, including principal-agent models, asymmetric information models, and credence-goods models. Usually, an expert is defined by their expertise. By such a definition, however, everyone is an expert in something because we all occupy different places in the division of labor and, therefore, the division of knowledge. It thus becomes unclear who is supposed to be an expert and who a non-expert. My definition in terms of contractual relations seems to get around that problem. It also avoids the question of whether you are “really” an expert if your expertise is false or deficient. The economic theory of experts developed in this volume does not require us to judge whose expertise is legitimate or scientific or in some other way sufficiently certified or elevated to “count.” I begin with the nature and history of the problem, which I discuss in Chapters 2–4. There is a large literature on the problem spanning many fields, including philosophy, law, sociology, science and technology studies, economics, forensic science, and eugenics. This literature has not, however, been clearly delineated in the past. While I have not attempted a proper survey, I have attempted to delineate the literature, to identify the main themes of it, and to characterize what I believe to be the four main general
theoretical positions one may take. To anticipate, one may take a broadly favorable view of experts or a broadly skeptical view. And one may view non-experts as having in some way the potential to choose competently among expert opinions or, alternatively, one may view non-experts as lacking the potential for such competence. These two broad perspectives on experts and two broad perspectives on non-experts create four general theoretical postures one might adopt toward experts. The great majority of theorists seem to fit reasonably well into one of these four broad categories, notwithstanding the variety of theories to be found in the literature. In Chapter 2 I discuss these four broad categories for the theory of experts and provide exemplars for each. In Chapter 3 I review two important episodes in the history of the problem. The first is the emergence of Socratic philosophy and its development with Plato, Aristotle, and the Academy. In this tradition, philosophers are experts. The second episode is a mostly nineteenth-century Anglo-American literature on expert witnesses in the law. I will argue that in both literatures the expert is often viewed as both epistemically and morally superior to non-experts. They should be obeyed. Such lionization of experts and expertise is common today as well and is, in my view, inappropriate and unfortunate. All such arguments seem to find their original in Socratic philosophy. This origin was recently invoked by one defender of experts against populism, British celebrity and physics expert Brian Cox. Commenting on Gove’s disdain for experts, he has said, “It’s entirely wrong, and it’s the road back to the cave” (Aitkenhead 2016). With this clear allusion to Socrates’ cave, Cox is telling us that it is unphilosophical to challenge the experts. He goes on to suggest that experts are superior, being unsullied by parochial interests: “Being an expert does not mean that you are someone with a vested interest in something; it means you spend your life studying something. You’re not necessarily right – but you’re more likely to be right than someone who’s not spent their life studying it” (Aitkenhead 2016). As we shall see in Chapter 3, this view of experts as better and wiser is clearly expressed in the Socratic tradition of philosophy, and again in the railings of nineteenth-century “men of science” against the challenges and supposed indignities they experienced when testifying in court. Finally, in Chapter 4 I review several recurrent themes in the theory of experts and discuss how they have been addressed in the past. These common themes are power, ethics, reflexivity, the well-informed citizen, democratic control of experts, discussion, and market structure. I have tried to give at least some indication of what choices or strategies might be
available for addressing each theme within the context of a theory of experts. Part I of this volume provides, then, a kind of map of the territory occupied by the literature on experts. The economic point of view I adopt in this volume is, I think, easily misunderstood. I have, therefore, included a discussion of important supporting concepts from economics. This discussion is found in Chapters 5–7. My theory of experts builds on a theory of the co-evolution of the division of labor and the division of knowledge. Vital to this theory is the idea that the division of labor and division of knowledge are not planned. They emerge unintendedly from the dispersed actions of many people who have not all somehow pre-coordinated their plans. The system was not planned, but it somehow coheres and functions anyway. This notion of “spontaneous order” may seem quite strange. For this reason, I suppose, it is easily misinterpreted. It may seem to be a kind of scientific mysticism, to “reify” markets, or to be in some other way absurd or mysterious. I have tried to dispel this sense of strangeness in part through a purposefully silly example of spectators standing up together in a sports stadium. My willfully silly example shows, I hope, that there is nothing absurd or mysterious in the idea of spontaneous order. The idea is surprising, but not strange. I also consider more serious examples of spontaneous order, including the division of labor. We should not think of the division of labor as driven, somehow, by a grand purpose. It embodies no unitary hierarchy of values. The division of labor has no purpose and serves no particular hierarchy of ends. It is, rather, the emergent and unplanned result of a variety of persons pursuing a variety of potentially inconsistent goals. We can get along, so many of us so well, precisely because we do not have to agree on values. Believers buy Bibles from atheists and the system bumps along tolerably well, all things considered. In Chapter 5 I also consider the perhaps more fraught ideas of “competition” and “competitive” markets. I have called my approach to the problem of experts an “economic theory of experts.” It may not be surprising, therefore, that I take a comparative institutional approach in which expert error and abuse are more likely when experts have monopoly power and less likely in a “competitive” market for expert opinion. I put the word “competitive” in scare quotes, however, because it easily creates misunderstanding. It may seem to invoke the incoherent idea of a market in which “anything goes” and there are, somehow, “no rules.” As I attempt to show in Chapter 5, any such notion of a rules-free market is incoherent. The “free market” of economic theory is always “regulated” by some set of
rules, and the questions are what rule sets have what consequences, whether a given set of rules can be improved, and how best to determine or update the different rules governing different markets. The words “competition” and “competitive” can easily suggest something very different than I have in mind, but I have no satisfactory substitute for them. Therefore, I will use them and hope that the clarification I have attempted in Chapter 5 will minimize misunderstandings. There can be a problem of experts only if different people know different things. Thus, the division of knowledge must be at the center of a theory of experts. The first clear statement of a general problem of experts, that of Berger and Luckmann (1966), built on a clear statement of the division of knowledge. “I require not only the advice of experts, but the prior advice of experts on experts. The social distribution of knowledge thus begins with the simple fact that I do not know everything known to my fellowmen, and vice versa, and culminates in exceedingly complex and esoteric systems of expertise” (p. 46). One’s theory of experts is likely to go wrong unless it embodies and reflects a theory of social knowledge that is at least approximately correct. What is the nature of the knowledge that guides action in society? How is such knowledge produced and distributed? And so on. In Chapters 6 and 7 I argue that knowledge is often dispersed, emergent, and tacit. I will impute this view to Hayek (1937), although I have also learned from Mandeville (1729), who adopted a more radically skeptical and egalitarian view of knowledge than Hayek. If my broadly Hayekian (or perhaps Mandevillean) view of knowledge is about right, then knowledge is not hierarchical, unitary, explicit, and bookish. It is, instead, generally emergent from practice, often tacit, and embodied in our norms, habits, practices, and traditions. The human knowledge that sustains the division of labor in society is better represented as a vast web of Wittgensteinian language games than as some more organized and codified entity such as Diderot’s Encyclopedia. In Chapter 6 I review the history of thought on the division of knowledge from Plato’s Apology to Mandeville’s Fable. In Chapter 7 I continue the history to the present and suggest that the notion of dispersed knowledge is still not widely understood. I describe my theory of experts in Chapters 8–11. I call the theory “information choice theory” for two reasons. First, it underlines the fact that experts are economic actors who must choose what information to convey. Second, it highlights the relationship to the economic theory of “public choice.” Like public choice theory, information choice theory assumes people are the same in all their roles in life. Thus, experts are neither more nor less honest or selfish than non-experts. I identify three
key motivational assumptions for an economic theory of experts. First, experts seek to maximize utility. Thus, the information-sharing choices of experts are not necessarily truthful. Second, cognition is limited and erring. Third, incentives influence the distribution of expert errors. These motivational assumptions, together with a broadly Hayekian view of dispersed knowledge, support a theory of experts and expert failure in which “competition” tends to outperform monopoly. Expert failure is more likely when experts choose for their clients than when the clients choose for themselves. And (again) expert failure is more likely when experts have an epistemic monopoly than when experts must compete with one another, although the details of the competitive structure matter, as we shall see. Finally, in Chapter 11 I discuss how to design piecemeal institutional change to improve markets for expert advice. My theory of experts points to an epistemic critique of what is variously called the “military-industrial complex,” the “national-security state,” or the “deep state.” For reasons I explain in Chapter 12, I will adopt the label “entangled deep state.” I sketch this critique in Chapter 12. My critique of the deep state formed no part of my original intent in writing this book, but it emerged organically from my efforts and belongs, I think, in the volume. Unfortunately, my program for designing incremental reform does not seem to suggest how to reform the entangled deep state. Coincidentally, events in the early days of the Trump administration have caused the term “deep state” to be used more frequently in the popular press. It has become a cliché in very short order. Instead of “military-industrial complex,” we may now read of the dangers of the American “deep state.” The “entangled deep state” is a threat to pluralistic democracy. It is thus an issue of great importance. My critique, however, will be epistemic. Rather than showing how the entangled deep state contradicts or threatens my values, I will show that it produces expert failure. In contrasting “competition” with “monopoly” in the market for expert opinion, I take a comparative-institutions approach to the problem. I ask which institutional arrangements tend to encourage and enable expert error and abuse and which institutional arrangements tend to discourage and diminish expert error and abuse. We have already seen an example of the importance of institutions. We have seen that expert abuses in British family courts are at least partly attributable to the combination of secrecy of court proceedings and financial incentives for the removal of children. The institutional structure of social services in the United Kingdom leads to needlessly high rates of expert error and abuse in the form of arbitrary and inappropriate removal of children from their families. A different
institutional arrangement could produce fewer miscarriages of justice. By emphasizing comparative institutional analysis, my theory is structurally similar to the works of many predecessors, including William Easterly and Vincent Ostrom. Easterly (2013) rightly rails against the “tyranny of experts” in development. As I shall say again in Chapter 4, he repudiates the “technocratic illusion,” which he defines as “the belief that poverty is a purely technical problem amenable to such technical solutions as fertilizers, antibiotics, or nutritional supplements” (p. 6). Easterly says, “The economists who advocate the technocratic approach have a terrible naïveté about power – that as restraints on power are loosened or even removed, that same power will remain benevolent of its own accord” (p. 6). Poverty is about rights, not fertilizer. “The technocratic illusion is that poverty results from a shortage of expertise, whereas poverty is really about a shortage of rights” (p. 7). Easterly thus adopts a comparative institutional approach to the problem of experts. Where experts have power in a technocratic model, there is scope for abuse and bad results easily emerge. If, instead, the poor are given civil and political rights, better outcomes will likely emerge. Top-down planning cannot match undirected economic development. Ostrom (1989) adopts a comparative institutional approach to public administration. He criticizes the administrative state. Strauss (1984, p. 583) identifies four “significant features of modern administrative government,” and I take those features to define the “administrative state” Ostrom opposed. The gist of it is that the US federal government includes a complex bureaucracy that is at least somewhat protected from democratic process and exists as a separate – though not homogeneous or unified – power base. The President is “neither dominant nor powerless” (Strauss 1984, p. 583) in their relations with the administrative agencies of the federal government. The terms “administrative state” and “deep state” identify distinct, but overlapping, phenomena. The term “administrative state” has generally been used in the context of regulatory agencies and has not usually included the military or the intelligence agencies. Moreover, the “deep state” has usually been construed to include nominally private actors such as defense contractors. The two terms have been distinct, but overlapping. They have recently come into popular use with vague, inconsistent, and overlapping meanings. Ostrom’s critique of the administrative state challenges the traditional view in public administration that sees the state as a unitary actor. In the traditional and still dominant view put forward by Wilson (1887), Ostrom
explains, “There will always be a single dominant center of power in any system of government; and the government will be controlled by that single center of power” (1989, p. 24). In the Wilsonian vision Ostrom identifies, strict hierarchy is the only possible form of government. “So far as administrative functions are concerned,” Wilson explains, “all governments have a strong structural likeness; more than that, if they are to be uniformly useful and efficient, they must have a strong structural likeness . . . Monarchies and democracies, radically different as they are in other respects, have in reality much the same business to look to” (1887, p. 218). In this view of things, trained experts must be protected from democratic pressures. Wilson avers that “administration lies outside the proper sphere of politics. Administrative questions are not political questions” (1887, p. 210). Wilson acknowledges some role for democracy, but the core of governance is technical decision making that only experts are competent to perform: “Although politics sets the tasks for administration, it should not be suffered to manipulate its offices” (Wilson 1887, p. 210). The principle of central bank independence illustrates the idea that the technical decision making of experts should be protected from democratic interference. Today, this sort of thinking informs not only politics and “administration,” but also law, medicine, journalism, and education. Against this hierarchical view, Ostrom cites the basic institutional fact of “overlapping jurisdictions and fragmented authority” (1989, p. 106) characterizing American federalism. If “Individuals who exercise the prerogatives of government are no more nor no less corruptible than their fellow citizens” (1989, p. 98), then such fragmentation may produce better results than the unitary government imagined by Wilson. Ostrom defends polyarchy against hierarchy. In so doing he proposes a radical change in perspective for the theory of public administration. My theory of experts expresses a vision similar to that of Ostrom (1989). As I will show, my views on experts are congruent with those of other scholars upon whom I have drawn, including Levy and Peart (2017), Turner (2001), and Berger and Luckmann (1966). It is nevertheless true, I think, that the dominant view in the literature on experts is more hierarchical than that of Ostrom (1989), as illustrated, perhaps, by Cole (2010) and Mnookin et al. (2011). Like Ostrom, I propose moving away from the mistaken idea that only hierarchy is coherent and workable and toward the idea that polyarchy generally produces better results. I reject hierarchical views such as that of Wilson (1887) in favor of polyarchy. As with Ostrom, my preference for polyarchy over hierarchy is a conclusion of the analysis and not its starting point. Moreover, the
preference for polyarchy follows only if we add a norm of beneficence to my value-free analysis. If the theory is more or less right, however, no very specific normative assumption is required to infer the desirability of polyarchy. A general good will toward other humans is all that is required. If I am mistaken to prefer polyarchy to hierarchy, the error is to be sought, presumably, in my value-free scientific analysis rather than my morals. Unfortunately, it is possible to be beneficent toward some and malicious toward others. All men are created equal, but we can have perfectly polycentric political order with liberty for some and tyranny for others. The United States during slavery and South Africa under apartheid are salient examples. One may point out that restricting the liberty of some narrows society and reduces the general level of benefits it provides even to those not in the tyrannized group. But powerful members of the tyrannizing group may derive special benefits from oppressing others. Large slaveholders in the American South grew rich by stealing the labor of the persons they owned (Fogel and Engerman 1974). We must choose between the “love of domination and tyrannizing” (Smith 1982, p. 186) and the material and spiritual benefits of amity and free exchange among equals. My analysis in this book is more thoroughly epistemic than Ostrom’s, but the theory of public administration and the theory of experts both address the question of hierarchy vs. polyarchy. Do we get better results with hierarchy or polyarchy? How best can we coordinate the dispersed knowledge of many persons to produce good social results? The Wilsonian vision of expert rule through hierarchical administration is based on a hierarchical vision of knowledge. Knowledge is uniform, explicit, and integrated in the Wilsonian vision. To coordinate human actions, it holds, we must bring them into harmony with the hierarchy of knowledge, and such knowledge is embodied in experts. But if knowledge is not hierarchical, if it emerges from practice, sometimes in tacit form, then only polyarchy can coordinate action well. If knowledge is dispersed and heterogeneous, administrative hierarchy and the rule of experts will give us bad outcomes. Like Ostrom, I wish to challenge the administrative state. In Chapter 12 I will note that the administrative state is inconsistent with the rule of law. Unfortunately, the term “rule of law” is often used to mean something like “vigorous enforcement of legislation” or “cracking down” on supposed bad actors. In this volume, however, it is a term of art in law with a very different meaning. The “rule of law” is central to the liberal ideal of liberty. The leading interpreter of the rule of law, A. V. Dicey, described it by saying: “It means, in the first place, the absolute supremacy or predominance of regular law as opposed to the influence of arbitrary power, and
excludes the existence of arbitrariness, of prerogative, or even of wide discretionary authority on the part of the government” (1982: 120). (See Fallon 1997 on the meaning of “rule of law” and Epstein 2008 on “Why the modern administrative state is inconsistent with the rule of law.”) Mises (1966) invoked the rule of law when contrasting hegemonic and contractual bonds in society. “In the hegemonic state there is neither right nor law; there are only directives and regulations which the director may change daily and apply with what discrimination he pleases and which the wards must obey” (p. 199). The administrative state abrogates the rule of law and is in this sense lawless. It should be dismantled. But it is impossible to predict the consequences of constitutional innovations (Devins et al. 2015). Thus, any dismantling of the administrative state should be piecemeal. It is not clear how to unwind it, and precipitate measures may replace a bad thing with something worse. In particular, if the administrative state is replaced by party rule we will have moved even further away from pluralistic democracy. Party rule would strip away the necessity of supporting policy decisions with rational analysis. In the administrative state, rational support for the experts’ decisions is often more ceremonial than substantive. Nevertheless, the façade of objectivity and neutrality does provide at least some check on arbitrariness and inconsistency. Under party rule, the arbitrariness and inconsistency of state decision makers would likely be even worse than under the current administrative state. Decisions would be overtly political. They would reflect supposed judgments that this person or group is loyal or good whereas that group or individual is disloyal or bad, that this way of proceeding reflects the inner nature of the nation or its people whereas other ways are foreign and bad. Such “arguments” are built on a supposed fealty to the ruling party and its values. Such justifications of arbitrary decisions are immune to rational criticism. They allow similar cases to be treated in dissimilar ways. Anyone challenging the wisdom or legitimacy of a given policy choice thereby reveals themself to be evil or a servant of foreign interests. Populist movements seem to carry with them the danger of replacing the administrative state with party rule. It is hard to guess whether “deconstruction of the administrative state” (Rucker and Costa 2017) in America would produce party rule or increased liberty. Gloomy prognosticators would do well to remember the embarrassing spectacle of Stuart Hall (1979), who mistakenly warned of the “authoritarian populism” of Thatcher’s government, which “has entailed a striking weakening of democratic forms” (p. 15). Hall went so far as to suggest that Thatcher’s government was a kind of semi-fascism,
which, “unlike classical fascism, has retained most (though not all) of the formal representative institution in place” (p. 15). Hall’s harrowing alarm may now seem exaggerated. If Chicken Littleism is a consummation devoutly to be unwished, some considerations are nevertheless discomforting. As the American public grows more aware of abuses of both the administrative state and the deep state, the legitimacy of traditional constitutional structures may be lost. These traditional structures include legislative and judicial checks on executive power. In such an environment, limits on state power may be further reduced, and a move to one-party rule is not unthinkable. The danger I am warning of is not the product of one political party or one elected official. America’s two major political parties have both given greater support to elite privilege and state interests than to the rule of law or the interests of the people (Greenwald 2011). They have both cooperated in restricting political competition, thereby reducing political pluralism (Miller 1999, 2006). Political pluralism and the rule of law have therefore become increasingly alien concepts in American politics. The problem is not that this politician or that party seeks evil. The problem is that the rule of experts is incompatible with the rule of law and pluralistic democracy. The rule of experts progressively weakens the rule of law and pluralistic democracy, making it ever more likely that any democratic overthrow of the system will exclude as unknown and unimaginable both pluralism and the rule of law. As administrations and parties change, the risk of a turn to party rule persists unabated. We are not doomed to travel the road to serfdom (Hayek 1944). We can choose to strengthen the rule of law, embrace democratic pluralism, and repudiate party rule. Until we do, however, the dangers inherent in the twin evils of the administrative state and the entangled deep state will persist. Those dangers arise from the logic of the situation, rather than one political party or one elected official. My interest in the problem of experts emerged from my interest in forensic science and the problem of forensic-science error. That starting point may have helped my thinking. The problem of experts in forensic science presents itself free of some potentially distracting issues present in other areas such as climate science and monetary policy. The problem of expert error in forensic science cannot be attributed to Wilsonian progressivism. All parties, whether “left,” “right,” or something else, agree that forensic-science error is a bad thing. The fact that forensic science has not been associated with any political party or program may have helped me to see the problem of experts relatively unclouded by ideological bias. It also
freed me from the necessity of judging the theories putatively used by experts. Forensic scientists judge matters of fact that are independent of the theory they use to evaluate evidence. Whether the defendant shot the victim is not a matter of perspective or theoretical framework. Jones did or did not shoot Smith. This theory-independence allowed me to consider error and the influences on error without pretending to second-guess the expert’s theory. I think this fact may have helped me to see in clearer focus the influence of social context, including incentives, on the decisions of experts. Finally, forensic science has, unfortunately, been a rich source of examples of expert failure. Although my interest in experts arose from the relatively narrow context of forensic science, my analysis of it applies broadly. I noted above that it points to an analysis and critique of the entangled deep state. The “intelligence community” is a group of experts who seem clearly to be defending their power and prerogatives as experts. Whether these experts are cynical or sincere, their knowledge claims and their corresponding claims to power and autonomy are an undemocratic invitation to expert failure. It may be that my more radical view of knowledge adds support to what I have called “Humean status quo bias” (Koppl 2009; Devins et al. 2015). Vernon Smith (2014) has noted that “the language ‘regulation’ or ‘deregulation’” is “unfortunate language.” I expand on this important point in Chapter 5. Economists of the “Austrian” school have warned of the “perils of regulation” (Kirzner 1985). I strongly agree that regulation has the very sort of perils Kirzner warns of. The epistemics of this volume strengthens Kirzner’s warning. But deregulation too has epistemic perils. Beware the perils of deregulation. We may call a general resistance to political and institutional change “Humean status quo bias.” Hume remarked that “a regard to liberty, though a laudable passion, ought commonly to be subordinate to a reverence for established government.” My position is similar to that of Charles Lindblom (1959), who argued in favor of “muddling through.” Devins et al. (2015, 2016) take up a similar position “against design.” I have noted that liberty does not imply equality. But the reverse entailment is valid: Equality implies liberty. The cliché that we may be equally slaves under collectivism is wrong. We cannot all be equal unless we are all equally free. It is well understood, I think, that in a controlled system the powerful are more likely to serve their own interests than the general interest. In that case, however, we will not have equality. There will be no equality of income, status, or rights between those who command and those who must obey. Something similar is true of a “regulated”
system even under the most perfect democracy. If the system is “regulated,” then state experts will, ex hypothesi, decide many things. Nonexperts will have to obey the decisions of experts. And if these experts cannot avoid the “synecological bias” I describe in Chapter 10, then many of those expert decisions will be bad for the non-experts on whose behalf (putatively) they were made. Given synecological bias, the experts’ decisions will be bad even if they have zeal and good will. My discussion of expert failure in Chapter 10 may help to make the point that monopoly experts are not on a footing of equality with non-experts. Only when each of us is free of the rule of experts can we be equal. If you get to choose for me, we are not equal. Mises (1966, p. 196) has expressed the point forcefully: Where and as far as cooperation is based on contract, the logical relation between the cooperating individuals is symmetrical. They are all parties to interpersonal exchange contracts. John has the same relation to Tom as Tom has to John. Where and as far as cooperation is based on command and subordination, there is the man who commands and there are those who obey his orders. The logical relation between these two classes of men is asymmetrical. There is a director and there are people under his care. The director alone chooses and directs; the others – the wards – are mere pawns in his actions.
Under the rule of experts, knowledge is imposed on the system. Knowledge should instead emerge from the system. If knowledge is imposed on the system, it is imposed by someone who imposes upon and therefore dominates others. The persons imposed upon are not in a relation of equality with those imposing a knowledge scheme on society. The view of emergent knowledge I develop in this volume shows that we need not impose a unitary scheme of knowledge on society. We can let knowledge emerge and flourish without attempting to control or systematize it. If we are to be free, we must let knowledge emerge freely. And we cannot be free unless we are free of the domination and tyrannization of those who would impose a uniform system of knowledge on others. In other words, we cannot be free unless we are equal. The disposition to impose knowledge on the system arises from the very act of studying society. In examining human society, we may easily forget that we too are humans in society. We see society as an anthill and people as ants. We gaze down upon the anthill as if we were higher beings. Alfred Schutz described what we might call the “anthill problem”: The theoretical perspective requires us to imagine ourselves above the system even though we live within the system. I don’t think Schutz meant his description of the anthill problem as a warning against hubris among social scientists, but he
should have. Schutz compared the agents in our models (“personal ideal types”) to “puppets” whose strings the theorist pulls. The puppet’s “destiny is regulated and determined beforehand by his creator, the social scientist, and in such a perfect pre-established harmony as Leibnitz imagined the world created by God.” The very act of theorizing society puts you in a spurious godlike position. “What counts is the point of view from which the scientist envisages the social world” (Schutz 1943, pp. 144–5). The permanent and ineradicable crisis of social science is the theorist’s dual role as godlike observer and equal participant.
PART I NATURE AND HISTORY OF THE PROBLEM
2
Is There a Literature on Experts?
INTRODUCTION
Experts have knowledge not possessed by others. Those others, the laity, must decide when to trust experts and how much power to give them. We hope for a “healer” but fear the “quack,” and it is hard to know which is which. Experts may play a strictly advisory role or they may choose for others. Psychiatrists, for example, may have the legal power to imprison persons by declaring them mentally unfit. There is, then, a “problem with experts,” as Turner (2001) has noted. When do we trust them? How much power do we give them? What can be done to ensure good outcomes from experts? What invites bad outcomes? And so on. Different people know different things, and no one can acquire a reasonable command of the many different fields of knowledge required to make good decisions in all the various domains of ordinary life, including career, health, nutrition, preparing for the afterlife, voting, buying a car, and deciding which movie to watch this evening. Each of us feels the need of others to tell us things such as which medical therapy will best promote long-term health, which religious practices will produce a happy existence after death, which dietary practices are most likely to postpone death, which schools are most likely to launch our children into good careers, what hairstyles are the most fashionable, and which car will be the most fun to drive. In our work lives, too, we rely on the expertise of others. Corporate managers require the opinions of experts in many areas, including engineering, accounting, and finance. Investigating police officers require forensic scientists to tell them whether there were “latent” fingerprints at the crime scene and whether any such prints “match” those of the police suspect. A civil attorney may seek expert opinion on the risks of using a product that harmed their client. And so on. We rely, in other words, on the opinions of experts. We rely on
experts even though we are conscious of the risk that experts may give bad advice, whether from “honest error,” inattention, conflict of interest, or other reasons. The “problem of experts” is the problem that we must rely on experts even though experts may not be completely reliable and trustworthy sources of the advice we require from them. The problem of experts has been recognized in some degree and in some form, however vaguely, from an early point in the history of Western thought. But it has emerged as a distinct problem only quite recently. The earliest explicit treatment of a general problem of experts may be Berger and Luckmann (1966), who link the problem to the division of knowledge in society. “I require not only the advice of experts, but the prior advice of experts on experts. The social distribution of knowledge thus begins with the simple fact that I do not know everything known to my fellowmen, and vice versa, and culminates in exceedingly complex and esoteric systems of expertise” (p. 46). The problem of experts originates in the “social distribution of knowledge” that Berger and Luckmann emphasize. I will examine the social distribution of knowledge in Chapters 6 and 7. The basic idea, given earlier, is simple enough, however: Different people know different things. The division of labor entails a division of knowledge. No one would deny this humble truth. As we shall see, however, there seems to have been only limited clear awareness of the division of knowledge in Western thought before F. A. Hayek (1937) elevated it to a central theme of political economy. The idea has even been denied or decried by skilled scholars. And few thinkers have been willing and able to recognize the implications of dispersed knowledge as understood by Hayek. As we shall see, Hayek drew our attention to the ways in which dispersed knowledge is embedded in practice rather than in books, formulas, and rational thought. In Chapter 6 I will discuss the surprising fact that the modern literatures on the problem of experts to be found in sociology and in science and technology studies can be traced to a source only rarely cited in those literatures, namely, Hayek’s 1937 article “Economics and Knowledge.” It would be difficult or impossible to recognize a general problem of experts without a clear and explicit recognition of both the existence and the importance of the division of knowledge in society. We should not be surprised, therefore, that it was only after Hayek’s articulation of the problem of dispersed knowledge that we had a clear and explicit recognition of a general problem of experts. Nor is it likely that one could clearly see a problem of experts in a society lacking a relatively high degree of Weberian rationality. Weber’s meaning
is contested. I will therefore define “Weberian rationality” as the practice of applying procedures of counting, numerical measurement, and conscious deliberation to a large number and variety of choices in human life, the tendency to articulate ends pursued, the further tendency to apply an efficiency criterion to the choice of means to achieve those ends, and a disposition to seek constancy, harmony, and order among the ends pursued. (See, generally, Weber 1927 and Weber 1956.) Modern corporate enterprises, of course, exhibit a high degree of Weberian rationality. This sort of rationality contrasts with Marco Polo’s representation of thirteenth-century business practices. (See any standard edition of his memoir, which is usually entitled The Travels of Marco Polo in English.) The particulars of his story cannot all be trusted, of course, and it is disputed whether even the broadest outlines are factual. It is nevertheless notable how little careful calculation seemed to have entered his business dealings as described in his famous narrative. His business venture was an adventure. Presumably, the uncertainties of the venture would have overwhelmed any attempt at careful calculation. And, importantly, double-entry bookkeeping was not yet available to him. (Marco Polo’s adventure is dated 1271–95, which precedes any plausible date for the “first” example of double-entry bookkeeping. See Yamey 1949, p. 101; de Roover 1955, pp. 406–8 and 411; Edwards 1960, p. 453.) Without double-entry bookkeeping, good calculations would be difficult or impossible. In a world of low Weberian rationality there will be priests, soothsayers, physicians, jurists, and other experts. But it seems unlikely that any thinker in such a world would very clearly anticipate the “problem of experts” considered in this book. Only when a large variety of choices are made at least in part through Weber-rational processes is it reasonably possible for a social observer to recognize a general problem of experts. Earlier thinkers may not have been in a good position to identify a general problem of experts. They did produce, however, some analyses that addressed questions of expertise. Sometimes experts are viewed favorably, especially by writers putting themselves forward as experts. We can also find complaints against experts. Aristophanes, for example, mocked Athenian oracle mongers (Smith 1989). In Henry VI, Shakespeare has Dick the butcher exclaim, “The first thing we do, let’s kill all the lawyers.” The line may have alluded to Wat Tyler’s Peasant Revolt of 1381, whose partisans “expressed a particular animosity against the lawyers and attornies” (Hume 1778, vol. 3, p. 291), perhaps because “they had deliberately advised the barons to reinstitute servile labor as the just and traditional due” (Schlauch 1940). The physicians in Molière’s L’Amour médecin are
incompetent fops more likely to kill than heal. DeFoe (1722) contrasts skilled physicians with the “Quack Conjurers” (p. 33) brought forward by the plague. He laments “the foolish Humour of the People, in running after Quacks, and Mountebanks, Wizards, and Fortune tellers” (p. 42). Only very recently has Western thought produced any clear and coherent statements of a general problem with experts. Nor have we yet had a common discussion of the problem of experts. Scholars producing similar and complementary analyses do not seem to be aware of one another. We can nevertheless identify a literature on experts stretching back to antiquity with the aid of a simple classification. This simple classificatory scheme helps us to identify works that are relevant to a theory of experts even when they do not include a clear and coherent statement of a general problem of experts. Indeed, as we shall see, the literature includes theoretical treatments of experts in specific domains. In some cases the only problem with experts seems to be that nonexperts are not sufficiently obedient. In this and the next two chapters, I will discuss many past thinkers. But I will not attempt to provide anything approaching a complete survey of the literature on experts. There are important figures I will not mention at all, such as Scheler (1926), Szasz (1960), Tetlock and Gardner (2015), and Conrad (2007). There are others, such as Jürgen Habermas (1985) and James B. Conant, who will be mentioned, but given, perhaps, insufficient attention. Rather than attempting a survey of the literature on experts, I have tried to show that there is a literature on experts. This literature is spread across several fields, including philosophy, science and technology studies, economics, political science, eugenics, sociology, and law. I believe the simple taxonomy provided in the next section creates a common framework for comparing different theoretical treatments of experts. It makes them commensurable.
A SIMPLE TAXONOMY
Theories will differ according to how they model experts and how they model nonexperts. Sometimes experts are modeled as disinterested, neutral, objective, “scientific,” incorruptible, and so on. Call such experts “reliable.” They are reliable because nonexperts can rely on them to provide good guidance. Sometimes experts are modeled as interested parties who may act on local, partisan, or selfish motives, or as subject to unconscious biases, or as corruptible, and so on. Call such experts “unreliable.” They are unreliable because nonexperts cannot always rely on them to provide good guidance. Nonexperts may be modeled as rational,
reasonable, active, skillful, or otherwise competent. Call such nonexperts “empowered.” They have, potentially at least, the power to form a reasonable estimate of the reliability of experts or, perhaps, to override or ignore the advice of experts or decisions made for them by experts. Nonexperts may be modeled instead as unable to reason well, subject to the supposed dictates of “culture” or “ideology,” or otherwise passive and causally inert. They lack the potential to competently judge or to choose among expert opinions. Call such nonexperts “powerless.” They do not have the power to form a reasonable estimate of the reliability of experts or to competently override or ignore decisions made for them by experts. This taxonomy gives us four categories for a theory of experts: reliable-empowered, reliable-powerless, unreliable-empowered, and unreliable-powerless. Any theory of experts can be fitted into this humble taxonomy. Technical fields such as psychometrics give the word “reliable” a technical meaning from statistics that I am not invoking here. Kaye and Freedman (2011, p. 227) say, “In statistics, reliability refers to reproducibility of results. A reliable measuring instrument returns consistent measurements.” Thus defined, reliability is distinct from validity. “A valid measuring instrument measures what it is supposed to” (Kaye and Freedman 2011, p. 228). These scientific definitions are important in many contexts, including assessments of the quality of forensic-science evidence. The word “reliable” may also have, however, the broader meaning I invoke in this chapter. My taxonomy does not touch all dimensions. It does not address how or whether the theorist models themself. Does the theorist model themself as an “expert”? Does the theorist express sympathy for some parties over others? And so on. And it may be difficult to classify a theory in which the reliability of experts or the power of nonexperts is endogenous. Nevertheless, any theory will contain a model of the expert and a model of the nonexpert, and we can usually classify the experts as generally “reliable” or “unreliable” and the nonexperts as generally “empowered” or “powerless.” Table 2.1 represents my simple taxonomy. American progressives such as Wilson (1887) tended to view experts as reliable and nonexperts as powerless. Nonexperts were expected to follow the dictates of the experts and little weight was given to the idea of resistance or of feedback from the nonexpert to the expert. Socrates and the philosophers of the Academy belong in the “reliable-powerless” category. I discuss this group in Chapter 3, where I attempt to defend at some length the claim that Socrates, Plato, and Aristotle put themselves forward as experts who should be obeyed. In other words, they
Table 2.1 A taxonomy for theories of experts

Experts are reliable, nonexperts are powerless: Theories in this box tend to view experts favorably. Some theorists in this group have argued in favor of experts deciding for nonexperts in at least some areas such as reproduction.

Experts are reliable, nonexperts are empowered: Theories in this box tend to view nonexperts as dependent on experts, but reasonably able to ensure that experts serve the interests of nonexperts.

Experts are unreliable, nonexperts are powerless: Theories in this box tend to view the thinking of both experts and nonexperts as determined by historical or material circumstances. These circumstances may be oppressive to both experts and nonexperts, but experts tend to be in a position of dominance over nonexperts.

Experts are unreliable, nonexperts are empowered: Theories in this box tend to view experts as posing a risk to nonexperts in part because experts may place their own interests above those of nonexperts. Mechanisms may exist, however, to protect nonexperts from the errors and abuses of experts.
wanted the rule of experts. As we will also see in Chapter 3, expert witnesses in law have often adopted views that place them as well in the reliable-powerless box. Eugenicists such as Francis Galton (1904), Karl Pearson (1911), and J. M. Keynes (1927) tended to fall into the category reliable-powerless. The “innate quality” of the population (Keynes 1926, p. 319) was to be preserved and enhanced. Eugenicists often sought to achieve this end in part by denying the benefits of child rearing to some individual members of that community, with eugenic experts deciding who would be denied. Keynes’s letter to Margaret Sanger of 23 June 1936 indicates a “shifting” of his views. Keynes said, “In most countries we have now passed definitely out of the phase of increasing population into that of declining population, and I feel that the emphasis on policy should be considerably changed, – much more with the emphasis on eugenics and much less on restriction as such” (Keynes 1936). This “shifting” may seem ominous. Singerman (2016, p. 541) says, however, “Eugenics contained many different ideologies and policies, and Keynes was among many enthusiasts who never endorsed forcible sterilization or other state violence.” Presumably, Keynes favored a population policy that was in some way interventionist, neither laissez faire
nor dictatorial. Singerman notes that Keynes “never advocated any specific policies in print” (2016, p. 541). In private conversation, he has wryly commented that Keynes seems to have taken the attitude that “eugenic policies will be extremely important . . . tomorrow.” Singerman’s researches seem to carry us only up to about 1930. The potentially ominous letter to Sanger is consequently beyond the scope of his inquiries. Overall, however, current evidence does not seem to support the claim that Keynes specifically favored or advocated forced sterilization. And yet the number and “innate quality” of the population were in some unarticulated way to be a matter of state policy. A great variety of policy positions in economics may be described as “Keynesian.” And Keynes’ own views on economic policy are contested. It seems fair to interpret Keynes, however, as an advocate of the rule of experts in both economic policy and population policy. Singerman (2016) links Keynes’ policy preferences in these two areas, which might at first seem to be unrelated: After the catastrophe of the Great War, Keynes continued to link ethics and eugenics as he sought to construct a moral society. A crippled continent faced unemployment, starvation, and revolution, so achieving a world of good things and good states of mind was possible only through technocratic management of population and economy. At the same time, this very mechanism for building an ethical society itself required attention to the nature of population as well as the number. The creation of both the caste of technocratic managers and the educated democratic citizenry who would follow them demanded active – if always ambiguous – measures to address the biological characteristics of those citizens. By the end of the 1920s, Keynes judged the immediate pressure of overpopulation to have receded. He could begin to foresee how material progress could positively combine with biological science, to produce individuals both taught and shaped to make good use of the absence of want. (Singerman 2016, p. 564)
For Keynes, economy, population, and morality are all subject in some degree to state planning. And state planning in these three areas is linked: The success of planning in any one area depends on successful planning in the other two. This vision of planning in three interlocking domains seems to limit the scope for human agency. Our choices must be right and rational. Eugenic experts will act to produce a future population that makes right and rational choices as judged by those experts today. The real authors of future moral choices are today’s eugenic, moral, and economic experts, especially Keynes. This mechanistic vision exalts the knowledge of one man and one mind, while mortifying the freedom and creativity of future generations.
It is, unfortunately, necessary to make a kind of disclaimer on Keynes and Keynesianism. Some readers may mistakenly believe (whether angrily or gleefully) that I think Keynes’ views on eugenics somehow taint or disprove Keynesian economics. I repudiate such callow logic. Although I reject the sort of economic policies usually associated with Keynes (Koppl 2014, pp. 129–39), I have profited from his economic theory and drawn on it in my own work (Koppl 2002). It is not my opinion that Keynesian economists of any sort secretly wish for eugenic policies or otherwise fail some moral test. Nor do I think that Keynes’ eugenic opinions somehow prove that policies of economic “stimulus” are bad or ineffective. Mannheim (1936, 1952) is an important figure in the unreliable-powerless box. As Goldman explains (2009), Mannheim “extended Marx’s theory of ideology into a sociology of knowledge.” Marx lacked a clear understanding of the division of knowledge in society and tended to view the cultural and ideological “superstructure” as caused by the base without causally influencing the base. The material forces of production cause the bourgeoisie to think in one way and the proletariat in another. True freedom of thought outside the revolutionary vanguard will not exist until the workers’ paradise has arrived. Mannheim was subtler than Marx on these points, but in most cases the thinking of a social class or of a “generational unit” is “determined in its direction by social factors” (1952, p. 292). Mannheim seems oblivious to the division of knowledge and its coevolution with the division of labor. For example, he describes the “mental climate” of “the Catholic Middle Ages” as “rigorously uniform,” notwithstanding some differences between clerics, knights, and monks (1952, p. 291). But the “mental climates” of cobblers, courtiers, and concubines were all very different from one another, as were those of sailors, saints, and sinners; pages and popes; blacksmiths, beggars, and bandits; goldsmiths; maidens; widows; weavers; tillers; jugglers; jewelers; minstrels; masons; hunters; and fishers. The “mental climate” of “the Catholic Middle Ages” varied within each such social role, between places, and over time. It varied, importantly, with one’s place in the division of labor. In another passage Mannheim says: “The real seat of the class ideology remains the class itself . . . even when the author of the ideology, as it may happen, belongs to a different class, or when the ideology expands and becomes influential beyond the limits of the class location.” By exempting science and his own ideas from the category of “ideology,” Mannheim placed his theory, in a sense, above the system and not in the system. (He did not “put the theorist in the model,” to use language developed in
Chapter 4.) From his seat above the system Mannheim proclaimed boldly that “man’s thought . . . now perceives the possibility of determining itself” (Mannheim 1940, p. 213). Foucault (1980, 1982) also belongs in the “unreliable-powerless” category. The individual is oppressed by a “discipline,” which is (1) a knowledge system (“régime du savoir”) such as medicine or psychology combined with (2) a communication system such as academic journals and the popular press and (3) some elements of more direct power, such as the ability to write prescriptions or to imprison someone declared “insane.” The victims of such “disciplines” may resist, and some generally do. But they resist only from a position of subjugation that they cannot easily escape. They are trapped in part because the knowledge system defines reality for the oppressed person, which makes it hard for the oppressed to recognize that they are indeed oppressed. Some high school students may be rebellious, while others see nothing to rebel against. In any event, the school’s “discipline” gives power to the teachers, the administrators, and, importantly, the overall system, but not to the students putatively served by the system. Foucault (1982) has expressed a special interest in “the effects of power which are linked with knowledge, competence, and qualification: struggles against the privileges of knowledge.” For Foucault such struggles “are also an opposition against secrecy, deformation, and mystifying representations imposed on people” (1982, p. 781). Foucault’s linking of power and knowledge is suggestive and points to the possibility that experts may have pernicious power. His refusal to view power as extrinsic to the social fabric or exclusive to a small oppressing minority is also suggestive. Power is, somehow, in the daily fabric of social life and inextricable from it. Jürgen Habermas probably also belongs in the “unreliable-powerless” category of Table 2.1. Turner says that for Habermas, “There is an unbridgeable cultural gap . . . between the world of illusions under which the ordinary member of the public operates and worlds of ‘expert cultures’” (2001, p. 128). (Turner cites Habermas [1985] 1987, p. 397.) Schiemann (2000) challenges Habermas’s distinction between communicative and strategic action. Merton (1945) and Goldman (2001) are probably best placed in the “reliable-empowered” category. For Merton, experts are indispensable and mostly reliable, notwithstanding their potential to place special interests above the general interest. Because “intellectuals in the public bureaucracy” are in a position of “dependency” to the democratically elected policy
maker, there is a relatively high risk of the “technician’s flight from social responsibility” through excess deference to the policy maker’s goals (p. 409). There is a relatively low risk of expert abuses such as imposing the expert’s values on others. Adopting the perspective of veritistic social epistemology, Goldman (2001) examines the problem of “how laypersons should evaluate the testimony of experts and decide which of two or more rival experts is most credible” (p. 85). He reviews some strategies for nonexperts to evaluate experts and places his discussion in the context of the philosophical literature on testimony. Unlike many other philosophical analyses of the issue, however, Goldman gives some attention to the social context of testimony. He notes, for example, that novices may be able to make reliable judgments of interests and biases (pp. 104–5). Goldman might have reached a more optimistic position if he had given even greater attention to social structure. I will note presently, for example, the importance of competition in the market for expert opinion. Goldman neglects ways that details of context can matter. When two competing experts present alternative interpretations of a given issue, the truth may hinge on factors a novice is capable of judging. In DNA profiling, for example, the judgment about whether the genetic information in the crime-scene sample matches that of the suspect’s sample depends in part on the interpretation of an “electropherogram,” which is nothing more than a squiggly line. In some cases, novices will be able to see for themselves whether the two squiggly lines have peaks at the same locations. In such cases, modularity in expert knowledge allows novices to make reasonable judgments without acquiring all the specialized knowledge of the contending experts. Goldman notes that the power of nonexperts is generally enhanced by the existence of competing experts whose exchanges the nonexpert can observe (2001, pp. 93–7). His overall conclusions are “decidedly mixed, a cause for neither elation nor gloom” (2001, p. 109). He thus views laics as neither utterly powerless nor fully empowered. Finally, Berger and Luckmann (1966) are best placed in the unreliable-empowered category, as are Stephen Turner (2001) and Levy and Peart (2017). These thinkers recognize that experts may be unreliable and generally oppose giving experts an unchecked power to decide for others. And in one way or another they recognize the ability of nonexperts to challenge the authority of experts. Turner, for example, insists that “We, the non-experts, decide whether claims to cognitive authority . . . are to be honoured” (2001, p. 144).
BERGER AND LUCKMANN
I noted that the problem of experts originates in the “social distribution of knowledge” that Berger and Luckmann (1966) emphasize. In Chapter 7 I will explain how the idea of the division of knowledge in society migrated from the “Austrian economics” of Ludwig von Mises and F. A. Hayek to the phenomenological sociology of Alfred Schutz, from whence it entered the sociology of knowledge via Berger and Luckmann (1966). There is some irony, perhaps, in the fact that “social constructionism” would owe so much to followers of the “Austrian” school of economics. The first of the Austrians, Carl Menger, defended “abstract theory” in the methodenstreit of the 1880s. Later, Mises would say, “In the course of social events there prevails a regularity of phenomena to which man must adjust his actions if he wishes to succeed.” The sovereign (whether literal monarch or a democratic electorate) knows that it cannot suspend the law of gravity by fiat. But the sovereign rarely understands that the laws of economics are just as binding and just as independent of human will. “One must study the laws of human action and social cooperation,” Mises averred, “as the physicist studies the laws of nature” (1966, p. 2). There may be little or no incongruity between the “social constructionism” of Berger & Luckmann themselves and the “naturalism” of Mises. The term “social constructionism,” however, often refers to an anti-realist view of the world that puts any notion of truth under a cloud. One textbook treatment says, for example, “the notion of ‘truth’ becomes problematic. Within social constructionism there can be no such thing as an objective fact. All knowledge is derived from looking at the world from some perspective or other, and is in the service of some interests rather than others” (Burr 1995, p.4). Lest the reader mistake his comments for nothing more than healthy skepticism, he continues, “The search for truth . . . has been at the foundation of social science from the start. Social constructionism therefore heralds a radically different model of what it could mean to do social science” (p. 4). This antirealist position is traced back to Berger and Luckmann’s classic work. The author says “the major social constructionist contribution from sociology is usually taken to be Berger and Luckmann’s (1966) book The Social Construction of Reality” (p. 7). Unfortunately, Burr gives us no hint of the important fact that Berger and Luckmann explicitly invite the reader to “put quotation marks around” the terms “knowledge” and “reality” (p. 2). The term “social construction of reality” has come to identify the view that all social reality – and in some versions all reality – is socially
constructed and can, therefore, be made and unmade at will. In other words, there can be no scientific theory of society. Such a view, however, is in tension with the view that knowledge is dispersed widely across the population. To reformulate society at will, we would have to predict the consequences of our designed institutions. Those consequences depend on the interaction between dispersed bits of knowledge and future contingencies, which cannot be predicted without those very bits of knowledge, which, however, are unavailable to the planner if knowledge is dispersed. Berger and Luckmann are probably not best classified as constructivists in the modern sense, although the term has an elastic meaning (Berger 2016). Paul Lewis (2010, p. 207) says: “Berger and his associates such as Thomas Luckmann see themselves as integrating the key insights of [Max] Weber and [Emil] Durkheim, namely that society is produced by subjectively meaningful human activity and yet appears also to possess objective, things-like status.” If this attractive interpretation is correct, it puts further distance between Berger and Luckmann and the sort of constructivism Burr attributes to them. They do say at one point, however, “the social world was made by men – and, therefore, can be remade by them” (p. 89). This remark seems to neglect the economic idea of spontaneous order, which is often identified with Adam Smith’s “invisible hand.” I refer to the idea that there are systematic but unintended consequences of human action. Standard economic examples include money, the macroeconomic determination of the price level, and the microeconomic determination of relative prices. Importantly for this volume, both the division of labor and its correlated, corresponding, coevolved division of knowledge are spontaneous orders. They are emergent unintended consequences of human action. In my view, Berger and Luckmann do not give adequate weight to the economic idea of spontaneous order. This criticism is supported by at least one scholar who has made a direct comparison of Hayek (1967b) with Berger and Luckmann (1966). Against Berger and Luckmann, Martin (1968, p. 341) rails: “Surely society is in part ‘a non-human facticity’ and the attempt to eliminate this self-deluding and self-destructive.” On just this point, Martin (1968, p. 341) says, Hayek gives us a “critical codicil” with his “understanding of the non-human complexities and ‘arbitrary’ mechanisms derived from the intended and unintended consequences of each man’s choice, now as well as in the past.” Advances in ethology and psychology since 1966 have shown us just how much of human social life is derived from our prehuman ancestors and is, therefore, a product of biological evolution rather than free human invention (Cosmides, Tooby, and Barkow 1992).
Berger and Luckmann introduce experts with the problem of knowing which expert to trust. As we saw above, they say, “I require not only the advice of experts, but the prior advice of experts on experts. The social distribution of knowledge thus begins with the simple fact that I do not know everything known to my fellowmen, and vice versa, and culminates in exceedingly complex and esoteric systems of expertise” (p. 46). How do you judge expertise without having the expertise you are judging? Berger and Luckmann identify the strategy we generally use in the “natural attitude.” I assume my everyday reality conforms to that of other members of my society. “I know that there is an ongoing correspondence between my meanings and their meanings” (p. 23). I apply this strategy to experts as well. I assume, for example, that forensic science is “science” and that the forensic scientist shares my view of science as objective and reliable. We rely on the label since we don’t have the expert knowledge itself. Whether the default strategy of the natural attitude is effective in coping with expertise depends on the details of the social processes governing the sort of expertise in question. If there is a tight feedback loop between the experts’ claims or actions on the one hand and the public’s experiences on the other hand, then a kind of conformity is likely between the expectations of the public and the competencies of the experts. Turner (2001) provides the useful example of the massage therapist. “The massage therapist is paid for knowledge, or for its exercise, but payment depends on the judgements of the beneficiaries of that knowledge to the effect that the therapy worked. The testimony of the beneficiaries allows a claim of expertise to be established for a wider audience” (p. 131). This type of expertise, Turner explains, “shades off into the category of persons who are paid for the successful performance of services” (p. 131). Competition among experts may not be sufficient to keep the expectations of novices aligned with the competencies of experts if the novices cannot independently judge the results of expert advice or practice. This can be the case with experts in the area of used cars, as Akerlof (1970) pointed out. But even when autonomous learning by consumers is difficult, entrepreneurial market processes may create effective feedback loops anyway. Used car dealers in the United States, for example, now offer reliable warranties on their used cars, thus substantially reducing the risk of getting a lemon. Private rating agencies may be an effective surrogate for consumer judgment of outcomes. The Insurance Institute for Highway Safety, for example, probably does a good job of supplying naïve consumers with expert knowledge of automobile safety.
Berger and Luckmann focus most of their attention on universal experts. The fact that such experts claim special knowledge that applies to all things in the universe does not seem to be the driving factor in Berger and Luckmann’s analysis. The driving factor seems to be the relative independence of their knowledge from either empirical control or reasoned judgment by nonexperts. In the extreme, these two characteristics create a “[h]ermetic corpus of secret lore” (p. 87). In such cases, the division of knowledge is not easily overcome. In the market for universal experts, competition may not improve veracity. Berger and Luckmann note the “practical difficulties that may arise in certain societies” when, for example, “there are competing coteries of experts, or when specialization has become so complicated that the layman gets confused” (p. 78). Pluralism in systems of expert knowledge may create “socially segregated subuniverses of meaning” in which “role-specific knowledge becomes altogether esoteric” (p. 85). Universal experts are “the officially accredited definers of reality” in general (p. 97). But experts in modern society can acquire the power to define reality in a particular realm. “All legitimations, from the simplest pretheoretical legitimations of discrete institutionalized meanings to the cosmic establishments of symbolic universes may, in turn, be described as machines of universe maintenance” (p. 105). Thus, experts may be engaged in subuniverse maintenance. When either the universe or a subuniverse is at stake, experts are eager to claim a monopoly and reluctant to allow competition. Who is to be considered an expert is very important. “The outsiders have to be kept out” and the insiders “have to be kept in” (p. 87). Berger and Luckmann note: “there is the problem of keeping out the outsiders at the same time having them acknowledge the legitimacy of this procedure” (p. 87). In addition to pumping up your own expertise, you disparage the expertise of others. With nihilation, “The threat to the social definitions of reality is neutralized by assigning an inferior ontological status, and thereby a not-to-be-taken-seriously cognitive status, to all definitions existing outside the symbolic universe” (p. 115). As a part of this strategy, the disparaged group may be taken to know who the “real” experts are. They do not own the truth, we do. And “Deep down within themselves, they know that this is so” (p. 116). A body of experts may defend its monopoly with “intimidation, rational and irrational propaganda . . . mystification,” and “manipulation of prestige symbols” (p. 87). Physicians, Berger and Luckmann note, manipulate prestige symbols, mystify, and propagandize on the power and mystery of
modern medicine (p. 88). The “general population is intimidated by images of the physical doom that follows” from rejecting a doctor’s advice. “To underline its authority the medical profession shrouds itself in . . . symbols of power and mystery, from outlandish costume to incomprehensible language.” Medical insiders are kept in – that is, kept from “quackery” – “not only by the powerful external controls available to the profession, but by a whole body of professional knowledge that offers them ‘scientific proof’ of the folly and even wickedness of deviance” (p. 88). The theory of experts addresses the issues of epistemic authority and epistemic autonomy. To the extent that epistemic authority is problematic or epistemic autonomy desirable, there is a problem with experts. In this book I will try to show that there is a market for expert opinion whose structure determines the reliability of experts and the power of nonexperts. Most earlier treatments of experts and expertise have taken the reliability of experts and the power of nonexperts to be exogenous. There can be ambiguity, however, about what is endogenous and what is exogenous in a model. We might imagine, for example, that workers will achieve epistemic autonomy in the workers’ paradise and only in the workers’ paradise. It seems ambiguous whether the power of nonexperts is exogenous or endogenous in such a model. Nevertheless, it seems fair to say that expert reliability and nonexpert power are fully endogenous in only a relatively small fraction of discussions of experts and expertise. Having sketched out a crude map of the literature on experts, I should probably indicate where I can be found on that map. I view experts as posing a risk to nonexperts in part because experts may place their own interests above those of nonexperts. I will identify mechanisms, however, that protect nonexperts from the errors and abuses of experts. Thus, I may fit best in the unreliable-empowered category of my simple taxonomy. In the theory I develop in this book, I have tried to make expert reliability and nonexpert power fully endogenous. Experts have great potential to be unreliable, and the wrong institutions may leave nonexperts more or less powerless. But there is a perfect moral, epistemic, and cognitive parity between experts and nonexperts. Thus, the overall institutional structure in society or the institutions dominant in a particular area of human thought and action may produce reliable experts, an empowered laity, or both.
DEFINING “EXPERT”
Experts are usually defined by their expertise. An expert is anyone with expertise. This is the norm no matter what box one occupies in Table 2.1.
But this way of defining “expert” may have some tendency to suggest that experts are reliable. It underlines the nonexperts’ relative ignorance, which may have some tendency to suggest that they are powerless in the sense of Table 2.1. I define an “expert” as anyone paid to give an opinion. This definition leaves it an open question whether experts are reliable or unreliable and might be less likely to suggest that nonexperts are powerless. The literature on expert witnesses at law is an exception to the generalization that experts are usually defined by their expertise. Hand (1901) notes the distinction between experts and other witnesses. Unlike experts, other witnesses “shall testify only to facts and not to inferences” (p. 44). In other words, an expert witness expresses an opinion. Hand views nonexperts as unable to judge the relative merits of competing expert claims, and wishes to put things in the hands of an expert “tribunal.” He thus views experts as relatively “reliable” and nonexperts as “powerless.” Woodward (1902) quotes (without citation) St. Clair McKelway’s definition. “An expert must be regarded as any specialist giving evidence in the form of opinion, no matter what his real or reputed standing in his specialty or in the community” (Woodward 1902, p. 488). McKelway’s definition has the advantage of precision and neutrality. It tells us what an expert witness is, not what an expert witness should be. Unfortunately, Woodward proposed an amended version that added a normative element to the definition of expert. “An expert is a specialist, the value of whose evidence, given in the form of opinion, is proportioned to his character, to his reputation for honesty in the community, and to his standing in his specialty or profession. It is not sufficient that he be thought wise, he must also be accounted honest” (p. 488). Idealizing experts may encourage a moralistic approach to the problem of experts. Such moralizing cannot replace a positive theory of experts, in which we try to examine cause and effect objectively. In the theory of experts, as in economic theory, we need positive theory to ground our normative prescriptions. The definition of “expert” should leave it an open question whether experts so defined are “reliable” or “unreliable” in the sense of Table 2.1. It should leave it open whether they are wise or foolish, saints or knaves, respected or derided, heeded or ignored. Much of the discussion of experts, however, includes moralizing over the expert’s knowledge and character. This moralizing may have been more salient in the nineteenth century, but it has never disappeared. The infinitely repeated phrase “junk science,” which seems to have been coined by Huber (1993), suggests that errors of scientific testimony are at least partly attributable to some form of ethical
failure by the expert. Huber derides the “scientific cranks and iconoclasts who peddle their strange diagnostics and quack cures not at country fairs but in courtrooms across the land” (pp. ix–x). He thus views experts as unreliable and nonexperts as often powerless to distinguish “science” from “junk science.” Although moralizing is likely justified in all too many cases, the term “junk science” spuriously dismisses the notion that questions of applied science are often difficult and ambiguous. Its use often seems to reflect the view that if two experts give inconsistent opinions, at least one of them is personally corrupt. In any event, moralizing does not help us to produce a scientific theory of experts and expert failure. Peart and Levy (2005) say that the “key feature” of the concept of an “expert” is “someone who makes recommendations about how others might achieve human happiness” (p. 4, n. 1). This characterization links experts to expert advice. Unfortunately, it considers only one sort of advice – how to achieve human happiness – and it does not define experts by their advisory role. Later in the same work, the authors “provide a more restrictive, formal definition of an expert” (p. 4, n. 1). They “define an expert as someone who uses all the evidence in a transparent manner to estimate a model” (2005, p. 241, emphasis in original). This definition seems to elevate the expert’s expertise to something like the best available opinion. Peart and Levy recognize that “[r]eal-world experts sometimes fall remarkably short” of their “idealized expert” who “has no interests other than truth and so estimates a model with all the data in the most efficient manner” (p. 243). Indeed, their research on experts began in discussions on eugenics (2005, p. vii; Levy and Peart 2017, p. xv), and they do not stint in criticism of eugenics experts. But taking this idealized vision as definitive of experts may make it harder to identify the conditions tending to render experts more reliable and those tending toward less reliability. By the definition in Peart and Levy (2005), neither wizards nor forensic scientists are experts. The wizard mystifies and is thus not transparent. (See the discussion of alchemy in Mackay 1852.) The forensic scientist typically makes a subjective judgment of similarity (Nichols 2007; NAS 2009; Koppl 2010c), which, as such, cannot be transparent. There is the further point that transparency may be hard to define precisely. Levy and Peart (2017) do offer a formal definition of transparency. It applies, however, only in the narrow context of statistical estimation. And it seems all too consistent with their recognition that “full transparency . . . may be a pipedream” (2017, p. 183). They say, “The estimate is transparent if for an arbitrary reader t E(b)_s = E(b)_t ∀t and nontransparent when the equality fails” (p. 222). Even within the narrow context of its intended application, it
seems possible to doubt the cogency of this definition. The “arbitrary reader” must themself be an expert. We must assume, therefore, a world in which the expert’s expertise is common knowledge and all cognitive and epistemic differences between experts and nonexperts are effaced. In the economic literature on “credence goods,” mechanics, doctors, and others bundle diagnosis and treatment (Darby and Karni 1973; Wolinsky 1993, 1995; Emons 1997, 2001; Dulleck and Kerschbamer 2006; Sanford 2010). In this literature, “experts” are able to perform a diagnosis that the customer cannot. Here too, experts are defined by their expertise. Sanford (2010) says, “[E]xperts are able to, with effort, understand and interpret data relevant to their customers’ situations in a way customers themselves cannot. That is, true expertise is acquired, on a customer-by-customer basis, at a cost to its purveyors” (p. 199). Experts in this literature may provide incorrect diagnoses because they cheat or because they shirk. Sanford’s model distinguishes “true experts” from “quacks,” who “lack the ability to effectively carry out their duties” (2010, p. 199). The question then becomes the proportion of each type in the market. Alfred Schutz (1946) defines experts by their expertise. He says, “The expert’s . . . opinions are based upon warranted assertions; his judgments are not mere guesswork or loose suppositions” (1946, p. 465). For Schutz, the expert’s knowledge is valid by definition. Schutz comes close to recognizing a problem with experts. But his definition of “expert” prevents him from recognizing adequately the full scope of the problem. Wierzchosławski (2016, p. 248) describes Schutz (1946) as “a classical phenomenological approach” to “expert studies.” Schutz says that the outside world imposes “situations and events” on us that we did not choose. The extensive cooperation with strangers characteristic of the modern world has made these “imposed relevancies” ubiquitous and potentially oppressive. “We are, so to speak, potentially subject to everybody’s remote control” (p. 473). This “remote control,” together with the social distribution of knowledge, obliges us to rely on others for the knowledge that guides our choices. At this point, Schutz is very close to Berger and Luckmann’s (1966) statement of the problem of experts quoted above. In the end, however, Schutz seems to fear the man in the street and to place his hopes on experts and, as I discuss below, well-informed citizens. He thus places himself squarely in the top row of Table 2.1, viewing experts as fundamentally reliable. Schutz does not quite recognize a fully general problem of experts because he views expert knowledge as unproblematic. Following their common teacher Alfred Schutz (1946), Berger and Luckmann (1966) define experts by their expertise. But they self-consciously
set aside the question of the validity of the expert’s knowledge (p. 2). By explicitly setting aside any question of the reliability or validity of the expert’s knowledge, they were empowered to take a fairly skeptical view of experts. Their skeptical attitude, however, did not correspond to any theory indicating when experts are likely to be more reliable and when less. Bloor (1976) went further than Berger and Luckmann by postulating the methodological norm of “symmetry,” whereby the sociologist of science does not appeal to “truth” to help explain why one scientific theory prevails over another. Berger and Luckmann (1966) offer a phenomenological analysis of expertise, but not a causal theory of the ideas of scientists or other experts. Bloor’s symmetry principle has lost ground in science and technology studies. Lynch and Cole (2005, p. 298) say that “Proposals for critical, normative, activist, reconstructivist and other modes of ‘engaged’ STS research are abundant in the literature.” They call such engagement the “normative turn” in science and technology studies (2005, p. 270). Such “normativity” has become “central” to their discipline (Lynch and Cole 2005, p. 298). Turner (2010, p. 241) says, “Major figures such as Harry Collins and Bruno Latour have modified their views in order to avoid being seen as skeptics, particularly about global warming.” Forensic science provides the immediate context for the discussion in Lynch and Cole (2005). Turner attributes the normative turn in STS at least in part to an extrinsic motive: I don’t want to look like a global warming skeptic. The desire to signal goodness on global warming and the desire to intervene in political outcomes are both extrinsic motives for the normative turn in science and technology studies. I do not know how important such extrinsic motives were in taking the normative turn. Presumably, intrinsic motives had at least some role to play as well. After all, some of the practices in, for example, forensic science might well seem shockingly “unscientific” to even the most ardent Bloorian. The relative importance of extrinsic and intrinsic motives in the normative turn of science and technology studies is a matter of guesswork and judgment, not scientific measurement. In any event, good scholarly practice usually requires us to consider the arguments and evidence of other scholars without imputing inappropriate motives to them. Like Schutz (1946), Collins and Evans (2002) focus on “expertise,” which they view as “not truth-like” because expertise is “essentially imprecise” and “does not provide certainty” (Collins and Evans 2003, pp. 436, 448). They say, “Since our answer turns on expertise instead of truth, we will have to treat expertise in the same way as truth was once treated – as something more than the judgement of history, or the outcome of the play
of competing attributions. We will have to treat expertise as ‘real’, and develop a ‘normative theory of expertise’” (2002, pp. 236–7). They regard the “area of prescription” they wish to enter as “upstream” from “downstream” descriptive analysis, wherein Bloor’s “symmetry” postulate is “central” (2002, p. 286, n. 28). The “prescriptions” they favor “depend on a preference for a form of life that gives special value to scientific reasoning” even though they explicitly “do not provide an argument for why [they] value it specially.” If we define experts by expertise, we must either, like Berger and Luckmann (1966), set aside questions of expert reliability or, like Schutz (1946) or Collins and Evans (2002), mostly defer to the experts. Whichever choice we make, it becomes difficult to ask when experts might be more reliable and when less reliable. If, instead, we define “experts” by their place in the social order, we can ask about their economic incentives and discover, perhaps, when they are more likely to provide reliable advice and when they are less likely to do so. In other words, we need a theory of experts and not a theory of expertise. As I noted previously, I define “expert” as anyone paid to give an opinion. This economic definition skirts the necessity of somehow deciding ex ante whether the expert’s knowledge is reliable, unreliable, real, spurious, or something else. It allows us to ask which market structures tend to generate more reliable expert opinions and which market structures tend to generate less reliable expert opinions. And it allows us to ask which market structures tend to generate more empowered nonexperts and which market structures tend to generate less empowered nonexperts.
3
Two Historical Episodes in the Problem of Experts
INTRODUCTION
A complete survey of the problem of experts would cover vast ground, including the entire detailed history of jurisprudence, philosophy, and religion, as well as innumerable discussions of the proper authority of physicians, the role of psychiatry in the state, the design of economic policy, and so on. Two episodes in the history of ideas may be especially worth discussing, however. The first episode I will discuss concerns the role of experts in the Socratic tradition of Socrates, Plato, and Aristotle. This tradition, of course, had a great role in shaping subsequent thought. I will try to place the tradition in the competitive context from which it emerged. Socrates was an expert who challenged the prior authority of religious orthodoxy. He and his group were the new experts challenging the old experts. The story of the Socratic tradition in ancient Athens is not merely of historical interest. The attitudes formed at that time are still with us. Defenders of experts in specific areas such as “science” have often had the sort of perspective I will attribute to the Socratic tradition. The second episode I will discuss is the emergence of a literature on expert witnesses in law. Mostly, I will cover some relevant nineteenth-century discussions in the United States and the United Kingdom. This literature illustrates the connection between the Socratic tradition and the attitudes and presumptions of many modern experts. Many would-be nineteenth-century defenders of “science” in the witness box struck moralistic and high-handed tones that seem to reflect the philosophy of the Socratic tradition or, at least, an attitude similar to that of the Socratic philosophers, who themselves had a similar interest in claiming, creating, and preserving a privileged epistemic status. As I will note, the attitudes
expressed in this nineteenth-century literature can still be found today, especially in the context of expert witnesses in the British and American legal systems. In both the Socratic tradition and the literature on expert witnesses we see arguments that tend to elevate the all-seeing expert both morally and epistemically, thereby justifying the call for others, the nonexperts, to obey.
THE SOCRATIC TRADITION
The death of Socrates should be seen in the light of political struggles in which the authority of religious experts was challenged by philosopher-experts and the religionists used political means to defend their prerogatives. As I will attempt to show, Socrates put himself forward as an expert and exalted the epistemic authority of philosophy. His teaching was a fundamental threat to the preexisting politico-religious power structure. Nilsson (1940, pp. 121–39) discusses the conflict between Greek “seers and oracle mongers” and the philosophers and sophists of the fifth and fourth centuries BCE. The seers were the experts challenged by the philosophers and sophists. Nilsson explains: The real clash took place between that part of religion which interfered most in practical life and with which everyone came into contact every day, namely, the art of foretelling the future, and the attempts of natural philosophy to give physical explanations of celestial and atmospheric phenomena, or portents, and other events. Such explanations undermined the belief in the art of the seers and made it superfluous. For if these phenomena were to be explained in a natural way, the art of the seers came to naught.
Given the tight connection between religion and politics that Nilsson notes, questioning divination becomes a political threat. Rubel likewise emphasizes the link between religion and politics in Athens. “Athenian society was characterized by religious traditions based on the fundamental belief that (a) gods exist, (b) they could influence our world and (c) it is advisable to keep in with them if one wants to avoid the consequences of neglecting the divine” (2014, p. viii). He describes Athens in the wake of the “calamity” of defeat by Sparta in the Peloponnesian War (431–404 BCE). “Athenians, stricken by a terrible plague, the horrors of war and the loss of an empire” were in politico-religious crisis and “feared that ‘the sky would fall on their heads’” (2014, p. vii). Athens of the fifth and fourth centuries BCE experienced a political crisis that included “numerous” trials of philosophers charged with impiety (Rubel 2014,
p. 2), such as the famous trial of Socrates. Rubel (2014, p. 2) reports, “Such religious trials were unique to this period. It is thus legitimate to talk about the ‘heated’ atmosphere at Athens at the end of the fifth century, expressed through trials judged by the Assembly and by the courts, where the accused were charged with religious offenses.” Socrates’ appeal to the Delphic oracle in Apology seems reasonable in light of the deep political conflict of that time between Athenian religion and Socratic philosophy. (See Rubel 2014, pp. 5–13 on the difficulties of distinguishing religion and politics in the Athens of that time.) The oracle at Delphi was held in high esteem and was more deeply honored than the local “seers and oracle mongers” in Athens. It was a higher authority. If Socrates seemed to have the approval of this higher authority, then it might be harder to accuse him of impiety or to appeal to religious authority to disparage his philosophy. These considerations seem to support the hypothesis that Socrates or one of his supporters may have paid the oracle at Delphi to tell Chaerephon that there was no man wiser than Socrates. This hypothesis seems obvious enough, and yet I have been unable to find mention of it in the scholarly literature. Nock (1942) seems to hint at the possibility of a bribe when he says, “Socrates had every reason to know the influence of mundane considerations upon the Delphic oracle and the diverse factors that entered into divination” (p. 476). I could find no stronger suggestion in the literature than Nock’s mild hint. Smith (1989) considers the hypothesis that Socrates may have engaged in the “political employment of oracles” (p. 146), and rejects it on the grounds that none of Aristophanes’ plays makes the accusation (pp. 146–7). But this hypothesis concerns Athenian oracle mongers, not the Delphic oracle, which Aristophanes treated “with a great deal more respect” (p. 151). Fontenrose (1978) considers it “incredible” but not impossible that the oracle story could be “a pious fiction of the Socratic circle” (p. 34). But this “incredible” possibility would still leave Socrates himself innocent of trickery or earthly motives. Daniel and Polansky (1979, p. 83) say that “suspicions” about Socrates’ tale of the oracle fall into two groups. Some think it “a Platonic invention,” while others doubt that the oracle’s answer to Chaerephon was transformative for Socrates. Neither group of skeptics, it would seem, imagines the possibility of a bribe. Accepted facts about Delphi, however, suggest that the possibility should be considered. Herodotus records at least two instances of bribes given to the oracle (V 63, VI 66; Fairbanks 1906, pp. 40–1). Reeve (1990) gives some details of the operation of the oracle and notes that there were “two methods of
consulting the oracle” (p. 29). One was expensive, “involving the sacrifice of sheep and goats,” and the other cheap. The existence of an expensive method strongly suggests that Delphic pronouncements were up for sale. Other evidence supports the same conjecture. Broad notes the “monumental wealth” of Delphi and says: “It was the custom for thankful supplicants to send back riches. These and other gifts and tithes accumulated over the centuries to the point that Delphi became one of the wealthiest places on Earth” (2006, p. 16). It seems hard to distinguish such “gifts” from bribes. Arnush notes that both public and private “consultants” had to pay “taxes in the form of a sacrifice and a special type of cake (the pelanos) in order to gain access to the oracle” (2005, p. 100). Lloyd-Jones (1976, p. 68) grudgingly admits, “Anti-clerical critics can easily accuse the Delphians of cynical pursuit of their own private interest.” Broad (2006, p. 16) says, “The odor of corruption wafted about the Oracle when at times she seemed ready to please whoever held power.” Chaerephon was “notoriously poor” and probably used the cheap method of consulting the oracle (Reeve 1990, p. 29). But if someone sent him to Delphi to get the desired answer, he might well have brought money, gifts, or livestock he could not have provided out of his own apparently meager resources. Indeed, what better agent to deliver the bribe than one “notoriously poor”? And the answer he received was less ambiguous than most Delphic pronouncements: No man is wiser than Socrates. We have, then, a corrupt oracle offering an unusually clear answer that conveniently serves a political purpose. It seems only reasonable to ask whether the oracle’s convenient answer might not have been bought and paid for.

The noble lie of Republic (III, 414c–415c) was the myth that God had mixed gold with the rulers, silver with the guardians, and brass and iron with “husbandmen and craftsmen.” Plato’s use of the noble lie might seem to suggest that he did not fully absorb Socrates’ lesson, expressed in Apology, of epistemic humility. But it is hard to escape the view that in the Apology and elsewhere Socrates was an expert insisting that Athens place its trust in him. (David M. Levy and Ronald Polansky have both emphasized to me the importance of this point in understanding the Socratic project.) In Xenophon’s version of the story, for example, Meletus exclaims to him, “I know those whom you persuaded to obey yourself rather than the fathers who begat them.” “I admit it,” Socrates replied, “in the case of education, for they know that I have made the matter a study; and with regard to health a man prefers to obey his doctor rather than his parents; in the public assembly the citizens of Athens, I presume, obey those whose arguments exhibit the soundest wisdom rather than their own relations. And is it not the case that, in your choice of generals, you set
your fathers and brothers, and, bless me! your own selves aside, by comparison with those whom you believe to be the wisest authorities on military matters?” (Xenophon 2007, pp. 6–7)
In this passage, Socrates seems to call for the rule of experts. In any given area such as education or medicine, we should obey those who “have made the matter a study” and become “the wisest authorities” in that area. Socrates’ expertise is in education, he tells us. Trust me to educate your youth, Athens, for I am the wisest educator among you. One is reminded of the quote attributed to the economic expert Paul Samuelson: “I don’t care who writes a nation’s laws – or crafts its advanced treatises – if I can write its economics textbooks” (Weinstein 2009). In Crito, Socrates says we should “follow the opinion of the one man who has understanding, and whom we ought to fear and reverence more than all the rest of the world.” Perhaps we should view such statements as no more than expressions of esteem for wisdom. But it seems possible to see in them a call for nonexperts to obey experts.

I have described Socrates as an expert. Socrates’ poverty may seem to contradict that interpretation. In my theory, an expert is paid for their opinion. But an ancient tradition holds that Socrates was poor, which seems to suggest that he did not accept payment for his opinions. In Apology he says that his work to vindicate the oracle “quite absorbs me, and I have no time to give either to any public matter of interest or to any concern of my own, but I am in utter poverty by reason of my devotion to the god” (23b, Jowett translation). Socrates’ possible poverty is the topic of conflicting interpretations in the literature. We cannot exclude the possibility that he accepted cash or in-kind payments for his lessons. The dialogues are often set in the homes of Athenian hosts whose hospitality seems generally to have included a significant meal. If such meals were part of a gift exchange system (Mauss 1925), then they fall outside the strict limits of information choice theory. If, however, there was a direct quid pro quo in which Socrates was expected to teach in exchange for his meal, then he was acting as an expert in the strict sense of information choice theory. Aristophanes represents Socrates and Chaerephon as teachers in the Thinkery, which seems consistent with Socrates’ reply to Meletus that he (Socrates) should be obeyed as a teacher. It seems fair to ask whether Aristophanes’ lampooning of the Thinkery would have been funny if Socrates had scrupulously eschewed all payment from the youths he taught. However we resolve the issues surrounding Socrates’ poverty, the resolution probably implies no substantial qualification to my interpretation of the Socratic tradition. Even if Socrates was not himself an expert in the strict sense of information choice theory, he called for the rule of experts.
Plato famously calls for a philosopher-king in Republic (V, 473c). “Until philosophers are kings, or the kings and princes of this world have the spirit and power of philosophy . . . cities will never have rest from their evils.” And, as I noted above, he authorizes the few to tell a noble lie to the many. He proposes (Republic, III, 414c, Jowett translation) “just one royal lie which may deceive the rulers, if that be possible, and at any rate the rest of the city.” He thus places all epistemic authority in philosophy. Aristotle preserved the epistemic authority of philosophers, but left it to others to enforce that authority. The fourth-century CE Roman politician and philosopher Themistius says: Plato, even if in all other respects he was divine and admirable, was completely reckless when he uttered this saying, that evils would never cease for men until either philosophers became rulers, or kings became philosophers. His saying has been refuted and has paid its account to time. We should do honour to Aristotle, who slightly altered Plato’s words and made his counsel truer; he said that it was not merely unnecessary for a king to be a philosopher, but even a disadvantage; what he should do was to listen to and take the advice of true philosophers, since then he filled his reign with good deeds, not with good words.
(See Ross 1952, p. 66.) In the Politics (1288a), Aristotle says, And since we pronounce the right constitutions to be three, and of these the one governed by the best men must necessarily be the best, and such is the one in which it has come about either that some one man or a whole family or a group of men is superior in virtue to all the citizens together, the latter being able to be governed and the former to govern on the principles of the most desirable life, and since in the first part of the discourse it was proved that the virtue of a man and that of a citizen in the best state must of necessity be the same, it is evident that a man becomes good in the same way and by the same means as one might establish an aristocratically or monarchically governed state.
It seems fair to say that Aristotle favors the rule of the virtuous who will, he imagines, impose “the most desirable life” as described by the philosopher-expert. Epistemic authority is still given to the philosopher and the people are still to obey. But they are to obey the political authorities, who, in turn, should take the advice of philosophers. Kelsen (1937) argues that Aristotle’s defense of monarchy was at the same time a defense of Philip and Alexander of Macedonia. He quotes Aristotle: If, however, there be some one person, or more than one, although not enough to make up the full complement of a state, whose virtue is so pre-eminent that the virtues or the political capacity of all the rest admit of no comparison with his or theirs, he or they can no longer be regarded as part of a state; for justice will not be
done to the superior, if he is reckoned only as the equal of those who are so far inferior to him in virtue and in political capacity. Such an one may truly be deemed a God among men. (Politics 1284a as quoted in Kelsen 1937, p. 31)
He then asks: “To whom else, if not to Philip or to Alexander, should these words refer?” Filonik (2013, p. 60) says “a significant amount of the late-fourth century events was centred around the tensions in Athenian politics concerning Macedonia and domestic politics referring to it.” He reports on many trials of philosophers in Athens at about the time of Alexander’s death. These trials generally targeted pro-Macedonian philosophers. He takes a skeptical view of the reality of many of these “alleged trials” (p. 70). Nevertheless, “At least some of the trials had to be genuine” (p. 70). This was “a period of grave political conflict between the advocates of the pro-Macedonian option in foreign politics and elitist constitutional changes on the one hand, and those of anti-Macedonian and pro-democratic followers on the other” (p. 70). Filonik credits as “probable” the traditional story that Aristotle fled Athens to avoid such a trial (2013, pp. 72–3). Chroust says, “We know that the Platonic Academy was not merely concerned with philosophic or scientific studies, including political theory, or with the cult of the Muses, but also with effective political action” (1973, vol. ii, p. 123). The Academy had “a great many political contacts and diplomatic dealings with a number of cities, peoples, dynasties and prominent personages . . . throughout the Hellenic world” (1973, vol. ii, p. 123). Plutarch (Morals v 32/Goodwin 1878, pp. 381–2), who may be a more reliable source, also emphasized the political role of Plato’s Academy: Plato left in his writings excellent discourses concerning the laws, government, and policy of a commonweal; and yet he imprinted much better in the hearts and minds of his disciples and familiars, which caused Sicily to be delivered by Dion, and Thrace to be set at liberty by Pytho and Heraclides, who slew Cotys. Chabrias also and Phocion, those two great generals of the Athenians, came out of the Academy. As for Epicurus, he indeed sent certain persons into Asia to chide Timocrates, and had him removed out of the king’s palace, because he had offended his brother Metrodorus; and this is written in their own books. But Plato sent of his disciples and friends, Aristonymus to the Arcadians, to set in order their commonweal, Phormio to the Eleans, and Menedemus to the Pyrrhaeans. Eudoxus gave laws to the Cnidians, and Aristotle to the Stagirites, who were both of them the intimates of Plato. And Alexander the Great demanded of Xenocrates rules and precepts for reigning well. And he who was sent to the same Alexander by the Grecians dwelling in Asia, and who most of all inflamed and stimulated him to embrace and undertake the war against the barbarian king of Persia, was Delius the Ephesian, one of Plato’s familiars.
Chroust (1967) takes a darker view of the Academy than Plutarch. Dion, for example, “played at times a rather sordid role in the political events that transpired during the fourth century” in Sicily (p. 29). “Some of Plato’s disciples seem to have practiced politics with a vengeance,” he snorts. Chaeron of Pallene, for example, “made himself the detested tyrant of his native city about the year 331 B.C., thanks to the support of Alexander the Great and his general Corragos” (p. 37). In Chroust’s view, the Academy started out “theoretical, universalist, and frequently Utopian,” but grew progressively more “realistic” in the wake of Alexander’s conquests and their aftermath. Two trends emerged: one element which radically rejected any form of tyranny or absolute rule, and another which advocated, or aspired to, an enlightened form of tyranny. Representatives of the latter trend often turned themselves into petty rulers or tyrants, engrossed in their pitiful efforts to administer and dispense power for its own sake. Frequently these people had to rely upon the goodwill and military support of Macedonian armies or upon the military might of the Diadochi [i.e. the generals competing for power after Alexander’s death]. The former, on the other hand, wherever possible, engaged in or planned tyrannicide, a practice which persisted throughout the third, second, and first centuries B.C. (p. 40)
Isnardi (1959) also views the Academy as a political entity. It was both a “cultural community” and a “group organized with a view to the possibility of transforming society” (p. 273). Even the otherworldly and Utopian tendencies of the Academy’s early years can be viewed in this light. The Academy was the “germ cell of a new organism” (p. 274). Necessity played an important role in producing this dualism of the Academy, whereby it sought, initially, this-worldly ends with otherworldly means. “Plato founded the Academy when he found himself in the practical condition of having to renounce all political activity; the Academy was even under state orders not to play any active role in policy whatsoever, apart from the activities of some of his adherents” (p. 274). But, again, the ends were always political. Isnardi rejects the unsubtle idea that the Academy as a whole was “politically committed” (p. 282) to Macedonia. “One may nevertheless speak of a kind of growing sympathy for monarchical power constituted in its traditional form and personified in Philip” (p. 282). We cannot impute any very precise political program to the Academy. Rather, it “pushed” a “certain number of youths educated in the political rationalism of the Platonic school in an attempt to seize political power” (p. 283).
Eugenic passages in Plato and Aristotle support the idea that Greek philosophers in the Socratic tradition held themselves out as experts to be obeyed by nonexperts. In the Politics (7.1335b), for example, Aristotle calls for “a law that no deformed child shall live.” David Galton (1998) notes Greek “contributions” to eugenics. “Plato’s works,” he says, “reveal a profound interest in eugenics as a means of supplying the city state with the finest possible progeny” (p. 263). Ojakangas (2011) notes that in “Sparta the quality of the population was strictly supervised by the state, eugenics being the major means of doing this. Plato and Aristotle were also of the opinion that a healthy state must practise eugenics, including not only infanticide but many other means of controlling and improving the ‘quality’ of the population” (p. 2). David Galton (1998) says, “It is of interest that neither [Francis] Galton nor many contemporary commentators appear to give credit to or even mention the Greek contribution to the subject of eugenics” (p. 266). And he repudiates Francis Galton’s reference to colonization in Plato with the claim that the “idea of colonization as a eugenic measure . . . does not appear to occur anywhere in Plato’s works” (p. 263). But Pearson (1911, pp. 23–4) quotes the following passage from Jowett’s translation of Plato’s Laws (V, 735b–736a). The shepherd or herdsman, or breeder of horses or the like, when he has received his animals will not begin to train them until he has first purified them in a manner which befits a community of animals; he will divide the healthy and unhealthy, and the good breed and the bad breed, and will send away the unhealthy and badly bred to other herds, and tend the rest, reflecting that his labours will be vain and have no effect, either on the souls or bodies of those whom nature and ill nurture have corrupted, and that they will involve in destruction the pure and healthy nature and being of every other animal, if he should neglect to purify them. Now the case of other animals is not so important – they are only worth introducing for the sake of illustration; but what relates to man is of the highest importance; and the legislator should make enquiries, and indicate what is proper for each one in the way of purification and of any other procedure. Take, for example, the purification of a city – there are many kinds of purification, some easier and others more difficult; and some of them, and the best and most difficult of them, the legislator, if he be also a despot, may be able to effect; but the legislator, who, not being a despot, sets up a new government and laws, even if he attempt the mildest of purgations, may think himself happy if he can complete his work. The best kind of purification is painful, like similar cures in medicine, involving righteous punishment and inflicting death or exile in the last resort. For in this way we commonly dispose of great sinners who are incurable, and are the greatest injury of the whole state. But the milder form of purification is as follows: – when men who have nothing, and are in want of food, show a disposition to follow their leaders in
an attack on the property of the rich – these, who are the natural plague of the state, are sent away by the legislator in a friendly spirit as far as he is able; and this dismissal of them is euphemistically termed a colony. And every legislator should contrive to do this at once.
It seems fair to infer that Socrates, Plato, and Aristotle viewed the philosopher-expert as reliable and often wished for the state to impose their advice on the people. It is old hat to suggest that Plato’s Republic is authoritarian. It may be less common to see something similar in Aristotle. In emphasizing Aristotle’s seeming toadyism to monarchal power, especially that of his patrons Philip and Alexander, I am following Kelsen (1937) at least in broad outline. Scorn has been heaped on Kelsen (1937) as well as Chroust (1973), with whom Kelsen has been somewhat inappropriately lumped. Miller (1998, p. 503) says, “Not surprisingly, both Kelsen and Chroust got a cool reception from the scholarly community.” Only too happy to endorse this dismissive attitude, Miller (1998) derides as “problematical” (p. 503) all “extravagant interpretations, such as those provided by Chroust and Kelsen” (p. 504), which have been “justifiably criticized” (p. 515). Miller criticizes the “extravagant” views of Kelsen (1937). And yet, Miller’s own interpretation is far more “extravagant” than Kelsen’s. “It is probably safe to say, rather, that Aristotle had some capacity as an unofficial ambassador for the Macedonian monarchy, and that he represented the equivalent of a political and cultural interests section in Athens for Alexander and Antipater. In this capacity Aristotle would have had to appear balanced and receptive to his Athenian hosts, while simultaneously subtly advancing the Macedonian position” (p. 515). This interpretation makes Aristotle a willful and self-conscious hypocrite whose philosophical writing, moreover, never “resolved the tension between his recommendation of a constitutional democracy and kingship” (p. 516). In Kelsen’s interpretation, Aristotle is less conniving and his philosophy more internally consistent than Miller (1998) allows. If either thinker (Kelsen or Miller) is even approximately correct, however, we may safely put Aristotle in the “reliable-powerless” category of Chapter 2’s taxonomy along with earlier thinkers in the Socratic tradition. Miller’s dismissal of Kelsen is mistaken on its own grounds and conveys the impression of a purely gratuitous remark meant to distance himself from a derided author. It is simply not true that Kelsen, as opposed to Chroust, gives us “a portrait of Aristotle that seems to oscillate between Mata Hari and James Bond” (Miller 1998, p. 503). I suppose Miller can sniff at Chroust, however, since Chroust takes a harder line than Miller,
whose views on Aristotle’s political activities are moderately less extreme. Chroust describes Aristotle as “a kind of political agent working for Macedonia.” On the same page he says Aristotle’s letters to Antipater “might have contained ‘intelligence reports’ on the political situation and event in Athens and, probably, the rest of Greece” (1973, vol. I, p. 171). I do not know whether Chroust or Miller is closer to the truth or whether, perhaps, both exaggerate Aristotle’s political role. In any event, Chroust’s scandalous assertions seem to be entirely about Aristotle’s life rather than his philosophy and are thus not very closely related to Kelsen (1937). Miller conflates the doctrinal analysis of Kelsen with Chroust’s interpretation of Aristotle’s political activity. Certainly, these two things are intertwined. But Kelsen is very much focused on the doctrine, whereas the putatively objectionable bits of Chroust are focused on biography. And, indeed, Chroust does not cite Kelsen (1937). The only reference to Kelsen is the passing comment that Kelsen’s pure theory of law is a “science of pure norms” and in this regard similar to late Platonic philosophy (1973, vol. II, p. 138).

My perspective on the Socratic tradition seems to get additional support from Merlan’s (1954) discussion of Isocrates’ letter to Alexander. Isocrates was pro-Macedonian, but his “relations to [Plato’s] Academy were inimical” (Merlan 1954, p. 61). Upon learning of Aristotle’s appointment as Alexander’s tutor, Isocrates wrote a highly diplomatic letter to young Alexander in hopes of advancing his cause to the disadvantage of his hated rival Aristotle. In Merlan’s undiplomatic paraphrase, the letter contains this passage: Compare this program of education with what the sophists from the Academy have to offer. They will teach you to quibble and split hairs concerning problems of no practical value whatsoever. They will never enable you to cope with the actualities of daily life and politics. They will teach you to disdain opinion (common sense) in spite of the fact that common sense assumptions are the only basis for ordinary human affairs and they are sufficient to judge the course of future events. Instead of common sense opinions, they will make you chase after a phantom which they call true and precise knowledge, as distinct from mere opinion. Even if they could reach their ideal of precise and exact knowledge – it would be a knowledge of things entirely useless. Do not be deceived by their extravagant notions of goodness and justice or their opposites. These are just ordinary human notions not so very difficult to understand, and you need them only to help you to mete out rewards and punishments. (Merlan 1954, p. 64)
Isnardi (1959) also notes the “competition of the Platonic school with that of Isocrates” for “influence in the Macedonian court” (p. 286). Chroust
says that Isocrates “propagated a practical not to say pragmatic attitude toward politics and political thought and at the same time ridiculed the purely theoretical-ethical approach to political philosophy advocated by Plato and the Early Academy” (1973, vol. ii, p. 135). According to Merlan (1954, p. 65), Isocrates elsewhere says: “The layman cannot fail to notice that the sophists pretend to have knowledge (eidenai) and yet are failures when it comes to discussing or advising on things at hand. It becomes obvious that those who rely on common sense (opinion; doxa) do better than those who profess to possess knowledge (epistêmê).” Isocrates opposed the very thing that made Socrates, Plato, and Aristotle the sort of experts who would impose their expert opinions on the people, namely, esoteric knowledge. We have true philosophical knowledge, whereas others have only opinions and common sense. Only we philosophers are qualified experts. It seems only reasonable that Philip and Alexander would choose philosophical knowledge (epistêmê) over common sense and opinion (doxa). The philosophical “knowledge” of the Socratic tradition is not subject to dispute. It is uniform, unambiguous, and certain. Such philosophical certainty provides a better foundation for monarchal fiat than mere opinion, a heterogeneous and changeable will-o’-the-wisp. Philosophy in the Socratic tradition was more dogmatic than Athenian religion at the time. Rubel points out that we should not therefore romanticize Athenian religion: “Nevertheless, despite all the apparent openness of polytheism, there was an ‘unwritten’ duty to acknowledge and attend the cults of the city. This is why the very acknowledgement of fundamental beliefs and the public participation of all the citizens at solemn sacrifices and cult-related activities guaranteed the favour of gods. Without this, the community believed there was no hope of stability” (2014, p. 9). Notwithstanding Rubel’s unromantic portrait of Athenian religion, he says, “The polytheistic religion of the polis was characterized by a freedom from dogma, as there were no mandatory beliefs, no authoritarian clergy with special knowledge, and no ‘church’. There was thus no risk of heresy” (Rubel 2014, pp. 8–9). This freedom from dogma contrasts with the more rigorous epistêmê of philosophy in the Socratic tradition. Isnardi (1959) goes further than Rubel by suggesting that Plato created a full-blown religion to rival the traditional state religion. Citing Müller (1951), she says that “Hellenistic theology” begins with Plato’s Laws, which “clearly delineates a purely cosmic religion” in which the cosmos is “the eternal field of struggle between good and evil” (p. 287). Isnardi considers Müller’s view, which she endorses, as a friendly amendment to the position
of Jaeger (1948, p. 138), who described Aristotle as “the real founder of the cosmic religion of the Hellenistic philosophers.” The Socratic philosophical tradition elevated the philosophical expert above the laity and demanded obedience from them. It tended to exalt the supposed intellectual and moral superiority of its representatives. Jaeger (1948, p. 436) says: Plato had attached moral insight, the phronesis of Socrates, to the contemplation of the Idea of the Good. They were conflated to such a degree that the concept of phronesis, which in ordinary usage was purely ethical and practical, came in Plato always to include the theoretical knowledge of the idea, became, in fact, finally synonymous with expressions that had long meant nothing but pure knowing and contained no relation to the practical, such as sophia, nus, episteme, theory and the like.
Jaeger (1948, pp. 436–40) describes “the progressive loosening of the tie connecting the theoretical life with the kernel of Aristotle’s ethics.” Presumably, this “progressive loosening” was necessary for Aristotle to uphold the supposedly superior virtue of Alexander, who was not a monkish figure pursuing the theoretical life, but a man of action. It seems fair to say, however, that the Socratic tradition included a strong celebration of the superior virtue of experts, who possessed knowledge. The Socratic tradition sought to replace mere opinion (doxa) with knowledge (epistêmê). It is even possible to interpret Plato’s Laws as establishing a new philosophical religion to challenge the religion of inherited tradition. And at the center of it all, as Isnardi says, was “an attempt to seize political power.” More recent rationalists often exhibit a similar tendency to exalt the superiority of experts and to seek political power for them. The literature on expert witnesses, discussed below, provides a vivid example. John Maynard Keynes, too, exalted the supposed intellectual and moral superiority of experts. This exaltation received vivid expression in his 1922 essay “Reconstruction in Europe: An Introduction” (Keynes 1922, pp. 432–3). With Platonic grandiosity he exclaimed, “No! The economist is not king; quite true. But he ought to be! He is a better and wiser governor than the general or the diplomatist or the oratorical lawyer. In the modern overpopulated world, which can only live at all by nice adjustments, he is not only useful but necessary.” He then warned that “squalor follows” from rejecting the economist’s advice. This threat of “squalor” fits perfectly the pattern identified by Berger and Luckmann (1966, p. 88), whereby the expert intimidates the populace with a picture of “doom” that follows from rejecting the expert’s advice. I cannot hope to review the history of philosophy and place each thinker in the taxonomy of Chapter 2’s Table 2.1. My discussion of the Socratic
tradition may suggest, however, that more or less any philosopher may be examined from such an angle. Our discussion of Aristotle illustrates the complications of placing different figures. Secular or religious power may act as an intermediary between the philosopher-expert and the laity. Philosophers may give epistemic authority to the Church or to science, while admonishing this epistemic authority to take correction from philosophy. The subjectivism of figures such as Descartes and Husserl might seem to empower nonexperts. But their meditations support only one interpretation of the world. If your meditation takes you to a different interpretation of the world, you have erred. Thus, such figures may be closer to the Socratic tradition than their subjectivism might at first suggest. Wittgenstein’s notion of “language game” embeds knowledge in practice, which moves us away from the Socratic tradition. Philosophical skepticism may conduce to skepticism of experts. As we will see in Chapter 6, Bernard Mandeville stands out for his epistemic vision supporting skepticism of experts and expertise. The notion of dispersed knowledge, which was recognized by Mandeville and clarified by F. A. Hayek, also supports skepticism of experts and expertise.
EXPERT WITNESSES IN LAW
Long before Berger and Luckmann (1966) examined the general problem of experts, there was the problem of expert witnesses in law. This problem can be traced far back in Anglo-American law, though a separate legal category of “expert witness” emerges only in the nineteenth century. At least as far back as the fourteenth century, a special jury of persons with the requisite knowledge might be gathered. If a customer complained of being sold “putrid victuals,” the mayor might “summon persons of the trade of the man accused, as being well acquainted with the facts, and their verdict would decide and the mayor direct sentence accordingly” (Hand 1901, pp. 41–2). There would seem to be an incentive problem with special juries in, at least, many cases. In the case of “putrid victuals,” for example, tradesmen who were called in as experts would likely have placed their commercial interests above justice. In general, the experts on a special jury might have more sympathy for the defendant than for the plaintiff. The courts were also able to get expert knowledge by summoning experts to advise the court. Thus, “In 1345, in an appeal of mayhem [which is the crime of injuring another to impair their self-defense], the court summoned surgeons from London to aid them in learning whether or not
the wound was fresh” (Hand 1901, pp. 42–3). Hand informs us that the practice of summoning witnesses was not established before the middle of the fifteenth century. Thus, the English courts had found ways to avail themselves of experts even before they had adopted the practice of calling ordinary witnesses to testify. Hand (1901) cites the 1620 British case of Alsop v. Bowtrell as “the first case [he had] found of real expert testimony,” which he characterized as “a case where the conclusions of skilled persons were submitted to the jury” (p. 45). Hand reports, “In the eighteenth century the practice was certainly well established” (p. 47). Golan (1999) traces the emergence of expert witnesses as a “distinct legal entity” (p. 9) in British law to the “Adversarial Revolution” of the early eighteenth century (p. 9). Once the defendant had acquired the right to counsel, sometime before the end of the 1730s, litigants began to influence the process of evidence production. By the end of the century “the practice of evidentiary objection” led to the hearsay doctrine, which restricted testimony to “personal observation,” and the opinion doctrine, which restricted testimony to facts and excluded inferences drawn from such facts (Golan 1999, p. 10). The expert witness, who draws inferences from facts not personally observed, then emerges as an exception to these two core rules of evidence. Golan (1999) depicts the British courts as largely trusting of expert witnesses in the early part of the nineteenth century. As the century progressed, “men of science” (Golan 1999, p. 15) became increasingly important among expert witnesses. This led to disappointment and frustration among both jurists and “men of science.” Jurists were disappointed, in Golan’s telling, that science did not seem to provide unequivocal testimony. Alas, the experts disagreed with one another. The scientists in their turn were dismayed “that their standard strategies for generating credibility and agreement did not well withstand the adversarial heat of the courtroom” (Golan 1999, p. 16). The record seems to support Golan’s description. Alfred Swaine Taylor’s A Manual of Medical Jurisprudence was first published in 1844. In it, he inveighs against partisan medical experts: “We ought not to hear, as we have done in recent times, of a medical prosecution and a medical defence. Under such circumstances, a medical jurist can be regarded no longer as the scientific witness of truth, but as the biassed advocate, who will spare no effort to extricate the party for whom he appears” (as quoted in Unattributed 1844, p. 273). Later editions contain similar excoriations against partisan medical testimony. In what seems to be the last edition published in his lifetime, A. S. Taylor says:
“there appears to be no other remedy than that of appointing a Medical Board of competent persons to act as assessors to the learned judge” (1880, p. 339). A. S. Taylor seeks neutrality rather than contending views. (As we will see below in connection with the Palmer case, A. S. Taylor seems open to the charge of hypocrisy in deriding partisan expert witnesses.) John Pitt Taylor, in his 1848 Treatise on the Law of Evidence, says: “Perhaps the testimony which least deserves credit with a jury is that of skilled witnesses. These gentlemen are usually required to speak, not to facts, but to opinions; and when this is the case, it is often quite surprising to see with what facility, and to what extent, their views can be made to correspond with the wishes or the interests of the parties who call them” (Taylor 1848, p. 54). J. P. Taylor notes that the “judgments” of such witnesses become “warped,” thus anticipating the gist of the theory of “observer effects” developed by Rosenthal and his coauthors (Rosenthal and Fode 1961; Rosenthal 1978) and applied to forensic science testimony by Risinger et al. (2002) and many others, including, notably, Itiel Dror (Dror and Charlton 2006, Dror et al. 2006) and Krane et al. (2008). J. P. Taylor recommends that such testimony be given low evidentiary weight. The passage just quoted, as well as the entire paragraph here discussed, is repeated word for word on page 79 of the 1887 edition of the same book (Taylor 1887). It seems fair to conclude from these two books, each of which went into multiple editions sold in both the United States and the United Kingdom, that the partisan witness was in this period widely viewed with some mixture of distrust and disdain. This view is supported by several other references to expert witnesses at court, including “scientific witnesses” and “medical witnesses.” One somewhat amusing illustration comes from an 1854 letter published in a British medical journal (Browne 1854). The letter’s author (Andrew McFarland) complains of the risk of a malpractice suit and the role of expert witnesses in court. He objects that “the surgeon of science and experience finds confronting him as experts, irregular practitioners of all stripes, and of the most contemptible character” (p. 243). In this passage, a medical expert objects to being challenged by other medical experts, whose competence and character are correspondingly called into question. At about this time, discussions among scientists and physicians of expert scientific and medical witnesses often placed great emphasis on the moral character of the physician or scientist. It may be that this imagined role of character reflected humanist ideas of the Renaissance and early modern
period. Cook (1994) says: “That the man of learning possessed good character, and might therefore offer good counsel rooted in good judgment, was a common humanist theme, a theme used to persuade the gentry and aristocracy to become learned men, too” (p. 4). Whether the nineteenth-century emphasis reflects this humanist idea or not, at this time it was often considered scandalous that a dignified man of science would be exposed to challenging cross-examination. The Palmer trial inspired much of this sort of thing. In a celebrated case, the physician William Palmer (1824–56) was convicted of using poison to murder his friend John Parsons Cook. Palmer needed money to help clear his gambling debts and was accused of killing Cook to get it. Davenport-Hines (2009) finds the evidence “weak” and criticizes the “show trial” that resulted in the death penalty for Palmer. Burney (1999, p. 77) describes the climate of thought in the run-up to the Palmer trial: [T]he public standing of toxicology was far from straightforward: its leading spokesman was equivocal on the legitimacy of expectations placed on toxicological proof, press opinion was divided between those who warned against sacrificing common sense to chemical pedantry and those descrying the subjective influences at the root of the “detection mania,” and the medical press seemed to acknowledge the profession’s vulnerability to charges of authoritarianism in taking away the public’s capacity for participatory judgment.
Discussions of the Palmer case were often dismissive and judgmental toward the defense’s expert witnesses. One account laments: “the worst danger to the administration of justice, and the greatest injury of the scientific character, will be incurred whenever it shall be known that professional witnesses may be retained to establish indifferently a case for either side” (Unattributed 1856, p. 39). Another (Davies 1856, p. 611) says: I am sure there is not one gentleman here present who has not felt annoyance, humiliation, and grief, at the spectacle which our profession presented at, perhaps, the most important of modern trials – that of William Palmer for the murder of John Parsons Cook. Who is there among us who did not feel his cheek burn with shame for the profession to which he belonged, as he read day by day the so-called medical evidence brought forward for the defence of Wm. Palmer?
Davies finds it unbearable that medical experts would be subject to doubt and challenge from the public, and he blames the defense witnesses for this “humiliation.” Davies insists, “A witness must not be partisan” (1856, p. 612). But the careful historical review carried out by Burney (1999) indicates that the medical case against Palmer may have been much weaker than these contemporary accounts suggest.
Earlier we saw Alfred Swaine Taylor say, “We ought not to hear . . . of a medical prosecution and a medical defence.” But in the second American edition of his Poisons in Relation to Medical Jurisprudence and Medicine, published in 1859, we learn that he, Taylor, was a witness for the prosecution in the Palmer case (p. 63). Indeed, A. S. Taylor was the main medical witness for the prosecution (Burney 1999). Later in the same book, Taylor harshly criticizes expert witnesses for the defense. After proclaiming the medical testimony for the defense to have been worthless and false in the Palmer case, Taylor says: It argues but little for the knowledge or moral feelings of medical witnesses, and must shake the confidence of the public, as it has already done to a great extent, in the trustworthiness of medical opinions. Such must be the result when scientific witnesses accept briefs for a defence; when they go into a witness-box believing one thing, and endeavor to lead a jury by their testimony to believe another – when they make themselves advocates, and deal in scientific subtleties, instead of keeping to the plain truth. (p. 703)
This tirade was audacious. Medical witnesses for the prosecution are spotless. Those for the defense are ipso facto liars. Something similar happens in the United States today when the prosecution derides defense experts in forensic science as “hired guns.” This accusation conveniently ignores the fact that forensic experts for the prosecution are also paid. It also ignores the fact that many American crime labs are funded in part per conviction (Koppl and Sacks 2013, pp. 147–8; Triplett 2013). Prosecution experts are typically full-time employees of a public crime lab, which is usually part of a law-enforcement agency such as the municipal police or the FBI. They are thus dependent on the police or prosecution for their livelihood and good income. By contrast, defense experts are more likely than prosecution experts to be supplementing their income with part-time work as expert witnesses. They are less dependent on their clients. Even those whose income derives principally from expert witnessing may choose their clients and therefore find it less necessary to generate a pleasing outcome in each individual case. At least one review of A. S. Taylor’s book notes the tirade just quoted and criticizes it. The reviewer notes that “Dr. Taylor himself is paid, and properly so, for giving evidence,” and expresses “regret that Dr. Taylor should not have understood the position and duties of counsel better” (Unattributed 1859, p. 575). Burney (1999, pp. 87–92) notes that in the wake of the Palmer trial, A. S. Taylor inserted materials into his texts (Taylor 1859, 1880) that seem designed mostly to vindicate his inherently
rather dubious testimony in the case. Later, in the Smethurst case of 1859, Taylor was embarrassed by the public revelation of a significant error that put his reputation at risk (Coley 1991, p. 422). The Palmer and Smethurst cases were “public scandals of scientific expert testimony” (Golan 2004, p. 110). These scandals precipitated an important 1860 meeting of the Royal Society of Arts reported on in volume 7 number 374 of the Journal of the Society of Arts (Golan 2004, p. 111). The arguments made in this meeting reflect the “widely accepted convention that expert disagreement was detrimental to both justice and science” (p. 118). Science is neutral and good. Partiality is bad, venal, and unscientific. Many of the proposals put forward seem designed to promote the interests of the medical and scientific professions rather than any cause of justice. Smith (1860, p. 141) proposes “To have a scientific assessor on the bench beside the judge, who shall examine the witnesses, if needful, and who shall advise the judge.” This assessor “shall not be questioned as a witness, but sit as assistant to the judge.” Interestingly, Smith even calls for this assessor to be “appointed by the Crown,” which would give “a better position to science . . . and would also enable the assessor to speak less as an inferior or an employed of the judge” (p. 141). Smith seems unaware that his comments might be construed as self-serving. Oblivious to this possibility, he says blithely, Scientific men are bound together by mutual beliefs in a stronger manner than the community at large, and if placed in this honourable and independent position, they will act according to their knowledge and character, and cause to cease much unnecessary contradiction and opposition. Being bound to speak the truth, the whole truth, and nothing but the truth, they will feel in honour bound to do so when an opportunity offers. This opportunity rarely occurs in the present confusion. (p. 141)
It is an abomination that a mere barrister should speak to a “scientific man” as “an inferior personage” (p. 141). The proposal for such an “assessor” did meet with some resistance. One commenter, E. Chadwick, noted that “it implied omniscience in science and art – a false notion, in which was the foundation of an immense amount of mischievous quackery” (p. 143). Interestingly, Chadwick bolstered his position of epistemic humility by appealing to the division of knowledge in science. “As labour in science and art was constantly being subdivided, each one engaged in the exclusive pursuit of one subdivision was, to the extent of his pursuit, excluded from the knowledge of others, and was, as related to them – if upon success in his subdivision he
presumed to give an authoritative opinion upon those others – an ignorant quack” (p. 143). It is interesting to note that Smith also anticipated the “hot tubbing” of expert witnesses. Smith says, “It was my belief that if the scientific witnesses, on their appointment by either side, met and arranged their points of agreement, and arrived at their exact points of disagreement or divergence, much time would be saved to the courts” (Smith 1860, p. 138). More or less the same procedure is now applied in at least some Australian jurisdictions under the name “hot tubbing” (Biscoe 2007). There are two forms of hot tubbing in Australia. With “concurrent evidence” the experts testify together and may be allowed to ask each other questions. In a “conclave” or “joint report” the experts meet before trial and prepare a joint report. These meetings are intended to enable the experts to identify the extent of their agreement or disagreement, resolve or narrow differences, and reduce their respective positions to writing in the form of a joint report that they are required to endorse. This joint report, it is hoped, will help to procure settlement. Ordinarily, only the areas of disagreement will be “live” should the case proceed to trial. (Edmond 2009, p. 165)
Golan (2004) reports that most participants in the meeting of the Royal Society of Arts “shared the widely accepted convention that expert disagreement was detrimental to both justice and science. Only one commentator challenged this view,” William Odling (p. 118). Odling (1860) says, “truth can only be arrived at by the conflict of testimony” (p. 167). As Golan (2004) points out, Odling illustrates his principle with a patent case that turned on whether a particular chemical reaction was possible. Received wisdom had it that the reaction could not occur. Partisan experts for the defendant devised a “new reaction” that contradicted the claims of the plaintiff’s experts. “And this illustration shows the importance of having a subject investigated by men desirous of establishing different conclusions” (p. 168). Golan (2004) says, “Elegant as they were, Odling’s arguments failed to persuade many” (p. 119). Golan explains: “The suggestion that adversarial proceedings might result in the production of new scientific knowledge also went against the ideological grain that portrayed the scientific discourse as a neutral domain of knowledge where all could meet and work together for good social ends” (pp. 119–20). As we have seen, different views of expert witnesses were adopted in the nineteenth century. It seems fair to say, however, that the general tendency was dismay over the impossibility of overcoming ambiguity and scientific disagreement. Reynolds (1867, p. 644) says: “The production of expert
testimony is therefore so elaborate that it virtually destroys itself.” Cross-examination was another common cause for dismay, generally because it challenged the august dignity of honorable men of science.
Like others before him, Washburn (1876) proposes “to have the court before which the trial is to take place, select a proper number of experts of an established reputation, after a proper hearing of the parties, and to have these called, while the parties may still be at liberty to call others if they see fit” (p. 39). The goal seems to be neutrality. Making experts officers of the court would increase the “weight” of the experts’ opinions with juries. And it would “add a dignity and importance to the office of an expert” (p. 39). Here again we are told how scandalous it is for the opinions of men of science to be challenged by other men of science or, especially, by those outside science. This common attitude, still encountered today, seems to depend on the view that the knowledge of expert witnesses is or should be uniform, unambiguous, and certain. Even if we were to accept this description of general “scientific” knowledge, expert testimony applies such knowledge to complicated, ambiguous, and contested facts arising from human events. The knowledge of an expert witness is, therefore, generally more like the dispersed and relatively pragmatic knowledge of everyday life than the defenders of science are sometimes willing to acknowledge. It is in its basic character, then, not very different from the knowledge of other participants in the proceeding.
A notice of Washburn’s article appearing in the British Medical Journal endorses the proposal (Unattributed 1877a). “Under such improved conditions, the witness-stand would no longer be a place of torture for physicians.” Though jocular in its expression, this comment seems to reflect the expert’s desire to go unchallenged and to be obeyed. Calls for “neutrality” endorse the view of many experts that their opinions should not be challenged. And measures to eliminate the putatively scandalous “battle of the experts” reinforce the unity and common interests of members of the same profession, such as medicine.
The economic theory of professions helps us to understand some of the discussion of expert witnesses found in both the legal and, especially, medical and scientific literature. Savage (1994) gives us an economic definition of “profession” as “a network of strategic alliances across ownership boundaries among practitioners who share a core competence” (p. 131). The profession is the network. Importantly, “a practitioner’s performance” typically “has a direct . . . effect on the ability of other practitioners to use their own capabilities” (p. 135). Savage says, “Each individual’s routines have to ‘interface’ or be compatible with the routines of others in the network” (p. 136).
The professional network, which requires harmony and agreement, is, essentially, the ground on which the professional’s income rests. Shaking the ground threatens that income. The “battle of the experts” tends to shake the network and cause confusion and distress to its members. The role of the professional network in the professional lives of physicians and other experts may be an important source of the repeated lamentations that partisan experts create “gross and scandalous evils,” that it is “disagreeable” and “demoralizing” to have such partisanship, and that the witness stand should not be “a place of torture” for the expert witness.
One writer pointed out that a “clever solicitor” can send several physicians a “copy of the depositions, with a fee, or the promise of a fee” and hire only those who give a favorable opinion. Importantly, this procedure “will be studiously concealed from the court” (Unattributed 1877b). This logic is similar to that of Feigenbaum and Levy (1996), which I will discuss in Chapter 8.
At a meeting of microscopists, one participant suggests, “A committee could formulate matters so that others besides specialists could tell what might be expected and what could not be looked for as results of microscopical work.” Another supports the idea because it “would tend to dispose of the farce too frequently presented when an astute lawyer pits the testimony of one expert against that of another” (Burrill 1890, p. 214). A third says, “We should agree upon a standard by which all general statements can be tried. In this way the efficiency of expert testimony can be augmented. The association of agricultural chemists have established a standard which is accepted in the courts, and there is no reason why this Society may not do the same for its own department of knowledge” (pp. 213–14). Here again we find the aversion to competing experts and the corresponding desire to somehow prevent diversity of opinion from being expressed. And here again we see the role of the profession in attempts to prevent disagreements among experts.
In one jurisdiction in England – Leeds – the professional aversion to disagreement seems to have influenced court procedure. “In Leeds,” we are told, “the custom has, we believe, long obtained among the leaders of the profession to refuse to give expert evidence on any case until after a meeting of the experts on both sides; and this practice has worked so well that the Leeds Assizes are notable for the absence of these conflicts of scientific testimony which elsewhere has done so much to discredit such testimony in courts of law” (Unattributed 1890, p. 492). If this report is accurate, it would seem that physicians and, perhaps, other experts
succeeded in transforming the adversarial procedure in a manner that likely did more to promote testimonial unity than truth and justice.
Foster (1897) shows that expert witnesses at the time were held in low esteem. Contemporary commentary often insisted that such testimony be given low weight.
In 1902 The American Law Register published an interesting exchange on experts between Clement B. Penrose, who was a Pennsylvania judge, and Persifor Frazer, who was (among other things) an early expert in forensic handwriting identification. Responding to Frazer (1902), Penrose says “the expert is – and should be so regarded – simply a scientific advocate, associated with the legal advocate, of the party on whose behalf he appears in the case” (Penrose and Frazer 1902, p. 346). Frazer responds: “If [the expert] be a disguised special counsel, or as you call it a ‘scientific advocate,’ he is an expert fighter, a case winner, – what you will, – but not an expert in its highest and best sense” (p. 347). Importantly, Frazer cannot admit the possibility that experts might disagree: “The opinion which results from an examination of a subject by an expert is a fact as much as any occurrence, and if he deliberately denies holding such an opinion, he is (outside of the legal definitions) a perjurer and a criminal.” Penrose responds with examples to show that experts do give opinions that should not be classified as facts. His examples include one that seems crafted to annoy Frazer: “that what purports to be the signature of one whom he has never known is, in his opinion, simulated” (p. 348). Penrose continues, “[I]t would not do to deprive the party against whom the scientific opinion is pronounced of the opportunity of contradicting the soundness of the opinion or showing by cross-examination the want of knowledge of the person expressing it, or the illogical method by which his conclusion was deduced from his premises” (p. 349). Frazer rises to the bait and defends his forensic techniques as “practically ‘certain’” (p. 349). Thus, in the cases to which his techniques are applied, “the question has ceased to be one of opinion and has become one of fact” (p. 349). Frazer’s conclusion reveals his epistemics:
It is a nebulous, uncertain, and ever changing line which separates what the court calls “opinion” and “fact.” In truth, the belief of the court in the reality of such a distinction is opinion and not fact; and, anyhow, all that we can know of any “fact” is that it is an “opinion” held by many or few persons, and this includes even the great first fact, Cogito ergo sum, which as regards incontrovertibility is in a class all by itself. (p. 350)
This celebratory allusion to Descartes and Cartesian rationality places Frazer far from the broadly Hayekian view of knowledge I will articulate in Chapters 6 and 7.
Frazer wrote a book on questioned documents analysis (which includes handwriting identification) that was published in at least three editions. (See Frazer 1901. See also Frazer 1894.) In it he makes repeated reference to “experiments.” These “experiments,” however, are techniques for experts to apply in their cases. Frazer does not seem to have conducted any validation studies for his techniques of handwriting analysis. Indeed, their validity seems to be a “postulate” in no need of testing. Frazer’s three “postulates” of handwriting analysis include: “The determination of averages with limits of variation is applicable to the study of handwriting” (Frazer 1901, p. 133). Risinger et al. (1989) report that no validation studies for handwriting identification had been conducted prior to 1939. They say: “For now [1989], the kindest statement we can make is that no available evidence demonstrates the existence of handwriting identification expertise” (pp. 750–1). More recently, the National Academy of Sciences has noted the “limited research to quantify the reliability and replicability of the practices used by trained document examiners” (NAS 2009, pp. 5–30). The virulence of Frazer’s prose sometimes seems designed to compensate for the absence of foundations for his methods. He says, for example, that “it is the Judges who are most to blame” for “degrading expert testimony on handwriting.” The disparagements of these judges come sometimes “from ignorance of the difference between scientific methods and quackery” and sometimes “from a disbelief that any one can be entirely uninfluenced in his conclusion in a case affecting the man who pays his fee.” In either event, Frazer suggests, it is an outrage: These judges are themselves (presumably) chosen for their positions because they are experts in the law, and would visit condign punishment upon a plain citizen who accused them of soiling their judicial ermine by deciding cases in favor of the side best able to promote their advancement. Yet they not infrequently join in the hue and cry against other experts, and very particularly those in handwriting. (Frazer 1907, p. 272)
Mnookin (2001) provides an interesting history of handwriting identification that includes a fairly substantial discussion of Persifor Frazer. Hand (1901) thought that expert witnesses raise “serious practical difficulties” (p. 50) because the expert effectively “takes the jury’s place” (p. 51) at trial, which he viewed as an “evil” (p. 52). Hand shared with many others the objection that “the expert becomes a hired champion of one side” (p. 53). Hand’s core objection, however, is that contending experts disagree. When they do, “the jury is not a competent tribunal” (p. 55) and cannot be expected to resolve the contradiction except arbitrarily. The only solution, Hand believes, is “a board of experts or a single expert, not called
by either side, who shall advise the jury of the general propositions applicable to the case which lie within his province” (p. 56).
The common nineteenth-century view that expert scientific witnesses should receive a certain deference did not disappear in the following century. Eliasberg (1945) laments the “to and fro of attacks and counterattacks; of fanning and rooting and intrigue among all those before the bar – these are the roots of an atmosphere which is hostile to the ideals which are often found engraved upon the pediments of our court buildings: ‘The Place of Justice is a Sacred Place’” (p. 234). His list of “urgently needed” rules and procedures includes: “Counsel should be held in contempt if he willfully attacks the honor or self-interest of the expert” (p. 241).
The nineteenth-century literature on expert witnesses is remarkably similar to more recent discussions (Cheng 2006). Today as well we have calls for neutrality and objections to the “battle of experts.” It is a scandal when scientific opinions differ. Experts should be officers of the court and thus removed from the adversarial process. Today as well we see professional groups attempting to impose uniformity on the testimony of their members. And today as well we can find instances in which defense witnesses are vilified as “hired guns” and the supposed authority of prosecution witnesses is celebrated, notwithstanding the equal or greater dependence of the prosecution’s experts on payments for their work. And today as well, lamentations on the biases of experts are often based on the (generally implicit) assumption that scientific knowledge is uniform, unambiguous, and certain rather than distributed, embodied in practice, and decidedly fallible.
The predominant attitudes expressed in the nineteenth-century Anglo-American discussions of expert witnesses are similar in some ways to the Socratic tradition described earlier in this chapter. In both cases the expert is celebrated for their ethical and epistemic superiority to nonexperts. In both cases the “nihilation” described by Berger and Luckmann is practiced by experts against the laity. Laics should not challenge experts; they should obey. In both cases, expert knowledge is hierarchical, uniform, unambiguous, and certain. Diversity of opinion is bad. In both cases we have experts who fit all too well the model in Berger and Luckmann (1966) of “universal experts” who are “the officially accredited definers of reality” (p. 97). Unfortunately, in my view, such attitudes are common today as well.
4
Recurrent Themes in the Theory of Experts
Several themes and issues recur in past discussions of experts. While these themes overlap, it may be helpful to briefly consider them under separate headings.
POWER
Thinkers who view experts as unreliable will generally fear expert power. Only those who view experts as reliable are likely to endorse increasing expert power. The more “powerless” nonexperts are in the sense of Table 2.1, the greater is the threat posed by expert power. As we have seen in Chapter 2, Foucault (1980, 1982) sees “disciplines” as sources of power. When some people can impose knowledge on others, we have a power relationship in which those imposed upon are oppressed at least in some degree. Turner (2001, pp. 123–9) says that Michel Foucault and others in the tradition of “cultural studies” tend to the view that expert actions and categories “constrain” consumers “into thinking in racist, sexist, and ‘classist’ ways” (p. 126). This remark fits many scholars who cite Foucault favorably. Foucault himself, however, was subtler than Turner’s remark suggests. It nevertheless seems fair to say that Foucault tended to work in grand categories that are disconnected from both the individual meaning structures described carefully by Berger and Luckmann and the social processes that give rise to them. Foucault’s excess reliance on grand categories reflects his conception of his “problem” as “a history of rationality.” He has said, “I am not a social scientist.” He was examining not “the way a certain type of institution works,” but “the rationalization of the management of the individual” (Dillon and Foucault 1980, p. 4). He thus emphasized the imposition of unitary knowledge schemes on populations such as prisoners and students.
The issue of expert power is important to figures generally considered “left,” such as Foucault (1980, 1982) and Habermas (1985). It is also of concern, however, to liberal thinkers, who are sometimes dubiously considered “right.” Easterly (2013) is a good representative of this strain of concern over expert power. He repudiates the “technocratic illusion,” which he defines as “the belief that poverty is a purely technical problem amenable to such technical solutions as fertilizers, antibiotics, or nutritional supplements” (p. 6). Easterly says, “The economists who advocate the technocratic approach have a terrible naïveté about power – that as restraints on power are loosened or even removed, that same power will remain benevolent of its own accord” (p. 6). Poverty is about rights, not fertilizer. “The technocratic illusion is that poverty results from a shortage of expertise, whereas poverty is really about a shortage of rights” (p. 7). Easterly supports human rights in part because “the rights of the poor . . . are moral ends in themselves” (p. 6). He also identifies a basic mechanism making “free development” better at improving people’s lives. People with rights can choose whom to associate with, contract with, and vote for to help solve their problems. Cooperative social connections, demand, or votes will grow for the more helpful problem solvers, who will attract imitators. Good solutions spread and the people grow richer. Free development leverages epistemic diversity to find solutions to human problems. The “tyranny of experts,” by contrast, has very little epistemic diversity and limits feedback from the people to the expert. Expert schemes imposed on the people don’t allow for the heterogeneity naturally emergent from free development. They are usually one-size-fits-all. Nor do they entail the ceaseless local searching and experimentation of free development. Top-down planning cannot equal “the vast search and matching process” (p. 249) of free development. Thus, Easterly views experts as fundamentally unreliable. Democracy and economic liberalism are valuable in part because they tend to empower nonexperts. Easterly links his liberal argument on expert power to the Hayekian “knowledge problem,” which I discuss in Chapters 6 and 7:
Another way to state the knowledge problem is that success is often a surprise. It is often hard to predict what will be the solution. It is even harder to predict who will have the solution, and when and where. And it is even harder when the success of who, when, and where keeps changing. This is just restating Hayek’s insight about the knowledge problem with conscious design. (p. 241)
Like Berger and Luckmann (1966), Easterly links the problem of experts to the division of knowledge in society. Coyne (2008) makes broadly similar criticisms of expert power in connection with military interventions. Central to the issue of power is the question of who chooses. Do experts choose for nonexperts or merely offer advice and opinion? State-sponsored eugenics experts may have the power to decide for others whether they should be sterilized. Such cases are not, unfortunately, “ancient history” upon which we may look back with a shudder and a sense of superiority. Ellis (2008) says that “genetic factors contribute to criminality. Therefore, curtailing the reproduction rates of persons with ‘crime-prone genes’ relative to persons with few such genes should reduce a country’s crime rates” (p. 259). He explicitly labels this strategy a “eugenic approach” to crime fighting (p. 258). Noting that the use of antiandrogen drugs “is also called chemical castration,” he says: “administering anti-androgens to young postpubertal males at high risk of offending, especially regarding violent offenses, should help to suppress the dramatic surge in testosterone in the years immediately following puberty. Males with the greatest difficulty learning may need to be maintained on anti-androgen treatment for as much as a decade” (p. 255). Ellis imagines such policies would be administered with scientific neutrality and precision. He forgets that the administrators would occupy positions of privilege in a dominance hierarchy and act, therefore, in the unfortunate ways predicted by evolutionary psychology. In the United States, formally recognized and legally sanctioned coercive sterilizations were performed well into the 1970s (Stern 2005; Shreffler et al. 2015). More recently, the Center for Investigative Reporting has found that “Doctors under contract with the California Department of Corrections and Rehabilitation sterilized nearly 150 female inmates from 2006 to 2010 without required state approvals” (Johnson 2013). David Galton (1998), though distancing himself from genocide and most of the coercive eugenics of the twentieth century, says that “a state body should intervene” if a woman pregnant with a “trisomy 21” child is “clearly unable to provide economically for the long term care of her handicapped child” (p. 267). Present-day eugenicists such as David Galton (1998) and Lee Ellis (2008) would empower eugenic experts to make reproductive decisions for others. Eugenic policies are a subset of population policies. Population policy may range from seemingly benign measures such as state-sponsored child care to forced sterilization to genocide. Eugenic policies properly speaking depend on the assumption that disfavored human qualities are heritable.
The broader category of “population policy” does not. Nevertheless, when the size or composition of the population is a policy objective, the people are being viewed as livestock meant to serve an end separate from the population’s component individuals, whose autonomy and dignity should be respected. This perspective of human husbandry may make coercive measures more desirable for some political actors. De la Croix and Gosseries (2009, p. 507) advocate “Population policy through tradable procreation entitlements.” They favor “tradable procreation allowances and tradable procreation exemptions” to achieve “the optimal fertility rate.” They trace their proposal to Kenneth Boulding (1964), who seems to have been the first thinker to propose procreation vouchers. Hickey et al. (2016, p. 869) say “that the dire and imminent threat of climate change requires an aggressive policy response, that it is reasonable to think that this response should include population engineering.” “Further,” they argue, “aggressive implementation of well-designed choice-enhancing, preference-adjusting, and incentivizing interventions aimed at reducing global fertility would be morally justifiable and potentially effective parts of a global population engineering program.” Policies that do not “consider population as a variable to be manipulated, might turn out to be too little too late.” The greater the supposed urgency of global warming and income inequality, the more likely we are to have coercive population policies. O’Neill et al. (2010, p. 17525) claim to have shown “that reduced population growth could make a significant contribution to global emissions reductions.” They find “that family planning policies would have a substantial environmental cobenefit.” Citing O’Neill et al. (2010), Johns Hopkins ethicist Travis Rieder has described children as “externalities” (Ludden 2016). “Rieder proposes that richer nations do away with tax breaks for having children and actually penalize new parents. He says the penalty should be progressive, based on income, and could increase with each additional child” (Ludden 2016). Rieder has appeared on the television show “Bill Nye Saves the World.” His comments on that show induced the host to ask: “So should we have policies that penalize people for having extra kids in the developed world?” (Rousselle 2017). Rieder responded, “So, I do think we should at least consider it.” Thus, coercive population policy measures in the United States are beginning to be vetted with the general public as a method of combatting the threat of global warming. Because global warming is often represented to the public as a catastrophic threat, it seems only reasonable to fear that coercive population policies will gain political acceptance in the relatively near future. We
should again recall Berger and Luckmann’s (1966, p. 88) observation that the “general population is intimidated by images of the physical doom that follows” when experts are not obeyed.
Merton (1937, 1945, 1976) may have been too sanguine about expert power. He looked to the social structure of science and declared the “quest for distinctive motives” of scientists to be “misdirected” (1937, p. 559). He attributed the epistemic merit of science to the social structure of science rather than any personal merit of scientists. This connection between social structure and the epistemic performance of experts, in this case scientists, is important and valuable. As my earlier brief discussion suggests, however, Merton did not give adequate weight to the risk that an expert might misbehave or make mistakes. The discussion of sociological ambivalence in Merton and Barber (1976) illustrates the point. Merton and Barber (1976) identify many forms of “sociological ambivalence” (pp. 6–12). They focus, however, on the “core type” in which “incompatible normative expectations” are “incorporated” in a social role, social status, or set of simultaneously occupied social positions. So defined, “sociological ambivalence” is about the tensions within recognized and socially approved norms. It is not about when, whether, and how professionals might deviate from such norms. Thus, Merton and Barber (1976) obscure and ignore potential problems with experts by the very definition of “ambivalence,” which discourages any examination of misbehavior or unconscious bias among experts. Tellingly, when discussing the frustrations a professional’s client may feel, Merton and Barber (1976) say: “We focus on frustrations induced by the professional living up to his role” (1976, p. 26). And they do not distinguish the client’s reactions to “his doctor, his lawyer, his social worker or his clergyman” (pp. 26–7). But the “social worker” will not generally have been chosen by the client, whereas their clergyman (in the United States) typically is chosen under conditions of free competition. In other words, the social worker wields the power of the state, whereas the clergyman has only the “power” of persuasion. As I will discuss in later chapters, doctors and lawyers are intermediate cases: Licensing restrictions and professional organizations give these experts a degree of epistemic monopoly, but their power is at least somewhat mitigated by the client’s power to choose among certified experts. For Merton and Barber, however, it is always the relatively reliable expert interacting with a potentially recalcitrant nonexpert who is in need of the expert’s ministrations, any recalcitrance notwithstanding. Merton and Barber are insensitive to the risk of expert failure. Indeed, when at last they acknowledge that clients may “suspect the motivations of the professionals who minister to their needs” (p. 27), the focus is entirely
on the clients’ anxiety and not the prospect that such anxiety may very well be justified. They say, “And once again, we emphasize that we are not dealing here with cases in which professionals do, by the standards of the time, exploit the troubles of their clients. We are concerned with legitimate practices and patterned situations, not with deviant practices, that produce ambivalence” (p. 28).
Merton (1937) saw that the epistemic success of science was a function of its social structure. He says, for example, that its disinterestedness is not a matter of the superior morality of scientists. “It is rather a distinctive pattern of institutional control of a wide range of motives which characterizes the behavior of scientists.” Scientists are not unusually ethical; rather, “the activities of scientists are subject to rigorous policing” (1937, p. 559). This link between social structure and epistemic performance was pathbreaking, and it remains important. And yet, when Merton turns to the professions, he neglects to ask when and whether such experts are “subject to rigorous policing.” He cordons off considerations of “deviant practices” and considers only the “ambivalence” arising from “legitimate practices.” The issue of power fades behind a screen of complacency about the expert’s expertise. This limits the utility of his analysis of ambivalence for an economic theory of experts.
The point is not always to constrain the power of the experts. As we saw in Chapter 3, Socrates seems to have sought to empower experts and called on Athenians to obey them. Cole (2010) wants to empower a “knowledge elite” within forensic science. Using medicine as his model, he explicitly calls for greater “hierarchy” in forensic science to empower the knowledge elite. He thus wishes to increase the power of some experts to reduce the power of others. Experts in the knowledge elite are reliable. Experts below the knowledge elite are unreliable. The power of these unreliable experts must be constrained by empowering the reliable experts. It seems fair to suggest that Cole tends to view nonexperts as fundamentally powerless, at least within the context of the criminal justice system. It is for this reason that he seeks a hierarchical and not a democratic solution to the problem of experts. (See my exchange with Cole in the Fordham Urban Law Journal: Cole 2012; Koppl 2012a.)
ETHICS
Ethics enter theoretical treatments of experts in at least three ways. First, there is the question of the virtue of experts. More virtuous experts are generally considered more reliable. And if experts are imagined to be more
virtuous than nonexperts, as in much of the nineteenth-century literature on expert witnesses, it may seem sensible to minimize the power of nonexperts so that they may receive instruction from their betters. Second, there is the question of what ethical norms should constrain experts. Finally, there is the question of what social mechanisms would induce or constrain experts to act within ethical norms. Experts are sometimes viewed as virtuous, and this virtue is often taken to bolster their supposed epistemic superiority over nonexperts. Thus, the supposed epistemic merit of the expert is in part attributable to their moral character. In Chapter 3 we saw this attitude expressed in both the Socratic tradition of philosophy and much of the nineteenth-century literature on expert witnesses. For example, Angus Smith (1860, p. 141) says: Scientific men are bound together by mutual beliefs in a stronger manner than the community at large, and if placed in this honourable and independent position, they will act according to their knowledge and character, and cause to cease much unnecessary contradiction and opposition. Being bound to speak the truth, the whole truth, and nothing but the truth, they will feel in honour bound to do so when an opportunity offers.
Smith’s opinion that the combination of science and virtue ensures the correctness of expert opinions has arisen in forensic science as well. Until recently, the suggestion that forensic scientists might be subject to unconscious bias was widely considered a challenge to the moral character of forensic scientists. An internationally prominent forensic scientist and forensic science researcher once heatedly exclaimed to me: “But I am trained to be objective!” More virtuous experts are not necessarily more reliable. Koppl and Cowan say that “some of the very qualities that may make a forensic scientist a good person may induce unconscious bias and consequent error” (2010, p. 251). If, for example, the forensic expert knows that the case they are working on is that of a heinous double murder, their very human decency and compassion for the victim may skew their judgment toward finding a “match” when there is no match at all. The view among forensic scientists has often been that the moral character of the forensic scientist ensured the correctness of the forensic scientist’s opinion. But this view within forensic science, as in the quote from Angus Smith (1860), requires that such virtue be combined with “science.” It is only the specially trained expert whose moral uprightness ensures a correct opinion in the given domain of expertise. For this reason, it has been important to the apologists for forensic science to insist on the scientific nature of forensic science.
In the sort of view represented above by Angus Smith (1860), “science” is also a guard against diversity of opinion. (See also Taylor 1859, p. 703 and Cook 1994.) It is necessary to resist and deny diversity of opinion if “science” and the expert’s personal virtue are to be held up as sufficient to ensure a correct expert opinion. Thus, forensic scientists have claimed a “zero error rate” for their methods, and they have testified that their results are correct to a “reasonable degree of scientific certainty” (NAS 2009; NIST 2011).
The view that experts are uniquely virtuous seems inconsistent with the view that experts require a code of ethics. Writers who question the reliability of experts are more likely to call for the explicit promulgation of an ethical code. Levy and Peart (2017) propose a code of ethics for experts. Such a code would mitigate the risks of partiality in expert opinion. A vital minimal requirement to this end is transparency. “[A]s a preliminary matter . . . it is critically important for experts to reveal information relevant to their financial interests” (p. 314). Sympathy for the client or for other experts may also create bias, as might a commitment to preferred policies or general frameworks for policy analysis. “Such sympathetic connections are more clearly revealed by detailing the history of one’s work, including not least one’s consulting history and the policy positions one has advocated in consulting and academic work” (p. 315).
Koppl and Cowan (2010, p. 241) say:
The ethics code of the American Statistical Association (ASA) comes to 3,395 words, none of which concern procedures in case of a violation. The ethical code of the American Academy of Forensic Sciences (AAFS) is 1,384 words long. Of these, however, all but 145 are preamble, section titles, or (mostly) procedures in case of a violation. Of these 145 words, 62 are devoted to saying that members must not act contrary to the “interests and purposes of the Academy” and 31 are devoted to saying that members cannot give any opinion as that of the AAFS without prior permission. That leaves 52 words for the ethics of forensic science analysis. Thus, the ethical code of American Statistical Association has over 65 times as many words devoted to ethical conduct than the ethical code of the American Academy of Forensic Sciences. This difference of one order of magnitude is explained by the greater specificity of the ASA code. Vague ethical guidelines are not likely to provide more than modest help in error prevention, correction, and detection.
Since the quoted article was written, the ethics code of the AAFS has been trimmed substantially. A code of professional ethics should help the professional decide what actions and inactions are wrong and what actions and inactions are ethically acceptable. It may also be a mechanism for inducing ethical behavior. This is achieved in part by the informational function of the
code. Most professionals have at least some desire to behave ethically. This desire may spring in part from the professional’s sense of identity with their profession. Simply articulating and promulgating a code of ethics will, then, have at least some effect on the behavior of the relevant professionals. A code also creates a standard for others to use in judging a professional, creating the possibility of censure. Such secondary effects of a code of ethics will also influence the behavior of professionals.
The theoretical literature addressing the problem of experts includes other mechanisms for helping to ensure the ethical behavior of experts. These include discussion and democratic control, which I consider below, as well as regulation and licensing restrictions. If experts are potentially unreliable, but subject to competition, they may be driven toward more ethical behavior, depending on the structure of market competition. Adam Smith saw such an ethical benefit in free competition among religions, at least if we view “candour and moderation” as ethical norms:
The teachers of each sect, seeing themselves surrounded on all sides with more adversaries than friends, would be obliged to learn that candour and moderation which is so seldom to be found among the teachers of those great sects whose tenets, being supported by the civil magistrate, are held in veneration by almost all the inhabitants of extensive kingdoms and empires, and who therefore see nothing round them but followers, disciples, and humble admirers. (Smith 1776, vol. 1, p. 197)
REFLEXIVITY
“Reflexivity” is the methodological requirement that a theory apply to itself (Bloor 1976, pp. 13–14; Pickering 1992, pp. 18–22). The problem of reflexivity arises most naturally when the theorist views experts as unreliable. The theorist themself may be considered an expert. Is the theorist’s theory, therefore, unreliable? When experts are viewed as reliable, reflexivity is less likely to be a problem. The theorist’s theory conveys the reliable expert’s opinion that expert opinions are reliable. Reflexivity is closely related to the anthill problem. The reliable expert imagines themself above the anthill looking down. To be consistent, the theorist who warns of unreliable experts must recognize themself as an ant in the anthill and emphasize this fact in their theory. Turner (1991, 2003) discusses some of the issues as they arise in science studies. He criticizes Jasanoff and others when he disparages “the inner contradictions of the attempt to be anti-essentialist (or ‘social constructionist’) about science and at the same time to provide some sort of external God’s eye
view ‘critique’ with ‘policy’ implications which bedevils ‘science studies’ attempts to be normative” (Turner 2003, p. viii). Jasanoff (2003, p. 394) in her turn also invokes a notion of reflexivity when she criticizes Collins and Evans (2002) by saying: “Nor is there an objective Archimedean point from which an all-seeing agent can determine who belongs, and who does not, within the magic ring of expertise.”
In Chapter 11 I will argue that we may use the techniques of experimental economics to skirt the vexed problem of whether to second-guess expert opinion. The use of these methods may mitigate problems of reflexivity in some degree. By constructing the truth in the human-subjects laboratory, we create for ourselves the “God’s eye view” disparaged by Turner. This perspective is valid only in the laboratory, however, and does not allow us to judge competing scientific claims in, say, climate change or epidemiology. I will argue in Chapter 11 that it helps us to judge how different institutions affect the chance of expert failure.
There are at least four other approaches to handling the potential paradoxes of reflexivity. First, the theorist may exempt themself from the theory. Second, the theorist may embrace the paradoxes of self-reference, but use irony and satire to prevent those paradoxes from destroying the theory from within. Third, the theorist may attempt to construct their theory in such a way that complete self-reference does not harm the theory. It is not clear that this strategy has been used successfully. Finally, in what may be a variant of the third strategy, the theorist may identify limits of explanation that prevent paradoxes from arising.
A theorist may skirt reflexivity problems by somehow exempting themself or their theory from the requirement of reflexivity. Marxism (in, at least, some of its variants) adopted this strategy. All ideologies reflect material forces. But the revolutionary vanguard, because of its unique position in history, can see things as they really are. They alone are exempt from the false consciousness implied by historical materialism. Exempting the theorist from their theory introduces an obvious asymmetry that may allow critics to complain of inconsistency.
Satire may be a vehicle for avoiding the paradoxes that might otherwise defeat efforts to put the theorist in the model. The satirist’s ironic voice admonishes readers to doubt their own motives and self-interpretations as well as the motives and self-interpretations of others. But this same skepticism must be applied with equal vigor to the satirist as well. Satire entails an invitation to the reader to be skeptical of the satirist. If satire and irony do indeed help bring the theorist into the model, they may be the true voices of liberalism, as with Fielding’s Tom Jones. The satirical
approach to reflexivity reflects the view that experts are unreliable. It may also reflect the view that nonexperts are potentially empowered. The satirist asks the reader to be empowered by doubting even the satirist themself. Rather than exempting the theorist, we may “put the theorist in the model,” as David Levy and Sandra Peart have expressed it to me. (In Levy and Peart 2017, p. 192, they say “we need to put the economist in the model” because economic “experts share motivational structure with those we study.”) It is hard to put the theorist in the model. But such reflexivity may be required if the theorist is to avoid modeling themself as above the anthill looking down, at least when the model does not naively assume that we are all virtuous and our thoughts and motives self-transparent. Identifying limits of explanation may make it easier to put the theorist in the model. Peart and Levy (2005, p. 3) define “analytical egalitarianism” as the social scientist’s presumption that “humans are the same in their capacity for language and trade; observed differences are then explained by incentives, luck, and history, and it is the ‘vanity of the philosopher’ incorrectly to conclude that ordinary people are somehow different from the expert.” They draw the phrase “vanity of the philosopher” from chapter 1, section 4 of Smith (1776). They have characterized this methodological norm as requiring that “differences among types of agents” in one’s model are “endogenous to the model” and that “the theorist” be viewed as of “the same type as the agents in the model” (Levy and Peart 2008a, p. 473). Knight struggled with the anthill problem. He struggled with the issue of how to put the theorist in the model, as beautifully illustrated by a passage in Turner (1991). In a letter to the sociologist George A. Lundberg, who is generally classified as a “neo-positivist” (Wagner 1963, p. 738, n. 7; Shepard 2013, p. 17), Knight asks why thinking about social questions shows such an overwhelming tendency to run into a sales competition between different forms of verbal solitaire. The social problem itself is, of course, largely that of how men’s minds work, and especially important are the minds of those who achieve some position of articulate bid for leadership, and among these there is surely no more challenging case than the mind of the behaviorist. In this connection, the principle of knowing oneself, or beginning at home, has a peculiar appropriateness. The first thing for behaviorism to explain is behaviorism! If it will take this problem first, instead of embarking on the emotional-religious spree of attempting to convert everybody to its peculiar type of enlightenment, it will immediately render the inestimable service of making itself harmless. (Knight 1933 as quoted in Turner 1991, p. 30)
Knight wants the theory to explain the theorist.
Buchanan (1959), a student of Knight, also struggles with the anthill problem. He wishes to avoid imposing the economic expert’s values on others, while preserving an advisory role for that expert. As but one equal member of the community, the economist may seek agreement, which is to say unanimity. Recalcitrant or otherwise unreasonable members of the community, however, will likely make “full agreement” impossible. The argument for unanimity assumes “that the social group is composed of reasonable men” (p. 134). In the end, therefore, the “political economist . . . is forced to discriminate between reasonable and unreasonable men in his search for consensus” (p. 135). Rather than “absolute unanimity” we seek “relative unanimity.” The move to relative unanimity seems doubtful. Relative unanimity is not agreement at all. It is an imagined agreement among imagined “reasonable men.” We have with Buchanan, as with Rawls, the attempt to deduce what “reasonable” persons would choose in a purely imaginary and hypothetical context that does not and cannot exist in reality. There is no discussion or bargain, only the theorist’s imagined discussion or bargain. But then the theorist is no longer one among equals. The theorist instead becomes everyone! (See Gordon 1976, pp. 577–8.) In the end, then, “relative unanimity” does not prevent the “political economist” from imposing themself on others and thus breaking the moral and epistemic reflexivity Buchanan sought. By deciding what is “reasonable” and imagining what “reasonable men” would agree to, the political economist decides for the polity which opinions count. Opinions the theorist cannot imagine are thereby excluded and do not count. F. A. Hayek also addressed reflexivity. The Sensory Order (Hayek 1952a) outlined a theory of mind in which mind is emergent from neural connections. These connections form at any moment a classificatory system. Ordinary thinking and scientific explanation are classificatory activities. Hayek argued that “any apparatus of classification must possess a structure of a higher degree of complexity than is possessed by the objects which it classifies; and that, therefore, the capacity of any explaining agent must be limited to objects with a structure possessing a degree of complexity lower than its own” (1952a, p. 185). It follows by logical necessity, for Hayek, that “the human brain can never fully explain its own operations” (1952a, p. 185), because it would have to be more complex than itself to do so. Later, Hayek would associate this argument with two mathematical results: Cantor’s diagonal argument and Gödel’s incompleteness theorem (1967b, p. 61, n. 49 and p. 62). Hayek applied his notion of the “explanation of the principle” (1952a, p. 182) to the social sciences as well (Hayek 1967b). An explanation of the
principle is one in which only general features of some phenomenon are accounted for. When the phenomena studied are sufficiently complex, this is all that can be hoped for. Thus, Hayek resolved the problem of reflexivity by identifying logically necessary and insuperable limits of explanation.
Earlier we saw Lee Ellis argue for the chemical castration of young men with “crime-prone genes.” Writers who, like Ellis (2008), view experts as reliable and nonexperts as powerless do not usually subject their theories to a reflexivity requirement. Ellis’s essay illustrates, however, the importance of the reflexivity requirement that all agents of the system be modeled. He models persons with “crime-prone genes,” but not the experts who would administer sterilization policies. He consequently wishes to place discretionary power in the hands of persons unlikely to exercise such power with the Solomonic disinterest and wisdom his policies would require even under the assumption that his eugenic ideas are correct. In the theory of experts, as in all of social science, all agents must be modeled if we are to minimize the risk of proposing policies that would require some actors to behave in ways that are inconsistent with their incentives or beyond human capabilities.
THE WELL-INFORMED CITIZEN
Nonexperts, I have said, may be powerless or empowered. If experts are defined by their expertise, then it may be hard to conceive of an empowered nonexpert unless that nonexpert has some approximation to the expert’s expertise. This sort of thinking crops up in discussions of the “well-informed citizen,” as with Schutz (1946). The well-informed citizen, then, has a natural place in theories that view experts as fundamentally reliable and nonexperts as empowered. If experts are defined by their expertise and seen as fundamentally reliable, then nonexperts can only be empowered by knowledge. Knowledgeable persons who are not experts are, by definition, well-informed citizens. If experts are unreliable and nonexperts are empowered, the well-informed citizen might have a role. Levy and Peart (2017), discussed in this section, is an example. The well-informed citizen would seem to have no role to play in a theory in which nonexperts are powerless. As I briefly discuss in this section, however, Turner (2003) attributes to Karl Pearson the desire to train citizens to be “junior scientists.”
Schutz (1946) distinguishes the “expert,” the “well-informed citizen,” and the “man on the street.” The “man on the street” is entirely unreflective. He applies “recipes” in life “without questioning” them (p. 465). The well-informed citizen stands between the expert and the man in the street.
While not an expert, “he does not acquiesce in the fundamental vagueness of a mere recipe knowledge or in the irrationality of his unclarified passions and sentiments. To be well informed means to him to arrive at reasonably founded opinions in fields which as he knows are at least mediately of concern to him although not bearing upon his purpose at hand” (p. 466, emphasis in original).
These three figures are “ideal types.” Schutz was perfectly aware that no one is a pure man on the street in all domains. The most uncurious soul will usually have at least some opinions that go beyond mere acceptance of recipes, even though many of them may not be entirely “reasonably founded.” The social distribution of knowledge implies that no one can be an expert in all domains, nor well informed in all fields. Schutz places his hopes in well-informed citizens to balance the competing claims of different types of experts and impress their well-informed views upon the man in the street. “It is the duty and the privilege,” he concludes, “of the well-informed citizen in a democratic society to make his private opinion prevail over the public opinion of the man on the street” (p. 478). In this essay, Schutz views a part of the citizenry, those who are “well informed,” as empowered, and the rest as relatively powerless.
Barber (2004, p. xii) interprets Schutz (1946) to say “that the opinion of the well-informed citizen ought to take precedence over that of experts and the uninformed.” This way of putting it may overstate the power of the well-informed citizen to second-guess the expert. Schutz says, “[I]t is the well-informed citizen who considers himself perfectly qualified to decide who is a competent expert and even to make up his mind after having listened to opposing expert opinions” (p. 466). The well-informed citizen’s opinion is always derived from the more informed opinions of experts. Each expert, however, is locked within the frame of their specialty. “The expert starts from the assumption not only that the system of problems established within his field is relevant but that it is the only relevant system” (Schutz 1946, p. 474). This implies both a certain narrowness and disengagement from ethical norms. We “can expect from the expert’s advice merely the indication of suitable means for attaining pregiven ends, but not the determination of the ends themselves” (p. 474). The well-informed citizen weighs and balances the competing frames of diverse experts. In this sense we may say, with Barber, that the well-informed citizen’s opinion “ought to take precedence over that of experts.” But the well-informed citizen is always choosing among expert opinions. And it is in this sense that the well-informed citizen’s opinion is always derived from the more informed opinions of experts.
With Schutz, the well-informed citizen disciplines the man on the street, thereby helping to ensure that the right expert opinion will “prevail.” Jasanoff (2003) gives the well-informed citizen the opposite role. She says: “expertise is constituted within institutions, and powerful institutions can perpetuate unjust and unfounded ways of looking at the world unless they are continually put before the gaze of laypersons who will declare when the emperor has no clothes” (2003, p. 398). Schutz seems to take it for granted that the well-informed citizen will support the expert. Jasanoff seems to take it for granted that the well-informed citizen will oppose the expert.
It is difficult to adjudicate the competing opinions of Schutz and Jasanoff, because it is not clear when public opinion reflects the well-informed citizen and when it reflects the man on the street. One observer will say that members of the public who resist, say, Darwinism are uninformed. By definition, the well-informed citizen supports Darwinism. Another observer might say that only well-informed citizens understand intelligent design well enough to exercise a Jasanoffian resistance to the spurious experts trying to impose dubious Darwinism on innocent schoolchildren. (Admittedly, Blount et al. 2008 has made the example of intelligent design outdated.) Our assessment of which citizens are “well informed” may not be separable from our assessment of the expert opinion in question. It seems unlikely, moreover, that any very simple statement about the role of the well-informed citizen will be satisfactory for a theory of experts. Social processes of opinion formation are too complex to reduce to the sort of simple formulae we have seen from Schutz (1946) and Jasanoff (2003).
Turner (2003, pp. 97–128) discusses authors who in one way or another address the role of experts through “citizen competence.” He attributes to Dewey and Pearson ideas that amount to making the competent citizens “junior scientists.” Such figures were looking not so much to control experts as to make society more scientific. Turner extols the more skeptical views of James B. Conant, who thought citizens needed to know not science, but (in Turner’s words) “how science works” (p. 121). Conant’s educational reforms were meant, Turner reports, “to produce members of a liberal public” (2003, p. 121). Conant did not think a well-informed citizenry would be sufficient to control experts, however.
Levy and Peart (2017) suggest that randomly chosen citizens can play the role often assigned to well-informed citizens. Randomness replaces informedness. As we have seen, Levy and Peart (2017) emphasize transparency, which tends to empower the laity. And they offer the relatively concrete proposal to mitigate expert bias by “extend[ing] the jury system to
that of regulation.” They say: “Instead of appointed regulatory bodies with their experts making decisions, where the only people with a voice in the matter have a particular interest in the issues, we propose that decisions be made by people randomly selected, who have the issues explained to them by contending experts” (2017, p. 242). They do not discuss how this system might be put into practice. They stipulate that selection shall be random without specifying a social mechanism that will generate such a result. In this regard their proposal suffers the same infirmity as Sanford Levinson’s call for random selection of delegates to a constitutional convention (Devins et al. 2016, pp. 242–3). Levy and Peart (2017) compare their proposal to jury selection, which they describe as “random.” But the recent ruling of the US Supreme Court in Foster v. Chatman (14–8349, May 23, 2016) illustrates the complaint that jury selection is frequently nonrandom and, in particular, racially biased. The court ruled in this case that the prosecution had excluded jurors because they were black. The court ruled against this form of nonrandomness in this particular case. Unfortunately, it does not seem likely that the decision in this case will make randomness readily attainable in the future. The trial in question was held in 1987. Thus, the violation in question persisted for almost thirty years. Moreover, at least one journalist reports, “The decision was narrowly focused on Mr. Foster’s jury selection and is unlikely to have a broad impact. Evidence of the sort that surfaced in Mr. Foster’s case is rare, and the [precedent on which the decision was based] is easy to evade” (Liptak 2016). Randomness can be a good thing, but it can also be hard to achieve. Colander and Kupers (2014, pp. 173–4) provide an interesting twist on the theme of the well-informed citizen. In the context of economic policy, they define the “problem of experts” as “the problem that there is no one to keep the experts humble.” They explain: “It’s not that they aren’t experts; it’s that the problems they are facing are so complex that no one fully understands them.” They “advocate . . . the idea of educated common sense. Educated common sense involves an awareness of the limitations of our knowledge that is inherent in” the conceptual framework enabled by modern complexity theory, including the mathematical theories of complexity associated with the Santa Fe Institute. “The complexity policy revolution involves not merely changing theory around the edges; it involves experts changing the way they think about models and policy.” They say, “A central argument of this book is that with complexity now being several decades old as a discipline (and much older as a sensibility), policy that ignore this frame fails the educated common sense standard.” Thus, the expert should arm themself with the sort of “educated common
sense” Schutz associated with the well-informed citizen. Instead of waiting to be told, à la Jasanoff, that they are emperors with no clothes, the experts must learn enough humility to know and confess, indeed vigorously affirm, their own nakedness.
DEMOCRATIC CONTROL OF EXPERTS
The idea of democratic control of experts is related to that of the well-informed citizen. The idea is often that we will gain from experts but somehow control them through democratic means. As with the well-informed citizen, the general idea of democratic control of experts probably best fits theories with empowered citizens, but surprising combinations of ideas may sometimes be found. I will briefly discuss Wilson (1887), who views nonexperts as fundamentally powerless. And yet the principle of democracy will somehow ensure that experts serve the commonweal. The problem of expert control frequently arises in the context of government policy built on expert advice. Merton (1945) and Turner (2001) are examples. In this context, democracy is sometimes viewed as a safeguard against problems with experts. Loosely, the reason may be that the democratic electorate employs the experts, who are somehow bound, therefore, to serve the people. Or, loosely, the reason may be that the electorate somehow decides who counts as an expert or what counts as expertise, thereby retaining ultimate control. Jasanoff gives the first sort of reason when she says that “public engagement is needed in order to test and contest the framing of the issues that experts are asked to resolve” and that “participation is an instrument for holding expertise to cultural standards for establishing reliable public knowledge, standards that constitute a culture’s distinctive civic epistemology.” She gives the second sort of reason when she invokes, as we have seen already, “the gaze of laypersons who will declare when the emperor has no clothes” (2003, pp. 397–8). Jasanoff makes two further defenses of “participation” in the same passage. In a strictly normative remark, she says: “in democratic societies . . . all decisions should as far as possible be public” (397). And finally, she appeals to a notion of the well-informed citizen when she says, “participation can serve to disseminate closely held expertise more broadly, producing enhanced civic capacity and deeper, more reflective responses to modernity” (p. 398). As we have seen, Merton (1945) seems to view the expert’s position of “dependency” as sufficient to ward off at least the worst abuses of experts. Turner (2001) takes the second approach when he says, “Expertise is a
deep problem for liberal theory only if we imagine that there is some sort of standard of higher reason against which the banal process of judging experts as plumbers can be held, and if there is not, it is a deep problem for democratic theory only if this banal process is beyond the capacity of ordinary people” (p. 146). Turner has confidence in the robustness of democracy to the problem of experts. But, invoking James B. Conant, he dismisses the idea of “democratic control of science” as a “dangerous illusion” (2003, p. vii, emphasis added). Wilson (1887) acknowledged fears of “an offensive official class, – a distinct, semi-corporate body with sympathies divorced from those of a progressive, free-spirited people, and with hearts narrowed to the meanness of a bigoted officialism” (p. 216). But such fears, he assures his readers, neglect the vital principle “that administration in the United States must be at all points sensitive to public opinion” (p. 216). Wilson stipulates that the officialdom shall have and preserve a democratic spirit. He says, “The ideal for us is a civil service cultured and self-sufficient enough to act with sense and vigor, and yet so intimately connected with the popular thought, by means of elections and constant public counsel, as to find arbitrariness or class spirit quite out of the question” (p. 217). Wilson’s response to the fear of bureaucracy is breathtaking. I can buy a drunk a bottle of whiskey and stipulate that he will use it only for medical emergencies. My stipulation will not ensure such restraint, however. In this analogy, the drunk is Wilson’s “corps of civil servants prepared by a special schooling and drilled, after appointment, into a perfected organization, with appropriate hierarchy and characteristic discipline” (p. 216) and the whiskey is power.
DISCUSSION
Discussion is a recurrent theme in the literature on experts. The well-informed citizen is usually a relatively unimportant figure in theories that emphasize discussion. If discussion informs the citizen, then we do not need to invoke the prefabricated figure of the well-informed citizen. Theorists who emphasize discussion often think that discussion empowers the public. They tend to fit, therefore, in the right-hand column of Table 2.1. Both theories that view experts as reliable and theories that view experts as unreliable may emphasize discussion. Collins and Evans (2002) is an example of the former and Jasanoff (2003) of the latter. Several important names are associated with the role of discussion in politics, including John Stuart Mill, John Rawls, and Jurgen Habermas.
Both Durant (2011) and Turner (2003) treat Habermas as an important source of thinking about experts. Durant (2011) places weight on Rawls as well. Levy and Peart (2017, p. 35) draw our attention to a passage in Mill’s On Liberty addressing the theme of contesting experts: I acknowledge that the tendency of all opinions to become sectarian is not cured by the freest discussion, but is often heightened and exacerbated thereby; the truth which ought to have been, but was not, seen, being rejected all the more violently because proclaimed by persons regarded as opponents. But it is not on the impassioned partisan, it is on the calmer and more disinterested bystander, that this collision of opinions works its salutary effect. Not the violent conflict between parts of the truth, but the quiet suppression of half of it, is the formidable evil. (Mill [1869] 1977, p. 259)
In Chapter 8 I will discuss the important model of Milgrom and Roberts (1986), which employs a logic close to Mill’s to show how competing experts may lead a disinterested party to the “full-information opinion.” Frank Knight attempted to use the idea of discussion to ensure that the economic expert does not impose solutions on others. “For Knight,” Levy and Peart (2017, p. 48) explain, “the role of the economic expert was twofold: 1. Economic experts take the values (norms) of the society as given; 2. Proposals for change should be submitted for discussion in a democratic process.” “But discussion,” Knight says, “must be contrasted with persuasion, with any attempt to influence directly the acts, or beliefs, or sentiments, of others” (quoted in Levy and Peart 2017, p. 48). Citing Knight and the broader “discussion tradition” they identify within economics, Levy and Peart argue that experts should be “constrained by discussion and transparency” (2017, p. 7). Earlier, we saw that Jasanoff (2003) defends “participation,” which is both a democratic idea and a discussion idea. Insisting that the “project of looking at the place of expertise in the public domain” is “a project in political (more particularly democratic) theory,” Jasanoff invokes “intense and intimate science-society negotiations” (2003, p. 394). Durant (2011) discusses “the debate between Sheila Jasanoff and Brian Wynne, on one side, and Harry Collins and Robert Evans, on the other” (p. 691). He associates “the approach of Collins and Evans to [John] Rawls’s notion of public reason, and more generally to a form of liberal egalitarianism,” and he associates “the theorizing of Jasanoff and Wynne to the contemporary project of identity politics, and more generally to [Jurgen] Habermas’s discourse ethics” (p. 692). Levy and Peart (2017) identify a “discussion tradition” in economics. They view themselves as a part of this tradition, and they view the tradition
as vital to their theory of experts and expertise. Levy and Peart (2017) associate the discussion tradition with Adam Smith and J. S. Mill among the classical economists and Frank Knight, James Buchanan, Vernon Smith, and Amartya Sen among more recent figures. Economists of this tradition “have expounded upon the rich moral and material benefits associated with discussion – benefits that contribute to a well-governed social order” (p. 30). While this tradition did not produce an economic theory of experts, it did provide some anticipations of themes relevant to the theory. Levy and Peart (2017) do not include Bernard Mandeville in the discussion tradition, and for good reason. For discussion to get us to the truth, according to this tradition, it must be constrained. Levy and Peart explain: “The requirements for discussion, as these economists used the term, are stringent. Reciprocity and civility are needed and so, too, is real listening and moral restraint. In this tradition one accepts the inevitability of an individual ‘point of view’ and the good society is one that governs itself by means of an emergent consensus among points of view” (p. 30). Mandeville could not have taken such constraints seriously. Discussion, for Mandeville, was always an occasion for deception of self and others. We are taught the “Habit of Hypocrisy, by the Help of which, we have learned from our Cradle to hide even from our selves the vast Extent of Self-Love, and all its different Branches” (1729, vol. I, p. 140). Indeed, “it is impossible we could be sociable Creatures without Hypocrisy” (1729, vol. I., p. 401). Hypocrisy and self-deception are too deeply ingrained in our nature to hope that discussion will lead to truth even when it is subject to “stringent” moral constraints. Nor can we hope for such constraints as “real listening” to be honored in practice. As we shall see in Chapter 6, however, Mandeville did think that we could slowly acquire “good Manners,” which do entail civility and reluctant reciprocity. We can learn to get along, and accumulated experience can lead to good practices such as skillful sailing and good manners. But truth is more elusive. Mandeville’s deep doubts about our ability to see the truth raise the question of how he could pretend that he had somehow overcome universal hypocrisy and self-deceit to speak the truth of human nature and social life. Do not Mandeville’s own principles invite us to question his motives and doubt his arguments? It is hard to construct an explanatory model of social processes that can be applied equally to the theorist and others. We have seen that it is hard to put the theorist in the model and that such reflexivity is required if the theorist is to avoid modeling themself as special
and somehow above ordinary persons, at least when the model does not naively assume that we are, all of us, virtuous and our thoughts and motives self-transparent. As I will argue in Chapter 6, Mandeville’s Fable was satiric. And I claimed earlier that satire is one vehicle to avoid paradoxes of self-reference. Mandeville did not put himself in the model. Rather, he invites the reader to put him in the model.
MARKET STRUCTURE
In an economic theory of experts, such as I am attempting to outline in this book, the question of market structure arises. Both the reliability of experts and the power of nonexperts may depend on market structure. While there are many market forms, we may broadly distinguish competition and monopoly. The theorist must then choose whether it is better that experts enjoy a monopoly or be subject to competition. This rough accounting is inadequate to the full complexity of events. But it helps us to organize theoretical treatments of experts. The economic concept of “competition” has the potential to create misunderstandings and must, therefore, be interpreted with caution. We have seen Berger and Luckmann (1966) emphasize the dangers of monopoly in the market for experts. Earl and Potts (2004) discuss the “market for preferences” and Potts (2012) discusses “novelty bundling.” They consider businesses with expertise in new technologies or fashions. These businesses help less informed households to cope with novelty. The experts educate consumers on the possibilities and propose different combinations to them. Potts (2012, p. 295) illustrates novelty bundling with “fashion magazines such as Vogue,” which present readers with novel combinations of fabrics, clothing items, hairstyles, and so on. Earl and Potts (2004, p. 622) point to “product review websites such as Amazon. com.” Consumers do not know what they want when they are considering novel items and novel combinations. The experts help them form their (low-level) preferences. Thus, we may describe the market as a market for preferences. The notion of “novelty intermediation” in Koppl et al. (2015) builds on Earl and Potts (2004) and Potts (2012). I briefly commented earlier on Milgrom and Roberts (1986). In their model, competition between “strongly opposed” experts is beneficial because it allows a disinterested party to reach the “full-information opinion.” Levy and Peart are oddly ambivalent on competition among experts. They clearly revile monopoly in expert opinion (Levy and Peart 2006). They note “the messy but perhaps salutary effects of competition in
expertise” in a nineteenth-century British trial deciding whether information on contraception might be banned as obscene (Levy and Peart 2017, p. 108). And yet they favorably quote Frank Knight lamenting the supposed tendency of “competition [among economists] for recognition and influence” to “take the place of the effort to get things straight.” Knight sniffs at economists “hawking their wares competitively to the public by way of settling their ‘scientific’ differences” (Knight 1933, pp. xxvii–xxviii, as quoted in Levy and Peart 2017, p. 186). In this volume, I express an unambiguous preference for “competition” in the market for expert opinion. I have tried to emphasize, however, the importance of market structure. Market structures that we might plausibly dub “competitive” may be poor safeguards against expert failure if the market structure lacks rivalry, “synecological redundancy” (defined in Chapter 9), and free entry. Thus, Levy and Peart’s seeming ambivalence may reflect the importance of structural differences between various markets in expert opinion that might all be considered “competitive.” It seems fair to say that the themes of competition and market structure are undertheorized in Levy and Peart’s work on experts. In Chapters 8–11 I give an “economic” theory of experts and expert failure in which market structure is central.
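The disclosure logic credited above to Milgrom and Roberts (1986) can be suggested with a small sketch. What follows is a deliberately stylized toy and not the Milgrom and Roberts model itself: the “facts,” the two hired experts, and the naive decision maker are all invented for the example. The point is only that when the experts’ interests are strictly opposed, their self-serving disclosures jointly reveal the whole record, so a disinterested party can reach the full-information opinion.
```python
import random

def competing_experts_sketch(num_facts=12, seed=0):
    """Stylized sketch: each verifiable fact pushes a decision up or down.
    Each hired expert discloses only the facts favorable to her client, but
    because the experts' interests are strictly opposed, every fact favors
    one side or the other, so the two briefs jointly disclose everything."""
    rng = random.Random(seed)
    facts = [rng.randint(-5, 5) for _ in range(num_facts)]  # signed evidence

    pro_brief = [f for f in facts if f > 0]    # expert paid to push the verdict up
    con_brief = [f for f in facts if f <= 0]   # expert paid to push the verdict down
    disclosed = pro_brief + con_brief          # all the decision maker gets to see

    full_information_opinion = sum(facts) / len(facts)
    opinion_after_contest = sum(disclosed) / len(disclosed)
    return full_information_opinion, opinion_after_contest

full, contested = competing_experts_sketch()
print(full == contested)  # True: opposed disclosures add up to the full record
```
Chapter 8 discusses the actual model; the sketch is meant only to make the intuition behind “strongly opposed” experts concrete.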
INFORMATION CHOICE IN THE CONTEXT OF THE LITERATURE ON EXPERTS
Before closing this chapter I should briefly place “information choice,” as I will call the economic theory of experts, in the context of the larger literature on experts. We have already seen that it probably best fits in the “unreliable-empowered” category of the basic taxonomy of Chapter 2. In this section, I comment on the definition of “expert” and briefly lay out my stance on each of the themes discussed in this chapter.
Definitions
If knowledge is dispersed, then everyone has specialized knowledge peculiar to their place in the division of labor. If experts are defined by their expertise, we are all “experts.” It is hard to see what intellectual work can be done by a definition of “expert” that fails to distinguish between experts and nonexperts. I define “expert” not by expertise, but as anyone paid for their opinion. I am not proposing a theory of expertise, but an economic theory of experts.
Power
In my theory expert power is to be feared because it makes experts less reliable. At least two conditions give experts undue power. First, experts may have a kind of monopoly in which they become “the officially accredited definers of reality” (Berger and Luckmann 1966, p. 97). Second, they may choose for nonexperts, rather than merely advising them. I seek mechanisms tending to increase the diversity of expert opinions and reduce, thereby, the monopoly power of experts. And I generally prefer that experts be in a merely advisory role that preserves the autonomy of nonexperts. I think Foucault was right to see knowledge imposition as a power issue. I oppose the rule of experts, which entails knowledge imposition. In my closing remarks I will say that the problem of experts mostly boils down to the question of knowledge imposition. Like Foucault, I think when some people can impose unitary knowledge schemes on other people, the people imposed upon are oppressed in at least some degree. Unlike Foucault, however, I am a social scientist. I want to know how institutions work. Thus, my own analysis does not run in terms of “rationalities” or “discourses” or “disciplines” and their changing historical forms. My analysis is about who does what and why they do it. I feel bound to ensure, to the best of my ability, that the actions I impute to agents in my theory would be understandable to the real-world actors thus modeled (Schutz 1943, p. 147; Machlup 1955, p. 17). Foucault, instead, wished to “Refer the discourse not to the thought, to the mind or to the subject which might have given rise to it, but to the practical field in which it is deployed” (1972, p. 235).
Ethics
I forcefully repudiate any suggestion that experts are more virtuous than nonexperts. I thus advocate a kind of ethical parity between experts and nonexperts. Good and substantive ethical codes tend to improve expert opinion, but are not powerful mechanisms to that end. Market competition has much more potential to improve outcomes.
Reflexivity
I view reflexivity as a central issue. Following Hayek and others, I think that reflexivity implies limits to explanation. In particular, reflexivity
implies that we cannot have the sort of causal theory of ideas that Bloor (1976) and others have attempted. Like Levy and Peart (2017) I place great importance on “putting the theorist in the model.” The theory, in other words, should not require the theorist to implicitly model themself as motivationally, cognitively, ethically, behaviorally, or in any other way different than the agents in the model. The theorist is but one more ant in the anthill.
The Well-Informed Citizen
My theory gives very little weight to any suggestion that the well-informed citizen has a special role to play in either disciplining experts or helping expert opinion prevail with the man on the street.
Democratic Control of Experts
Democracy is a vital political principle and a bulwark against tyranny. But the logic of public choice theory (Buchanan and Tullock 1962) seems to suggest that democratic control of experts is unlikely to be very effective. The theory of regulatory capture (Stigler 1971; Posner 1974; Yandle 1983) seems to bolster such pessimism. Thus, in contrast to many other theoretical treatments of the problem of experts, I give little weight to the idea that the principle of democracy can somehow constrain experts from abusing their power. Democracy is not an effective check on expert power, error, or abuse. Rather, the rule of experts is inconsistent with pluralistic democracy.
Discussion
As far as I can tell, the best opinion does not necessarily prevail in the market for ideas. But free and open discussion is nevertheless a bulwark against multiple evils, including expert failure and the abuse of power. No encomiums to discussion, however, can prevent the rule of experts from inducing expert failure. Thus, market structure is the fundamental issue rather than, say, the proper ethics of discussion.
Market Structure
I emphasize the importance of market structure for the regulation of expert opinion. Cowan and I have said that “Competition turns wizards into teachers” (Koppl and Cowan 2010, p. 254). I take a generally favorable
view of “competition.” Unfortunately, words such as “competition” are subject to misunderstanding. The term “competition” covers very different market structures with very different epistemic consequences. Empty invocations of “competition” are no substitute for analysis. The word “competition” seems to suggest to many scholars ideas that have little or nothing to do with the sort of model I have in mind. In Chapter 5 I will review some of the economic concepts used in Part III of this study along with some common misunderstandings of them.
CLOSING REMARK
In this and the previous two chapters I have tried to show that there is a literature on experts. This literature sprawls across many scholarly disciplines and may easily seem to lack structure. Scholars in science and technology studies have made important contributions to this literature, and they cite and discuss one another’s work. The sociological and methodological literature on science is of relevance to the economics and sociology of experts. Butos and Koppl (2003) review this literature, which includes Bloor (1976), Kitcher (1993), Kuhn (1970), Latour (1987), Latour and Woolgar (1979), Merton (1937), Pickering (1992), and Polanyi (1962). This literature considers research science, however, rather than experts in general. There is less coherence and no common conversation in the broader literature I have tried to identify. Scholars working along similar lines do not cite one another. Thus, Turner (2014) does not cite Peart and Levy (2005), and Levy and Peart (2017) in turn do not cite Turner (2001). The simple taxonomy of Table 2.1 may be helpful in creating some structure and coherence for this literature. In this chapter I have identified several recurrent overlapping themes arising in the literature on experts. Besides the obvious issue of how to define “expert,” these common themes are power, ethics, reflexivity, the well-informed citizen, democratic control of experts, discussion, and market structure. For each theme I have tried to give at least some indication of what choices or strategies might be available for addressing that theme within the context of a theory of experts. This and the previous two chapters consider the history of thought on experts. Chapters 8–11 lay out a theory of experts and expert failure. Hayek’s notions of spontaneous order and dispersed knowledge are central to this theoretical discussion. Thus, in the next three chapters I review these foundational concepts. I give Hayek’s idea of dispersed knowledge a
lengthy discussion. I think this idea is every bit as important as Hayek claimed. But it is easily reduced to the banal and inconsequential remark that “knowledge is dispersed.” I have tried to show that Hayek’s insight on knowledge went far beyond this banality, that in general it was not clearly understood before Hayek raised the issue, and that even since then skilled scholars have often fallen into error by an inadequate recognition of the nature of the problem of dispersed knowledge. Others have more sophisticated notions, and yet retain a fundamentally hierarchical view of knowledge that is, in my view, deeply mistaken.
PART II FOUNDATIONS OF THE THEORY OF EXPERTS
5
Notes on Some Economic Terms and Ideas
This book outlines an economic theory of experts. It uses the economic concepts of “spontaneous order” and “competition.” The concept of spontaneous order is important for a satisfactory understanding of the division of knowledge as I develop that concept in the next two chapters. The concept of “competition” is important for the theory of experts and, especially, the theory of expert failure. Unfortunately, the concepts of spontaneous order and competition are fluid. They get different meanings from different economists. Non-economists sometimes interpret these concepts in ways that most economists do not. It can be challenging, therefore, to use these concepts and, especially, these terms to communicate ideas effectively. Rather than trying to invent new terms that are likely themselves to be misunderstood, I will try to clarify their meanings for the broad tradition of “mainline” economics, which is the intellectual context for my theory. Peter Boettke coined the term “mainline economics” to identify a tradition of economic thought encompassing Adam Smith, F. A. Hayek, Vernon Smith, and others. This tradition of thought emphasizes “limits to economic analysis, and efforts at economic control” (Boettke 2012, pp. 383–5). And in this tradition, the notions of spontaneous order and competition are important.
SPONTANEOUS ORDER
The idea of “spontaneous order” is more commonly known by the label “invisible hand.” But the term “invisible hand” is sometimes taken to be a religious idea or to connote a mystification of markets. It may therefore cause confusion or misunderstanding. I will use the term coined by F. A. Hayek: “spontaneous order.” It is the idea that there are systematic but unintended consequences of human action. An explanation of a spontaneous order may
be called an “invisible-hand explanation” (Ullmann-Margalit 1978). The “mainline” theory of markets views them and their scientific regularities as spontaneous orders. As I will briefly note below, this cannot be said of general equilibrium theory as it was first proposed by Leon Walras (1874–77). But in more customary interpretations of the theory, it imperfectly describes Adam Smith’s invisible hand. In the interpretations of neoclassical economics most common within the discipline, the theory of markets describes a spontaneous order. To introduce the concept, I will use a silly example concerning the behavior of spectators at a game of American football. Later I will briefly consider the more serious examples of money and the division of labor. But the silly example may be more transparent and thus useful for introducing the concept, which has so often been maligned and misconstrued. In American football, everyone in the stadium will stand up when the “long bomb” is thrown. The “long bomb” is a play in which the ball is thrown a great distance down the field. It is an exciting event, and the spectators all stand up when it happens. The connection is quite regular and consistent: When the long bomb is thrown, everyone stands up. It is a scientific regularity, though a trivial one. We cannot adequately explain this phenomenon without the idea of systematic but unintended consequences of human action. Without this idea of spontaneous order and assuming away supernatural explanations, we would probably have to explain the regularity in one of two ways. First, we could explain it as “natural.” We would cast about for physical causes of the regularity or, perhaps, some sort of biological explanation. Second, we could explain it as “artificial.” We would construe the regularity as the product of a plan. Neither explanation works. The phenomenon is not “natural.” There are no springs in the seats to thrust fans upward when the long bomb is thrown. Nor are there invisible threads descending from heaven to pull us up at the appropriate moment. It does not seem possible to explain it as a “natural” phenomenon if “nature” excludes human action. But it is not an “artificial” phenomenon either, if “artificial” means that it was planned ahead of time. The spectators did not receive instructions to stand up when the long bomb is thrown. There was no meeting ahead of time during which the fans agreed on a common plan for standing during the long bomb. No orders to stand were barked from above. And yet they stand. We can explain the long-bomb phenomenon as a spontaneous order. When the long bomb is thrown, spectators close to ground level must stand up to see the ball’s progress through the air. They want to see the
action. When the long bomb is thrown, they cannot see the action without standing up. When spectators in the first row stand up, they obscure the view of spectators in the second row, who must now stand if they are to follow the action. When spectators in the second row stand up, they obscure the view of spectators in the third row, who must now stand if they are to follow the action. And so on to the highest row. The phenomenon is perfectly regular. It is regular enough to look like it was planned. But it was not the product of human design. In this sense, it is like natural phenomena. While the phenomenon was not planned, it was the result of human action. In this sense, it is like artificial phenomena. It is a systematic but unintended consequence of human action. It is a spontaneous order. Importantly, the long-bomb phenomenon may exist without the persons involved being aware of it. Each person will realize when everyone is standing and when everyone is sitting. But they may not be aware of the regularity whereby the long bomb causes everyone to stand up. Thus, we have a law of stadium behavior that the people follow without knowing it. Many thinkers (and most undergraduate students) seem to believe that any “laws of economics” would have to be first devised and promulgated and then followed. If there is a law, there is a lawgiver. But the “laws” of a spontaneous order generally function even when nobody is aware of them. Thus, an increase in the supply of a commodity causes its price to fall in Homeric Greece no less surely than in nineteenth-century London. My explanation of the long-bomb phenomenon is also subject to empirical test. If it were a topic worthy of study, researchers could go to stadiums and see whether the correlation I have observed is robust. (Perhaps I have generalized too freely from a small number of observations.) They could observe whether fans in the lower rows stand up before fans in the upper rows. They could become participant observers and ask their fellow spectators, “Why did you stand just now?” And so on. Any claim that invisible-hand explanations must be untestable “just-so stories” is mistaken. Since Kuhn (1970), philosophers and methodologists have become more sensitive to the difficulties of “falsification” and, more generally, empirical control of theory. Invisible-hand explanations are no exception. But they are not less subject to test and criticism in general than other scientific theories in the natural and social sciences. The evolution of money and the division of labor are more serious examples of spontaneous order. As Carl Menger (1871) has noted, John Law (1705) seems to have been the first scholar to explain the existence of money by evolution rather than
agreement. Menger’s own exposition is a standard example of an invisible-hand explanation. The story begins with barter. The evolutionary process starts when one trader innovates by engaging in indirect exchange. Once that has happened, many individuals have an incentive to expand their behavioral repertoire to include indirect exchange. Even if all traders in the economy practice indirect exchange, there may be no generally accepted medium of exchange. There may be no money. In this situation, many traders will have an incentive to change their exchange medium or media. They will have an incentive to move away from less widely demanded goods to more universally demanded goods. In other words, they have an incentive to move to the goods that are more moneylike. This process is not complete until some goods emerge as generally accepted exchange media. They are moneys. The process may produce more than one money or only one generally accepted medium of exchange. Menger’s theory shows how money could emerge without an overall plan or agreement to create money. The underlying logic is that indirect exchange is often better than quid pro quo, and the acceptability of exchange media is self-reinforcing. Several instances in which cigarettes become money seem to fit the pattern. Radford’s (1945) description of the economics of a World War II prisoner of war camp includes the use of cigarettes as money. Senn (1951) describes how currency restrictions in postwar Germany led to the use of cigarettes as money. For about eighteen months cigarette money was a minor currency used mostly to support trade between “Allied nationals and a relatively small number of Germans” (p. 332). In Communist Romania, Kent cigarettes were used as money, especially for bribes (Leary 1988) or the purchase of items with “recurrent shortages” such as “meat, produce and energy” (Lee 1987). Lee records one person explaining, “If I needed to seek the counsel of a lawyer, I would pay him off in Kents.” In these and other instances restrictions prevented the use of ordinary currency, and there seems to have been a Mengerian evolutionary process converging on cigarettes, or one particular brand of cigarette, without any prior plan or willful coordination among the people involved. Ludwig von Mises says the “main deficiency” of the “doctrine” that money was invented is “the assumption that people of an age unfamiliar with indirect exchange and money could design a plan of a new economic order, entirely different from the real conditions of their own age, and could comprehend the importance of such a plan” (1966, pp. 402–3). To anticipate the systemic benefits of money in a world without money would be a superhuman epistemic feat.
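The self-reinforcing logic of Menger’s account can be suggested with a toy simulation. The sketch below is a minimal illustration under invented assumptions, not a faithful model of Menger’s process: traders simply gravitate toward whatever exchange medium they observe to be most widely carried by the traders they happen to meet, so acceptability feeds on itself and holdings concentrate without any prior plan or agreement.
```python
import random
from collections import Counter

def menger_sketch(num_traders=300, num_goods=5, rounds=20000, sample_size=5, seed=2):
    """Toy illustration of self-reinforcing acceptability. Each trader carries
    one good as an exchange medium and, whenever they reconsider, adopts the
    medium most often carried by a handful of randomly met traders. No trader
    intends to create money, yet holdings typically converge on one good."""
    rng = random.Random(seed)
    medium = [rng.randrange(num_goods) for _ in range(num_traders)]  # initial holdings

    for _ in range(rounds):
        trader = rng.randrange(num_traders)
        met = [medium[rng.randrange(num_traders)] for _ in range(sample_size)]
        medium[trader] = Counter(met).most_common(1)[0][0]  # copy the most common medium

    return Counter(medium).most_common()

print(menger_sketch())  # typically one good dominates and the rest become marginal
```
The cigarette cases just described can be read as the same kind of convergence operating under restrictions on ordinary currency.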
Importantly for this volume, the division of labor is a spontaneous order. The division of labor is the system of specialization and exchange. Each actor specializes in a relatively narrow set of tasks, producing a surplus. Each actor then trades their surplus for some of the surpluses of others. We specialize and exchange. Specialization makes us more productive. Trade lets us get the advantage of that increased productivity. It is almost as if some mastermind had divided the social production process into subtasks and allocated those tasks across people and groups of people such as firms. But the division emerged without a Great Divider. Instead, each actor (whether a person or an organized group with a common purpose) tends to gravitate toward tasks they do relatively best. Economists use the term “comparative advantage” for the idea of being relatively best at something. There is no guarantee that every actor will find their comparative advantage. But if enough actors move to tasks that they are relatively good at, the productive activities of all the different actors in society and, indeed, the global economy will tend to have a rough and ready compatibility. If too many of us think ourselves skilled singers, the wages of singers will be low and some of us will turn to other employments. Those with the greatest incentive to exit will tend to be relatively poor singers or, perhaps, good singers who nevertheless have other skills in relatively abundant demand. In other words, those whose comparative advantage lies elsewhere have the greatest incentive to exit the industry. If too few of us sing for a living, the high wages of singers will attract entrants. Those with the greatest incentive to enter will tend to be relatively good singers or, perhaps, mediocre singers whose other meager skills are not greatly demanded. In other words, those with a comparative advantage in singing have the greatest incentive to enter the industry. Smith (1776) argued that the division of labor causes comparative advantages for individuals, not the other way around. Smith may have been about right, notwithstanding some innate individual differences. Most of the economically important differences between thee and me probably come from our different experiences and opportunities in life. We start the same. We enter the labor market with differences, however, and these differences imply different comparative advantages. And we grow still more different once we have entered different paths, cultivating different opportunities, experiencing different work histories. At any moment in our work histories, therefore, we have different comparative advantages. My illustrative story of entry and exit from the market for singers would seem to hold, therefore, regardless of how comparative advantages come about. What matters for that story is not how comparative advantages come to exist, but that they do indeed exist.
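The entry-and-exit story about singers can also be put in the form of a toy simulation. The sketch below uses invented numbers and a crude demand curve; it is meant only to illustrate the adjustment, not to model any real labor market. Each worker has one productivity in singing and another in everything else, the price of a song falls as more singing is supplied, and workers drift toward whichever occupation currently pays them more.
```python
import random

def entry_exit_sketch(num_workers=1000, periods=200, seed=3):
    """Toy illustration of entry and exit. Workers start out singing; each
    period a tenth of them reconsider and take whichever occupation pays them
    more at the going price of a song, which falls as the supply of singing
    rises. All parameter values are invented for the illustration."""
    rng = random.Random(seed)
    sing_skill = [rng.uniform(0.5, 2.0) for _ in range(num_workers)]
    other_skill = [rng.uniform(0.5, 2.0) for _ in range(num_workers)]
    other_wage = 1.0                              # price of a unit of "other" output
    sings = [True] * num_workers                  # everyone fancies a singing career

    for _ in range(periods):
        supply = sum(s for s, active in zip(sing_skill, sings) if active)
        song_price = 400.0 / max(supply, 1e-9)    # crude downward-sloping demand
        for i in rng.sample(range(num_workers), num_workers // 10):
            sings[i] = song_price * sing_skill[i] > other_wage * other_skill[i]

    return sum(sings), round(song_price, 2)

print(entry_exit_sketch())  # only workers with a comparative advantage in singing remain
```
The workers who remain singers at the end are those whose opportunity cost of singing is lowest, which is just another way of saying that their comparative advantage lies in singing.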
Imagining a Great Divider probably helps us to understand the division of labor as a spontaneous order. But the division of labor is too intricate to be the designed product of a human mind. Mandeville (1729, vol. I, pp. 182–3) commented on the vast numbers of persons involved in producing the humblest item of clothing in the England of his time: A Man would be laugh’d at, that should discover Luxury in the plain Dress of a poor Creature that walks along in a thick Parish Gown and a course Shirt underneath it; and yet what a number of People, how many different Trades, and what a variety of Skill and Tools must be employed to have the most ordinary Yorkshire Cloth? What depth of Thought and Ingenuity, what Toil and Labour, and what length of Time must it have cost, before Man could learn from a Seed to raise and prepare so useful a Product as Linen.
Later, Smith (1776, I.1.11) would make a similar observation about the “woollen coat” of “the most common artificer or day-labourer in a civilized and thriving country.” Buchanan (1982) makes the further point that social orders are defined in the process of their emergence. It follows that large-scale spontaneous social orders could not have been designed even by super-intelligent beings. General equilibrium theory is the standard mathematical representation of this process. The theory does not, however, represent the process itself, but only the theoretical end point of the process. And, of course, there are different versions of the theory. The original was Walras (1874–77). More recently, Debreu (1959) used very different mathematics in his general equilibrium theory. The theoretical outcome of the process, the equilibrium distribution of prices and output, is optimal in the sense that no rearrangement could help at least one person without simultaneously harming at least one other person. Economists understand perfectly well, however, that this interesting theoretical equilibrium is not achieved in any real economy. Economists who tend to favor more or less free or unhampered market exchange tend to think government “intervention” does not usually get us closer to the theoretical ideal. But many economists think governmental interventions (“regulation”) have a rich potential to improve outcomes. While economists in Boettke’s “mainline” tradition tend toward relatively strong support of more or less unfettered market competition, such views are probably in the minority among economists (Klein and Stern 2006). Economists tend to be more favorable toward free trade and free markets than non-economists. But the strongly free-market view is mainline, not mainstream, within the economics profession.
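The “end point” that general equilibrium theory describes can be illustrated in the simplest possible case. The sketch below computes the market-clearing relative price for a two-person, two-good exchange economy with Cobb-Douglas preferences; the preference parameters and endowments are invented, and the sketch depicts only the theoretical end point, not any process by which a real economy might reach it.
```python
def cobb_douglas_equilibrium(alpha_a, endow_a, alpha_b, endow_b):
    """Walrasian equilibrium of a two-person, two-good exchange economy in
    which each person has Cobb-Douglas utility x**alpha * y**(1 - alpha).
    Good y is the numeraire (its price is 1); p is the relative price of x.
    All parameter values passed in below are invented for illustration."""
    ex_a, ey_a = endow_a
    ex_b, ey_b = endow_b
    # Market clearing for good x pins down p:
    #   alpha_a*(p*ex_a + ey_a)/p + alpha_b*(p*ex_b + ey_b)/p = ex_a + ex_b
    p = (alpha_a * ey_a + alpha_b * ey_b) / ((1 - alpha_a) * ex_a + (1 - alpha_b) * ex_b)

    def demands(alpha, ex, ey):
        wealth = p * ex + ey              # value of the endowment at prices (p, 1)
        return alpha * wealth / p, (1 - alpha) * wealth

    return p, demands(alpha_a, ex_a, ey_a), demands(alpha_b, ex_b, ey_b)

p, person_a, person_b = cobb_douglas_equilibrium(0.6, (10.0, 2.0), 0.3, (4.0, 12.0))
print(round(p, 3), person_a, person_b)
# At this price both markets clear: demands for x sum to 14 units, for y to 14 units.
```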
Frederic Bastiat’s reflections on how Paris is fed reflect the idea that the division of labor is a spontaneous order: On coming to Paris for a visit, I said to myself: Here are a million human beings who would all die in a few days if supplies of all sorts did not flow into this great metropolis. It staggers the imagination to try to comprehend the vast multiplicity of objects that must pass through its gates tomorrow, if its inhabitants are to be preserved from the horrors of famine, insurrection, and pillage. And yet all are sleeping peacefully at this moment, without being disturbed for a single instant by the idea of so frightful a prospect. On the other hand, eighty departments have worked today, without co-operative planning or mutual arrangements, to keep Paris supplied. (Bastiat 1845, p. 97)
Leonard Read’s famous 1958 essay “I, Pencil” makes the vital point that nobody knows how to make a pencil. It is thinkable, if improbable, that someone might know all that must happen within the pencil factory to produce a pencil. But no one knows how to make all the inputs to pencil making and inputs to the inputs to pencil making, and so on. It is the overall division of labor that “knows” how to make a pencil. Pencil-making knowledge is distributed across all participants in the social division of labor; it exists in the system. If no one knows how it all works – if, in this sense, no one understands the division of labor – then it would seem hard to deny that the division of labor is a spontaneous order. It evolves over time in ways no one plans and that no one understands in detail. It is a systematic but unintended consequence of human action. The Scottish Enlightenment philosophers clearly expressed the idea of spontaneous order. Adam Ferguson said: “Nations stumble upon establishments, which are indeed the result of human action, but not the execution of any human design” (Ferguson 1767 as quoted in Hayek 1967a, p. 96). Adam Smith, of course, used the phrase “invisible hand.” Table 5.1 provides a simple taxonomy of orderly structures. An order may be the result of human action or not. It may be the execution of a human design or not. Natural orders are not the result of human action and not the execution of any human design. Artificial orders are the result of human action and the execution of a human design. Presumably, no order can be the execution of a human design and yet not the result of human action. Anything done to put the plan into motion would be a human action resulting in a designed order. Thus, it could not be described as the execution of a human design and yet not the result of any human action. Finally, spontaneous orders are the result of human action, but not the execution of any human design.
Table 5.1 Types of orders

                                    The execution of a human design    Not the execution of any human design
The result of human action          Artificial orders                  Spontaneous orders
Not the result of human action      Impossible                         Natural orders
The taxonomy of Table 5.1 is an aid to thinking. It is not meant to exclude intermediate cases. Consider the modern automobile. The orderliness of any one automobile is the result of human actions executing a human design. That design, however, is but a slight modification of previous designs, which modified still earlier designs, and so on. The earliest automobiles were horseless carriages. They combined slight modifications of the earlier designs of internal combustion engines and of horsedrawn carriages. Those earlier engines and carriages were but slight modifications of still earlier forms, and so on. Thus, even a seemingly clear case of an artificial order, a modern automobile rolling off the assembly line, is the product of an evolutionary process that no one planned and that no one could have imagined. If no one knows how to make a pencil, then pencils are more emergent than designed. Or consider “the wave.” Spectators in the sports stadium may stand up and sit down one after another to produce an undulation, a wave. When someone tries to start a wave in the stadium, it may or may not take. There must be several other people who respond with enough rhythm to get the thing going. There is no agreement to make it happen. No one is making enforceable commands to get the wave going. Nor is there any plan for how long the wave should last. Thus, the wave is in some degree a spontaneous order. But it cannot happen if the great majority of participants do not understand consciously what they are doing and what the wave is. They cannot be unaware of the fact that they are making the wave. Thus, it has an importantly “artificial” element as well. It is an intermediate form between spontaneous order and artificial order. Any even moderately complex economic order is a spontaneous order, whatever the role of government might be. Official Soviet planning was never the dominant ordering force in the Soviet Union. Rather, informal markets and official actions interacted in complex ways to produce a spontaneous order, albeit a decidedly unlovely one (Boettke 2001). Joung
(2016) discusses the spontaneous ordering forces in North Korea. Wagner (2010) and Smith et al. (2011) argue that nominally “public” and nominally “private” enterprises are “entangled.” Just as two distant particles may be “entangled” in physics such that the properties of one depend instantaneously on the properties of the other, the behaviors of “private” and “public” enterprises are “entangled” such that the nature of the one type of enterprise is a function of the nature of the other type of enterprise. In the current American system of crony capitalism, for example, “systemic” enterprises are gambling with other people’s money and therefore rationally take on more risk than otherwise similar enterprises under a regime that does not invite moral hazard by privatizing profits while socializing losses. In this example, the policy regime shapes the risk tolerance of “private” actors. Governments may plan, but the best laid schemes of mice and men go often askew. (See also Koppl et al. 2015.) F. A. Hayek used the term “spontaneous order” to describe “[t]he grown order . . . [the] self-generating or endogenous order” (1973, p. 37). Spontaneous orders in society are “The Result of Human Action, but not of Human Design” (1967a). The study of spontaneous orders, Hayek argued, was originally the province of economics. But biology “from its beginnings has been concerned with that special kind of spontaneous order we call an organism” (1973, p. 37). It was not until the arrival of “cybernetics [as] a special discipline” that the physical sciences came to discuss these “self-organizing or self-generating systems” (ibid.). Of course, many consequences of our actions are perfectly intended. Action is purposeful and often obtains its ends. But intended consequences raise no fundamental scientific problem. Unintended consequences are puzzling. The economy runs along without a central planner. This seems a prescription for disaster. But the system holds together and permits each of us a greater fulfillment of his ends than would otherwise be possible. It is the scientific problem at the heart of economics. Spontaneous orders have three “distinguishing properties.” (They are typical properties, not always present.) First, they are complex; for spontaneous orders, the “degree of complexity is not limited to what a human mind can master.” Think of the division of labor. As Bastiat suggests, no one understands precisely how all the pieces fit together. We have general ideas, but not a detailed, concrete understanding. Second, they are abstract; a spontaneous order’s “existence need not manifest itself to our senses but may be based on purely abstract relations which we can only mentally reconstruct.” Again, think of the economy. The economy is not a bunch of machines or a thing that happens at 2:00 on Thursday. It is an order of
events, a set of interconnections. Third, they have no purpose; “not having been made” by any designing minds, a spontaneous order “cannot legitimately be said to have a particular purpose, although our awareness of its existence may be extremely important for our successful pursuit of a great variety of different purposes” (Hayek 1973, p. 38; all emphases in original). Once again we use the economy as our example. What is the “purpose” of the economy? Each of us has his own ends, but we don’t have any great collective goal. An atheist might buy a Bible from a Christian in order to find supposed contradictions with which to embarrass Christians. The atheist opposes Christianity; the Christian supports it. And yet they come together in the same exchange of money for Bible. Far from having a common purpose, their purposes are antithetical. Hayek’s notion of spontaneous order is, at a minimum, similar to the complexity theorists’ notion of “complex adaptive systems.” Hayek claims that such orders “result from their elements obeying certain rules of conduct” (1973, p. 43). Each “agent” follows a set of rules and responds to local information. The interaction of many such agents produces an overall order that was not planned by any of the agents who produced it. If the number of agents is large enough, this order may be very complex even when the rules governing each individual are quite simple. Hayek is an evolutionary theorist whose evolutionary ideas are based, in part, on a cognitive psychology similar to the sorts of things discussed in complexity theory. As with the complex adaptive systems of the Santa Fe group, Hayek recognizes that there is “no global controller” (Arthur, Durlauf, and Lane 1997, p. 4) of the economy or any other complex adaptive system. The terms “unintended consequences,” “spontaneous order,” and “complex adaptive system” do not have identical meanings. Many unintended consequences are neither systematic nor orderly. But for that very reason, they don’t often form the objects of scientific study. A spontaneous order could be relatively simple, though few, if any, simple cases are interesting to study. In principle, a complex adaptive system could be constructed with all its aspects and behaviors perfectly planned. But it is hard to come up with examples. Such a system would have no unintended consequences. Thus, the terms do not cover the same ground. But the overlap is great. This overlap contains most or all of the interesting phenomena in this area.
COMPETITION
The word “competition” may invoke quite a variety of ideas, many of them very different from the sort of thing I have in mind. Economists and
non-economists alike often toss around the vague term “free-market competition” without definition. Related terms such as “capitalism” and “laissez faire” are also frequently used without adequate definition. The general idea such terms are usually meant to invoke is that different parties may trade freely with whomsoever they choose. But in any actual system of supposed “free competition” there have always been restrictions on who may trade with whom and on what terms. Some such restrictions are widely opposed by economists. Economists generally oppose restrictions against trading with foreigners. We tend to support free trade. And economists generally oppose minimum wage laws, which we think do more harm than good. (There are notable exceptions in both cases, of course. Some economists like these restrictions.) Other restrictions, however, receive broad support from economists and non-economists alike. Children, for example, should not have a complete freedom of contract. A 12- or 16-year-old may buy a toy truck, but not a used car. Nor should adults be allowed to sell themselves into slavery. We generally take for granted the very large number and variety of restrictions on what is allowable in trade. If I sell you a large can labeled “olives” that you later discover contains lots of brine and only two olives, you will accuse me of fraud. I violated an implicit standard of reasonableness in our contract. Is that standard of reasonableness a deviation from “free-market competition”? When terms such as “free-market competition” are used, whether favorably or unfavorably, we are usually left to guess which restrictions on trade “count” and which do not. In other words, we are usually left to guess what precisely the term is supposed to mean to the author who is splashing it about. This ambiguity about restrictions is sometimes a reflection of the view that it would be better to have a decentralized evolutionary process to establish such restrictions, rather than attempting to design and impose them from the center. The British common law is often celebrated in this context. All contracts are incomplete. They do not specify all relevant details of what should happen in every possible contingency. Contracting parties rely, therefore, on a shared set of expectations. These expectations are often unconscious and tacit. We become aware of them only when they are violated. If the contracting parties fall into dispute they may find themselves in civil court. The court should try to determine which of the contending parties had the more reasonable expectations. That determination depends on a host of particulars, including the ordinary practices of the relevant industry. Thus, the “rules” of “free-market competition” should exist mostly in tacit forms of habit, practice, and custom. When contracting parties fall into dispute, the concept of “fairness,” in this sort of
view, is central to the determination of which litigant had the more reasonable expectations. We generally have the right to expect that our trading partners are not willfully tricking us into unfair deals. Custom includes standards for what is fair and what is unfair in our dealings with others. Defenders of the common law think that at least some actually existing systems of common law have come close enough to the ideal to outperform more planned and centralized systems of restrictions. And they generally believe that it is possible and desirable to set up such decentralized processes of dispute resolution, thus largely (though not wholly) sidestepping the need for “regulation” by governmental authorities. They think common law outperforms “regulation” for the most part. Notice, however, that even in this putatively antiregulatory view, commerce is absolutely regulated by a set of binding rules that are enforced in practice. All commerce is regulated, even under laissez faire. The question is what sort of regulation is best. The generalized objection to “regulation” is either incoherent or a call for regulation through a decentralized evolutionary process such as Anglo-American common law. I have argued that even the most ardent supporters of laissez faire are not usually suggesting that commerce be somehow unconstrained by rules limiting what one is permitted to do. Rather, they favor (where possible) decentralized evolutionary processes for the determination of those rules. Unfortunately, loose talk of “freedom,” “liberty,” laissez faire, and the like often obscures the point that all commerce, however “free,” is “regulated” in the sense that there are rules restricting what you can do, whether those rules be top down or bottom up. Loose talk of “freedom” and the like easily creates the impression that “free-market competition” is some sort of freefor-all in which the more cunning and ruthless players have a natural advantage over more prosocial actors. It may then appear that economists take primitive notions of “individualism” and “selfishness” so seriously that they cannot appreciate what a horror show “free-market competition” really is. And yet the sort of view thus criticized is not really upheld at all by professional economists, especially those in research universities. Cole (2012) and Cole and Thompson (2013) illustrate the confusions that may exist. Their broadly favorable characterizations of my ideas for “competitive” forensic-science reforms rather badly misconstrue the underlying economic vision of those reforms. They are good illustrations precisely because the authors are mostly friendly toward my work in the area. They are not hostile critics eager to misconstrue my arguments. Importantly, their interpretation of the economic idea of “free-market
competition” is not idiosyncratic. It represents, instead, quite a lot of scholarly thinking by non-economists about what economists are trying to say. Their interpretation of “competition,” which I will explain presently, is not correct for the “mainline economics” of Adam Smith, F. A. Hayek, and Vernon Smith. But more “neoclassical” economists do sometimes fall into the sort of thinking they seem to describe, at least on markets being somehow “natural.” Thus, while I will express my disagreements with Cole (2012) and Cole and Thompson (2013), I will also show that one of the most important architects of neoclassical economics had ideas on the “natural” at least similar to the views they impute to me and to economists in general. We will also see, however, that the economists who come closest to their depiction are not “free-market economists,” but “interventionists.” Cole (2012) says that “Free-market competition requires a level playing field”; therefore, “Professor Koppl’s vision of a level playing field in which defense and prosecution experts compete in a free-market” may be at least somewhat “naive” (p. 103). I think Cole expresses a common view that is often taken for granted. Somehow we must have a “level playing field” before we permit “free-market competition.” In my view, the truth is almost the opposite. What is sometimes called “free-market competition” does not “require” a “level playing field” to exist before competition begins. Rather, “free-market competition” tends to produce greater equality of outcomes. It does not eliminate inequalities of wealth and income, of course, but it does tend to reduce them. As an aside, I do not believe that I have ever described my proposals for “competition” among crime labs as a call for “free-market competition.” Koppl (2005a) does include privatization in the suite of proposed reforms. But as I note there, citing Williamson (1976), “Poorly designed ‘privatization’ may replace a government bureaucracy with a profit-seeking monopoly” (p. 273). Privatization can easily go wrong and is very far indeed from a panacea. In any event, criminal justice is a state function and thus monopolized. Even if crime labs were privatized, the result would still not be “free-market competition.” The notion that “free-market competition” (whatever that may be) requires a “level playing field” is either a positive or a normative statement. I am unaware of any positive statement to the effect that some sort of equity condition is required for competition to exist. Indeed, Adam Smith held the view that slavery could persist indefinitely in an otherwise competitive system. Slaves do not exist on a “level playing field” with their owners. Smith opposed slavery. And he thought slave labor was more
expensive for the slave owner than free labor. (Fogel and Engerman 1974 show that Smith was probably wrong on this point, though it remains, of course, a contested question.) Slavery persists in spite of its (supposed) economic inefficiency because of our "love of domination and tyrannizing," which "will make it impossible for the slaves in a free country ever to recover their liberty" (Smith 1982, p. 186). (The passage is from the 1762–3 lectures, LJ[A] iii, 114.) Smith's theory of slavery's persistence should suggest that "free-market economics" (whatever that may be) does not assume or "require" a "level playing field." It may be that some critics of "free-market competition" think we should have a "level playing field." It is hard to see how this normative view applies to the narrower question of "competing" crime labs. In any event, I think mainstream economists usually argue for transfer programs. Let's tax the rich to support the poor. Even two prominent economists who are often accused of opposing such measures, F. A. Hayek and Milton Friedman, have, instead, forcefully supported them (Hayek 1944; Friedman 1962). It is not that we "require" a "level playing field." Rather, we desire public support for relatively poor people. I have suggested that "free-market competition" might reduce inequalities. This claim is broadly true, I think. Details matter, and history matters. But the general tendency of more or less unfettered markets is to reduce differences in average wealth across ethnic and national groups and, to a lesser extent, individuals. This general tendency is illustrated by the convergence in average height across countries: "The increasing trend in height has decelerated in developed Western countries in the 1990s, while it is still occurring in recently industrialized or developing countries" (Pak 2004, p. 512). As poor countries get rich through greater integration with the global marketplace, average heights in the poor countries rise toward average heights in rich countries. No such convergence occurred in North Korea, where "free-market competition" has not been allowed. (But as chronicled in Joung 2016, there have been black markets, and since the 1990s there has been some official liberalization.) Great differences in height have emerged between North Korea and South Korea. Pak (2004, p. 514) found that young adults in South Korea were more than two inches taller than their peers in North Korea. (See also Schwekendiek 2009 and Steckel 1995.) Some economists of the Progressive Era seem to have understood that "free-market competition" tends to eliminate the differences between groups. Some of them favored restrictive measures out of a fear of the competition of putatively inferior groups. Leonard (2005, pp. 212–13) explains how this sort of thinking led to support for the minimum wage.
Progressive economists, like their neoclassical critics, believed that binding minimum wages would cause job losses. However, the progressive economists also believed that the job loss induced by minimum wages was a social benefit, as it performed the eugenic service of ridding the labor force of the "unemployable." Sidney and Beatrice Webb (1897 [1920], p. 785) put it plainly: "With regard to certain sections of the population [the "unemployable"], this unemployment is not a mark of social disease, but actually of social health." "[O]f all ways of dealing with these unfortunate parasites," Sidney Webb (1912, p. 992) opined in the Journal of Political Economy, "the most ruinous to the community is to allow them to unrestrainedly compete as wage earners." A minimum wage was seen to operate eugenically through two channels: by deterring prospective immigrants (Henderson, 1900) and also by removing from employment the "unemployable," who, thus identified, could be, for example, segregated in rural communities or sterilized.
They favored restrictions on immigration for similar reasons. Such eugenic views were used to support restrictive measures. Cole and Thompson (2013, p. 126) say that I “advocate a system of competition in which [crime] laboratories would naturally compete to provide the best scientific analysis of evidence.” I do not know what it means to compete “naturally” or “unnaturally.” Nor do I recall using such language to characterize my proposals, which, after all, would require policy makers to decide and act. They seem to believe that economists think of markets as “natural” in some strong sense that puts them, I gather, beyond the proper reach of human intervention. Such interpretations of standard economics are probably mistaken, as our discussion of spontaneous order should have suggested. Certainly, they do not fit the “mainline economics” with which I tend to identify. I will come presently to some “neoclassical” ideas that come closer to this interpretation. One may guess how it could seem that even mainline economists think such things. If you are insensitive to the idea of spontaneous order, then the economist’s emphasis on unplanned order might seem to suggest the view that market forms are “natural” in a sense that excludes human action. If you push this view, economists could seem to be saying that we do not create, design, or implement markets. We simply remove impediments to their “natural” existence. Such interpretations are mistaken. Markets are spontaneous orders and thus very much the result of human action. And in many instances market structure is either the product of design or, at least, influenced for good or ill by human plans operating at relatively high levels such as that of a regulatory body or national government. Many scholars, economists included, seem to struggle with the interpretation of standard microeconomic theory. (See, for example, Yeager 1960; Langlois and Koppl 1991.) In one common view that Cole and
Thompson (2013) seem to share, economists are unable to recognize the role of human action in shaping market forms. Zuiderent-Jerak says that the perspective of science studies “changes markets from impersonal and ‘natural’ entities” as conceived in economics “into interesting objects for studying the creation and regulation of a particular form of objectivity” (2009, p. 769). It is not necessary to interpret economists in this way if we think they are describing spontaneous orders. Michel Callon thinks it is a meaningful criticism of “economics” to say “The economists play a very important role because they perform the idea of pure markets, governed by natural laws in the political sphere” (quoted in Barry and Slater 2002, p. 299). Callon’s remark comes in context of a criticism of economists as monopoly experts with which I largely agree. But it is probably misleading to say that economic principles are “natural laws,” which invokes religious notions of right and wrong and seems to imply that social forms do not or, perhaps, should not evolve. If we apply Callon’s statement about “performing” markets as broadly as his unqualified remark seems to imply, then it dissolves into absurdity. In the modern world, of course, economic experts may be involved in the design and redesign of markets. And in many cases they deserve criticism for the results they help to bring about. (See Koppl et al. 2015 and Smith 2009.) But economists have applied their models to times and places in which economic theory was unknown. This activity is called “economic history,” and the basic “laws” of economics do not change when applied in this way. For example, among economists it is uncontroversial to blame the collapse of monetary exchange in late third and fourth-century Rome on currency debasement engineered by the Emperors (Jones 1953; Mises 1966, pp. 761–3; Wassink 1991, p. 468). This rather humdrum example of scientific economics is hard to square with Callon’s absurd “performance” view or his claim that “There exist only temporary, changing laws associated with specific markets” (Callon 1998, p. 47). Presumably, the problem is (again) that Callon and others have not absorbed the economic notion of spontaneous order. Thus, Callon (1998) seems to think he is developing an idea alien to economics when he makes the banal evolutionary point that a “market is more like an unfinished building, an eternal work site which keeps changing and of which the plans and construction mobilize a multitude of actors participating in the development, by trial and error, of analytical tools, of rules of the game, of forms of organization and pricing principles” (p. 30). Without a clear recognition of spontaneous order, it might seem that economists somehow deny or
ignore the role of human action in economic evolution, or even economic evolution itself. To someone operating under such a misunderstanding, it may seem necessary to impute to economists some notion that markets are "natural" in much the way a tree or mountain is "natural." Such an imputation has little to do with standard economics and less to do with the "mainline" (Boettke 2012, pp. 383–5) economics undergirding my views on epistemic "competition." I have repudiated common criticisms of economics for upholding a spurious view of markets as somehow "natural." But there is an important thread of economic theory that is at least moderately close to the views Cole, Callon, and others have imputed to economists in general. Economists of this stripe, however, do not give "free-market competition" any very unqualified support. They tend, instead, to favor intervention and regulation. The originator of general equilibrium theory, Leon Walras, said that his "pure economics" was "natural science" because it concerned the "relations of things to things" (Koppl 1995). Thus, on this view, economics is a branch of physics. Pure economics, for Walras, was a natural science studying a phenomenon from which human action is absent. This view will likely seem strange and perplexing to most readers. To add to the perplexity, Walras thought that the natural science of economics was a normative discipline describing an ideal world that it is the job of politics to realize. It takes some work to see why such views could seem reasonable to a scholar of his stature. Walras was building on a philosophical foundation very different from the sort of thing we generally take for granted today. (His most important philosophical inspiration was the French eclectic philosopher Étienne Vacherot.) The notion of normative physics must seem incoherent to most thinkers today. See Koppl (1995) for an attempt to show how these seemingly bizarre ideas are perfectly coherent, if very far from the thinking of most social scientists. For Walras the theory was both "natural" and the object of policy. Most or all economists would reject this idea. But many neoclassical economists have retained the view of the Walrasian general equilibrium model as a normative ideal. They hold that it is the goal of policy to achieve general economic equilibrium. When the ideal is not realized we have "market failure." The archetypal statement of the market-failure approach to normative economics is Bator (1958). Recall that in general equilibrium no one can be made better off without harming at least one other person. This condition is a kind of optimality, which Walras noted and emphasized.
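For readers who prefer a formal statement, the criterion invoked here can be given a minimal textbook sketch (the notation is mine, not Walras's or Pareto's, and nothing in my argument turns on it). A feasible allocation $x$ is optimal in the relevant sense if there is no other feasible allocation $y$ such that

\[
u_i(y) \geq u_i(x) \ \text{for every agent } i, \qquad u_j(y) > u_j(x) \ \text{for at least one agent } j,
\]

where $u_i$ denotes agent $i$'s utility function.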
And yet this condition is called "Pareto optimality" after Walras's follower in technical economics, Vilfredo Pareto. Neoclassical economists made Pareto optimality a central normative criterion for judging policy. The mathematical conditions required to prove that economic equilibrium achieves such normative optimality are quite narrow and artificial. Thus, it is possible to claim in a wide variety of cases that we have "market failure" and a corresponding need for government "intervention." (Later, public choice economists would develop a theory of "government failure" as a kind of dual of "market failure.") Theorists of market failure treat general equilibrium in just the way Walras intended. It is the normative ideal that policy should aim at. None of them would say, with Walras, that general equilibrium is a part of physics and, in this sense, "natural." But they do not question their ideal. When fact does not conform to theory, they wish to change fact and not consider that it may be their theory that is deficient. Overall, then, we do have an important strain of economic theorists and policy advisors who come close to the sort of thing Callon described, or at least closer than mainline economists. Note, however, that these market failure theorists are not advocates of "free-market competition." Walras was a cooperativist who called himself (independently of Marx) a "scientific socialist." I do not wish to suggest that the old-fashioned neoclassical economics I have described here still holds the dominant position it held at the time of Bator (1958) or as recently as, say, 1975 or 1980. It does not. That sort of neoclassical economics has lost ground. Evolutionary theory, complexity theory, and institutional analysis are gaining. It is not my purpose here, however, to say where I think economics might be headed or whether it is getting more or less "free market" over time. My purpose in this chapter has only been to clarify some of the economic foundations of my larger argument on experts and expert failure. The word "competition" creates confusion. But I have no really satisfactory substitute. Therefore, I will use it and hope that the clarification I have attempted in this chapter will minimize misunderstandings. This chapter included an overview of the important economic idea of spontaneous order. This notion is needed for an adequate interpretation of the division of knowledge in society. The division of labor and the division of knowledge coevolve and both are spontaneous orders. This fact is vital to our understanding of Hayekian dispersed knowledge, in part because it dispels the often implicit theory that knowledge exists in some sort of planned hierarchical structure that can be surveyed and directed from above. In the next two chapters I review the history of thought on the
division of knowledge. While my review is surely incomplete, I hope that it will achieve two ends. First, I hope it will give us a reasonably rich and sophisticated understanding of the phenomenon, one that reaches beyond the banality that different persons know different things. Second, I hope to show that the notion of dispersed knowledge has not been well understood in Western thought right down to the present. Sometimes otherwise sophisticated thinkers fail to grasp even rather simplistic versions of the idea or basic implications of it.
6
The Division of Knowledge through Mandeville
INTRODUCTION
The problem of experts originates in the "social distribution of knowledge" that Berger and Luckmann (1966) emphasized. Thus, an analysis of expert failure should be grounded in an appreciation of the division of knowledge in society. We need some understanding of where expertise comes from. The social division of knowledge was not designed and imposed from above by a knowledge elite. It emerged spontaneously as the unintended consequence of many individual choices aiming at local ends and not any overall design for the system. The division of knowledge is a spontaneous order. It is bottom up and not top down. The theory of expert failure may go wrong if it does not begin with this bottom-up understanding of the social division of knowledge. In earlier chapters we saw that some authors build on hierarchical notions of knowledge. As I will suggest in this chapter, the division of knowledge in society is quite rich and complex. If we employ a grossly simplified hierarchical model of knowledge, it becomes possible to imagine that the growth of knowledge can be planned or that theorists can construct causal models to explain the "ideology" of other humans. Mannheim (1936) is an example. We saw earlier that Mannheim exempted science and his own ideas from the category of "ideology," placing his theory above the system and not in it. Cole (2010) does not make the gross simplifications of Mannheim. He is, nevertheless, another example of a theorist adopting a relatively hierarchical view of knowledge. He explicitly calls for more "hierarchy" and wants to empower a "knowledge elite." Cole's hierarchical view of knowledge has implications for practice and for the theory of expert failure.
Cole criticizes an important document published by America’s National Academy of Sciences. The “NAS Report” (NAS 2009) reviews forensic science in the United States and makes recommendations for reform. Cole (2010) supports the Report’s central reform measure, which is federal oversight of forensic science through a regulatory body. But he criticizes the Report for adopting “obsolete models of science” found in the works of Karl Popper and Robert Merton that can only “impede the achievement” of the Report’s “purported goal: the ‘adoption,’ by forensic science, of a vaguely articulated ‘scientific culture’” (Cole 2010, p. 452). One “obvious problem” is Popper’s idea of scientific “boldness,” which is “tellingly missing from the NAS Report’s discussion of the ‘scientific method’” (p. 452). Popper said scientists should make bold conjectures. As Cole deftly puts it, in the Popperian view, “only by thinking big, taking risks, and making ‘bold conjectures’ would scientists advance knowledge” (Cole 2010, p. 452). Cole considers it a great error to apply such Popperian sensibilities to daily work in forensic science. “Popper’s theory,” Cole insists, “applies to the sort of theory-generating scientist who works at the apex of the academic establishment.” It probably does not apply to forensic scientists, who are usually not research scientists. Forensic scientists are mostly “technoscientific workers from whom we would probably not desire boldness” (pp. 452–3). Cole emphasizes the supposed difference between “discovery science” and “mundane science.” He wants Popperian boldness only in discovery science. But in Chapter 3 we saw a telling example of boldness in “mundane science.” There we saw Odling (1860) defend “the conflict of testimony” (p. 167) with an example. It is worth quoting Odling at greater length than I did earlier. Odling (1860, p. 168) says: Three or four scientific men of eminence, retained by the patentee in an action for infringement, declared that a particular chemical reaction could not take place. They were supported by works of authority, and the reaction most certainly did not take place when the ordinary modes of experimenting were adopted. But it was most important for the defendant to show that this reaction was practicable, and his witnesses, after various attempts, succeeded in devising a method by which it was effected with facility. Accordingly they contradicted the witnesses for the plaintiff, and declared positively that the reaction could take place. And this illustration shows the importance of having a subject investigated by men desirous of establishing different conclusions. Had this case been referred to an independent commission, they would probably have decided it by the mere knowledge of the day, and this new reaction, with its important consequences, would have been altogether overlooked.
Applying the hierarchical model of knowledge to this case would probably have resulted in a false victory for the plaintiff and an injustice to the defendant. Cole (2010) adopts a hierarchical view of knowledge. In this chapter and the next I will develop a nonhierarchical view of knowledge that owes much to F. A. Hayek and Bernard Mandeville. We have seen other examples of theorists adopting a simplified and relatively hierarchical view of knowledge. Among them, Alfred Schutz is unusually interesting. Schutz (1946) recognized the Hayekian division of knowledge and gave it great importance: "Knowledge is socially distributed and the mechanism of this distribution can be made the subject matter of a sociological discipline" (1946, p. 464). And yet Schutz gives the well-informed citizen the job of adjudicating expert opinions coming from different domains. Schutz says, "There is a stock of knowledge theoretically available to everyone, built up by practical experience, science, and technology as warranted insights" (1946, p. 463). Thus, even though Schutz fully acknowledges Hayek's insight that knowledge is dispersed, he sees the social "stock of knowledge" as consisting in "warranted insights" rather than, mostly, practices that have emerged over time as aspects of the division of labor. These "warranted insights" are, apparently, conscious thoughts, and they are "theoretically available to everyone." In spite of his emphasis on the division of knowledge, his theory of experts builds on a theory of knowledge that is much more hierarchical and disembodied than Hayek allows. The example of Schutz (1946) suggests that Hayek's insights on dispersed knowledge are not as simple or trivial as summary statements may seem to suggest. We had better spend some time on the topic if we hope to avoid the sort of errors, as I imagine them, that I have attributed to Mannheim, Cole, and Schutz.
THE DIVISION OF KNOWLEDGE
The term “division of knowledge” often refers to how we divide education or science into topics or disciplines. We typically distinguish, for example, science from the humanities. McKeon (2005) seems to have this sort of “division” in mind. Our knowledge may be divided into that concerning the public sphere and that concerning the private sphere, with different rules or principles belonging to each sphere. (Given the “entanglement” of the nominally public and private spheres explained in Wagner 2010, this particular division of knowledge may be doubted.) In this sense, the
“division of knowledge” may be perfectly understood by an individual mind. Or the “division of knowledge” may refer to a perhaps philosophical distinction between types of knowledge. The difference between a priori and a posteriori knowledge, for example, may be described as a “division of knowledge.” These are, of course, perfectly legitimate uses of the term, but distinct from the meaning of interest here. In this volume the term refers to the fact that different people know different things. F. A. Hayek (1937, 1945) is generally credited with the insight that knowledge is dispersed. This attribution seems to be about right. But as we will see, others anticipated Hayek in various degrees. In particular, Mandeville (1729) gives dispersed knowledge a treatment that rewards study today. Both George (1898) and Mises (1920) had previously given the idea a clear and explicit scientific statement and drawn from it the inference that comprehensive central planning of production cannot be as productive and efficient as its advocates had imagined. It is difficult to trace the history of Hayek’s notion of dispersed knowledge. The problem is that the basic idea that different people know different things is trivial and obvious. The implications of this humble insight, however, are neither trivial nor obvious, especially when it is combined with a broader understanding of the nature of the knowledge that is thus divided among participants in the division of labor. It is a truism to say that different people know different things. Keil et al. (2008) note, “As adults we all believe that knowledge is not distributed smoothly and homogeneously in the minds of others” (p. 259). We recognize that “bits of knowledge and understanding cluster together in ways that reflect different areas of expertise” (p. 259). Lutz and Keil (2003, p. 1081) find that “Children as young as 3 years of age already have a sense of the division of cognitive labor. They understand that adults are not omniscient and that they do have different areas of expertise.” Older children display more sophistication. The researchers found that “Children as young as 4 years were able not only to make attributions about stereotypical roles but also to make judgments about quite general and seemingly abstract domains such as biology and mechanics” (p. 1081). It is to be expected, then, that past writers will have often noted the division of knowledge without thinking themselves to have made some sort of discovery. For this very reason, perhaps, no one prior to Hayek succeeded in bringing this foundational fact forward as an explicit and central theme of social science. An exception should perhaps be made for Mandeville. But his treatment of dispersed knowledge seems to have left inadequate traces on subsequent thinkers. Even Adam Smith and David Hume, who were
greatly influenced by Mandeville, adopted a less radical view of knowledge dispersion than we find in Mandeville. It was only with Hayek that the theme stuck in economics. Hayek’s insight that knowledge is dispersed is important because we cannot somehow aggregate divided knowledge, thereby overcoming or eliminating dispersion of knowledge. Our understanding of dispersed knowledge is therefore linked to our understanding of the nature of the knowledge thus dispersed. There are large literatures on themes such as tacit knowledge and extended cognition that relate to the picture of knowledge I develop in this chapter. I will refer to this literature only incidentally, however, and direct my attention to the theme relevant to a theory of expert failure, the division of knowledge. As I develop in greater detail presently, the knowledge corresponding to the division of labor is “constitutive” and evolutionary. It may also be exosomatic, tacit, and synecological. When put in the right order these labels give us the acronym SELECT, which represents the idea that knowledge may be Synecological, EvoLutionary, Exosomatic, Constitutive, and Tacit. Knowledge is synecological if the knowing unit is not an individual, but a collection of interacting individuals. Smith (2009) makes a similar point in contrasting “constructivist” and “ecological” rationality in economics. He links ecological rationality with “adaptive human decision and with group processes of discovery in natural social systems” (2009, p. 25). Hutchins (1991, 1995) famously explained how no one on a modern ship knows personally all the information that nevertheless enters into the decisionmaking process guiding the ship. The knowing entity is the ship and its crew, not any one person. There is an evolved “division of cognitive labor” (1991, p. 34) that was not fully designed by anyone. Each person in the crew interacts with others and with the ship to generate choices that may be more justly attributed to the system – the ship and its crew as a whole – than to any one person on the ship. Some readers may prefer to say that only individuals know and only individuals choose. But if we are not privy to the details of the ship’s division of cognitive labor, then we cannot specify which persons knew which things and which persons made which choices. We do not need a map of the ship’s division of cognitive labor to recognize that their interactions are generating potentially adaptive outcomes that depend on new information coming from both outside and inside the ship. In other words, we do not need a map of the ship’s division of cognitive labor to see that it is thinking, learning, and acting in much the way individual humans think, learn, and act. As we saw in the previous chapter, Leonard Read (1958) taught us that no one person knows how to make a pencil. Probably no one person knows
all that must happen in the pencil factory to produce a pencil. Here too there is a division of cognitive labor. Moreover, no one knows how to make all the inputs to pencil making and inputs to the inputs to pencil making, and so on. It is the overall division of labor that "knows" how to make a pencil. Pencil-making knowledge is distributed across all participants in the social division of labor; it exists in the system. Pencil-making knowledge is synecological. I borrow the term "synecological" from ecology. The Oxford English Dictionary defines "synecology" as "The study of the relationships between the environment and a community of organisms occupying it. Also: the relationships themselves." Etymologically, "syn" means "same." Thus, etymologically, the word means "same ecology." The interacting elements are in the same ecology. I use the term "synecological" to suggest that such knowledge is generated by the interactions of elements in an environment and is not separable from these elements, their interactions, or their environment. There are a variety of views that represent knowledge as evolutionary. The idea of evolutionary knowledge can be found in Mandeville (1729). As we will see later in this section, Vasari (1568) also takes an evolutionary view of knowledge, though without offering a philosophically grounded theory of evolutionary knowledge. More recently, Radnitzky and Bartley (1987) provide something of a canonical statement of the modern evolutionary epistemology associated with names such as Karl Popper (1959) and Imre Lakatos (1970). Donald Campbell (1987, pp. 73–9) reviews some history of the idea of evolutionary epistemology, but traces it back no further than Herbert Spencer. Generally, knowledge is "evolutionary" if it changes over time in a process of variation, selection, and retention. Longo et al. (2012) have given us a theory of what we might call "creative evolution." This theory has been imported to the social sciences and humanities by Felin et al. (2014), Koppl et al. (2015a, 2015b), and Devins et al. (2015, 2016). If Koppl et al. (2015a) are right, then the theory of creative evolution describes the coevolution of the division of knowledge and the division of labor. Knowledge is exosomatic if it is embodied in objects existing outside the organism that uses such knowledge. As Ingold notes, Lotka (1945, p. 188) seems to be the first to use the term "exosomatic" for "the products of human knowledge" (Ingold 1986, p. 347). Lotka says: "In place of slow adaptation of anatomical structure and physiological function in successive generations by selective survival, increased adaptation has been achieved by the incomparably more rapid development of 'artificial' aids to our native receptor-effector apparatus, in a process that might be termed
exosomatic evolution.” He links exosomatic evolution to the idea that, for humans, “[k]nowledge breeds knowledge” (1945, p. 192). Karl Popper (1979) emphasized the exosomatic nature of much human knowledge, using the book as a principal example. A more suitable example for this discussion might be an egg timer. The knowledge of when to remove the egg from the boiling pot is embodied in the egg timer, which exists apart from the cook. Knowledge is “constitutive” if it constitutes a part of the phenomenon. It is “speculative” if it explains the phenomenon. As we shall see presently, these are overlapping categories. The fisherman’s knowledge is constitutive of fishing, for example, no matter how much or little of it can be found in theories of fishing. Finally, knowledge is tacit if it exists in our skills, habits, and practices rather than in an explicit form that could be written down. The knowledge of how to ride a bicycle is tacit. Ryle (1949) and Polanyi (1958) are standard sources on tacit knowledge, though many other figures, such as George (1898, pp. 39–41), have noted the phenomenon in one way or another. The knowledge that coevolves with the division of labor is synecological, evolutionary, exosomatic, constitutive, and tacit. SELECT knowledge seems to be necessarily evolutionary. It is characteristically, but not necessarily, synecological, exosomatic, constitutive, and tacit. What Prendergast (2014) says of Mandeville applies to the characterization of knowledge given here. In social evolution there is an “accumulation of knowledge derived in the course of economic activity and embodied in practices, procedures, goods and technologies” (p. 105). Hayek’s discovery of dispersed knowledge was not the truism that different persons know different things. He discovered that this humble insight is of central importance to social theory. Hayek seems to have credited himself with this discovery when he described his 1937 essay “Economics and Knowledge,” which identified the division of knowledge as the central issue in economics, as “the most original contribution I have made to the theory of economics” (Hayek 1994, p. 79). Earlier writers gave differing degrees of attention to the division of knowledge and they adopted different attitudes to it. Mandeville’s treatment of knowledge so thoroughly anticipated Hayek that we may wonder if Hayek was not guilty of unconscious borrowing. His article on Mandeville (Hayek 1978) is generous toward Mandeville and credits Mandeville with the “twin ideas” of evolution and spontaneous order (p. 250). But Hayek does not seem to recognize Mandeville’s theory of dispersed knowledge. It seems unlikely that this snub was intentional.
Even if we credit Mandeville fully, however, it was Hayek and not Mandeville, George, or Mises who caused the idea to stick. After Hayek, there is widespread recognition of a problem of dispersed knowledge; before Hayek, there was not. Often even the very persons who invoke it do not adequately understand the concept. But the idea is nevertheless widely recognized as real and important. It is because of Hayek that the idea has acquired a permanent place in the lexicon of social science. Thus, it seems fair to speak of “Hayekian” dispersed knowledge rather than “Mandevillean” dispersed knowledge. The distinction between constitutive and speculative knowledge is important for a good understanding of Hayekian knowledge dispersion. This distinction modifies slightly Hayek’s distinction between “constitutive” and “speculative” ideas (1952b, pp. 36–7). Knowledge is “constitutive” if its possession becomes one of the causes of a social phenomenon. Such knowledge is constitutive of the phenomenon. Knowledge is “speculative” if it explains something, be it a social phenomenon, a natural phenomenon, or something else. This distinction is often between two aspects of knowledge, but some constitutive knowledge has no evident speculative dimension. The specialized knowledges associated with each position in the social division of labor enable, and in this sense “cause,” that division of labor. It is true, of course, that an earlier and perhaps less refined division of labor gave rise to the constitutive knowledge that then enabled the subsequent, perhaps more refined division of labor. We have here yet another example of feedback in an evolutionary system. The logic here is close to that of Young (1928), who, however, took a more hierarchical view of knowledge. Sailors had a constitutive knowledge of their craft long before scientists acquired a speculative knowledge of the mathematical principles of sailing (Mandeville 1729, p. 143). Wild chimpanzees have been seen to make tools (Goodall 1964). They have a constitutive knowledge of tool making. Presumably, however, they do not have any speculative knowledge of tool making since they do not possess human language. Whatever we might be able to teach chimpanzees to do, the gestural language of wild chimpanzees as currently understood (Hobaiter and Byrne 2014) would not seem to allow the explanatory function defining speculative knowledge. For humans, too, constitutive knowledge need not correspond in any way to speculative knowledge. But constitutive knowledge for humans might also be speculative. Classical mechanics is speculative knowledge because it explains the motions of bodies celestial and terrestrial. It is also constitutive knowledge because it guides us in the construction of bridges and buildings.
Speculative knowledge is theoretical and explicit. Speculative knowledge does not necessarily guide action. When it does, the knowledge precedes the action and is separate from it. Constitutive knowledge may be tacit and practical. By definition, constitutive knowledge guides action. It need not exist prior to or separately from the actions so guided. I did not know how to ride a bicycle prior to the action and my bicycle knowhow is not independent of my bicycle riding. If I found that I could no longer ride a bicycle I might lament that I had “forgotten” how. And I would mean by that statement only that I had lost the skill, not that my memory had failed me. Hayek’s notion of dispersed knowledge refers mostly to constitutive knowledge. It refers to the knowledge that enables the division of labor. Expertise is generally constitutive knowledge derived from the expert’s place in the social division of labor. The social division of knowledge is featured prominently in Plato’s Apology. In it, Socrates explains how he became a gadfly to Athens. Chaerephon had asked the oracle at Delphi whether there was anyone wiser than Socrates and, Socrates reports, “the Pythian prophetess answered that there was no man wiser.” Like Captain Renault in Casablanca, Socrates was shocked, shocked by this report. He said he was thus driven to question his fellow Athenians in an effort to prove the oracle wrong. His first stop was a politician. Their exchange left Socrates thinking “he knows nothing,” a plausible report given the man’s profession. Other politicians were no better. Socrates moved from politicians to poets, whom he found incapable of explaining their own works. Much like the politicians, they thought themselves wise when they were not. Finally, he arrived at the artisans, who, Socrates says, “knew many fine things” and were thus wiser than he. But, like the poets and politicians, “they thought that they also knew all sorts of high matters” of which they knew nothing. From this experience he draws his famous conclusion: “He, O men, is the wisest, who, like Socrates, knows that his wisdom is in truth worth nothing.” Socrates’ conversations with artisans revealed a social division of knowledge. Each knew his separate art and was, in this regard, “wise.” Socrates blasts their “philosophical pretensions” and explicitly esteems practical, humble, workmanly knowledge over theoretical knowledge, at least among the humble artisans: “I found that the men most in repute were all but the most foolish; and that some inferior men were really wiser and better.” In his conversations with the poets, moreover, he discovered that some knowledge is tacit. He says: “not by wisdom do poets write poetry, but by a sort of genius and inspiration.” They know how to write poems, but can explain nothing of it to others.
In Plato's Apology, the knowledge divided among the people is constitutive knowledge. Much of it exists in the tacit forms of habit and knowhow. The poets could not say how to write a good poem, but they could do it. Finally, the division of knowledge corresponds to the division of labor. Plato's Socrates does not consider whether this division of knowledge and labor was planned or emergent. Ancient writers do not seem to have written much that very clearly anticipates Hayek's notion of spontaneous order according to which a seemingly planned social order may be the unintended consequence of human action. This point matters for us because, in my interpretation, the division of knowledge is itself a spontaneous order that emerges together with the division of labor. Hayek (1978) says the ancient Greeks saw the problem of unplanned order, "of course," but to discuss the problem they used the distinction between "natural (physei)" and "artificial or conventional (thesei or nomō)" (pp. 253–4). This vocabulary, Hayek says, "produced endless confusion" because of its "ambiguity" (p. 253). If the ancient Greeks were as familiar with the problem as Hayek seems to suggest, it seems puzzling that they could not overcome their supposed problem of vocabulary. It seems hard to suppress the suspicion that the confusion Hayek refers to was in their understanding and not only in their exposition. In any event, it seems that ancient writers did not produce any very clear anticipations of the idea of spontaneous order. And if that is right, they would not likely have produced a clear statement of the division of knowledge in anything like the broadly Hayekian terms I articulated previously. Hayek does provide a few suggestive quotes. In "all free countries," he says, there was the "belief that a special providence watched over their affairs which turned their unsystematic efforts to their benefit" (p. 254). He quotes Aristophanes to illustrate this belief:

There is a legend of the olden times
That all our foolish plans and vain glory conceits
Are overruled to work the public good.
But the notion of divine intervention to overrule the chaos and misery of choices made higgledy-piggledy seems far from Adam Smith’s invisible hand. Hayek’s quote from “the Attic orator Antiphon” seems closer to our themes. Antiphon says great age “is the surest token of good laws, as time and experience shows mankind what is imperfect.” This remark hints at both the limits of human knowledge and the tendency of accumulated experience to produce results superior to what design can produce.
Finally, Hayek (1978, p. 255) quotes a passage from Cato that expresses esteem for tradition as embodying the accumulated wisdom of many minds, each of which is weak and fallible. Roman law was exemplary because it was based upon the genius, not of one man, but of many: it was founded, not in one generation, but in a long period of several centuries and many ages of men. For, said he, there never has lived a man possessed of so great a genius that nothing could escape him, nor could the combined powers of all men living at one time possibly make all the provisions for the future without the aid of actual experience and the test of time.
This remark represents, for Hayek, a natural law tradition that kept alive some notion of spontaneous order. In Hayek's view, the tradition reached a zenith with the "Spanish Jesuits of the sixteenth century." These "Spanish schoolmen of the sixteenth century . . . emphasized that what they called pretium mathematicum, the mathematical price, depended on so many particular circumstances that it could never be known to man but was known only to God" (1989, p. 5). They seem to have had, therefore, a relatively clear recognition of the Hayekian problem of dispersed knowledge. They reached, Hayek says, very "modern" results before being "submerged by the rationalist tide of the following century" (1978, p. 255). Another sixteenth-century thinker, Giorgio Vasari, also reached results compatible with the general view adopted here. His Lives of the Painters, Sculptors and Architects chronicles the gradual accumulation of knowledge in these arts. He represents progress in the arts as a recovery of ancient practice and knowledge. There is, therefore, an element of teleology in his story. The ancient perfection to which he appealed, however, had no precedent and was achieved by the same piecemeal discovery process that Vasari chronicled for the modern figures covered in his history:

Having pondered over these things intently in my own mind, I judge that it is the peculiar and particular nature of these arts to go on improving little by little from a humble beginning, and finally to arrive at the height of perfection; and of this I am persuaded by seeing that almost the same thing came to pass in other faculties, which is no small argument in favor of its truth, seeing that there is a certain degree of kinship between all the liberal arts. Now this must have happened to painting and sculpture in former times in such similar fashion, that, if the names were changed round, their histories would be exactly the same. (i, 247)
Progress in all the liberal arts proceeds by the same piecemeal process of accumulation that Vasari chronicled for Tuscan art from Cimabue to
Michelangelo. It was, says Vasari, the same process in the ancient world and in the modern world. The knowledge acquired in this process is largely tacit. In both poetry and "the arts of design," works made in the "fire" of inspiration are better than those made "with effort and fatigue" (p. 274). And the process is not necessarily cumulative. Knowledge can both arise and disappear. Luca della Robbia invented glazed terra cotta sculpture, which was a "new form of sculpture" that "the ancient Romans did not have" (p. 280). His techniques were family secrets and when his family became "extinct," knowledge of the technique was largely lost and "art was deprived of the true method of making glazed work" (p. 280). Vasari relates at least one story that reveals a clear appreciation for the distribution of knowledge. Lorenzo Ghiberti won a competition to make a pair of bronze doors for the Florence Baptistery. These are now the north doors of the Baptistery and depict scenes from the Old Testament. Ghiberti's rivals for the commission kept their work "hidden and most secret, lest they should copy each other's ideas." Ghiberti, by contrast, "was ever inviting the citizens, and sometimes any passing stranger who had some knowledge of the art, to see his work, in order to hear what they thought and these opinions enabled him to execute a model very well wrought and without one defect" (p. 292). By crowdsourcing criticism of his work, Ghiberti was able to improve it enough to prevail in the competition. Vasari chronicles the emergence of a tradition from its earliest beginnings with Cimabue to its apotheosis in Michelangelo. (It does not matter for our purposes here that Vasari's choice of Cimabue as the one who gave "the first light to the art of painting" seems more political than factual.) In the process painters learned how to represent the human body realistically, how to foreshorten figures, how to represent a figure "shivering with cold" (p. 323), how to use perspective, how to give figures "grandeur and majesty" (p. 723), and so on. Competition and emulation among members of the community gave rise to a set of practices that, in turn, produced a series of objects and innovations. And yet this history seems to have had little or no influence on subsequent social thought. Mandeville makes no reference to it in the Fable. Adam Smith's library included a copy of Vasari's Vite (Bonar 1894, p. 116). No reference is made to it, however, in the Wealth of Nations, The Theory of Moral Sentiments, or Lectures on Justice, Police, Revenue and Arms. Whether because it was a victim of the "rationalist tide," or because it was considered a thing apart from social theory, or for another reason, Vasari's history seems to have left no discernible trace on social theory, notwithstanding the detailed and vivid chronicle he gave us of the evolution of a spontaneous order.
Hayek says that Mandeville’s “speculations . . . mark the definite breakthrough in modern thought of the twin ideas of evolution and of the spontaneous formation of an order” (1978, p. 250). Mandeville recognized that knowledge is divided and gave importance to the fact. He recognized the tacit dimension of knowledge. He distinguished constitutive from speculative knowledge, elevating the former relative to the latter. He saw constitutive knowledge as emergent from the division of labor. And he viewed the division of labor and both constitutive and speculative knowledge as products of a slow evolution driven by individual action, but not any human design. Hayek’s assessment of Mandeville’s role in creating the “twin ideas” of evolution and spontaneous order supports the impression that Mandeville may have been the first (or at least the first modern writer) to achieve anything like the Hayekian conception of human knowledge. As we shall see, later writers seem to have fallen short of Mandeville’s radical vision until Hayek resuscitated it beginning with his classic essay of 1937, “Economics and Knowledge.” Even since Hayek’s much cited work on this theme, most scholars take a less radical view of knowledge – one that overestimates the power of rational thought and speculative knowledge, at least if Mandeville and Hayek were right. Prendergast (2014, p. 87) nicely summarizes Mandeville’s views on these topics: “[F]or Mandeville, innovators were people of ordinary capacity who were alert to the opportunities and challenges of their environment. As a result of specialisation, they possessed tacit knowledge which was actualised in what they did rather than in theoretical propositions.” Bernard Mandeville noted the division of knowledge in society. Because “Men differ” in “Inclination, Knowledge, and Circumstances,” Mandeville explained, “they are differently influenced and wrought upon by all the Passions” (1729, vol. II, p. 90). He recognized this divided knowledge as an emergent phenomenon having more to do with experience than “reasoning a Priori” (1729, vol. II, p. 145). We “often ascribe to the Excellency of Man’s Genius, and the Depth of his Penetration, what is in Reality owing to length of Time, and the Experience of many Generations, all of them very little differing from one another in natural Parts and Sagacity” (1729, vol. II, p. 142). The art of sailing, for example, has been explained mathematically. But it is practiced “without the least scrap of Mathematics” (1729, vol. ii, p. 143). Ignorant sailors impressed into service soon learn to sail “much better than the greatest Mathematician could have done in all his Life-time, if he had never been at Sea” (1729, vol. ii, p. 143). Through practice, the knowledge becomes “habitual” (1729, vol. ii, p. 140). Similarly,
The Arts of Brewing, and making Bread, have by slow degrees been brought to the Perfection they now are in, but to have invented them at once, and à priori, would have required more Knowledge and a deeper Insight into the Nature of Fermentation, than the greatest Philosopher has hitherto been endowed with; yet the Fruits of both are now enjoy’d by the meanest of our Species, and a starving Wretch knows not how to make a more humble, or a more modest Petition, than by asking for a Bit of Bread, or a Draught of Small Beer.
“It is,” Mandeville says, not only that the raw Beginners, who made the first Essays in either Art, good Manners as well as Sailing, were ignorant of the true Cause, the real Foundation those Arts are built upon in Nature; but likewise that, even now both Arts are brought to great Perfection, the greatest Part of those that are most expert, and daily making Improvements in them, know as little of the Rationale of them, as their Predecessors did at first. (1729, vol. ii, p. 144)
This passage expresses very strongly the priority of practical, experiential knowledge over theoretical knowledge. Mandeville values constitutive knowledge above speculative knowledge and seems to think the overlap between them is small. Though he denied it (I 292), Mandeville sometimes seems to disparage education altogether, as when he opines that “the Knowledge of the Working Poor should be confin’d within the Verge of their Occupations” (1729, vol. I, p. 288). Prendergast (2010) explains that Mandeville placed great importance on education, but attacked “the view that the poor were poor because of their lack of education” (p. 415, n.1). Moreover, the education he favored was often conducted outside of schools and when “The Knowledge of Parents is communicated to their Off-spring, and every one’s Experiences in Life, being added to what he learned in his Youth, every Generation after this must be better taught than the preceding; by which Means, in two or three Centuries, good Manners must be brought to great Perfection” (1729, vol. ii, pp. 145–6). Even those receiving a university education should concentrate on matters trade-related. “No Man ever bound his Son ’Prentice to a Goldsmith to make him a Linen-draper; then why should he have a Divine for his Tutor to become a Lawyer or a Physician?” (1729, vol. I, p. 293). Mandeville’s attitude to the poor is a contested issue. On the one hand, he advises legislators that “the surest Wealth” of a nation “consists in a Multitude of laborious Poor” (1729, vol. I, p. 287). He says: “To make the Society happy and People easy under the meanest Circumstances, it is requisite that great Numbers of them should be Ignorant and Poor” (pp. 287–8). It may be that Mandeville had as harsh and exploitative a
view of the poor as Kaye believes. Kaye holds to this view even while noting that “here, as elsewhere,” Mandeville was able “to make a current creed obnoxious by the mere act of stating it with complete candour” (see p. lxxi of Kaye’s introduction to Mandeville 1729). This skill, however, may suggest a more Swiftian interpretation of Mandeville’s attack on charity schools. He rather clearly says that the poor can abide their bad condition only because they are ignorant and have never known anything else. He seems, then, to attribute any differences between prince and pauper to their different positions in the division of labor, rather than any innate differences. “Human Nature is every where the same,” he says; “Genius, Wit and Natural Parts are always sharpened by Application, and may be as much improv’d in the Practice of the meanest Villany, as they can in the Exercise of Industry or the most Heroic Virtue” (1729, vol. I, p. 275). He says wryly that “A Servant can have no unfeighn’d Respect for his Master, as soon as he has Sense enough to find out that he serves a Fool” (p. 289), and “No Creatures submit contentedly to their Equals, and should a Horse know as much as a Man, I should not desire to be his Rider” (p. 290). Mandeville’s attack on the charity schools was an attack on the clergy, not the poor. Mandeville adhered to Peart and Levy’s (2005, p. 3) analytical egalitarianism. Before coming to the charity schools, Mandeville developed at length the theme that “the Clergy are not possess’d of more intrinsick Virtue than any other Profession” (1729, vol. I, p. 173). The clergy are just like you and me, which makes them despicable, immoral, and corrupt. With “brutish Appetite” they “indulge their Lust.” We have “reason to believe,” Mandeville tells us, that what the clergy say “is full of Hypocrisy and Falshood, and that Concupiscence is not the only Appetite they want to gratify; that the haughty Airs and quick Sense of Injuries, the curious Elegance in Dress, and Niceness of Palate, to be observ’d in most of them that are able to shew them, are the Results of Pride and Luxury in them” (1729, vol. I, p. 173). Mandeville’s Letter to Dion (1732) may give support to the view that Fable had firmer moral foundations than Kaye seems to allow. Mandeville says that “The Fable of the Bees was a Book of exalted Morality” (p. 24) aimed at Christian hypocrisy. Thus attacked, his “Adversaries were obliged to dissemble the Cause of their Anger” (p. 25) by imputing immoralist ideas to Mandeville. Edwards (1964) and Harth (1969) both view the Fable as satirical. Harth (1969) castigates Kaye’s attempt to “impose an artificial unity” on the Fable, which only served to “flatten his satire into an insipid exercise in literary paradox” (pp. 325–6). He notes the passage in Fable in which
Mandeville has Cleomenes saying, "There is, generally speaking, less Truth in Panegyricks than there is in Satyrs" (Harth 1969, p. 322, and Mandeville, 1729, vol. II, p. 59). Edwards compares Mandeville to Swift (1964, pp. 198, 203, and 204) and emphasizes the "complexity of Mandeville's tone," which includes, he avers, heavy doses of irony (p. 204). In Edwards's plausible interpretation, Mandeville did have a low opinion of the charity schools, but not because he had a low opinion of the poor. Mandeville asks:
The use I am making of Mandeville’s work, however, does not require me to sort out the important interpretive question of his attitude to the poor. My attention here is focused solely on his importance for our understanding of the division of knowledge in society. As his use of the word “habitual” seems to suggest, Mandeville recognized the existence of tacit knowledge. The word “knowing,” he explains, has a “double Meaning” (1729, vol. ii, p. 171): “There is a great Difference between knowing a Violin when you see it, and knowing how to play upon it” (1729, vol. ii, p. 171). This is “the Difference between Knowledge, as it signifies the Treasure of Images receiv’d, and Knowledge, or rather Skill, to find out those Images when we want them, and work them readily to our Purpose” (1729, vol. ii, p. 171). Here, interestingly, part of tacit knowledge is the ability to skillfully call up and deploy explicit knowledge. Prendergast (2014, p. 105) says: “Mandeville appears to have been the first to develop a theory of social evolution based on the accumulation of knowledge derived in the course of economic activity and embodied in practices, procedures, goods and technologies.” Mandeville also recognized what we might call the “division of opinion” in society. Speaking of the “Judges of Painting,” he says, “There are Parties among Connoisseurs, and few of them agree in their Esteem as to Ages and Countries, and the best Pictures bear not always the best Prices” (1729, vol. I, p. 326). Such judges are, of course, experts, and it is interesting to note Mandeville’s skeptical view of them. We saw Mandeville explain how the imparting of knowledge from one generation to the next improves “Manners.” The “Precepts of good Manners”
for Mandeville “are no more than the various Methods of making ourselves acceptable to others, with as little Prejudice to ourselves as possible” (1729, vol. ii, p. 147). In keeping with the overall spirit of his “licentious system” (Smith 1759, VII.II.104), Mandeville gives a dark cast to this seemingly happy idea. “Manners and Good-breeding,” he says wryly, “consists in a Fashionable Habit, acquir’d by Precept and Example, of flattering the Pride and Selfishness of others, and concealing our own with Judgment and Dexterity” (1729, vol. I, p. 69). Good manners are the art of getting along. Mandeville does not think manners so conceived are easily acquired. It takes “two or three Centuries” of accumulated experience to bring manners to “great Perfection” (1729, vol. ii, p. 146). It is a slow evolution of prudent, agreeable, and sociable behavior shaped by commerce. Mandeville thus articulates the doux commerce thesis discussed by Hirschman (1977, pp. 56–63). This term, Hirschman explains, “denoted politeness, polished manners, and socially useful behavior in general” (p. 62). Recently, Henrich et al. (2005) and Pinker (2011) have given empirical support to this view. There is an epistemic dimension to the doux commerce thesis, at least in the form Mandeville gives it. The knowledge of how to behave in a prosocial manner emerges slowly from the accumulated experience of generations. It exists, I would add, mostly in the tacit form of accumulated habit.
7
The Division of Knowledge after Mandeville
VICO TO MARX
We have seen Hayek credit Mandeville with the “twin ideas” of evolution and spontaneous order. Mandeville’s close contemporary Giambattista Vico has also been cited for his anticipation of the idea of spontaneous order. Hirschman (1977, p. 17) notes that “Adam Smith’s Invisible Hand” can be “read into” Vico’s work. “But,” Hirschman cautions, “there is no elaboration and we are left in the dark” about how it all works. Moreover, in The New Science Vico articulates a stages theory of the history of the rise and decline of nations (1744, pp. 509–35) and attributes natural law to divine providence (1744, pp. 313–17). It is Divine Providence that has ordained both “the republics” and “the natural law of the people.” Vico thus seems an improbable source for enriching our understanding of SELECT knowledge, i.e., knowledge that is synecological, evolutionary, exosomatic, constitutive, and tacit. In book I, chapter I of The Wealth of Nations, Adam Smith gives two distinct accounts of the division of knowledge. On the one hand, the division of labor applies to “science”: In the progress of society, philosophy or speculation becomes, like every other employment, the principal or sole trade and occupation of a particular class of citizens. Like every other employment too, it is subdivided into a great number of different branches, each of which affords occupation to a peculiar tribe or class of philosophers; and this subdivision of employment in philosophy, as well as in every other business, improves dexterity, and saves time. Each individual becomes more expert in his own peculiar branch, more work is done upon the whole, and the quantity of science is considerably increased by it. (I.1.9)
This passage seems to suggest that speculative knowledge is an offshoot of constitutive knowledge. Peart and Levy say: “In Adam Smith’s account,
philosophy is a social enterprise that begins with universal experience” (Peart and Levy 2005, p. 4, n. 1). Of more humble forms of knowledge, Smith says, “Observe the accommodation of the most common artificer or day-labourer in a civilized and thriving country, and you will perceive that the number of people of whose industry a part, though but a small part, has been employed in procuring him this accommodation, exceeds all computation.” He goes into some detail on the variety of distant persons and tasks required for such “accommodation.” He invites us to consider “all the knowledge and art requisite” to provide a “woollen coat” or a glass window. “[I]f we examine,” he says, “all these things, and consider what a variety of labour is employed about each of them, we shall be sensible that without the assistance and cooperation of many thousands, the very meanest person in a civilized country could not be provided, even according to what we very falsely imagine, the easy and simple manner in which he is commonly accommodated” (I.1.11). Smith seems to view the division of knowledge as originating in the division of labor. He notes: “[T]he invention of all those machines by which labour is so much facilitated and abridged, seems to have been originally owing to the division of labour” (I.1.8). Often such machines “were originally the inventions of common workmen, who, being each of them employed in some very simple operation, naturally turned their thoughts towards finding out easier and readier methods of performing it” (I.1.8). Smith’s discussion of the division of labor reveals some appreciation of the division of knowledge in two aspects. First, he has a very clear statement of the division of cognitive labor within “speculation.” Second, he has at least some appreciation of dispersed knowledge of time and circumstance such as merchants and artisans have. The complex division of labor, he says, “exceeds all computation.” In both Smith and Mandeville, the slow evolution of the division of labor shapes constitutive knowledge. Habit, practice, and experience subject to the pragmatic test of workability ensure that constitutive knowledge is reliable and useful. For Mandeville, speculation is largely suspect even though The Fable of the Bees is itself a speculative work. Smith does not put speculation under a cloud, but he does seem to see it as an offshoot of constitutive knowledge and thus, presumably, dependent on it. Many eighteenth-century thinkers recognized that reason is fallible and opinions will differ. Like Mandeville, they recognized a division of opinion in society. Appreciation of the division of opinion in society seems to fit the
anti-Cartesianism expressed by Hume in his History of England (1778, vol. vi, p. 541) when he attributed the success of Descartes’ “mechanical philosophy” to “the natural vanity and curiosity of men” rather than any intrinsic merits it might have had. In Federalist 10, Madison says, “As long as the reason of man continues fallible, and he is at liberty to exercise it, different opinions will be formed. As long as the connection subsists between his reason and his self-love, his opinions and his passions will have a reciprocal influence on each other; and the former will be objects to which the latter will attach themselves.” In Federalist 65, Hamilton makes an interesting case for separating the impeachment trial of the Senate from the criminal trial by the Supreme Court. Otherwise, “Would there not be the greatest reason to apprehend, that error, in the first sentence, would be the parent of error in the second sentence? That the strong bias of one decision would be apt to overrule the influence of any new lights which might be brought to vary the complexion of another decision?” In the same essay, Hamilton says: “Where is the standard of perfection to be found? Who will undertake to unite the discordant opinions of a whole community, in the same judgment of it; and to prevail upon one conceited projector to renounce his infallible criterion for the fallible criterion of his more conceited neighbor?” In Federalist 50, Madison says: “When men exercise their reason coolly and freely on a variety of distinct questions, they inevitably fall into different opinions on some of them. When they are governed by a common passion, their opinions, if they are so to be called, will be the same.” We find in the Federalist Papers, then, clear expressions of the division of opinion in society, which is attributable at least in part to the fallibility and provincialism of human reason. This perspective is, of course, conducive to a skeptical view of experts. Owen (1841) seems to bemoan the division of knowledge when he says: “Hitherto the education of men’s faculties has limited their natural powers, rendering them incapable to form more than some incongruous notions of some small portion of some one general division of a most random, chaotic, and always, under every change, a most perplexed system” (p. 36). He looks forward to replacing this chaos with a planned system. The current society “has arisen without foresight or arrangement” (p. 40). His proposed system, by contrast, “will be based on well-defined, eternal principles of truth, and its various parts will be in accordance with those fundamental principles; every part of the whole system will be carefully prepared to produce benefit, not evil, to man” (p. 40). This planned system will, apparently, eliminate the division of knowledge. “The mind that will
be created” will have “clear and distinct ideas on all subjects that can be acquired and understood by the human faculties” (p. 39). The ideas of this “new mind” will be “in perfect unison or accordance with each other . . . and with all ascertained facts” (p. 39). Presumably, then, Owen’s system will wipe out the division of opinion as well. Owen’s dim view of the division of knowledge and opinion in society is likely related not only to his optimism about the possibility of planning and implementing a new, rational, and beneficent system, but also to his assumption that all useful knowledge is scientific. The subtitle of his 1841 book describes the “present system of society” as “derived from the inexperienced and crude notions of our ancestors.” Ideas come first and social institutions are subsequently “derived” from those ideas. The progress of knowledge consists in replacing the “ignorance and early errors of our ancestors” (p. 17) with science. The “progress made in real knowledge depended upon the success of individuals in discovering a sufficient number of facts to form a true groundwork for a science” (p. 18). Owen says: “And it is probable that our ancestors lived for a long period of ages before one fixed science was discovered; and during which period all were guided and governed by the instincts of their imaginations alone” (p. 18). Far from seeing speculative knowledge as an offshoot of constitutive knowledge, Owen thinks that all “real knowledge” is not only speculative but scientific. Owen is literally unable to imagine an accumulated tradition that is wiser than any individual in society. Before science, our ancestors “were guided by the instincts of their imaginations alone,” and these “first conceptions . . . appear to have been erroneous upon almost all subjects” (p. 18). Owen is a striking representative of the view that society can and should be rationally ordered. In the tradition of Hume, Smith, and Hayek, accumulated tradition is wiser than we are, not because we have somehow grown more stupid than our ancestors, but because our existing habits and institutions embody more facts and experience than any rational planning process can handle. All the smart is in the system, not the systematizers. For Owen, instead, all the smart is in the systematizers. Presumably, it was an engineering mentality similar to that of Owen that led Robert Mudie (1840) to deride the division of knowledge as an evil. Mudie is not an important thinker. The Oxford Dictionary of National Biography says he had an “unsatisfactory and ultimately wretched career.” But he gives vivid expression to the frequently encountered view that knowledge should somehow be unified:
It is true that, in so far as manual operations are concerned, there must be a division of labour in those higher branches of art as well as in branches which are more humble: but the division of labour is one thing, and a good; while the division of knowledge and thought is another thing, and an evil; and there are no professions in which the want of due knowledge of all the circumstances, even of circumstances which appear to common observation to be too remote for being taken into account, lead to errors, and evils, and losses, of such extent and magnitude. (Mudie 1840, p. 3)
It is not clear how similar Marx’s views are to those of Owen and Mudie. Roy Bhaskar (1991) says: “It is a truism that the tensions in Marxist thought between positivism and Hegelianism, social science and philosophy of history, scientific and critical (or humanist or historicist) Marxism, materialism and the dialectic etc. are rooted in the ambivalence and contradictory tendencies of Marx’s own writings.” These “ambivalent and contradictory tendencies” make any brief sketch of Marx’s views on the division of knowledge unlikely to satisfy all informed readers. Nevertheless, it seems fair to say that Marx’s doctrine of historical materialism places him closer to Owen than Smith. It is a contested matter what Marx “really meant” regarding historical materialism. He contrasted the “material transformations of the economic conditions of production” with “the legal, political, religious, aesthetic or philosophical – in short ideological forms in which men become conscious of this conflict and fight it out” (1859, p. 13). This distinction is the distinction between the substructure (or base) and the superstructure. Marxist scholars are not agreed on whether causality moves only from the material base to the cultural superstructure, or also in the opposite direction. Marx’s summary statement of 1859, however, seems to suggest that the causality is mostly or wholly from base to superstructure: In the social production which men carry on they enter into definite relations that are indispensable and independent of their will; these relations of production correspond to a definite stage of development of their material powers of production. The sum total of these relations of production constitutes the economic structure of society – the real foundation, on which rise legal and political superstructures and to which correspond definite forms of social consciousness. The mode of production in material life determines the general character of the social, political and spiritual processes of life. It is not the consciousness of men that determines their existence, but, on the contrary, their social existence determines their consciousness. (pp. 12–13)
While this view is far from Smith or Mandeville, it is nevertheless an economic perspective with links to their earlier writings. In Mandeville,
Smith, and Marx the “relations of production” give rise to ideas. In Mandeville and Smith, the division of labor does not so much cause ideas as enable them. (See Koppl et al. 2015a, especially pages 12–13, on the difference between causing and enabling.) In Marx the relation is one of strict causality. In Mandeville, Smith, and Marx, the “relations of production” evolve. In Mandeville and Smith the evolutionary process is an unintended consequence of human action having no predetermined end point. In Marx, the process is driven by “material” forces that push it inevitably forward to its necessary culmination in the modern classless society. And in Mandeville and Smith, the human knowledge of an epoch reflects those evolved “relations of production.” In Mandeville and Smith, the ideas caused or enabled by the division of labor represent primarily constitutive knowledge, not speculative knowledge. Marx seems to see the knowledge that is caused by the material relations of production as epiphenomenal. And this epiphenomenal knowledge is speculative and not constitutive. In Mandeville and Smith there are intricate and evolving feedback loops between the division of labor and the constitutive knowledge that sustains it. In Marx, instead, the causality is mostly if not wholly from the base to the superstructure. When Adam Smith said that the division of labor “exceeds all computation,” he was not complaining. On the contrary, he applies his observation to any “civilized and thriving country.” This uncomputable division of labor is a good thing in Smith. “It is the great multiplication of the productions of all the different arts, in consequence of the division of labour, which occasions, in a well-governed society, that universal opulence which extends itself to the lowest ranks of the people” (I.1.10). Marx seems less at ease with our ignorance of the particulars of the division of labor. Marx (1867) decries the “mystical character of commodities” (p. 82). Capitalism makes relations between people look like relations between things (p. 83). This false face is attributable to commoditization. And “articles of utility become commodities, only because they are products of the labour of private individuals or groups of individuals who carry on their work independently of each other” (pp. 83–4). The “mystery of commodities,” their “magic and necromancy” (p. 87), will dissipate only when the division of labor is planned and controlled. “The life-process of society, which is based on the process of material production, does not strip off its mystical veil until it is treated as production by freely associated men, and is consciously regulated by them in accordance with a settled plan” (p. 92). But if Adam Smith was right to say that the division of labor
“exceeds all computation,” then the division of labor is irremediably opaque. The “abstract” market relations Marx decried are a necessary feature of an advanced division of labor. In this situation, there will necessarily be expertise and a division of knowledge.
MENGER TO HAYEK
Carl Menger criticizes Adam Smith’s assertion that the division of labor is the main cause “in a well-governed society” of “that universal opulence which extends itself to the lowest ranks of the people” (1871, p. 72). The division of labor “should be regarded only as one factor among the great influences that lead mankind from barbarism and misery to civilization and wealth” (p. 73). Smith, claims Menger, totally missed the really important thing, the “progress of human knowledge” (p. 74). The progress consists in improved “knowledge of the causal connections between things.” Menger says, “Increasing understanding of the causal connections between things and human welfare, and increasing control of the less proximate conditions responsible for human welfare, have led mankind, therefore, from a state of barbarism and the deepest misery to its present stage of civilization and well-being” (p. 74). Menger (1871) criticizes Smith for neglecting the “progress of human knowledge.” We have seen Smith say, however, that “the invention of all those machines by which labour is so much facilitated and abridged, seems to have been originally owing to the division of labour” (1776, I.1.8). In Smith, this increase in knowledge is one of three causes of the greater productivity of the division of labor. Menger’s interpretation of Smith, therefore, is mistaken. The knowledge Menger extols is speculative knowledge of “causal relations.” It is objective, universal, and explicit. The objective quality of Mengerian knowledge is reflected in his dismissal of “imaginary goods,” which “derive their goods character merely from properties they are imagined to possess,” rather than properties they truly possess, “or from needs merely imagined by men” rather than true human needs (p. 53). Mengerian knowledge contrasts with the habitual knowledge Mandeville emphasized and “the knowledge of the particular circumstances of time and place” emphasized by Hayek. The knowledge Menger extols begins when “men . . . investigate the ways in which things may be combined in a causal process” (p. 74). For Menger, such knowledge transforms the division of labor. It is a cause of change in the division of labor. He gives no hints that the division of labor might in its turn cause such knowledge
to change. As in Owen, the causality is one way, running from speculative knowledge to the division of labor. Böhm-Bawerk (1888, p. 15) takes a view but little different from Menger, from whom, presumably, he adopted it. The “human mind” has helped us to increase production greatly. “In investigating the causal relation of things we come to know the natural conditions under which the desired goods come into existence.” Constitutive knowledge of production is speculative knowledge originating not in accumulated habit, but in “investigating the causal relation of things.” Böhm-Bawerk’s translator, Albion Small, seems to view the division of scientific knowledge as a necessary evil. Small (1908) does not distinguish between constitutive and speculative knowledge. He seems to have thought that social progress depends exclusively or almost exclusively on speculative and, indeed, scientific knowledge. As “social complexity increases” it becomes “more imperative” to achieve “a comprehensive insight into the reciprocal relationships of human beings.” He says, “It is constantly becoming more evident that science cannot possibly accomplish its utmost, if it merely strives for the minute, and dissolves itself in subdivisions. We are coming to see rather that this tendency can be only an auxiliary phenomenon in intellectual development, because all creative work has its conclusion not in unraveling but in combining” (p. 437). Note that Small is not calling for specialization to be followed by exchange. He is, rather, calling for “synthesis.” As if to eliminate doubt on this point, he says: “The division of labor is, and always will be, merely a technical trick. All completeness in art and science has its roots in unification” (p. 438). Thus, for Small as for Owen and Mudie, the division of knowledge is an evil to be overcome. It is not clear how Small’s desired “unification” might be achieved. We have seen Adam Smith say that the division of labor “exceeds all computation.” This was true already in 1776. The gap between our computational capacity and social complexity was greater still by 1908. The knowledge distributed throughout the social division of labor could not be shared universally because no one person could acquire and process so much knowledge and so many facts. Henry George (1898) may have been the first modern economist to raise the division of knowledge to a separate and explicit theme of political economy. When a modern ship builder “receives an order,” George explains, “he does not send men into the forest, some to cut oak, others to cut yellow pine,” and so on (p. 389). He lacks the knowledge to direct the myriads of actions required to generate a ship. “So far from any lifetime
sufficing to acquire, or any single brain being able to hold, the varied knowledge that goes to the building and equipping of a modern sailing-ship, already becoming antiquated by the still more complex steamer, I doubt if the best-informed man on such subjects, even though he took a twelvemonth to study up, could give even the names of the various separate divisions of labor involved” (p. 390). This epistemic insight allowed George to anticipate the “Austrian” argument on the impossibility of rational economic planning under socialism (pp. 391–401). Yeager (1984) draws attention to these themes in George. After George, Ludwig von Mises seems to be the first modern economist to raise the division of knowledge to a central theme of political economy. In his 1920 essay, “Economic Calculation in the Socialist Commonwealth,” Mises explains why socialist economies would not be able to match the output of capitalist economies even if all workers strove always for the greater good. It would be impossible, he said, to compute relative values without the aid of a unit of calculation, i.e., money. You cannot figure out how many fuzzy slippers are worth one blast furnace unless you reduce both to some unit of value. In other words, you need money. Moreover, for the attributed values to reflect the relative scarcity of fuzzy slippers and blast furnaces they must emerge from decentralized voluntary exchange. In other words, you need market prices. In particular, Mises argued, you need market prices for capital goods. Roughly: Do you have a stock market? (See Lachmann 1969, p. 161.) This argument led to the socialist calculation debate of the 1930s and 1940s. In working up his argument, Mises was driven to an explicit recognition that the knowledge driving the division of labor is decentralized and impossible to collect and compute centrally. Mises (1920, p. 102) says: [T]he mind of one man alone – be it ever so cunning, is too weak to grasp the importance of any single one among the countlessly many goods of a higher order [i.e. capital goods such as blast furnaces]. No single man can ever master all the possibilities of production, innumerable as they are, as to be in a position to make straightway evident judgments of value without the aid of some system of computation. The distribution among a number of individuals of administrative control over economic goods in a community of men who take part in the labor of producing them, and who are economically interested in them, entails a kind of intellectual division of labor, which would not be possible without some system of calculating production and without economy.
Mises’ insight into the “intellectual division of labor” set the stage for Hayek’s development of the idea in his 1937 essay “Economics and Knowledge.” Hayek (1937, p. 50) says: “Clearly there is here a problem of the
division of knowledge, which is quite analogous to, and at least as important as, the problem of the division of labor. But, while the latter has been one of the main subjects of investigation ever since the beginning of our science, the former has been as completely neglected, although it seems to me to be the really central problem of economics as a social science.” In a footnote to the phrase “division of knowledge,” Hayek quotes Mises (in the German original) saying: “In societies based on the division of labor, the distribution of property rights effects a kind of mental division of labor, without which neither economy nor systematic production would be possible” (Mises 1932, p. 101). In his 1945 essay on “The Use of Knowledge in Society,” Hayek clarifies that the division of knowledge applies not only to scientific knowledge but also to “the knowledge of the particular circumstances of time and place,” with respect to which “practically every individual has some advantage over all others” (p. 80) because of their unique position in the division of labor. Thus conceived, the division of knowledge is a necessary correlate of the division of labor. It is not only that each person has their specialized tasks. Each person has their specialized knowledge as well. And if this be true of every participant in the division of labor, the knowledge thus divided is mostly practical, applied, and particular. Only a relatively small fraction of it will be theoretical, learned, lofty. Hayek emphasized that the knowledge that we must deploy is local and particular, not scientific and philosophical.
AFTER HAYEK
The idea of the division of knowledge was not explicit in Alfred Schutz’s 1932 book The Phenomenology of the Social World. In retrospect at least, it seems implicit in his description of “the social stock of knowledge.” But Schutz did not raise it to an independent theme until after Hayek articulated the idea. Thus, Schutz may have learned the idea of the division of knowledge from Mises but learned from Hayek to elevate it to an independent theme. Schutz’s appreciation for the idea may have grown slowly over time. Schutz (1996) was a commentary on Hayek (1937), penned shortly after Hayek presented his ideas at a 1936 meeting of the Viennese Gesellschaft für Wirtschaftswissenschaft (Wagner et al. 1996, p. 93). Schutz’s commentary focused on the supposed methodological problem of imputing unknown knowledge to the ideal types of one’s model. Schutz seems to have found it puzzling that a theorist might impute knowledge to the ideal type that they, the theorist, did not fully possess. To do so, Schutz seems to have thought,
carried one out of the world of economic theory and into “daily economic life” (Schutz 1996, p. 104). Schutz relates this supposed problem to the anthill problem. “We should not be surprised when this ideal type, imagined as being involved in social relations, should now, in marvelous harmony, be in command of knowledge of such a kind” as to ensure the achievement of “economic equilibrium. Indeed, this wonderful harmony is pre-established – and that by the sage economist who designed the whole machinery and its parts in the manner in which Leibniz imagined that God the Creator established the world” (1996, p. 103). The theorist “alone knows the whole play” (1996, p. 103). At this point in time, Schutz did not seem to have appreciated the importance to social science of the division of knowledge. Later statements, however, reveal a greater appreciation of its importance. In his 1953 essay “Common-Sense and Scientific Interpretation of Human Action,” he says: With the exception of some economists (e.g. F. A. Hayek . . . ) the problem of the social distribution of knowledge has not attracted the attention of the social scientists it merits. It opens a new field of theoretical and empirical research which would truly deserve the name of a sociology of knowledge, now reserved for an ill-defined discipline which just takes for granted the social distribution of knowledge, upon which it is founded. (1953, p. 15, n. 29a)
Schutz speaks of “the social distribution of knowledge” and describes knowledge as “socially distributed.” He notes that the sociology of knowledge has considered the topic “merely from the angle of the ideological foundation of truth . . . or from that of the social implications of education, or that of the social role of the man of knowledge. Not sociologists but economists and philosophers have studied some of the many other theoretical aspects of the problem” (Schutz 1959, p. 149 as quoted in Berger and Luckmann 1966, p. 16). Berger and Luckmann build on this Schutzian notion of “the social distribution of knowledge,” which seems to have come from Hayek. Hayek’s real insight, I have suggested, was in recognizing the importance of the division of knowledge rather than the trivial and obvious fact that different people know different things. And yet one can find statements denying the necessity of the division of knowledge and even, in some contexts, its very existence. Roswell Sessoms Britton provides an example. He was an assistant professor of Chinese and mathematics at New York University from 1930 to his death in 1951 (Shavit 1990). Britton’s history of Chinese newspapers (Britton 1933) has been called a “pioneer work” (Walravens 2006, p. 159). Referring to about 1830, Britton (1934) says,
“There was division of labor in China, but not division of knowledge. The progress of scientific method, and of all the technologies, has compartmentalized the knowledge of the West and departmentalized our education and press. Now it is doing the same in China” (p. 188). Britton is referring to speculative knowledge and seems insensitive to the distinction between constitutive and speculative knowledge. From our point of view, Britton’s claim that there was no division of knowledge in China might seem absurd. It reflects, however, Britton’s failure to distinguish constitutive and speculative knowledge. Thus, our most basic ideas about what knowledge is and what counts as “knowledge” may easily obscure from our view the Hayekian division of knowledge. The philosophical definition of “knowledge” as “justified true belief” makes it hard to recognize skills, habits, and errors as “knowledge.” It makes it hard to recognize, therefore, the very existence of the Hayekian division of knowledge. In her presidential address to the Eastern Sociological Society, Fox (1978) takes a dim view of the division of knowledge in Belgium, imagining that it impedes the flow of information in that society. “The particularism and localism, the vested interests and distrustful caution that accompany them, and the elaborate division of knowledge as well as of labor characteristic of many Belgian organizations all converge to control, limit or impede the existence of information in the system, its circulation, and access to it” (pp. 217–18). Problems of poor information flow and inadequate information sharing, Fox believes, are a “widespread characteristic of the society, that, in turn, underlies the critical services that the agent intermediary performs, as a detective, diagnostician, and conveyor of information” (p. 218). Fox’s seeming equation of information and knowledge suggests insensitivity to tacit knowledge and accumulated habit. Herbert Simon (1962) emphasized the information overload that follows from attempts to share information and maintain communication between cooperating units within an organization or system. Fox seems oblivious to the need for an organized division of knowledge within organizations, and seems to view the “division of knowledge” as a bad thing. Notice also that “information” can be conveyed by communication and is thus an explicit form of knowledge. Thus, Fox seems to neglect the importance of tacit knowledge in sustaining organizations as well as the division of labor in society. Adopting a structuralist Marxist perspective, Anderson (1973) criticizes the division of knowledge conceived as a partition of scholarly investigation into distinct scholarly disciplines. “The division of knowledge is
useful to rulers as a control mechanism” (p. 3). Marxist social scientists must break through the “appearance” to reveal “essence.” He says: “And in social science, the fetishes of capitalism demand that appearances be demolished if the essence is to be seen” (p. 2). Anderson neglects constitutive knowledge to focus on “ideologies,” which “have an historical basis . . . in the division of labor” (p. 2). In a critical comment on this passage, Duncan and Ley (1982) explain that this remark does not mean that the “rulers” are “consciously trying to manipulate knowledge in order to control people” (p. 40). It is a structural outcome. “Again,” Duncan and Ley decry, “it apparently is the system that is the subject, working toward its own functional ends through people who are its unconscious agents” (p. 40). The division of knowledge in society implies, of course, “asymmetric information.” And the economic theory of experts may benefit from models of asymmetric information. It seems fair to say that asymmetric information is generally viewed as bad. It is often thought of as a market imperfection that must be overcome, perhaps with the help of a benevolent government. To cite a salient example, Akerlof (1970) said that “pathologies can exist” when a market has “different grades of goods” that demanders cannot readily distinguish among (p. 490). He does include a short section on “counteracting institutions” such as product warranties (pp. 499–500). Most of the article, however, is devoted to the “pathologies” created by asymmetric information. But the division of knowledge is a necessary correlate of the division of labor. It is thus good and not bad. It is bad only when compared to an imaginary world in which all practical knowledge can be expressed in words and mathematical symbols for rapid dissemination to everyone and in which each cooperating human has superhuman mental abilities with which to absorb and process all that information. But such an imagined world is cloud cuckoo land. It is too far from the human experience to matter for any decision we might make here in the imperfect world of our experience. Gatewood (1983) recognizes a “social division of knowledge” (p. 384) and links it to the division of labor (p. 385). He neglects tacit knowledge, however, and the role of trade in deploying dispersed knowledge. There are two ways information can be “stored outside the body of an organism itself,” Gatewood says. Information may be stored in “artifacts” such as “books, magnetic tapes, and laser discs,” or it may be stored “through a social division of knowledge which, in turn, depends on being able to access the information stored in other organisms” (p. 384). In this view, language and the division of labor make it possible to have exosomatic
information storage through a social division of knowledge (p. 385). Gatewood, however, thinks that exosomatic information storage is useful only through the linguistic communication of information from one party to another. He thus excludes habit and tacit knowledge from the social division of knowledge. Gatewood does not seem to recognize the potential for price signals to convey information. He argues as though dispersed knowledge can be deployed usefully only through talk. But trade too deploys dispersed knowledge. Prices communicate relative scarcities, though imperfectly, of course. While there is an analogy between words and prices, we should view “monetary exchange as an extra-linguistic social communication process” (Horwitz 1992). Noting that “The division of labor is equally a division of knowledge,” Luban, Strudler, and Wasserman (1992) view the “fragmentation of knowledge in modern bureaucracies and other large organizations” as a moral problem. They articulate an “obligation of investigation” (p. 2355). Employees of large corporations, for example, have an obligation to “discover . . . what other employees are doing with their work products” (p. 2383). They must “do their best to acquire the knowledge they lack” (p. 2384). The obligation to do no harm to others does seem to imply an obligation to acquire the knowledge of how we may be contributing such harm. As Luban et al. seem to recognize, however, this obligation is vague because we cannot acquire all the relevant knowledge and we have no clear objective standards for how much “investigation” is enough. In at least some passages, Luban et al. seem to see this obligation of investigation as imposing heavy responsibilities. They say: “[B]ecause of the great potential for harm arising from the division of labor and fragmentation of knowledge in a corporate or bureaucratic organization, employees may acquire duties far more demanding than doing no evil. They must look and listen for evil and attempt to thwart it if they discover it” (p. 2383). But elevating the obligation of investigation to the heights they seem to desire would likely render the division of labor unwieldy, for the sorts of reasons Simon (1962) explored. Here too, then, we see the division of knowledge viewed in largely negative terms, with little appreciation for its salutary role in society or the impossibility of overcoming the division of knowledge through education, communication, or other means. This chapter concerns the social division of knowledge. I have emphasized that knowledge is synecological, evolutionary, exosomatic, constitutive, and tacit (SELECT). My survey supports the claim, I think, that my broadly Hayekian perspective on dispersed knowledge is not so widespread
that we may take it for granted. I have not attempted to survey related works more focused on individual knowledge and cognition or philosophical discussions of the nature of human knowledge. Some authors of the twentieth and twenty-first centuries have expressed similar views of human knowledge. Wittgenstein’s notion of language games may be the most prominent example. I think it is fair to say, however, that most treatments of language games, including those of Wittgenstein himself, do not consider their origin and evolution. Hutchins (1991, 1995), who proposes a view of “distributed cognition,” is another example. Epistemological naturalism in philosophy has tended to bring out the synecological dimension of knowledge. D’Agostino (2009) is probably the best representative of this trend. Goldman (2010) is also suggestive. The exosomatic dimension of knowledge has been explored in the literature on “externalism” in both philosophy and cognitive neuroscience. Building in part on Hutchins and then on recent cognitive science, Clark and Chalmers (1998) propose that the mind extends beyond “skin and skull.” In general, there is more than one large literature related to SELECT knowledge that I am neglecting. But my purpose in this and the previous chapter (Chapter 6), again, has been to review ideas on the social distribution of knowledge, not cognitive science or philosophy. The upshot of my survey is that the idea of SELECT knowledge has not greatly informed the human and social sciences, including economics. Hayek (1945) is cited frequently, but it is unusual for authors making such citations to get beyond the rather banal insight that different persons know different things. Taking SELECT knowledge seriously, instead, drives us toward a more thoroughly epistemic economics, which has implications for the economic theory of experts and, correspondingly, a great variety of applied problems such as the efficacy of central planning, false convictions in the criminal justice system, and the causes of economic stagnation.
PART III
INFORMATION CHOICE THEORY
8
The Supply and Demand for Expert Opinion
THE ECONOMIC POINT OF VIEW ON EXPERTS
Economic theory identifies the likely consequences of different market structures. Those consequences can be surprising. Rent control, for example, is usually touted as a measure to make housing more affordable. Standard economic theory surprises us by showing that it tends to make housing less affordable. (The evidence seems to support this conclusion of economic theory. For theory and evidence, see Coyne and Coyne 2015.) Journalists and others often speak of “the law of unintended consequences” when discussing such surprises. An economic theory of experts has its surprises as well. Many of us tend to think of experts as reliable and truthful. Examples of expert failure may be met with calls for oversight or “regulation.” We are not used to asking about the structure of the market for expert opinion. We should. We tend to think of experts in hierarchical terms, but we should take a transactional approach. Different market structures will create different outcomes. The general thrust of both mainstream and mainline economics is that competition tends to produce outcomes that are generally viewed as favorable, whereas monopoly and monopsony tend to produce outcomes that are generally viewed as unfavorable. This generalization applies to the market for expert opinion as well. Details matter. One must not simply invoke the potentially empty words “competition” and “monopoly,” declaring the one to be good and the other bad. But in the market for expert opinion, as with other markets, the general rule is that competition tends to outperform the available alternatives. In a competitive market for expert opinion, the return to the marginal expert’s specialized knowledge will tend toward the ordinary rate of return adjusted for factors such as risk and the pains or pleasures of acquiring and
using that knowledge. Entry restrictions will tend to raise the rate of return on the expert’s specialized knowledge and increase the expert’s monopoly power as measured by the elasticity of demand. In Chapter 11 I will note that professional organizations such as the American Medical Association may work toward entry restrictions that tend to reduce the supply of such professionals and raise the price of their professional advice. Economists often judge markets by efficiency. Efficiency is good; waste is bad. There are exceptions, to be sure. Efficiency in the market for assassins is probably bad. Nor is efficiency the only thing that matters. Fairness is important, and economists today do not neglect it. (See, for example, Smith 2003; Henrich et al. 2005; Smith 2009 and Henrich et al. 2010.) But efficiency is an important normative criterion often invoked by economists. The economic theory of experts has mostly neglected efficiency so far. The focus has been not efficiency, but veracity. Truth is good; falsity is bad. In spite of this shift in normative criterion, the generalization favorable to economic competition tends to hold in the market for expert opinion.
IDENTIFYING THE COMMODITY AND DEFINING “EXPERT”
An economic theory of experts must identify the commodity being traded. As we have seen, past thinkers have defined experts by their expertise, with the exception of writers on expert witnesses in court. Expertise, however, is not a commodity. It is human capital that allows the expert to produce expert opinions. The expert’s expertise as such is not for sale. The relevant commodity for an economic theory of experts is expert opinion. An economic theory of experts is a theory of the supply and demand for expert opinion. The commodity has unique properties distinguishing it from other commodities. But market participants are not extraordinary. In particular, experts are people and do not change their human qualities when supplying expert opinions. Experts respond to the same incentives as people in other areas of human action, and in the same ways. Levy and Peart call this principle “analytical egalitarianism” and apply it “not only to policy makers but also to the experts who influence policy” (2017, p. 7). They say, “We have used the phrase ‘analytical egalitarianism’ to describe the presumption that people are all approximately the same messy combinations of interests” (2017, p. 7). Experts are not likely to be mustache-twirling fiends, but neither are they likely to be selfless servants of the public interest. For example, experts
may be biased by sympathy for their clients. Such bias may emerge from human qualities we value, and yet cause the expert’s opinion to deviate from the public interest. This insight is a truism: Experts are ordinary humans, not otherworldly creatures. The disciplined pursuit of this common-sense observation helps us to reach conclusions about experts that might be surprising or counterintuitive.
INFORMATION CHOICE THEORY
An economic theory of experts should thus rely on the underlying logic of public choice theory. The Calculus of Consent, which was first published in 1962, is the great early statement of public choice theory. In it, Buchanan and Tullock assumed that “the representative or the average individual acts on the basis of the same over-all value scale when he participates in market activity and in political activity” (1962, p. 19). People are the same in economic and political exchange. The economics of experts pushes the same basic idea by assuming experts are driven by the same motives as nonexperts. In particular, we must abandon the idea that experts seek only the truth without regard to motives such as fame and fortune (Peart and Levy 2005, pp. 87–8). What Buchanan has said of public choice applies to the economics of experts as well. “Public choice did not emerge from some profoundly new insight,” he notes. It “incorporates a presupposition about human nature that differs little, if at all, from that which informed the thinking of James Madison” and, indeed, the “essential scientific wisdom of the 18th century,” which was largely “lost” by the middle of the twentieth century (Buchanan 2003, pp. 11–12). Like public choice theory, to paraphrase Buchanan, the economics of experts does little more than incorporate a rediscovery of this eighteenth-century wisdom and its implications into analyses and appraisals of experts. Because of its similarities to public choice theory, we might call the economics of experts information choice theory. The expert must choose what information to provide to others. Just as public choice theory includes a theory of government failure, information choice theory includes a theory of expert failure. It helps us to understand, in other words, when relying on experts may not produce the outcomes we desire and expect. It helps us decide when experts are more or less “reliable” in the sense of Chapter 2 and when nonexperts are more or less “empowered.” Information choice theory supports the view that monopoly expertise tends to produce a poorer epistemic performance than competition. It
notes, however, that many variables besides the number of experts influence the performance of epistemic systems, including redundancy, “synecological redundancy” (defined in Chapter 9), the correlation structure among expert errors, and conditions of expert entry and exit. Information choice theory replaces the naïve model of the “objective” expert by supply and demand models in which the opposed interests of rival experts can be leveraged to enhance epistemic outcomes. I develop the theory of expert failure in Chapters 10 and 11. The term “information choice” suggests that scholars should recognize that experts choose what information to convey. This point is recognized in many contexts, including models of asymmetric information, principal-agent models, signaling games, and sender-receiver models. Economists and other scholars do not always apply the insight consistently, however. Levy (2001) and Peart and Levy (2005) note that economists tend to assume other economists are pure truth seekers. In earlier chapters we have seen that experts are sometimes lionized and represented as immune to ordinary incentives. In information choice theory, an “expert” is anyone paid for their opinion. Economists, forensic scientists, and statisticians are experts; racecar drivers are not. My definition of expert implies that entrepreneurs and profit-seeking enterprises are not experts. An entrepreneur’s output might, of course, be his or her opinion. Consultants are paid for their opinions. But the entrepreneurial function is not identical to that of the expert, nor is one an aspect or subset of the other. The entrepreneur is paid for their output. The young Steven Jobs, for example, was paid for his computers, not his opinions on the future of digital technology. This is true even though Jobs would not have cofounded Apple Computer if he had not held prescient opinions on digital technology. Experts are in a different position. They are paid for the opinions themselves.
HONEST ERROR AND WILLFUL FRAUD
An economic understanding of experts would improve understanding in areas to which economists have given scant attention. Much of the literature on forensic science, for example, assumes that forensic scientists are either pure truth seekers or willful frauds. In an important article on observer effects in forensic science, Risinger et al. (2002) distinguish fraud from unconscious bias. “We are not concerned here with the examiner who, in light of the other findings, deliberately alters her own opinion to achieve a false consistency. That is the perpetration of an intentional fraud on the
justice system, and there are appropriate ways with which such falsification should be dealt” (p. 38). Information choice theory challenges this sharp distinction. Information choice theory tells us that “the line between ‘honest error’ and willful fraud is fluid” in part because “there are no bright lines as we move from the psychological state of disinterested objectivity to unconscious bias to willful fraud” (Koppl 2005a, p. 265). An expert has many ways to introduce bias into their work. The expert themself may be only half aware of their use of such techniques, or completely unaware. When incentives skew honest error, the erring person knows, presumably, what their incentives are. The error may be “honest,” however, if the person does not know that those consciously known incentives have altered their perceptions. The error may also be “honest” if the person underestimates the effect and therefore fails to fully compensate for it. There are many very different perceptions that may be altered by incentives. The fingerprint examiner may not notice dissimilarities between a known and unknown print, for example. A research scientist must search for deviations from experimental protocol before accepting the data generated by an experimental trial. That search may be more diligent or thorough when an experimental trial has produced disappointing results. If the scientist is unaware of this asymmetry in their search efforts, the results will be biased in spite of a conscious desire to be unbiased. If incentives skew “honest” errors, then we should recognize that experts choose what information to share and that incentives influence that choice.
THE ECONOMICS OF EXPERTS FILLS A NICHE
Information choice theory is the application of familiar economic logic to relatively straightforward questions about experts and expertise. Sandra Peart and David Levy have made the most complete articulations so far of an economic theory of experts (Feigenbaum and Levy 1993, 1996; Levy 2001; Peart and Levy 2005; Levy and Peart 2007, 2008a, 2008b, 2010; Levy et al. 2010; Levy and Peart 2017). My coauthors and I have considered comparative institutional analysis and the mechanism design problems associated with information choice (Koppl 2005a, 2005b; Koppl, Kurzban, and Kobilinsky, 2008; Cowan and Koppl 2011, 2010). Milgrom and Roberts (1986), Froeb and Kobayashi (1996), Feigenbaum and Levy (1996), and Whitman and Koppl (2010) are information choice models. Koppl (2012b, pp. 177–8) explains why Sah and Stiglitz (1986) is not an information choice model.
Milgrom and Roberts (1986) is a canonical model in information choice theory. The authors consider a naïve recipient of information confronting competitive suppliers of information. “The question at issue,” they write (1986, p. 25), “is under what circumstances competition among providers of information can help to protect unsophisticated and ill-informed decision-makers from the self-interested dissembling of information providers.” If the competitors’ interests are “strongly opposed,” as in a civil trial in a common-law country, even a naïve information recipient will come to the full-information decision. Interests are strongly opposed when for every pair, d, d', of alternative choices the information recipient might make, one of the interested information suppliers prefers d to d' and the other prefers d' to d. If the interests of the competing information suppliers are strongly opposed then one of them always has an incentive to provide additional information. Assume for a moment that the information revealed to the recipient does not induce the full-information choice, d*. Then it leads to some other choice, d0. Because interests are strongly opposed, one of the information suppliers prefers d* to d0 and thus has an incentive to reveal more information. Even though the decision maker is naïve, competition ensures that he reaches the full-information decision. The Milgrom and Roberts model shows that a battle of the experts is not a race to the bottom (Koppl and Cowan, 2010). It shows that competition among experts will influence their choices of what information to share. Their result suggests the epistemic value of having opposing interests for competing experts. Feigenbaum and Levy (1996) is also a canonical model of information choice. The authors imagine a biased researcher estimating the “central tendency” of a random variable. They consider both the researcher who wants as large a number as possible and the researcher who wants the smallest number possible. The researcher will use several estimators and report the result that best fits their bias. Feigenbaum and Levy (1996) run a simulation study with several symmetric distributions. They compute the “central tendency” of each distribution in four different ways, namely, “the mean, the midrange, the median and a 20 percent trimmed mean” (p. 269). (To estimate a 20 percent trimmed mean, ignore the largest 20 percent and the smallest 20 percent of values in a sample and take an ordinary mean of the remainder.) Each of these techniques (when applied to symmetric distributions) is unbiased considered in itself. But the technique of using them all while reporting only preferred results is decidedly biased, as Feigenbaum and Levy (1996) show in detail.
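The mechanism is easy to see in a few lines of code. The Python sketch below is an illustrative reconstruction, not Feigenbaum and Levy's own study: the normal distribution, the sample size of twenty-five, and the researcher who wants the largest possible number are assumptions added here for concreteness. Each of the four estimators is unbiased on its own, yet the rule of computing all four and reporting the maximum yields reported figures that run systematically above the true central tendency of zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def trimmed_mean(sample, trim=0.2):
    # Drop the largest and smallest 20 percent, then take an ordinary mean.
    s = np.sort(sample)
    k = int(len(s) * trim)
    return s[k:len(s) - k].mean()

def estimates(sample):
    # Four estimators of central tendency, each unbiased for a symmetric distribution.
    return {
        "mean": sample.mean(),
        "midrange": (sample.min() + sample.max()) / 2,
        "median": np.median(sample),
        "trimmed mean": trimmed_mean(sample),
    }

# A researcher who wants as large a number as possible computes all four
# estimators and reports only the largest.  The true central tendency is 0.
trials = 10_000
reported = [max(estimates(rng.normal(size=25)).values()) for _ in range(trials)]
print("average reported estimate:", np.mean(reported))  # well above zero
```

Reporting the minimum instead produces the mirror-image downward bias; the bias lives in the reporting rule, not in any one estimator.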
Feigenbaum and Levy (1996) show that fraud may be unnecessary for a biased scientific expert. The strategic choice of which results to report supports the expert's bias. The expert in their model must make a choice about what information to share.

Whitman and Koppl (2010) present a rational-choice model of a monopoly expert. In their model a monopoly forensic scientist chooses whether to "match" ambiguous crime-scene evidence to a suspect. (The assumption of binary choice is a simplification. In forensic-science practice the word "match" is used less often than words such as "individualization," "association," and "consistent with.") Because the evidence is ambiguous, the forensic scientist must choose when to declare a match. The forensic scientist must make a choice about what information to convey. He or she must choose whether to report "match" or "no match." Whitman and Koppl show that a rational Bayesian will be influenced by the results of the forensic examination, but also by their prior estimate of the probability of guilt and by the ratio of the disutility of convicting an innocent to the utility of convicting the guilty. In some cases, priors and utilities may render the results of the forensic examination irrelevant to the expert's expressed opinion. They note the importance of institutional factors in influencing both priors and utilities. Working as an employee of the police department, for example, will likely increase the forensic scientist's prior belief in the suspect's guilt and lower the disutility of convicting an innocent relative to the utility of convicting the guilty. The institutional structure creates a bias even when experts are perfectly "rational" Bayesian decision makers.

If experts are paid for their opinions, then they are agents of the payers, their principals. Thus, the subject of expertise has been treated in the economics literature mostly in the context of principal-agent models. It is probably fair, however, to distinguish standard principal-agent models from information choice theory. In the canonical model of Ross (1973), the principal cannot observe the agent's action, but the principal can observe the payoff, which depends on chance and the agent's action. This model clearly applies to situations in which the agent is not an expert. Workers paid on commission or at piece rates, for example, are not being paid for their opinions, but for their results. Ross's assumption that payoffs are observable does not always apply to experts. The doctor tells me I will die tomorrow if I do not take his patent medicine. I take the tonic and live another day. My continued existence does not help me to discriminate between the hypothesis that the doctor is a quack and the hypothesis that the doctor saved my life. A similar logic seems to apply to the expert opinions of economists. Economists have debated whether "the stimulus" created by the American
Recovery and Reinvestment Act of 2009 worked. This debate turns mostly on the size of the Keynesian multiplier, and opinions differ on that topic. If the multiplier is low, the stimulus did not work. If the multiplier is high, then the stimulus prevented output and employment from going even lower. In this situation, the payoff of the actions taken is not observable.

In the example just given, there may be some ambiguity about the identity and preferences of the principal. In other cases, though, it seems clear that experts may be hired to provide correct information and that the information they provide cannot be confirmed or can be confirmed only at a relatively high cost to the principal. The case of Brandon Mayfield illustrates the point. Brandon Mayfield was arrested as a material witness to the Madrid train bombing of March 11, 2004. Mayfield was arrested after the FBI made a "100% identification" of him as the source of a latent fingerprint at the crime scene (OIG 2006, pp. 64 and 67–8). Mayfield's attorney requested an independent opinion and the court agreed to pay for an examiner to be chosen by the defense (OIG 2006, p. 74). That examiner supported the FBI identification of Mayfield as the source of the crime-scene print (OIG 2006, p. 80). The Spanish authorities, however, connected the crime-scene fingerprint to a different person. It seems the Spanish authorities were right and the FBI mistaken. The FBI withdrew its identification and declared the crime-scene fingerprint to be of "no value for identification purposes" (OIG 2006, pp. 82–8). Mayfield was released and the FBI issued an apology (OIG 2006, pp. 88–89). In this case, the independent fingerprint examiner was an agent for the defense; the agent provided information that the defense could not challenge or question. This examiner's error might not have been revealed if the Spanish authorities had not fortuitously identified a more likely source for the crime-scene fingerprint.

The example of Brandon Mayfield illustrates a difference between standard principal-agent models and information choice models. In Ross (1973), the agent's actions are distinct from their output. If the agent is an expert, the observable outcome of the expert's activity is a part of the agent's action and not separable from it. This lack of separability does not help the principal to monitor the agent if the principal lacks the expertise that the agent was hired to deploy. Thus, it may be difficult for the principal to assess the outcome of the agent's actions. As the Mayfield case illustrates, getting opinions from other experts may not solve the principal's monitoring problem if errors and inaccuracies are correlated across experts. The model of Milgrom and Roberts (1986) discussed previously in this chapter suggests that competition among experts with opposed interests may help third parties to judge expert opinions.
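A small simulation may help fix ideas about why a second opinion loses value when expert errors are correlated. The sketch below is my own illustration; the error rates and the "shared context" device that correlates the two examiners' mistakes are invented numbers, not estimates drawn from the Mayfield case or from any study.

    import random

    random.seed(1)

    def examiner(truth, shared_context, p_slip=0.05, p_context_error=0.60):
        # One examiner's report. A shared biasing context pushes the examiner
        # toward the same mistake as a colleague; otherwise slips are rare and independent.
        if shared_context and random.random() < p_context_error:
            return not truth
        if random.random() < p_slip:
            return not truth
        return truth

    def false_agreement_rate(correlated, trials=100_000):
        # Share of cases in which two examiners agree on a report that is false.
        wrong_agreements = 0
        for _ in range(trials):
            truth = False  # say, the suspect is not the source of the print
            context = correlated and random.random() < 0.5  # biasing context sometimes present
            a = examiner(truth, context)
            b = examiner(truth, context)
            if a == b and a != truth:
                wrong_agreements += 1
        return wrong_agreements / trials

    print("independent errors:", false_agreement_rate(correlated=False))  # about 0.05 * 0.05
    print("correlated errors :", false_agreement_rate(correlated=True))   # far higher

With the numbers assumed here, two independent examiners agree on a false report in well under 1 percent of trials, while examiners exposed to the same biasing context do so in roughly a fifth of trials. Agreement between experts then tells the principal much less than a naive count of opinions would suggest.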
Information choice theory is more sensitive to the fallibility of experts than standard principal-agent models. It makes the somewhat innovative assumption that incentives skew expert errors, including "honest" errors. Information choice theory also gives greater attention to four motives absent from standard principal-agent models, namely identity, sympathy, approbation, and praiseworthiness. Finally, information choice theory does not presume an isolated principal-agent institutional structure. Instead, the theory recognizes that the larger institutional context may create different degrees and forms of competition among experts. In both its positive and normative aspects, information choice theory explicitly considers what we might call the "ecology of expertise."

Information choice models will often assume asymmetric knowledge, but information choice and asymmetric information are distinct. A model of asymmetric information might contain no experts, and a model of information choice may contain no asymmetric information. In the illustrative model of Akerlof (1970), that of the market for used cars, the used-car owner has information unavailable to the potential buyer. But the owner is not an expert because he is not paid for his opinion, but for his car. The potential buyer knows the relative frequency of lemons, but has no information indicating whether any particular car is a good car or a lemon. Thus, Akerlof's basic model has no experts and is not an example of information choice theory. A referee or umpire in a sporting event need not necessarily have asymmetric information, particularly when it is cheap to make a video recording of the event and study an instant replay of close calls. Nevertheless, we pay referees and umpires to give their expert opinions about whether a goal was scored or a foul committed. Arbiters do not always have asymmetric information about the facts of the dispute or the rules governing dispute resolution. And yet they are paid as experts to give an opinion that may be binding on the parties. Thus, information choice does not necessarily imply asymmetric information. Nevertheless, it seems likely that the experts in most information choice models will have asymmetric information. A physician, for example, is an expert with asymmetric knowledge. The patient is buying a mixture of medical services and medical advice.

Expert opinion is often a "credence good," which Darby and Karni (1973, p. 69) define as goods whose quality "cannot be evaluated in normal use." As the umpire example illustrates, however, expert opinion may be subject to "evaluation in normal use." Thus, although models of credence goods may prove useful for information choice theory, the two model classes are distinct. The credence goods literature has focused on cases
such as car repair, in which the same party supplies diagnosis and treatment. The question is whether the supplier will recommend a needlessly costly treatment. Darby and Karni (1973) find that market mechanisms such as branding can mitigate, but not eliminate, the risk of fraud with credence goods. They doubt the efficacy of "governmental intervention even in markets where deliberate deception is a regular practice" because "governmental evaluators will be subject to much the same costs and temptations as are present for private evaluators" (p. 87). Emons (1997, 2001) finds fraud-free equilibria under both competition and monopoly. Dulleck and Kerschbamer (2006) provide a simple model that generates most of the earlier results in this literature. The literature shows that under both monopoly and competition, cheating or overtreatment occurs less than our untutored intuition might have supposed.

THE DEMAND FOR EXPERT OPINION
There are many sources of demand for expert opinion. Households may demand expert opinions when they are dissatisfied with reputational mechanisms and word of mouth. Word of mouth transmits something similar to expert opinions. But instead of an explicit quid pro quo, it is a form of gift exchange. (Mauss 1925 is the classic study of gift exchange, but see also experimental studies such as McCabe et al. 2001 and Henrich et al. 2005.) I tell you my experience with different butchers and you tell me your experience with different bakers. We share our opinions. When this way of getting information begins to seem unsatisfactory, information seekers may begin to demand expert opinions. Presumably, such dissatisfaction grows more likely as group size grows. Gossip may provide adequate information about social partners for groups of about 100–200 persons, the group size Aiello and Dunbar (1992, p. 185) associate with the emergence of modern human language. But this sort of information sharing may still function well in many relatively small-numbers contexts such as that of a local neighborhood in a city. Film critics once provided expert opinions on what movies their readers would likely enjoy. Information aggregation sites such as Rotten Tomatoes have at least partially displaced this function for many moviegoers. In general, information aggregation services can sometimes substitute for the paid opinions of experts. Businesses also demand expert opinions. As I noted in Chapter 2, managers require the opinions of experts in many areas, including
engineering, accounting, and finance. Such experts will often be members of professional organizations and bound by professional standards and ethics. Langlois (2002, pp. 19–20) notes the modular structure of the division of labor in market economies. Such modularity in production corresponds to modularity in knowledge, which is the context for the emergence of professions such as accountancy. Business managers draw on this modularized knowledge by seeking the expert opinions of various professionals. These professionals have a duty to their clients or employers, but also to the epistemic and ethical standards promulgated (for good or ill) by their professional associations.

Both businesses and households face an uncertain future. Expanding on the concepts of a "market for preferences" (Earl and Potts 2004) and "novelty bundling" (Potts 2012), Koppl et al. (2015a) discuss "novelty intermediation." Koppl et al. (2015b, p. 62) say: "With Potts (2012) and Earl and Potts (2004), the idea is that certain businesses know about recent innovations that have already taken place, whereas the retail consumer does not. These businesses inform the consumer by suggesting certain combinations or offering products that exhibit certain combinations." In the analysis of Koppl et al. (2015a), instead, "the intermediary knows what combinations of inputs to the firm's production process may generate new discoveries" (2015b, p. 62) and their corresponding innovations.

Finally, of course, governments may demand expert opinions. As I have already noted, the American progressive movement essentially wanted to establish the rule of experts (Wilson 1887; Leonard 2016). But even in more liberal and democratic regimes, governments will rely on experts in law, military strategy, espionage, and so on.

Households, businesses, and governments may demand expert opinions to help them know the unknowable. I have discussed the oracle mongers of Athens. Ancient generals often sought auguries. Traders in financial markets demand "technical analysis" of stock price movements even though theory, history, and good common sense show it to be useless. (See, for example, LeRoy 1989; Arthur et al. 1997; and Brock and Hommes 1997.) There has always been brisk demand for medical advice, even when medical experts were more likely to do harm than good, and for weather forecasting, even before scientific meteorology existed. There is always a brisk demand for magical predictions of the unpredictable. Expert failure is likely in the market for impossible ideas even under more or less competitive conditions. Competition helps even here, but only so much.
THE SUPPLY OF EXPERT OPINION
There are many sources of supply of expert opinion. As we have seen, in some discussions, experts are taken to be figures whose opinions are, in Schutz’s (1946, p. 465) formulation, “based upon warranted assertions.” This sort of view makes the suppliers of expert opinion a breed apart. Bogus experts then become Lakatosian “monsters” (Lakatos 1976, p. 14) who are to be explained away by fraud, or lack of regulation, or some other special external consideration. A theory building on a broadly Hayekian conception of the division of knowledge recognizes that anyone can be an expert. For this reason, I have little to say in general about the supply of expert opinion beyond examining the market structure. But in the next chapter, I will discuss information choice theory’s assumption about the motives of experts.
9
Experts and Their Ecology
MOTIVATIONAL ASSUMPTIONS OF INFORMATION CHOICE THEORY
There are three key motivational assumptions of information choice theory. First, experts seek to maximize utility. Thus, the information-sharing choices of experts are not necessarily truthful. Second, cognition is limited and erring. Third, incentives influence the distribution of expert errors. I take up each point in turn.
Experts Seek to Maximize Utility

The information-choice assumption that experts seek to maximize utility is parallel to the public-choice assumption that political actors seek to maximize utility. I have used the word "seek" to avoid any suggestion that experts must be modeled as "rational." The assumption of utility maximization should not be given a narrowly selfish meaning. Agents who seek to maximize utility may seek praise or praiseworthiness as well as material goods. Truth may often be an element in the utility function, though it will usually be subject to tradeoffs with other values. While I think truthfulness is a value that should generally be treated like any other, I do not wish to deny that some experts will find ways to constrain themselves to a corner solution. Hausman and McPherson (1993, p. 685, n. 21) relate an apposite story about Lincoln's response to a man who tried to bribe him. (They report that they could not trace the origin of the story.) "Lincoln kept brushing him off genially and the briber kept increasing his price. When the price got very high, Lincoln clouted him. When asked why he suddenly got so aggressive, Lincoln responded – because you were getting close to my price!" On the other hand, we also
have instances in which truth seems to be quite absent from an expert’s utility function, as in the fraudulent forensic science of Fred Zain, who simply lied in court about tests he had never performed (Giannelli 1997, pp. 442–9). Abe Lincoln and Fred Zain remind us that different experts will have different utility functions, in some of which truthfulness will have a higher marginal value than in others. The assumption of utility maximization has created controversy for public choice theory. That theory has been criticized for assuming “egoistic rationality” (Quiggin 1987). There is some truth to the claim that public choice theorists have assumed people are selfish in some sense. For example, Dennis Mueller says “The basic behavioral postulate of public choice, as for economics, is that man is an egoistic, rational, utility maximizer” (Mueller 1989, pp. 1–2). Mueller’s 1986 presidential address to the Public Choice Society, however, already called on public choice theorists to draw on “behaviorist psychology” to account for the observation that people seem to cooperate more in the prisoners’ dilemma than “rational egoism” would predict. Since then, public choice and economic theory in general have moved toward more nuanced theories of individual action and motivation. In their 1997 “critical survey” of public choice, Orchard and Stretton say: “The assumption of exclusively rational, egoistic, materially acquisitive behaviour is now acknowledged to be unhelpful and to generate inadequate explanations of political performance” (p. 423). While behaviorist psychology has not figured prominently in the move toward a more nuanced image of human action, economists and public choice theorists are drawing from other parts of modern psychology, including cognitive psychology (Frohlich and Oppenheimer 2006) and evolutionary psychology (Congleton 2003). The move away from egoistic rationality is a move back to the original core position of Buchanan and Tullock. In The Calculus of Consent, they say quite explicitly that the assumption of narrow self-seeking is not essential to their analysis. Their argument “does not depend for its elementary logical validity upon any narrowly hedonistic or self-interest motivation of individuals in their behavior in social-choice processes.” It depends only on the more mild “economic” assumption that “separate individuals are separate individuals and, as such, are likely to have different aims and purposes for the results of collective action” (Buchanan and Tullock 1962, p. 3). The situation is the same for information choice theory. On the one hand, the assumption of egoistic rationality among experts is clearly exaggerated. On the other hand, it seems likely that in information choice
theory, as in public choice theory, the assumption of egoistic rationality will sometimes be a serviceable first approximation that helps draw our attention to basic issues and relationships, while also pointing to empirical anomalies that can be resolved only by relaxing the assumption. And in information choice, as in public choice, we have clear early statements repudiating facile models of “selfish” behavior. Koppl and Cowan (2010) emphasize identity as an expert motive. Peart and Levy (2005, 2010) have emphasized sympathy, approbation, and praiseworthiness as expert motives. We should recognize, therefore, that the “economic” perspective of information choice theory does not require us to assume that an expert – or anyone else – is “selfish” in any substantive sense. We should repudiate both the naive view that experts are always and everywhere truth seekers and the cynical view that truthfulness has no value to any expert ever. Information choice theory includes identity as a motive of experts. Akerlof and Kranton (2000, 2002) introduce identity to the utility function. Akerlof and Kranton (2005, 2008) put identity into the utility function of the agent in an otherwise standard principal-agent model. Cowan (2012) and Koppl and Cowan (2010) apply the principal-agent model of Akerlof and Kranton (2005, 2008) to forensic science. Identity creates the risk of faction. An expert who identifies with the standards and practices of their expert group may be less likely to contradict the opinions of their fellow experts even when doing so would be correct or in the client’s interest. The fingerprint examiner hired for Mayfield, for example, missed evidence tending to exonerate Mayfield, and one may wonder whether a sense of identity with the fingerprint profession may have contributed to this oversight. An expert’s identity as a professional or sympathy for his or her fellow experts may compete with sympathy for the client. (See Levy and Peart 2017, pp. 197–209 on factionalized science.) Information choice theory includes sympathy as a motive of experts. Adam Smith defined sympathy as “our fellow-feeling with any passion whatever,” including “our fellow–feeling for the misery of others” (Smith 1759, p. 5). Peart and Levy have emphasized the important role of Smithian moral sentiments motivating action. Peart and Levy (2005) note that sympathy, or its lack, may be an important motive. They cite both Adam Smith (1759) and Vernon Smith (1998). Information choice theory includes approbation as a motive of experts. Levy (2001, pp. 243–58) contains a helpful discussion of approbation. He quotes Adam Smith saying “man” has “an original desire to please, and an original aversion to offend his brethren” (Smith 1759, as cited in Levy 2001, p. 244). Peart (2010) emphasizes approbation as an expert motive.
Information choice theory includes praiseworthiness as a motive of experts. Levy and Peart (2008a) extend the attention to Smithian moral sentiments by noting the importance of praiseworthiness. "Motivation by praiseworthiness may answer the question of how the two great reforms [Adam] Smith advocated, free trade and abolition of slavery, could be effected despite Smith's pessimism to the contrary," they note. "If a world without special privilege becomes a norm and action to move the world closer to the norm is a praiseworthy act, then the desire to behave in a praiseworthy manner can change the world" (p. 475).

We have seen Mandeville adopt a rather dark and satiric perspective on human motivation. But he too affirms the importance to each of us of the opinions of others. He describes the "Witchcraft of Flattery" (p. 37) as "the most powerful Argument that could be used to Human Creatures" (p. 29). Our feelings of honor and shame come from our susceptibility to flattery (p. 29). These two "passions" induce us to act as if our inner motives were better than they really are. He says, "So silly a Creature is Man, as that, intoxicated with the Fumes of Vanity, he can feast on the thoughts of the Praises that shall be paid his Memory in future Ages with so much ecstasy, as to neglect his present Life, nay, court and covet Death, if he but imagines that it will add to the Glory he had acquired before" (p. 237). It is, I think, just to roughly equate Mandeville's "Vanity" or, in some passages, "Pride and Vanity" with Adam Smith's "impartial spectator." Other writers have found some degree of equivalence between Mandevillean pride and Smith's impartial spectator. Of the relationship between Mandeville's Fable and Smith's Theory of Moral Sentiments, Kerkhof says:

The central question is whether morality can be reduced to a "regard to the opinion of others", in other words whether virtue ultimately appears to be a form of vanity. In the final analysis, this proves to be the case: "virtue" appears to be a more effective way of receiving applause from the audience – internalized as the "impartial spectator". According to the Theory of Moral Sentiments morality is produced by something like a market of sympathetic feelings, although Smith as a moral person does not want to accept this. The flowery language Smith uses to cloak his findings, hides a darker, Mandevillean view of man as an animal living in constant anxiety about the opinion of others. (Kerkhof 1995, p. 221)
Kerkhof (1995, p. 219) draws our attention to Arthur Lovejoy’s letter to Kaye (editor of Fable) in which Lovejoy says the Theory of Moral Sentiments “carried out” in “more detail” the “general idea” from Mandeville that “pride” or “glory” is “the genesis of morality.” (Some passages of the letter are transcribed on page 452 of volume II of Kaye’s edition of the
Fable.) Branchi (2004, n.p.) says "Smith's effort to draw a clear distinction between love for praise and love for praiseworthiness" is "an attempt to trace a dividing line through the middle of that which Mandeville groups together as egoistic passions. Honor and the passions on which it is based rest in precarious balance on this boundary line." Mandeville even recognizes human compassion, which is close to what Smith called "sympathy." We have seen Smith define "sympathy" as "fellow-feeling." Mandeville defined "compassion" as "a Fellow-feeling and Condolence for the Misfortunes and Calamities of others" (1729, vol. I, p. 287). Mandeville and Smith both recognize and emphasize compassion. Unlike Smith, however, Mandeville places equal emphasis on malice as its ever-present dual. "Some are so Malicious they would laugh if a Man broke his Leg, and others are so Compassionate that they can heartily pity a Man for the least Spot in his Clothes; but no Body is so Savage that no Compassion can touch him, nor any Man so good-natur'd as never to be affected with any Malicious Pleasure" (p. 146). Compare Mandeville's remarks with the opening paragraph of Smith's Theory of Moral Sentiments:

How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it. (1759, p. 61)
It is as if Smith could not bear to deploy Mandeville’s moral psychology without first stripping away or cloaking the darker elements. He stripped away the unpalatable truth that even the most “good-natur’d” person has at least some tincture of “Malicious Pleasure” in the suffering of others. And he cloaked vanity in the finery of the impartial spectator. With Mandeville, we have both Smithian “sympathy” and the Smithian desire of praiseworthiness. But we also have darker elements ever present. By muting the unharmonious notes in Mandeville, Smith gives us a more optimistic vision of human psychology. Mandeville placed greater emphasis on malice, self-deception, and the universality of hypocrisy.
Smith seems to reassure us that we are not so bad, really, all things considered. Mandeville insists that we are far more wicked than we dare admit to ourselves. And yet the basic psychological mechanisms at work in the two theories are similar. Kaye is ambivalent on this issue, but at one point describes the differences between Smith's psychology and Mandeville's as reducible "mostly to a matter of terminology" (Mandeville 1729, p. cxlii, n. 3).
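The motives surveyed in this subsection can be summarized in a stylized objective function. The display below is an illustrative formalization of my own, not notation drawn from the information choice literature; the arguments and weights are placeholders:

U_i = u(m_i) + γ_i·identity_i + σ_i·sympathy_i + α_i·approbation_i + π_i·praiseworthiness_i + τ_i·truthfulness_i,

where m_i is expert i's material payoff and the nonnegative weights γ_i, σ_i, α_i, π_i, and τ_i vary from expert to expert. For an expert such as the Lincoln of the bribery story, τ_i is effectively at a corner solution; for an expert such as Fred Zain, it appears to have been close to zero. Nothing in the assumption that experts seek to maximize utility requires any particular weight to dominate.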
Cognition is Limited and Erring The assumption of some sort of “bounded rationality” is no longer unusual in economics. The term has a fluid meaning, but always implies some idea of cognitive limits and the potential for error. While bounded rationality is not always assumed in economic analysis today, it is probably uncontroversial even among economists to say that the cognition of experts is limited and erring. Nevertheless, many economists seem somewhat disposed to neglect such limits when discussing different forms of expertise. Howard Margolis provides an illustration. Margolis (2007) comments on Ferraro and Taylor (2005), who asked participants at the leading profession meeting for academic economists to identify the opportunity cost in a multiple-choice problem drawn verbatim from a successful introductory textbook, that of Frank and Bernanke (2001). The correct answer was the least popular. Ferraro and Taylor infer that pedagogy must be improved (p. 11). Margolis (2007), however, attributes the errors to a “cognitive illusion” and notes that “economists are human” and, therefore, “vulnerable to cognitive illusions” (p. 1035). Margolis (1998) identified a cognitive illusion that ensnared Tycho Brahe, influenced theory choice, and yet went undetected for 400 years. Thus, experts are human even when the stakes are high and many of them consider the same topics over long periods of time. The assumption that experts seek to maximize utility might create the mistaken impression that information choice theory repudiates “bounded rationality.” It does not. If experts have “bounded rationality,” then they will not always be able to maximize their objective functions. Felin et al. (2017) suggest why we should not limit our concept of “bounded rationality” to that found in the work of Herbert Simon or of Daniel Kahneman. They note that perception arises through an interplay of environment and organism and is, therefore, “organism-specific.” Color vision illustrates the point. Writers sometimes equate “color” with given light frequency. But people typically perceive an object’s “color” as
constant when the conditions of viewing it are variable. The green of my shirt is the same at twilight and midday. Nor does it change when a cloud passes across the sun. The "spectral reflectance" (Maloney and Wandell 1987) arriving from the object to our retina is variable, and yet we perceive the object's color as constant. Such perceived constancy is a feature of human color vision and not a bug. Musacchio (2003) uses synesthesia to illustrate the claim that our phenomenal experience has no particular relationship to the external objects of that experience (p. 344). Felin et al. (2017) use the evocative phrase "mental paint." If we accept this sort of view, then, as Felin and colleagues emphasize, we should not imagine that "organisms (whether animals or humans)" correctly perceive an external environment and are bounded in their perception only because they "are not aware of, nor do they somehow perceive or compute, all alternatives in their environments." They argue that this mistaken view of perception is characteristic of the leading economic models of bounded rationality, including those of Herbert Simon and Daniel Kahneman. They criticize this implicit idea of the "all-seeing eye" in theories of bounded rationality. We should recognize instead, they say, that the internal perceptual structure of the organism determines the sort of world it lives in. They favorably cite the biologist Jakob von Uexküll, who "argued that each organism has its own, unique 'Umwelt' and surroundings." Uexküll said that "every animal is surrounded with different things, the dog is surrounded by dog things and the dragonfly is surrounded by dragonfly things" (Uexküll 1934, p. 117, as quoted in Felin et al. 2017).
Schutz (1945) attempts to develop some of “the many implications involved” in James’s understanding of “reality” and “sub-universes” (p. 533). “Relevance” is a central concept in that effort. Each of us has a different place in the division of labor. Therefore, each of us knows different things and has different sensibilities to events around us. We have different stereotypes and recipes to guide us in our daily lives. Each person’s “prevailing system of interests” determines which elements of his “stock of knowledge” are relevant to him (Schutz 1951, p. 76; see also Schutz 1945, pp. 549–51). This system of Schutzian “relevancies” guides the person and influences the sorts of discoveries they can make. Thus, “a barber is more likely to note details of a person’s haircut than a dentist, and a dentist more likely to note details about a person’s teeth” (Risinger et al. 2002, pp. 18–19). In a fairly literal sense, the division of knowledge in society produces a separate world for each person to inhabit. Our “worlds” overlap, but each of us occupies a unique position in the system and lives, therefore, in a unique “Umwelt.” This important implication of the division of knowledge should warn us against adopting the view that “science” or some other expert perspective is “the” correct view. In some contexts, of course, the expert’s expertise trumps other views. The world is not a flat plate resting on the back of a turtle. And there is often only one correct answer to a question. Whether the defendant shot the victim is not a matter of perspective even if the available evidence does not show us who did it. Either Jones did shoot Smith or Jones did not shoot Smith. But the expert’s system of relevancies is not that of their clients or other nonexperts. And it is not generally desirable to impose the expert’s relevancies on others. All perception comes from a point of view and no one point of view among the many is uniquely correct or superior to all others in all contexts. It is hard to imagine how there could be a division of knowledge in society and simultaneously one uniquely correct overarching point of view or one privileged system of relevancies. Totalitarian attempts to impose one unifying system of ideas and relevancies are utterly and necessarily futile. The heterogeneity and multiplicity of legitimate perspectives is a necessary consequence of the division of labor and its correlate, the division of knowledge. But this heterogeneity and multiplicity of perspectives implies that any one person will have a partial and limited model of the world. Such limits constitute a form of “bounded rationality,” but one very different from that of Herbert Simon and Daniel Kahneman. Call it “synecologically bounded rationality.” The bound on rationality is “synecological” because it is the necessary consequence of the fact that knowledge is synecological when there is a social division of labor.
Computability theory and related branches of pure mathematics imply a different sort of limit on cognition, at least if we are unwilling to assume that the agents in our model can compute the uncomputable. A mathematical function is ("Turing" or "algorithmically") computable if a Turing machine, the mathematicians' ideal type of a digital computer, can be programmed to compute it. (Often Turing computability is simply called "computability.") Building on Lewis (1992), Tsuji et al. (1998) reveal just how pervasive (algorithmic) noncomputability is. They show that even finite games can be undecidable. A finite game is trivially decidable, of course, if we have a complete and explicit list of strategies and payoffs. We can list every strategy combination and its corresponding payoff vector. We can then simply run down this finite list and see which entries, if any, are Nash equilibria. (The sketch at the end of this subsection illustrates the procedure.) But if the game is described in a formal language, it may be impossible to solve by "brute force." In that case the game may be undecidable. In a deep and important remark, Tsuji et al. (1998) note, "Formalized theories are about strings of symbols that purport to represent our intuitions about concrete objects" (p. 555). We use general terms to describe the world, only rarely employing detailed listings and the like. Thus, our account of the world is radically incomplete and constantly subject to limits of computability and decidability.

Velupillai (2007) shows that "an effective theory of economic policy is impossible" for an economy that can be modeled as "a dynamical system capable of computation universality" (pp. 273 and 280). Policy makers would have to compute the uncomputable to know the future consequences of their policies. Velupillai links his important result to F. A. Hayek's "lifelong skepticism on the scope for policy in economies that emerge and form spontaneous orders" (Velupillai 2007, p. 288).

Canning (1992) showed that a Nash game may not be algorithmically solvable if there is no "Nash equilibrium in which the equilibrium strategy is a best reply to all best replies to itself" (p. 877). He calls this condition "strict Nash." This result is fundamental, but he curiously argues that his result implies only a "slight" adjustment to then-current practice in social science and applied game theory. The requirement of "strictness" excludes many games, including two given considerable attention by von Neumann and Morgenstern (1953), namely, "Matching Pennies" and "Stone, Paper, Scissors." It seems doubtful whether Canning's restriction should be considered "slight."

Algorithmic information theory, which "studies the size in bits of the smallest program to compute something" (Chaitin, da Costa, and Doria 2012, p. 50), reveals a sense in which it may be impossible to have a theory
of some complex phenomena. To predict, explain, or even merely identify a sufficiently complex system may require a description so lengthy that no simplification of the original system is achieved. The behavior of a sufficiently complex system (one at the top of a Chomsky–Wolfram hierarchy) cannot be predicted ahead of time (Wolfram 1984; Markose 2005). We can do no better than watch it unfold. Wolpert (2001) showed that for any pair of computers it is impossible for each to reliably predict the output of the other, even if the computers are somehow more powerful than Turing machines. An analog "hypercomputer" could theoretically compute functions that are not Turing computable (da Costa and Doria 2009, p. 80). Opinions differ on whether hypercomputation is even theoretically possible (Cockshott et al. 2008; da Costa and Doria 2009). In any event, it is not a current reality. Moreover, Wolpert (2001) shows that computability problems would arise even in a world with hypercomputers, as long as they are physically realizable. Theorists should not impute to the agents in their models the ability to compute uncomputable functions or compress incompressible data. This methodological restriction imposes a minimal form of bounded rationality. As the contributions to Velupillai (2005) demonstrate, however, even this "minimal" form of bounded rationality is often violated in standard economic models.
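The brute-force check mentioned earlier for explicitly listed games can be illustrated with a short sketch. It is my own illustration, not code from any of the works cited; the payoff table is Matching Pennies, which has no pure-strategy Nash equilibrium, so the exhaustive search comes back empty.

    from itertools import product

    # Matching Pennies written out as an explicit, finite payoff table:
    # payoffs[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
    payoffs = {
        ("Heads", "Heads"): (1, -1),
        ("Heads", "Tails"): (-1, 1),
        ("Tails", "Heads"): (-1, 1),
        ("Tails", "Tails"): (1, -1),
    }
    row_strategies = ["Heads", "Tails"]
    col_strategies = ["Heads", "Tails"]

    def pure_nash_equilibria(payoffs, rows, cols):
        # Run down the finite list of strategy profiles and keep those from which
        # neither player gains by a unilateral deviation (pure strategies only).
        equilibria = []
        for r, c in product(rows, cols):
            row_pay, col_pay = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= row_pay for r2 in rows)
            col_ok = all(payoffs[(r, c2)][1] <= col_pay for c2 in cols)
            if row_ok and col_ok:
                equilibria.append((r, c))
        return equilibria

    print(pure_nash_equilibria(payoffs, row_strategies, col_strategies))  # [] for Matching Pennies

When the game is given as an explicit list, this exhaustive enumeration settles the question in finitely many steps. The point of the undecidability results cited above is that when a game is described only in a formal language, no such finite enumeration need be available, and the existence of an equilibrium can become undecidable.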
Incentives Influence the Distribution of Expert Errors

There are two aspects to the assumption that incentives skew expert errors. First, experts may cheat or otherwise self-consciously deviate from complete truthfulness. They may do so to serve either an external master or an internal bias. Second, experts may unknowingly err in ways that serve an external master or internal bias. The first point is relatively straightforward, but not always consistently applied. I think the second point is quite important, but it seems to have been almost entirely overlooked by the economics profession. In a standard principal-agent model, expert "errors" are the product of guile. In a standard model they must be the product of guile because cognition is unerring. In information choice theory, however, experts may make errors that are not fully or even remotely intentional. In this sense, they may make "honest" errors. These honest errors, however, are influenced by incentives. The lower the opportunity cost of making an error of a particular sort, the higher the incidence of such errors,
independently of the degree of intention associated with the error. In other words, experts respond to bias. An expert may make honest errors in favor of their client because their client pays them or because they have sympathy for their client. Sympathy for someone else, other experts perhaps, may encourage an error that hurts the client. In all such cases, incentives skew honest errors. The assumption that incentives skew honest errors is a natural assumption only if we are also assuming fallible cognition. When economists assume strong forms of rationality, the assumption may be unnatural or even impossible. In that case, one may be led to model expert errors as random draws from a symmetric distribution. I have not been able to find a clear statement in the economics literature that honest errors are skewed by incentives. Gordon Tullock probably comes as close as anyone in his book The Organization of Inquiry. As Levy and Peart (2012, p. 165) explain, Tullock asked what science would be like “if the subjects of our discipline could trade with us, offering us things of value to bend our results.” His answer was that “we would have economics, a racket not a science.” The word “racket” appears twice in the book. It is seen first in a footnote to “Some Academic ‘Rackets’ in the Social Sciences” by Joseph Roucek (1963). “What it all amounts to,” says Roucek in his summary, is this: the academic profession, as any other, carries on shady practices . . . It tries to prevent self-evaluation or an empiric description of its underground practices which might give the idea to the layman that, after all, even the professors are just human beings . . . and members of a profession which has to sell its wares to itself and especially to the unsuspecting public. (p. 10)
The second use of the word “racket” comes later in the book. Tullock notes that there is always a demand for supposed experts to defend tariffs and the like. The existence of demanders who do or might “need the assistance of either someone who believes in tariffs or an economist who is in this racket makes it possible for them to continue to publish, even in quite respectable journals. Thus a dispute which intellectually was settled over a century ago still continues” (Tullock, 1966, p. 158). Tullock (1966), Levy and Peart (2010), Diamond (1988), Wible (1998), and others have made the “economic” assumption that incentives help shape the opinions of experts. Levy and Peart (2010) suggest that the assumption has not been well understood or widely adopted. In all these versions of the idea, however, the assumption is that the scientist or other
expert knowingly serves his interests. Some evidence suggests, however, that the same thing can happen without the affected expert knowing it. Some evidence has been discussed in the economics literature with the Allais paradox, preference reversal, and other cases in which people seem to contradict themselves. It is hard to square such phenomena with strong forms of economic rationality. Jakobsen et al. (1991) report on a patient with brain damage who could accurately describe small blocks, but had difficulty in grasping them because of an inability to appropriately direct the motions of her hand. Goodale and Milner (1995) use this case as part of an argument that the brain has two largely separate visual systems. They say: “the neural substrates of visual perception may be quite distinct from those underlying the visual control of actions. In other words, the set of object descriptions that permit identification and recognition may be computed independently of the set of descriptions that allow an observer to shape the hand appropriately to pick up an object” (Goodale and Milner 1995, p. 20). A difference in evolutionary time separates the “functional modules supporting perceptual experience of the world . . . and those controlling action within it” (Goodale and Milner 1995, p. 20). This study is but one of many tending to show that the idea of a unitary mind is no longer consistent with the established results of empirical science. If the hypothesis of a unitary mind cannot be sustained, then it is possible to consider evidence that incentives skew honest errors. The literature on “observer effects” seems to show that our opinions may serve our interests even when we know it not. Robert Rosenthal is probably the leading theorist of observer effects. He was a coauthor on Risinger et al. (2002), which includes a masterful review and summary of the literature on observer effects. Risinger et al. (2002, p. 12) explain: “At the most general level, observer effects are errors of apprehension, recording, recall, computation, or interpretation that result from some trait or state of the observer.” We are generally more likely to observe, perhaps mistakenly, what we expect to see than what we do not expect to see, what we hope to see than what we do not hope to see. This tendency is strengthened by ambiguity. Thus, the three keys to observer effects are the observer’s state of expectation, their state of desire, and the degree of ambiguity in the material being observed. Krane et al. (2008) say that “Observer effects are rooted in the universal human tendency to interpret data in a manner consistent with one’s expectations. This tendency is particularly likely to distort the results of a scientific test when the underlying data are ambiguous and the scientist is exposed to domain-irrelevant information that engages emotions or desires” (p. 1006).
Figure 9.1 Context influences perception.
Figure 9.1 illustrates how errors of observation can be skewed in the direction of our expectations. If the central figure is seen only with the vertical elements, it will be seen as the numeral 13. If it is seen only with the horizontal elements, it will be seen as the letter B. Context creates expectations that influence perception. Expectations influence observations without determining them. Risinger et al. (2002, p. 13) say: "The cognitive psychology underlying observer effects is best understood as a cyclical interplay between pre-existing schemata and the uptake of new information. Schemata are mental categories . . . that provide the framework for perception and reasoning." They quote Ulrich Neisser saying "we cannot perceive unless we anticipate, but we must not see only what we anticipate" (Neisser 1976, p. 43, as quoted in Risinger et al. 2002, p. 14). This point merges with the point I made earlier that the division of knowledge implies a different perceptual framework for each person. The unique Umwelt of each person creates a different set of anticipations and, correspondingly, a different set of perceptions and perceptual possibilities. In other words, synecologically bounded rationality causes observer effects.
perspectives. What may at first look like selfishness may simply be the fact that your Schutzian relevancies can enter my thinking at best only partially and imperfectly. We praise persons unusually able to apprehend and respond to the relevancies of others as highly empathetic and, perhaps, intuitive. Even the most empathetic and intuitive among us, however, will have little insight into the relevancies of strangers. The greater the social distance between thee and me, the less able I am to serve your interests directly. The epistemically necessary partiality in each person's perspective makes us all look more "selfish" than we really are.

I have said that synecologically bounded rationality causes observer effects. Risinger et al. (2002, pp. 22–6) note that observer effects are pervasive and enhanced by "desire and motivation." The selection of decision thresholds may be influenced by incentives, as Risinger et al. (2002, p. 16) briefly note. They point out that "observer effects manifest themselves most strongly under conditions of ambiguity and subjectivity" (p. 16, n. 62). Heiner (1983, 1986) imports signal detection theory into basic microeconomics. Whitman and Koppl (2010) give a rational-choice Bayesian model, discussed in Chapter 8, tending to support the claim of Risinger et al. regarding ambiguity. Tversky and Kahneman (1974) discuss representativeness bias (p. 1124), availability bias (p. 1127), and adjustment and anchoring bias (p. 1128). These biases have since become familiar to economists. As far as I can tell, we can accept these "biases" as often present in human decision making even if we accept fully the critique of Felin et al. (2017) discussed earlier in this chapter.

Economists may be less familiar with role effects, which are important for information choice theory. Pichert and Anderson (1977) had their subjects read a story about two boys playing in a house. The story contained information about the house such as the presence of a leaky roof and the parents' rule to keep a side door unlocked at all times. Subjects were instructed to read the story from either the perspective of a burglar or that of a realtor. Subjects had better recall of details relating to their assigned role than of details relating to the opposite perspective. Anderson and Pichert (1978) and Anderson et al. (1983) found that when subjects were asked to switch roles in a second recall task, their memory improved for details relevant to the new role and degraded for details irrelevant to it. This effect was noted whether the second memory task was performed with a delay of five or ten minutes or a delay of about two weeks (Anderson et al. 1983, pp. 274–5). Role effects reflect synecological (and thus SELECT) knowledge and, therefore, synecologically bounded rationality. To view the house from the
burglar's point of view we must imagine ourselves in the burglar's Umwelt. Imagining life in that Umwelt directs our attention to burglar things. The experimental results of Anderson and his coauthors support the view that incentives skew the distribution of honest error. They induced observer effects with no more powerful an incentive than the injunction to read or recall from a given perspective. The perspective adopted determined what "errors" – in this case memory lapses – were more likely. These errors seem to have been unintended and unrelated to any scruples the subjects may have had about honesty or other virtues.

Risinger et al. (2002, p. 19) define conformity effects as "our tendency to conform to the perceptions, beliefs, and behavior of others." They matter for information choice theory because experts may have relatively high status. Risinger et al. (2002) describe an experiment reported in Sherif and Sherif (1969) in which subjects were asked to say how far a light in a darkened room had moved. The light was in fact stationary, but might seem to move under the optical conditions created by the experiment. Subjects were shown the light and asked to give their opinions in one another's presence. "Although each person's perceptions of motion range were influenced by the announced perceptions of the others, those of perceived lower rank were more influenced by those of perceived higher rank" (Risinger et al. 2002, p. 19).

Several experimenter effects have motivated protocols such as randomization and double-blind procedures in science. The Mendel–Fisher controversy, nicely summarized by Franklin (2008), illustrates the need for such precautions. Mendel (1865) worked out the elements of genetics in a study of peas. He carefully bred and crossbred two varieties of garden peas, noting five different properties of each, such as the color of the pod and whether the peas were wrinkled or smooth. He reported some results that seem to have surprised him, but others that seem to have conformed to his expectations. In particular, he found that crossbred plants exhibited very nearly a 2:1 ratio of heterozygous to homozygous individuals. Fisher (1936) showed that this 2:1 ratio is not the one that should have been expected. We do expect the distribution of the true ratio to be centered on 2:1. But Mendel's method for judging and recording heterozygosis required a correction factor that he neglected. To decide whether a crossbred plant was homozygous or heterozygous, Mendel planted seeds from each such plant to see if any of that plant's offspring exhibited the recessive trait. Each seed from a crossbred and heterozygous plant has a one in four chance of exhibiting the recessive trait. Thus, it is necessary to plant enough seeds from each plant to be nearly certain that at least one offspring will exhibit
the recessive trait. Seeing just one offspring with the recessive trait assures you that the crossbred parent plant was indeed heterozygous. Mendel, however, chose to plant only ten seeds per crossbred plant. That's not enough. Fisher noted: "If each offspring has an independent probability, .75, of displaying the dominant character, the probability that all ten will do so is (.75)¹⁰, or .0563. Consequently, between 5 and 6 per cent. of the heterozygous parents will be classified as homozygotes" (1936, p. 125). Mendel should have recorded a ratio of about 1.7:1, not 2:1. (With two-thirds of the crossbred plants truly heterozygous, misclassifying about 5.6 percent of them as homozygous shifts the expected recorded ratio to roughly 1.89:1.11, which is about 1.7:1.) His results were too good to be true. Fisher had difficulty accounting for the error. He said: "Although no explanation can be expected to be satisfactory, it remains a possibility among others that Mendel was deceived by some assistant who knew too well what was expected. This possibility is supported by independent evidence that the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel's expectations" (Fisher 1936, p. 132). Presumably, the Mendel–Fisher controversy will never be fully resolved, although the majority opinion seems to be that Mendel probably did not commit a willful fraud. (Fisher cast his suspicions on an anonymous assistant, but not Mendel himself.) If we absolve Mendel of fraud, observer effects become a likely explanation. Mendel or his assistants were more likely to recheck data that violated expectation, which would induce bias. (Apparently, Sewell Wright was the first to make this point in 1966.) Similarly, knowing that he "needed" more wrinkled peas or more yellow pods may have caused Mendel or an assistant to see wrinkles that were not present or to mistake the color of a pod. The fact that Mendel's data was too good to be true may help to suggest that the tendency of incentives to skew honest errors increases the risk of faction in science. Whatever particular mechanisms may have been at work, we should take a lesson from the Mendel–Fisher controversy. Observer effects matter in science. Note also that Mendel's apparent errors were not uncovered for seventy-two years. (While Fisher was not the first to see a problem, it was his article that caused a stir, albeit with a delay.) Once an error is recognized, it can be hard to correct.

Richard Feynman gives another salient example in his famous lecture "Cargo Cult Science" (1974). Robert Millikan performed an experiment in which he measured the charge of the electron by observing how little oil drops emitted from an atomizer move in a field with and without an electrical charge. Feynman said "we now know" that the calculated value was "a little bit" too small: "It's interesting to look at the history of measurements of the charge of the electron after Millikan. If you plot them as a function of time, you find that one is a little
bigger than Millikan’s and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher” (Feynman 1974, p. 12). Berger, Matthews, and Grosch (2008, pp. 234–7) give three examples in which experimental precautions against observer effects are compromised by “inappropriate yet regimented research methods.” In the most striking of the three examples, “run-in bias” is created by deleting adverse events prior to randomization. “In randomized treatment trials,” they explain, “it is common to pre-treat the patients with the active treatment, evaluate their outcomes, and determine which patients to randomize based upon those outcomes. Bad outcomes (even deaths) prior to randomization do not make it into the analysis and do not count against the active treatment under scrutiny” (p. 234). Dror and Charlton (2006) had experienced fingerprint examiners reexamining evidence from cases they had decided in the past. The evidence was presented in the ordinary course of work as real case evidence. The real case information was stripped away, however, and replaced with either no supporting information or supporting information that suggested a match when the earlier decision had been an exclusion (being told, for example, that the “suspect confessed to the crime”) or an exclusion when the earlier decision had been a match (being told, for example, that the “suspect was in police custody at the time of the crime”). A pair of experienced experts confirmed that the original decision was correct in each case. This determination by experienced experts participating as experimenters creates the presumption that the subject examiners’ original judgments were correct for those pairs of fingerprints used in the study. Dror and Charlton found that from forty-eight experimental trials, the fingerprint experts changed their past decisions on six pairs of fingerprints. The six inconsistent decisions (12 percent) included two from the twenty-four control trials that did not have any contextual manipulation. The fingerprint experts changed four of their past decisions from the twenty-four experimental trials that included the contextual manipulation. Thus, biasing context seems to have induced inconsistent decisions in 16.6 percent of the cases with contextual manipulation. Only one-third of the participants (two out of six) remained entirely consistent across the eight experimental trials. The experimental results of Dror and Charlton (2006, p. 610) support the view that incentives skew the distribution of honest error. Of the six errors, four – in this case reversing an earlier opinion – occurred when the experimenters provided context information. In each of those four cases the error was in the direction suggested by the context information. It
seems unlikely that these errors were willful. Each of the experimental subjects agreed to participate in the experiment. Each of them understood that they would at some point get experimental materials masquerading as real case files. And yet they agreed to participate in the study. It does not seem reasonable for someone to report judgments he or she knew to be erroneous after agreeing to such conditions. Incentives skewed the experts’ errors in spite of a conscious desire to avoid error. Information choice theory assumes that errors are skewed by incentives. But incentives depend on institutions. Thus, information choice theory includes comparative institutional analysis. This institutional connection brings us to a point of distinction between information choice theory and standard principal-agent models that was noted earlier. Information choice theory studies the ecology of expertise. THE ECOLOGY OF EXPERTISE
Recall that information choice theory does not presume an isolated principal-agent institutional structure. Instead, the theory recognizes that the larger institutional context may create different degrees and forms of competition among experts. In both its positive and normative aspects, information choice theory explicitly considers the ecology of expertise. Some simple analytics of “epistemic systems” (Koppl 2005b) may help us to identify relevant features of the ecology of expertise. Epistemic systems are agent-based processes viewed from the perspective of their tendency to help or frustrate the production of local truth. (This definition generalizes that of Koppl 2005b.) In this context, “local truth” may mean true beliefs, correct expectations, appropriate behaviors, or something else, depending on the context. “Local truth” is “getting it right.” It will sometimes be convenient to model an epistemic system as a game, perhaps a sender-receiver game, specifying a set of senders, S, a set of receivers, R, and a set of messages, M, together with payoffs and rules for their interactions. The senders may have a probability distribution over messages, showing the subjective probability that each message is true. The senders send messages to the receivers, who somehow nominate one message from the message set and declare it “true.” This is the judgment of the receiver(s).
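To make the sender-receiver framing concrete, the sketch below (in Python; the names, the particular beliefs, and the majority-style nomination rule are illustrative assumptions of mine, not part of Koppl 2005b) represents a minimal epistemic system: senders hold subjective probabilities over a message set, and a receiver nominates one message as “true.”

import random

MESSAGES = ["match", "no match", "inconclusive"]   # the message set M

class Sender:
    """An expert with subjective probabilities over the message set."""
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = beliefs   # dict mapping each message to a subjective probability

    def report(self):
        # The sender transmits one message, drawn according to its beliefs.
        msgs, weights = zip(*self.beliefs.items())
        return random.choices(msgs, weights=weights, k=1)[0]

def receiver_judgment(reports):
    """A toy nomination rule: declare 'true' the message most senders sent."""
    return max(MESSAGES, key=reports.count)

monopolist = Sender("lone examiner", {"match": 0.7, "no match": 0.1, "inconclusive": 0.2})
panel = [
    Sender("examiner A", {"match": 0.6, "no match": 0.2, "inconclusive": 0.2}),
    Sender("examiner B", {"match": 0.3, "no match": 0.5, "inconclusive": 0.2}),
    Sender("examiner C", {"match": 0.4, "no match": 0.3, "inconclusive": 0.3}),
]

print(receiver_judgment([monopolist.report()]))          # judgment under a monopoly expert
print(receiver_judgment([s.report() for s in panel]))    # judgment with multiple senders

Nothing turns on the majority rule itself; the point is that the receiver’s judgment depends jointly on the message set, the senders’ beliefs, and the nomination rule, which is why the same evidence can yield different judgments under different ecologies of expertise.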
Figure 9.2, adapted from Koppl (2005b), illustrates epistemic systems with monopoly experts. The oval marked “message set” represents the set of messages an expert might deliver. A forensic scientist, for example, might declare of two samples “match,” “no match,” or “inconclusive.” (NAS 2009, p. 21 discusses the variety of terms used to describe what is commonly thought of as “matching.”) The dashed arrow represents the expert choosing from the message set. The solid arrows represent the transmission to the receiver, for example the judge or jury at trial, and the “nomination” of a message by the receiver. The circle marked “judgment” represents the receiver’s judgment, for example the judgment of the jury in a criminal case that it was the defendant who left a latent print lifted from the crime scene.
Figure 9.2 Monopoly expert.
When the receiver is able to compare the opinions of multiple experts we have the sort of situation represented in Figure 9.3, also adapted from Koppl (2005b). Because each receiver gets messages from multiple senders, the experts are in a position of strategic interdependence, which may constrain their choice of message.
Figure 9.3 Multiple experts who may be in competition.
Figures 9.2 and 9.3 do not represent all dimensions relevant to the ecology of expertise. For example, are the competing experts truly independent? If they are, the multiplicity of signals available to each receiver acts like a coding redundancy to reduce the receiver’s rate of error
in judgment. If they are highly correlated, however, the appearance of redundancy may reinforce errors in the mind of the receiver, serving only to degrade system performance. Sympathy among experts or identification with an expert’s professional standards and practices may reduce independence among experts and cause expert errors to be correlated. In Chapter 3 I noted that professions may foster uniformity of professional opinion among their members. Professional “standards” and professional loyalty may reduce independence among experts of that profession. In some applications, it is important to know whether sender messages are independent. A jury in a criminal trial may be presented with a confession, eyewitness testimony, and forensic evidence. It seems plausible that a jury in such a case might imagine that it has three independent and reliable sources of evidence pointing to the defendant’s guilt. Any one of these three forms of evidence would be sufficient for conviction if errors were not possible. When presented with multiple channels of incriminating evidence, a jury that is highly alert to the possibility of error within each channel considered individually might nevertheless convict because it underestimates the probability of innocence. If each of three forms of evidence has a 50 percent chance of falsely suggesting guilt, the chance that three independent forms of evidence would all point in the wrong direction is only one in eight, 12.5 percent. If each line of evidence has only a 10 percent chance of falsely suggesting guilt, the seeming probability of error falls to just one in a thousand, or 0.1 percent. But in some cases, these seemingly independent sources of evidence may be correlated, as suggested by some facts about the American system of criminal justice. The case of Cameron Todd Willingham illustrates the risk that police investigators may unwittingly influence eyewitnesses to provide more damning testimony as the police investigation unfolds. Willingham was convicted in 1991 of the arson murder of his three young children based in large part on now discredited fire investigation techniques (Mills and Possley, 2004; Willingham v. State, 897 S.W.2d. 351, 357, Tex.Crim. App. 1995). Willingham’s case seems to be an unambiguous example of a false execution. Grann (2009) reports that one eyewitness’s “initial statement to authorities . . . portrayed Willingham as ‘hysterical,’” whereas her subsequent statements “suggested that he could have gone back inside to rescue his children” without much risk or courage. Another eyewitness switched from describing a devastated father to expressing a “gut feeling” that Willingham had “something to do with the setting of the fire.” Drizin and Leo (2004) review 125 cases of “proven” false confessions. They provide evidence that the risk of false confession is higher for the
young, the mentally retarded, and the mentally ill. Garrett (2010) reports that in 252 DNA exonerations, forty-two involved false confessions. He performed a content analysis of the first forty of them. “In all cases but two,” he reports, “police reported that suspects confessed to a series of specific details concerning how the crime occurred. Often those details included reportedly ‘inside information’ that only the rapist or murderer could have known” (Garrett 2010, p. 1054). Garrett infers that in many cases, “police likely disclosed those details during interrogations” (2010, p. 1054). As Garrett points out, we do not know whether the apparent information transfer was willful or unconscious, vicious or innocent. Finally, the literature on forensic science errors shows that they are likely to be more common than popularly imagined and may be influenced by incentives at the crime lab. I have estimated that there are more than 20,000 false felony convictions per year in the United States attributable in part to forensic science testing errors or false or misleading forensic science testimony (Koppl 2010b). In many of these cases, the error or inappropriate testimony may be motivated by knowledge of the police case file, as illustrated by the Dror study cited earlier. Risinger et al. (2002, p. 37) give an example drawn from lab notes in a real case. “Suspect-known crip gang member – keeps ‘skating’ on charges – never serves time. This robbery he gets hit in head with bar stool – left blood trail. [Detective] Miller wants to connect this guy to scene w/DNA.” In another case, an examiner writes, “Death penalty case! Need to eliminate Item #57 [name of individual] as a possible suspect” (Krane 2008). Such context information has the potential to skew the results of a forensic science examination, particularly under the frequently encountered condition of ambiguous evidence. It seems likely that in at least some cases in the United States, false convictions are created when police investigators induce a false confession from a suspect, influence eyewitnesses to testify against that suspect, and provide the crime lab with case information that induces an incriminating forensic science error. In such cases, errors are correlated across seemingly independent evidence channels. The system is nonergodic, but the multiplicity of evidence channels creates for the jury a false picture of the reliability of the evidence in the case. Ioannidis (2005) shows that similar problems can exist in research science. The ecology of tests in a field may drive the “positive predictive value” (PPV) of a given result below the “significance level” of the researchers’ statistical tests. He denotes by R the ratio of true relationships to objectively false relationships “among those tested in a field” (p. 0696). Thus, R depends on both the objective phenomena of the field and the
ideas of researchers about the field. Those ideas determine which possible relationships are tested and the ratio, therefore, of true to false relationships “among those tested.” In this case the chance of a positive result being true (the PPV) is (1 − β)R/(R − βR + α), where α is the chance of a false positive (or Type I) error and β is the chance of a false negative (or Type II) error (p. 0696). (Ioannidis keeps things simple by assuming these error probabilities are the same in all tests.) The PPV can be smaller than the standard p-values reported in research papers. Reporting bias exists when researchers report what “would not have been ‘research findings,’ but nevertheless end up presented and reported as such, because of bias” (p. 0697). Adding in a parameter to reflect reporting bias reduces the PPV further. “The probability that at least one study, among several done on the same question, claims a statistically significant research finding” grows as the number of such studies grows, which tends to drive PPV values down (p. 0697). Ioannidis draws several conclusions beyond general skepticism. Social scientists should humbly note two of them. First, “The greater the number . . . of tested relationships in a scientific field, the less likely the research findings are to be true.” Second, “The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true” (p. 0698). It is not surprising to learn that “The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.” It is surprising, however, to learn that “The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.” Hotness tends to create reporting bias and a poor testing ecology (p. 0698).
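A short numerical sketch may make the mechanics plain. The function below (Python; the parameter values are illustrative choices of mine, not figures from Ioannidis) computes the PPV from R, α, and β and shows how a field testing mostly long-shot relationships can push the PPV far below the nominal significance level.

def positive_predictive_value(R, alpha=0.05, beta=0.20):
    """PPV = (1 - beta) * R / (R - beta * R + alpha), following Ioannidis (2005).
    R is the ratio of true to false relationships among those tested in a field;
    alpha and beta are the Type I and Type II error rates of the tests."""
    return (1 - beta) * R / (R - beta * R + alpha)

# A field testing mostly plausible relationships: one true per two false tested.
print(positive_predictive_value(R=0.5))     # roughly 0.89

# A "hot" field trawling long shots: one true relationship per hundred tested.
print(positive_predictive_value(R=0.01))    # roughly 0.14

In the second case a statistically significant positive finding is far more likely to be false than true even before reporting bias is added, which is the heart of Ioannidis’s argument.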
Synecological redundancy may reduce the chance of correlated errors. Synecological redundancy is “the ability of elements that are structurally different to perform the same function or yield the same output” (Edelman and Gally 2001, p. 13763). Edelman and Gally call it “degeneracy.” I prefer the label “synecological redundancy” because it is more descriptive. It suggests the synecological idea of a relationship between an environment and a community of interacting elements occupying it. These heterogeneous interacting elements produce the same or similar functions in the system through a variety of methods. Precisely because these elements are not the same, the function is robust to failures among the elements. I have abandoned the term “degeneracy” in favor of “synecological redundancy.” Although my label is more cumbersome, it is more descriptive (as previously noted) and it does not suggest decline or seem otherwise pejorative. The term “partial redundancy” has also been used (Whitacre and Bender 2010, p. 144). It may suggest, however, the need to “complete” redundancy or that “partial redundancy” is inferior to “redundancy.” Wagner (2006) discusses degeneracy in economics. Martin’s (2015) essay on “Degenerate Cosmopolitanism” provides an unusually lucid discussion of degeneracy, robustness, and evolvability in the context of political economy. He helpfully relates these terms to the idiosyncratic vocabulary of Taleb (2012). Edelman and Gally (2001) contrast synecological redundancy with simple redundancy, “which occurs when the same function is performed by identical elements.” Synecological redundancy, by contrast, “involves structurally different elements.” It “may yield the same or different functions depending on the context in which it is expressed.” A multiexpert system will exhibit synecological redundancy to the extent that each expert in it is unique. A priest, a psychologist, a Buddhist monk, and a bartender may give very different advice on relieving anxiety. When all experts are the same, expert errors may be correlated. Our earlier discussion of the criminal justice system suggests that synecological redundancy is not sufficient to create independent errors. Eyewitness testimony, confessions, and forensic evidence seem to be structurally different elements. The synecological redundancy of the system is low, however, because each of the seemingly diverse evidence channels emerges from one integrated process of evidence construction. “The absence of degeneracy [i.e. synecological redundancy] indicates monopoly, for it means that a successful plan must include participation at a particular node in the network of plans” (Wagner 2006, p. 119). Cowan and Koppl (2010, p. 411) note that the “tight relationship” among prosecutor, police, and crime lab “creates a vertically integrated monopoly supplier of criminal justice.” Such integration limits desirable properties of an epistemic system, namely, redundancy, synecological redundancy, adaptivity, diversity, and resilience. The system has low error-correcting power because errors can easily cascade down from the top, inducing the failure of multiple elements within the system. These considerations lead us to the theory of expert failure, which I consider in the next chapter.
PART IV EXPERT FAILURE
10
Expert Failure and Market Structure
Experts fail when they give bad advice. In its broadest meaning, “expert failure” refers to any deviation from a normative expectation associated with the expert’s advice. TWO DIMENSIONS OF EXPERT FAILURE
Expert failure is more likely when experts choose for their clients than when the clients choose for themselves. And in broad brush, we may say that expert failure is more likely when experts have an epistemic monopoly than when experts must compete with one another – although the details of the competitive structure matter, as we shall see. In Chapter 5 I noted that the unavoidable word “competition” may easily create misunderstanding. I will use the phrase “ecology of expertise” in part to underline the synecological quality of the sort of “competition” among experts that might help to reduce the chance of expert failure. These two dimensions of expert power suggest the four-quadrant diagram of Table 10.1, which identifies four cases: (1) the rule of experts, (2) expert-dependent choice, (3) the quasi-rule of experts, and (4) self-rule or autonomy. The greater the freedom of nonexperts to ignore the advice of experts, the lower is the chance of expert failure, ceteris paribus. And the more competitive is the market for experts, the lower is the chance of expert failure, ceteris paribus.
Table 10.1 A taxonomy for the theory of expert failure

Expert decides for the nonexpert / Monopoly expert: Rule of experts. Examples include state-administered eugenics programs, central planning of economic activity, and central bank monetary policy. Highest chance of expert failure.
Expert decides for the nonexpert / Competitive experts: Quasi-rule of experts. Examples include school vouchers, Tiebout competition, and representative democracy.
Nonexpert decides based, perhaps, on expert advice / Monopoly expert: Expert-dependent choice. Examples include religion under a theocratic state and state-enforced religion.
Nonexpert decides based, perhaps, on expert advice / Competitive experts: Self-rule or autonomy. Examples include periodicals such as Consumer Reports, the market for preferences, and venture capital. Lowest chance of expert failure.

The Rule of Experts

The rule of experts creates the greatest danger. Here, monopoly experts decide for nonexperts. State-sponsored eugenics may be the most obvious example. The state hires a eugenicist to tell it which persons should be allowed to reproduce. I noted in Chapter 4 that we cannot, unfortunately,
view this sort of thing as entirely behind us (Galton 1998; Stern 2005; Ellis 2008; Johnson 2013; Shreffler et al. 2015). Of course, eugenic principles were taken to devastating extremes in Nazi Germany. Examples of the rule of experts include more seemingly moderate and reasonable cases of expert control such as monetary policy under central banking. Steven Horwitz (2012) examines this case closely in a penetrating analysis of the history of the Federal Reserve System in the United States. He notes that in the financial crisis of 2008 the Fed “began to exercise a variety of new powers that they saw as ‘necessary’ to deal with the unfolding crisis” even though they had no authority to do so. “[T]hose powers were largely ‘seized’ in the sense that there was no real debate, either in Congress or the public at large, over whether the Fed’s acquisition of those new powers was desirable or not” (p. 67). Horwitz notes, pointedly, “[I]n the face of what the Fed claimed (rightly or wrongly) was the impending meltdown of the financial system, claims of expertise triumphed over democratic political processes” (p. 68). We have seen both Turner (2001) and Jasanoff (2003) express an aversion to expertise unfettered by democratic constraints. Horwitz (2012) points out how difficult it can be to exercise democratic control of experts in a context such as central banking. He says: “Where there is only one person or organization responsible for a complex task, it inevitably will look to experts to help achieve its goals and use that need for expertise as a
way to shield it from outsiders in general, and critical ones in particular” (p. 62). Horwitz here describes the sort of mystery-making we saw Berger and Luckmann (1966) warn of in their critique of “nihilation.” Horwitz (2012, pp. 72 and 77) recognizes a dynamic aspect to the link between expertise and monopoly. “A belief in expertise calls forth monopoly, and monopoly calls forth a need for expertise that is genuine given the institutional context of monopoly.” In the case of central banking, experts come to be “in charge of the decision-making process with no alternative sources of expertise and no possibility of ‘exit’ for those who use the product.” But then there can be no “strong checks on the accuracy of the decisions being made” and “the expert policymakers are further able to shield themselves from feedback by cloaking their decisions in language that only other experts can really understand. Monopoly creates the need for conscious policy, and then policymakers are able to close off competing perspectives and obfuscate exactly what they are doing and why.” Evidence in White (2005) supports Horwitz’s suggestion that experts in monetary policy have “shielded themselves” from criticism. He shows that macroeconomic researchers in the United States are dependent on the Federal Reserve System. Judging by the abstracts compiled by the December 2002 issue of the e-JEL, some 74 percent of the articles on monetary policy published by US-based economists in US-edited journals appear in Fed-published journals or are co-authored by Fed staff economists. Over the past five years, slightly more than 30 percent of the articles by US-based economists published in the Journal of Monetary Economics had at least one Fed-based co-author. Slightly more than 80 percent had at least one co-author with a Fed affiliation (current or prior Fed employment including visiting scholar appointments) listed in an online vita. The corresponding percentages for the Journal of Money, Credit and Banking were 39 percent and 75 percent. The editorial boards (editors and associate editors) of these journals are even more heavily weighted with Fed-affiliated economists (9 of 11, and 40 of 46, respectively). (White 2005, pp. 325–6)
White concludes dryly: “Fed-sponsored research generally adheres to a high level of scholarship, but it does not follow that institutional bias is absent or that the appropriate level of scrutiny is zero” (White 2005, p. 344). Both eugenicists and economists provide examples of the rule of experts. As we saw in Chapter 3, John Maynard Keynes favored the rule of experts in both areas, as well as morals. He wanted state policy in economy, population, and morals. We saw Singerman’s interpretation of Keynes, wherein successful planning in any one area depended on successful planning in the other two (2016, p. 564).
Singerman’s interpretation of Keynes sheds new light on Keynes’s famous letter to Hayek on the latter’s book, The Road to Serfdom. Keynes told Hayek he was in “deeply moved agreement” with his book. And yet he went on to defend state economic planning against Hayek’s criticism: “But the planning should take place in a community in which as many people as possible, both leaders and followers, wholly share your moral position. Moderate planning will be safe if those carrying it out are rightly orientated in their own minds and hearts to the moral issue.” Keynes told Hayek: [W]hat we need is the restoration of right moral thinking – a return to proper moral values in our social philosophy. If only you could turn your crusade in that direction you would not look or feel quite so much like Don Quixote. I accuse you of perhaps confusing a little bit the moral and the material issues. Dangerous acts can be done safely in a community which thinks and feels rightly, which would be the way to hell if they were executed by those who think and feel wrongly. (Keynes 1944, pp. 385–8)
Keynes seems to have thought that the real core of Hayek’s warning was that the wrong morality might prevail, a concern he shared deeply. He thought Hayek had overlooked the vital eugenic dimension to moral error and for this reason was led to a spurious antiplanning stance. Hayek got off to a good start with his call for good morals, Keynes thought, but ran off the rails by neglecting the eugenic dimension of morality, which requires economic planning.
Expert-Dependent Choice Religion provides examples of expert-dependent choice. The monopoly priest offers his advice on correct behavior and how to enter paradise. In many cases the priest’s advice has no coercive force behind it, and the priest is left to complain of his parishioner’s sins. In the United States, religions compete freely. Experts on the afterlife and other religious matters compete. But in many times and places, such as the Roman Empire after Constantine, religious experts have enjoyed a state-supported monopoly. In Chapter 2 we saw Adam Smith argue that religious competition produces “candour and moderation” in religious leaders. Smith’s analysis is supported by Buddhist texts describing how the Buddha drew followers in a competitive market for gurus: “Instead of mysterious teachings confided almost in secret to a small number, he spoke to large audiences composed of all those who desired to hear him. He
spoke in a manner intelligible to all . . . He adapted himself to the capacities of his hearers” (Narasu 1912, p. 19). Walpola Rahula (1974) says that “faith or belief as understood by most religions has little to do with Buddhism.” The Buddha taught religious toleration and emphasized the students’ need to see the truths being taught for themselves rather than accepting them on authority. Rahula says: The question of belief arises when there is no seeing . . . The moment you see, the question of belief disappears. If I tell you that I have a gem hidden in the folded palm of my hand, the question of belief arises because you do not see it yourself. But if I unclench my fist and show you the gem, then you see it for yourself, and the question of belief does not arise. So the phrase in ancient Buddhist texts reads: ‘Realizing, as one sees a gem in the palm.’ (1959, pp. 8–9)
The metaphor of the gem in the palm helps to suggest that competition turns wizards into teachers. If Smith is right about religious competition, then monopoly priests tend to be mysterious and immoderate. And, indeed, in 1427, more than forty years after the heretic John Wycliffe died, the Roman Pope ordered his bones to be exhumed and burnt, and his ashes scattered on the River Swift. This action “completed the anathema pronounced on Wycliffe, and on a list of 267 articles from his writings, at the Council of Constance on 4 May 1415” (Hudson and Kenny 2004). While the action was taken in response to Wycliffe’s published heretical opinions, it seems reasonable to guess that Wycliffe’s case was further damaged when he helped to inspire others to translate the Latin Bible into English. This translation ensured that that book would be, as the nearcontemporary chronicler Henry Knighton put it, “common and open to the laity, and to women who were able to read, which used to be for literate and perceptive clerks” (Knighton’s Chronicles as quoted in Hudson and Kenny 2004). The Bible was once the exclusive and mysterious province of “literate and perceptive clerks” who wished to exclude the laity from reading it. The story of John Wycliffe shows that experts may act with great force and violence to preserve their monopoly position as “the officially accredited definers of reality” (Berger and Luckmann 1966, p. 97). But if their expert advice to the laity is not enforced by measures more concrete than the threat of damnation, the laity can choose whether to follow their advice. Rich laics in the old days could sin freely and then pay for an indulgence. Purchasing indulgences may have seemed a good hedging strategy to rich unbelievers who were not fully convinced that the priests were wrong. I do not want to look like an apologist for Buddhism. The beauty of the Buddha’s message of acceptance and toleration does nothing to alter the
crooked timber of humanity. There are, unfortunately, examples of Buddhist crimes far worse than the Church’s judgment on Wycliffe. For example, Buddhist monks in Burma have recently inspired and perpetrated murderous violence against Muslims in that country (Coclanis 2013; Kaplan 2015; Siddiqui 2015). The stated justifications for these attacks are, of course, spurious. The thinnest of arguments will suffice when our will is strong. As Benjamin Franklin noted wryly, “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do” (Franklin 1793, p. 27). In religion as in other markets, it can be hard to maintain a monopoly. The Church’s fury at Wycliffe did not prevent other heretics from coming along and founding what we now call “Protestant” sects. We saw in Chapter 3 that philosophy in the Socratic tradition challenged the monopoly of official religion in Athens. In that case, it was not one religion against another. Rather, Socratic philosophy challenged the monopoly of Athenian religion, much as a coffee importer might challenge a tea monopoly.
The Quasi-Rule of Experts Under the quasi-rule of experts, experts choose for nonexperts, but compete among themselves for the approval of nonexperts. Voucher programs create the quasi-rule of experts. With vouchers, parents may exercise some choice of public school without thereby acquiring control of the school’s curriculum. Tiebout competition provides another example. Tiebout (1956) noted that local communities compete for residents. If the costs of moving from one jurisdiction to another are low enough, communities will specialize in the mix of services provided, thereby attracting citizens with similar preferences over public services. The public services provided under Tiebout competition will generally entail expert choices imposed on the local citizens. Local government experts choose school curricula and programs. Local judges interpret the law. City planners decide how to pave sidewalks and where to put traffic lights. Such expert choices typically involve some prior citizen input, to be sure. In the end, however, the experts decide. Finally, representative democracy creates the quasi-rule of experts. The citizens vote for representatives who are or will become experts in public policy questions. These representatives then choose for the people what path to take. The theory of public choice shows that representative democracy may go wrong, especially when the state takes on a large number and variety of
functions. On the other hand, Tiebout competition seems to produce good outcomes for people rich enough to exercise their exit option relatively easily. The difference in outcome in these two cases depends in part on the feedback loop. With Tiebout competition, the citizens experience the consequences of expert choice fairly clearly and quickly. And the decision to exit can have a fairly immediate negative consequence for the expert. In representative democracy in larger jurisdictions such as the nation-state, the feedback loop will generally be looser. It is hard to know whether expert decisions have made things better or worse in general or for you in particular. Thus, the quasi-rule of experts is less likely than the rule of experts to induce expert failure, but more likely than autonomy.
Self-Rule or Autonomy Finally, experts may compete among themselves to provide mere advice to nonexperts, who may freely accept or reject that advice. Consumer Reports magazine is a relatively pure example of such “autonomy.” The magazine has experts examine and test a variety of products within some category such as “microwave ovens” or “baby cribs.” The team of experts gives their opinions on each product’s performance including safety and reliability. No one is required to subscribe to the magazine. The magazine’s subscribers can buy the product recommended by the experts or one disparaged by the experts, who have a strictly advisory role and no power to choose for the consumer. Moreover, the magazine has many competitors. It is but one source of expert opinion on consumer products. The probability of expert failure in this case is low. And, indeed, I am aware of no argument to the effect that Consumer Reports is somehow bad or dangerous. There have been errors, although the number seems to be quite low. In 2007, for example, the magazine found all but two of the child safety seats it tested failed to provide adequate protection in a side-impact crash at thirty-eight miles per hour. In fact, the test had been conducted at about seventy miles per hour. The magazine corrected the error within two weeks. (See Claybrook 2007.) Notice that in this case the error was not one that might have put children in danger, only manufacturers. In the previous chapter we discussed the “market for preferences” (Earl and Potts 2004 and Potts 2012) and “novelty intermediation.” These are further examples of “self-rule” or “autonomy.” Venture capitalists may provide novelty intermediation as well as capital. With both the market
for preferences and novelty intermediation there seem to be relatively few cases of expert failure. My comments on self-rule may suggest that we do not have to assume that each individual is the best judge of their own case to be in favor of autonomy. Adam Smith (1759, II.2.11) said that “Every man is, no doubt, by nature, first and principally recommended to his own care; and as he is fitter to take care of himself than of any other person, it is fit and right that it should be so.” Holcombe (2006) notes, “Economists often argue that individuals are the best judges of their own well-being” (p. 210). We are sometimes told that each individual is the best judge of their own interests and their own comparative advantage. This statement may seem to suggest that only one person is judging the best use of my time, namely me. But in commercial society many decentralized actors have a role in judging how I should spend my time. I am one of them, but so are my family members, my employer, potential employers, religious leaders, doctors, lawyers, financial advisors, and reporters for Consumer Reports. This list includes persons who act as experts in the sense of this book. Employers and potential employers are important figures on this list, but they are not experts in my sense. One of the functions of the entrepreneur is to judge how to use the labor time of others. In a more or less unfettered market economy, entrepreneurs with a comparative advantage in making such judgments will usually continue to be in a position to offer workers a guaranteed wage in exchange for the right to direct the workers’ efforts. Outside the workplace, the individual has many sources of advice on how to behave, which brings us back to experts. They may seek advice from religious leaders, self-help manuals, life coaches, and so on. The Great Original self-help book in America is the autobiography of Benjamin Franklin, which includes Franklin’s “Plan for Attaining Moral Perfection.” The advantage of individual autonomy is not so much that the individual chooses their own path. The individual is not always the best judge of their own well-being. The advantage of autonomy consists in the increased probability, relative to available alternatives such as the rule of experts, that the individual will be guided, in the different aspects of their life, by persons enjoying a comparative advantage in providing such guidance. We should not compare an idealized picture of self-rule to a realistic or worse than realistic picture of alternatives such as the rule of experts. Nor should we compare an idealized picture of alternatives such as the rule of experts to a realistic or worse than realistic picture of self-rule. We should compare self-rule, the way it really works, to alternatives, the
way they really work. In other words, we should not commit the Nirvana fallacy (Demsetz 1969). IDENTITY, SYMPATHY, APPROBATION, AND PRAISEWORTHINESS
In earlier chapters we saw references to the motives of identity, sympathy, approbation, and praiseworthiness. These motives can induce expert failure if they are in some way misplaced. For example, some forensic scientists have a strong sense of identification with law enforcement that may create an unconscious bias toward results favoring the police theory of a case. An expert with more sympathy for other experts than for their client may fail to detect an error in their colleagues’ work even when that error is damaging to their client. An expert who seeks the approbation of other experts may be led into expert failure. Even the seemingly pristine motive of praiseworthiness can go wrong. A forensic scientist who knows they are working on a murder case may feel a duty to be sure the case is solved. That desire to solve the case, however, may precipitate a false positive match to the police suspect. The noble desire to be worthy of praise may nevertheless be a cause of expert failure. This possibility underlines the importance of rivalry, redundancy, synecological redundancy, and other aspects of market structure. Even if experts were angels, a poor market structure could promote expert failure. OBSERVER EFFECTS, BIAS, AND BLINDING
Observer effects induce bias, which is an important cause of expert failure. Within the current literature, blinding is probably the leading therapeutic response to bias. Blinding is the hiding of information, as in double-blind drug studies. In such studies, the patient does not know whether they are receiving medicine or a placebo. And the experimenters do not know which patients get the drug and which the placebo. (See Schulz and Grimes 2002 for a biting and cogent analysis of blinding protocols and their sometimes poor application.) Blinding is desirable in a large variety of cases. But synecologically bounded rationality creates inherent limits to blinding, which implies the necessity of auxiliary precautions. Podolsky et al. (2016) trace blinding of subjects back to the “trick trials” of the late sixteenth century. Disputes over exorcism “led to the adoption of bogus holy water and sham relics of the holy cross being used in
exorcism trials to determine whether overenthusiasm, autosuggestion, or deceit – as opposed to the devil – was the cause of the behavior of those afflicted” (p. 46). Later, Louis XVI commissioned a group including Benjamin Franklin, Antoine Lavoisier, and Joseph-Ignace Guillotin to investigate Franz Anton Mesmer’s claims that he could use “animal magnetism” to cure sick patients. This group knew of the trick trials from reading Montaigne (Podolsky et al. 2016, p. 46). Franklin and the others used techniques such as blindfolding the patients to hide from them information about whether or when “animal magnetism” was at work. They concluded that Mesmer’s technique had no scientific merit (Podolsky et al. 2016). Podolsky et al. (2016) trace awareness of observer effects only back to the nineteenth century, when “observer bias was noted to occur across all scientific disciplines, conventional or unconventional. Most notably, astronomers had described the impact of the ‘personal equation’ in the recording of seemingly objective data” (p. 50). A study from 1910 seems to have applied observer blinding in an irregular, unplanned, and haphazard manner (Podolsky et al. 2016, p. 51). Hewlett (1913) is the earliest study Podolsky et al. (2016) note in which the observers were subject to consistent blinding protocols (p. 51). The Hewlett (1913) study begins: “It has been claimed that the sodium salicylate prepared from natural oils is superior as a therapeutic agent to the sodium salicylate prepared by synthetic methods.” It thus addressed an issue of urgent concern to pharmaceutical manufacturers. The study compared natural and synthetic sodium salicylate for the treatment of different diseases, “especially rheumatic fever” (Hewlett 1913, p. 319). The “natural” version was prepared by a practitioner from oil of birch, and the synthesized version was manufactured by Merck. The study was initiated by the AMA’s Council on Pharmacy and Chemistry, which had been founded in part to promote cooperation between medical professionals and the newly emergent pharmaceuticals industry (Stewart 1928). The study concluded: “natural and synthetic sodium salicylate are indistinguishable so far as their therapeutic and toxic effects on patients are concerned” (Hewlett 1913, p. 321). Thus, the organization that was formed to promote good relations with pharmaceutical manufacturers conducted a study showing that one manufacturer’s product was not inferior to the competing “natural” version. The reader may judge whether the Hewlett study was, therefore, an auspicious beginning to random controlled trials. Podolsky et al. (2016) report that we see “the scattered uptake of observer blinding” from “the 1910s through the 1930s” (p. 52). The
practice grew more systematic after World War II: “By 1950, Harry Gold and his colleagues could for the first time officially label studies in which both patients and physicians (or subjects and researchers) were blinded as ‘double-blind’ tests” (Podolsky et al. 2016, p. 53). Robertson (2016, p. 27) says that “mostly in the last 60 years” the “double-blind randomized, placebo-controlled trial has become the gold standard for scientific inquiry.” In a great variety of contexts, blinding is a valuable and important tool to mitigate observer effects. In forensic science, for example, “sequential unmasking” is a desirable protocol (Krane et al. 2008). Scientists regularly employ a variety of blinding procedures. In Chapter 9 we saw that Mendel’s results seem to have been distorted by observer effects and might have been improved by the use of blinding protocols. I am not aware of any reason to question the basic claim that blinding can reduce bias. But we should recognize, I think, an important limit to the principle of blinding. Blinding protocols cannot eliminate what I will call “synecological bias,” which is the bias arising from synecologically bounded rationality. The division of knowledge makes it impossible for anyone to avoid a limited and partial perspective, which implies a kind of parochial bias in our perceptions and judgments. Only multiplying the number of experts and putting them in a position of genuine rivalry can mitigate this important form of bias. Blinding would be sufficient if knowledge were hierarchically structured and if the only bounds on rationality left it guided by the “all-seeing eye” described by Felin et al. (2017). In that case, all bias would be induced bias, as we may call it. Domain-irrelevant information, inappropriate incentives, or the emotional context may induce a bias. Remove that distorting influence and nothing is left to skew the flat plane of reason away from the objective truth. Reason thus conceived has no need of synecological redundancy, because it is, when free of induced bias, automatically in accord with the truth. The situation, however, is more richly textured if there is a Hayekian division of knowledge in society. If knowledge is SELECT, the rationality of experts is synecologically bounded, and blinding protocols can only be partial measures. Like all of us, experts have a limited and partial perspective on events. Multiplying these perspectives increases the opportunity to make appropriate connections and discover superior arguments and interpretations of the evidence. But it is necessary to engage those multiple perspectives fruitfully. As Odling (1860), Milgrom and Roberts (1986), Koppl and Cowan (2010), and others suggest, we can create such engagement by pitting experts
against each other. Koppl and Krane (2016) speak of “leveraging” bias. In other words, the ecology of expertise should have both rivalry and synecological redundancy. Bias can be induced by information and incentives. Blinding can remove or at least reduce induced bias. But synecological bias is not induced. It is not caused by any special or specific cause. It is not a distortion. It is inherent in the “social division of knowledge” that formed the context and starting point for Berger and Luckmann’s (1966) theory of experts. It is inherent in the social division of knowledge without which the problem of experts would not arise in the first place.
11
Further Sources of Expert Failure
NORMAL ACCIDENTS OF EXPERTISE
Turner (2010) points out: Charles Perrow [1984] used the term ‘normal accidents’ to characterize a type of catastrophic failure that resulted when complex, tightly coupled production systems encountered a certain kind of anomalous event. These were events in which systems failures interacted with one another in a way that could not be anticipated, and could not be easily understood and corrected. Systems of the production of expert knowledge are increasingly becoming tightly coupled. (p. 239)
Turner builds a theory of expert failure based on this idea of “normal accidents of expertise.” Others have applied Perrow’s theory somewhat more narrowly to problems in forensic science (Cole 2005; Thompson 2008; Koppl and Cowan 2010). James Reason (1990) builds on Perrow (1984), noting that latent errors are more likely to exist and create harm in a complex, tightly coupled system than in a simple, loosely coupled system. Perrow borrowed the vocabulary of “tightly” and “loosely” coupled systems from mechanical engineering. In that context, Perrow explains, “tight coupling is a mechanical term meaning there is no slack or buffer between two items” (1984, pp. 89–90). In the context of social processes, a tightly coupled system is one in which failure in any one component or process may disrupt the function of others, thus generating an overall system failure. Perrow lists four characteristic features of tightly coupled systems: (1) “Tightly coupled systems have more time-dependent processes: they cannot wait or stand by until attended to”; (2) “The sequences in tightly coupled systems are more invariant”; (3) “In tightly coupled systems, not only are the specific sequence invariant, but the overall design of the
process allows only one way to reach the production goal”; (4) “Tightly coupled systems have little slack” (pp. 93–4). Perrow (1984, p. 79) calls a system “complex” when it has many “hidden interactions” whereby “jiggling unit D may well affect not only the next unit, E, but A and H also.” Systems for the production of expert knowledge, “expert systems” as Turner (2010) dubs them, can exhibit these qualities in varying degrees. Forensic science today is a good example of a complex, tightly coupled “expert system.” For example, the Houston Crime Lab in past years invited cross-contamination of evidence from one type of testing to another. A 2002 audit reports: ‘The laboratory is not designed to minimize contamination due to the central screening area being used by serology, trace, and arson. Better separation of these disciplines is needed. The audit team was informed that on one occasion the roof leaked such that items of evidence came in contact with the water’ (FBI Director, 2002, p. 21). Forensic science is a complex, tightly coupled system within which multiple individuals, organizations, processes, technologies, and incentive systems mediate conflict between individuals. Such systems are easily subject to the “latent errors” Reason identified. These latent errors typically lay dormant until active errors trigger them. This context includes both technological and organizational aspects. Indeed, these two layers in the error-enabling process are not orthogonal. The word ‘structural’ also underlines the view that complex, tightly coupled systems can be reengineered to reduce the chance of error. Inappropriate structural features of the context enable active errors. Once these structural flaws are identified, the system can be reengineered to produce a better result. Errors do not happen in a vacuum. An individual causes an error to occur within a set of social and economic structures. Economic and social structures create incentives that can create expert failure. COMPLEXITY AND FEEDBACK
Complexity is central to Perrow’s notion of “normal accidents.” In Turner’s use of Perrow, the relevant complexity is in the production process of the experts, for example in the complex, tightly coupled system that is a modern crime lab. But complexity of the domain may also lead to expert failure. The phenomena we ask experts about may be complex, uncertain, indeterminate, or ambiguous. Here, too, forensic science provides an example. The evidence in forensic science is often ambiguous. The latent prints left at a crime scene, for example, may be partial or smudged.
One print may be deposited on top of another. The material on which the print was deposited may be irregular. And so on. DNA evidence, too, can be ambiguous (Thompson 1995, 2009). We ask economic experts to predict the economy, which is complex, uncertain, and indeterminate. Consider the stock market. If stock payouts could be predicted (and risk levels determined), then everyone would know the value of every stock, which would equal its price. In this scenario, no one could do better than to simply buy and hold. But then it would be pointless to research stocks, and no one would do it. But if no one researched stocks, their prices would pull away from their underlying values, at which point it would pay to research stocks. Brock and Hommes (1997) show how this sort of logic can lead to unpredictable dynamics. If good predictors of stock behavior cost more than poor predictors, stable equilibria fall apart as cost-bearing sophisticated traders shift to cheap but myopic predictors. The resulting instability makes it worthwhile to bear the cost of the superior predictor and the system temporarily shifts back toward a stable equilibrium before the aperiodic cycle resumes. It is hard to forecast when the system will switch between stable and unstable dynamics (Brock and Hommes 1997). Arthur (1994) and Arthur et al. (1997) have also shown how interacting agents can generate complex dynamics endogenously.
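The mechanism can be illustrated with a small simulation in the spirit of Brock and Hommes, though the specification and parameter values below are illustrative assumptions of my own rather than their model. Suppliers in a cobweb market choose each period between a costly predictor that anticipates the market-clearing price and a free naive predictor equal to last period’s price, with the shares of the two predictor types updated by a discrete-choice rule on recent accuracy net of cost.

import math

def simulate(T=100, a=12.0, b=1.0, lam=1.3, cost=1.0, beta=4.0, p0=5.0):
    # Demand is a - b*p; each supplier offers lam times its price forecast.
    # "Rational" suppliers pay `cost` to forecast the market-clearing price itself;
    # naive suppliers forecast last period's price for free.
    p_prev, share_rational = p0, 0.5
    path = []
    for _ in range(T):
        # Market clearing: a - b*p = lam*(n_r*p + (1 - n_r)*p_prev), solved for p.
        p = (a - lam * (1.0 - share_rational) * p_prev) / (b + lam * share_rational)
        # Fitness of each predictor: negative squared forecast error minus its cost.
        u_rational = -cost
        u_naive = -((p - p_prev) ** 2)
        # Logit (discrete-choice) updating of the share using the costly predictor.
        w = math.exp(beta * u_rational)
        share_rational = w / (w + math.exp(beta * u_naive))
        path.append((p, share_rational))
        p_prev = p
    return path

for price, share in simulate()[:10]:
    print(round(price, 3), round(share, 3))

Because the costly predictor is abandoned whenever prices are calm, calm spells sow the seeds of the next unstable spell, and the timing of the switches is difficult to forecast.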
When the object domain is complex, uncertain, indeterminate, or ambiguous, feedback mechanisms may be weak or altogether absent. It can be hard to decide whether prior expert opinions were good or bad, true or false. Until recently, this lack of feedback has been evident in macroeconomic disputes. Different schools of thought would insist that the other schools had failed the test of history. And yet very few macroeconomists switched from one school to another because the newly adopted school of thought had a superior empirical record. In recent years, this situation has changed somewhat as macroeconomists have converged on a common model class (DSGE models), even though differences in theory and policy remain. As I noted in Chapter 8, this sort of unpredictability does not always discourage demand. In this case, expert failure is quite likely indeed. Demanders of expert opinion will pay for opinions that cannot be reliable. Government demand for macroeconomic projections seems unrelated to their quality or reliability. After the 2008 financial crisis, Queen Elizabeth asked economists: “Why did nobody notice it?” (Pierce 2008). The British Academy gave her something of an official answer. “Everyone seemed to be doing their own job properly,” they told the Queen, “and often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction” (Besley and Hennessy 2009). Koppl et al. (2015a) comment, “Rather than questioning the dynamics of the econosphere, this answer questions the organization of economic authorities. If we had had a better organization amongst ourselves, the whole thing could have been prevented” (p. 6). Governments are not the only ones to demand magical predictions. Fortunetellers continue to ply their trade. Stock pickers and mutual fund managers continue to find work. And a great variety of prophets support themselves by forecasting doom or salvation. If, as I believe, competition turns wizards into teachers, then competition may help to reduce the chance of expert failure even in such markets. Competition among priestly experts tends to make them less dogmatic and more disposed to “candour and moderation.” The emergence of wisdom traditions in Western philosophy might also suggest that competition has this ameliorative effect on expert opinion even in markets with little or no feedback between an expert’s opinion and subsequent events. INCENTIVE ALIGNMENT
Many of the issues discussed in Chapter 10 might be described as issues of “incentive alignment.” When expert incentives are not aligned with the truth, expert failure is more likely. Some cases of misaligned incentives may not fit easily in the categories of this or the previous chapter. For example, through court-assessed fees, some crime labs are being funded in part per conviction (Koppl and Sacks 2013; Triplett 2013). State law in at least fourteen states requires this practice and it has also been adopted in other jurisdictions (Koppl and Sacks 2013). Triplett (2013) says the practice exists in twenty-four states. One must be attentive to the institutional particulars of any given case to maximize the chance of identifying the relevant incentives tending to promote or discourage expert failure. As a general matter, however, “incentive alignment” is a matter of “market structure.” I have argued that “competition” in the market for expert opinion tends to reduce the incidence of expert failure. In this chapter I develop the point by looking more closely at the structure of the market for expert opinion. THE ECOLOGY OF EXPERTISE
Competition among experts is not simply a matter of the number of experts or the client’s ability to select among experts. For example, as we have seen in Chapter 3, licensing restrictions and professional associations
produce an enforced homogenization of opinion among experts. For the ecology of expertise to minimize the chance of expert error it must include rivalry, synecological redundancy, and free entry. There must be rivalry among experts. There is no rivalry unless clients have at least some ability to select among experts. Without this rivalry, there are only weak incentives for one expert to challenge and possibly correct another. There must also be, of course, multiple experts. But multiplicity is not enough. They must be different one from another. Thus, we require not simple redundancy, but synecological redundancy. We saw in Chapter 3 that professional associations tend to reduce synecological redundancy. Indeed, the point of a profession is to create homogeneity across experts. One professional may be more skilled than another, but they all agree on the knowledge codified by the profession. They all represent the same knowledge base. But if that knowledge base is imperfect in any way, as it will be almost by definition, then the homogenizing tendency of professions will encourage expert failure. I develop these points about professions in the section of this chapter entitled “Professions.” Finally, it is unlikely that the full range of relevant expert opinions will be available to clients if entry is controlled. We require, therefore, free entry as well. As Baumol (1982, p. 2) points out, “potential competition” is more important than the number of incumbent competitors. (Baumol cites Bain 1956, Baumol et al. 1982, and others.) The same logic applies in the market for expert opinion. State support for professional organizations such as the American Medical Association encourages expert failure by creating barriers to entry. To borrow once again from Berger and Luckmann (1966), the outsiders have to be kept out and the insiders have to be kept in. PROFESSIONS
Professions such as medicine, law, and pharmacy, I have said, may often serve to keep outsiders out and insiders in. They provide an interesting intermediate case between full autonomy and the rule of experts. Such professions may be a source of expert failure. On the one hand, we are often free to choose our doctor, our lawyer, our pharmacist. On the other hand, as we saw in Chapter 3, their professional organizations create an epistemic monopoly that limits competition between them. Licensing restrictions support the monopoly position of the professions with the coercive power of the state. It may be that the power of professions would
be inconsiderable without licensing restrictions. In Chapter 3 we saw the lamentations of nineteenth-century “men of science” over the putatively intolerable variety of opinions by expert witnesses claiming the mantle of science. There were no licensing restrictions for “science.” We saw calls for measures to eliminate sources of divergence in the scientific opinions expressed in court. This history supports the conjecture that professions would tend to create neither expert power nor homogeneity of expert opinion without measures of state support such as licensing restrictions. Kessel (1970) explains how licensing restrictions gave the American Medical Association (AMA) the power to restrict the supply of physicians. The system of medical education existing in 1970 was a consequence of the “Flexner report,” which was published by the Carnegie Foundation in 1910. “This report discredited many medical schools and was instrumental in establishing the AMA as the arbiter of which schools could have their graduates sit for state licensure examinations. Graduation from a class A medical school, with the ratings determined by a subdivision of the AMA, became a prerequisite for licensure” (Kessel 1970, p. 268). The report recommended that all medical training in the United States be conducted on the model of the medical school of The Johns Hopkins University. Rather than attempting to “evaluate the outputs of medical schools,” the report placed the “entire burden of improving standards” on “changes in how doctors should be produced” (pp. 268–9). Moreover, the report’s author, Abraham Flexner, “implicitly ruled out” any model of medical education other than “the one he observed at Johns Hopkins” (p. 269). Kessel says, “The implementation of Flexner’s recommendations made medical schools as alike as peas in a pod” (p. 269). Implementing the Flexner report reduced the output of physicians. “Organized medicine – again, the AMA – using powers delegated by state governments, reduced the output of doctors by making the graduates of some medical schools ineligible to be examined for licensure and by reducing the output of schools that continued to produce eligible graduates” (Kessel 1970, p. 267). These changes disproportionately reduced the supply of black physicians. “As a result of the AMA’s and Flexner’s endeavors, the number of medical schools declined from 162 in 1906 to sixty-nine in 1944, while the number of Negro medical schools went from seven to two. Moreover, the number of students admitted to the surviving schools decreased” (p. 270). During the Great Depression, “there was a cutback in admissions to medical schools, with Negroes and Jews bearing a disproportionate share of the reduction. Probably females also bore a disproportionate share of the reduction in admissions” (p. 271). As Kessel
notes with excess delicacy, the Flexner report contains “patronizing” comments on race (1970, p. 270). Kessel describes a professionalization of medicine supported by state licensing restrictions, which reduced the supply of physicians and increased the homogeneity and uniformity of the knowledge base of legally sanctioned medical practice to the disadvantage of the public in general and women and oppressed minorities in particular. Before returning to the theme of professions in the theory of experts, it may be worth spending some time on the racism of the Flexner report. It includes the following passage on “the medical education of the negro”: The medical care of the negro race will never be wholly left to negro physicians. Nevertheless, if the negro can be brought to feel a sharp responsibility for the physical integrity of his people, the outlook for their mental and moral improvement will be distinctly brightened. The practice of the negro doctor will be limited to his own race, which in its turn will be cared for better by good negro physicians than by poor white ones. But the physical well-being of the negro is not only of moment to the negro himself. Ten million of them live in close contact with sixty million whites. Not only does the negro himself suffer from hookworm and tuberculosis; he communicates them to his white neighbors, precisely as the ignorant and unfortunate white contaminates him. Self-protection not less than humanity offers weighty counsel in this matter; self-interest seconds philanthropy. The negro must be educated not only for his sake, but for ours. He is, as far as human eye can see, a permanent factor in the nation. He has his rights and due and value as an individual; but he has, besides, the tremendous importance that belongs to a potential source of infection and contagion. The pioneer work in educating the race to know and to practise fundamental hygienic principles must be done largely by the negro doctor and the negro nurse. It is important that they both be sensibly and effectively trained at the level at which their services are now important. The negro is perhaps more easily “taken in” than the white; and as his means of extricating himself from a blunder are limited, it is all the more cruel to abuse his ignorance through any sort of pretense. A well-taught negro sanitarian will be immensely useful; an essentially untrained negro wearing an M.D. degree is dangerous. (Flexner 1910, p. 180)
Flexner claims an interest in “the physical well-being of the negro,” but the supposed problem of “hygiene” and “self-protection” for whites seems to be more important to him. We read, “self-interest seconds philanthropy.” We are told, “The negro must be educated not only for his sake, but for ours.” Thus, the implied readership is “white,” utterly excluding black people. The report tells its white readership to fear black people as “a potential source of infection and contagion.” When Flexner says that “The negro is perhaps more easily ‘taken in’ than the white” and “his means of
extricating himself from a blunder are limited," we recognize the gross racial trope of intellectual inferiority for black people. Unfortunately, no one should be surprised to see a white racist describing black people as naturally stupid and irrational. But notice also the implication. It would be "cruel" to allow such simple-minded folk to receive medical services from black doctors other than those few "we" have trained in "fundamental hygienic principles." Thus, the forcible suppression of black medical schools is dressed up as a philanthropic act bestowed on them by their moral and intellectual betters. Flexner takes it for granted that white people have permanent dominion over black people. Whites should exercise that dominion with compassion, but without neglecting "self-protection." Flexner's compassion for "the negro" seems to have been constrained by his notion of "self-protection." Let us momentarily set that fact aside, however, to take up the hypothesis that Flexner's compassion for "the negro" was sincere and abundant. If so, it was unaccompanied by any principle of equality. Because blacks are inferior, whites have a duty born of compassion to make important choices for them. Thus, compassion unaccompanied by respect for the equal dignity and autonomy of others becomes an instrument of oppression. It is tempting to conclude that the principle of equality is more important for human welfare than that of compassion, but we are probably unable to correctly imagine a world in which we are all perfectly equal, but devoid of compassion. In any event, all attempts at imagining such a world would be empty speculation. I think we can conclude, however, that compassion without equality may lead to oppression. As Adam Smith and Bernard Mandeville both clearly recognized, compassion is a virtue. It is a good thing. But it may turn into a bad thing if the compassionate think themselves superior beings with a right or duty to choose for others. Leonard (2016, p. xii) notes the "unstable amalgam of compassion and contempt" of Progressive Era reformers. I doubt, however, that this evil amalgam is unstable. Flexner's racism and the AMA's success in shutting down black medical schools illustrate the claim that licensing restrictions and other barriers to entering a profession are likely to be disproportionately harmful to the least privileged and most oppressed members of society. Monopoly power tends to increase the scope for bigotry to operate. According to OpenSecrets.org, the AMA spent well over $300 million on lobbying over the period 1998–2016, a number exceeded by only two other organizations (Open 2016). Lobbying expenditures reflect both the positive efforts of special interests to acquire a special benefit and defensive efforts against predation by state actors such as Congress. Nevertheless, the
large sums spent by the AMA may suggest that it continues to act in ways that support the interests of physicians more effectively than the general public. This view is supported by the analyses in Svorny (2004) and Svorny and Herrick (2011). Svorny (2004) excoriates licensing restrictions in medicine. She says: “[M]any economists view licensing as a significant barrier to effective, cost efficient health care. State licensing arrangements have limited innovations in physician education and practice patterns of health professionals” (p. 299). Svorny and Herrick (2011) note that these licensing restrictions hurt the poor most of all. A profession serves to enforce an official view, which the professionals tend to accept and perpetuate. We have seen Berger and Luckmann (1966) discuss how monopoly experts may use “incomprehensible language,” and “intimidation, rational and irrational propaganda . . . mystification,” and “manipulation of prestige symbols,” to ensure that outsiders are “kept out” and insiders “kept in” (p. 87). Professions help to provide such “propaganda” and “mystification.” In the United States, “federal, state, and local governments today impose an array of limits” on professional speech (Sandefur 2015–16, p. 48). The American legal system “regulates communications between licensed professionals and their clients” (Zick 2015, p. 1291). Presumably, these regulations serve a variety of purposes and arose from a variety of causes. Nevertheless, one important function seems to be creating uniformity of opinion within the profession. This issue deserves more attention than it has yet received in the American law literature. Haupt defends such restrictions on the grounds that the professional represents their profession and its supposed “body of knowledge” when communicating with a client. And, with Haupt, it is the profession, not the professional, that produces truth. The approach requires not only a non-SELECT view of knowledge, a top-down view of knowledge, but also a naïve view of professional interests. They are “producing truth.” But we have seen the effects of professions in Chapter 3. To establish reasonable doubt for the defendant in a criminal case may require deviations from the professional “consensus,” whose very existence is often a product in part of state coercion. Let us consider the example of the chemical reaction from Chapter 3. Let us also consider the Flexner report, which imposed the spurious “knowledge” that blacks were inferior as well as the probably spurious “knowledge” that there is but one true way to do medical education. We saw in Chapter 8 that expert errors may be correlated and that, consequently, multiplicity in expert opinion could produce a false confidence in them. State-supported professional organizations tend to produce
homogeneity of opinions and, therefore, increase the correlation of errors across experts in a given profession. This reduction in synecological redundancy tends to encourage expert error. Something like this seems to have happened in the rape and murder trial of Keith Harward, who was falsely convicted in 1982 and exonerated in 2015. (He thus spent thirty-three years in prison for a rape and murder he could not have committed.) Harward has explained to me that bite-mark evidence was central to his case. Six odontologists all supported the now demonstrably false claim that he had bitten the victim. It is Harward's conjecture that this perverse uniformity of opinion reflected the professional commitments and affiliations of these experts. This conjecture seems plausible, although the same correlation of errors might also be explained by the likely fact that each of them had access to the police's case file before giving an opinion. When professional associations are linked to licensing restrictions, as with the AMA, the speech of professionals in their communications with clients may be regulated. Zick (2015) examines three cases: "restrictions on physician inquiries regarding firearms, 'reparative' therapy bans, and compelled abortion disclosures." He defines "rights speech" as "communications about or concerning the recognition, scope, or exercise of constitutional rights" (p. 1290) and laments the existence of legal restrictions on the "rights speech" of professionals with their clients: Regulations of professional rights speech are, and ought generally to be treated as, regulations of political expression based on content. As such, they raise important free speech concerns and merit strict judicial scrutiny. The fact that the speakers are licensed professionals, and their audiences are clients or patients, does not eliminate the need to guard against state suppression or compulsion of speech—particularly, although not exclusively, when the speech concerns or relates to constitutional rights. (p. 1359)
Haupt (2016) argues that restrictions on the speech of professionals when interacting with clients can be made to survive First Amendment challenges. Her argument relies on the idea that the professional is representing a supposed "body of knowledge" to the client. This defense of speech restrictions assumes that the professional "body of knowledge" should be homogeneous across professionals, which is consistent with my claim that professional associations often serve the function of fostering uniformity in expert opinion. Professionals such as doctors and lawyers may represent sources of oppressive power to many of their clients. Some clients may be able to judge when a professional can be trusted and when they cannot.
In Chapter 2 we saw Goldman (2001) and others discuss strategies for nonexperts to judge experts. The simple expedient of getting a second opinion is one obvious strategy, but not everyone can afford it, and the value of a second opinion may often be reduced by the professional attachments of the expert. The limited value of second opinions is illustrated by Brandon Mayfield's fingerprint expert, who supported the state's identification of Mayfield with the crime-scene evidence. Formal education often helps in the acquisition of good judgment regarding experts. Wealthy people are more likely to have friends and family in professions such as law and medicine. Such connections may often reduce the risks for such persons of suffering bad consequences of expert failure. Only so many people are rich and well educated, however. And even for such persons it can sometimes be hard to judge the quality of expert advice. The anger and resentment that nonexperts sometimes feel toward experts have a good foundation in many cases.
REGULATION
I have emphasized market structure, contrasting competition with monopoly. Markets are also subject to overt state regulation, which shapes the market structure, the ecology of expertise. Just as in other markets, such "regulation" tends to produce results contrary to the stated goals of its advocates. Many would-be reformers of forensic science seek some form of regulation. In particular, the National Academy of Sciences (NAS 2009) has called for the creation of a national regulatory body to be called the National Institute of Forensic Science (NIFS). It seems fair to say that the great majority of reformers in forensic science favor such "oversight" and "regulation." Rehg (2013) seems to favor the regulation of "dissenting experts" with minority opinions on scientific questions such as global warming. He castigates the "lax attitude toward dissent" of some authors. He notes approvingly the epistemic value of dissenting opinions. "But a dissenter who engages in political advocacy is, like any citizen, morally responsible for his or her political judgments and advocacy" (p. 101). It seems only reasonable to note such a moral responsibility. But Rehg goes on to say, "Experts should be held responsible, at the very least, for labeling their opinions with something like officially approved safety labels or warnings, which signal their (un)reliability. Officially appointed panels like the Intergovernmental Panel on Climate Change (IPCC) already do this, and I do
not see why dissenters should get a free pass" (p. 102). While this statement is ambiguous, it seems to call for state regulation of expert dissent. Proposals for the "regulation" of experts run into a problem of infinite regress that we might call the "turtles problem." We need to regulate experts. We need other experts for the job. Call these experts "meta-experts." The same logic that tells us to regulate the experts tells us to regulate the meta-experts too. And the meta-meta-experts. And the meta-meta-meta-experts. And so on. Quis custodiet ipsos custodes? This problem of infinite regress may remind the reader of the story of an old woman at a public science lecture. She upbraids the speaker for foolishly claiming the earth is a ball spinning in empty space. The earth, she insists, is a flat plate resting on the back of a turtle. He asks, "On what does the turtle rest?" Another turtle. And what does the second turtle rest on? "It's no use, young man," she replies triumphantly, "it's turtles all the way down." For some advocates of regulation, it's experts all the way down. Regulation has a turtles problem. It also creates the risk of regulatory capture. Regulatory and oversight bodies are supposed to constrain special interests and protect the general interest. When they instead serve special interests, they have been "captured." An industry must offer something in return if it is to capture a regulator. The reciprocation may consist in campaign contributions to members of Congress providing oversight of the regulatory body. It may take any of an indefinitely large number of other forms. Capture is the norm, unfortunately, which makes beneficial change hard. The first great regulatory body in the United States was the Interstate Commerce Commission (ICC), which was established in 1887 to control railroads. The Interstate Commerce Act prohibited price discrimination and required that "all charges . . . shall be reasonable and just." This language seems to constrain the railroads, and yet the railroads supported the act. Posner (1974, p. 337) explains: "The railroads supported the enactment of the first Interstate Commerce Act, which was designed to prevent railroads from practicing price discrimination because discrimination was undermining the railroads' cartels." The interest that captures a regulator may not be the regulated industry. "Crudely put, the butter producers wish to suppress margarine and encourage the production of bread" (Stigler 1971, p. 6). For example, the railroads sometimes used state regulators to suppress trucking. In the 1930s, "Texas and Louisiana placed a 7,000-pound payload limit on trucks serving (and hence competing with) two or more railroad stations, and a 14,000-pound limit on trucks serving only one station (hence not competing with it)" (Stigler 1971, p. 8).
The theory of supply and demand predicts that a commodity sold on a competitive market will end up in the hands of those who value it most, as measured by willingness to pay. The theory does not tell us, however, who is willing to pay the most. Similarly, the theory of regulatory capture does not tell us who will win in the contest of interests to capture a regulator. It is a continuous fight; victory may be partial and fleeting. Nevertheless, we can say that concentrated interests aid victory. Well-organized groups with relatively large and homogeneous interests have an advantage in the contest. Calls for the regulation of experts should be tempered by the risk of regulatory capture. Consider, for example, the NAS (2009) proposal to create NIFS. A coalition of law enforcement agencies may be in the best position to capture a federal regulator of forensic science. According to Bureau of Labor Statistics data, in 2012 the number of employees in law enforcement exceeded 1.3 million (BLS 2015). These people are part of a relatively large, concentrated, well-organized, and homogeneous interest group. An episode from 2013 suggests that some such coalition exists and is capable of acting cooperatively. On August 30, 2013, a consortium of law enforcement groups wrote a letter to the U.S. Attorney General strongly condemning his new policy of respecting the liberal state laws on marijuana in Colorado and Washington (Stanek et al. 2013). Commenting on this episode, one journalist has opined, "[P]olice organizations have become increasingly powerful political actors" (Grim 2013). Is there any other interest group, such as the innocence movement, in a good position to compete with law enforcement? And, if so, for how long? Cole (2010) recognizes the risk of regulatory capture in forensic science. He says of the proposed NIFS, "If it is 'captured' by law enforcement, it becomes less obvious that it would be a force for improvement rather than stagnation" (p. 436). Law enforcement has a distinct advantage in the struggle to capture a forensic science regulator. But victory is not guaranteed, as illustrated by an episode involving the National Commission on Forensic Science, which was created in 2013 jointly by the Department of Justice and the Commerce Department's National Institute of Standards and Technology (NIST). One of the duties of the Commission as stated in its charter is "[t]o develop proposed guidance concerning the intersection of forensic science and the courtroom." When, however, the Commission's Subcommittee on Reporting and Testimony came forward with a proposal (authored by law professor Paul Giannelli) for increased disclosure of
criminal forensics the Commission’s cochair from the Department of Justice determined that the proposal was beyond the scope of the Commission’s charter. Subcommittee cochair Judge Jed Rakoff resigned in protest, citing the Commission’s charter as clearly creating scope for just such a proposal. The very next day, January 30, 2015, the Commission’s cochair reversed this decision and invited Judge Rakoff back onto the Commission. It was the Committee’s cochair and not the Department of Justice that interpreted the Committee’s charter to exclude the Subcommittee’s proposal, and the decision was quickly reversed. This episode nevertheless illustrates the sort of thing that may happen in the struggle to capture regulators. The Commission was not renewed at the expiration of its charter in April 2017. MONOPSONY AND BIG PLAYERS
Monopsony, the existence of only one buyer in a market, is a source of expert failure. It makes even nominally competing experts dependent on the monopsonist and correspondingly unwilling to give opinions that might be contrary to the monopsonist's interests or wishes. The police and prosecution are often the only significant demanders of forensic science services in the United States. There is also a kind of narrow monopsony that may sometimes encourage expert failure. An expert is hired to give an opinion to their client. The client is the only one demanding an opinion for that client. This exclusivity is a narrow monopsony. The expert may have other customers, but none of them is paying the expert to give this particular client an opinion. If it is sufficiently easy for third parties to observe the advice given to the client or sufficiently probable that this advice will be revealed, then the expert has an incentive to give opinions that will seem reasonable to other prospective clients. If not, the expert has an incentive to offer pleasing opinions to the client even if that implies saying something unreasonable or absurd. Toadies and yes men respond to such incentives. Michael Nifong, the district attorney in the Duke rape case, induced the private DNA lab he hired to withhold exculpatory evidence (Zucchino 2006). The lab was private and thus nominally "competitive." Presumably, it could have declined this particular request without serious harm to its bottom line. But the lab chose to go along with Nifong's desire to hide evidence. The District Attorney's narrow monopsony created an incentive to do so. Only Nifong had effective control of the DNA evidence in this case. If the client seeks multiple opinions and there is synecological redundancy on the supply side, then different suppliers may give different expert
opinions. In this case, each expert has an incentive to anticipate the opinions of other experts and explain to the client why his or her opinion is best. Doing so increases the chance that the client will review the expert favorably and, perhaps, return to the same expert in the future. Again, competition tends to make experts less like mysterious wizards and more like helpful teachers. Butos and McQuade (2015) argue that the United Nations Intergovernmental Panel on Climate Change (IPCC) is a "Big Player" in research on climate change. Yeager and I define a Big Player as "anyone who habitually exercises discretionary power to influence the market while himself remaining wholly or largely immune from the discipline of profit and loss" (Koppl and Yeager 1996, p. 368). Koppl (2002) develops the theory of Big Players in relative detail. According to Butos and McQuade, the IPCC has a disproportionate, if indirect, influence on the funding of climate change research. It has "become a dominant voice in the climate science community, and its summary pronouncements on the state of the science carry significant weight among scientists" (p. 189). Prudently, Butos and McQuade (2015) do not pretend to have an opinion on whether human activity is a significant contributor to adverse climate change. They argue, instead, "that a confluence of scientific uncertainty, political opportunism, and ideological predisposition in an area of scientific study of phenomena of great practical interest has fomented an artificial boom in that scientific discipline" (p. 168). They describe how "the herding induced by the IPCC in the scientific arena interacts with the government-funding activities in mutually reinforcing ways" (p. 167). The Big Player influence they chronicle would seem to increase the risk of expert failure. Whether fears of "global warming" are generally too high, too low, or just right, Butos and McQuade point to an important general truth. Big Players in science and other areas of expertise increase the risk of expert failure.
COMMENTS ON THE MARKET FOR IDEAS
I have extolled the benefits of competition in the market for expert opinion, at least when the “competitive” market has rivalry, synecological redundancy, and free entry. But even with competition, expert failure is possible. In the market for expert opinion, “total truth production” (Goldman and Cox 1996) can be meager. The market for expert opinion is part of the “market for ideas.” Thus, my theory leads to pessimism about the market for ideas.
Liberals in the tradition of Mandeville, Smith, Hume, and Hayek should not think it somehow a problem or disappointment if the market for ideas does a poor job of inducing true beliefs in its participants. The synecological and often tacit knowledge of how to do things is brought into conformity with the purposes to which it is put through a process of evolutionary shaping. These practices tend toward a kind of rough and ready conformity with our needs, even though evolution does not produce optimality. But any tendency toward true beliefs or correct statements as opposed to useful practices is likely to be weaker. As we have seen with Ioannidis (2005), Tullock (1966), and others, false beliefs may persist even in science. The relatively poor quality of our propositional knowledge is consistent with the generally skeptical attitude of liberalism. The doctrine of “consumers’ sovereignty,” as W. H. Hutt (1936) dubbed it, holds that consumer decisions to buy or not to buy determine the production of goods and services. “Competitive institutions are the servants of human wants” (Hutt 1936, p. 175). This doctrine of consumers’ sovereignty applies no less forcefully in the market for ideas than in the market for men’s shoes. Truth is only one of many things demanders want from ideas. And sometimes truth has nothing to do with it. Often, demanders in the market for ideas want magical thinking. By “magical thinking” I mean an argument with one or more steps that require something impossible. Unfortunately, experts often have an incentive to engage in magical thinking. Under competitive conditions in the market for ideas, the demand for magical thinking meets a willing supply. As Alex Salter (2017, p. 1) has said, in the market for ideas, competition occurs “on margins unrelated to truth.” Goldman and Cox (1996, p. 18) say, “if consumers have no very strong preference for truth as compared with other goods or dimensions of goods, then there is no reason to expect that the bundle of intellectual goods provided and ‘traded’ in a competitive market will have maximum truth content.” Coase (1974) takes a similarly skeptical view of the market for ideas. And yet Goldman and Cox (1996, p. 11) say that “Certain economists, including . . . Ronald Coase, simply assume the virtues of the free market for ideas (or assume that others grant these virtues) and proceed to defend the free market for goods as being entirely parallel with the market for ideas.” But in the cited article, Coase (1974) twice says that there seems to be “a good deal of ‘market failure’” in the market for ideas. Coase’s claim was not that the market for ideas is somehow efficient or otherwise wonderful. His point concerned asymmetry: It seems inconsistent to support government action to correct “market failure” in the market for
commodities while repudiating such action in the market for ideas. Coase is careful to point out that this asymmetry does not depend on the spurious assumption that the market for ideas is structurally identical to commodity markets. "The special characteristics of each market lead to the same factors having different weights, and the appropriate social arrangements will vary accordingly." But, "we should use the same approach for all markets when deciding on public policy." And yet, Coase laments, many scholars and intellectuals assume that state intervention is generally skillful and beneficial in the one domain, clumsy and destructive in the other. "We have to decide whether the government is as incompetent as is generally assumed in the market for ideas, in which case we would want to decrease government intervention in the market for goods, or whether it is as efficient as it is generally assumed to be in the market for goods, in which case we would want to increase government regulation in the market for ideas" (Coase 1974, p. 390). If imperfections in the market for ideas are not always best handled by state regulation, then perhaps imperfections in commodity markets are not always best handled by state regulation. The plausible inference runs the other way too. If the perils of regulation should stay our hand from state intervention in commodity markets, then perhaps the considerable infirmities of the market for ideas are no more compelling an inducement to state intervention. The liberal defense of free speech is not based on any claim that the market for ideas somehow eliminates error or erases human folly. It is based on a comparative institutional analysis in which most state interventions make a bad situation worse. Free speech is the worst possible rule, except for all the others.
EPISTEMIC SYSTEMS DESIGN
The economic theory of experts studies how market structure determines the risk of expert failure. The broad generalization that competition tends to produce better results than monopoly holds in the market for expert opinion as in other markets. It might then seem a simple matter to turn from “positive” to “normative” analysis. My prescription, it would seem, must be “let there be competition!” It is not so easy, however, to “let there be competition.” First, as I have tried to emphasize, details matter. I have spoken of the “ecology of expertise” partly in hopes of getting past the potentially empty categories of “competition” and “monopoly” to focus on the details of market structure. The term “competition” is vague and may include
institutional structures that promote, or do little to prevent, expert failure. A good ecology of expertise will generally have rivalry, synecological redundancy, and free entry. Demanding "competition" is not a design ensuring these features are present. Second, design is difficult. Devins et al. (2015) argue that constitutional design is impossible. Designing individual markets is less ambitious than designing constitutions, but fraught with difficulty nevertheless, as Smith (2009) notes. The methods of experimental economics can help us to overcome at least some of the difficulties of market design. Koppl et al. (2008) is an example. That study tested the consequences of using multiple experts in the context of forensic science and provides experimental evidence that this ameliorative measure has the potential to improve system performance. They had "senders" report to "receivers" on evidence simplified to one of three shapes (circle, triangle, square). The senders are analogous to forensic scientists, and the receivers are analogous to triers of fact (judges or jurors). Bias was induced in some senders by giving them an incentive to issue a false report. The receivers were asked to conclude what shape (circle, triangle, square) the sender(s) had actually been shown. In some cases receivers got one report, in others multiple reports. Because the "errors" in reports were independent (senders did not know the content of reports from other senders or what incentives they had received), the receivers made fewer errors when they received multiple reports. The results of Koppl et al. suggest that competitive epistemics (as we might call such redundancy) will improve system performance if the number of senders is three or higher, but may not improve performance if the number of senders is two. Moreover, further increasing the number of senders beyond three does not seem to improve system performance (Koppl et al. 2008, p. 153). Interestingly, in one set of experiments, the use of multiple senders seems to have degraded the average performance of senders while improving the performance of the system. Robertson (2010, pp. 214–19) makes a similar proposal for "adversarial blind" expertise in civil cases. The use of random, independent, multiple examinations is a form of blinding. Examiners will not know whether other labs have examined the same evidence and, if so, what the results of such examinations were. They are blinded from this information. Such blinding gives the examiner an increased incentive to avoid scientifically inappropriate inferences so as to minimize what Koppl et al. (2015c) have called "reversal risk," which they define as "the risk that a decision will later be determined to have been mistaken."
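The redundancy logic at work in these experiments can be illustrated with a small Monte Carlo sketch. The Python script below is not the design or the data of Koppl et al. (2008); the parameters (accuracy, p_biased, bias_strength, and the trial counts) are illustrative assumptions, and the majority-verdict rule is only one simple way a receiver might aggregate reports. Under those assumptions, it shows a receiver who pools three or more independent reports erring less often than one who relies on a single report, two reports helping little, and the gain shrinking when the senders share a common bias, as experts in a homogenized profession might.

```python
import random

SHAPES = ["circle", "triangle", "square"]


def sender_report(truth, accuracy, bias_target=None, bias_strength=0.0):
    """One sender's report on the evidence.

    An unbiased sender reports the truth with probability `accuracy` and
    otherwise errs at random. A biased sender reports its bias target with
    probability `bias_strength` (the induced incentive to issue a false
    report) and behaves like an unbiased sender the rest of the time.
    """
    if bias_target is not None and random.random() < bias_strength:
        return bias_target
    if random.random() < accuracy:
        return truth
    return random.choice([s for s in SHAPES if s != truth])


def receiver_verdict(reports):
    """The receiver picks the most common report; ties are broken at random."""
    counts = {s: reports.count(s) for s in set(reports)}
    top = max(counts.values())
    return random.choice([s for s, c in counts.items() if c == top])


def error_rate(n_senders, trials=50_000, accuracy=0.8,
               p_biased=0.3, bias_strength=0.9, correlated=False):
    """Fraction of trials in which the receiver's verdict is wrong."""
    errors = 0
    for _ in range(trials):
        truth = random.choice(SHAPES)
        # A single shared bias target models a homogenized profession with
        # correlated errors; independent targets model synecological redundancy.
        shared_target = random.choice([s for s in SHAPES if s != truth])
        reports = []
        for _ in range(n_senders):
            if random.random() < p_biased:
                target = shared_target if correlated else random.choice(
                    [s for s in SHAPES if s != truth])
                reports.append(sender_report(truth, accuracy, target, bias_strength))
            else:
                reports.append(sender_report(truth, accuracy))
        if receiver_verdict(reports) != truth:
            errors += 1
    return errors / trials


if __name__ == "__main__":
    for n in (1, 2, 3, 5):
        print(f"{n} independent sender(s): error rate ~ {error_rate(n):.3f}")
    print(f"3 senders, correlated bias: error rate ~ {error_rate(3, correlated=True):.3f}")
```

Run as a script, it prints approximate error rates for one, two, three, and five independent senders, and then for three senders who share a single bias target; the contrast between the last two lines is the correlated-error problem discussed earlier in this chapter.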
The experimental results of Koppl et al. (2008) may be surprising. But to improve system reliability, it is not necessary to improve the reliability of the individual units within the system (senders). A chain is only as strong as its weakest link; a net is stronger than its individual knots, and a network is stronger than its individual nodes. A net may be stronger than a chain even though the average knot in the net may be weaker than the average link in the chain. Experiments such as Koppl et al. (2008) form part of the field of "epistemic systems design," which is the application of the techniques of economic systems design to issues of veracity, rather than efficiency. That study adapted the techniques of economic systems design (Smith 2003) to aid in discovering institutional changes that will improve not the efficiency, but the veracity of expert markets. The change in normative criterion from efficiency to veracity creates epistemic systems design. Economic systems design uses "the lab as a test bed to examine the performance of proposed new institutions, and modifies their rules and implementation features in the light of the test results" (Smith 2003, p. 474). It has produced a major change in how researchers design economic institutions. Epistemic systems design may have a similar potential to change how researchers design institutions. Examples include Koppl et al. (2008), Cowen and Koppl (2011), and Robertson (2011). Many past studies are precursors in some degree. Blinding studies such as Dror and Charlton (2006) and Dror et al. (2006) are examples to at least some degree, as are many studies in social psychology such as Asch (1951). Epistemic systems design is possible because we construct the truth in an experimental economics laboratory. We are in the godlike position of saying unambiguously what the truth is and how close to it our experimental subjects come. (On our godlike position, compare Schutz 1943, pp. 144–5.) We construct the truth, the preferences, and the institutional environment of choice. We construct, in other words, the world in which we place our subjects. From this godlike perspective we are in a position to compare the epistemic properties of different institutional arrangements. When we return from our constructed world to the real world, we lose our privileged access to the truth and return to the normal uncertainty common to all. But we carry with us a knowledge of which institutional structures promote the discovery and elimination of error and which institutional structures promote error and ignorance. This knowledge can be carried from the constructed world of the laboratory to the natural world of social life because of the common element in both worlds, namely, the human mind. The one vital element of the experimental world
that is not constructed is the human mind, which makes choices within the institutional context of the laboratory experiment. It is this same element that makes choices in the institutional structures of the natural world of social life. Thus, laboratory experiments of the sort described in Koppl et al. (2008) cannot tell us which particular expert judgments are correct and which incorrect, but they can tell us that the monopoly structure of forensics today produces a needlessly high error rate. When applied to pure science, the techniques of epistemic systems design give us an experimental approach to science studies. In the past, disputes in this field could be addressed empirically only through historical research and field studies. It now seems possible to address a significant fraction of them with the tools of epistemic systems design. Thus, it seems possible to address the role of the network structure of pure science in producing reliable knowledge. Epistemic systems design might help us to understand which social institutions produce truth and which do not. The related strategy for the discovery of truth and the elimination of error is indirect. Rather than attempting to instruct people in how to form true opinions, we might reform our social institutions in ways that tend to induce people to find and speak the truth. Comparing the epistemic properties of alternative social institutions is "comparative institutional epistemics." At the margin it may be more effective to give people an interest in discovering the truth than to invoke the value of honesty or teach people the fallacies they should avoid. When we rely on experts to tell us the truth, it seems especially likely that institutional reforms will have a higher marginal value than exhortations to be good or rational. If virtue and rationality are scarce goods, we should craft our epistemic institutions to economize on them.
12
Expert Failure in the Entangled Deep State
The Austrian school of economics made an epistemic critique of Soviet-style central planning (Mises 1920; Hayek 1935; Boettke 1998). The theory of expert failure also leads to an epistemic critique of central planning, which is an extreme form of the rule of experts. Central planning entailed many of the causes of expert failure discussed in this volume. The planning committee had a monopoly on planning advice. The state was the monopsonistic buyer of expert opinions on planning. There was often a leader who was a Big Player with a parochial interest in the planning advice given. There was no entry, rivalry, simple redundancy, or synecological redundancy. Planning experts chose for the people rather than merely advising them. These experts were generally more interested in avoiding perceived error – or execution – than in providing good advice. Therefore, their sympathies and incentives were not aligned with the general welfare. The planning process was a complex, tightly coupled system, and the economy being planned was complex, uncertain, and indeterminate. The failure of Soviet-style central planning is a direct implication of information choice theory. Considering the largely "Austrian" origins of the theory I have tried to develop in this volume, it is not surprising that it leads to an epistemic critique of central planning. It may be less obvious that it leads to an epistemic critique of what I shall call the "entangled deep state."
EXPERT FAILURE AND AMERICA'S ENTANGLED DEEP STATE
I will argue that the "military–industrial complex," "national security state," or "deep state" in the United States is a real phenomenon. It is a form of the rule of experts and thus invites expert failure. Unlike some other salient discussions of the deep state, however, I will emphasize here
the multiplicity of competing, parochial, and inconsistent interests at work in the American deep state. My critique of the deep state will be epistemic. Although I uphold the values of freedom and equality, the theory of expert failure is meant to be value-free in the Weberian sense that it does not require, assume, propound, or advocate any values. Like any other human product, it is, of course, value-motivated. Writing this book had for me a value greater than the activities it displaced. And, unsurprisingly, the value system supporting my preference includes a preference for liberty over tyranny, equality over hierarchy. Nevertheless, the point of the critique I develop here is not to condemn tyranny or hierarchy, nor to vilify the persons whose choices have (perhaps unintendedly) given us the American deep state. The point is to analyze it scientifically. A treatise on bacteriology does not lose its objectivity if the author, accepting the human viewpoint, considers the preservation of human life as an ultimate end and, applying this standard, labels effective methods of fighting germs good and fruitless methods bad. A germ writing such a book would reverse these judgments, but the material content of its book would not differ from that of the human bacteriologist. (Mises 1966, p. 54).
Rather than condemning the American deep state, I intend to show that the entangled deep state will produce expert failure. The phenomenon I address is known by at least three names. In chronological order, they are the “military–industrial complex,” the “national security state,” and the “deep state.” The first term may be the least suited to my purpose. It identifies the simple fact that there is a large military and a large arms industry in the United States. While this “conjunction” (as Eisenhower put it) is generally viewed with some sort of alarm or apprehension, it does not necessarily imply the rule of experts or, indeed, any evil. The other two terms generally connote the rule of experts and, correspondingly, some sort of abrogation of democracy. But the writers using these terms tend to think of the deep state as an internally harmonious body with common interests, whereas the deep state is more like an arena in which conflicting interests do battle. Thus, I will use the invented term “entangled deep state” to identify the phenomenon I am criticizing. I explain the term presently. Warnings against the deep state go back at least as far as Dwight Eisenhower’s (1961) farewell address. In that speech, he issued two warnings, only one of which is often remembered. In his more famous warning, Eisenhower (pp. 1038–9) noted a “conjunction of an immense military
establishment and a large arms industry" in the United States. This "conjunction" was "new in the American experience," he said. "In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial complex. The potential for the disastrous rise of misplaced power exists and will persist." Less famously, Eisenhower also warned of a "technological revolution" that had changed "the conduct of research" in American universities: "Akin to, and largely responsible for the sweeping changes in our industrial–military posture, has been the technological revolution during recent decades." Eisenhower said, "Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers." In this situation, "The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded." Eisenhower saw a "danger that public policy could itself become the captive of a scientific-technological elite." In other words, the deep state's Big Player role in university research could promote and support the rule of experts. In this second and less famous warning, Eisenhower invokes the rule of experts and warns against it. But this second warning did not stick in the American consciousness. The earliest reference I know of to the "national security state" is Michael Reagan (1965, p. 5). He says, "In terms of federal expenditure, ours is more a 'national security state' than a 'welfare state.' Not only have national security expenses increased, but a broadening of the security concept to include many nonmilitary programs has also led to an increase in the complexity of government" (p. 5). (This seems to be the earliest reference in JSTOR.) Price (1998, p. 390) defines the "National Security State . . . as the economic and political strategies and actions undertaken by a variety of governmental and business policy makers that matured in the Cold War United States for the purpose of protecting and expanding US elites' economic interests." The term "deep state" seems to have been first applied to modern Turkey. Watts (1999, p. 639) casually refers to "the so-called Turkish deep state" when conveying widely shared suspicions that it was "targeting pro-Kurdish activists for assassination." (This passage is the earliest reference to the "deep state" that I found on JSTOR.) Kasaba and Bozdoğan (2000, p. 19) define the term when they say that "each day the [Turkish] media probes into what is euphemistically referred to as the 'deep state' and seeks to expose the vast network of corruption that linked death squads, crime
syndicates and the highest levels of the government.” Apparently, the term arose in journalism to identify a political system in which formal democracy was largely or wholly superseded by the coordinated actions of a coalition of criminals and corrupt state actors such as military officers, politicians, state bureaucrats, and judges. Yilmaz’s (2002, p. 130 and n. 73, p. 130) discussion of “the militarist ‘deep state’ elite” adds corrupt “businessmen” and “media tycoons” to the list. Gökalp and Ünsar (2008, n. 18, pp. 100–1) define the deep state as “the hidden informal networks between the intelligence, security forces, party politics, and leaders of organized crime aiming to silence and eliminate the resistant voices and forces challenging the present status quo of the political system.” The earliest reference I know of to an American deep state is Scott (2007). In his glossary he says the term “is used to refer to a closed network said to be more powerful than the public state. The deep state engages in false-flag violence, is organized by the military and intelligence apparatus, and involves their links to organized crime” (p. 268). Elsewhere, he seems to equate the “deep state” with the supposed power of “the top 1 percent” (p. 3). He refers to “the top 1 percent’s direct or indirect control of certain specific domains of government” (p. 4). Scott says that his book “looks beyond the well-defined public entities of open politics to include the more amorphous and fluid realm of private control behind them. This realm of private influence, the overworld, is a milieu of those who either by wealth or background have power great enough to have an observable influence on their society and its politics” (p. 4). Scott’s characterizations of the American deep state exemplify a widespread tendency to see the deep state as a unified entity pursuing common interests. Scott does, however, distinguish the overarching “deep state” from “the deep state’s resources in the military” (p. 275), which he calls the “security state.” He explains, “Those parts of the government responding to their influence I call the ‘deep state’ (if covert) or ‘security state’ (if military)” (p. 4). He notes that the security state and the deep state (in its nonmilitary aspects, presumably) “respond to different segments of the overworld and thus sometimes compete with each other” (p. 275). Thus, he does at some moments recognize two competing elements within the deep state. But he stops short of a more ramified vision of power struggles in the United States. Scott seems to include Charles and David Koch in the deep state (pp. 22 and 97). As far as I can tell, he does so because they have engaged in persuasion meant to promote “the survival of the free enterprise system” (p. 22). He seems to equate the personal spending of the Koch brothers
with “corporate spending on advocacy advertising” on the grounds, I suppose, that their personal riches come from corporate earnings. And such spending is somehow also “lobbying” (p. 97). These gaffes reflect, I think, Scott’s unfortunate tendency to equate the American deep state with any political activity he dislikes. Scott seems to think of himself as “left” and the Koch brothers as “right.” Ipso facto, the Kochs are a part of the deep state. Lofgren (2016, p. 5) says, “I use the term [‘Deep State’] to mean a hybrid association of key elements of government and parts of top-level finance and industry that is effectively able to govern the United States with only limited reference to the consent of the governed as normally expressed through elections.” Lofgren is right to say that “The Deep State is the big story of our time” (p. 5). Like Scott, Lofgren absurdly insinuates that the Koch brothers (Charles and David) are somehow a part of America’s deep state. Noting Charles Koch’s important role in founding and funding the libertarian think tank, Cato Institute, Lofgren says that “the habit” of “ostensibly charitable organizations” such as Cato “of providing an income, a megaphone, and the veneer of a respectable job for out-of-work political operatives may be their most important function in assuring a continuity of personnel for the Deep State” (p. 56). Lofgren seems to think that persuasion is bad if you are rich. To his credit, Lofgren equitably lumps George Soros with Charles and David Koch in his mistaken lament that rich people sometimes seek to persuade others of their political opinions. “Many so-called educational foundations are nothing more than overt political advocacy organizations for wealthy donors like the Koch brothers or George Soros” (p. 224). Like Scott, Lofgren seems to think that the “Deep State” is somehow a “free market” entity. They share this belief even though by definition the American deep state is about power and not competition, secrecy and not openness, command and not trade. This view seems rooted in part in a failure to understand “free market” arguments. For example, Lofgren notes, Milton Friedman “once declared that pure food and drug laws were cumbersome and unnecessary” (p. 133). Such a view seems to be literally unimaginable to Lofgren. He says, “One wonders whether Friedman was really that naïve, or whether there was some calculated bad faith involved” (p. 133). So impossible is it for Lofgren to imagine that “free markets” might provide satisfactory regulation of food and drug quality that he cannot but question the intellectual integrity of someone who says they would. Lofgren’s apoplectic smear of Friedman’s character does not identify any material interests that might have driven Friedman to lie about the
effects of "pure food and drug laws." We are left to wonder what "calculated" interest could have prompted Friedman to lie about his views on this matter. Past critiques of the American deep state have been marred by two important errors. First, they tend to falsely equate the deep state with "free markets" and the like. Second, they tend to assume that members of the American deep state constitute a largely homogeneous group with common interests. In Chapter 5 I railed against mistaken interpretations of "free markets," "competition," and related terms. In any appropriate interpretation of such terms, the "free market" is largely free of discretionary decision making by state actors. As I noted in Chapter 1, the "rule of law" is central to the liberal ideal of liberty. The rule of experts and the rule of law are incompatible. For the experts of the administrative state are hired precisely to replace "regular law," which Dicey (1982) extolled, with "wide discretionary authority," which he abhorred. Whatever our precise definition of the deep state, it is the rule of experts. It requires the exercise of "wide discretionary authority" by state experts, including national security experts who are privy to state secrets. It is thus in direct contradiction to "regular law" and, therefore, to both (1) liberalism in the "classical" or "free market" sense of David Hume and Adam Smith and (2) the rule of law. In this sense, the deep state, like the administrative state, is lawless. Participants in the American deep state do not have uniform interests. They have a great variety of competing and parochial interests. Lofgren errs, therefore, in describing "the rich" as "plutocrats" who are "ruling the place, but [are] not of it" (pp. 124–5). I do not imagine that Lofgren thinks all rich Americans are attempting to "rule the place." His error, rather, is assuming that what is good for one plutocrat is good for another. In an earlier chapter I quoted Stigler (1971, p. 6) saying: "Crudely put, the butter producers wish to suppress margarine and encourage the production of bread." And I gave Stigler's example of railroads using the state to suppress trucking. Even within the "intelligence community" there are heterogeneous and conflicting interests. The conflict between the FBI and the CIA is something of a cliché (Theoharis 2007). It is also a true and salient example of contending interests within the American deep state. At least one important journalist, Glenn Greenwald, has argued that elements within the FBI were "undermining Hillary Clinton's candidacy in several different ways," whereas the CIA was "very strongly behind Hillary Clinton." (See Chambers 2017.) Although the entangled deep state is not a "free-market" phenomenon, powerful corporations are important actors in it. This importance is most
obviously true for large defense contractors seeking government contracts. They make up the “large arms industry” of which Eisenhower warned. It seems worth adding, however, that there seems to be a general problem of corporate power in the United States and globally. And this problem likely contributes to the dangers of the entangled deep state. Liberalism (in the “classical tradition” of David Hume and Adam Smith) supports not business, but competition. And in the market for commodities, as in the market for expert advice, the particular rules of the competitive game matter. It has so far proved impossible, however, to extirpate from public discourse the error that corporate power is somehow “free market” or “laissez faire.” It might thus surprise many intellectuals to learn that Hayek (1960, p. 301) warned of the “arbitrary and politically dangerous powers” of corporations. Even many intellectuals who think of themselves as advocates of “free markets” may find Hayek’s warning and analysis surprising. Hayek bemoaned “the complete separation of management from ownership, the lack of real power of the stockholders, and the tendency of corporations to develop into self-willed and possibly irresponsible empires, aggregates of enormous and largely uncontrollable power” (p. 311). He thought these dangerous tendencies could be reversed by relatively straightforward changes in corporate law. He suggested that shareholders be given a “legally enforceable claim” to their individual shares “in the whole profits of the corporation” (p. 307). He also suggested that corporations be prohibited from holding voting shares of other corporations (pp. 308–11). When one corporation may hold voting shares of another corporation, then a small group “through a pyramiding of holding” may come to have disproportionate power (p. 309). Some readers may object that Hayek, writing in 1960, was not in a position to benefit from the now vast literature on the “market for corporate control,” which began with “Manne’s (1965) seminal article” on the topic (Jensen and Ruback 1983, p. 7). The empirical study of Vitali et al. (2011) suggests, however, that the market for corporate control may be failing for reasons not entirely unlike those Hayek articulated. In a large study of “transnational corporations (TNCs)” they find that “a large portion of control flows to a small tightly-knit core of financial institutions.” They say that “nearly 4/10 of the control over the economic value of TNCs in the world is held, via a complicated web of ownership relations, by a group of 147 TNCs in the core, which has almost full control over itself.” Moreover, “3/4 of the core are financial intermediaries.” A simple numerical example may suggest the nature of the problem identified by Vitali and colleagues. Corporation A may hold 30 percent of
Corporation B and 30 percent of Corporation C. Corporation B may hold 30 percent of Corporation A and 30 percent of Corporation C. And, finally, Corporation C may hold 30 percent of Corporation A and 30 percent of Corporation B. In this imaginary situation, 60 percent of each firm is held by other firms and no external parties can impose market discipline, nor can individual shareholders vote out existing management or otherwise influence the conduct of business in these enterprises. In this simplistic example the market for corporate control has been suspended. The imagined scenario is not meant to be realistic. It is meant only to suggest the nature of the problem more carefully examined by Vitali et al. (2011) and to suggest that many large enterprises are largely or wholly free from the market for corporate control. The entangled deep state is an only partially hidden informal network linking the intelligence community, military, political parties, large corporations including defense contractors, and others. While the interests of participants in the entangled deep state often conflict, members of the deep state share a common interest in maintaining the status quo of the political system independently of democratic processes. Therefore, denizens of the entangled deep state may sometimes have an incentive to act, potentially in secret, to tamp down resistant voices and to weaken forces challenging the political status quo. I borrow the term “entangled” from Wagner (2010) and Smith et al. (2011) to suggest that the American deep state contains rival, combatting interests. Recall from Chapter 5 that Wagner describes “an entangled network of enterprises that are constituted under different institutional arrangements that generate a continually evolving admixture of cooperation and conflict” (2010, p. 160). I use the term “entangled” here to suggest that the distinction between “private” and “public” entities is fluid and that the entangled entities of the American deep state are a “continually evolving admixture of cooperation and conflict.” The entangled deep state is not a coherent cabal. It is an ongoing struggle. Despite the heterogeneity of interests active in the entangled deep state, it seems to have contributed to a loss of liberty. See, for example, Turley (2012) and Greenwald (2011, 2014). I do not know how much of the deep state is secret and how much is hiding in plain sight, but secrecy is essential to it if only because of the important element of “national security.” It is noteworthy that two U.S. Senators have obliquely warned the public that we have secret interpretations of public laws (Ackerman 2011; Villagra 2013). Senator Ron Wyden has said: “We’re getting to a gap between what the public thinks the law
says and what the American government secretly thinks the law says” (Ackerman 2011). Today, the duly elected representatives of the people do not consider themselves to be “the government.” A 2013 exchange between Representative Jerrold Nadler of New York and FBI Director Robert Mueller is informative. McCullagh (2013) reports: Mueller initially sought to downplay concerns about NSA surveillance by claiming that, to listen to a phone call, the government would need to seek “a special, a particularized order from the FISA court directed at that particular phone of that particular individual.” Is information about that procedure “classified in any way?” Nadler asked. “I don’t think so,” Mueller replied. “Then I can say the following,” Nadler said. “We heard precisely the opposite at the briefing the other day. We heard precisely that you could get the specific information from that telephone simply based on an analyst deciding that . . . In other words, what you just said is incorrect. So there’s a conflict.”
Democratically elected Representative Nadler did not feel free to report apparent abuses until an appointed official assured him that the policies in question were not “classified in any way.” Apparently, Nadler did not consider the elected representatives of the people to be “the government.” Instead “the government” is, for Representative Nadler, an ill-defined and undemocratic entity that includes the FBI and, presumably, the rest of the intelligence community. New York Senator Chuck Schumer criticized President Trump for seeming to challenge the autonomy of the intelligence community just prior to his inauguration. In a live, nationally broadcast interview a journalist informed Schumer that Trump had claimed earlier in the day that release of a potentially damaging CIA report had been delayed to give it more time to “build a case.” In response to this revelation, Schumer exclaimed, “Let me tell you: You take on the intelligence community – they have six ways from Sunday at getting back at you. So, even for a practical supposedly hard-nosed businessman, he’s being really dumb to do this.” (Chaitin 2017 reports on this event.) Schumer’s seemingly spontaneous remark suggests that “the government,” in the form of the American intelligence community, somehow exercises control over Senators and Representatives in Congress to preserve its autonomy from the elected representatives of the people. I shall let others address the empirical question whether, as Scott (2007, p. 267) claims, the entangled deep state is entangled with organized crime and whether it “engages in false-flag violence.” My epistemic critique of the entangled deep state is as robust to such considerations as it is to the precise degree of secrecy of its operations. Nor does my critique require me
to view the deep state as serving the interests of any political party or ideology. Indeed, I have tried to emphasize the multiplicity of competing and parochial interests at work in it. Of course, most ideological views expressed in American politics today are at least nominally prodemocracy and thus opposed to deep-state power. But deception, self-deception, and hypocrisy are all well at home in every political party and, as Mandeville noted, most human hearts. Thus, there is little reason to think that, for example, a socialist or a libertarian Senator would be less corrupt than a Democratic or Republican Senator. There is no Grand Conspiracy in the entangled deep state, but there is coercion, secrecy, and intervention. My epistemic criticism of the entangled deep state may seem simple, even obvious, when viewed in the context of the economic theory of experts I have given in this book. The “national security” apparatus of the entangled deep state creates an epistemic monopoly for the “intelligence community.” The deep state has monopsony power in the purchase of expert advice on “security” matters. The more important political actors in the entangled deep state are Big Players to whom others in the system must orient their actions. The role of Big Players is revealed in a news story claiming that ISIS intelligence was “cooked”: Two senior analysts at CENTCOM signed a written complaint sent to the Defense Department inspector general in July alleging that the reports, some of which were briefed to President Obama, portrayed the terror groups as weaker than the analysts believe they are. The reports were changed by CENTCOM higher-ups to adhere to the administration’s public line that the U.S. is winning the battle against ISIS and al Nusra, al Qaeda’s branch in Syria, the analysts claim. (Harris 2015)
The substance of this report seems to be supported by later news reports, including at least one in the New York Times (Cooper 2016). The internal market for expert advice in the entangled deep state is not competitive. There is no free entry. There is limited rivalry. And there is inadequate synecological redundancy. Indeed, the response to 9/11 was in the direction of greater consolidation, which tends only to reduce synecological redundancy. The entangled deep state produces the rule of experts. Experts must often choose for the people because the knowledge on the basis of which choices are made is secret, and the very choice being made may also be a secret involving, supposedly, “national security.” The production of expert opinions on national security is a complex, tightly coupled system, as illustrated by the history of the American intelligence on Iraqi weapons
of mass destruction. The phenomena intelligence experts report on are complex, uncertain, and indeterminate. And there is poor feedback between the global situation and any choices made on the basis of (possibly secret) expert opinions. The “intelligence community” has incentives that are not aligned with the general welfare or with democratic process. There is a problem of incentive alignment. Here, too, the supposed intelligence that Iraq had weapons of mass destruction is a salient illustration. Private interests are entangled in the entangled deep state. We have seen Dwight Eisenhower warn us against precisely such influence. Edward Snowden was an employee of the nominally private military contractor Booz Allen. Large arms makers and other nominally private enterprises in the entangled deep state have interests that are not aligned with the general interest, whether we conceive the general interest globally or in more parochial nationalistic terms. This entanglement is an example of regulatory capture. As I have noted, the theory of regulatory capture does not predict that a solitary winner will emerge and forever retain its preeminence unchallenged. Victory, I said, may be partial and fleeting. It may therefore be unfortunate that the theory uses the word “capture” to describe this ongoing struggle. The term “entangled” may help to suggest the more correct picture of ongoing struggle among contending parties. The existence of a militarized deep state creates the risk that many people who are a part of it may come to have sympathies that are bent toward perceived state or national interests that may not align with the common interest. For such persons, approbation may be sought and given for actions that are contrary to the values of an open society. There is a scholarly literature on the “civil–military gap,” which seems to show that there is a gap between the values and orientation of the American military and the American public. Wrona (2006) warns of “A Dangerous Separation” between the two. Wrona (2006, p. 26) says that in “many cases, the ideological foundations” of “liberal democracies and military organizations” are “fundamentally at odds.” Summarizing the findings of a variety of past researchers, he says that “66 percent of surveyed American military members think that the military has higher moral values than the American civilian population,” that the “overwhelming majority of military officers identify themselves as ideologically conservative,” that “[m]any military officers believe that their role has changed from policy advisor to policy advocate.” Wrona (2006, p. 30) quotes Feaver and Kohn (2001, p. 460): “Military officers express great pessimism about the moral health of civilian society and strongly believe that the military could help society become more moral, and that civilian society
would be better off if it adopted more of the military’s values and behaviors.” Goldich (2011, pp. 68–9) says: “There appears to be a gap – if not a chasm – between an increasingly sensate, amiable, and emotionally narrow civilian world and a flinty, harshly results-oriented, and emotionally extreme military, for career and non-career personnel alike.” This “chasm” is dangerous because the American military “may turn . . . on those whom it is supposed to defend, out of disgust for their failure to step up and contribute either directly or with moral support.” A widely disparaged article in the National Security Law Journal (Bradford 2015) illustrates the civil–military gap in values, sympathy, and approbation. The author does not seem to be a remotely influential thinker even in the most secret chambers of the deep state. The evidence seems to support the view that he is little more than a crank with falsified credentials (Ackerman 2015; Ford 2015). I would not even exclude the possibility that his article was a hoax similar to Sokal (1996). And yet it was published in a law journal devoted to national security. The editor, who boasts on his LinkedIn page (www.linkedin.com/in/ayesnik) of his “[b]road experience in homeland and national security from work at both the Department of Justice and the Department of Homeland Security,” seems to have thought it was plausible that a person in Bradford’s position might hold such views. The author was a member of the “intelligence community” who taught for a time at West Point. Thus, even if the article is ultimately revealed as a hoax, his seeming sympathies illustrate the risk that the social world of the national security expert may generate structures of sympathy and approbation inconsistent with the moral sentiments of a functioning democracy. Bradford was briefly a member of the Trump administration, serving as Director, Office of Indian Energy in the United States Department of Energy. Bradford invokes the Song of Roland, which recounts the tale of the French victory in the Battle of Roncevaux Pass in 778. The French victory over their Muslim enemy was achieved after the hero Roland exhausted his dying energies blowing a horn of warning to alert Charlemagne of the impending danger. The treachery of some Christians in Charlemagne’s army is crucial to Bradford’s purpose in using the tale as a leitmotif: Although the medieval epic blames treacherous Christian nobles – who abused positions of trust to pass military secrets to the Islamic invaders – with the near-defeat of the Frankish army, it hails the sacrifice of Roland – the rear-guard commander whose desperate last stand culminated in a timely trumpeted warning that saved Charlemagne from ambush – as exemplar of the valiant defense of Europe against Islamic dominion. (pp. 280–1)
We are admonished to view certain American law professors as analogous to these “treacherous Christian nobles.” The professors in question “have converted the U.S. legal academy into a cohort whose vituperative pronouncements on the illegality of the U.S. resort to force and subsequent conduct in the war against Islamism – rendered in publications, briefs amicus curiae, and media appearances – are a super-weapon that supports Islamist military operations by loading combat power into a PSYOP campaign against American political will” (p. 300). Forty of the putatively most extreme among these scholars form a “Fifth Column” that must be met with decisive action (p. 302). Bradford gives us a list of increasingly severe “counterattack” measures that might be applied (pp. 443–50). The list culminates not in the call to charge them with treason (pp. 448–9), but in the suggestion that they might be treated as “unlawful combatants.” As such, they could be “targeted at any time and place and captured and detained until termination of hostilities” (p. 450). They would be “subject to coercive interrogation, trial, and imprisonment.” Moreover, “the infrastructure used to create and disseminate” their “propaganda – law school facilities, scholars’ home offices, and media outlets where they give interviews – are also lawful targets given the causal connection between the content disseminated and Islamist crimes incited.” Presumably (if the whole thing is not a hoax), Bradford raises the absurd specter of drone strikes on American law schools and television studios only to make his less extreme proposals, such as the reinstitution of loyalty oaths (pp. 445–6), seem like reasonable political compromises. What matters for my argument here, however, is neither the substance of Bradford’s legal argument nor his sincerity in putting it forward. I wish instead to note the undemocratic structure of sympathy and approbation Bradford represents. His closing paragraph states: The warison sounds; the warning is sent; the assistance of the sacred and the profane is summoned. Whether once again the West will heed the call, march apace against the Islamist invaders, and deliver justice swift and sure to disloyal courtiers abasing it from within, or whether the West has become deaf to the plaintive, fading notes of one encircled knight who long ago called forth its soldiers and calls them yet again, will decide if the Song of Roland remains within the inheritance of future generations of its peoples. If the West will not harken now to Roland and his horn, neither it, nor its peoples, nor the law they revere will outlive the bleak day of desecration when Islamists, wielding their Sword, strike his Song, all it represents, and all it can teach, from history. (p. 461)
Bradford’s seeming sympathies exclude “Islamists” and, one suspects, all Muslims. His sympathies lie not with Americans or Christians, but only
with putatively loyal Americans. Approbation is given principally to those who defend the American state with martial courage and unquestioning loyalty. Whatever Bradford’s true beliefs, these views have been accepted as sincere by the editors of the National Security Law Journal as well as by journalists in the United States and United Kingdom (Ackerman 2015; Ford 2015). They illustrate the risk that the relatively insular work world of national security experts may lead them to a structure of sympathy and approbation that is inconsistent with pluralistic democracy. Democratic values may become literally unimaginable to deep state denizens. In Chapter 11 I discussed how to design incremental improvements to markets for expert opinion. Unfortunately, my scheme for piecemeal institutional reform (which is mostly borrowed from Vernon Smith) does not have an obvious application to the entangled deep state. If my diagnosis of the deep state is at all correct, reform is urgently required. I freely confess, however, that I have no specific ideas on how we might attempt to roll back the deep state with a reasonable prospect of success. In Chapter 1 I warned of the dangers of precipitate change. It is fine to exclaim upon the urgency of reform. It would be much better to have a realistic program for such reform. I regret that I do not.

CLOSING REMARKS
In this volume, I have tried to show that there is a general literature on experts spanning many fields and reaching back at least as far as Socrates. I have attempted to provide a brief general guide to this vast literature. I hope that I have also made progress on an economic theory of experts and expert failure. My theory is based on a radically egalitarian model of knowledge. In this antihierarchical model, knowledge is “SELECT.” It is synecological, evolutionary, exosomatic, constitutive, and tacit. This idea of SELECT knowledge is the key to my efforts in this volume. This view of the production and distribution of knowledge in society conduces to a view of experts very different from that of many scholars working in the area today. And, indeed, my overall perspective seems to differ from the leading views in philosophy, economics, science and technology studies, law, and forensic science. Even thinkers from whom I have drawn much, for example Levy and Peart (2017), Turner (2001), and Goldman (1999), may be less skeptical and more hierarchical than I am. If my epistemics are about right, then the problem of experts mostly boils down to the question of knowledge imposition. Shall we impose a uniform body of knowledge on society? Imposing knowledge from above
ensures expert failure, as illustrated by the failure of Soviet planning. The knowledge sustaining the division of labor in society is inherently synecological. It is therefore not a matter of free choice whether social relations shall be governed by one unitary system of knowledge or by the undirected efforts of diverse persons, each guided by their own knowledge. Any attempt to impose a systematized body of knowledge on the system will fail, and social cooperation will be correspondingly thwarted. If we are to have sustainable social cooperation among vast numbers of strangers, social intercourse cannot be directed by an imposed system of knowledge, however “scientific” we may imagine such knowledge to be. The attempt to impose knowledge on society can be traced at least as far back as the Socratic tradition in philosophy. We should repudiate their exaltation of episteme over doxa, of supposedly grounded “knowledge” over common sense and “mere” opinion. Expert witnesses in British and American law courts have shown a Socratic zeal for imposition. If a body of experts is going to impose its knowledge on society, it must represent its members as morally superior to others. Those seeking power do not and cannot represent themselves as evil (Havel 1978). And we have seen that both Socratic philosophers of the Academy and nineteenth-century “men of science” emphasized their supposed epistemic and moral superiority. American progressives have also attempted to impose knowledge on society through the rule of experts. This effort helped produce the twin evils of the administrative state and the entangled deep state. True to the Socratic pattern, defenders of these twin evils emphasize not only the supposed epistemic superiority of the experts but also their supposed moral superiority. We have seen Wilson give assurances that the empowered civil service he called for would be “cultured and self-sufficient enough to act with a sense of vigor, and yet so intimately connected with the popular thought, by means of elections and constant public counsel, as to find arbitrariness or class spirit quite out of the question” (1887, p. 217). We have seen Flexner (1910, p. 180) emphasize the supposed duty of white experts to cultivate the “mental and moral improvement” of “the negro race.” We have seen Wrona (2006) report that “66 percent of surveyed American military members think that the military has higher moral values than the American civilian population” and that “[m]ilitary officers . . . strongly believe that the military could help society become more moral, and that civilian society would be better off if it adopted more of the military’s values and behaviors” (Wrona 2006, p. 30). One self-declared expert has said without irony: “Experts need to demonstrate that
they are good, honest people who have the public’s best interest at heart” (Shaw 2016). Imposed knowledge cannot grow and change as freely or rapidly as synecological knowledge. In other words, it cannot grow or change as freely or rapidly as the divided knowledge emergent from an ecology of interacting, dispersed, and autonomous knowers. Imposed knowledge easily becomes dogma and thus deeply “unscientific” if, at least, “science” means open inquiry. Thus, apoplectic appeals to “science” in defense of the administrative state are mistaken. We have seen that professional organizations generally attempt to impose uniformity on the opinions expressed by their members. Such uniformity is a necessary consequence of imposing knowledge on society. And it tends to slow the pace of change in any area of knowledge. The anthill problem shows why theorists and other experts may be drawn to knowledge imposition. By imagining themselves above the anthill looking down, they easily forget that they are themselves but another ant in the hill. Their frame of analysis becomes the only frame. “What counts is the point of view from which the scientist envisages the social world,” as we have seen Schutz (1943, p. 145) point out. The knowledge of the other ants, that is, of the expert’s fellow humans, falls mostly out of view and becomes entirely irrelevant. The problem of experts, I have said, is mostly about imposing knowledge on society. I do not wish to suggest, however, that the problem of experts has no real intricacies. On the contrary, the problem suffuses all aspects of human social life, including our most basic understandings of science. It gives rise, therefore, to many opportunities for perplexity. For example, the problem of reflexivity, with its intimate relationship to the anthill problem, is a central issue and yet almost impenetrable. My point is not how simple it all is. It is not so simple. My point about imposition is that we discover the importance of knowledge imposition in social life as soon as we take dispersed knowledge seriously. My theory is scientific. I have attempted to craft theory in such a way that it has empirical content that can be tested with experimental data, as in Koppl et al. (2008) and Cowan and Koppl (2011), and with historical evidence, as in Koppl (2005a) and Koppl (2010b). The problems with testability, including Popperian falsifiability, are well known. A few comments may be in order nevertheless. If we were perfect logicians, we could say that our theory is “true” just because we were careful in our reasoning. The question would then be only when our theory applies. If we keep finding that a theory is not applicable
where we thought at first it likely would be, then we might end up losing interest in the theory. That loss of interest is not a Popperian “falsification.” It is somewhat like an “empirical test,” however, even though we may later find our interest revived by new historical evidence or something else (Lakatos 1970). Pure theory might seem to be “a priori” if we were perfect logicians, with only the application being “empirical.” But we are not perfect logicians. We do not know all of the implicit assumptions we make, and we are not always clear about the steps in our reasoning (Lakatos 1976). Because human reason is fallible we need to test theory with history. (Historical evidence may exist as numbers, texts, images, or something else. Whether statistical analysis of historical evidence is helpful or even possible depends on the case.) But observation is fallible too (Wolpert 2001). Because observation is fallible we need to test history with theory. In this disappointing situation, we cannot hope for falsifications or definitive tests beyond, perhaps, a few relatively rare instances (Quine 1951). We have, instead, a dialogue between theory and history with no guarantee that we will always move closer to the truth. In this perhaps disappointing situation, testability adds value. Theories crafted to be testable are more subject to amendment in the face of experience. We are correspondingly less likely to carry forward distortions, errors, and omissions. Thus, testability retains value in science in spite of the ineradicable ambiguities of testing. I have tried to offer a theory that is no less testable than most theories in the natural and social sciences and, perhaps, more testable than many. My theory is built on a radically egalitarian theory of human knowledge and conduces to a view that sees experts as unreliable and nonexperts as potentially empowered, but only in more or less competitive markets for expert advice. Competition is likely to be beneficial only if the ecology of expertise has rivalry, synecological redundancy, and free entry. These features are easier to identify than institute. But epistemic systems design, which borrows the experimental methods of economic systems design, gives us a way to test our ideas and improve our chances of making incremental institutional changes that improve the epistemic performance of expert markets. This program for piecemeal institutional reform could become yet another opportunity for experts to impose their knowledge on others. I would be distraught by such an outcome. My fondest hope for this volume is that it may help induce the reader to value expertise, but fear expert power.
References
Ackerman, Spencer. (2011). There’s a Secret Patriot Act, Senator Says. Wired, May 25, 2011. Downloaded January 13, 2017 from www.wired.com/2011/05/secret-patriot-act/. (2015). West Point Law Professor Who Called for Attack on “Islamic Holy Sites” Resigns. The Guardian, August 31, 2015. Downloaded January 26, 2017 from www.theguardian.com/us-news/2015/aug/31/west-point-law-professor-william-brad ford-resigns. Adewunmi, Bim. (2017). Flint Isn’t Ready to Trust Anyone Yet. BuzzFeed News, May 24, 2017. Downloaded May 30, 2017 from www.buzzfeed.com/bimadewunmi/ flint-isnt-ready-to-trust-anyone-yet?utm_term=.iaoddoZ4L0#.uj5PPv9yW3. Aiello, Leslie C. and Dunbar, R. I. M. (1992). Neocortex Size, Group Size, and the Evolution of Language. Current Anthropology, 34(2), 184–93. Aitkenhead, Decca. (2016). Prof Brian Cox: “Being anti-expert – that’s the way back to the cave.” The Guardian, July 2, 2016. Downloaded December 27, 2016 from www.theguardian.com/tv-and-radio/2016/jul/02/professor-brian-cox-interviewforces-of-nature. Akerlof, George A. (1970). The Market for “Lemons”: Quality Uncertainty and the Market Mechanism. Quarterly Journal of Economics, 84(3), 488–500. Akerlof, George A. and Kranton, Rachel E. (2000). Identity and Economics. The Quarterly Journal of Economics, 115(3), 715–53. (2002). Identity and Schooling: Some Lessons for the Economics of Education. Journal of Economic Literature, 40(4), 1167–201. (2005). Identity and the Economics of Organizations. Journal of Economic Perspectives, 19(1), 9–32. (2008). Identity, Supervision, and Work Groups. American Economic Review, 98(2), 212–17. Anderson, James. (1973). Ideology in Geography: An Introduction. Antipode, 5(3), 1–6. Anderson, Richard C. and Pichert, James W. (1978). Recall of Previously Unrecallable Information Following a Shift in Perspective. Journal of Verbal Learning and Verbal Behavior, 17, 1–12. Anderson, Richard C., Pichert, James W., and Shirey, Larry L. (1983). Effects of Reader’s Schema at Different Points in Time. Journal of Educational Psychology, 75(2), 271–9.
Arnush, Michael. (2005). Pilgrimage to the Oracle of Apollo at Delphi: Patterns of Public and Private Consultation. In Jas Elsner and Ian Rutherford, eds., Pilgrimage in Graeco-Roman and Early Christian Antiquity: Seeing the Gods. Oxford: Oxford University Press, pp. 97–110. Arthur, W. Brian. (1994). Inductive Behaviour and Bounded Rationality. American Economic Review, 84, 406–11. Arthur, W. Brian, Durlauf, Steven N., and Lane, David A. (1997). Introduction. In W. B. Arthur, S. Durlauf, and D. Lane, eds., The Economy as an Evolving Complex System II. Boston: Pearson Education, pp. 1–14. Arthur, W. B., Holland, J., Le Baron, B., Palmer, R., and Taylor, P. (1997). Asset Pricing under Endogenous Expectations in an Artificial Stock Market. In W. B. Arthur, S. Durlauf, and D. Lane, eds., The Economy as an Evolving Complex System II. Boston: Pearson Education, pp. 15–44. Asch, S. E. (1951). Effects of Group Pressure upon the Modification and Distortion of Judgement. In H. Guetzkow, ed., Groups, Leadership and Men. Pittsburgh: Carnegie Press, pp. 177–90. Bain, Joe S. (1956). Barriers to New Competition. Cambridge: Harvard University Press. Barber, Michael D. (2004). The Participating Citizen: A Biography of Alfred Schutz. Albany: State University of New York Press. Barry, Andrew and Slater, Don. (2002). Technology, Politics and the Market: An Interview with Michel Callon. Economy and Society, 31(2), 285–306. Bartley, W. W. III. (1987). Alienation Alienated: The Economics of Knowledge versus the Psychology and Sociology of Knowledge. In G. Radnitzsky and W. W. Bartley III, eds., Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. La Salle: Open Court, pp. 423–51. Bastiat, Frederic. (1845) [1851/1854–1864/1964]. Economic Sophisms. Translated by Arthur Goddard. Irvington-on-Hudson: The Foundation for Economic Education, Inc. Bator, Francis M. (1958). The Anatomy of Market Failure. Quarterly Journal of Economics, 72, 351–79. Baumol, William J. (1982). Contestable Markets: An Uprising in the Theory of Industrial Structure. American Economic Review, 72(1), 1–15. Baumol, William J., Panzar, John C., and Willig, Robert D. (1982). Contestable Markets and the Theory of Industry Structure. San Diego: Harcourt Brace Jovanovich. Berger, Peter. (2016). In the Vortex of the Migration Crisis. The American Interest, May 18, 2016. Downloaded June 28, 2016 from www.the-american-interest.com/2016/ 05/18/in-the-vortex-of-the-migration-crisis/. Berger, Peter and Luckmann, Thomas. (1966). The Social Construction of Reality. New York: Anchor Books. Berger, Vance, Rosser Matthews, J., and Grosch, Eric N. (2008). On Improving Research Methodology in Clinical Trials. Statistical Methods in Medical Research, 17(3), 231–42. Besley, T. and Hennessy, P. (2009). Letter to Queen Elizabeth. July 22, 2009. Downloaded January 11, 2016 from www.feed-charity.org/user/image/besley-hennessy2009a.pdf. Bhaskar, Roy. (1991). Knowledge, Theory of. In Tom Bottomore, Lawrence Harris, V. G. Kiernan, and Ralph Miliband, eds., A Dictionary of Marxist Thought, 2nd edn. Oxford and Malden, MA: Blackwell, pp. 285–94.
Bickerton, Christopher and Accetti, Carlo Invernizzi. (2015). Populism and Technocracy: Opposites or Complements? Critical Review of International Social and Political Philosophy, 20(2), 186–206. Biscoe, Peter. (2007). Expert Witnesses: Recent Developments in NSW. Australian Construction Law Newsletter, #114, 38–41. Bloor, David. (1976). Knowledge and Social Imagery. London: Routledge & Kegan Paul. Blount, Zachary D., Borland, Christina Z., and Lenski, Richard E. (2008). Historical Contingency and the Evolution of a Key Innovation in an Experimental Population of Escherichia coli. Proceedings of the National Academy of Sciences, 105(23), 7899–906. BLS, Bureau of Labor Statistics. (2015). Occupational Outlook Handbook 2014–15 Edition, www.bls.gov/ooh/a-z-index.htm. Boettke, Peter. (1998). Economic Calculation: The Austrian Contribution to Political Economy. Advances in Austrian Economics, 5, 131–58. (2001). Calculation and Coordination: Essays on Socialism and Transitional Political Economy. London and New York: Routledge. (2012). Living Economics: Yesterday, Today, and Tomorrow. Oakland: The Independent Institute. Böhm-Bawerk, Eugen v. (1888) [1891/1930]. The Positive Theory of Capital. Translated by William Smart. New York: G. E. Stechert & Co. Bonar, James. (1894). A Catalogue of the Library of Adam Smith. London and New York: Macmillan and Co. Boulding, Kenneth. (1964). The Meaning of the Twentieth Century. London: George Allen and Unwin, Ltd. Boyte, Harry C. (2012). Populism – Bringing Culture Back In. The Good Society, 21(2), 300–19. Bradford, William C. (2015). Trahison des Professeurs: The Critical Law of Armed Conflict Academy as an Islamist Fifth Column. National Security Journal, 3(2), 278–461. Branchi, Andrea. (2004). Introduzione a Mandeville. Rome and Bari: Editori Laterza, ebook. Britton, Roswell S. (1933). The Chinese Periodical Press, 1800–1912. Shanghai: Kelly and Walsh. (1934). Chinese News Interests. Pacific Affairs, 7(2), 181–93. Broad, William J. (2006). The Oracle: Ancient Delphi and the Science behind Its Lost Secrets. New York: The Penguin Press. Brock, William A. and Hommes, Cars H. (1997). A Rational Route to Randomness. Econometrica, 65(5), 1059–95. Browne, W. A. F. (1854). Treatment of Medical Witnesses in Courts of Law in America. Association Medical Journal, 2(63), 243. Buchanan, James M. (1959). Positive Economics, Welfare Economics, and Political Economy. The Journal of Law & Economics, 2, 124–38. Buchanan, J. M. (1982). Order Defined in the Process of Its Emergence. Literature of Liberty 5. Downloaded June 19, 2010 from http://oll.libertyfund.org/?option= com_content&task=view&id=163&Itemid=282. Buchanan, James M. (2003). Public Choice: The Origins and Development of a Research Program, Fairfax, VA: Center for the Study of Public Choice. Downloaded May 18, 2007 from www.gmu.edu/centers/publicchoice/pdf%20links/Booklet.pdf.
Buchanan, James M. and Tullock, Gordon. (1962) [1999]. The Calculus of Consent: The Logical Foundations of Constitutional Democracy, volume 3 of The Collected Works of James M. Buchanan. Indianapolis: Liberty Fund. Buehler, Hannah. (2017). Family Court Judge Denies Homeschool Mom’s Custody Request. WKBW Buffalo, February 9, 2017. Downloaded February 10, 2017 from www.wkbw.com/news/family-court-judge-denies-homeschool-moms-custodyrequest. Burney, Ian A. (1999). A Poisoning of No Substance: The Trials of Medico-Legal Proof in Mid-Victorian England. Journal of British Studies, 38(1), 59–92. Burr, Vivien. (1995). An Introduction to Social Constructionism. London and New York: Routledge. Burrill, Thomas J. (1890). Proceedings of the American Society of Microscopists. Minutes of the Thirteenth Annual Meeting. Proceedings of the American Society of Microscopists, 12, 208–52. Butos, William and Koppl, Roger. (2003). Science as a Spontaneous Order: An Essay in the Economics of Science. In H. S. Jensen, M. Vendeloe, and L. Richter, eds. The Evolution of Scientific Knowledge. Cheltenham: Edward Elgar, pp. 164–88. Butos, William N. and McQuade, Thomas J. (2015). Causes and Consequences of the Climate Science Boom. The Independent Review, 20(2), 165–96. Callon, Michel. (1998). Introduction: The Embeddedness of Economic Markets in Economics. The Sociological Review, 46(1), 1–57. Campbell, Donald. (1987). Evolutionary Epistemology. In Gerard Radnitzky and W. W. Bartley, III., eds., Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Chicago and La Salle, Illinois: Open Court, pp. 47–89. Canning, D. (1992). Rationality, Computability and Nash Equilibrium. Econometrica, 60(4), 877–88. Chaitin, Daniel. (2017). Schumpeter Warns Trump: Intel Officials “Have Six Ways from Sunday at Getting Back at You,” Washington Examiner, January 3, 2017. Downloaded February 5, 2017 from www.washingtonexaminer.com/schumerwarns-trump-intel-officials-have-six-ways-from-sunday-at-getting-back-at-you/ article/2610823. Chaitin, G., da Costa, N., and Doria, F. A. (2012). Gödel’s Way: Exploits into an Undecidable World. Leiden: CRC Press. Chambers, Francesca. (2017). Is Intelligence Community Plotting Revenge on Trump for Attacking It? High-Profile Critic of Spies Says There’s “Open Warfare” between the CIA and President-Elect, Daily Mail, January 13, 2017. Downloaded February 5, 2017 from www.dailymail.co.uk/news/article-4117364/Is-intelligencecommunity-plotting-revenge-Trump-attacking-High-profile-critic-spies-says-s-openwarfare-agents-president-elect.html. Chen, Stephanie. (2009). Pennsylvania Rocked by “Jailing Kids for Cash” Scandal. CNN, February 24, 2009. Downloaded November 30, 2016 from www.cnn.com/2009/ CRIME/02/23/pennsylvania.corrupt.judges/. Cheng, Edward K. (2006). Same Old, Same Old: Scientific Evidence Past and Present. Michigan Law Review, 104(6), 1387–402. Chroust, Anton-Hermann. (1967). Plato’s Academy: The First Organized School of Political Science in Antiquity. The Review of Politics, 29(1), 25–40.
(1973). Aristotle: New Light on Some of His Lost Works. Notre Dame, IN: University of Notre Dame Press. Volume I: Some Novel Interpretations of the Man and His Life. Volume II: Observations on Some of Aristotle’s Lost Works. Clark, Andy and Chalmers, David. (1998). The Extended Mind. Analysis, 58(1), 7–19. Claybrook, Joan. (2007). Crash Test Dummies. New York Times, January 28, 2007. Downloaded July 16, 2016 from www.nytimes.com/2007/01/28/opinion/28claybrook.html? _r=2&n=Top%2fOpinion%2fEditorials%20and%20Op-Ed%2fOp-Ed%2fContri butors&oref=s. Coase, Ronald H. (1974). The Market for Goods and the Market for Ideas. The American Economic Review. 64(2), 384–91. Cockshott, Paul, Mackenzie, Lewis, and Michaelson, Greg. (2008). Physical Constraints on Hypercomputation. Theoretical Computer Science, 394, 159–74. Coclanis, Peter A. (2013). Terror in Burma: Buddhists vs. Muslims. World Affairs, 176 (4), 25–33. Colander, David and Kupers, Roland. (2014). Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up. Princeton and Oxford: Princeton University Press. Cole, Simon. (2005). More than Zero: Accounting for Error in Latent Fingerprint Identification. The Journal of Criminal Law & Criminology, 95(3), 985–1078. (2010). Acculturating Forensic Science: What Is “Scientific Culture,” and How Can Forensic Science Adopt It? Fordham Urban Law Journal, 38(2), 435–72. Cole, Simon A. (2012). Reply, Defending a Knowledge Hierarchy in Forensic Science. Fordham Urban Law Journal, City Square, 39, 97–104. Cole, Simon A. and Thompson. William C. (2013). Forensic Science and Wrongful Convictions. In C. Ronald Huf and Martin Killias, eds., Wrongful Convictions & Miscarriages of Justice: Causes and Remedies in North American and European Criminal Justice Systems. New York and London: Routledge, pp. 111–36. Coley, Noel G. (1991). Alfred Swaine Taylor, MD, FRS (1806–1880): Forensic Toxicologist. Medical History, 35, 409–27. Collins, H. M. and Evans, Robert. (2002). The Third Wave of Science Studies: Studies of Expertise and Experience. Social Studies of Science, 32(2), 235–96. (2003). King Canute Meets the Beach Boys: Responses to the Third Wave. Social Studies of Science 33(3), 435–52. Congleton, Roger. (2003). The Future of Public Choice. Public Choice Studies, 40, 5–23. Downloaded September 2, 2010 from http://rdc1.net/forthcoming/FutofPC3.pdf. Conrad, Peter. (2007). The Medicalization of Society: On the Transformation of Human Conditions into Treatable Disorders. Baltimore: The Johns Hopkins University Press. Cook, Harold J. (1994). Good Advice and Little Medicine: The Professional Authority of Early Modern English Physicians. Journal of British Studies, 33(1), 1–31. Cooper, Helene. (2016). Military Officials Distorted ISIS Intelligence, Congressional Panel Says. New York Times, August 11, 2016. Downloaded March 5, 2017 from www.nytimes.com/2016/08/12/us/politics/isis-centcom-intelligence.html?_r=0. Cosmides, Leda, Tooby, John, and Barkow, Jerome, eds. (1992). The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York and Oxford: Oxford University Press.
Cowan, E. James. (2012). Using Organizational Economics to Engage Cultural Key Masters in Creating Change in Forensic Science Administration to Minimize Bias and Errors. Journal of Institutional Economics, 8(1), 93–117. Cowan, E. James and Koppl, Roger. (2010). An Economic Perspective on Unanalyzed Evidence in Law-Enforcement Agencies. Criminology & Public Policy, 9(2), 409–17. (2011). An Experimental Study of Blind Proficiency Tests in Forensic Science. Review of Austrian Economics, 24(3), 251–71. Coyne, Christopher. (2008). After War: The Political Economy of Exporting Democracy. Stanford: Stanford University Press. Coyne, Christopher and Coyne, Rachel L. eds. (2015). Flaws & Ceilings: Price Controls and the Damage They Cause. London: Institute of Economic Affairs. D’Agostino, F. (2009). From the Organization to the Division of Cognitive Labor. Politics, Philosophy & Economics, 8, 101–29. da Costa, N. C. A. and Doria, F. A. (2009). How to Build a Hypercomputer. Applied Mathematics and Computation, 215, 1361–7. Daniel, James and Polansky, Ronald. (1979). The Tale of the Delphic Oracle in Plato’s Apology. The Ancient World, 2(3), 83–5. Darby, Michael R. and Karni, Edi. (1973) Free Competition and the Optimal Amount of Fraud. Journal of Law & Economics, 16(1), 67–88. Davenport-Hines, Richard. (2009). “Palmer, William [the Rugeley Poisoner] (1824–1856). In H. C. G. Matthew and Brian Harrison, eds., Oxford Dictionary of National Biography, Oxford: Oxford University Press, 2004; online edn., ed. Lawrence Goldman, May 2009, www.oxforddnb.com.libezproxy2.syr.edu/view/art icle/21222 (accessed April 5, 2016). Davies, William. (1856). Presidential Address to the Provincial Medical and Surgical Association (Later the British Medical Association) as reported in “Association Intelligence.” Association Medical Journal, 4(185), 609–13. de la Croix, David and Gosseries, Axel. (2009). Population Policy through Tradable Procreation Entitlements. International Economic Review, 50(2), 507–42. de la Torre, Carlos. (2013). Technocratic Populism in Ecuador. Journal of Democracy, 24(3), 33–46. Debreu, Gerard. (1959). Theory of Value. New Haven and London: Yale University Press. Defoe, Daniel. (1722). A Journal of the Plague Year. London: E. Nutt. Undated facsimile “reproduced from the copy in the Henry E. Huntington Library.” Demsetz, Harold. (1969). Information and Efficiency: Another Viewpoint. Journal of Law and Economics, 12(1), 1–22. de Roover, Raymond. (1955). New Perspectives on the History of Accounting. The Accounting Review, 30(3), 405–20. Devins, Caryn, Koppl, Roger, Kauffman, Stuart, and Felin, Teppo. (2015). Against Design. Arizona State Law Journal, 47(3), 609–81. (2016). Still against Design: A Response to Steven Calabresi, Sanford Levinson, and Vernon Smith. Arizona State Law Journal, 48(1), 241–8. Diamond, A. M. Jr. (1988). Science as a Rational Enterprise. Theory and Decision, 24, 147–67. Dicey, A. V. (1982). Introduction to the Study of the Law of the Constitution, Indianapolis, IN: Liberty Classics. (This volume is a reprint of the 8th edition of 1915.)
Dillon, Millicent and Foucault, Michel. (1980). Conversation with Michel Foucault. The Threepenny Review, 1, 4–5. Drizin, Steen A. and Leo, Richard A. (2004). The Problem of False Confessions in the Post-DNA World. North Carolina Law Review, 82, 891–1004. Dror, Itiel E. and Charlton, David. (2006). Why Experts Make Errors. Journal of Forensic Identification, 56(4), 600–16. Dror, Itiel E., Charlton, David, and Peron, Ailsa. (2006). Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications. Forensic Science International, 156, 74–8. Dulleck, Uwe and Kerschbamer, Rudolf. (2010). On Doctors, Mechanics, and Computer Specialists: The Economics of Credence Goods. Journal of Economic Literature, 44, 5–42. Duncan, James and Ley, David. (1982). Structural Marxism and Human Geography: A Critical Assessment. Annals of the Association of American Geographers, 72(1), 30–59. Durant, Darrin. (2011). Models of Democracy in Social Studies of Science. Social Studies of Science, 41(5), 691–714. Earl, P. E. and Potts, J. (2004). The Market for Preferences. Cambridge Journal of Economics, 28, 619–33. Easterly, William. (2013). The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor. New York: Basic Books. (2016). Democracy Is Dying as Technocrats Watch. Foreign Affairs, December 23, 2016. Downloaded December 24, 2016 from https://foreignpolicy.com/2016/12/ 23/democracy-is-dying-as-technocrats-watch. Edelman, Gerald M. and Gally, Joseph A. (2001). Degeneracy and Complexity in Biological Systems. Proceedings of the National Academy of Sciences, 98(24), 13763–8. Edmond, Gary. (2009). Merton and the Hot Tub: Scientific Conventions and Expert Evidence in Australian Civil Procedure. Law and Contemporary Problems, 72, 159–89. Edwards, James Don. (1960). Early Bookkeeping and Its Development into Accounting. The Business History Review, 34(4), 446–58. Edwards, Thomas R. (1964). Mandeville’s Moral Prose. ELH, 31(2), 195–212. Eisenhower, Dwight D. (1961). Farewell Radio and Television Address to the American People. January 17, 1961. In Public Papers of the Presidents of the United States, Dwight D. Eisenhower 1960-61, Washington DC: Office of the Federal Register, National Archives and Records Service, General Services Administration, pp. 1035–40. Eliasberg, W. (1945). Opposing Expert Testimony. Journal of Criminal Law and Criminology, 36(4), 231–42. Ellis, Lee. (2008). Reducing Crime Evolutionarily. In Joshua D. Duntley and Todd K. Shackelford, eds., Evolutionary Forensic Psychology: Darwinian Foundations of Crime and Law. Oxford and New York: Oxford University Press, pp. 249–67. Emons, Winand. (1997). Credence Goods and Fraudulent Experts. RAND Journal of Economics, 28(1), 107–19. (2001). Credence Goods Monopolists. International Journal of Industrial Organization, 19(3–4), 375–89.
Epstein, Richard. (2008). Why the Modern Administrative State Is Inconsistent with the Rule of Law. NYU Journal of Law and Liberty, 3, 491–515. Fairbanks, Arthur. (1906). Herodotus and the Oracle at Delphi. The Classical Journal, 1 (2), 37–48. Fallon Jr., R. H. (1997). “The Rule of Law” as a Concept in Constitutional Discourse. Columbia Law Review, 97(1), 1–56. FBI Director. (2002). An Audit of Houston Police Department Crime Laboratory-DNA/ Serology Section, December 12–13. Feaver, Peter D. and Kohn, Richard H. (2001). Conclusion: The Gap and What it Means for American National Security. In Peter D. Feaver and Richard H. Kohn, eds., Soldiers and Civilians: The Civil-Military Gap and American National Security. Cambridge: MIT Press, pp. 459–73. Feigenbaum, Susan and Levy, David M. (1993) The Market for (Ir)reproducible Econometrics. Social Epistemology, 7, 215–32. (1996) The Technical Obsolescence of Scientific Fraud. Rationality and Society, 8, 261–76. Felin, Teppo, Kauffman, Stuart, Koppl, Roger, and Longo, Giuseppe. (2014). Economic Opportunity and Evolution: Beyond Bounded Rationality and Phase Space. Strategic Entrepreneurship Journal, 8, 269–82. Felin, Teppo, Koenderink, Jan, and Krueger, Joachim I. (2017). The All-Seeing Eye, Perception and Rationality. Psychonomic Bulletin & Review, 24, 1040–59. Ferguson, A. (1767). An Essay on the History of Civil Society. London: A. Millar & T. Caddel; Edinburg: A. Kincaid & J. Bell. Ferraro, Paul J. and Taylor, Laura O. (2005). Do Economists Recognize an Opportunity Cost When They See One? A Dismal Performance from the Dismal Science. Contributions to Economic Analysis & Policy, 4(1): Article 7. Downloaded September 3, 2010 from www.bepress.com/bejeap/contributions/vol4/iss1/art7. Feynman, Richard. (1974). Cargo Cult Science. Engineering and Science, 37(7), 10–13. Filonik, Jakub. (2013). Athenian Impiety Trials: A Reappraisal.” Dike. Rivista di storia del diritto greco ed ellenistico, 16, 11–96. Fisher, R. A. (1936). Has Mendel’s Work been Rediscovered? Annals of Science, 1, 115–37. Flexner, Abraham. (1910). Medical Education in the United States and Canada. New York: Carnegie Foundation for the Advancement of Teaching. Flint Water Advisory Task Force. 2016. Final Report. Downloaded April 22, 2017 from www.michigan.gov/documents/snyder/FWATF_FINAL_REPORT_21March2016_ 517805_7.pdf. Fogel, Robert William and Engerman, Stanley L. (1974). Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown and Company. Fontenrose, Joseph. (1978). The Delphic Oracle: Its Responses and Operations with a Catalogue of Responses. Berkeley: University of California Press. Ford, Matt. (2015). The West Point Professor Who Contemplated a Coup. The Atlantic, August 31, 2015. Downloaded January 26, 2017 from www.theatlantic.com/polit ics/archive/2015/08/west-point-william-bradford/403009/. Foster, William L. (1897). Expert Testimony, Prevalent Complaints and Proposed Remedies. Harvard Law Review, 11(3), 169–86.
Foucault, Michel. (1972). “History, Discourse and Discontinuity. Translated by Anthony M. Nazzaro. Salmagundi, 20, 225–48. (1980). Power/Knowledge: Selected Interviews and Other Writings, (1972–1977). New York: Pantheon. (1982). The Subject and Power. Critical Inquiry, 8(4), 777–95. Fox, Renée. (1978). Why Belgium? European Journal of Sociology, 19(2), 205–28. Frank, Robert H. and Bernanke, Ben S. (2001). Principles of Microeconomics. New York: McGraw-Hill/Irwin. Franklin, Allan. (2008). The Mendel-Fisher Controversy: An Overview. In Allan Franklin, A. W. F. Edwards, Daniel J. Fairbanks, Daniel L. Hartl, and Teddy Seidenfeld, eds., Ending the Mendel-Fisher Controversy. Pittsburgh: University of Pittsburgh Press, pp. 1–77. Franklin, Benjamin. (1793) [1996]. The Autobiography of Benjamin Franklin. Mineola, NY: Dover Publications, Inc. Frazer, Persifor. (1894). Bibliotics: Or, The Study of Documents; Determination of the Individual Character of Handwriting and Detection of Fraud and Forgery, New Methods of Research. 3rd edn. Philadelphia: J. B. Lippincott Company. (1901). A Manual of the Study of Documents to Establish the Individual Character of Handwriting and to Detect Fraud and Forgery, including Several New Methods of Research. 3rd edn. Philadelphia: J. B. Lippincott Company. (1902). Expert Testimony: Its Abuses and Uses. The American Law Register, 50(2), 87–96. (1907). Scientific Methods in the Study of Handwriting. Journal of the Franklin Institute, 163(4), 245–75. Friedersdorf, Conor. (2014). This Widow’s 4 Kids Were Taken after She Left Them Home Alone. The Atlantic, July 16, 2014. Downloaded November 29, 2016 from www.theatlantic.com/national/archive/2014/07/this-widows-4-kids-were-takenbecause-she-left-them-home-alone/374514/. Friedman, Milton. (1962). Capitalism and Freedom. Chicago: The University of Chicago Press. Froeb, Luke M. and Kobayashi, Bruce H. (1996). Naive, Biased, yet Bayesian: Can Juries Interpret Selectively Produced Evidence? Journal of Law, Economics, and Organization, 12, 257–76. Frohlich, N. and Oppenheimer, J. A. (2006). Skating on Thin Ice: Cracks in the Public Choice Foundation. Journal of Theoretical Politics, 18(3), 235–66. Front National. (2016). Europe: Une Europe au service des peuples libres. Downloaded December 27, 2016 from www.frontnational.com/le-projet-de-marine-le-pen/poli tique-etrangere/europe/. Galton, David J. (1998). Greek Theories on Eugenics. Journal of Medical Ethics, 24, 263–7. Galton, Francis. (1904). Eugenics: Its Definition, Scope, and Aims. American Journal of Sociology, 10(1), 1–6. Garrett, Brandon L. (2010). The Substance of False Convictions. Stanford Law Review, 62, 1051–119. Gatewood, John B. (1983). Loose Talk: Linguistic Competence and Recognition Ability. American Anthropologist, 85(2), 378–87.
George, Henry. (1898). The Science of Political Economy. New York: Doubleday & McClure Co. Giannelli, Paul C. (1997). The Abuse of Evidence in Criminal Cases: The Need for Independent Crime Laboratories. Virginia Journal of Social Policy & the Law, 4, 439–78. Gökalp, Deniz and Ünsar, Seda. (2008). From the Myth of European Union Accession to Disillusion: Implications for Religious and Ethnic Politicization in Turkey. Middle East Journal, 62(1), 93–116. Golan, Tal. (1999). The History of Scientific Expert Testimony in the English Courtroom. Science in Context, 12(1), 7–32. (2004). Laws of Men and Laws of Nature: The History of Scientific Expert Testimony in England and America. Cambridge, MA: Harvard University Press. Goldman, A. I. (1999). Knowledge in a Social World. Oxford: Oxford University Press. Goldman, Alvin. (2001). Experts: Which Ones Should You Trust? Philosophy and Phenomenological Research, 63(1), 85–110. Goldman, Alvin. (2009). Social Epistemology. In The Stanford Encyclopedia of Philosophy (Fall 2009 edn.), ed. Edward N. Zalta. Downloaded May 31, 2010 from http://plato.stanford.edu/archives/fall2009/entries/epistemology-social/. (2010). Systems-Oriented Social Epistemology. In T. Gendler and J. Hawthorne, eds., Oxford Studies in Epistemology, vol. 3. Oxford: Oxford University Press, pp. 189–214. Goldman, Alvin I. and Cox, James C. (1996). Speech, Truth, and the Free Market for Ideas. Legal Theory, 2, 1–32. Goldich, Robert L. (2011). American Military Culture from Colony to Empire. Daedalus, 140(3), 58–74. Goodale, Melvyn A. and Milner, A. David. (1995). Separate Visual Pathways for Perception and Action. Trends in Neuroscience, 15, 20–5. Goodall, Jane. (1964). Tool-Using and Aimed Throwing in a Community of Free-Living Chimpanzees. Nature, 201(4926), 1264–6. Goodwin, William W. (1878). Plutarch’s Morals Translated from the Greek by Several Hands, vol. V. Boston: Little, Brown, and Company. Gordon, Scott. (1976). The New Contractarians. Journal of Political Economy, 84(3), 573–90. Grann, D. (2009). Trial by Fire. The New Yorker, September 7, 2009. Downloaded September 7, 2009 from www.newyorker.com/reporting/2009/09/07/090907fa_fact_grann?currentPage=all. Greenspan, Alan. (2008). “Testimony of Dr. Alan Greenspan.” Prepared for Committee of Government Oversight and Reform. October 23. Downloaded November 4, 2009 from www.gpo.gov/fdsys/pkg/CHRG-110hhrg55764/html/CHRG-110hhrg55764.htm. Greenwald, Glenn. (2011). With Liberty and Justice for Some: How the Law Is Used to Destroy Equality and Protect the Powerful. New York: Metropolitan Books. (2014). No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State. New York: Metropolitan Books. Grillo, Giuseppe Piero “Beppe.” (2013). “Gli ‘esperti’ e la piattaforma M5S.” Il Blog di Beppe Grillo, March 1, 2013. Downloaded December 27, 2016 from www.beppegrillo.it/2013/03/gli_esperti_e_l.html. Grim, R. (2013). Police Groups Furiously Protest Eric Holder’s Marijuana Policy Announcement. Huffington Post, August 30, 2013. Downloaded September 2,
2013 from www.huffingtonpost.com/2013/08/30/police-eric-holder-marijuana-_n_3846518.html?utm_hp_ref=mostpopular. Habermas, Jürgen. (1985) [1987]. The Theory of Communicative Action, vol. 2. Boston: Beacon. Hall, Stuart. (1979). The Great Moving Right Show. Marxism Today, January, 14–20. Hand, Learned. (1901). Historical and Practical Considerations regarding Expert Testimony. Harvard Law Review, 15(1), 40–58. Harris, Shane. (2015). Exclusive: 50 Spies Say ISIS Intelligence Was Cooked. The Daily Beast, September 15, 2015. Downloaded March 5, 2017 from www.thedailybeast.com/articles/2015/09/09/exclusive-50-spies-say-isis-intelligence-wascooked.html. Harth, Phillip. (1969). The Satiric Purpose of the Fable of the Bees. Eighteenth-Century Studies, 2(4), 321–40. Haupt, Claudia E. (2016). Professional Speech. The Yale Law Journal, 125(5), 1238–1303. Hausman, Daniel M. and McPherson, Michael S. (1993). Taking Ethics Seriously: Economics and Contemporary Moral Philosophy. Journal of Economic Literature, 31(2), 671–731. Havel, Vaclav. (1978) [1987]. The Power of the Powerless. In Vladislav, Jan, ed., Living in Truth: 22 Essays Published on the Occasion of the Award of the Erasmus Prize to Vaclav Havel. London: Faber & Faber, pp. 36–122. Hayek, F. A. (ed.). (1935). Collectivist Economic Planning. London: George Routledge & Sons, Ltd. (1937) [1948]. Economics and Knowledge. In F. A. Hayek, ed., Individualism and Economic Order. Chicago: The University of Chicago Press, pp. 33–56. Hayek, Friedrich A. (1944). The Road to Serfdom. Chicago: University of Chicago Press. (1945) [1948]. The Use of Knowledge in Society. In F. A. Hayek, ed., Individualism and Economic Order. Chicago: The University of Chicago Press, pp. 77–91. (1952a). The Sensory Order. Chicago: University of Chicago Press. (1952b). The Counter Revolution of Science: Studies in the Abuse of Reason. Chicago: University of Chicago Press. (1960) [1967]. The Corporation in a Democratic Society: In Whose Interest Ought It to and Will It Be Run? In F. A. Hayek, ed., Studies in Philosophy, Politics, and Economics. Chicago: University of Chicago Press, pp. 300–12. (1967a). The Results of Human Action but Not of Human Design. In F. A. Hayek, ed., Studies in Philosophy, Politics, and Economics. Chicago: University of Chicago Press, pp. 96–105. (1967b). Studies in Philosophy, Politics and Economics. London: Routledge & Kegan Paul. (1973). Law, Legislation and Liberty, Volume I: Rules and Order. Chicago: University of Chicago Press. (1978). Dr. Bernard Mandeville. In F. A. Hayek, ed., New Studies in Philosophy, Politics, Economics and the History of Ideas. Chicago: University of Chicago Press, pp. 249–66. (1989). The Pretence of Knowledge. American Economic Review, 79(6), 3–7. (1994). Hayek on Hayek: An Autobiographical Dialogue. Chicago: The University of Chicago Press.
Heiner, Ronald A. (1983). The Origin of Predictable Behavior. American Economic Review, 73, 560–95. Heiner, Ronald. A. (1986). Uncertainty, Signal-Detection Experiments, and Modeling Behavior. In R. N. Langlois, ed., Economics as a Process: Essays in the New Institutional Economics. New York: Cambridge University Press, pp. 59–115. Henderson, Charles R. (1900). Science in Philanthropy. Atlantic Monthly, 85(508), 249–54. Henrich, Joseph, Boyd, Robert, Bowles, Samuel, Camerer, Colin, Fehr, Ernst, Gintis, Herbert, McElreath, Richard, Alvard, Michael, Barr, Abigail, Ensminger, Jean, Smith Henrich, Natalie, Hill, Kim, Gil-White, Francisco, Gurven, Michael, Marlowe, Frank W., Patton, John Q., and Tracer, David. (2005). “Economic Man” in Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies. Behavior and Brain Sciences, 28, 795–855. Henrich, Joseph, Heine, Steven J., and Norenzayan, Ara. (2010). The Weirdest People in the World? Behavioral and Brain Science, 33, 61–135. Hewlett, A. W. (1913). Clinical Effects of “Natural” and “Synthetic” Sodium Salicylate. Journal of the American Medical Association, 61(5), 319–21. Hickey, Colin, Rieder, Travis N., and Earl, Jake. (2016). Population Engineering and the Fight against Climate Change. Social Theory and Practice, 42(4), 845–70. Hirschman, Albert O. (1977). The Passions and the Interests: Political Arguments for Capitalism before Its Triumph. Princeton: Princeton University Press. Holbaiter, Catherine and Byrne, Richard W. (2014). The Meanings of Chimpanzee Gestures. Current Biology, 24(14), 1596–600. Holcombe, Randall. (2006). Leland Yeager’s Utilitarianism as a Guide to Public Policy. In Roger Koppl, ed., Money and Markets: Essays in Honor of Leland B. Yeager. London and New York: Routledge, pp. 207–20. Horwitz, Steven. (1992). Monetary Exchange as an Extra-Linguistic Social Communication Process. Review of Social Economy, 50(2), 193–6. (2012). Expertise and the Conduct of Monetary Policy. In Roger Koppl, Steven Horwitz, and Laurent Dobuzinskis, eds., Experts and Epistemic Monopolies, Advances in Austrian Economics, vol. 17. Bingley, UK: JAI Press, pp. 61–80. Huber, Peter W. (1993). Galileo’s Revenge. New York: Basic Books. Hudson, Alexandra. (2017). “Families Sue NYC For Reporting Them To Child Services When They Homeschool.” The Federalist, February 4, 2017. Downloaded February 26, 2017 from http://thefederalist.com/2017/02/04/families-sue-nyc-reportingchild-services-homeschool/. Hudson, Anne and Kenny, Anthony. (2004). Wyclif, John (d. 1384). In H. C. G Matthew and Brian Harrison, eds., Oxford Dictionary of National Biography, Oxford: Oxford University Press; online edn. September 2010. Downloaded July 15, 2016 from www.oxforddnb.com.libezproxy2.syr.edu/view/article/30122. Hume, David. (1778) [1983]. The History of England from the Invasion of Julius Caesar to the Revolution in 1688. Indianapolis, IN: Liberty Fund. Hutchins, Edwin. (1990). The Technology of Team Navigation. In J. Galegher, R. Kraut, and C. Egido, eds., Intellectual Teamwork: Social and Technical Bases of Cooperative Work. Hillsdale, NJ: Lawrence Erlbaum, pp. 191–220. Hutchins, Edwin. (1991). Organizing Work by Adaptation. Organization Science, 2(1), 14–39.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press. Hutt, William H. (1936) [1990]. Economists and the Public: A Study of Competition and Opinion. New Brunswick, NJ and London: Transaction Publishers. Ingold, Tim. (1986). Evolution and Social Life. Cambridge: Cambridge University Press. Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Med, 2(8), e124 (0696–0701). Downloaded September 14, 2007 from www.plosmedicine.org. Ireland, Jane L. (2012). Evaluating Expert Witness Psychological Reports: Exploring Quality. University of Central Lancashire for Family Justice Council. Downloaded December 19, 2016 from http://repository.tavistockandportman.ac.uk/1065/1/Shared_Understandings_PDFA(1).pdf. Isnardi, Margherita. (1959). “Studi Recenti e Problemi Aperti sulla Structura e la Funzione della Prima Accademia Platonica.” Rivista Storica Italiana, 71(2), 271–91. Jaeger, Werner. (1923) [1934]. Aristotle, Fundamentals of the History of his Development. Translated by Richard Robinson. Oxford: Clarendon Press. (1948). Aristotle, Fundamentals of the History of his Development, 2nd edn. Translated by Richard Robinson. Oxford: Clarendon Press. Jakobsen, L. S., Archibald, Y. M., Carey, D. P., and Goodale, M. A. (1991). A Patient Recovering from Optic Ataxia. Neuropsychologia, 29(8), 803–9. Jasanoff, Sheila. (2003). Breaking the Waves in Science Studies: Comment on H. M. Collins and Robert Evans, “The Third Wave of Science Studies,” Social Studies of Science, 33(3), 399–400. Jensen, Michael C. and Ruback, Richard S. (1983). The Market for Corporate Control. Journal of Financial Economics, 11, 5–50. Johnson, Corey G. (2013). Female Inmates Sterilized in California Prisons Without Approval. Center for Investigative Reporting, July 7, 2013. Downloaded February 3, 2016 from http://cironline.org/reports/female-inmates-sterilized-california-prisonswithout-approval-4917. Jones, A. H. M. (1953). Inflation under the Roman Empire. The Economic History Review, New Series, 5(3), 293–318. Joung, Eun-Lee. (2016). North Korea’s Economic Policy as a Duet with Control and Relaxation: Dynamics Arising from the Development of Public Markets since the North Korean Famines in the 1990s. Journal of Asian Public Policy, 9(1), 75–94. Kaplan, Sarah. (2015). The Serene-Looking Buddhist Monk Accused of Inciting Burma’s Sectarian Violence. The Washington Post, May 27, 2015. Downloaded July 21, 2016 from www.washingtonpost.com/news/morning-mix/wp/2015/05/27/theburmese-bin-laden-fueling-the-rohingya-migrant-crisis-in-southeast-asia/. Kasaba, Reşat and Sibel Bozdoǧan. (2000). Turkey at a Crossroad. Journal of International Affairs, 54(1), 1–20. Kaye, David H. and Freedman, David. (2011). Reference Guide on Statistics. In National Research Council of the National Academies, Reference Manual on Scientific Evidence, 3rd edn. Washington, DC: The National Academies Press, pp. 211–302. Keil, Frank C., Stein, Courtney, Webb, Lisa, Billings, Van Dyke, and Rozenblit, Leonid. (2008). Discerning the Division of Cognitive Labor: An Emerging Understanding of How Knowledge Is Clustered in Other Minds. Cognitive Science, 32(2), 259–300.
Kelsen, Hans. (1937). The Philosophy of Aristotle and the Hellenic-Macedonian Policy. International Journal of Ethics, 48(1), 1–64. Kenneally, Ivan. (2009). Technocracy and Populism. The New Atlantis, 24, 46–60. Kerkhof, Bert. (1995). A Fatal Attraction? Smith’s “Theory of Moral Sentiments” and Mandeville’s “Fable.” History of Political Thought, 16(2), 219–33. Kessel, Reuben A. (1970). The A.M.A. and the Supply of Physicians. Law and Contemporary Problems, 35, 267–83. Keynes, John Maynard. (1922) [1977]. “Reconstruction in Europe: An Introduction.” Transcribed in Elizabeth Johnson, ed., The Collected Writings of John Maynard Keynes, vol. XVIII, Activities 1920–1922, Treaty Revision and Reconstruction. London and New York: Macmillan and Cambridge University Press, pp. 426–33. (1926) [1962]. The End of Laissez-Faire. In John M. Keynes, ed., Essays in Persuasion. New York and London: W. W. Norton & Company, pp. 312–22. (1927). Comments delivered to Malthusian League Dinner, July 26, 1927. Keynes papers, King’s College Cambridge, PS/3/109. (1936). Letter to Margaret Sanger, June 23, 1936. Margaret Sanger Papers at the Library of Congress. Marked “Recvd [?] 6/30/36.” (1944) [1980]. Letter to Hayek, 28 June 1944. Transcribed in Moggridge, Donald (ed.), The Collected Writings of John Maynard Keynes, vol. XXVII, Activities 1940–1946, Shaping the post-war world: Employment and commodities. London and New York: Macmillan and Cambridge University Press, pp. 385–8. Kirzner, I. M. (1976). The Economic Point of View: An Essay in the History of Economic Thought. Kansas City: Sheed and Ward, Inc. (1985). Discovery and the Capitalist Process. Chicago: University of Chicago Press. Kitcher, Philip. (1993). The Advancement of Science. New York and Oxford: Oxford University Press. Klein, Daniel B. and Stern, Charlotta. (2006). Economists’ Policy Views and Voting. Public Choice, 126, 331–42. Knight, Frank H. (1933). Preface to the Re-Issue. In Frank H. Knight, ed., Risk, Uncertainty and Profit. London: London School of Economics and Political Science, pp. xi–xxxvi. Koppl, Roger. (1995). The Walras Paradox. Eastern Economic Journal, 21(1), 43–55. (2002). Big Players and the Economic Theory of Expectations. London and New York: Palgrave Macmillan. (2005a). How to Improve Forensic Science. European Journal of Law and Economics, 20(3), 255–86. (2005b). Epistemic Systems. Episteme: Journal of Social Epistemology, 2(2), 91–106. (2009). Complexity and Austrian Economics. In J. B. Rosser Jr., ed., Handbook of Research on Complexity. Cheltenham: Edward Elgar. (2010a). The Social Construction of Expertise. Society, 47, 220–6. (2010b). Romancing Forensics: Legal Failure in Forensic Science Administration. In Edward Lopez, ed., The Pursuit of Justice: Law and Economics of Legal Institutions. New York: Palgrave Macmillan, pp. 51–70. (2010c). Organization Economics Explains Many Forensic Science Errors. Journal of Institutional Economics, 6(1), 71–81. (2012a). Leveraging Bias in Forensic Science. Fordham Urban Law Journal, City Square, 39, 37–56.
(2012b). Information Choice Theory. Advances in Austrian Economics, 17, 171–202. (2014). From Crisis to Confidence: Macroeconomics after the Crash. London: The Institute of Economic Affairs. Koppl, Roger and Cowan, E. James. (2010). A Battle of Forensic Experts Is Not a Race to the Bottom. Review of Political Economy, 22(2), 235–62. Koppl, Roger, Kauffman, Stuart, Felin, Teppo, and Longo, Giuseppe. (2015a). Economics for a Creative World. Journal of Institutional Economics, 11(1), 1–31. (2015b). Economics for a Creative World: A Response to Comments. Journal of Institutional Economics, 11(1), 61–8. Koppl, Roger, Charlton, David, Kornfield III, Irving, Krane, Dan, Risinger, D. Michael, Robertson, Christopher T., Saks, Michael, and Thompson, William. (2015c). Do Observer Effects Matter? A Comment on Langenburg, Bochet, and Ford. Forensic Science Policy & Management: An International Journal, 6(1–2), 1–6. Koppl, Roger and Krane, Dan. (2016). Minimizing and Leveraging Bias in Forensic Science. In Christopher T. Robertson and Aaron S. Kesselheim, eds., Blinding as a Solution to Bias. Amsterdam: Elsevier Academic Press, pp. 151–65. Koppl, Roger, Kurzban, Robert, and Kobilinsky, Lawrence. (2008). Epistemics for Forensics. Episteme: Journal of Social Epistemology, 5(2), 141–59. Koppl, Roger and Sacks, Meghan. (2013). The Criminal Justice System Creates Incentives for False Convictions. Criminal Justice Ethics, 32(2), 126–62. Koppl, Roger and Yeager, Leland. (1996). Big Players and Herding in Asset Markets: The Case of the Russian Ruble. Explorations in Economic History, 33(3), 367–83. Krane, Dan. (2008). Evaluating Forensic DNA Evidence. Downloaded July 20, 2009 from www.bioforensics.com/downloads/KranePhiladelphia.ppt. Krane, Dan E., Ford, S., Gilder, J. R., Inman, K., Jamieson, A., Koppl, R., Kornfield, I. L., Risinger, D. M., Rudin, N., Taylor, M. S., and Thompson, W. C. (2008). Sequential Unmasking: A Means of Minimizing Observer Effects in Forensic DNA Interpretation. Journal of Forensic Sciences, 53(4), 1006–7. Kuhn, Thomas S. (1970). The Structure of Scientific Revolutions, 2nd edn., enlarged, volume 2, number 2 of International Encyclopedia of Unified Science. Chicago: University of Chicago Press. Lachmann, Ludwig. (1969) [1977]. Methodological Individualism and the Market Economy. In Ludwig Lachmann, ed., Capital, Expectations, and the Market Process. Kansas City: Sheed Andrews and McMeel, Inc., pp. 149–65. Lakatos, Imre. (1970). Falsification and the Methodology of Scientific Research Programs. In I. Lakatos and A. Musgrave, eds., Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press, pp. 91–196. (1976). Proofs and Refutations: The Logic of Mathematical Discovery. Cambridge: Cambridge University Press. Langlois, Richard N. (2002). Modularity in Technology and Organization. Journal of Economic Behavior and Organization, 49, 19–37. Langlois, Richard N. and Koppl, Roger. (1991). Fritz Machlup and Marginalism: A Reevaluation. Methodus, 3(2), 86–102. Latour, Bruno. (1987). Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press. Latour, Bruno and Woolgar, Steve. (1979). Laboratory Life. London: Sage.
Law, John. (1705) [1966]. Money and Trade Considered, with a Proposal for Supplying the Nation with Money. New York: Augustus M. Kelley Publishers. Leary, Mike. (1988). If You Have Kent Cigarettes, All Romania Is Your Oyster. Philadelphia Inquirer, April 28. Downloaded August 18, 2016 from http://articles.philly.com/1988-04-28/news/26251481_1_kents-romanian-economy-romanian-currency. Lee, Gary. (1987). In Romania, Kents as Currency. The Washington Post, August 29. Downloaded August 18, 2016 from www.washingtonpost.com/archive/lifestyle/1987/08/29/in-romania-kents-as-currency/af55be66-f57c-4aeb-9d13-22bdbaa75947/. Leonard, Thomas C. (2005). Retrospectives: Eugenics and Economics in the Progressive Era. Journal of Economic Perspectives, 19(4), 207–24. (2016). Illiberal Reformers: Race, Eugenics & American Economics in the Progressive Era. Princeton and Oxford: Princeton University Press. LeRoy, Stephen F. (1989). Efficient Capital Markets and Martingales. Journal of Economic Literature, 23(4), 1583–621. Levy, David M. (2001). How the Dismal Science Got Its Name: Classical Economics and the Ur-Text of Racial Politics. Ann Arbor: University of Michigan Press. Levy, David M., Houser, Daniel, Padgitt, Kail, Peart, Sandra J., and Xiao, Erte. (2011). Leadership, Cheap Talk and Really Cheap Talk. Journal of Economic Behavior and Organization, 77(1), 40–52. Levy, David M. and Peart, Sandra J. (2006). The Fragility of a Discipline when a Model has Monopoly Status. Review of Austrian Economics, 19, 125–36. (2007). Sympathetic Bias. Statistical Methods in Medical Research, 17, 265–77. (2008a). Thinking about Analytical Egalitarianism. American Journal of Economics and Sociology, 67(3), 473–9. (2008b). Inducing Greater Transparency: Towards an Econometric Code of Ethics. Eastern Economic Journal, 34, 103–14. (2012). Tullock on Motivated Inquiry: Expert-Induced Uncertainty Disguised as Risk. Public Choice, 15(1–2), 163–80. Levy, David M. and Peart, Sandra J. (2017). Escape from Democracy: The Role of Experts and the Public in Economic Policy. Cambridge: Cambridge University Press. Lewis, Alain A. (1992). On Turing Degrees of Walrasian Models and a General Impossibility Result in the Theory of Decision-Making. Mathematical Social Sciences, 24, 141–71. Lewis, Paul. (2010). Peter Berger and His Critics: The Significance of Emergence. Society, 47, 207–13. Lindblom, Charles E. (1959). The Science of “Muddling Through.” Public Administration Review, 19, 79–88. Liptak, Adam. (2016). Supreme Court Finds Racial Bias in Jury Selection for Death Penalty Case. New York Times, May 23, 2016. Downloaded June 1, 2016 from www.nytimes.com/2016/05/24/us/supreme-court-black-jurors-death-penalty-georgia.html?_r=0. Lloyd-Jones, Hugh. (1976). The Delphic Oracle. Greece & Rome, 23(1), 60–73. Lofgren, Mike. (2016). The Deep State: The Fall of the Constitution and the Rise of a Shadow Government. New York: Penguin Books. Longo, Giuseppe, Montévil, Maël, and Kauffman, Stuart. (2012). No Entailing Laws, but Enablement in the Evolution of the Biosphere. Physics ArXiv, 1201.2069v1, http://arxiv.org/abs/1201.2069.
Lotka, Alfred J. (1945). The Law of Evolution as a Maximal Principle. Human Biology, 17(3), 167–94. Lowe, Josh. (2016). Michael Gove: I’m “Glad” Economic Bodies Don’t Back Brexit. Newsweek, June 3, 2016. Downloaded December 19, 2016 from http://europe.newsweek.com/michael-gove-sky-news-brexit-economics-imf-466365?rm=eu. Luban, David, Strudler, Alan, and Wasserman, David. (1992). Moral Responsibility in the Age of Bureaucracy. Michigan Law Review, 90(8), 2348–92. Ludden, Jennifer. (2016). Should We Be Having Kids in the Age of Climate Change? NPR, August 18, 2016. Downloaded April 30, 2017 from www.npr.org/2016/08/18/479349760/should-we-be-having-kids-in-the-age-of-climate-change. Lutz, Donna J. and Keil, Frank C. (2003). Early Understanding of the Division of Cognitive Labor. Child Development, 73(4), 1073–84. Lynch, Michael and Cole, Simon A. (2005). Science and Technology Studies on Trial: Dilemmas and Expertise. Social Studies of Science, 35(2), 269–311. Machlup, Fritz. (1955). The Problem of Verification in Economics. Southern Economic Journal, 22, 1–21. Mackay, Charles. (1852). Memoirs of Extraordinary Popular Delusions and the Madness of Crowds. London: Office of the National Illustrated Library. Downloaded February 5, 2004 from www.econlib.org/library/Mackay/macEx15.html. Maloney, L. T. and Wandell, B. A. (1987). Color Constancy: A Method for Recovering Surface Spectral Reflectance. In M. A. Fischler and O. Firschein, eds., Readings in Computer Vision: Issues, Problems, Principles, and Paradigms. San Francisco: Morgan Kaufmann, pp. 293–7. Mandeville, Bernard. (1729) [1924]. The Fable of the Bees: Or, Private Vices, Publick Benefits, with a Commentary Critical, Historical, and Explanatory by F. B. Kaye, in two volumes. Oxford: Clarendon Press. (1732) [1953]. A Letter to Dion, Occasion’d by his Book call’d Alciphron or the Minute Philosopher. London: J. Roberts in Warwick Lane. Facsimile reproduction in The Augustan Reprint Society publication number 41. Los Angeles: William Andrews Clark Memorial Library, University of California. Manne, Henry G. (1965). Mergers and the Market for Corporate Control. The Journal of Political Economy, 73(2), 110–20. Mannheim, Karl. (1936) [1985]. Ideology and Utopia: An Introduction to the Sociology of Knowledge. New York: Harcourt, Brace & World, Inc. (1940). Man and Society in an Age of Reconstruction: Studies in Modern Social Structure. New York: Harcourt, Brace & World. (1952). The Problem of Generations. In Mannheim, Karl, ed., Essays in the Sociology of Knowledge. London: Routledge & Kegan Paul Ltd., pp. 276–322. Margolis, Howard. (1998). Tycho’s Illusion and Human Cognition. Nature, 392, 857. (2007). Are Economists Human? Applied Economics Letters, 14, 1035–7. Markose, S. M. (2005). Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems. Economic Journal, 115, F159–92. Martin, Adam. (2015). Degenerate Cosmopolitanism. Social Philosophy and Policy, 32(1), 74–100. Martin, David. (1968). The Sociology of Knowledge and the Nature of Social Knowledge. The British Journal of Sociology, 19(3), 334–42.
Marx, Karl. (1867) [1909]. Capital: A Critique of Political Economy. Volume I: The Process of Capitalist Production. Edited by Friedrich Engels and translated from the third German edition (1873) by Samuel Moore and Edward Aveling. Chicago: Charles H. Kerr & Company. (1859) [1904]. Author’s Preface. In A Contribution to the Critique of Political Economy, translated from the second German edition (1897) by N. I. Stone. Chicago: Charles H. Kerr & Company, pp. 9–15. Mauss, Marcel. (1925) [1969]. The Gift: Forms and Functions of Exchange in Archaic Societies. London: Cohen and West. McCabe, Kevin, Houser, Daniel, Ryan, Lee, Smith, Vernon, and Trouard, Theodore. (2001). A Functional Imaging Study of Cooperation in Two-Person Reciprocal Exchange. Proceedings of the National Academy of Sciences, 98(20), 11832–5. McCullagh, Declan. (2013). NSA Spying Flap Extends to Contents of U.S. Phone Calls. CNET, June 15, 2013. Downloaded January 13, 2017 from www.cnet.com/news/ nsa-spying-flap-extends-to-contents-of-u-s-phone-calls/. McKeon, Michael. (2005). The Secret History of Domesticity: Public, Private, and the Division of Knowledge. Baltimore: Johns Hopkins University Press. Mendel, Gregor. (1865) [1996]. Experiments in Plant Hybridization. English translation of German original. Downloaded October 7, 2010 from www.mendelweb.org/ Mendel.html. Menger, Carl. (1871) [1981] Principles of Economics. Translated by James Dingwell and Bert F. Hoselitz. New York and London: New York University Press. Merlan, Philip. (1954). Isocrates, Aristotle and Alexander the Great. Historia: Zeitschrift für Alte Geschichte, 3(1), 60–81. Merton, Robert K. (1937) [1957]. Science and the Social Order. Reproduced in Robert K. Merton, Social Theory and Social Structure, rev edn. New York: The Free Press, pp. 537–49. (1945). “Role of the Intellectual in Public Bureaucracy. Social Forces, 23(4): 405–15. Merton, Robert. (1976). Sociological Ambivalence and Other Essays. New York: Free Press. Merton, Robert and Barber, Elinor. (1976). Sociological Ambivalence. In Robert Merton, ed., Sociological Ambivalence and Other Essays. New York: Free Press, pp. 3–31. Milgrom, P. and Roberts, J. (1986). Relying on Information of Interested Parties. RAND Journal of Economics, 17, 18–32. Mill, J. S. (1869) [1977]. On Liberty. In Essays on Politics and Society, vol. 18, Collected Works of John Stuart Mill. Edited by J. Robson. Toronto: University of Toronto Press, pp. 213–310. Miller, James C. III. (1999). Monopoly Politics. Stanford: Hoover Press. (2006). Monopoly Politics and Its Unsurprising Effects. In Roger Koppl, ed., Money and Markets: Essays in Honor of Leland B. Yeager. New York: Routledge, 2006, pp. 48–65. Miller, Jeff. (1998). Aristotle’s Paradox of Monarchy and the Biographical Tradition. History of Political Thought, 19(4), 501–16. Mills, Steve, McRoberts, Flynn, and Possley, Maurice. (2004). Man Executed on Disproved Forensics: Fire that Killed His 3 Children Could Have Been Accidental, Chicago Tribune, 9 December 2004.
Mises, L. von [1920] (1935). Economic Calculation in the Socialist Commonwealth. In F. A. Hayek, ed., Collectivist Economic Planning. George Routledge and Sons, London, pp. 87–130. Mises, Ludwig von. [1932] (1981). Socialism. 3rd rev. edn. Indianapolis: Liberty Classics. Mises, Ludwig. (1966). Human Action: A Treatise on Economics, 3rd rev. edn. Chicago: Henry Regnery Company. Mnookin, Jennifer L. (2001). Scripting Expertise: The History of Handwriting Identification Evidence and the Judicial Construction of Reliability. Virginia Law Review, 87(8), 1723–845. Mnookin, Jennifer, Cole, Simon A., Dror, Itiel E., Fisher, Barry, A. J., Houck, Max M., Inman, Keith, Kaye, David H., Koehler, Jonathan J., Langenburg, Glenn, Risinger, D. Michael, Rudin, Norah, Siegel, Jay, and Stoney, David A. (2011). The Need for a Research Culture in the Forensic Sciences. UCLA Law Review, 58, 725–79. Mudde, Cas. (2004). The Populist Zeitgeist. Government and Opposition, 39(4), 542–63. Mudie, Robert. (1840). Preliminary Address. The Surveyor, Engineer, and Architect, 1 (1), 1–6. Mueller, Dennis C. (1986). Rational Egoism versus Adaptive Egoism as Fundamental Postulate for a Descriptive Theory of Human Behavior. Public Choice, 51, 3–23. Mueller, D. C. (1989). Public Choice II. Cambridge: Cambridge University Press. Müller, Gerhard. (1951). Studien zu den platonischen Nomoi. Munich: C. H. Beck. Musacchio, J. M. (2003). Dissolving the Explanatory Gap: Neurobiological Differences between Phenomenal and Propositional Knowledge. Brain and Mind, 3, 331–65. Narasu, Lakshmi P. (1912). The Essence of Buddhism with Illustrations of Buddhist Art. Madras: Srinivasa Varadachari. NAS, National Academy of Sciences, Committee on Identifying the Needs of the Forensic Sciences Community. (2009). Strengthening Forensic Science in the United States: A Path Forward. Washington, DC: The National Academies Press. Neisser, Ulrich. (1976). Cognition and Reality: Principles and Implications of Cognitive Psychology. San Francisco: W. H. Freeman. Neumann, John von and Morgenstern, Oskar. (1953). Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press. Nichols, Ronald G. (2007). Defending the Scientific Foundations of the Firearms and Toolmark Identification Discipline: Responding to Recent Challenges. Journal of Forensic Science, 52(3), 586–94. Nilsson, Martin P. (1940) [1972]. Greek Folk Religion. Philadelphia: University of Pennsylvania Press. NIST. (2011). Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach. The Report of the Expert Working Group on Human Factors in Latent Print Analysis. Washington, DC: National Institute of Standards and Technology Forensic Science Program. Nock, Arthur Darby. (1942). Religious Attitudes of the Ancient Greeks. Proceedings of the American Philosophical Society, 85(5), 472–82. Odling, William. (1860). Science in Courts of Law. The Journal of the Society of Arts, 8 (375): 167–8. Office of the Inspector General. (2006). A Review of the FBI’s Handling of the Brandon Mayfield Case. Washington, DC: US Department of Justice.
Ojakangas, Mika. (2011). Michel Foucault and the Enigmatic Origins of Bio-Politics and Governmentality. History of the Human Sciences, 25(1), 1–14. O’Neill, Brian C., Dalton, Michael, Fuchs, Regina, Jiang, Leiwen, Pachauri, Shonali, and Zigova, Katarina. (2010). Global Demographic Trends and Future Carbon Emissions. PNAS, 107(41), 17521–6. OpenSecrets.org. (2016). “Top Spenders.” Downloaded July 22, 2016 from www.opensecrets.org/lobby/top.php?showYear=a&indexType=s. Orchard, Lionel and Stretton, Hugh. (1997). Public Choice. Cambridge Journal of Economics, 21, 409–30. Ostrom, Vincent. (1989). The Intellectual Crisis in American Public Administration, 2nd edn. Tuscaloosa and London: The University of Alabama Press. Owen, Robert. (1841). Lectures on the Rational System of Society. London: Home Colonization Society. Pak, Sunyoung. (2004). The Biological Standard of Living in the Two Koreas. Economics and Human Biology, 2, 511–21. Pearson, Karl. (1911). The Scope and Importance to the State of the Science of National Eugenics, 3rd edn. London: Dulau Co., Ltd. Peart, Sandra J. and Levy, David M. (2005). The “Vanity of the Philosopher”: From Equality to Hierarchy in Postclassical Economics. Ann Arbor: The University of Michigan Press. (2010). If Germs Could Sponsor Research: Reflections on Sympathetic Connections amongst Subjects and Researchers. Paper presented at the 3rd biennial Wirth Institute workshop on Austrian Economics, Vancouver, October 14–16, 2010. Pierce, A. (2008). The Queen Asks Why No One Saw the Credit Crunch Coming. Telegraph, November 5, 2008. Downloaded February 26, 2013 from www.telegraph.co.uk/news/uknews/theroyalfamily/3386353/The-Queen-asks-why-noone-saw-the-credit-crunch-coming.html. Penrose, Clement B. and Frazer, Persifor. (1902). Expert Testimony. A Discussion. The American Law Register, 50(6), 346–50. Perrow, Charles. (1984) [1999]. Normal Accidents: Living with High Risk Systems. Princeton: Princeton University Press. Pichert, James W. and Richard C. Anderson. (1977). Taking Different Perspectives on a Story. Journal of Educational Psychology, 69(4), 309–15. Pickering, Andrew. (1992). From Science as Knowledge to Science as Practice. In Andrew Pickering, ed., Science as Practice and Culture. Chicago: The University of Chicago Press, pp. 1–26. Pinker, Steven. (2011). The Better Angels of Our Nature: Why Violence Has Declined. New York: Viking. Podolsky, Scott H., Jones, David S., and Kaptchuk, Ted J. (2016). From Trials to Trials: Blinding, Medicine, and Honest Adjudication. In Christopher T. Robertson and Aaron S. Kesselheim, eds., Blinding as a Solution to Bias. Amsterdam: Elsevier Academic Press, pp. 45–58. Polanyi, Michael. (1958). Personal Knowledge. Chicago: University of Chicago Press. (1962). The Republic of Science: Its Political and Economic Theory. Minerva, 1, 54–73. Popper, K. (1959). The Logic of Scientific Discovery. London: Hutchinson.
Popper, Karl. (1979). Objective Knowledge: An Evolutionary Approach, rev. edn. Oxford: Oxford University Press. Posner, Richard A. (1974). Theories of Economic Regulation. The Bell Journal of Economics and Management Science, 5(2), 335–58. Potts, Jason. (2012). Novelty-Bundling Markets. In David E. Andersson, ed., The Spatial Market Process, Advances in Austrian Economics, vol. 16. Bradford: Emerald Publishing Group, pp. 291–312. Prasad, K. (2009). The Rationality/Computability Trade-Off in Finite Games. Journal of Economic Behavior and Organization, 69, 17–26. Prendergast, Renee. (2010). Accumulation of Knowledge and Accumulation of Capital in Early “Theories” of Growth and Development. Cambridge Journal of Economics, 34(3), 413–31. (2014). Knowledge, Innovation and Emulation in the Evolutionary Thought of Bernard Mandeville. Cambridge Journal of Economics, 38(1), 87–107. Price, David H. (1998). Cold War Anthropology: Collaborators and Victims of the National Security State. Identities, 4(3–4), 389–430. Quiggin, John. (1987). Egoistic Rationality and Public Choice: A Critical Review of Theory and Evidence. Economic Record, 63(1), 10–21. Quine, W. V. (1951). Two Dogmas of Empiricism. The Philosophical Review, 60(1), 20–43. Radford, R. A. (1945). The Economic Organisation of a P.O.W. Camp. Economica, 12(48), 189–201. Radnitzky, Gerard and Bartley, W. W. III, eds. (1987). Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Chicago and La Salle, IL: Open Court. Rahula, Walpola. (1974). What the Buddha Taught, revised and expanded edn. New York: Grove Press. Read, Leonard E. (1958) [1999]. I, Pencil: My Family Tree as Told to Leonard E. Read. Irvington-on-Hudson, NY: The Foundation for Economic Education, Inc. Currently available at www.econlib.org/library/Essays/rdPncl1.html. Reagan, Michael D. (1965). Why Government Grows. Challenge, 14(1), 4–7. Reason, James. (1990). Human Error. Cambridge: Cambridge University Press. Reeve, C. D. C. (1990). Socrates in the Apology. Indianapolis, IN: Hackett Publishing Company. Rehg, William. (2013). Selinger and Contested Expertise: The Recognition Problem. In Stephen Turner, William Rehg, Heather Douglas, and Evan Selinger, eds., Book Symposium on Expertise: Philosophical Reflections by Evan Selinger. Philosophy & Technology, 26(1), 93–109. Reid, Sue. (2012). “The ‘Experts’ Who Break Up Families: The Terrifying Story of the Prospective MP Branded an Unfit Mother by Experts Who’d Never Met Her – A Nightmare Shared by Many Other Families.” Daily Mail, March 28, 2012, updated March 29, 2012. Downloaded December 19, 2016 from www.dailymail.co.uk/news/article-2121886/The-experts-break-families-The-terrifying-story-prospective-MPbranded-unfit-mother-experts-whod-met-nightmare-shared-families.html. Reynolds, Russell. (1867). On Some of the Relations between Medical and Legal Practice. (Lumleian Lectures, Delivered before the Royal College of Physicians). The British Medical Journal, 1(335), 644.
Riley, Ricky. (2017). Buffalo CPS Takes Children, Has Mother Arrested for Choosing to Homeschool Them. Atlanta Black Star, February 10, 2017. Downloaded February 10, 2017 from http://atlantablackstar.com/2017/02/10/buffalo-cps-takes-childrenmother-arrested-choosing-homeschool/. Risinger, D. Michael, Denbeaux, Mark P., and Saks, Michael J. (1989). Exorcism of Ignorance as a Proxy for Rational Knowledge: The Lessons of Handwriting Identification “Expertise.” University of Pennsylvania Law Review, 137(3), 731–92. Risinger, Michael, Saks, Michael J., Thompson, William C., and Rosenthal, Robert. (2002). The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion. California Law Review, 90, 1–56. Robertson, C. T. (2010). The Blind Expert: A Litigant-Driven Solution to Bias and Error. New York University Law Review, 85, 174–257. Robertson, Christopher T. (2011). Biased Advice. Emory Law Journal, 60(3), 653–703. (2016). Why Blinding? How Blinding? A Theory of Blinding and Its Application to Institutional Corruption. In Christopher T. Robertson and Aaron S. Kesselheim, eds., Blinding as a Solution to Bias. Amsterdam: Elsevier Academic Press, pp. 25–38. Rosenthal, Robert. (1978). How Often Are Our Numbers Wrong? American Psychologist, 33, 1005–8. Rosenthal, Robert and Fode, Kermit T. (1961). The Problem of Experimenter Outcome-Bias. In Donald P. Ray, ed., Symposium Studies Series No. 8, Series Research in Social Psychology. Washington, DC: The National Institute of Social and Behavioral Science, pp. 9–14. Ross, David. (1952). The Works of Aristotle, vol. XII: Select Fragments. Oxford: The Clarendon Press. Ross, Stephen A. (1973). The Economic Theory of Agency: The Principal’s Problem. The American Economic Review, 63(2), 134–9. Roucek, Joseph S. (1963). Some Academic “Rackets” in the Social Sciences. The American Behavioral Scientist, 6(5), 9–10. Rousselle, Christine. (2017). Bill Nye the Eugenics Guy: Maybe We Should Penalize People with “Extra Kids.” Townhall.com, April 26, 2017. Downloaded May 1, 2017 from https://townhall.com/tipsheet/christinerousselle/2017/04/26/bill-nyethe-eugenics-guy-maybe-we-should-penalize-people-with-extra-kids-n2318527. Roy, Avik. (2014). ACA Architect: “The Stupidity of The American Voter” Led Us to Hide Obamacare’s True Costs from the Public. Forbes, November 10, 2014. Downloaded December 20, 2016 from www.forbes.com/sites/theapothecary/ 2014/11/10/aca-architect-the-stupidity-of-the-american-voter-led-us-to-hide-oba macares-tax-hikes-and-subsidies-from-the-public/#192f610779bc. Rubel, Alexander. (2014). Fear and Loathing in Ancient Athens: Religion and Politics during the Peloponnesian War. Durham, UK and Bristol, CT: Acumen. Rucker, Philip and Costa, Robert. (2017). Bannon Vows a Daily Fight for “Deconstruction of the Administrative State.” Washington Post, February 23, 2017. Downloaded March 15, 2017 from www.washingtonpost.com/politics/top-wh-strategist-vows-adaily-fight-for-deconstruction-of-the-administrative-state/2017/02/23/03f6b8da-f9ea11e6-bf01-d47f8cf9b643_story.html?utm_term=.53e68b32c0a7. Ryle, G. (1949). Knowing How and Knowing That. In G. Ryle, ed., The Concept of Mind. London: Hutchinson’s University Library, pp. 25–61.
Sah, Raaj Kumar and Stiglitz, Joseph E. (1986). The Architecture of Economic Systems: Hierarchies and Polyarchies. American Economic Review, 76(4), 716–27. Salter, Alexander William. (2017). The Constitution of Economic Expertise: Deep History, Extended Present, and the Institutions of Economic Scholarship. February 28, 2017. Downloaded March 4, 2017 from https://ssrn.com/abstract=2925241. Sandefur, Timothy. (2015–16). Free Speech for You and Me, But Not for Professionals. Regulation, 38(4), 48–53. Sandford, Jeremy A. (2010). Experts and Quacks. The RAND Journal of Economics, 41(1), 199–214. Savage, Deborah A. (1994). The Professions in Theory and History: The Case of Pharmacy. Business and Economic History, 23(2), 129–60. Scheler, Max. (1926) [1960/1980]. Problems of a Sociology of Knowledge. London, Boston, and Henley: Routledge & Kegan Paul. Schiemann, John. (2000). Meeting Halfway between Rochester and Frankfurt: Generative Salience, Focal Points, and Strategic Interaction. American Journal of Political Science, 44(1), 1–16. Schlauch, Margaret. (1940). The Revolt of 1381 in England. Science & Society, 4(4), 414–32. Schulz, Kenneth F. and Grimes, David A. (2002). Blinding in Randomised Trials: Hiding Who Got What. The Lancet, 359, 696–700. Schutz, Alfred. (1932) [1967]. The Phenomenology of the Social World. Translated by George Walsh and Frederick Lehnert. Evanston, IL: Northwestern University Press. (1943). The Problem of Rationality in the Social World. Economica, N. S. 10, 130–49. (1945). On Multiple Realities. Philosophy and Phenomenological Research, 5(4), 533–76. (1946). The Well-Informed Citizen. Social Research, 13(4), 463–78. Schutz, A. (1951) [1962]. Choosing among Projects of Action. In A. Schutz, ed., Collected Papers I: The Problem of Social Reality. The Hague: Martinus Nijhoff, pp. 67–96. (1953) [1962]. Common-Sense and Scientific Interpretation of Human Action. In A. Schutz, ed., Collected Papers I: The Problem of Social Reality. The Hague: Martinus Nijhoff, pp. 3–47. (1959) [1962]. Husserl’s Importance for the Social Sciences. In Collected Papers I: The Problem of Social Reality. The Hague: Martinus Nijhoff, pp. 140–9. (1996). Political Economy: Human Conduct in Social Life. In Collected Papers IV. The Hague: Martinus Nijhoff, pp. 93–105. Schwekendiek, Daniel. (2009). Height and Weight Differences between North and South Korea. Journal of Biosocial Science, 41(1), 51–5. Scott, Peter Dale. (2007). The Road to 9/11: Wealth, Empire, and the Future of America. Berkeley: University of California Press. Senn, Peter R. (1951). Cigarettes as Currency. Journal of Finance, 6(3), 329–32. Shavit, David. (1990). The United States in Asia: A Historical Dictionary. Westport, CT: Greenwood Press. Shaw, Julia. (2016). The Real Reason We Don’t Trust Experts Anymore. Independent, July 8, 2016. Downloaded March 19, 2016 from www.independent.co.uk/voices/the-real-reason-that-we-don-t-trust-experts-a7126536.html.
Shepard, Jon. (2013). Cengage Advantage: Sociology, 11th edn. Belmont, CA: Wadsworth, Cengage Learning. Sherif, Muzafer and Sherif, Carolyn. (1969). Social Psychology. New York: Harper & Row. Shreffler, Karina M., McQuillan, Julia, Greil, Arthur L., and Johnson, David R. (2015). Surgical Sterilization, Regret, and Race: Contemporary Patterns. Social Science Research, 50, 31–45. Siddiqui, Usaid. (2015). Myanmar’s Buddhist Terrorism Problem. Aljazeera America, February 18, 2015. Downloaded July 21, 2016 from http://america.aljazeera.com/opinions/2015/2/myanmars-buddhist-terrorism-problem.html. Simon, Herbert A. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467–82. Singerman, David Roth. (2016). Keynesian Eugenics and the Goodness of the World. Journal of British Studies, 55, 538–65. Skenazy, Lenore. (2014). Mom Jailed Because She Let Her 9-Year-Old Daughter Play in the Park Unsupervised. Reason, July 14, 2014. Downloaded December 19, 2016 from http://reason.com/blog/2014/07/14/mom-jailed-because-she-let-her-9-year-ol. Small, Albion. (1908). Ratzenhofer’s Sociology. The American Journal of Sociology, 13(4), 433–8. Smith, Adam. (1759) [1761/1767/1774/1781/1790/1976/1984]. The Theory of Moral Sentiments, ed. D. D. Raphael and A. L. Macfie. Indianapolis, IN: Liberty Press. (1776) [1789/1904/2000]. An Inquiry into the Nature and Causes of the Wealth of Nations. Library of Economics and Liberty, www.econlib.org/library/Smith/smWNCover.html, last visited January 27, 2012. (This online edition of the book is based on Edwin Cannan’s 1904 compilation of the 5th edition, which was published in 1789.) (1982). Lectures on Jurisprudence. Indianapolis, IN: Liberty Fund, Inc. Smith, Adam C., Wagner, Richard E., and Yandle, Bruce. (2011). A Theory of Entangled Political Economy, with Application to TARP and NRA. Public Choice, 148, 45–66. Smith, R. Angus. (1860). Science in Our Courts of Law. Journal of the Society of Arts, 8(374), 135–42. Smith, Nicholas D. (1989). Diviners and Divination in Aristophanic Comedy. Classical Antiquity, 8(1), 140–58. Smith, Vernon. (1998). The Two Faces of Adam Smith. Southern Economic Journal, 65(1), 1–19. (2003). Constructivist and Ecological Rationality in Economics. American Economic Review, 93(3), 465–508. (2009). Rationality in Economics: Constructivist and Ecological Forms. Cambridge: Cambridge University Press. (2014). F.A. Hayek and the Nobel Prize, video created by the Mercatus Center at George Mason University. Smith’s comments can be found at 37:47. http://mercatus.org/events/40-years-after-nobel-fa-hayek-and-political-economyprogressive-research-program. Sokal, Alan D. (1996). Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity. Social Text, 46(47), 217–52. Stanek, Richard W., Leidholt, Michael H., McConnell, Robert, Steckler, Craig T., Ramsey, Charles H., and Bushman, Bob. (2013). Letter to US Attorney General Eric H. Holder,
Jr., August 30, 2013. Downloaded September 2, 2013 from www.theiacp.org/portals/0/pdfs/FINALLawEnforcementGroupLetteronDOJMarijuanaPolicy.pdf. Steckel, Richard H. (1995). Stature and the Standard of Living. Journal of Economic Literature, 33(4), 1903–40. Stern, Alexandra Minna. (2005). Sterilized in the Name of Public Health: Race, Immigration, and Reproductive Control in Modern California. American Journal of Public Health, 95(7), 1128–38. Stewart, Francis E. (1928). Origin of the Council of Pharmacy and Chemistry of the American Medical Association. Journal of the American Pharmaceutical Association, 17(12), 1234–9. Stigler, George J. (1971). The Theory of Economic Regulation. The Bell Journal of Economics and Management Science, 2(1), 3–21. Strauss, Peter L. (1984). The Place of Agencies in Government: Separation of Powers and the Fourth Branch. Columbia Law Review, 84(3), 573–669. Svorny, Shirley. (2004). Licensing Doctors: Do Economists Agree? Economic Journal Watch, 1(2), 279–305. Svorny, Shirley and Herrick, Devon M. (2011). Increasing Access to Health Care. In Roger Koppl, ed., Enterprise Programs: Freeing Entrepreneurs to Provide Essential Services for the Poor, A Task Force Report. Dallas: National Center for Policy Analysis, pp. 71–88. Szasz, Thomas S. (1960). The Myth of Mental Illness. American Psychologist, 15(2), 113–18. Taleb, Nassim. (2012). Antifragile. New York: Penguin. Taylor, Alfred Swaine. (1859). On Poisons in Relation to Medical Jurisprudence and Medicine, 2nd US edn. Philadelphia: Blanchard and Lea. (1880). A Manual of Medical Jurisprudence, 8th US edn. Philadelphia: Henry C. Lea’s Son & Co. Taylor, John Pitt. (1848). A Treatise of the Law of Evidence, as Administered in England and Ireland. London: A. Maxwell & Son, Law Booksellers and Publishers. (1887). A Treatise of the Law of Evidence, as Administered in England and Ireland, from the 8th English edn. Philadelphia: The Blackstone Publishing Company. Tetlock, Philip and Gardner, Dan. (2015). Superforecasting: The Art and Science of Prediction. New York: Crown Publishers. Theoharis, Athan. (2007). The Quest for Absolute Security: The Failed Relations among U.S. Intelligence Agencies. Chicago: Ivan R. Dee. Thompson, William C. (1995). Subjective Interpretation, Laboratory Error and the Value of Forensic DNA Evidence: Three Case Studies. Genetica, 96, 153–68. (2008). Beyond Bad Apples: Analyzing the Role of Forensic Science in Wrongful Convictions. Southwestern University Law Review, 37, 1027–50. (2009). Painting the Target around the Matching Profile: The Texas Sharpshooter Fallacy in Forensic DNA Interpretation. Law, Probability and Risk, 8(3), 257–76. Tiebout, C. (1956). A Pure Theory of Local Expenditures. Journal of Political Economy, 64, 416–24. Triplett, Jeremy S. (2013). National Survey on the Use of Court Fees for the Funding of Crime Laboratory Operations. Poster presented at 2013 ASCLD Symposium, Durham, NC, May 4–9, 2013.
Tsuji, M., daCosta, N. C. A., and Doria, F. A. (1998). The Incompleteness of Theories of Games. Journal of Philosophical Logic, 27, 553–64. Tullock, Gordon. (1966) [2005]. The Organization of Inquiry. Indianapolis, IN: Liberty Fund. Turley, Jonathan. (2012). 10 Reasons the U.S. Is No Longer the Land of the Free. Washington Post, January 13, 2012. Downloaded March 16, 2017 from www.washingtonpost.com/opinions/is-the-united-states-still-the-land-ofthe-free/2012/01/04/gIQAvcD1wP_story.html?utm_term=.66e29c3f7a07. Turner, Stephen. (1991). Social Construction and Social Theory. Sociological Theory, 9 (1), 22–33. (2001). What Is the Problem with Experts? Social Studies of Science, 31(1), 123–49. (2003). Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications. (2010). Normal Accidents of Expertise. Minerva, 48, 239–58. (2014). The Politics of Expertise. New York and London: Routledge. Tversky, A. and Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(September 27), 1124–31. Uexküll, J. (1934) [2010]. A Foray into the World of Animals and Humans: With a Theory of Meaning. Minneapolis: University of Minnesota Press. Ullmann-Margalit, E. (1978). Invisible-Hand Explanations. Synthese, 39(2), 263–91. Unattributed. (1844). Review of Mr. Taylor’s Medical Jurisprudence. Provincial Medical Journal and Retrospect of the Medical Sciences, 7(171), 271–3. (1856). The Evidence in Palmer’s Case. The American Law Register, 5(1), 20–46. (1859). “Review” of Taylor, J. S. 1859. The American Law Register, 7(9), 573–5. (1877a). Expert Testimony. The British Medical Journal, 2(881), 704. (1877b). The Penge Case: Medical Evidence of the Experts. The British Medical Journal, 2(874), 449. (1890). Expert Evidence. The British Medical Journal, 1(1522), 491–2. (2017). Water Lead-Level Falls below Federal Limit in Flint. Associated Press, January 24, 2017. Downloaded May 30, 2017 from www.nbcnews.com/storyline/ flint-water-crisis/water-lead-level-falls-below-federal-limit-flint-n711716. Urbina, Ian. 2009. Despite Red Flags about Judges, a Kickback Scheme Flourished. New York Times, March 27, 2009. Downloaded November 30, 2016 from www.nytimes.com/2009/03/28/us/28judges.html. Vasari, G. (1996) [1568]. Lives of the Painters, Sculptors and Architects. London: Everyman’s Library. Velupillai, Vela. 2007. The Impossibility of an Effective Theory of Policy in a Complex Economy. In Massimo Salzano and David Colander, eds., Complexity Hints for Economic Policy. Milan: Springer., pp. 273–90. ed. (2005). Computability, Complexity and Constructivity in Economic Analysis. Oxford: Blackwell. Vico, Giambattista. (1744) [1959]. Principi di scienza nuova d’intorno all commune natura dell nazioni. In Vico, Giambattista Opere, corrected, clarified and expanded (“Corretta, Schiarita, e notabilmente Accresciuta”) by Paolo Rossi. Milan: Rizzoli. (At the time of this writing available online at: www.letteraturaitaliana.net/pdf/ Volume_7/t204.pdf.)
Villagra, Hector. (2013). Release Secret Interpretation of the Patriot Act. Albuquerque Journal, June 14, 2013. Downloaded January 13, 2017 from www.abqjournal.com/210388/release-secret-interpretation-of-the-patriot-act.html. Vitali, Stefania, Glattfelder, James B., and Battiston, Stefano. (2011). The Network of Global Corporate Control. PLoS ONE, 6(10), e25995. Wagner, Helmut R. (1963). Types of Sociological Theory: Toward a System of Classification. American Sociological Review, 28(5), 735–42. Wagner, Helmut R. and Psathas, George. (1996). “Editors’ Introduction.” In A. Schutz, Collected Papers IV. The Hague: Martinus Nijhoff, p. 93. Wagner, Richard E. (2006). Retrogressive Regime Drift within a Theory of Emergent Order. Review of Austrian Economics, 19, 113–23. (2010). Mind, Society, and Human Action: Time and Knowledge in a Theory of Social Economy. London and New York: Routledge. Walras, Léon. (1874–7) [1954]. Elements of Pure Economics, translated from the Edition Définitive of 1926 by William Jaffé. Homewood, IL: Richard D. Irwin, Inc. Walravens, Hartmut. (2006). The Early East Asian Press in the Eyes of the West. Some Bibliographical Notes. In Walravens, Hartmut, ed., Newspapers of the World Online: U.S. and International Perspectives. Proceedings of Conferences in Salt Lake City and Seoul, 2006. München: K. G. Saur, pp. 159–72. Washburn, Emory. (1876). Expert Testimony and the Public Service of Experts. Public Health Papers and Reports, 3, 32–41. Watts, Nicole F. (1999). Allies and Enemies: Pro-Kurdish Parties in Turkish Politics, 1990–94. International Journal of Middle East Studies, 31(4), 631–56. Wassink, Alfred. (1991). Inflation and Financial Policy under the Roman Empire to the Price Edict of 301 A.D. Historia: Zeitschrift für Alte Geschichte, 40(4), 465–93. Webb, Sidney. (1912). The Economic Theory of a Legal Minimum Wage. Journal of Political Economy, 20(10), 973–98. Webb, Sidney and Webb, Beatrice. (1897) [1920]. Industrial Democracy. London: Longmans Green. Weber, Max. (1927) [1981]. General Economic History. New Brunswick and London: Transaction Books. (1956) [1978]. Economy and Society: An Outline of Interpretive Sociology. Berkeley: University of California Press. Weinstein, Michael M. (2009). Paul A. Samuelson, Economist, Dies at 94. New York Times, December 13, 2009. Whitacre, James and Bender, Axel. (2010). Degeneracy: A Design Principle for Achieving Robustness and Evolvability. Journal of Theoretical Biology, 263, 143–53. White, L. H. (2005). The Federal Reserve System’s Influence on Research in Monetary Economics. Economic Journal Watch, 2(2), 325–54. Whitman, D. Glen and Koppl, Roger. (2010). Rational Bias in Forensic Science. Law, Probability and Risk, 9(1), 69–90. Wible, James. (1998). The Economics of Science: Methodology and Epistemology as If Economics Mattered. London and New York: Routledge. Wierzchosławski, Rafał Paweł. (2016). Florian Znaniecki, Alfred Schutz, Milieu Analysis and Experts Studies. In Elżbieta Hałas, ed., Life-World, Intersubjectivity, and Culture: Contemporary Dilemmas. New York: Peter Lang, IAP, pp. 245–62.
Williams, Joan C. (2016). What So Many People Don’t Get About the U.S. Working Class. Harvard Business Review, November 10, 2016. Downloaded November 29, 2016 from https://hbr.org/2016/11/what-so-many-people-dont-get-about-the-u-sworking-class. Williams, Deirdre and Lankes, Tiffany. (2017). Support, Criticism Surround Home-Schooling Mom’s Claims. The Buffalo News, February 25, 2017. Downloaded February 26, 2017 from https://buffalonews.com/2017/02/25/homeschoolingmothers-claims-run-differing-reports-school-cps/. Williamson, Oliver. (1976). Franchise Bidding for Natural Monopolies – In General and with Respect to CATV. Bell Journal of Economics, 7(1), 73–104. Willingham v. State, 897 S.W.2d 351, 357, Tex.Crim.App. (1995). Wilson, Woodrow. (1887). The Study of Administration. Political Science Quarterly, 2(2), 197–222. Woodward, John. (1902). Expert Evidence. The North American Review, 175(551), 486–99. Wolfram, S. (1984). Universality and Complexity in Cellular Automata. Physica, 10D, 1–35. Wolinsky, Asher. (1993). Competition in a Market for Informed Experts’ Services. RAND Journal of Economics, 24(3), 380–98. (1995). Competition in Markets for Credence Goods. Journal of Institutional and Theoretical Economics, 151(1), 117–31. Wolpert, David H. (2001). Computational Capabilities of Physical Systems. Physical Review E, 65(016128), 1–27. Wrona, Richard M. (2006). A Dangerous Separation: The Schism between the American Society and Its Military. World Affairs, 169(1), 25–38. Yamey, B. S. (1949). Scientific Bookkeeping and the Rise of Capitalism. The Economic History Review, New Series, 1(2/3), 99–113. Yandle, Bruce. (1983). Bootleggers and Baptists: The Education of a Regulatory Economist. Regulation, 7(3), 12. Yeager, Leland B. (1960). Methodenstreit over Demand Curves. Journal of Political Economy, 63, 53–64. (1984). Henry George and Austrian Economics. History of Political Economy, 16, 157–74. Yilmaz, Ihsan. (2002). Secular Law and Emergence of the Unofficial Turkish Islamic Law. Middle East Journal, 56(1), 113–31. Young, Allyn A. (1928). Increasing Returns and Economic Progress. The Economic Journal, 38(152), 527–42. Xenophon. (2007). The Apology of Socrates. Translated by H. G. Dakyns. eBooks@Adelaide. Zick, Timothy. (2015). Professional Rights Speech. Arizona State Law Journal, 48(4), 1289–1360. Zuccino, D. (2006). Duke Case Worsens for Prosecution. Los Angeles Times, December 16, 2006. Downloaded August 1, 2016 from http://articles.latimes.com/2006/dec/16/nation/na-duke16. Zuiderent-Jerak, Teun. (2009). Competition in the Wild: Reconfiguring Healthcare Markets. Social Studies of Science, 39(5), 765–92.
Index
AAFS. See American Academy of Forensic Sciences Accetti, Carlo Invernizzi, 7 Adewunmi, Bim, 1 administration, Wilson and, 13–14, 85 administrative state, 13–17, 235–6 dismantling of, 16–17 party rule and, 16–17 advice, experts and, 39–40 Aiello, Leslie C., 160 Akerlof, George, 35, 145, 159, 165 Alexander of Macedonia, 48–50, 52–5 AMA. See American Medical Association American Academy of Forensic Sciences (AAFS), 75 American football, 98–9 American Investment and Recovery Act of 2009, 157–8 American legal system, 209 American Medical Association (AMA), 152, 205–10. See also Flexner report, racism and American progressives, knowledge imposition and, 235–6. See also economics, Progressive American Statistical Association (ASA), 75 L’Amour médecin (Molière), 25–6 analytical egalitarianism, 78, 130, 152 ancient writers, spontaneous order and, 125–6 Anderson, James, 144–5 Anderson, Richard C., 176 anger, at experts, 1 Anglo-American law, expert witnesses and, 56–67 anthill problem, 19–20, 78–9, 143, 236 Antiphon, 125
Apology (Plato), 45–7, 124–5 approbation, 165, 197 arbitrariness, party rule and, 16 Aristophanes, 25, 45, 47, 125 Aristotle, 48–9, 51–3, 55 Arnush, Michael, 46 Arthur, W. Brian, 106, 161, 203 arts, division of knowledge and, 126–7 ASA. See American Statistical Association asymmetric information, 145, 159 Athens, ancient, 44–5, 161. See also Socratic tradition Australia, 62 automobiles, spontaneous order and, 104 autonomy, 71, 90, 189–90, 195–7 epistemic, 37 Barber, Michael D., 72–3, 81 barter, money and, 100 Bartley, W. W., 121 Bastiat, Frederic, 103, 105 Bator, Francis M., 113–14 Baumol, William J., 205 Berger, Peter, 11, 24, 32–7, 55, 71–2, 116, 205 division of knowledge and, 143, 200 expertise and, 40–1 Hayek and, 34 monopoly and, 36–7, 88, 209 natural attitude and, 35 nihilation and, 36, 67, 191 social constructionism and, 33–4 spontaneous order and, 34 universal experts and, 36 Berger, Vance, 179 Bhaskar, Roy, 137
bias, 156–7, 176, 199–200. See also expert errors, honest; willful fraud blinding and, 197–200 economics and, 157–8 leveraging of, 200 synecological, 19, 199–200 Bickerton, Christopher, 7 Big Players, 215, 230 biology, 105 blinding, 197–200, 218 Bloor, David, 41–2 Boettke, Peter, 97, 102, 104, 113 Böhm-Bawerk, Eugen v., 140 boldness, forensic science and, 117–18 Boulding, Kenneth, 71 bounded rationality, 168–72 computability theory and, 171–2 motivations and, 175–6 observer effects and, 175–6 role effects and, 176 synecologically, 170, 175–7, 197, 199–200 Boyte, Harry, 6 Bozdoǧan, Sibel, 223–4 Bradford, William C., 232–4 Brahe, Tycho, 168 Branchi, Andrea, 167 Brexit, 1 British common law, 107–8 Britton, Roswell Sessoms, 143–4 Broad, William J., 46 Brock, William A., 203 Buchanan, James M., 79, 102, 153, 164 Buddhism, 192–4 Burney, Ian A., 59 Burr, Vivien, 33 businesses, division of knowledge and, 160–1 Butos, William N., 92, 215 The Calculus of Consent (Buchanan and Tullock), 91, 153, 164 Callon, Michel, 112–14 Cameron, David, 5 Campbell, Donald, 121 Canning, D., 171 Catholicism, 193 Cato, 126 Cato Institute, 225 central banking, US and, 190–1 Central Intelligence Agency (CIA), 226 central planning, 13, 29, 69, 104–5, 119, 136, 141, 190–2, 221, 234–5. See also knowledge imposition Chadwick, E., 61–2
Chaerephon, 45–7, 124 Chalmers, David, 147 Charlton, David, 58, 179–80, 218–19 Child Protective Services (CPS), 2–4 children, division of knowledge and, 119 chimpanzees, 123 China, 143–4 Chroust, Anton-Hermann, 49–50, 52–4 CIA. See Central Intelligence Agency cigarettes, as money, 100 Cimabue, 126–7 civil-military gap, deep state, entangled and, 231–4 Clark, Andy, 147 clergy, Mandeville and, 130 climate change, 41, 71–2. See also Intergovernmental Panel on Climate Change Clinton, Hillary, 226 Coase, Ronald H., 216–17 cognition, experts and, 12, 168–72 Colander, David, 83–4 Cole, Simon A., 14, 41, 73, 108–9, 111–12, 116–18, 201, 213 Collins, H. M., 41–2, 77, 85–6 color vision, 168–9 commodities Marx and, 138–9 opinions as, 152 common sense, 53–4, 83–4 “Common Sense and the Scientific Interpretation of Human Action” (Schutz), 143 comparative advantage, division of labor and, 101 compassion, 167–8, 175, 208 competition, 97, 106–15, 156, 158, 237. See also expertise, ecology of expert failure and, 161, 189, 204 expertise, ecology of and, 217–18 free market, 106–14 judging and, 35 liberalism and, 227 market structure and, 91–2 restrictions and, 106–8 universal experts and, 36 competitive markets, 10–11, 37, 42, 88–9, 151 ethics and, 76, 90 as natural, 109–14 non-expert power and, 32 complex adaptive systems, spontaneous order and, 106
complexity expert failure and, 202–3 forensic science and, 202–3 complexity theory, 83–4 computability theory, bounded rationality and, 171–2 Conant, James B., 82, 85 conformity effects, 177 Consumer Reports, 195 consumer sovereignty, 216 Cook, Harold J., 59 corporations, 146, 226–8 court testimony, science and, 57 Cowan, E. James, 74–5, 91–2, 165, 185 Cox, Brian, 9 Cox, James C., 216 Coyne, Christopher, 70, 151 Coyne, Rachel, 151 CPS. See Child Protective Services credence goods, 40, 159–60 criminality, eugenics and, 5–6, 70 Crito (Socrates), 47 de la Croix, David, 71 cultural studies, 68 cybernetics, 105 D’Agostino, F., 147 Daniel, James, 45 Darby, Michael R., 159–60 Darwinism, 82 Davenport-Hines, Richard, 59 Davies, William, 59 Debreu, Gerard, 102 deep state, entangled, 12, 18, 221–34 Big Players and, 230 corporations and, 226–8 democracy, pluralistic and, 234 experts, rule of, and, 230–1 free market and, 225–6 incentives and, 231 Koch brothers and, 224–5 regulatory capture and, 231 rule of law and, 226 secrecy and, 228–31 synecological redundancy and, 230 defense, expert witnesses and, 60 DeFoe, Daniel, 26 degeneracy. See synecological redundancy “Degenerate Cosmopolitanism” (Martin, A.), 185 Delphic oracle corruption and, 45–6 Socrates and, 45–6, 124 demand, for opinions, 160–1
democracy, 14, 84–5, 91 participation and, 84, 86 pluralistic, 6–8, 17, 234 representative, 194–5 deregulation. See regulation Descartes, René, 65, 135 Devins, Caryn, 16, 18, 83, 121, 218 Dewey, John, 82 Dicey, A. V., 15–16, 226 disagreement, 75, 131. See also division of opinion expert witnesses and, 61–7, 117, 209 professions and, 63–4, 205 discipline, 31, 68 discussion, 85–8 economics and, 86–7 information choice theory and, 91 dispersed knowledge. See division of knowledge divination, Socratic tradition and, 44. See also Delphic oracle, Socrates and division of knowledge, 11, 24, 70, 116–32, 133–47, 143, 200. See also Synecological, EvoLutionary, Exosomatic, Constitutive and Tacit knowledge arts and, 126–7 asymmetric information and, 145 bias and, 199–200 bounded rationality and, 170 businesses and, 160–1 children and, 119 corporations and, 146 division of labor and, 30, 103, 123, 133–4, 137–42 expert witnesses and, 61–2 experts, economic theory of, and, 11 Hayek and, 92–3, 119–20, 122–8, 141–3 Mandeville and, 119–20, 122–3, 128–33 Marx and, 30, 137–9 Marxism and, 144–5 Mises and, 141–2 morality and, 146 political economy and, 140–1 Schutz and, 118, 142–3, 169–70 skepticism and, 56 social constructionism and, 34 socialism and, 141 Socrates and, 124–5 spontaneous order and, 34, 116 unification and, 140 Vasari and, 121, 126–7 division of labor, 101–2, 138–9, 141–2 constitutive knowledge and, 134, 138
division of labor (cont.) division of knowledge and, 30, 103, 123, 133–4, 137–42 experts, economic theory of, and, 10 spontaneous order and, 34, 101–3 division of opinion, 131, 134–6 DNA profiling, 32 doux commerce thesis, 132 Drizin, Steen A., 182–3 Dror, Itiel E., 58, 179–80, 183, 219 Dulleck, Uwe, 160 Dunbar, R. I. M., 160 Duncan, James, 145 Durant, Darrin, 85–6 Earl, P. E., 88 Easterly, William, 1, 13, 69–70 “Economic Calculation in the Socialist Commonwealth” (Mises), 141 Economic Systems Design, 219 economics, 86–7, 102, 157–8. See also experimental economics Austrian school of, 18, 33, 141, 221 Keynes and, 29–30, 55, 192 mainline, 97, 111 neoclassical, 109, 113–14 Progressive, 110–11 “Economics and Knowledge” (Hayek), 24, 122, 128, 141–2 Edelman, Gerald M., 184–5 education Mandeville and, 129 medical, 206–7 Edwards, Thomas R., 130–1 efficiency, experts, economic theory of, and, 152 egoistic rationality, 163–5, 175–6 Eisenhower, Dwight, 222–3 Eliasberg, W., 67 elitism, populism and, 6–7 Elizabeth (Queen), 203 Ellis, Lee, 5, 70, 80 Emons, Winand, 160 entrepreneurs, experts and, 154 epistemic systems, 180–2 epistemic systems design, 217–20, 237 equality compassion and, 208 experts, rule of, and, 19 liberty and, 18 ethics, 73–6, 90. See also moral character, expert witnesses and; virtue ethics, code of, 75–6, 90
eugenics, 5–6, 39, 70–2. See also population policies; sterilization, forced experts, rule of, and, 189–90 Keynes and, 28–30, 192 minimum wage and, 110–11 reflexivity and, 80 Socratic tradition and, 51–2 Evans, Robert, 41–2, 85–6 expectations, observer effects and, 174–5 experimental economics, 77. See also Economic Systems Design; epistemic systems design expert dependent choice, 189, 192–4 expert errors, 158, 183, 210, 220 honest, 154–5, 172–4, 177, 179–80 incentives and, 12, 155, 172–80 principal-agent model and, 172–3 synecological redundancy and, 182–5 expert failure, 12, 17–18, 197, 202–3. See also expert errors approbation and, 197 autonomy and, 195–7 Big Players and, 215 competition and, 161, 189, 204 deep state, entangled and, 12, 18, 221–34 expert dependent choice and, 189, 192–4 experts, quasi-rule of, and, 194–5 experts, rule of, and, 189–92 feedback and, 203–4 Flint water crisis as, 1–2 identity and, 197 information choice theory and, 153 justice system and, 2 kids for cash case as, 2 market structure and, 189–200 monopsony and, 214–15 normal accidents and, 201–2 praiseworthiness and, 197 professions and, 205–11 regulation and, 211–14 social work and, 2–5 sympathy and, 197 tightly-coupled systems and, 201–2 expert witnesses, 4, 9, 28, 43–4, 56–67. See also handwriting identification; medical witnesses; special juries Anglo-American law and, 56–67 defense and, 60 disagreement and, 61–7, 117, 209 hot tubbing and, 62 moral character and, 58–9 opinions and, 38, 57, 65 payment and, 59–61
prosecution and, 60 scientific assessor and, 61–2 universal experts and, 67 expertise, 8, 37–8, 40–2, 152 judging of, 35 monopoly and, 191 expertise, ecology of, 180–5, 205, 217–18 epistemic systems and, 180–2 synecological redundancy and, 205 experts, 1, 69, 79–82, 152–4. See also universal experts advice and, 39–40 cognition and, 12 as defined by contractual role, 8 democracy, pluralistic and, 17 democracy and, 14 ethics, code of, and, 75–6 expertise and, 8, 37–8, 40–2, 152 experts, literature on, defining, 37–42 information choice theory defining, 89, 154 moral superiority of, 9, 55, 67, 235 moral superiority of in Keynes, 29, 55, 191–2 motivations of, 163–80 obedience and, 9, 44 opinions and, 8, 38, 42, 152–4 philosophers as, 9 preferences and, 88, 195–6 Socrates and, 46–7 Socratic tradition and, 9, 43–56 utility maximization and, 12, 163–8 virtue and, 73–5 Wilson and, 27 experts, economic theory of, 8, 10–11, 152, 155. See also expertise, ecology of; information choice theory comparative institutional approach to, 12–13 competitive markets and, 10–11 experts, literature on, 8–10, 23–42, 76–80, 85–8, 234. See also Anglo-American law; Socratic tradition democracy and, 84–5 ethics and, 73–6 market structure and, 88–9 moralizing and, 38–9 non-expert power and, 9, 26–32 power and, 68–73 reliability and, 9, 26–32, 42 well-informed citizens and, 80–4 experts, quasi-rule of, 190, 194–5 experts, rule of, 19, 189–92, 230–1 democracy, pluralistic and, 6–8, 17
Keynes and, 29, 55, 191–2 knowledge and, 19 populism and, 6–8 rule of law and, 226 Socrates and, 27–8, 46–7, 73 Socratic tradition and, 46–55 explanation of the principle, social sciences and, 79–80 The Fable of the Bees: Or, Private Vices, Publick Benefits (Mandeville), 88, 130–1, 134, 166–7. See also Mandeville, Bernard false confessions, 182–3 false convictions, expertise, ecology of, and, 182–3 FBI. See Federal Bureau of Investigation Feaver, Peter D., 231–2 Federal Bureau of Investigation (FBI), 226, 229 Federalist Papers, 135 feedback, expert failure and, 203–4 Feigenbaum, Susan, 156–7 Felin, Teppo, 168–9 Ferguson, Adam, 103 Ferraro, Paul J., 168 Feynman, Richard, 178–9 Fielding, Henry, 77 film critics, 160 Filonik, Jakub, 49 Fisher, R. A., 177–8 5 Star Movement, 6 Flexner, Abraham, 206–9, 235 Flexner report, racism and, 206–8 Flint water crisis, 1–2 Fontenrose, Joseph, 45 forensic science, 23, 39, 163–4. See also American Academy of Forensic Sciences; DNA profiling; handwriting identification; National Commission on Forensic Science; National Institute of Forensic Science bias and, 157 blinding and, 199 boldness and, 117–18 complexity and, 202–3 expert errors and, 158, 183, 210, 220 expert failure and, 17–18, 197, 202 incentives and, 179–80, 204 monopsony and, 214 payment and, 60 power and, 73 regulation and, 211, 213–14 virtue and, 74–5 willful fraud and, 154–5
Foster, William L., 65 Foster v. Chatman, 83 Foucault, Michel, 31, 68–9, 90 Fox, Renée, 144 Franklin, Allan, 177 Franklin, Benjamin, 194, 196, 198 Frazer, Persifor, 65–6 free development, experts and, 69 free entry, expertise, ecology of, and, 205 free market, 10–11, 106–14, 225–6 economics and, 102 market for ideas and, 216–17 free speech, 210, 217 Freedman, David, 27 Friedman, Milton, 110, 225–6 Front National (National Front), 6 full-information decisions, competition and, 156, 158 Gally, Joseph A., 184–5 Galton, David, 51, 70 Galton, Francis, 28, 51 Garrett, Brandon L., 183 Gatewood, John B., 145–6 general equilibrium theory, 98, 102, 113–14 George, Henry, 119, 122, 140–1 Ghiberti, Lorenzo, 127 global warming. See climate change Gökalp, Deniz, 224 Golan, Tal, 57, 62 Goldman, Alvin, 30, 32, 147, 216 Goldrich, Robert L., 232 Goodale, Melvyn A., 174 Gosseries, Axel, 71 gossip, 160 Gove, Michael, 1, 6 government intervention, economics and, 102 governments, opinions and, 161 Grann, D., 182 Great Recession, 5, 190 Greenspan, Alan, 5 Greenwald, Glenn, 17, 226, 228 Grosch, Eric N., 179 Gruber, Jonathan, 5 Habermas, Jürgen, 31, 69, 85–6 Hall, Stuart, 16–17 Hamilton, Alexander, 135 Hand, Learned, 38, 56–7, 66–7 handwriting identification, 65–6 Harth, Phillip, 130–1
Harward, Keith, 210 Haupt, Claudia E., 209–10 Hausman, Daniel M., 163 Hayek, Friedrich, 24, 33–4, 56, 110, 147, 192 corporations and, 227 division of knowledge and, 92–3, 119–20, 122–8, 141–3 division of labor and, 141–2 knowledge and, 11, 139 knowledge problem and, 69 reflexivity and, 79–80 Schutz and, 118, 142–3 spontaneous order and, 97, 105–6 height, average human, competition and, 110 Heiner, Ronald A., 176 Henrich, Joseph, 132, 152 Henry the Sixth (Shakespeare), 25 Herodotus, 45 Herrick, Devon M., 209 Hewlett, A.W., 198 Hickey, Colin, 71 hierarchy knowledge and, 15, 116–18 Cole on, 73 polyarchy and, 14–15 Hirschman, Albert O., 132–3 historical materialism, Marx and, 137 History of England (Hume), 135 Holcombe, Randall, 196 home-schooling, 3–4 Hommes, Cars H., 203 Horwitz, Steven, 146, 190–1 hot tubbing, expert witnesses and, 62 Huber, Peter W., 38–9 Hume, David, 18, 119–20, 135 Hutchins, Edwin, 120, 147 Hutt, W. H., 216 “I, Pencil” (Read), 103, 120 ICC. See Interstate Commerce Commission identity, 165 expert failure and, 197 motivations and, 165 ideology, 30–1, 116 immigration, 111 impartial spectator, Smith, A., and, 166–8 incentives, 173, 179–80, 204, 231. See also conformity effects; observer effects; role effects alignment of, 204 expert errors and, 12, 155, 172–80
honest errors and, 172–3, 177, 179–80 market structure and, 204 infinite regress, regulation and, 212 information, knowledge and, 144. See also asymmetric information; full-information decisions, competition and information aggregation services, 160 information choice theory, 11–12, 89–91, 151–62 asymmetric information and, 159 democracy and, 91 egoistic rationality and, 164–5 ethics and, 90 expert errors and, 172–3 market structure and, 91–2 monopoly and, 153–4 motivational assumptions of, 163–80 power and, 90 principal-agent model and, 157–8, 172–3 well-informed citizens and, 91 Ingold, Tim, 121 An Inquiry into the Nature and Causes of the Wealth of Nations (Smith, A.), 133–4 institutions, 37, 180, 220. See also expertise, ecology of intelligence community. See deep state, entangled Intergovernmental Panel on Climate Change (IPCC), 211–12, 215. See also climate change Interstate Commerce Commission (ICC), 212 invisible hand. See spontaneous order IPCC. See Intergovernmental Panel on Climate Change Iraq War, 5 Islamic State (ISIS), 230 Isnardi, Margherita, 50, 53–5 Isocrates, 53–4 Jaeger, Werner, 54–5 Jakobsen, L. S., 174 James, William, 169–70 Jasanoff, Sheila, 76–7, 82, 84–6, 190 Jobs, Steven, 154 Joung, Eun-Lee, 104–5 judging competition and, 35 of expertise, 35 junk science, 38–9 jury selection, 83 justice system, expert failure and, 2
Kahneman, Daniel, 168–70, 176 Karni, Edi, 159–60 Kasaba, Reşat, 223–4 Kaye, David H., 27, 130, 168 Kaye, F. B., 129–30 Keil, Frank C., 119 Kelsen, Hans, 48–9, 52–3 Kerkhof, Bert, 166 Kerschbamer, Rudolf, 160 Kessel, Reuben A., 206–7 Keynes, John Maynard disclaimer on, 30 on eugenics, morality, and planning and, 28–30 letter to Hayek and, 191–2 on superiority of experts, 55 kids for cash case, as expert failure, 2 Kirzner, I. M., 18 Knight, Frank H., 78, 86, 89 Knighton, Henry, 193 knowledge, 19, 134. See also division of knowledge; Synecological, EvoLutionary, Exosomatic, Constitutive and Tacit knowledge constitutive, 122–5, 133–4, 138 democracy, pluralistic and, 7 evolutionary, 121, 128 exosomatic, 121–2, 145–6 hierarchy and, 15, 116–18 information and, 144 language and, 145–6 Mandeville and, 11, 121, 128, 131, 134, 139 power and, 31 prices and, 146 science and, 136, 140 social distribution of, 11, 24 speculative, 123–4, 133–4, 136, 140 synecological, 120–1, 235–6 tacit, 122, 131 knowledge imposition, 19, 31, 52, 54, 90, 170, 234–6 American progressives and, 235–6 moral superiority and, 235–6 knowledge problem, Hayek and, 69 Koch, Charles, 224–5 Koch, David, 224–5 Kohn, Richard H., 231–2 Koppl, Roger, 109, 113, 121, 157, 176, 185 bias and, 200 Big Players and, 215 competition and, 91–2
Koppl, Roger (cont.) epistemic systems design and, 218–20 expert failure and, 204 identity and, 165 novelty intermediation and, 161 virtue and, 74–5 Krane, Dan, 58, 174, 183, 199–200 Kranton, Rachel E., 165 Kuhn, Thomas, 99 Kupers, Roland, 83–4 laboratory, experimental economics and, 77 laissez faire. See competition, free market Langlois, Richard N., 111, 161 language, exosomatic knowledge and, 145–6 Law, John, 99–100 Laws (Plato), 51–2, 54–5 Lee, Gary, 100 Leo, Richard A., 182–3 Leonard, Thomas C., 110–11, 161, 208 A Letter to Dion, Occasion’d by his Book call’d Alciphron or the Minute Philosopher (Mandeville), 130 Levinson, Sanford, 83 Levy, David, 32, 39–40, 75, 86–9, 154 analytical egalitarianism and, 78, 130, 152 bias and, 156–7 experts, economic theory of, and, 155 incentives and, 173 motivations and, 165–6 Smith, A., and, 133–4 Lewis, Paul, 34 Ley, David, 145 liberalism, competition and, 227 liberty, equality and, 18–19 licensing restrictions, professions and, 205–10 Lincoln, Abraham, 163–4 Lindblom, Charles, 18 Lives of the Painters, Sculptors and Architects (Vasari), 126–7 Lloyd-Jones, Hugh, 46 local truth, epistemic systems and, 180–1 Lofgren, Mike, 225–6 long bomb phenomenon, spontaneous order and, 98–9 Longo, Giuseppe, 121 Lotka, Alfred J., 121–2 Lovejoy, Arthur, 166 Luban, David, 146 Luckmann, Thomas, 11, 24, 32–7, 55, 71–2, 116, 205
division of knowledge and, 143, 200 expertise and, 40–1 Hayek and, 34 monopoly and, 36–7, 88, 209 natural attitude and, 35 nihilation and, 36, 67, 191 social constructionism and, 33–4 spontaneous order and, 34 universal experts and, 36 Lutz, Donna J., 119 Lynch, Michael, 41 Macedonia, Aristotle and, 52–3. See also Alexander of Macedonia; Philip of Macedonia Madison, James, 135 magical thinking, 216 man on the street, Schutz and, 80–2 Mandeville, Bernard, 102, 127, 129–30, 230 compassion and, 167–8, 175, 208 disagreement and, 131 discussion and, 87–8 division of knowledge and, 119–20, 122–3, 128–33 division of labor and, 138 division of opinion and, 131, 134 knowledge and, 11, 121, 128, 131, 134, 139 manners and, 129, 131–2 motivations and, 166–8 the poor and, 129–30 pride and, 166–7 reflexivity and, 87–8 satire and, 130–1 skepticism and, 56 spontaneous order and, 128 manners, Mandeville and, 129, 131–2 Mannheim, Karl, 30–1, 116 A Manual of Medical Jurisprudence (Taylor, A. S.), 57–8 Marco Polo, 25 Margolis, Howard, 168 market failure, 113–14 market for ideas, 215–17 market structure, 88–9, 91–2, 189–200. See also competitive markets; monopoly incentives and, 204 as product of design, 111 Martin, Adam, 185 Martin, David, 34 Marx, Karl, 30, 137–9
Marxism, 77, 144–5 Matthews, J. Rosser, 179 Mayfield, Brandon, 158, 165, 211 McCullagh, Declan, 229 McKelway, St. Clair, 38 McKeon, Michael, 118 McPherson, Michael S., 163 McQuade, Thomas J., 215 medical witnesses, 57–61 medicine, 36–7. See also American Medical Association Mendel, Gregor, 177–8 Menger, Carl, 33, 99–100, 139–40 Merlan, Philip, 53–4 Merton, Robert, 31–2, 72–3, 84, 117 Mesmer, Franz Anton, 198 Michelangelo, 126–7 Middle Ages, 30 Milgrom, P., 86, 88, 156, 158 military-industrial complex, 222–3. See also deep state, entangled Mill, John Stuart, 86 Miller, James C. III, 17 Miller, Jeff, 52–3 Millikan, Robert, 178–9 Milner, A. David, 174 minimum wage, 107, 110–11 Mises, Ludwig von, 16, 19, 100, 119 division of knowledge and, 141–2 social constructionism and, 33 Mnookin, Jennifer L., 14, 66 Molière, 25–6 monarchy, Aristotle and, 48–9, 52–3 money, 99–100 barter and, 100 cigarettes as, 100 monopoly, 36–7, 88, 90, 153–4, 191, 209 competitive markets and, 88–9 expert failure and, 12 monopsony, 214–15 moral character, expert witnesses and, 58–9 moral superiority, knowledge imposition and, 235–6 morality, division of knowledge and, 146 moralizing, experts, literature on, and, 38–9 Morgenstern, Oskar, 171 motivations, 163–80. See also information choice theory, motivational assumptions of approbation and, 165 egoistic rationality and, 164–5
identity and, 165 praiseworthiness and, 165–6 sympathy and, 165 Mudde, Cas, 6–7 Mudie, Robert, 136–7 Mueller, Dennis, 164 Mueller, Robert, 229 Müller, Gerhard, 54–5 Musacchio, J.M., 169 Nadler, Jerrold, 229 NAS. See National Academy of Sciences “NAS Report,” 117 National Academy of Sciences (NAS), 117, 211–13 National Commission on Forensic Science, 213–14 National Front. See Front National National Institute of Forensic Science (NIFS), 211–13 national security. See secrecy national security state, 223. See also deep state, entangled natural attitude, 35 Neisser, Ulrich, 175 Neumann, John von, 171 The New Science (Vico), 133 Nifong, Michael, 214 NIFS. See National Institute of Forensic Science nihilation, 36, 67, 191 Nilsson, Martin, 44 Nock, Arthur Darby, 45 non-expert power, 9, 26–32. See also well-informed citizens normal accidents, expert failure and, 201–2 normative turn, STS and, 41 North Korea, 104–5, 110 novelty intermediation, 161, 195–6 Obamacare, 5 obedience, 9, 44. See also experts, rule of observer effects, 58, 174–6 blinding and, 197–200 precautions against, 177–9 Odling, William, 62, 117 Ojakangas, Mika, 51 On Liberty (Mill), 86 O’Neill, Brian C., 71 opinions, 38, 57, 65, 152, 160–2. See also common sense as credence goods, 159–60
opinions (cont.) experts as paid for, 8, 38, 42, 152–4 governments and, 161 optimality, market failure and, 113–14 Orchard, Lionel, 164 The Organization of Inquiry (Tullock), 173 Ostrom, Vincent, 13–14 Owen, Robert, 135–7 Pak, Sunyoung, 110 Palmer, William, 59–61 Pareto, Vilfredo, 113–14 partial redundancy. See synecological redundancy participation, democracy and, 84, 86 party rule administrative state and, 16–17 arbitrariness and, 16 populism and, 16 payment, 59–61. See also opinions, experts as paid for Pearson, Karl, 28, 51–2, 80, 82 Peart, Sandra, 32, 39–40, 75, 86–9, 154 analytical egalitarianism and, 78, 130, 152 experts, economic theory of, and, 155 incentives and, 173 motivations and, 165–6 praiseworthiness and, 165–6 Smith, A., and, 133–4 Peasant Revolt of 1381, 25 Penrose, Clement B., 65 Perrow, Charles, 201–2 The Phenomenology of the Social World (Schutz), 142 Philip of Macedonia, 48–50, 52, 54 philosophers as experts, 9. See also Socratic tradition Pichert, James W., 176 Pinker, Steven, 132 Plato, 45–8, 51–2, 54–5, 124–5 Platonic Academy, 49–50 Plutarch, 49 Podolsky, Scott H., 197–8 Poisons in Relation to Medical Jurisprudence and Medicine (Taylor, A. S.), 60 Polansky, Ronald, 45 Polanyi, Michael, 122 political economy, division of knowledge and, 140–2 Politics (Aristotle), 48, 51
polyarchy hierarchy and, 14–15 tyranny and, 15 the poor, Mandeville and, 129–30 Popper, Karl, 117, 122 population policies, 71–2 populism, 6–8, 16 positive predictive value (PPV), expertise, ecology of, and, 183–4 Posner, Richard A., 212 Potts, Jason, 88, 161 poverty. See also technocratic illusion rights and, 13, 69 Socrates and, 47 power, 68–73, 90. See also non-expert power Foucault and, 31, 68–9 knowledge and, 31 PPV. See positive predictive value praiseworthiness, 165–6, 197 preferences, experts and, 88, 195–6 Prendergast, Renee, 122, 128, 131 Price, David H., 223 prices, knowledge and, 146 pride, Mandeville and, 166–7 principal-agent model, 157–8, 172–3 Principles of Psychology (James), 169–70 prison system, eugenics and, 5–6, 70 privatization, 109 professions disagreement and, 63–4, 205 expert failure and, 205–11 licensing restrictions and, 205–10 prosecution, expert witnesses and, 60 Protestantism, 194 psychiatry, 23 public choice theory, 11, 153, 163 egoistic rationality and, 163–5 utility maximization and, 164–5 racism, 2, 206–8 Radford, R.A., 100 Radnitzky, Gerard, 121 Rahula, Walpola, 193 railroads, 212 randomly chosen citizens, well-informed citizens and, 82–3 rationality, Weberian, 24–5. See also bounded rationality; egoistic rationality Rawls, John, 85–6 Read, Leonard, 103, 120–1 Reagan, Michael D., 223 Reason, James, 201–2
“Reconstruction in Europe: An Introduction” (Keynes), 55 Reeve, C.D.C., 45–6 reflexivity, 79–80, 87–8, 236 experimental economics and, 77 experts, literature on, and, 76–80 information choice theory and, 90–1 reliability and, 76 satire and, 77–8 self-exemption and, 77 regulation, 18–19, 211–14, 225–6. See also government intervention, economics and; restrictions, competition and forensic science and, 211, 213–14 infinite regress and, 212 market for ideas and, 216–17 regulatory capture and, 212–14 regulatory capture, 91, 212–14, 231 Rehg, William, 211–12 Reid, Sue, 5 relative unanimity, experts and, 79 reliability, 9, 26–32, 42, 76 competitive markets and, 37, 151 institutions and, 37 virtue and, 73–4 religion, 192. See also Buddhism; Catholicism; Delphic oracle; Protestantism competition and, 76, 192–4 expert dependent choice and, 192–4 Socratic tradition and, 44–6, 54–5, 194 rent control, 151 Republic (Plato), 46, 48 restrictions, competition and, 106–8 Reynolds, Russell, 62–3 Rieder, Travis, 71 rights, poverty and, 13, 69 Risinger, Michael, 66, 154–5, 170, 174–7, 183 rivalry, expertise, ecology of, and, 205 The Road to Serfdom (Hayek), 192 della Robbia, Luca, 127 Roberts, J., 86, 88, 156, 158 Robertson, Christopher T., 199, 218 role effects, 176–7 Rome, ancient, 112 Rosenthal, Robert, 58, 174 Ross, Stephen A., 157–8 Roucek, Joseph, 173 Rubel, Alexander, 44–5, 54 rule of law, 226 administrative state and, 15–17 experts, rule of, and, 226 Ryle, G., 122
Salter, Alex, 216 Samuelson, Paul, 47 Sandford, Jeremy, 40 satire, 77–8, 130–1 Savage, Deborah A., 63 Schiemann, John, 31 Schumer, Chuck, 229 Schutz, Alfred, 19–20, 33, 40, 80–2, 162, 236 division of knowledge and, 118, 142–3, 169–70 Hayek and, 118, 142–3 well-informed citizens and, 80–2, 118 science. See also forensic science; junk science; National Academy of Sciences; social sciences court testimony and, 57 social structure and, 72–3 speculative knowledge and, 136, 140 science and technology studies (STS), 41, 92 science studies, 76–7, 112, 220 scientific assessor, expert witnesses and, 61–2 scientific witnesses. See expert witnesses Scott, Peter Dale, 224–5, 229 secrecy, 5, 228–31 SELECT. See Synecological, EvoLutionary, Exosomatic, Constitutive and Tacit knowledge self-exemption, reflexivity and, 77 selfishness. See egoistic rationality self-rule. See autonomy Senn, Peter R., 100 The Sensory Order (Hayek), 79–80 Shakespeare, William, 25 Shaw, Julia, 235–6 Sherif, Carolyn, 177 Sherif, Muzafer, 177 Simon, Herbert A., 144, 146, 168–70 Singerman, David Roth, 28–9, 191–2 skepticism, 56 slavery, 15, 109–10 Small, Albion, 140 Smith, Adam, 34, 76, 78, 98, 103, 119–20 approbation and, 165 autonomy and, 196 division of knowledge and, 133–4, 137–9 division of labor and, 101–2, 138–9 impartial spectator and, 166–8 Levy and, 133–4 Menger and, 139–40
Smith, Adam (cont.) Peart and, 133–4 religion and, 192 slavery and, 109–10 speculative knowledge and, 134 sympathy and, 165, 167–8, 208 Vasari and, 127 Smith, Adam C., 105, 228 Smith, Nicholas D., 45 Smith, R. Angus, 61–2, 74 Smith, Vernon, 18, 120, 218 Snowden, Edward, 231 The Social Construction of Reality (Berger, P., and Luckmann), 33 social constructionism, 33–4 social distribution of knowledge. See division of knowledge social sciences anthill problem and, 19–20, 78–9, 143, 236 explanation of the principle and, 79–80 social services, UK, 4–5 social structure, science and, 72–3 social work, 2–5, 72 socialism division of knowledge and, 141 impossibility of rational planning in, 221 sociological ambivalence, Merton and Barber and, 72–3 Socrates, 27–8, 45–7, 73, 124–5 Delphic oracle and, 45–6, 124 as expert, 46–7 poverty and, 47 Socratic tradition, 9, 43–56, 194. See also Aristotle; Plato; Platonic Academy divination and, 44 eugenics and, 51–2 experts, rule of, and, 46–55 knowledge imposition and, 235 Sokal, Alan D., 232 “Some Academic ‘Rackets’ in the Social Sciences” (Roucek), 173 Song of Roland, 232–3 Soros, George, 225 South Korea, 110 Soviet Union, 104 special juries, 56 spectators, spontaneous order and, 98–9, 104 spontaneous order, 34, 97–106, 128 as abstract, 105–6 ancient writers and, 125–6 as complex, 105 complex adaptive systems and, 106 division of knowledge and, 34, 116
division of labor and, 34, 101–3 experts, economic theory of, and, 10 general equilibrium theory and, 98, 102 Hayek and, 97, 105–6 long bomb phenomenon and, 98–9 money and, 99–100 as without purpose, 106 spectators and, 98–9, 104 unintended consequences and, 106 sterilization, forced, 28–9, 70–1, 80 Stigler, George J., 226 stock markets, 141, 203 Strauss, Peter L., 13 Stretton, Hugh, 164 Strudler, Alan, 146 STS. See science and technology studies supply, of opinions, 162 Svorny, Shirley, 209 symmetry principle, 41–2 sympathy, 165, 167–8, 197, 208. See also compassion; praiseworthiness Synecological, EvoLutionary, Exosomatic, Constitutive and Tacit knowledge (SELECT), 120–2, 146–7, 176, 234 synecological redundancy, 182–5, 205, 230 Taleb, Nassim, 185 Taylor, Alfred Swaine, 57–8, 60–1 Taylor, John Pitt, 58 Taylor, Laura O., 168 technocracy, populism and, 7 technocratic illusion, 13, 69–70 testability, 236–7 testimony, 32. See also court testimony Thatcher, Margaret, 16–17 Themistius, 48 The Theory of Moral Sentiments (Smith, A.), 166–8 Thompson, William C., 108–9, 111–12, 201, 203 Tiebout, C., 194–5 tightly-coupled systems, expert failure and, 201–2 Tom Jones (Fielding), 77 transnational corporations. See corporations transparency, 39–40, 75 The Travels of Marco Polo (Marco Polo), 25 Treatise on the Law of Evidence (Taylor, J. P.), 58 Triplett, Jeremy S., 204 Trump, Donald, 1, 229 truth. See also local truth expertise and, 41–2 market for ideas and, 215–16
social constructionism and, 33–4 utility maximization and, 163–4 Tsuji, M., 171 Tullock, Gordon, 153, 164, 173 Turkey, 223–4 Turner, Stephen, 23, 31–2, 35, 78, 190 cultural studies and, 68 democracy and, 84–5 Habermas and, 85–6 normal accidents and, 201–2 normative turn and, 41 Pearson and, 80 science studies, 76–7 well-informed citizens and, 82 Tversky, A., 176 tyranny, polyarchy and, 15 Uexküll, Jakob von, 169 unification, division of knowledge and, 140 unintended consequences, spontaneous order and, 106 unitary mind, honest errors and, 174 United Kingdom (UK). See also Anglo-American law; British common law social services and, 4–5 United States (US), central banking and, 190–1. See also Anglo-American law; Child Protective Services; deep state, entangled; specific topics universal experts, 36, 67 Ünsar, Seda, 224 US. See United States “The Use of Knowledge in Society” (Hayek), 142 used cars, 35, 159 utility maximization, 12, 163–8
Vasari, Giorgio, 121, 126–7 Velupillai, Vela, 171–2 Vico, Giambattista, 133 virtue, 73–5 Vitali, Stefania, 227–8 voucher programs, 194 Wagner, Richard E., 105, 185, 228 Walras, Leon, 98, 102, 113–14 Washburn, Emory, 63 Wasserman, David, 146 Watts, Nicole F., 223 Weber, Max, 24–5 well-informed citizens, 80–4, 91, 118 White, L. H., 191 Whitman, D. Glen, 157, 176 Wierzchosławski, Rafał Paweł, 40 willful fraud, 154–5 Williams, Joan C., 5 Williamson, Oliver, 109 Willingham, Cameron Todd, 182 Wilson, Woodrow, 13–14, 27, 84–5, 235 Wittgenstein, Ludwig, 56, 147 Wolpert, David H., 172 Woodward, John, 38 word of mouth, 160 Wrona, Richard M., 231–2, 235 Wycliffe, John, 193 Wyden, Ron, 228–9 Xenophon, 46–7 Yeager, Leland B., 111, 141, 215 Yilmaz, Ihsan, 224 Zain, Fred, 163–4 Zick, Timothy, 210 Zuiderent-Jerak, Teun, 112